This adds two ways to check for subscription support. First, one can now check
whether the transport method (HTTP/WS/etc.) is capable of subscriptions using
the new Client.SupportsSubscriptions method.
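For example (a minimal sketch; the fallback behavior is up to the caller):
if !client.SupportsSubscriptions() {
    // the transport (e.g. plain HTTP) cannot carry notifications;
    // fall back to polling instead of calling Subscribe
}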
Second, the error returned by Subscribe can now reliably be tested using this
pattern:
sub, err := client.Subscribe(...)
if errors.Is(err, rpc.ErrNotificationsUnsupported) {
    // no subscription support
}
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR adds server-side limits for JSON-RPC batch requests. Before this change, batches
were limited only by processing time. The server would pick calls from the batch and
answer them until the response timeout occurred, then stop processing the remaining batch
items.
Here, we are adding two additional limits which can be configured:
- the 'item limit': batches can have at most N items
- the 'response size limit': batches can contain at most X response bytes
These limits are optional in package rpc. In Geth, we set a default limit of 1000 items
and 25MB response size.
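A sketch of configuring these limits on a standalone rpc.Server, assuming a
SetBatchLimits(itemLimit, maxResponseSize) method as the configuration entry point
(the values shown are Geth's defaults):
srv := rpc.NewServer()
// Allow at most 1000 batch items and 25 MB of response data per batch.
srv.SetBatchLimits(1000, 25*1000*1000)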
When a batch goes over the limit, an error response is returned to the client. However,
doing this correctly isn't always possible. In JSON-RPC, only method calls with a valid
`id` can be responded to. Since batches may also contain non-call messages or
notifications, the best we can do to report an error for the batch itself is to attach the
limit violation as an error to the first method call in the batch. If a batch is too large
but contains only notifications and responses, the error is reported with a null `id`.
The RPC client was also changed so it can deal with errors resulting from over-limit
batches. An older client connected to a server running the code in this PR could get stuck
until the request timeout when a batch is too large. **Upgrading to a version of the RPC
client containing this change is strongly recommended to avoid timeout issues.**
For some weird reason, when writing the original client implementation, @fjl worked off of
the assumption that responses could be distributed across batches arbitrarily. So for a
batch request containing requests `[A B C]`, the server could respond with `[A B C]` but
also with `[A B] [C]` or even `[A] [B] [C]` and it wouldn't make a difference to the
client.
So in the implementation of BatchCallContext, the client waited for all requests in the
batch individually. If the server didn't respond to some of the requests in the batch, the
client would eventually just time out (if a context was used).
With the addition of batch limits into the server, we anticipate that people will hit this
kind of error way more often. To handle this properly, the client now waits for a single
response batch and expects it to contain all responses to the requests.
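On the client side, a batch error then shows up either as the overall error or on the
affected elements; a minimal sketch using BatchCallContext (the methods queried are
placeholders):
batch := []rpc.BatchElem{
    {Method: "eth_blockNumber", Result: new(string)},
    {Method: "eth_chainId", Result: new(string)},
}
err := client.BatchCallContext(ctx, batch)
if err != nil {
    // the whole batch failed, e.g. a transport error
}
for _, elem := range batch {
    if elem.Error != nil {
        // this call failed; batch limit violations are reported here,
        // attached to the first call in the batch
    }
}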
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
The change fixes unmarshaling of JSON null results into json.RawMessage.
---------
Co-authored-by: Jason Yuan <jason.yuan@curvegrid.com>
Co-authored-by: Jason Yuan <jason.yuan869@gmail.com>
Here we add special handling for sending an error response when the write timeout of the
HTTP server is just about to expire. This is surprisingly difficult to get right, since it
must be ensured that all output is fully flushed in time, which needs support from
multiple levels of the RPC handler stack:
The timeout response can't use chunked transfer-encoding because there is no way to write
the final terminating chunk. net/http writes it when the topmost handler returns, but the
timeout will already be over by the time that happens. We decided to disable chunked
encoding by setting content-length explicitly.
Gzip compression must also be disabled for timeout responses because we don't know the
true content-length before compressing all output, i.e. compression would reintroduce
chunked transfer-encoding.
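The general shape of such a timeout write, as a sketch (w is the http.ResponseWriter;
the status code and error payload are illustrative, not necessarily what the server sends):
body := []byte(`{"jsonrpc":"2.0","id":null,"error":{"code":-32002,"message":"request timed out"}}`)
w.Header().Set("Content-Type", "application/json")
// An explicit Content-Length keeps net/http from switching to chunked encoding.
w.Header().Set("Content-Length", strconv.Itoa(len(body)))
// The body is written uncompressed; gzip would hide the true length.
w.WriteHeader(http.StatusServiceUnavailable)
w.Write(body)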
This adds a generic mechanism for 'dial options' in the RPC client,
and also implements a specific dial option for the JWT authentication
mechanism used by the engine API. Some real tests for the server-side
authentication handling are also added.
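A sketch of dialing an authenticated engine API endpoint, assuming the JWT dial
option is exposed as rpc.WithHTTPAuth together with the node.NewJWTAuth helper:
var secret [32]byte // the shared secret from the jwtsecret file
authClient, err := rpc.DialOptions(ctx, "http://127.0.0.1:8551",
    rpc.WithHTTPAuth(node.NewJWTAuth(secret)))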
Co-authored-by: Joshua Gutow <jgutow@optimism.io>
Co-authored-by: Felix Lange <fjl@twurst.com>
This replaces the sketchy and undocumented string context keys for HTTP requests
with a defined interface. Using string keys with context is discouraged because
they may clash with keys created by other packages.
We added these keys to make connection metadata available in the signer, so this
change also updates signer/core to use the new PeerInfo API.
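In a handler method, the metadata can then be read from the request context; a minimal
sketch (the service is hypothetical, the accessor is rpc.PeerInfoFromContext):
func (s *ExampleService) ClientInfo(ctx context.Context) string {
    info := rpc.PeerInfoFromContext(ctx)
    return info.Transport + " connection from " + info.RemoteAddr
}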
This avoids quadratic time complexity in the lookup of the batch element
corresponding to an RPC response. Unfortunately, the new approach
requires additional memory for the mapping from ID to index.
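Sketched with hypothetical field and helper names, the approach looks like this:
byID := make(map[string]int, len(batch))
for i, elem := range batch {
    byID[string(elem.id)] = i // index each request by its raw JSON id
}
for _, resp := range responses {
    if i, ok := byID[string(resp.ID)]; ok {
        deliver(&batch[i], resp) // O(1) lookup instead of scanning the batch
    }
}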
Fixes #22805
This change makes the client attempt to reconnect when a write fails.
We already had reconnect support, but the reconnect would previously
happen on the next call after an error. Being more eager leads to a
smoother experience overall.
* rpc: improve codec abstraction
rpc.ServerCodec is an opaque interface. There was only one way to get a
codec using existing APIs: rpc.NewJSONCodec. This change exports
newCodec (as NewFuncCodec) and NewJSONCodec (as NewCodec). It also makes
all codec methods non-public to avoid showing internals in godoc.
While here, remove codec options in tests because they are not
supported anymore.
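A sketch of what the exported constructors enable, e.g. serving RPC over a raw
net.Conn (the service and listener setup are illustrative):
srv := rpc.NewServer()
srv.RegisterName("example", new(ExampleService))
ln, _ := net.Listen("tcp", "127.0.0.1:0")
for {
    conn, err := ln.Accept()
    if err != nil {
        break
    }
    go srv.ServeCodec(rpc.NewCodec(conn), 0)
}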
* p2p/simulations: use github.com/gorilla/websocket
This package was the last remaining user of golang.org/x/net/websocket.
Migrating to the new library wasn't straightforward because it is no
longer possible to treat WebSocket connections as a net.Conn.
* vendor: delete golang.org/x/net/websocket
* rpc: fix godoc comments and run gofmt
* rpc: implement websockets with github.com/gorilla/websocket
This change makes package rpc use the github.com/gorilla/websocket
package for WebSockets instead of golang.org/x/net/websocket. The new
library is more robust and supports all WebSocket features including
continuation frames.
There are new tests for two issues with the previously-used library:
- TestWebsocketClientPing checks handling of Ping frames.
- TestWebsocketLargeCall checks whether the request size limit is
applied correctly.
* rpc: raise HTTP/WebSocket request size limit to 5MB
* rpc: remove default origin for client connections
The client used to put the local hostname into the Origin header because
the server wanted an origin to accept the connection, but that's silly:
Origin is for browsers/websites, and nobody would whitelist a particular hostname.
Now that the server doesn't need Origin anymore, don't bother setting
one for clients. Users who need an origin can use DialWebsocket to
create a client with an arbitrary origin.
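For example, to dial with an explicit origin (endpoint and origin are placeholders):
client, err := rpc.DialWebsocket(context.Background(), "ws://127.0.0.1:8546", "https://example.com")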
* vendor: put golang.org/x/net/websocket back
* rpc: don't set Origin header for empty (default) origin
* rpc: add HTTP status code to handshake error
This makes it easier to debug failing connections.
* ethstats: use github.com/gorilla/websocket
* rpc: fix lint
This PR updates a comment about the maximum client subscription buffer
to reflect changes made previously, and fixes a test that wouldn't fail
when wantError == true but execution did not return an error.
When cancelling the context for a call on an HTTP-based client while the
call is running, the select in requestOp.wait may hit the <-context.Done()
case instead of the <-op.resp case. This doesn't happen often -- our
cancel test hasn't caught this even though it ran thousands of times
on CI since the RPC client was added.
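For illustration, the race has the usual shape of a select with two ready channels,
where Go picks a case at random; a common remedy (not necessarily the exact fix
applied here) is to drain the response channel even after cancellation:
select {
case resp := <-op.resp:
    return resp, nil
case <-ctx.Done():
    // The response may have arrived concurrently; prefer it if so.
    select {
    case resp := <-op.resp:
        return resp, nil
    default:
        return nil, ctx.Err()
    }
}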
Fixes #19714
New APIs added:
client.RegisterName(namespace, service) // makes service available to server
client.Notify(ctx, method, args...) // sends a notification
ClientFromContext(ctx) // to get a client in handler method
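A minimal sketch of these APIs together (service and notification names are hypothetical):
type PingService struct{}

func (s *PingService) Ping(ctx context.Context) string {
    // The serving connection's client is available inside the handler
    // and can be used to push a notification back to the peer.
    if c, ok := rpc.ClientFromContext(ctx); ok {
        c.Notify(ctx, "ping_pong", "hello")
    }
    return "pong"
}

// On the connecting side:
// client.RegisterName("ping", new(PingService))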
This is essentially a rewrite of the server-side code. JSON-RPC
processing code is now the same on both server and client side. Many
minor issues were fixed in the process and there is a new test suite for
JSON-RPC spec compliance (and non-compliance in some cases).
List of behavior changes:
- Method handlers are now called with a per-request context instead of a
per-connection context. The context is canceled right after the method
returns.
- Subscription error channels are always closed when the connection
ends. There is no need to also wait on the Notifier's Closed channel
to detect whether the subscription has ended.
- Client now omits "params" instead of sending "params": null when there
are no arguments to a call. The previous behavior was not compliant
with the spec. The server still accepts "params": null.
- Floating point numbers are allowed as "id". The spec doesn't allow
them, but we handle request "id" as json.RawMessage and guarantee that
the same number will be sent back.
- Logging is improved significantly. There is now a message at DEBUG
level for each RPC call served.
This commit adds all changes needed for the merge of swarm-network-rewrite.
The changes:
- build: increase linter timeout
- contracts/ens: export ensNode
- log: add Output method and enable fractional seconds in format
- metrics: relax test timeout
- p2p: reduced some log levels, updates to simulation packages
- rpc: increased maxClientSubscriptionBuffer to 20000
This commit introduces a network simulation framework which
can be used to run simulated networks of devp2p nodes. The
intention is to use this for testing protocols, performing
benchmarks and visualising emergent network behaviour.
There is no need to depend on the old context package now that the
minimum Go version is 1.7. The move to "context" eliminates our weird
vendoring setup. Some vendored code still uses golang.org/x/net/context
and it is now vendored in the normal way.
This change triggered new vet checks around context.WithTimeout which
didn't fire with golang.org/x/net/context.
I initially made the client block if the 100-element buffer was
exceeded. It turns out that this is inconvenient for simple uses of the
client which subscribe and perform calls on the same goroutine, e.g.
client, _ := rpc.Dial(...)
ch := make(chan int) // note: no buffer
sub, _ := client.EthSubscribe(ch, "something")
for event := range ch {
    client.Call(...)
}
This innocent-looking code will lock up if the server suddenly decides
to send 2000 notifications. In this case, the client's main loop won't
accept the call because it is trying to deliver a notification to ch.
The issue is kind of hard to explain in the docs and few people will
actually read them. Buffering is the simple option and works with close
to no overhead for subscribers that always listen.