This fixes a rare issue where the client subscription forwarding loop
would attempt to send on the subscription's channel after Unsubscribe has
returned, leading to a panic if the subscription channel was already
closed by the user. Example:
    sub, _ := client.Subscribe(..., channel, ...)
    sub.Unsubscribe()
    close(channel)
The race occurred because Unsubscribe called quitWithServer to tell the
forwarding loop to stop sending on sub.channel, but did not wait for the
loop to actually shut down. This is fixed by adding an additional channel
to track the shutdown, on which Unsubscribe now waits.
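
A minimal sketch of the added synchronization, with illustrative names and
types (the real ClientSubscription forwards reflect values and carries more
state than this):

    package main

    import "fmt"

    type subscription struct {
        channel     chan int      // user-supplied channel
        in          chan int      // events arriving from the server
        quit        chan struct{} // tells the forwarding loop to stop
        forwardDone chan struct{} // closed when the loop will no longer send
    }

    func (s *subscription) forward() {
        defer close(s.forwardDone) // announce that no more sends will happen
        for {
            select {
            case <-s.quit:
                return
            case v := <-s.in:
                select {
                case s.channel <- v:
                case <-s.quit:
                    return
                }
            }
        }
    }

    func (s *subscription) Unsubscribe() {
        close(s.quit)
        <-s.forwardDone // after this, closing s.channel is safe
    }

    func main() {
        s := &subscription{
            channel:     make(chan int),
            in:          make(chan int),
            quit:        make(chan struct{}),
            forwardDone: make(chan struct{}),
        }
        go s.forward()
        s.Unsubscribe()
        close(s.channel) // no panic: the loop has already stopped sending
        fmt.Println("unsubscribed cleanly")
    }
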
Fixes #22322
* internal/ethapi: return revert reason for eth_call
* internal/ethapi: moved revert reason logic to doCall
* accounts/abi/bind/backends: added revert reason logic to simulated backend
* internal/ethapi: fixed linting error
* internal/ethapi: check if require reason can be unpacked
* internal/ethapi: better error logic
* internal/ethapi: simplify logic
* internal/ethapi: return vmError()
* internal/ethapi: move handling of revert out of docall
* graphql: removed revert logic until spec change
* rpc: internal/ethapi: added custom error types
* graphql: use returndata instead of return
Return() checks if there is an error. If an error is found, we return nil.
For most use cases it can be beneficial to return the output even if there
was an error. This code should be changed anyway once the spec supports
error reasons in graphql responses.
* accounts/abi/bind/backends: added tests for revert reason
* internal/ethapi: add errorCode to revert error
* internal/ethapi: add errorCode of 3 to revertError
* internal/ethapi: unified estimateGasErrors, simplified logic
* internal/ethapi: unified handling of errors in DoEstimateGas
* rpc: print error data field
* accounts/abi/bind/backends: unify simulatedBackend and RPC
* internal/ethapi: added binary data to revertError data
* internal/ethapi: refactored unpacking logic into newRevertError
* accounts/abi/bind/backends: fix EstimateGas
* accounts, console, internal, rpc: minor error interface cleanups
* Revert "accounts, console, internal, rpc: minor error interface cleanups"
This reverts commit 2d3ef53c5304e429a04983210a417c1f4e0dafb7.
* re-apply the good parts of 2d3ef53c53
* rpc: add test for returning server error data from client
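
A hedged sketch of how a caller could consume the new error data. It assumes
the error's data field carries the hex-encoded revert return data, as this
change implements; the helper names and the call-site shape are illustrative:

    import (
        "context"
        "errors"
        "fmt"

        "github.com/ethereum/go-ethereum/accounts/abi"
        "github.com/ethereum/go-ethereum/common/hexutil"
        "github.com/ethereum/go-ethereum/rpc"
    )

    // revertReason tries to recover the require() reason string from the
    // error data attached to a failed eth_call.
    func revertReason(err error) (string, bool) {
        var de rpc.DataError
        if !errors.As(err, &de) {
            return "", false
        }
        hexData, ok := de.ErrorData().(string) // hex-encoded revert return data
        if !ok {
            return "", false
        }
        data, decodeErr := hexutil.Decode(hexData)
        if decodeErr != nil {
            return "", false
        }
        reason, unpackErr := abi.UnpackRevert(data)
        if unpackErr != nil {
            return "", false
        }
        return reason, true
    }

    func callAndReport(client *rpc.Client, msg map[string]interface{}) {
        var result hexutil.Bytes
        err := client.CallContext(context.Background(), &result, "eth_call", msg, "latest")
        if reason, ok := revertReason(err); ok {
            fmt.Println("execution reverted:", reason)
        }
    }
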
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
The leaks were mostly in unit tests, and could all be resolved by
adding suitably-sized channel buffers or by restructuring the test
to not send on a channel after an error has occurred.
There is an unavoidable goroutine leak in Console.Interactive: when
we receive a signal, the line reader cannot be unblocked and will get
stuck. This leak is now documented and I've tried to make it slightly
less bad by adding a one-element buffer to the output channels of
the line-reading loop. Should the reader eventually awake from its
blocked state (i.e. when stdin is closed), at least it won't get stuck
trying to send to the interpreter loop, which has long since quit.
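
The same buffering idea in a stripped-down, runnable sketch; the structure is
simplified from the actual Console code and a timeout stands in for the
interrupt signal:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // One-element buffers: if the loop below gives up early, the blocked
        // reader can still complete its single send once stdin unblocks and
        // then exit, instead of leaking forever on a blocked channel send.
        lines := make(chan string, 1)
        errc := make(chan error, 1)

        go func() {
            line, err := bufio.NewReader(os.Stdin).ReadString('\n') // may block until stdin closes
            if err != nil {
                errc <- err
                return
            }
            lines <- line
        }()

        select {
        case line := <-lines:
            fmt.Print("read: ", line)
        case err := <-errc:
            fmt.Println("reader stopped:", err)
        case <-time.After(2 * time.Second):
            fmt.Println("interrupted; the reader goroutine is left behind but will not block")
        }
    }
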
Co-authored-by: Felix Lange <fjl@twurst.com>
* rpc: remove 'exported or builtin' restriction for parameters
There is no technical reason for this restriction because package reflect
can create values of any type. Requiring parameters and return values to
be exported causes a lot of noise in package exports. (A sketch of what
the relaxed rule allows follows below.)
* rpc: fix staticcheck warnings
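
The sketch referenced above: a service whose parameter and result types are
unexported can now be registered. The type and service names here are made up
for illustration:

    import "github.com/ethereum/go-ethereum/rpc"

    // echoArgs and echoResult are unexported; previously this would have
    // prevented Echo from being picked up as an RPC method.
    type echoArgs struct{ S string }
    type echoResult struct{ String string }

    type testService struct{}

    func (testService) Echo(args echoArgs) echoResult {
        return echoResult{String: args.S}
    }

    func register(srv *rpc.Server) error {
        return srv.RegisterName("test", testService{})
    }
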
This PR updates a comment about the maximum client subscription buffer
to reflect changes made previously, and fixes a test that wouldn't fail
when wantError == true but execution did not return an error.
New APIs added:
    client.RegisterName(namespace, service) // makes service available to server
    client.Notify(ctx, method, args...)     // sends a notification
    ClientFromContext(ctx)                  // to get a client in handler method
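
A rough sketch of how these fit together; the namespaces, method names and
service types are placeholders, only RegisterName, Notify and
ClientFromContext are the APIs listed above:

    import (
        "context"

        "github.com/ethereum/go-ethereum/rpc"
    )

    // Client side: expose a service the server can call back into, and send
    // a notification (a call without an id, so no response is expected).
    type pingService struct{}

    func (pingService) Ping(msg string) string { return "pong: " + msg }

    func clientSide(client *rpc.Client) error {
        if err := client.RegisterName("ping", pingService{}); err != nil {
            return err
        }
        // "foo_bar" is a placeholder method name on the server.
        return client.Notify(context.Background(), "foo_bar", "hello")
    }

    // Server side: inside a handler, ClientFromContext yields a client bound
    // to the connection the request arrived on, usable for calls to the peer.
    type callbackService struct{}

    func (callbackService) Hello(ctx context.Context) string {
        if c, ok := rpc.ClientFromContext(ctx); ok {
            _ = c.Notify(ctx, "ping_ping", "hello from the server") // best-effort
        }
        return "hello"
    }
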
This is essentially a rewrite of the server-side code. JSON-RPC
processing code is now the same on both server and client side. Many
minor issues were fixed in the process and there is a new test suite for
JSON-RPC spec compliance (and non-compliance in some cases).
List of behavior changes:
- Method handlers are now called with a per-request context instead of a
per-connection context. The context is canceled right after the method
returns.
- Subscription error channels are always closed when the connection
ends. There is no need to also wait on the Notifier's Closed channel
to detect whether the subscription has ended.
- Client now omits "params" instead of sending "params": null when there
are no arguments to a call. The previous behavior was not compliant
with the spec. The server still accepts "params": null.
- Floating point numbers are allowed as "id". The spec doesn't allow
them, but we handle request "id" as json.RawMessage and guarantee that
the same number will be sent back.
- Logging is improved significantly. There is now a message at DEBUG
level for each RPC call served.
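
For the subscription change in particular, here is a handler sketch that
relies only on the subscription's error channel; the service type and its
events field are assumptions for illustration:

    import (
        "context"

        "github.com/ethereum/go-ethereum/rpc"
    )

    type filterService struct {
        events chan int // assumed event source
    }

    func (s *filterService) SomethingChanged(ctx context.Context) (*rpc.Subscription, error) {
        notifier, supported := rpc.NotifierFromContext(ctx)
        if !supported {
            return nil, rpc.ErrNotificationsUnsupported
        }
        sub := notifier.CreateSubscription()

        go func() {
            for {
                select {
                case v := <-s.events:
                    if err := notifier.Notify(sub.ID, v); err != nil {
                        return
                    }
                case <-sub.Err():
                    // Closed on unsubscribe and when the connection ends; no
                    // separate Closed channel needs to be watched anymore.
                    return
                }
            }
        }()
        return sub, nil
    }
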
This commit introduces a network simulation framework which
can be used to run simulated networks of devp2p nodes. The
intention is to use this for testing protocols, performing
benchmarks and visualising emergent network behaviour.
Currently, the HTTP CORS domains and websocket origins are a comma-separated
string in the config object. These are replaced with string arrays, which are
more expressive when used in a config file.
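
A hedged sketch of the shape of this change; the exact field and helper names
may differ from the real config code:

    import "strings"

    // Before, a single comma-separated string had to be split by every
    // consumer, e.g. HTTPCors string = "http://localhost,http://example.com".
    // After, string arrays map naturally onto config file entries:
    type Config struct {
        HTTPCors  []string // e.g. []string{"http://localhost", "http://example.com"}
        WSOrigins []string
    }

    // Command-line flags can keep accepting a comma-separated value and split it.
    func splitAndTrim(flagValue string) []string {
        var result []string
        for _, entry := range strings.Split(flagValue, ",") {
            if trimmed := strings.TrimSpace(entry); trimmed != "" {
                result = append(result, trimmed)
            }
        }
        return result
    }
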
There is no need to depend on the old context package now that the
minimum Go version is 1.7. The move to "context" eliminates our weird
vendoring setup. Some vendored code still uses golang.org/x/net/context
and it is now vendored in the normal way.
This change triggered new vet checks around context.WithTimeout which
didn't fire with golang.org/x/net/context.
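
The vet check in question concerns the cancel function returned by
context.WithTimeout; a minimal example of the pattern it expects, where
doWork stands in for the actual request:

    import (
        "context"
        "time"
    )

    func doWithTimeout() error {
        // vet's "lostcancel" check fires when the cancel function returned by
        // WithTimeout can go uncalled on some path; deferring it immediately
        // is the standard fix and releases the context's resources.
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        return doWork(ctx)
    }

    func doWork(ctx context.Context) error {
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-time.After(time.Second):
            return nil
        }
    }
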
I initially made the client block if the 100-element buffer was
exceeded. It turns out that this is inconvenient for simple uses of the
client which subscribe and perform calls on the same goroutine, e.g.
    client, _ := rpc.Dial(...)
    ch := make(chan int) // note: no buffer
    sub, _ := client.EthSubscribe(ch, "something")
    for event := range ch {
        client.Call(...)
    }
This innocent-looking code will lock up if the server suddenly decides
to send 2000 notifications. In this case, the client's main loop won't
accept the call because it is trying to deliver a notification to ch.
The issue is kind of hard to explain in the docs and few people will
actually read them. Buffering is the simple option and works with close
to no overhead for subscribers that always listen.