This enables the following linters:
- typecheck
- unused
- staticcheck
- bidichk
- durationcheck
- exportloopref
- gosec
With a few exceptions:
- We use a deprecated protobuf in trezor. I didn't want to mess with that, since I cannot meaningfully test any changes there.
- The deprecated TypeMux is used in a few places still, so the warning for it is silenced for now.
- Using a string type in context.WithValue is apparently wrong: one should use a custom key type to prevent collisions between different places in the caller hierarchy (see the sketch after this list). That should be fixed at some point, but may require some attention.
- The warnings for using a weak random generator are squashed, since we use randomness in many places without needing cryptographic guarantees.
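For reference, a minimal sketch of the key pattern the linter wants; the ctxKey type and helpers are hypothetical, not taken from the codebase:

import "context"

type ctxKey struct{}

// withPeer stores a value under an unexported struct-typed key. Unlike a
// plain string key, it cannot collide with keys set by other packages in
// the caller hierarchy.
func withPeer(ctx context.Context, peer string) context.Context {
    return context.WithValue(ctx, ctxKey{}, peer)
}

func peerFrom(ctx context.Context) (string, bool) {
    v, ok := ctx.Value(ctxKey{}).(string)
    return v, ok
}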
This commit replaces ioutil.TempDir with t.TempDir in tests. The
directory created by t.TempDir is automatically removed when the test
and all its subtests complete.
Prior to this commit, a temporary directory created using ioutil.TempDir
had to be removed manually by calling os.RemoveAll, a step that is omitted in
some tests. The error handling boilerplate, e.g.
defer func() {
    if err := os.RemoveAll(dir); err != nil {
        t.Fatal(err)
    }
}()
is also tedious, but t.TempDir handles this for us nicely.
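For illustration, the mechanical shape of the replacement looks like this (the test names are hypothetical):

import (
    "io/ioutil"
    "os"
    "testing"
)

// Before: create the directory, check the error, remember to clean up.
func TestWithManualCleanup(t *testing.T) {
    dir, err := ioutil.TempDir("", "example")
    if err != nil {
        t.Fatal(err)
    }
    defer os.RemoveAll(dir)
    _ = dir // ... use dir ...
}

// After: the testing package creates the directory and removes it when
// the test and all its subtests complete.
func TestWithTempDir(t *testing.T) {
    dir := t.TempDir()
    _ = dir // ... use dir ...
}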
Reference: https://pkg.go.dev/testing#T.TempDir
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
This change makes use of the new code generator rlp/rlpgen to improve the
performance of RLP encoding for Header and StateAccount. It also speeds up
encoding of ReceiptForStorage using the new rlp.EncoderBuffer API.
The change is much less transparent than I wanted it to be, because Header and
StateAccount now have an EncodeRLP method defined with a pointer receiver. It
used to be possible to encode non-pointer values of these types, but the new
method prevents that, and attempting to encode unaddressable values (even as
part of another value) will return an error. The error can be surprising and may
pop up in places that previously didn't expect any errors.
To make things work, I also needed to update all code paths (mostly in unit tests)
that led to encoding of non-pointer values, making them pass a pointer instead.
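A minimal sketch of the new behavior, based on the description above:

import (
    "bytes"

    "github.com/ethereum/go-ethereum/core/types"
    "github.com/ethereum/go-ethereum/rlp"
)

func encodeHeader(h types.Header) error {
    var buf bytes.Buffer
    // Fine: pointer values have EncodeRLP in their method set.
    if err := rlp.Encode(&buf, &h); err != nil {
        return err
    }
    // Now an error: the Header inside the interface value is
    // unaddressable, so the pointer-receiver EncodeRLP can't be called.
    return rlp.Encode(&buf, h)
}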
Benchmark results:
name old time/op new time/op delta
EncodeRLP/legacy-header-8 328ns ± 0% 237ns ± 1% -27.63% (p=0.000 n=8+8)
EncodeRLP/london-header-8 353ns ± 0% 247ns ± 1% -30.06% (p=0.000 n=8+8)
EncodeRLP/receipt-for-storage-8 237ns ± 0% 123ns ± 0% -47.86% (p=0.000 n=8+7)
EncodeRLP/receipt-full-8 297ns ± 0% 301ns ± 1% +1.39% (p=0.000 n=8+8)
name old speed new speed delta
EncodeRLP/legacy-header-8 1.66GB/s ± 0% 2.29GB/s ± 1% +38.19% (p=0.000 n=8+8)
EncodeRLP/london-header-8 1.55GB/s ± 0% 2.22GB/s ± 1% +42.99% (p=0.000 n=8+8)
EncodeRLP/receipt-for-storage-8 38.0MB/s ± 0% 64.8MB/s ± 0% +70.48% (p=0.000 n=8+7)
EncodeRLP/receipt-full-8 910MB/s ± 0% 897MB/s ± 1% -1.37% (p=0.000 n=8+8)
name old alloc/op new alloc/op delta
EncodeRLP/legacy-header-8 0.00B 0.00B ~ (all equal)
EncodeRLP/london-header-8 0.00B 0.00B ~ (all equal)
EncodeRLP/receipt-for-storage-8 64.0B ± 0% 0.0B -100.00% (p=0.000 n=8+8)
EncodeRLP/receipt-full-8 320B ± 0% 320B ± 0% ~ (all equal)
Some benchmarks in eth/filters were not good: they weren't reproducible, relying on geth chaindata being present.
Another one was rejected because the receipt was lacking a backing transaction.
The p2p simulation benchmark produced a lot of the warnings below, due to the framework calling both
Stop() and Close(). Apparently, the simulated adapter is the only implementation which has a Close(),
and there is no need to call both Stop and Close on it.
* core: fix warning flagging the use of DeepEqual on error (see the sketch below)
* apply the same change everywhere possible
* revert change that was committed by mistake
* fix build error
* Update config.go
* revert changes to ConfigCompatError
* review feedback
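For reference, a hedged sketch of the flagged pattern and its replacement (errStopped is a hypothetical sentinel):

import (
    "errors"
    "reflect"
)

var errStopped = errors.New("stopped")

func isStopped(err error) bool {
    // Flagged: reflect.DeepEqual on errors compares unexported state and
    // breaks as soon as the error is wrapped.
    _ = reflect.DeepEqual(err, errStopped)

    // Preferred: errors.Is unwraps and matches sentinel errors.
    return errors.Is(err, errStopped)
}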
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR ensures that wiping all data associated with a node (apart from its nodekey)
will not generate an already-used sequence number for its ENRs, since remote nodes
would reject the new records until their sequence numbers exceed the previously
published largest one.
The big complication with this scheme is that every local update to the ENR can
potentially bump the sequence number by one. In order to ensure that local updates
do not outrun the clock, the sequence number is a millisecond-precision timestamp,
and updates are throttled to occur at most once per millisecond.
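A minimal sketch of the scheme, with hypothetical names:

import "time"

type localNode struct {
    seq uint64 // current ENR sequence number
}

func (ln *localNode) bumpSeq() {
    now := uint64(time.Now().UnixMilli())
    for now <= ln.seq {
        // A local update already used this millisecond; throttle so the
        // sequence number remains a valid timestamp.
        time.Sleep(time.Millisecond)
        now = uint64(time.Now().UnixMilli())
    }
    ln.seq = now
}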
Co-authored-by: Felix Lange <fjl@twurst.com>
In p2p/dial.go, conn.flags was accessed without using sync/atomic.
This race is fixed by removing the access.
In p2p/enode/iter_test.go, a similar race is resolved by writing the field atomically.
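The shape of the atomic fix, sketched with approximate field names:

import "sync/atomic"

type conn struct {
    flags int32 // read concurrently via atomic.LoadInt32 elsewhere
}

func (c *conn) setFlags(f int32) {
    // A plain "c.flags = f" races with the atomic readers; the write
    // must go through sync/atomic as well.
    atomic.StoreInt32(&c.flags, f)
}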
Co-authored-by: Felix Lange <fjl@twurst.com>
This change significantly improves the performance of RLPx message reads
and writes. In the previous implementation, reading and writing of
message frames performed multiple reads and writes on the underlying
network connection, and allocated a new []byte buffer for every read.
In the new implementation, reads and writes reuse buffers, and perform
far fewer system calls on the underlying connection. This doubles the
theoretically achievable throughput on a single connection, as shown by
the benchmark result:
name old speed new speed delta
Throughput-8 70.3MB/s ± 0% 155.4MB/s ± 0% +121.11% (p=0.000 n=9+8)
The change also removes support for the legacy, pre-EIP-8 handshake encoding.
As of May 2021, no actively maintained client sends this format.
This removes the error log message that says
Ethereum peer removal failed ... err="peer not registered"
The error happened because removePeer was called multiple
times: once to disconnect the peer, and another time when the
handler exited. With this change, removePeer now has the sole
purpose of disconnecting the peer. Unregistering happens exactly
once, when the handler exits.
This change extracts the peer QoS tracking logic from eth/downloader, moving
it into the new package p2p/msgrate. The job of msgrate.Tracker is determining
suitable timeout values and request sizes per peer.
The snap sync scheduler now uses msgrate.Tracker instead of the hard-coded 15s
timeout. This should make the sync work better on network links with high latency.
This changes the definitions of Ping and Pong, adding an optional field
for the sequence number. This field was previously encoded/decoded using
the "tail" struct tag, but using "optional" is much nicer.
This removes auto-configuration of the snap.*.ethdisco.net DNS discovery tree.
Since measurements have shown that > 75% of nodes in all.*.ethdisco.net support
snap, we have decided to retire the dedicated index for snap and just use the eth
tree instead.
The dial iterators of eth and snap now use the same DNS tree in the default configuration,
so both iterators should use the same DNS discovery client instance. This ensures that
the record cache and rate limit are shared. Records will not be requested multiple times.
While testing the change, I noticed that duplicate DNS requests do happen even
when the client instance is shared. This is because the two iterators request the tree
root, link tree root, and first levels of the tree in lockstep. To avoid this problem, the
change also adds a singleflight.Group instance in the client. When one iterator
attempts to resolve an entry which is already being resolved, the singleflight object
waits for the existing resolve call to finish and returns the entry to both places.
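The shape of the fix, sketched with hypothetical helper names:

import "golang.org/x/sync/singleflight"

type client struct {
    group singleflight.Group
}

func (c *client) resolveEntry(hash string) (interface{}, error) {
    // Concurrent callers for the same hash share one lookup; the second
    // caller just waits for the in-flight result instead of re-querying.
    v, err, _ := c.group.Do(hash, func() (interface{}, error) {
        return c.lookup(hash)
    })
    return v, err
}

func (c *client) lookup(hash string) (interface{}, error) {
    return nil, nil // the actual DNS TXT lookup, elided
}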
When receiving a PING from an IPv4 address over IPv6, the implementation sent
back an IPv4-in-IPv6 address. This change makes it reflect the IPv4 address.
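The normalization amounts to this (sketch):

import "net"

func reflectedIP(ip net.IP) net.IP {
    // An IPv4 sender seen over an IPv6 socket appears as an
    // IPv4-in-IPv6 mapped address; To4 recovers the 4-byte form.
    if v4 := ip.To4(); v4 != nil {
        return v4
    }
    return ip
}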
* eth/protocols, p2p/tracker: add support for req/rep rtt tracking
* p2p/tracker: sanity cap the number of pending requests
* p2p/tracker: linter <3
* p2p/tracker: disable entire tracker if no metrics are enabled
This fixes the calculation of the tree branch factor. With the new
formula, we now create at most 13 children instead of 30, ensuring
the TXT record size will stay below 370 bytes.
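For reference, assuming the spec's 16-byte subdomain hashes (26 characters in base32): a branch entry with 13 children takes len("enrtree-branch:") + 13*26 + 12 commas = 15 + 338 + 12 = 365 bytes, just under the limit, while 14 children would already take 392 bytes.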
This PR implements the first of the "lespay" UDP queries, which
is already useful in itself: the capacity query. The server pool makes
use of this query, doing a cheap UDP query to determine whether it is
worth starting the more expensive TCP connection process.
In the random sync algorithm used by the DNS node iterator, we first pick a random
tree and then perform one sync action on that tree. This happens in a loop until any
node is found. If no trees contain any nodes, the iterator will enter a hot loop spinning
at 100% CPU.
The fix is complicated. The iterator now checks if a meaningful sync action can
be performed on any tree. If there is nothing to do, it waits for the next root record
recheck time to arrive and then tries again.
Fixes #22306
Prevents a situation where we (not running snap) connect with a peer running snap and get stalled waiting for snap registration to succeed (which will never happen), causing a waitgroup wait to halt shutdown.
This PR enables running the new discv5 protocol in both LES client
and server mode. In client mode it mixes discv5 and dnsdisc iterators
(if both are enabled) and filters incoming ENRs for "les" tag and fork ID.
The old p2p/discv5 package and all references to it are removed.
Co-authored-by: Felix Lange <fjl@twurst.com>
USB enumeration still occurred. Make sure it will only occur if --usb is set.
This also deprecates the 'NoUSB' config file option in favor of a new option 'USB'.
The database panicked for invalid IPs. This is usually no problem
because all code paths leading to node DB access verify the IP, but it's
dangerous because improper validation can turn this panic into a DoS
vulnerability. The quick fix here is to just turn database accesses
using an invalid IP into a no-op. This isn't great, but I'm planning to
remove the node DB for discv5 long-term, so it should be fine to have
this quick fix for half a year.
Fixes #21849
This PR fixes a deadlock reported here: #21925
The cause is that many operations may be pending, but when the close happens, only one of them gets woken and exits; the others remain waiting for a signal that never comes.
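A sketch of the classic fix for this bug class (not necessarily the exact code in the PR): broadcast shutdown by closing a channel, so every pending operation observes it.

import "errors"

var errClosed = errors.New("closed")

type service struct {
    closed chan struct{} // closed exactly once on shutdown
}

func (s *service) awaitDone(done chan struct{}) error {
    select {
    case <-done:
        return nil
    case <-s.closed:
        // close(s.closed) wakes every waiter, unlike a single send on a
        // channel, which wakes only one of them.
        return errClosed
    }
}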
This fixes a deadlock that could occur when a response packet arrived
after a call had already received enough responses and was about to
signal completion to the dispatch loop.
Co-authored-by: Felix Lange <fjl@twurst.com>
- Remove the ws:// prefix from the status endpoint, since
ws:// is already included in stack.WSEndpoint().
- Don't register the services again in the node start.
Registration is already done in the initialization stage.
- Expose admin namespace via websocket.
This namespace is necessary for connecting the peers via websocket.
- Offer logging-relevant options for the exec adapter.
It's really painful to mix all log output in a single console, so
this PR offers two additional options for the exec adapter: testers
can configure the log output (e.g. file output) and the log level
for each p2p node.
This adds a few tiny fixes for les and the p2p simulation framework:
LES Parts
- Keep the LES-SERVER connection even if it's not synced
We had the idea of rejecting connections in the LES protocol if the les-server itself is
not synced. However, in the LES protocol we will also receive connections from other
les-servers. In this case, even if the local node is not synced yet, we should keep the
TCP connection for the other protocols (e.g. the eth protocol).
- Don't count "invalid message" for non-existent GetBlockHeadersMsg requests
In the eth syncing mechanisms (full sync, fast sync, light sync), we will try to fetch
some non-existent blocks or headers (to ensure we have indeed downloaded the entire
missing chain). In this case, it's possible that the les-server will receive requests for
non-existent headers. So don't count them as "invalid messages" when scheduling
dropping.
- Copy the announce object in the closure (see the sketch after this list)
Before the les-server pushes the latest headers to all connected clients, it creates
a closure and queues it in the underlying request scheduler. In some scenarios this is
problematic. E.g. in private networks, blocks can be mined very fast, so before the
first closure is executed, we may already have updated the latest_announce object,
and the "announce" object we actually want to send has been replaced.
The downside is that the client will receive two announces with the same td and then
drop the server.
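A sketch of the copy fix, with hypothetical types and names:

type announceData struct{ td uint64 /* elided */ }

type client struct{}

func (c *client) sendAnnounce(a announceData) { /* elided */ }

type scheduler struct{}

func (s *scheduler) queue(f func()) { go f() }

type server struct {
    latestAnnounce *announceData
    scheduler      *scheduler
}

func (s *server) queueAnnounce(c *client) {
    announce := *s.latestAnnounce // copy taken when the closure is created
    s.scheduler.queue(func() {
        // Later updates to s.latestAnnounce can no longer change what
        // this queued closure sends.
        c.sendAnnounce(announce)
    })
}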
P2P Simulation Framework
- Don't double-register the protocol services in the p2p-simulation "Start".
The protocols on top of devp2p are registered at node-creation time, so don't register
them again when starting a node in the p2p simulation framework.
- Add a new config field "ExternalSigner", in order to use the clef service in the
framework.
* peer: return localAddr instead of name to prevent spam
We currently use the name (which can be freely set by the peer) in several log messages.
This enables malicious actors to write spam into your geth log.
This commit returns the localAddr instead of the freely settable name.
* p2p: reduce usage of peer.Name in warn messages
* eth, p2p: use truncated names
* Update peer.go
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Felix Lange <fjl@twurst.com>
For some reason, using the shared hash causes a cryptographic incompatibility
when using Go 1.15. I noticed this during the development of Discovery v5.1
when I added test vector verification.
The Go library commit that broke this is golang/go@97240d5, but the
way we used HKDF is slightly dodgy anyway, and it's not a regression.
This change moves the RLPx protocol implementation into a separate package,
p2p/rlpx. The new package can be used to establish RLPx connections for
protocol testing purposes.
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR adds an extra guarantee to NodeStateMachine: it ensures that all
immediate effects of a certain change are processed before any subsequent
effects of any of the immediate effects on the same node. In the original
version, if a cascaded change caused a subscription callback to be called
multiple times for the same node then these calls might have happened in a
wrong chronological order.
For example:
- a subscription to flag0 changes flag1 and flag2
- a subscription to flag1 changes flag3
- a subscription to flag1, flag2 and flag3 was called in the following order:
[flag1] -> [flag1, flag3]
[] -> [flag1]
[flag1, flag3] -> [flag1, flag2, flag3]
This happened because the tree of changes was traversed in a "depth-first
order". Now it is traversed in a "breadth-first order"; each node has a
FIFO queue for pending callbacks and each triggered subscription callback
is added to the end of the list. The already existing guarantees are
retained; no SetState or SetField returns until the callback queue of the
node is empty again. Just like before, it is the responsibility of the
state machine design to ensure that infinite state loops are not possible.
Multiple changes affecting the same node can still happen simultaneously;
in this case the changes can be interleaved in the FIFO of the node but the
correct order is still guaranteed.
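A condensed sketch of the dispatch discipline (hypothetical types):

type nodeQueue struct {
    pending []func() // FIFO of triggered subscription callbacks
    running bool
}

func (q *nodeQueue) trigger(cb func()) {
    q.pending = append(q.pending, cb)
    if q.running {
        return // an outer trigger is already draining the queue
    }
    q.running = true
    for len(q.pending) > 0 {
        next := q.pending[0]
        q.pending = q.pending[1:]
        // next() may trigger further callbacks; they are appended behind
        // the ones already queued (breadth-first) instead of running
        // nested inside this call (depth-first).
        next()
    }
    q.running = false
}

SetState and SetField return only once this queue has drained, matching the guarantee described above.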
A new unit test is also added to verify callback order in the above scenario.
This change improves discovery behavior in small networks. Very small
networks would often fail to bootstrap because all member nodes were
dropping table content due to findnode failure. The check is now changed
to avoid dropping nodes on findnode failure when their bucket is almost
empty. It also relaxes the liveness check requirement for FINDNODE/v4
response nodes, returning unverified nodes as results when there aren't
any verified nodes yet.
The "findnode failed" log now reports whether the node was dropped
instead of the number of results. The value of the "results" was
always zero by definition.
Co-authored-by: Felix Lange <fjl@twurst.com>
This adds a lock around requests because some routers can't handle
concurrent requests. Requests are also rate-limited.
The Map function now requests a new mapping exactly when the map timeout
occurs instead of 5 minutes earlier. This should prevent duplicate mappings.
This PR significantly changes the APIs for instantiating Ethereum nodes in
a Go program. The new APIs are not backwards-compatible, but we feel that
this is made up for by the much simpler way of registering services on
node.Node. You can find more information and rationale in the design
document: https://gist.github.com/renaynay/5bec2de19fde66f4d04c535fd24f0775.
There is also a new feature in Node's Go API: it is now possible to
register arbitrary handlers on the user-facing HTTP server. In geth, this
facility is used to enable GraphQL.
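For example, mounting a custom handler looks roughly like this (sketch; GraphQL is attached through the same mechanism):

import (
    "net/http"

    "github.com/ethereum/go-ethereum/node"
)

func mountStatus(stack *node.Node) {
    stack.RegisterHandler("status", "/status",
        http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok"))
        }))
}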
There is a single minor change relevant for geth users in this PR: The
GraphQL API is no longer available separately from the JSON-RPC HTTP
server. If you want GraphQL, you need to enable it using the
./geth --http --graphql flag combination.
The --graphql.port and --graphql.addr flags are no longer available.
This PR reimplements the light client server pool. It is also a first step
to move certain logic into a new lespay package. This package will contain
the implementation of the lespay token sale functions, the token buying and
selling logic and other components related to peer selection/prioritization
and service quality evaluation. Over the long term this package will be
reusable for incentivizing future protocols.
Since the LES peer logic is now based on enode.Iterator, it can now use
DNS-based fallback discovery to find servers.
This document describes the function of the new components:
https://gist.github.com/zsfelfoldi/3c7ace895234b7b345ab4f71dab102d4
* p2p: add low port check in dialer
We already have a check like this for UDP ports; this adds a similar one to
the dialer. It prevents dials to port zero, and it's also an extra
layer of protection against spamming HTTP servers (see the sketch below).
* p2p/discover: use errLowPort in v4 code
* p2p: change port check
* p2p: add comment
* p2p/simulations/adapters: ensure assigned port is in all node records
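The check itself is tiny; a sketch (errLowPort mirrors the name used in the v4 code):

import "errors"

var errLowPort = errors.New("low port")

func checkDialPort(port int) error {
    // Rejects port zero as well as privileged ports, so dials can't be
    // pointed at HTTP servers and the like.
    if port < 1024 {
        return errLowPort
    }
    return nil
}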
It has been possible to specify enode URLs using a domain name since
commit b90cdbaa79, but the code comment still said that only
IP addresses are allowed.
Co-authored-by: admin@komgo.io <KomgoRocks2018!>
This adds two new methods to UDPv5, AllNodes and LocalNode.
AllNodes returns all the nodes stored in the local table; this is
useful for the purposes of metrics collection and also debugging any
potential issues with other discovery v5 implementations.
LocalNode returns the local node object. The reason for exposing this
is so that users can modify and set/delete new key-value entries in
the local record.
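Typical usage, sketched (the record key here is hypothetical):

import (
    "fmt"

    "github.com/ethereum/go-ethereum/p2p/discover"
    "github.com/ethereum/go-ethereum/p2p/enr"
)

func dumpAndTag(disc *discover.UDPv5) {
    // AllNodes: every node currently in the local table, handy for
    // metrics collection and for debugging other implementations.
    for _, n := range disc.AllNodes() {
        fmt.Println(n.ID(), n.IP())
    }
    // LocalNode: set or delete key/value pairs in the local record.
    disc.LocalNode().Set(enr.WithEntry("example", uint(1)))
}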
This PR adds service value measurement statistics to the light client. It
also adds a private API that makes these statistics accessible. A follow-up
PR will add the new server pool which uses these statistics to select
servers with good performance.
This document describes the function of the new components:
https://gist.github.com/zsfelfoldi/3c7ace895234b7b345ab4f71dab102d4
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
This adds an implementation of the current discovery v5 spec.
There is full integration with cmd/devp2p and enode.Iterator in this
version. In theory we could enable the new protocol as a replacement of
discovery v4 at any time. In practice, there will likely be a few more
changes to the spec and implementation before this can happen.
This adds additional logic to re-resolve the root name of a tree when a
couple of leaf requests have failed. We need this change to avoid
getting into a failure state where leaf requests keep failing for half
an hour when the tree has been updated.
* p2p: new dial scheduler
This change replaces the peer-to-peer dial scheduler with a new and
improved implementation. The new code is better than the previous
implementation in two key aspects:
- The time between discovery of a node and dialing that node is
significantly lower in the new version. The old dialState kept
a buffer of nodes and launched a task to refill it whenever the buffer
became empty. This worked well with the discovery interface we used to
have, but doesn't really work with the new iterator-based discovery
API.
- Selection of static dial candidates (created by Server.AddPeer or
through static-nodes.json) performs much better for large amounts of
static peers. Connections to static nodes are now limited like dynamic
dials and can no longer overstep MaxPeers or the dial ratio.
* p2p/simulations/adapters: adapt to new NodeDialer interface
* p2p: re-add check for self in checkDial
* p2p: remove peersetCh
* p2p: allow static dials when discovery is disabled
* p2p: add test for dialScheduler.removeStatic
* p2p: remove blank line
* p2p: fix documentation of maxDialPeers
* p2p: change "ok" to "added" in static node log
* p2p: improve dialTask docs
Also increase log level for "Can't resolve node"
* p2p: ensure dial resolver is truly nil without discovery
* p2p: add "looking for peers" log message
* p2p: clean up Server.run comments
* p2p: fix maxDialedConns for maxpeers < dialRatio
Always allocate at least one dial slot unless dialing is disabled using
NoDial or MaxPeers == 0. Most importantly, this fixes MaxPeers == 1 to
dedicate the sole slot to dialing instead of listening.
* p2p: fix RemovePeer to disconnect the peer again
Also make RemovePeer synchronous and add a test.
* p2p: remove "Connection set up" log message
* p2p: clean up connection logging
We previously logged outgoing connection failures up to three times.
- in SetupConn() as "Setting up connection failed addr=..."
- in setupConn() with an error-specific message and "id=... addr=..."
- in dial() as "Dial error task=..."
This commit ensures a single log message is emitted per failure and adds
"id=... addr=... conn=..." everywhere (id= omitted when the ID isn't
known yet).
Also avoid printing a log message when a static dial fails but can't be
resolved because discv4 is disabled. The light client hit this case all
the time, increasing the message count to four lines per failed
connection.
* p2p: document that RemovePeer blocks
This is a temporary fix for a problem which started happening when the
dialer was changed to read nodes from an enode.Iterator. Before the
iterator change, discovery queries would always return within a couple of
seconds even if there was no Internet access. Since the iterator won't
return unless a node is actually found, discoverTask can take much
longer. This means that the 'emergency connect' logic might not execute
in time, leading to a stuck node.