This removes the error added in #20597 in favor of a log message at
error level. Failing to start broke a bunch of people's setups and is
probably not the right thing to do for this check.
This adds logic to re-resolve the root name of a tree once a couple of
leaf requests have failed. We need this change to avoid a failure state
where leaf requests keep failing for half an hour after the tree has
been updated.
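A rough sketch of the idea (hypothetical names and threshold, not the
actual p2p/dnsdisc code):

```go
package dnsdisc

import "context"

const maxLeafFails = 5 // assumed threshold, not the actual constant

type clientTree struct {
	leafFails       int
	updateRoot      func(ctx context.Context) error // re-fetches the root TXT record
	resolveNextLeaf func(ctx context.Context) error // fetches the next missing leaf
}

// syncNext falls back to re-resolving the tree root once enough leaf
// requests have failed, instead of retrying stale leaf hashes until
// the next scheduled root recheck.
func (t *clientTree) syncNext(ctx context.Context) error {
	if t.leafFails >= maxLeafFails {
		t.leafFails = 0
		return t.updateRoot(ctx)
	}
	if err := t.resolveNextLeaf(ctx); err != nil {
		t.leafFails++
		return err
	}
	return nil
}
```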
* node: expose config in service context
* eth: integrate p2p/dnsdisc
* cmd/geth: add some DNS flags
* eth: remove DNS URLs
* cmd/utils: configure DNS names for testnets
* params: update DNS URLs
* cmd/geth: configure mainnet DNS
* cmd/utils: rename DNS flag and fix flag processing
* cmd/utils: remove debug print
* node: fix test
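For reference, a minimal sketch of consuming a DNS discovery tree with
the integrated client. The API shape and the mainnet enrtree URL are
taken from current geth and may differ slightly from this PR's exact
state:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/p2p/dnsdisc"
)

func main() {
	client := dnsdisc.NewClient(dnsdisc.Config{})
	it, err := client.NewIterator("enrtree://AKA3AM6LPBYEUDMVNU3BSVQJ5AD45Y7YPOHJLEF6W26QOE4VTUDPE@all.mainnet.ethdisco.net")
	if err != nil {
		panic(err)
	}
	defer it.Close()
	// Print the first few nodes found in the tree.
	for i := 0; i < 5 && it.Next(); i++ {
		fmt.Println(it.Node().URLv4())
	}
}
```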
* p2p: new dial scheduler
This change replaces the peer-to-peer dial scheduler with a new
implementation that improves on the old code in two key aspects:
- The time between discovery of a node and dialing that node is
significantly lower in the new version. The old dialState kept
a buffer of nodes and launched a task to refill it whenever the buffer
became empty. This worked well with the discovery interface we used to
have, but doesn't really work with the new iterator-based discovery
API (see the sketch after this list).
- Selection of static dial candidates (created by Server.AddPeer or
through static-nodes.json) performs much better with a large number of
static peers. Connections to static nodes are now limited like dynamic
dials and can no longer overstep MaxPeers or the dial ratio.
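A hypothetical sketch of the iterator-driven candidate feed (the real
dialScheduler has more moving parts, but the shape is the same):

```go
package p2p

import "github.com/ethereum/go-ethereum/p2p/enode"

// readNodes pumps discovery results into the scheduler loop. The send
// blocks until a dial slot frees up, so no stale buffer of candidates
// builds up between discovery and dialing.
func readNodes(it enode.Iterator, nodes chan<- *enode.Node) {
	defer it.Close()
	for it.Next() {
		nodes <- it.Node()
	}
}
```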
* p2p/simulations/adapters: adapt to new NodeDialer interface
* p2p: re-add check for self in checkDial
* p2p: remove peersetCh
* p2p: allow static dials when discovery is disabled
* p2p: add test for dialScheduler.removeStatic
* p2p: remove blank line
* p2p: fix documentation of maxDialPeers
* p2p: change "ok" to "added" in static node log
* p2p: improve dialTask docs
Also increase log level for "Can't resolve node"
* p2p: ensure dial resolver is truly nil without discovery
* p2p: add "looking for peers" log message
* p2p: clean up Server.run comments
* p2p: fix maxDialedConns for maxpeers < dialRatio
Always allocate at least one dial slot unless dialing is disabled using
NoDial or MaxPeers == 0. Most importantly, this fixes MaxPeers == 1 to
dedicate the sole slot to dialing instead of listening.
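A hypothetical sketch of the slot computation; the default dial ratio
of 3 matches geth, the rest is illustrative:

```go
package p2p

const defaultDialRatio = 3

func maxDialedConns(maxPeers, dialRatio int, noDial bool) int {
	if noDial || maxPeers == 0 {
		return 0 // dialing disabled
	}
	if dialRatio == 0 {
		dialRatio = defaultDialRatio
	}
	slots := maxPeers / dialRatio
	if slots == 0 {
		slots = 1 // always keep at least one dial slot
	}
	return slots
}
```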
* p2p: fix RemovePeer to disconnect the peer again
Also make RemovePeer synchronous and add a test.
* p2p: remove "Connection set up" log message
* p2p: clean up connection logging
We previously logged outgoing connection failures up to three times.
- in SetupConn() as "Setting up connection failed addr=..."
- in setupConn() with an error-specific message and "id=... addr=..."
- in dial() as "Dial error task=..."
This commit ensures a single log message is emitted per failure and adds
"id=... addr=... conn=..." everywhere (id= omitted when the ID isn't
known yet).
Also avoid printing a log message when a static dial fails but the node
can't be resolved because discv4 is disabled. The light client hit this
case all the time, bumping the message count to four lines per failed
connection.
* p2p: document that RemovePeer blocks
This feature update allows retrieving the transaction signature
parameters R, S and V through the GraphQL API endpoint.
Co-authored-by: amitshah <amitshah0t7@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
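A usage sketch, assuming GraphQL is enabled with --graphql and served
at /graphql on the HTTP endpoint (as in recent geth versions); the
transaction hash is a placeholder:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	query := `{"query": "{ transaction(hash: \"0x<txhash>\") { r s v } }"}`
	resp, err := http.Post("http://localhost:8545/graphql", "application/json", strings.NewReader(query))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // e.g. {"data":{"transaction":{"r":"0x...","s":"0x...","v":"0x..."}}}
}
```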
For longer records and subtree entries, the deployer created two
separate TXT records. This doesn't work as intended because the client
will receive the two records in arbitrary order. The fix is to encode
longer values as "string1""string2" instead of "string1", "string2".
This encoding creates a single record on AWS Route53.
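A sketch of the encoding, modeled on the fix; the 253-byte chunk size
is an assumption based on Route53's 255-character limit per quoted
string, minus the surrounding quotes:

```go
package main

import (
	"fmt"
	"strconv"
)

// splitTXT encodes value as "chunk1""chunk2"... so the whole value
// lands in a single TXT record whose parts keep their order.
func splitTXT(value string) string {
	var out string
	for len(value) > 0 {
		n := len(value)
		if n > 253 { // stay under the 255-char per-string limit
			n = 253
		}
		out += strconv.Quote(value[:n])
		value = value[n:]
	}
	return out
}

func main() {
	fmt.Println(splitTXT("enrtree-branch:AAAA,BBBB,CCCC"))
}
```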
Adds the 'geth dumpgenesis' command, which writes the configured
genesis in JSON format to stdout. This provides a way to export the
genesis definition (structure and content) for later use with the
'geth init' command.
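For example, a minimal round-trip (assuming default flags and data
directory):

```
geth dumpgenesis > genesis.json
geth init genesis.json
```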
* core/vm/runtime: add test for blockhash
* core/evm: less iteration in blockhash
* core/vm/runtime: nitpickfix
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
This change makes the client attempt to reconnect when a write fails.
We already had reconnect support, but the reconnect would previously
happen on the next call after an error. Being more eager leads to a
smoother experience overall.
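A hypothetical sketch of the behavior (not the actual rpc client
internals):

```go
package rpc

import "context"

type conn interface {
	writeJSON(ctx context.Context, msg interface{}) error
	close()
}

type client struct {
	conn      conn
	reconnect func(ctx context.Context) (conn, error)
}

// write retries once on a fresh connection when the first attempt
// fails, instead of leaving the reconnect to the next call.
func (c *client) write(ctx context.Context, msg interface{}) error {
	err := c.conn.writeJSON(ctx, msg)
	if err == nil {
		return nil
	}
	newConn, rerr := c.reconnect(ctx)
	if rerr != nil {
		return err // reconnect failed; surface the original write error
	}
	c.conn.close()
	c.conn = newConn
	return c.conn.writeJSON(ctx, msg)
}
```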