* cmd, eth, miner: disable advance sealing if the user requires it
* cmd, console, miner, les, eth: wrap the miner config
* eth: remove todo
* cmd, miner: revert noadvance flag
The reason for this is: if a transaction's execution takes even longer
than the block time, then that kind of transaction is a DoS attack.
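For context, "advance sealing" means the worker commits an empty block up
front so that a pathologically slow transaction cannot stall block
production. A minimal sketch of that ordering, with hypothetical names
standing in for the real miner internals:

```go
package main

import "fmt"

// Block and worker are illustrative stand-ins, not the real miner types.
type Block struct{ txs int }

type worker struct{}

// commitEmpty seals a block with no transactions so that something always
// lands within the block interval.
func (w *worker) commitEmpty() Block { return Block{txs: 0} }

// commitTransactions applies pending transactions; this is the step that a
// pathologically slow transaction can stretch past the block time.
func (w *worker) commitTransactions() Block { return Block{txs: 42} }

func (w *worker) seal(b Block) { fmt.Printf("sealed block with %d txs\n", b.txs) }

func main() {
	w := &worker{}
	// Advance sealing: empty block first, full block once execution finishes.
	w.seal(w.commitEmpty())
	w.seal(w.commitTransactions())
}
```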
* swarm/storage/feed/lookup: Add context handling/forwarding
* swarm/storage/feed/lookup: Add test to catch bad hint
* swarm/storage/feed/lookup: Added context cancellation test
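A minimal sketch of the context-forwarding pattern these lookup changes
describe; the ReadFunc signature and the epoch loop are illustrative
assumptions, not the actual lookup API:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// ReadFunc fetches the value stored at a given epoch; the context is
// forwarded so callers can cancel a long-running lookup. (Hypothetical
// signature, loosely modelled on the lookup package's read callback.)
type ReadFunc func(ctx context.Context, epoch uint64) (interface{}, error)

// Lookup probes epochs until it finds a value or the context is cancelled.
func Lookup(ctx context.Context, read ReadFunc) (interface{}, error) {
	for epoch := uint64(0); epoch < 32; epoch++ {
		// Bail out promptly if the caller cancelled or the deadline passed.
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		default:
		}
		if v, err := read(ctx, epoch); err == nil && v != nil {
			return v, nil
		}
	}
	return nil, errors.New("not found")
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
	defer cancel()
	_, err := Lookup(ctx, func(ctx context.Context, epoch uint64) (interface{}, error) {
		time.Sleep(5 * time.Millisecond) // simulate a slow fetch
		return nil, nil                  // nothing at this epoch
	})
	fmt.Println(err) // expected: context deadline exceeded
}
```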
* swarm/api: fix file descriptor leak in NewTestSwarmServer
Swarm storage (localstore) was not closed. That resulted in a
"too many open files" error if `TestClientUploadDownloadRawEncrypted`
was run with `-count 1000`.
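A sketch of the shape of the fix, assuming a test server type that owns its
storage; all names here are illustrative, not the real swarm/api test
helper:

```go
package main

import "fmt"

// store stands in for swarm's localstore; the point is only that it holds
// file descriptors which must be released.
type store struct{ closed bool }

func (s *store) Close() { s.closed = true }

// TestSwarmServer bundles the HTTP test server with its backing storage so
// a single Close releases everything; the leak was the storage never closing.
type TestSwarmServer struct {
	storage *store
	cleanup func()
}

func (t *TestSwarmServer) Close() {
	t.cleanup()
	t.storage.Close() // previously missing: descriptors piled up across runs
}

func NewTestSwarmServer() *TestSwarmServer {
	return &TestSwarmServer{
		storage: &store{},
		cleanup: func() { /* stop HTTP server, remove temp dirs, ... */ },
	}
}

func main() {
	srv := NewTestSwarmServer()
	defer srv.Close()
	fmt.Println("storage will be closed on exit:", srv.storage != nil)
}
```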
* cmd/swarm: speed up StartNewNodes() by parallelization
Reduce cluster startup time from 13s to 7s.
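The parallelization boils down to fanning node startup out over goroutines
and waiting on them; a self-contained sketch (startNode and the node count
are placeholders, not the real cmd/swarm helpers):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// startNode simulates bringing up a single swarm node.
func startNode(id int) {
	time.Sleep(100 * time.Millisecond)
	fmt.Printf("node %d up\n", id)
}

// StartNewNodes brings all nodes up concurrently instead of one by one,
// which is how the cluster startup dropped from ~13s to ~7s.
func StartNewNodes(n int) {
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			startNode(id)
		}(i)
	}
	wg.Wait()
}

func main() {
	StartNewNodes(3)
}
```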
* swarm/api: disable flaky TestClientUploadDownloadRawEncrypted with -race
* swarm/storage: disable flaky TestLDBStoreCollectGarbage (-race)
With race detection turned on, the disabled cases often fail with:
"ldbstore_test.go:535: expected surplus chunk 150 to be missing, but got no error"
* cmd/swarm: fix process leak in TestACT and TestSwarmUp
On each test run we start 3 nodes but never terminated them, so those
3 nodes kept eating up 1.2GB (3.4GB with -race) after test completion.
6b6c4d1c27 changed how we start clusters
to speed up tests. The changeset merged test cases together
and introduced a global cluster, but "forgot" about termination.
Let's get rid of the "global cluster" so we have a clear owner of
termination (at some cost in time), while leaving subtests to use the
same cluster.
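A sketch of the resulting ownership model, with a placeholder cluster type:
each top-level test starts its own cluster, defers its shutdown, and shares
it across subtests:

```go
package swarm

import "testing"

// cluster stands in for the 3-node test cluster; names are illustrative.
type cluster struct{ nodes int }

func newTestCluster(t *testing.T, nodes int) *cluster {
	t.Helper()
	return &cluster{nodes: nodes}
}

func (c *cluster) Shutdown() { c.nodes = 0 }

// TestACT owns its cluster and is responsible for tearing it down; subtests
// reuse it, so we pay the startup cost once but never leak nodes.
func TestACT(t *testing.T) {
	c := newTestCluster(t, 3)
	defer c.Shutdown() // the missing step: three nodes used to outlive the test

	t.Run("grant", func(t *testing.T) { _ = c })
	t.Run("revoke", func(t *testing.T) { _ = c })
}
```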
* metrics/prometheus: added prometheus http server and metrics collector
* metrics/prometheus: minor cleanups
* metrics/prometheus: named keys instead of name in tag
* metrics/prometheus: minor typo cleanups, sorted report
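A rough sketch of what such an exporter does: serve the metrics registry in
Prometheus' text exposition format with sorted, named keys. The gauges map
and the /debug/metrics/prometheus path below are assumptions for the sketch,
not the package's actual API:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"sort"
)

// gauges is a stand-in for the metrics registry; the real collector walks
// the go-metrics registry and translates counters, gauges, timers, etc.
var gauges = map[string]float64{
	"chain_head_block": 1234567,
	"p2p_peers":        25,
}

// handler renders the registry in Prometheus' text exposition format,
// reporting metrics in a stable, sorted order.
func handler(w http.ResponseWriter, r *http.Request) {
	names := make([]string, 0, len(gauges))
	for name := range gauges {
		names = append(names, name)
	}
	sort.Strings(names)
	for _, name := range names {
		fmt.Fprintf(w, "# TYPE %s gauge\n%s %v\n", name, name, gauges[name])
	}
}

func main() {
	http.HandleFunc("/debug/metrics/prometheus", handler)
	log.Fatal(http.ListenAndServe("127.0.0.1:6060", nil))
}
```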
* accounts, core, internal, node: Add support for smartcard wallets
* accounts, internal: Changes in response to review
* vendor: pull in missing go-ecdh library
* accounts/scwallet, console: user friendly card opening
* accounts/scwallet: ordered wallets, tighter events, derivation logs
* accounts, console: friendly card errors, support pin unblock
* accounts/scwallet: fix crypto API change
* accounts/scwallet: rebase and update
* Fix some linter issues
* Remove the direct dependency on libpcsclite
Instead, use a Go library that communicates with pcscd over a socket.
Also update the changes introduced by @gravityblast since this PR's
inception
* Temporary fix to the APDU status call
* fix wallet status update
This is a temporary fix; better checks need to
be performed once the whole process has been
validated.
* Fix key derivation
* Add some documentation
* Update a comment to reflect the workings of the updated system
* Vendor keycard-go/derivationpath
* Formatting fixes
* Add instructions on how to install the card
* Achieve full transaction signature+sending
* PK derivation has to be supported by the card
* Fix linter issues
* Upgrade to keycard app v2.1.1
* Set gballet as codeowner of the smartcard wallet dir
* fix unnecessary condition linter warning
* refuse to overwrite the master key of a previously initialized card
* refresh the account list when initializing the card
* Update the card preparation instructions based on review feedback
* 'sanitize' JSON input
Co-Authored-By: gballet <gballet@gmail.com>
* Apply suggestions from code review
Co-Authored-By: gballet <gballet@gmail.com>
* fix a serialization error
* more review feedback
* More review feedback
* Can now specify the number of empty accounts to derive
* Fix rebase error: include norm package
* Update bip-39 ref and remove ebfe/scard from vendor
* Add missing dependency
This PR fixes this by moving domain.ChainId from the map's initializer down to a separate if statement which, like the rest of the fields, checks the existence of ChainId's value before adding it. I've also included a new test to demonstrate the issue.
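A sketch of the pattern described above, using a TypedDataDomain-like struct
modelled on the description (the field names here are assumptions, not the
real signer types):

```go
package main

import (
	"fmt"
	"math/big"
)

// TypedDataDomain mirrors the shape of an EIP-712 domain for illustration.
type TypedDataDomain struct {
	Name              string
	Version           string
	ChainId           *big.Int
	VerifyingContract string
}

// Map builds the domain map, adding chainId only when it is actually set,
// mirroring how the other optional fields are handled.
func (d *TypedDataDomain) Map() map[string]interface{} {
	m := map[string]interface{}{} // chainId no longer placed in the initializer
	if d.ChainId != nil {
		m["chainId"] = d.ChainId
	}
	if d.Name != "" {
		m["name"] = d.Name
	}
	if d.Version != "" {
		m["version"] = d.Version
	}
	if d.VerifyingContract != "" {
		m["verifyingContract"] = d.VerifyingContract
	}
	return m
}

func main() {
	d := &TypedDataDomain{Name: "MyDapp", Version: "1"} // no chain id set
	fmt.Println(d.Map())                                // chainId key is absent
}
```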
This resolves a minor issue where neighbors responses containing fewer
than 16 nodes would bump the failure counter, removing the node. One
situation where this can happen is a private deployment where the total
number of extant nodes is less than 16.
Issue found by @jsying.
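A sketch of the adjusted failure accounting: only a completely empty
neighbors reply counts against the peer, so a small network returning fewer
than the bucket size of 16 nodes is no longer penalized. Types and names
below are placeholders, not the discovery package's API:

```go
package main

import "fmt"

// node and findnode are stand-ins for the discovery table and RPC; only the
// failure accounting reflects the change described above.
type node struct{ fails int }

func findnode(n *node, respond func() []string) []string {
	results := respond()
	// Only an empty reply bumps the failure counter; a private deployment
	// with fewer than 16 reachable nodes legitimately returns a short list.
	if len(results) == 0 {
		n.fails++
	} else {
		n.fails = 0
	}
	return results
}

func main() {
	peer := &node{}
	findnode(peer, func() []string { return []string{"enode://a", "enode://b"} })
	fmt.Println("failures after partial reply:", peer.fails) // stays 0
}
```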