* p2p/discover: remove unused function
* p2p/enode: use localItemKey for local sequence number
I added localItemKey for this purpose in #18963, but then
forgot to actually use it. This changes the database layout
yet again and requires bumping the version number.
* node: require LocalAppData variable
This avoids path inconsistencies on Windows XP.
Hat tip to @MicahZoltu for catching this so quickly.
* node: fix typo
* travis, build: switch to NDK 19b, fix gomobile builds
* travis, build: move NDK into its final bundle location
* travis: disable Android build on PRs once again
This change
- implements concurrent LES request serving even for a single peer.
- replaces the request cost estimation method with a cost table based on
benchmarks, which gives much more consistent results. Until now the
allowed number of light peers was just a guess, which probably contributed
a lot to the fluctuating quality of the available service. Everything related
to request cost is implemented in a single object, the 'cost tracker'. It
uses a fixed cost table with a global 'correction factor' (see the sketch
after this list). Benchmark code is included and can be run at any time to
adapt costs to low-level implementation changes.
- reimplements flowcontrol.ClientManager in a cleaner and more efficient
way, with added capabilities: There is now control over bandwidth, which
allows using the flow control parameters for client prioritization.
Target utilization over 100 percent is now supported to model concurrent
request processing. Total serving bandwidth is reduced during block
processing to prevent database contention.
- implements an RPC API for the LES servers allowing server operators to
assign priority bandwidth to certain clients and change prioritized
status even while the client is connected. The new API is meant for
cases where server operators charge for LES using an off-protocol mechanism.
- adds a unit test for the new client manager.
- adds an end-to-end test using the network simulator that tests bandwidth
control functions through the new API.
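For illustration, a minimal sketch of the cost-table idea: a benchmark-derived per-request base cost scaled by one global correction factor. The names (costTracker, reqType, corrFactor) are assumptions for the sketch and do not correspond to the actual les package API.

```go
// Illustrative sketch only, not the actual les flow control code.
package main

import "fmt"

type reqType int

const (
	getBlockHeaders reqType = iota
	getBlockBodies
	getReceipts
)

// costTracker keeps a fixed cost table filled in from benchmark runs and a
// global correction factor that scales every entry at once, so measured
// serving times can adjust the estimates without redoing the benchmarks.
type costTracker struct {
	baseCost   map[reqType]uint64
	corrFactor float64
}

// cost estimates the serving cost of n requests of the given type.
func (ct *costTracker) cost(r reqType, n uint64) uint64 {
	return uint64(float64(ct.baseCost[r]*n) * ct.corrFactor)
}

func main() {
	ct := &costTracker{
		baseCost:   map[reqType]uint64{getBlockHeaders: 150, getBlockBodies: 600},
		corrFactor: 1.2, // raised when real serving times exceed the table values
	}
	fmt.Println(ct.cost(getBlockHeaders, 10)) // 1800
}
```

Scaling the whole table with a single factor keeps the estimates consistent with each other while letting real serving measurements adjust them without re-running the benchmarks.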
* swarm/pss: fix data race on HandshakeController.symKeyIndex
The HandshakeController.symKeyIndex map was accessed concurrently.
Due to insufficient test coverage, the race is not detected every time.
However, running TestClientHandshake 100 times seems to be enough to
reproduce it.
Note: I've chosen HandshakeController.lock to protect
HandshakeController.symKeyIndex, as that map was already guarded by this
lock in a few functions (see the sketch after the list below).
Additionally:
- removed unused testStore
- enabled tests in handshake_test.go as they pass
- removed code duplication by adding getSymKey()
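A minimal sketch of the locking pattern and the getSymKey() helper mentioned above; the field names follow the description, but the entry type is a simplified stand-in rather than the actual pss code.

```go
// Illustrative sketch only, not the actual swarm/pss types.
package pss

import "sync"

// keyEntry is a simplified stand-in for the value type stored per symmetric key id.
type keyEntry struct{}

type HandshakeController struct {
	lock        sync.Mutex
	symKeyIndex map[string]*keyEntry
}

// getSymKey centralizes the previously duplicated lookups and keeps every
// access to symKeyIndex behind HandshakeController.lock.
func (c *HandshakeController) getSymKey(symkeyid string) (*keyEntry, bool) {
	c.lock.Lock()
	defer c.lock.Unlock()
	entry, ok := c.symKeyIndex[symkeyid]
	return entry, ok
}
```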
* swarm/pss: fix a data race on HandshakeController.keyC
* swarm/pss: fix data races on Pss.symKeyPool
* swarm/storage/mock: implement listings methods for mem and rpc stores
* swarm/storage/mock/rpc: add comments and newTestStore helper function
* swarm/storage/mock/mem: add missing comments
* swarm/storage/mock: add comments to new types and constants
* swarm/storage/mock/db: implement listings for mock/db global store
* swarm/storage/mock/test: add comments for MockStoreListings
* swarm/storage/mock/explorer: initial implementation
* cmd/swarm/global-store: add chunk explorer
* cmd/swarm/global-store: add chunk explorer tests
* swarm/storage/mock/explorer: add tests
* swarm/storage/mock/explorer: add swagger api definition
* swarm/storage/mock/explorer: not-zero test values for invalid addr and key
* swarm/storage/mock/explorer: test wildcard cors origin
* swarm/storage/mock/db: renames based on Fabio's suggestions
* swarm/storage/mock/explorer: add more comments to testHandler function
* cmd/swarm/global-store: terminate subprocess with Kill in tests
* swarm/storage/localstore: close localstore in two tests
* swarm/storage/localstore: fix a possible deadlock in tests
* swarm/storage/localstore: re-enable pull subs tests for travis race
* swarm/storage/localstore: stop sending to errChan on context done in tests
* swarm/storage/localstore: better want check in readPullSubscriptionBin
* swarm/storage/localstore: protect chunk put with addr lock in tests
* swarm/storage/localstore: wait for gc and writeGCSize workers on Close
* swarm/storage/localstore: more correct testDB_collectGarbageWorker
* swarm/storage/localstore: set DB Close timeout to 5s
The current loop continuation condition is always true, since a uint8
value can never exceed 255 (its maximum value), so the check against it
never fails. Since the loop starts at 1, termination can instead be
guaranteed by exiting once the value overflows to 0 (see the sketch below).
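For illustration, a minimal sketch of the wraparound-based termination, assuming the old condition was of the form `i <= 255` (which can never be false for a uint8); the variable names are not from the original code.

```go
package main

import "fmt"

func main() {
	// Before (illustrative): the loop would spin forever, because i wraps
	// from 255 back to 0 before the condition could ever fail.
	//
	//   for i := uint8(1); i <= 255; i++ { ... }

	// After: start at 1 and stop once the counter overflows back to 0,
	// which visits every value 1..255 exactly once.
	count := 0
	for i := uint8(1); i != 0; i++ {
		count++
	}
	fmt.Println(count) // 255
}
```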
* swarm/storage: increase mget timeout in common_test.go
TestDbStoreCorrect_1k sometimes timed out with -race on Travis.
--- FAIL: TestDbStoreCorrect_1k (24.63s)
common_test.go:194: testStore failed: timed out after 10s
* swarm: remove unused vars from TestSnapshotSyncWithServer
nodeCount and chunkCount are returned from setupSim, and those are the
values we use.
* swarm: move race/norace helpers from stream to testutil
As we will need to use the flag in other packages, too.
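A minimal sketch of the usual build-tag pair behind such a helper; the testutil package path and the RaceEnabled name are assumptions for the sketch, not necessarily the actual API.

```go
// testutil/race.go: compiled only when the race detector is enabled.

// +build race

package testutil

// RaceEnabled reports whether the binary was built with -race.
const RaceEnabled = true
```

```go
// testutil/norace.go: the default when -race is not present.

// +build !race

package testutil

const RaceEnabled = false
```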
* swarm: refactor TestSwarmNetwork case
Extract long running test cases for better visibility.
* swarm/network: skip TestSyncingViaGlobalSync with -race
As it panics on Travis.
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x7e351b]
* swarm: run TestSwarmNetwork with fewer nodes with -race
Otherwise we always get a test failure with `network_test.go:374:
context deadline exceeded`, even with a raised `Timeout`.
* swarm/network: run TestDeliveryFromNodes with fewer nodes with -race
Test on Travis times out with 8 or more nodes if -race flag is present.
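A hedged sketch of how such a reduction can be wired up, assuming a testutil.RaceEnabled flag like the one sketched earlier; the function name, import path, and node counts are illustrative only.

```go
package stream

import (
	"testing"

	"github.com/ethereum/go-ethereum/swarm/testutil" // assumed home of RaceEnabled
)

func TestDeliveryFromNodesSketch(t *testing.T) {
	nodes := 16 // illustrative default, not the real test's node count
	if testutil.RaceEnabled {
		// The race detector multiplies memory use and slows scheduling,
		// so keep the simulated network small enough for the Travis workers.
		nodes = 4
	}
	_ = nodes
	// ... set up and run the simulation with `nodes` nodes ...
}
```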
* swarm/network: smaller node count for discovery tests with -race
TestDiscoveryPersistenceSimulationSimAdapters failed on Travis with
`-race` flag present. The failure was due to excessive memory usage
coming from the ThreadSanitizer runtime used by the race detector. Using
a smaller node count resolves the issue.
=== RUN TestDiscoveryPersistenceSimulationSimAdapter
==7227==ERROR: ThreadSanitizer failed to allocate 0x80000 (524288) bytes of clock allocator (error code: 12)
FATAL: ThreadSanitizer CHECK failed: ./gotsan.cc:6976 "((0 && "unable to mmap")) != (0)" (0x0, 0x0)
FAIL github.com/ethereum/go-ethereum/swarm/network/simulations/discovery 804.826s
* swarm/network: run TestFileRetrieval with fewer nodes with -race
Otherwise we get a failure due to excessive memory usage, as the
ThreadSanitizer runtime cannot allocate more memory.
=== RUN TestFileRetrieval
==7366==ERROR: ThreadSanitizer failed to allocate 0x80000 (524288) bytes of clock allocator (error code: 12)
FATAL: ThreadSanitizer CHECK failed: ./gotsan.cc:6976 "((0 && "unable to mmap")) != (0)" (0x0, 0x0)
FAIL github.com/ethereum/go-ethereum/swarm/network/stream 155.165s
* swarm/network: run TestRetrieval with fewer nodes with -race
Otherwise we get a failure due to excessive memory usage, as the
ThreadSanitizer runtime cannot allocate more memory ("ThreadSanitizer
failed to allocate").
* swarm/network: skip flaky TestGetSubscriptionsRPC on Travis w/ -race
Test fails a lot with something like:
streamer_test.go:1332: Real subscriptions and expected amount don't match; real: 0, expected: 20
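A sketch of the guard used for such a skip; the TRAVIS environment variable is set by Travis CI, while testutil.RaceEnabled is the assumed flag from the earlier sketch.

```go
package stream

import (
	"os"
	"testing"

	"github.com/ethereum/go-ethereum/swarm/testutil" // assumed home of RaceEnabled
)

func TestGetSubscriptionsRPCSketch(t *testing.T) {
	// Skip only in the environment where the test is known to be flaky.
	if os.Getenv("TRAVIS") == "true" && testutil.RaceEnabled {
		t.Skip("flaky with -race on Travis")
	}
	// ... the actual subscription checks would follow here ...
}
```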
* swarm/storage: skip TestDB_SubscribePull* tests on Travis w/ -race
Travis just hangs...
ok github.com/ethereum/go-ethereum/swarm/storage/feed/lookup 1.307s
keepalive
keepalive
keepalive
or panics after a while.
Without these tests the race detector job is now stable. Let's
investigate these tests in a separate issue:
https://github.com/ethersphere/go-ethereum/issues/1245
* swarm/network: WIP Span request span until delivery and put
* swarm/storage: Introduce new trace across single fetcher lifespan
* swarm/network: Put span ids for sendpriority in context value
* swarm: Add global span store in tracing
* swarm/tracing: Add context key constants
* swarm/tracing: Add comments
* swarm/storage: Remove redundant fix for filestore
* swarm/tracing: Elaborate constants comments
* swarm/network, swarm/storage, swarm/tracing: Minor cleanup