cmd/swarm/swarm-smoke: improve smoke tests (#1337)
swarm/network: remove dead code (#1339)
swarm/network: remove FetchStore and SyncChunkStore in favor of NetStore (#1342)
* swarm/network: fix data races in TestInitialPeersMsg test
* swarm/network: add Kademlia.Saturation method with lock
* swarm/network: add Hive.Peer method to safely retrieve a bzz peer
* swarm/network: remove duplicate comment
* p2p/testing: prevent goroutine leak in ProtocolTester
* swarm/network: fix data race in newBzzBaseTesterWithAddrs
* swarm/network: fix goroutine leaks in testInitialPeersMsg
* swarm/network: raise number of peer check attempts in testInitialPeersMsg
* swarm/network: use Hive.Peer in Hive.PeerInfo function
* swarm/network: reduce the scope of mutex lock in newBzzBaseTesterWithAddrs
* swarm/storage: disable TestCleanIndex with race detector
* swarm/storage/feed/lookup: Add context handling/forwarding
* swarm/storage/feed/lookup: Add test to catch bad hint
* swarm/storage/feed/lookup: Added context cancellation test
* swarm/api: fix file descriptor leak in NewTestSwarmServer
Swarm storage (localstore) was not closed. That resulted in a
"too many open files" error if `TestClientUploadDownloadRawEncrypted`
was run with `-count 1000`.
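A minimal sketch of the fix, with a hypothetical newTestServer helper standing in for NewTestSwarmServer; the point is that whatever opens the localstore also closes it in teardown:

```go
package api_test

import "testing"

// closer is whatever must be released after a test run, e.g. the localstore
// backing the test swarm server.
type closer interface {
	Close() error
}

// newTestServer is a hypothetical stand-in for NewTestSwarmServer: it returns
// the server together with a cleanup that closes the underlying localstore.
func newTestServer(t *testing.T) (srv closer, cleanup func()) {
	t.Helper()
	// ... open the localstore and start the HTTP test server (omitted) ...
	return srv, func() {
		if srv != nil {
			srv.Close() // releases file descriptors; without it -count 1000 hits the fd limit
		}
	}
}

func TestUploadDownloadRaw(t *testing.T) {
	_, cleanup := newTestServer(t)
	defer cleanup()
	// ... upload/download assertions ...
}
```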
* cmd/swarm: speed up StartNewNodes() by parallelization
Reduce cluster startup time from 13s to 7s.
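A rough sketch of the parallelization, with a placeholder node type standing in for the cmd/swarm cluster node:

```go
package cluster

import "sync"

// node stands in for the cmd/swarm test cluster node; start is hypothetical.
type node struct{}

func (n *node) start() error { return nil }

// startNodesParallel launches every node in its own goroutine instead of one
// after another, waits for all of them, and returns the first error seen.
func startNodesParallel(nodes []*node) error {
	var wg sync.WaitGroup
	errs := make(chan error, len(nodes))
	for _, n := range nodes {
		wg.Add(1)
		go func(n *node) {
			defer wg.Done()
			errs <- n.start()
		}(n)
	}
	wg.Wait()
	close(errs)
	for err := range errs {
		if err != nil {
			return err
		}
	}
	return nil
}
```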
* swarm/api: disable flaky TestClientUploadDownloadRawEncrypted with -race
* swarm/storage: disable flaky TestLDBStoreCollectGarbage (-race)
With race detection turned on the disabled cases often fail with:
"ldbstore_test.go:535: expected surplus chunk 150 to be missing, but got no error"
* cmd/swarm: fix process leak in TestACT and TestSwarmUp
On each test run we started 3 nodes but did not terminate them, so
those 3 nodes kept eating up 1.2GB (3.4GB with -race) after test
completion.
6b6c4d1c27 changed how we start clusters to speed up tests: the
changeset merged test cases together and introduced a global cluster,
but "forgot" about termination. Let's get rid of the global cluster so
that termination has a clear owner (at some cost in startup time),
while subtests keep using the same cluster.
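A sketch of the resulting ownership, with hypothetical helpers in place of the real cmd/swarm ones:

```go
package main_test

import "testing"

// testCluster and newTestCluster stand in for the cmd/swarm test cluster helpers.
type testCluster struct{ size int }

func newTestCluster(t *testing.T, size int) *testCluster {
	t.Helper()
	// ... start `size` swarm nodes (omitted) ...
	return &testCluster{size: size}
}

// Shutdown terminates every node so nothing outlives the test.
func (c *testCluster) Shutdown() {}

func TestACT(t *testing.T) {
	cluster := newTestCluster(t, 3)
	defer cluster.Shutdown() // clear owner of termination

	t.Run("subtest", func(t *testing.T) {
		_ = cluster // subtests keep sharing the same cluster
	})
}
```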
* swarm/network: propagate span with ctx
* swarm/network: try to stop stream.send.request spans on time
* swarm/storage: add chunk ref as a log to netstore.fetcher span
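Swarm's tracing sits on top of opentracing; a sketch of the pattern behind these three items (operation name and log field are illustrative, not the exact swarm ones):

```go
package storage

import (
	"context"

	opentracing "github.com/opentracing/opentracing-go"
	olog "github.com/opentracing/opentracing-go/log"
)

// fetch derives a span from the incoming context so it joins the caller's
// trace, records the chunk reference as a span log, and finishes the span as
// soon as the request is done.
func fetch(ctx context.Context, ref string) error {
	span, ctx := opentracing.StartSpanFromContext(ctx, "netstore.fetcher")
	defer span.Finish() // stop the span on time, when the request ends

	span.LogFields(olog.String("ref", ref)) // which chunk this span covers

	return doFetch(ctx, ref) // pass ctx on so downstream spans share the trace
}

func doFetch(ctx context.Context, ref string) error { return nil } // placeholder
```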
* swarm/storage/mock: implement listings methods for mem and rpc stores
* swarm/storage/mock/rpc: add comments and newTestStore helper function
* swarm/storage/mock/mem: add missing comments
* swarm/storage/mock: add comments to new types and constants
* swarm/storage/mock/db: implement listings for mock/db global store
* swarm/storage/mock/test: add comments for MockStoreListings
* swarm/storage/mock/explorer: initial implementation
* cmd/swarm/global-store: add chunk explorer
* cmd/swarm/global-store: add chunk explorer tests
* swarm/storage/mock/explorer: add tests
* swarm/storage/mock/explorer: add swagger api definition
* swarm/storage/mock/explorer: not-zero test values for invalid addr and key
* swarm/storage/mock/explorer: test wildcard cors origin
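The wildcard CORS check boils down to something like this; the path and handler construction are hypothetical stand-ins for the explorer's:

```go
package explorer_test

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestWildcardCORSOrigin(t *testing.T) {
	// newHandler is a placeholder for the explorer handler configured with
	// a wildcard ("*") allowed origin.
	srv := httptest.NewServer(newHandler())
	defer srv.Close()

	req, err := http.NewRequest(http.MethodGet, srv.URL+"/api/nodes", nil)
	if err != nil {
		t.Fatal(err)
	}
	req.Header.Set("Origin", "http://example.com") // any origin must be accepted

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()

	if got := resp.Header.Get("Access-Control-Allow-Origin"); got != "*" {
		t.Errorf("Access-Control-Allow-Origin = %q, want %q", got, "*")
	}
}

func newHandler() http.Handler { return http.NotFoundHandler() } // placeholder
```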
* swarm/storage/mock/db: renames based on Fabio's suggestions
* swarm/storage/mock/explorer: add more comments to testHandler function
* cmd/swarm/global-store: terminate subprocess with Kill in tests
* swarm/storage/localstore: close localstore in two tests
* swarm/storage/localstore: fix a possible deadlock in tests
* swarm/storage/localstore: re-enable pull subs tests for travis race
* swarm/storage/localstore: stop sending to errChan on context done in tests
* swarm/storage/localstore: better want check in readPullSubscriptionBin
* swarm/storage/localstore: protect chunk put with addr lock in tests
* swarm/storage/localstore: wait for gc and writeGCSize workers on Close
* swarm/storage/localstore: more correct testDB_collectGarbageWorker
* swarm/storage/localstore: set DB Close timeout to 5s
The current loop continuation condition is always true, as a uint8 can
never exceed 255 (its maximum value). Since the loop starts at 1,
termination is guaranteed once the value overflows to 0.
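A self-contained illustration of that overflow behaviour (not the swarm code itself):

```go
package main

import "fmt"

func main() {
	// `i <= 255` is always true for a uint8, so it can never end the loop on
	// its own. Starting at 1 and relying on wrap-around, the loop visits
	// 1..255 and exits only when i overflows back to 0.
	count := 0
	for i := uint8(1); i != 0; i++ {
		count++
	}
	fmt.Println(count) // prints 255
}
```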
* swarm/storage: increase mget timeout in common_test.go
TestDbStoreCorrect_1k sometimes timed out with -race on Travis.
--- FAIL: TestDbStoreCorrect_1k (24.63s)
common_test.go:194: testStore failed: timed out after 10s
* swarm: remove unused vars from TestSnapshotSyncWithServer
nodeCount and chunkCount are returned from setupSim, and those are the
values we use.
* swarm: move race/norace helpers from stream to testutil
As we will need to use the flag in other packages, too.
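The helpers are a pair of build-tagged files exposing a single constant, roughly like this sketch (the exact names in testutil may differ slightly):

```go
// File swarm/testutil/race.go, compiled only when the race detector is on:

// +build race

package testutil

// RaceEnabled reports whether the binary was built with -race.
const RaceEnabled = true
```

A mirror norace.go guarded by `// +build !race` sets the constant to false, so tests can branch on it without any runtime flag.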
* swarm: refactor TestSwarmNetwork case
Extract long running test cases for better visibility.
* swarm/network: skip TestSyncingViaGlobalSync with -race
As it panics on Travis.
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x7e351b]
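With that flag, the -race adjustments in this and the following items come down to either skipping a test or shrinking it; a sketch assuming the moved helper exposes testutil.RaceEnabled (counts and helper names are illustrative):

```go
package stream_test

import (
	"testing"

	"github.com/ethereum/go-ethereum/swarm/testutil"
)

func TestSyncingViaGlobalSync(t *testing.T) {
	if testutil.RaceEnabled {
		t.Skip("flaky with -race on Travis")
	}
	// ... full test ...
}

func TestDeliveryFromNodes(t *testing.T) {
	nodeCount := 16
	if testutil.RaceEnabled {
		nodeCount = 4 // ThreadSanitizer runs out of memory with more nodes
	}
	runDeliveryTest(t, nodeCount) // hypothetical helper
}

func runDeliveryTest(t *testing.T, nodes int) { /* ... */ }
```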
* swarm: run TestSwarmNetwork with fewer nodes with -race
As otherwise we always get a test failure with `network_test.go:374:
context deadline exceeded`, even with a raised `Timeout`.
* swarm/network: run TestDeliveryFromNodes with fewer nodes with -race
Test on Travis times out with 8 or more nodes if -race flag is present.
* swarm/network: smaller node count for discovery tests with -race
TestDiscoveryPersistenceSimulationSimAdapters failed on Travis with
`-race` flag present. The failure was due to extensive memory usage,
coming from the CGO runtime. Using a smaller node count resolves the
issue.
=== RUN TestDiscoveryPersistenceSimulationSimAdapter
==7227==ERROR: ThreadSanitizer failed to allocate 0x80000 (524288) bytes of clock allocator (error code: 12)
FATAL: ThreadSanitizer CHECK failed: ./gotsan.cc:6976 "((0 && "unable to mmap")) != (0)" (0x0, 0x0)
FAIL github.com/ethereum/go-ethereum/swarm/network/simulations/discovery 804.826s
* swarm/network: run TestFileRetrieval with fewer nodes with -race
Otherwise we get a failure due to extensive memory usage, as the CGO
runtime cannot allocate more bytes.
=== RUN TestFileRetrieval
==7366==ERROR: ThreadSanitizer failed to allocate 0x80000 (524288) bytes of clock allocator (error code: 12)
FATAL: ThreadSanitizer CHECK failed: ./gotsan.cc:6976 "((0 && "unable to mmap")) != (0)" (0x0, 0x0)
FAIL github.com/ethereum/go-ethereum/swarm/network/stream 155.165s
* swarm/network: run TestRetrieval with fewer nodes with -race
Otherwise we get a failure due to extensive memory usage, as the CGO
runtime cannot allocate more bytes ("ThreadSanitizer failed to
allocate").
* swarm/network: skip flaky TestGetSubscriptionsRPC on Travis w/ -race
Test fails a lot with something like:
streamer_test.go:1332: Real subscriptions and expected amount don't match; real: 0, expected: 20
* swarm/storage: skip TestDB_SubscribePull* tests on Travis w/ -race
Travis just hangs...
ok github.com/ethereum/go-ethereum/swarm/storage/feed/lookup 1.307s
keepalive
keepalive
keepalive
or panics after a while.
Without these tests the race detector job is now stable. Let's
investigate these tests in a separate issue:
https://github.com/ethersphere/go-ethereum/issues/1245
* swarm/network: WIP Span request span until delivery and put
* swarm/storage: Introduce new trace across single fetcher lifespan
* swarm/network: Put span ids for sendpriority in context value
* swarm: Add global span store in tracing
* swarm/tracing: Add context key constants
* swarm/tracing: Add comments
* swarm/storage: Remove redundant fix for filestore
* swarm/tracing: Elaborate constants comments
* swarm/network, swarm/storage, swarm/tracing: Minor cleanup
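A rough sketch of the context-key and global span-store pattern these tracing items describe; names and details are illustrative, not the actual swarm/tracing API:

```go
package tracing

import (
	"context"
	"sync"

	opentracing "github.com/opentracing/opentracing-go"
)

// contextKey keeps span ids from colliding with other context values.
type contextKey string

// spanIDKey is an illustrative constant; the real package defines its own keys.
const spanIDKey contextKey = "span_id"

// A global store keeps spans that outlive a single function call, e.g. a
// send span that is only finished once delivery completes.
var (
	mu    sync.Mutex
	spans = make(map[string]opentracing.Span)
)

// SaveSpan records an open span under id and puts that id into the context.
func SaveSpan(ctx context.Context, id string, span opentracing.Span) context.Context {
	mu.Lock()
	spans[id] = span
	mu.Unlock()
	return context.WithValue(ctx, spanIDKey, id)
}

// FinishSpan finishes and forgets the span whose id travelled in ctx.
func FinishSpan(ctx context.Context) {
	id, ok := ctx.Value(spanIDKey).(string)
	if !ok {
		return
	}
	mu.Lock()
	span, found := spans[id]
	delete(spans, id)
	mu.Unlock()
	if found {
		span.Finish()
	}
}
```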