* build: add support for different package and binary names
* build: bump up copyright date
* build: change default PackageName to empty string
* build, internal, swarm: enhance build/release process
* build: hack ethereum-swarm as a "depends" in deb package
* build/ci: remove redundant variables
* build, cmd, mobile, params, swarm: remove VERSION file; rename Version to VersionMeta
* internal: remove VERSION() method which reads VERSION file
* build: fix VersionFilePath to Version
* Makefile: remove clean_go_build_cache.sh until it works
* Makefile: revert removal of clean_go_build_cache.sh
Prior to this change, when geth was started with `geth -dev -rpc`,
it would report a network id of `1` in response to the `net_version` RPC
request. But the actual network id it used to verify transactions
was `1337`.
This change makes geth respond with `1337` to the `net_version` RPC
request when it is started with `geth -dev -rpc`.
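For illustration, a minimal sketch of checking the reported network id via the standard `net_version` call, assuming a dev node started locally with `geth -dev -rpc` and listening on the default HTTP endpoint (the program below is illustrative, not part of this change):

```go
package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Connect to the local dev node's HTTP endpoint (default port assumed).
	client, err := rpc.Dial("http://127.0.0.1:8545")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// net_version returns the network id as a decimal string.
	var version string
	if err := client.Call(&version, "net_version"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("network id:", version) // after this change: 1337 in dev mode
}
```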
* cmd,node,rpc: add allowedHosts to prevent dns rebinding attacks
* p2p,node: Fix bug with dumpconfig introduced in commit 54aeb8e4c0bb9f0e7a6c67258af67df3b266af3d
* rpc: add wildcard support for rpcallowedhosts + go fmt
* cmd/geth, cmd/utils, node, rpc: ignore direct ip(v4/6) addresses in rpc virtual hostnames check
* http, rpc, utils: make vhosts into map, address review concerns
* node: change log messages to use geth standard (not sprintf)
* rpc: fix spelling
This commit affects p2p/discv5 "topic discovery" by running it on
the same UDP port where the old discovery works. This is realized
by giving an "unhandled" packet channel to the old v4 discovery
packet handler, to which all invalid packets are sent. These packets
are then processed by v5. v5 packets are always invalid when
interpreted by v4 and vice versa; this is ensured by adding one
to the first byte of the packet hash in v5 packets.
DiscoveryV5Bootnodes is also changed to point to new bootnodes
that implement the changed packet format with the modified hash.
Existing and new v5 bootnodes are currently running on different
ports.
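As an illustration of the disambiguation rule, here is a sketch (not geth's actual dispatch code; the helper name and constant are hypothetical, and the hash layout is assumed from the description above):

```go
package sketch

import (
	"bytes"

	"github.com/ethereum/go-ethereum/crypto"
)

// Discovery packets are assumed to start with a 32-byte hash over the rest
// of the packet.
const macSize = 32

// looksLikeV5 mirrors the scheme described above: a v4 packet's leading
// hash is keccak256 of the remaining bytes, while a v5 packet uses that
// same hash with one added to its first byte. A packet that fails the v4
// check is forwarded to the v5 handler via the "unhandled" channel.
func looksLikeV5(packet []byte) bool {
	if len(packet) <= macSize {
		return false
	}
	want := crypto.Keccak256(packet[macSize:])
	if bytes.Equal(packet[:macSize], want) {
		return false // valid v4 packet, handled by v4
	}
	want[0]++ // v5 bumps the first hash byte by one
	return bytes.Equal(packet[:macSize], want)
}
```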
* dashboard: footer, deep state update
* dashboard: resolve asset path
* dashboard: remove bundle.js
* dashboard: prevent state update on every reconnection
* dashboard: fix linter issue
* dashboard, cmd: minor UI fix, include commit hash
* remove geth binary
* dashboard: gitCommit renamed to commit
* dashboard: move the geth version to the right, make commit optional
* dashboard: commit limited to 7 characters
* dashboard: limit commit length on client side
* dashboard: run go generate
* common/fdlimit: Move fdlimit files to separate package
When go-ethereum is used as a library, the calling program needs to set
the FD limit itself.
This commit extracts the fdlimit files into a separate package so they can
be used outside of go-ethereum (a usage sketch follows this list of commits).
* common/fdlimit: Remove FdLimit from functions signature
* common/fdlimit: Rename fdlimit functions
* cmd/utils: Add check on hard limit, skip test if below target
* cmd/utils: Cross platform compatible fd limit test
* cmd/utils: Remove syscall.Rlimit in test
* cmd/utils: comment fd utility method
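A minimal usage sketch for an embedding program. Function names and signatures are assumptions based on the package around the time of this change and may differ in other versions:

```go
package main

import (
	"log"

	"github.com/ethereum/go-ethereum/common/fdlimit"
)

func main() {
	// Query the hard limit allowed by the OS and raise the process limit
	// towards it. Signatures are assumptions and may vary by version.
	max, err := fdlimit.Maximum()
	if err != nil {
		log.Fatalf("failed to read fd hard limit: %v", err)
	}
	if err := fdlimit.Raise(uint64(max)); err != nil {
		log.Fatalf("failed to raise fd limit: %v", err)
	}
}
```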
* cmd, consensus, eth: split ethash related config to it own
* eth, consensus: minor polish
* eth, consensus, console: compress pow testing config fields into a single one
* consensus, eth: document pow mode
* cmd, consensus, core, miner: instant-tx clique for --dev
* cmd, consensus, clique: support configurable --dev block times
* cmd, core: allow --dev to use persistent storage too
* ethdb: add Putter interface and Has method
* ethdb: improve docs and add IdealBatchSize
* ethdb: remove memory batch lock
Batches are not safe for concurrent use.
* core: use ethdb.Putter for Write* functions
This covers the easy cases.
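A sketch of the pattern these two changes enable (the `writeEntry` helper is hypothetical; `ethdb.Putter` is shown as it existed at the time of this change): because both the database and its batches satisfy the interface, the same write helper can target either one.

```go
package sketch

import "github.com/ethereum/go-ethereum/ethdb"

// At the time of this change, Putter was roughly:
//
//	type Putter interface {
//		Put(key []byte, value []byte) error
//	}
//
// writeEntry accepts any Putter, so callers can pass the database for a
// direct write, or a batch to defer the write until Batch.Write is called.
func writeEntry(db ethdb.Putter, key, value []byte) error {
	return db.Put(key, value)
}
```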
* core/state: simplify StateSync
* trie: optimize local node check
* ethdb: add ValueSize to Batch
* core: optimize HasHeader check
This avoids one random database read to get the block number. For many uses
of HasHeader, the expectation is that it's actually there. Using Has
avoids a load + decode of the value.
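Roughly, the presence check looks like the sketch below (hypothetical helper; real keys carry schema prefixes): `Has` answers existence without reading or decoding the stored value.

```go
package sketch

import "github.com/ethereum/go-ethereum/ethdb"

// hasEntry reports whether a key exists without loading its value,
// avoiding the read + decode that a Get-based check would incur.
func hasEntry(db ethdb.Database, key []byte) bool {
	ok, err := db.Has(key)
	return err == nil && ok
}
```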
* core: write fast sync block data in batches
Collect writes into batches up to the ideal size instead of issuing many
small, concurrent writes.
* eth/downloader: commit larger state batches
Collect nodes into a batch up to the ideal size instead of committing
whenever a node is received.
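Both of the batching changes above follow the same pattern, sketched here with hypothetical names (`writeBatched`, `kv`): accumulate writes in a batch and flush once the pending data reaches `ethdb.IdealBatchSize`.

```go
package sketch

import "github.com/ethereum/go-ethereum/ethdb"

type kv struct{ key, value []byte }

// writeBatched collects writes and flushes them in ideally sized chunks
// instead of issuing many small, concurrent writes.
func writeBatched(db ethdb.Database, items []kv) error {
	batch := db.NewBatch()
	for _, item := range items {
		if err := batch.Put(item.key, item.value); err != nil {
			return err
		}
		if batch.ValueSize() >= ethdb.IdealBatchSize {
			if err := batch.Write(); err != nil {
				return err
			}
			batch = db.NewBatch()
		}
	}
	return batch.Write()
}
```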
* core: optimize HasBlock check
This avoids a random database read to get the number.
* core: use numberCache in HasHeader
numberCache has higher capacity, increasing the odds of finding the
header without a database lookup.
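The lookup order can be pictured as below (hypothetical helper using the LRU cache package geth already depends on; real header keys include schema prefixes): the hash-to-number cache is consulted first, and only a miss falls through to the database.

```go
package sketch

import (
	lru "github.com/hashicorp/golang-lru"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethdb"
)

// hasHeader checks the larger hash->number cache before touching the
// database; a cache hit proves the header exists without any disk read.
func hasHeader(db ethdb.Database, numberCache *lru.Cache, hash common.Hash) bool {
	if numberCache.Contains(hash) {
		return true
	}
	ok, _ := db.Has(hash.Bytes()) // simplified: real header keys are prefixed
	return ok
}
```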
* core: write imported block data using a batch
Restore batch writes of state, and add blocks, tx entries, and receipts to
the same batch. The change also simplifies the miner.
This commit also removes posting of logs when a forked block is imported.
* core: fix DB write error handling
* ethdb: use RLock for Has
* core: fix HasBlock comment