State and receipt deliveries from a previous eth/62+ sync can hang if
the downloader has moved on to syncing with eth/61. Fix this by also
draining the eth/63 channels while waiting for eth/61 data.
A nicer solution would be to take care of the channels in a central
place, but that would involve a major rewrite.
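A minimal sketch of the idea, with illustrative channel names (blockCh, stateCh, receiptCh) rather than the downloader's actual fields: while waiting for eth/61 block data, the select loop also receives and discards eth/63 state and receipt deliveries so their senders never block.

```go
package downloader

// Illustrative sketch only; the channel names are assumptions, not the
// downloader's real fields.
func fetchBlocks61(blockCh, stateCh, receiptCh <-chan []byte, cancel <-chan struct{}) {
	for {
		select {
		case blocks := <-blockCh:
			_ = blocks // handle the eth/61 block data we are actually waiting for
		case <-stateCh:
			// drain stray eth/63 state deliveries so their senders don't hang
		case <-receiptCh:
			// drain stray eth/63 receipt deliveries
		case <-cancel:
			return
		}
	}
}
```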
Unexpected deliveries could block indefinitely if they arrived at the
right time. The fix is to ensure that the cancellation channel is
always closed when the sync ends, unblocking any deliveries. Also remove
the atomic check for whether a sync is currently running because it
doesn't help and can be misleading.
Always cancelling seems to break the tests, though. The downloader
spawned d.process whenever new data arrived, making it somewhat hard to
track when block processing was actually done. Fix this by running
d.process in a dedicated goroutine that is tied to the lifecycle of the
sync. d.process gets notified of new work by the queue instead of being
invoked all the time. This removes a ton of weird workaround code,
including a hairy use of atomic CAS.
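A hedged sketch of the resulting shape, with illustrative names (cancelCh, wakeCh, items) rather than the downloader's real fields, and with synchronisation and error handling elided: the cancel channel is recreated for each sync and closed unconditionally when the sync ends, and a single process goroutine tied to the sync is woken by the queue instead of being spawned for every delivery.

```go
package downloader

// Illustrative sketch; the real downloader's fields and signatures differ.
type Downloader struct {
	cancelCh chan struct{} // closed when the sync ends, unblocking deliveries
	wakeCh   chan struct{} // buffered (cap 1): the queue pokes the processor
	items    chan []byte   // stand-in for the queue's pending block data
}

func New() *Downloader {
	return &Downloader{wakeCh: make(chan struct{}, 1), items: make(chan []byte, 64)}
}

func (d *Downloader) synchronise() {
	d.cancelCh = make(chan struct{}) // fresh cancel channel per sync
	defer close(d.cancelCh)          // always closed when the sync ends

	go d.process(d.cancelCh) // one processing goroutine per sync
	// ... fetch headers/blocks until done or an error occurs ...
}

// deliver never blocks forever: once the sync ends, cancelCh is closed.
func (d *Downloader) deliver(data []byte) {
	select {
	case d.items <- data:
		select {
		case d.wakeCh <- struct{}{}: // notify the processor of new work
		default: // a wake-up is already pending
		}
	case <-d.cancelCh:
	}
}

// process imports queued data whenever the queue signals new work.
func (d *Downloader) process(cancel <-chan struct{}) {
	for {
		select {
		case <-d.wakeCh:
			d.importQueued()
		case <-cancel:
			return
		}
	}
}

// importQueued drains whatever is currently queued without blocking.
func (d *Downloader) importQueued() {
	for {
		select {
		case item := <-d.items:
			_ = item // validate and insert the block data
		default:
			return
		}
	}
}
```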
This removes the burden on a single object to take care of all
validation and state processing. The validation is now done by
`core.BlockValidator` (`types.Validator`), which takes care of both
header and uncle validation through the `ValidateBlock` method and of
state validation through the `ValidateState` method. The state
processing is done by a new object, `core.StateProcessor`
(`types.Processor`), which accepts a state as input and uses it to
process the given block's transactions (and uncles, for rewards) to
calculate the state root for the next block (P_n+1).
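Roughly what the split looks like, with heavily simplified signatures (the real interfaces also deal in receipts, logs and gas usage):

```go
package core

// Simplified sketch of the validation/processing split; real signatures differ.
type Block struct{ /* header, uncles, transactions ... */ }
type StateDB struct{ /* trie-backed account state ... */ }

// Validator checks a block's intrinsic consistency and, after processing,
// that the resulting state matches the header.
type Validator interface {
	ValidateBlock(block *Block) error
	ValidateState(block, parent *Block, state *StateDB) error
}

// Processor applies a block's transactions (and uncle rewards) to a state,
// producing the state root for the next block, P_n+1.
type Processor interface {
	Process(block *Block, state *StateDB) error
}
```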
There are a bunch of changes required to make this work:
- in miner: allow unregistering agents, fix RemoteAgent.Stop
- in eth/filters: make FilterSystem.Stop not crash
- in rpc/comms: move listen loop to platform-independent code
Fixes #1930. I ran the shell loop there for a few minutes and didn't see
any changes in the memory profile.
XEth.gpo was being initialized on demand. WithState copies the XEth
struct, including the gpo field. If gpo was still nil at the time of the
copy and Call or Transact was then invoked on the copy, an additional
GPO listenLoop would be spawned.
Move the lazy initialization to GasPriceOracle instead so the same GPO
instance is shared among all created XEths.
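One way to picture the fix, with illustrative names rather than the actual xeth code: the lazily started listen loop lives behind a sync.Once inside the GasPriceOracle itself, so no matter how many XEth copies share the oracle, the loop is spawned at most once.

```go
package xeth

import "sync"

// Illustrative sketch of moving the lazy start into the oracle itself.
type GasPriceOracle struct {
	startOnce sync.Once
}

func (gpo *GasPriceOracle) SuggestPrice() uint64 {
	// The listen loop is started at most once, regardless of how many
	// XEth copies call into the same oracle instance.
	gpo.startOnce.Do(func() { go gpo.listenLoop() })
	return 0 // placeholder; the real oracle derives this from recent blocks
}

func (gpo *GasPriceOracle) listenLoop() {
	// subscribe to chain events and update the price statistics ...
}

type XEth struct {
	gpo *GasPriceOracle // shared by every copy made via WithState
}
```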
Fixes #1317
Might help with #1930
* xeth, rpc: implement eth_getNatSpec for tx confirmations
* rename silly docserver -> httpclient
* eth/backend: httpclient now accessible via eth.Ethereum, initialised via config.DocRoot
* cmd: introduce separate CLI flag for DocRoot (defaults to homedir)
* common/path: delete unused assetpath func, separate HomeDir func
* lines with leading space are omitted from history
* exit is processed even with surrounding whitespace
* all-whitespace lines (not only empty ones) are ignored
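A minimal sketch of these input rules, using hypothetical helpers rather than the console's actual code:

```go
package console

import "strings"

// skipHistory reports whether an input line should be kept out of history:
// lines starting with a space and lines that are all whitespace are skipped.
func skipHistory(line string) bool {
	return strings.HasPrefix(line, " ") || strings.TrimSpace(line) == ""
}

// isExit treats "exit" as an exit command even with surrounding whitespace.
func isExit(line string) bool {
	return strings.TrimSpace(line) == "exit"
}
```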
add 7 missing commands to admin api autocomplete
registrar: methods now return a proper error if registrar addresses are not set. fixes #1457
rpc/console: fix personal.newAccount() regression. Now all comms accept an interactive password
registrar: add registrar tests for errors
crypto: catch AES decryption error on presale wallet import + fix error msg format. fixes #1580
CLI: improve error message when starting a second instance of geth. fixes #1564
cli/accounts: unlock multiple accounts. fixes #1785
* make unlocking multiple accounts work with inline <() fd
* passwdfile now correctly read only once
* improve logs
* fix CLI help text for unlocking
fix regression with docRoot / admin API
* docRoot/jspath passed to rpc/api ParseApis, which passes it on to adminApi
* add a docRoot field to the JS console so it can be passed along when the RPC is (re)started
* improve the flag description for jspath
common/docserver: catch HTTP errors from the response
fix rpc/api tests
common/natspec: fix end-to-end test (skipped because it takes 8s)
registrar: fix major regression:
* deploy registrars on Frontier
* register HashReg and UrlHint in GlobalRegistrar.
* set all 3 contract addresses in code
* zero out addresses first in tests
Log filtering now uses a MIPmap-like approach where the addresses of
logs are added to mapped bloom bins. The current MIP levels cover ranges
of 1,000,000, 500,000, 100,000, 50,000 and 1,000 blocks, so logs are
filtered in batches of 1,000.
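A hedged sketch of the MIPmap idea, with illustrative names and a plain set standing in for the bloom filters: each log address is recorded in the bin covering its block at every level, so a filter can consult the coarse levels first and only descend into the 1,000-block bins that might match.

```go
package filters

// Illustrative sketch of MIPmap-style log indexing; not the real implementation.
var levels = []uint64{1000000, 500000, 100000, 50000, 1000}

// bucketKey identifies one bloom bin: the level size and the start of the
// block range the bin covers.
type bucketKey struct {
	level uint64
	start uint64
}

// index maps each bin to the set of log addresses seen in that range
// (a stand-in for a real bloom filter).
type index map[bucketKey]map[string]bool

// addLog records a log's address in the bin of every level.
func (ix index) addLog(blockNum uint64, addr string) {
	for _, level := range levels {
		key := bucketKey{level, (blockNum / level) * level}
		if ix[key] == nil {
			ix[key] = make(map[string]bool)
		}
		ix[key][addr] = true
	}
}

// mayContain lets a filter skip whole ranges whose bin never saw the address.
func (ix index) mayContain(level, blockNum uint64, addr string) bool {
	return ix[bucketKey{level, (blockNum / level) * level}][addr]
}
```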
* core/vm: byte code VM moved to jump table instead of switch
* Moved `vm.Transfer` to `core` package and changed execution to call
`env.Transfer` instead of `core.Transfer` directly.
* Byte code VM now shares the same code as the JITVM
* Renamed Context to Contract
* Changed the initialiser of the state transition & unexported methods
* Removed the Execution object and refactored `Call`, `CallCode` &
  `Create` into their own functions instead of methods.
* Removed the VM's hard dependency on the state. The VM now
  depends on a Database interface returned by the environment (see the
  sketch after this list). In the process, core now depends less on the
  statedb by going through the env.
* Moved `Log` from package `core/state` to package `core/vm`.
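A rough sketch of the dependency shape described above, with simplified method names that are assumptions rather than the real vm package API:

```go
package vm

// Database is the narrow view of account state the VM needs; the VM no
// longer imports the state package directly.
type Database interface {
	GetBalance(addr [20]byte) uint64
	AddBalance(addr [20]byte, amount uint64)
	GetState(addr [20]byte, key [32]byte) [32]byte
	SetState(addr [20]byte, key, value [32]byte)
}

// Environment is what the VM asks for state access and value transfers.
type Environment interface {
	Db() Database
	Transfer(from, to [20]byte, amount uint64)
}
```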
Moved the filtering system from the `event` package to the `eth/filters`
package and removed the `core.Filter` object. The `filters.Filter`
object now requires a `common.Database` rather than an `eth.Backend` and
invokes the `core.GetBlockByX` helpers directly rather than through a
"manager".
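A loose sketch of the new dependency, with simplified types and a hypothetical getBlockByNumber stand-in for the core.GetBlockByX helpers:

```go
package filters

// Illustrative sketch of the decoupled filter; types are simplified.
type Database interface {
	Get(key []byte) ([]byte, error)
}

type Block struct{ /* header, logs, ... */ }

// getBlockByNumber is a hypothetical stand-in for core.GetBlockByNumber;
// it takes a bare database handle instead of going through a "manager".
// The nil return is a placeholder for the real lookup.
func getBlockByNumber(db Database, number uint64) *Block { return nil }

type Filter struct {
	db Database // a common.Database, not a full eth.Backend
}

func (f *Filter) Find(from, to uint64) []*Block {
	var matches []*Block
	for n := from; n <= to; n++ {
		if block := getBlockByNumber(f.db, n); block != nil {
			matches = append(matches, block) // apply address/topic checks here
		}
	}
	return matches
}
```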