* core: speed up GenerateChain
Use a mock implementation of ChainReader instead of creating
and destroying a BlockChain object for each generated block.
* eth/downloader: speed up tests by generating chain only once
This change reworks the downloader tests so they share a common test
blockchain instead of generating a chain in every test. The tests are
roughly twice as fast now.
This adds the global accumulated refund counter to the standard
json output as a numeric json value. Previously this was not very
interesting since it was not used much, but with the new sstore
gas changes the value is a lot more interesting from a consensus
investigation perspective.
* first impl of eth_getProof
* fixed docu
* added comments and refactored based on comments from holiman
* created structs
* handle errors correctly
* change Value to *hexutil.Big in order to have the same output as parity
* use ProofList as return type
Interpreter initialization is left to the PRs implementing them.
Options for external interpreters are passed after a colon in the
`--vm.ewasm` and `--vm.evm` switches.
Makes the Interpreter interface a bit more stateless and abstract.
This change is dictated by the EVMC design: EVMC tries to keep the responsibility for EVM features entirely inside the VMs, where feasible. This makes the VM "stateless", because the VM does not need to pass any information between executions; all information is included in the parameters of the execute function.
This commit does a few things at once:
- Updates the tests to contain the latest data from ethereum/tests repo.
- Enables Constantinople state tests. This is needed to be able to
fuzz-test the evm with constantinople rules.
- Fixes the error in opSAR that we've known about for some time. I was
kind of saving it to see if we hit upon it with the random test
generator, but it's difficult to both enable the tests and have the
bug there -- we don't want to forget about it, so maybe it's better
to just fix it.
* miner: commit the state that corresponds to the sealing result
* consensus, core, miner, mobile: introduce sealHash interface
* miner: evict pending task with threshold
* miner: go fmt
This PR enables the indexers to work in light client mode by
downloading a part of these tries (the Merkle proofs of the last
values of the last known section) in order to be able to add new
values and recalculate subsequent hashes. It also adds CHT data to
NodeInfo.
- Update benchmarks to use a pool of int pools.
Otherwise the benchmarks abort with a segmentation fault.
Signed-off-by: Hyung-Kyu Choi <hqueue@users.noreply.github.com>
- Define an Interpreter interface
- One contract can call contracts from other interpreter types.
- Pass the interpreter to the operands instead of the evm.
This is meant to prevent type assertions in operands.
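A minimal sketch of the interface shape this implies; the method names and the placeholder Contract type are assumptions, not the exact core/vm definitions:
```
package vm

// Contract is a stand-in for the real core/vm contract type.
type Contract struct {
	Code  []byte
	Input []byte
}

// Interpreter is a hypothetical sketch of the interface described above: the
// EVM can hold several interpreters and dispatch each contract to the first
// one whose CanRun accepts its code, so contracts of one interpreter type
// can call into another.
type Interpreter interface {
	// Run executes the contract's code with the given input.
	Run(contract *Contract, input []byte) ([]byte, error)
	// CanRun reports whether this interpreter can execute the given code.
	CanRun(code []byte) bool
}
```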
* core: fix func TxDifference
fix a typo in func comment;
change named return to unnamed as there's explicit return in the body
* fix another typo in TxDifference
The current trie memory database/cache that we do pruning on stores
trie nodes as binary rlp encoded blobs, and also stores the node
relationships/references for GC purposes. However, most of the trie
nodes (everything apart from a value node) is in essence just a
collection of references.
This PR switches out the RLP encoded trie blobs with the
collapsed-but-not-serialized trie nodes. This permits most of the
references to be recovered from within the node data structure,
avoiding the need to track them a second time (expensive memory wise).
* vm/test: add tests+benchmarks for mstore
* core/vm: less alloc and copying for mstore
* core/vm: less allocs in sload
* vm: check for errors more correctly
This removes a golint warning: type name will be used as trie.TrieSync by
other packages, and that stutters; consider calling this Sync.
In hexToKeybytes len(hex) is even and (even+1)/2 == even/2, remove the +1.
* core: use a wrapped `map` and `sync.RWMutex` for `TxPool.all` to remove contention in `TxPool.Get`.
* core: Remove redundant `txLookup.Find` and improve comments on txLookup methods.
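A rough sketch of such a wrapped map, assuming names close to the commit titles (the real txLookup carries more helpers, e.g. Count and Range):
```
package core

import (
	"sync"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// txLookup wraps the all-transactions map with an RWMutex so the hot read
// path (Get) only takes a read lock and no longer contends with writers.
type txLookup struct {
	all  map[common.Hash]*types.Transaction
	lock sync.RWMutex
}

func newTxLookup() *txLookup {
	return &txLookup{all: make(map[common.Hash]*types.Transaction)}
}

// Get returns a transaction if it exists in the lookup, or nil if not found.
func (t *txLookup) Get(hash common.Hash) *types.Transaction {
	t.lock.RLock()
	defer t.lock.RUnlock()
	return t.all[hash]
}

// Add inserts a transaction into the lookup.
func (t *txLookup) Add(tx *types.Transaction) {
	t.lock.Lock()
	defer t.lock.Unlock()
	t.all[tx.Hash()] = tx
}

// Remove deletes a transaction from the lookup.
func (t *txLookup) Remove(hash common.Hash) {
	t.lock.Lock()
	defer t.lock.Unlock()
	delete(t.all, hash)
}
```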
The 'from' and 'to' methods on StateTransitions are reader methods and
shouldn't have inadvertent side effects on state.
It is safe to remove the check in 'from' because account existence is
implicitly checked by the nonce and balance checks. If the account has
non-zero balance or nonce, it must exist. Even if the sender account has
nonce zero at the start of the state transition or no balance, the nonce
is incremented before execution and the account will be created at that
time.
It is safe to remove the check in 'to' because the EVM creates the
account if necessary.
Fixes #15119
* common: delete StringToAddress, StringToHash
These functions are confusing because they don't parse hex, but use the
bytes of the string. This change removes them, replacing all uses of
StringToAddress(s) by BytesToAddress([]byte(s)).
* eth/filters: remove incorrect use of common.BytesToAddress
Most of these methods did not contain all the relevant information
inside the object and were not using a similar formatting type.
Moreover, the existence of a suboptimal String method breaks usage
with more advanced data dumping tools like go-spew.
* core/vm, crypto/bn256: switch over to cloudflare library
* crypto/bn256: unmarshal constraint + start pure go impl
* crypto/bn256: combo cloudflare and google lib
* travis: drop 386 test job
- according to the implementation of `IntrinsicGas`, we could continue
execution since the problem would be detected later; however, an early
return is more future-proof.
Talk about "state" instead of "trie timing", "trie memory" and remove
the overzealous warning when the limit is just reached. Since the time
limit is always reached on slow machines, move the message to info level
so users don't freak out about internal details.
* core/types, core/vm, eth, tests: regenerate gencodec files
* Makefile: update devtools target
Install protoc-gen-go and print reminders about npm, solc and protoc.
Also switch to github.com/kevinburke/go-bindata because it's more
maintained.
* contracts/ens: update contracts and regenerate with solidity v0.4.19
The newer upstream version of the FIFSRegistrar contract doesn't set the
resolver anymore. The resolver is now deployed separately.
* contracts/release: regenerate with solidity v0.4.19
* contracts/chequebook: fix fallback and regenerate with solidity v0.4.19
The contract didn't have a fallback function, payments would be rejected
when compiled with newer solidity. References to 'mortal' and 'owned'
use the local file system so we can compile without network access.
* p2p/discv5: regenerate with recent stringer
* cmd/faucet: regenerate
* dashboard: regenerate
* eth/tracers: regenerate
* internal/jsre/deps: regenerate
* dashboard: avoid sed -i because it's not portable
* accounts/usbwallet/internal/trezor: fix go generate warnings
* core/vm: track 63/64 call gas off stack
Gas calculations in gasCall* relayed the available gas for calls by
replacing it on the stack. This led to inconsistent traces, which we
papered over by copying the pre-execution stack in trace mode.
This change relays available gas using a temporary variable, off the
stack, and allows removing the weird copy.
* core/vm: remove stackCopy
* core/vm: pop call gas into pool
* core/vm: to -> addr
* core: allow price bump at threshold
* core: test changes to allow price bump at threshold
* core: reinstate tx replacement test underneath threshold
* core: minor test failure message cleanups
This PR implements the new LES protocol version extensions:
* new and more efficient Merkle proofs reply format (when replying to
a multiple Merkle proofs request, we just send a single set of trie
nodes containing all necessary nodes)
* BBT (BloomBitsTrie) works similarly to the existing CHT and contains
the bloombits search data to speed up log searches
* GetTxStatusMsg returns the inclusion position or the
pending/queued/unknown state of a transaction referenced by hash
* an optional signature of new block data (number/hash/td) can be
included in AnnounceMsg to provide an option for "very light
clients" (mobile/embedded devices) to skip expensive Ethash check
and accept multiple signatures of somewhat trusted servers (still a
lot better than trusting a single server completely and retrieving
everything through RPC). The new client mode is not implemented in
this PR, just the protocol extension.
* cmd, consensus, core, miner: instatx clique for --dev
* cmd, consensus, clique: support configurable --dev block times
* cmd, core: allow --dev to use persistent storage too
* core/types: make Signer derive address instead of public key
There are two reasons to do this now: The upcoming ethclient signer
doesn't know the public key, just the address. EIP 208 will introduce a
new signer which derives the 'entry point' address for transactions with
zero signature. The entry point has no public key.
Other changes to the interface ease the path to moving signature
crypto out of core/types later.
* ethclient, mobile: add TransactionSender
The new method can get the right signer without any crypto, and without
knowledge of the signature scheme that was used when the transaction was
included.
When implementing the new bloombits based filter, I accidentally broke null
topics by removing the special casing of common.Hash{} filter rules, which
acted as the wildcard topic until now.
This PR fixes the regression, but instead of using the magic hash
common.Hash{} as the null wildcard, it reworks the code to handle nil
topics during parsing, converting a JSON null into a nil []common.Hash topic.
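A hedged sketch of the wildcard semantics described above; the helper name is illustrative and the real filter code operates on the [][]common.Hash produced by the JSON parser:
```
package filters

import "github.com/ethereum/go-ethereum/common"

// topicsMatch illustrates the rule described above: the parsed filter is a
// [][]common.Hash, a JSON null becomes a nil inner slice, and a nil slice at
// position i matches any topic at that position.
func topicsMatch(filter [][]common.Hash, logTopics []common.Hash) bool {
	if len(filter) > len(logTopics) {
		return false // log has fewer topics than the filter constrains
	}
	for i, alternatives := range filter {
		if len(alternatives) == 0 {
			continue // nil topic: wildcard, matches anything
		}
		matched := false
		for _, want := range alternatives {
			if want == logTopics[i] {
				matched = true
				break
			}
		}
		if !matched {
			return false
		}
	}
	return true
}
```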
* params: Updated finalized gascosts for ECMUL/MODEXP
* core,tests: Updates pending new tests
* tests: Updated with new tests
* core: revert state transition bugfix
* tests: Add expected failures due to #15119
* ethdb: add Putter interface and Has method
* ethdb: improve docs and add IdealBatchSize
* ethdb: remove memory batch lock
Batches are not safe for concurrent use.
* core: use ethdb.Putter for Write* functions
This covers the easy cases.
* core/state: simplify StateSync
* trie: optimize local node check
* ethdb: add ValueSize to Batch
* core: optimize HasHeader check
This avoids one random database read to get the block number. For many
uses of HasHeader, the expectation is that the header is actually there.
Using Has avoids a load + decode of the value.
* core: write fast sync block data in batches
Collect writes into batches up to the ideal size instead of issuing many
small, concurrent writes.
* eth/downloader: commit larger state batches
Collect nodes into a batch up to the ideal size instead of committing
whenever a node is received.
* core: optimize HasBlock check
This avoids a random database read to get the number.
* core: use numberCache in HasHeader
numberCache has higher capacity, increasing the odds of finding the
header without a database lookup.
* core: write imported block data using a batch
Restore batch writes of state and add blocks, tx entries, receipts to
the same batch. The change also simplifies the miner.
This commit also removes posting of logs when a forked block is imported.
* core: fix DB write error handling
* ethdb: use RLock for Has
* core: fix HasBlock comment
This fixes a regression where the new Failed field in ReceiptForStorage
rejected previously stored receipts. Fix it by removing the new field
and store status in the PostState field. This also removes massive RLP
hackery around the status field.
* Fix STATICCALL so it is able to call precompiles too
* Fix write detection to use the correct value argument of CALL
* Fix write protection to ignore the value in CALLCODE
* core: reduce txpool event loop goroutines and sync structs
* cmd, core, eth: journal local transactions to disk
* core: journal replacement pending transactions too
* core: separate transaction journal from pool
* core: remove redundant storage of transactions and receipts
* core, eth, internal: new transaction schema usage polishes
* eth: implement upgrade mechanism for db deduplication
* core, eth: drop old sequential key db upgrader
* eth: close last iterator on successful db upgrade
* core: prefix the lookup entries to make their purpose clearer
Tests are now included as a submodule. This should make updating easier
and removes ~60MB of JSON data from the working copy.
State tests are replaced by General State Tests, which run the same test
with multiple fork configurations.
With the new test runner, consensus tests are run as subtests by walking
json files. Many hex issues have been fixed upstream since the last
update and most custom parsing code is replaced by existing JSON hex
types. Tests can now be marked as 'expected failures', ensuring that
fixes for those tests will trigger an update to test configuration. The
new test runner also supports parallel execution and the -short flag.
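A sketch of what such a subtest-based runner can look like; the paths, fixture layout and function name are illustrative, not the exact tests package code:
```
package tests

import (
	"encoding/json"
	"io/ioutil"
	"os"
	"path/filepath"
	"testing"
)

// TestGeneralStateSketch walks a directory of JSON fixtures and registers one
// subtest per test case, which gives parallel execution and -short handling
// through the standard testing package.
func TestGeneralStateSketch(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping consensus tests in short mode")
	}
	root := filepath.Join("testdata", "GeneralStateTests")
	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() || filepath.Ext(path) != ".json" {
			return err
		}
		data, rerr := ioutil.ReadFile(path)
		if rerr != nil {
			return rerr
		}
		var cases map[string]json.RawMessage
		if jerr := json.Unmarshal(data, &cases); jerr != nil {
			return jerr
		}
		for name := range cases {
			name := name
			t.Run(filepath.Base(path)+"/"+name, func(t *testing.T) {
				t.Parallel()
				// Decode cases[name] and run it against every configured
				// fork here; known-bad cases would be marked as expected
				// failures instead of being silently skipped.
			})
		}
		return nil
	})
	if err != nil {
		t.Fatal(err)
	}
}
```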
The commit reworks the transaction pool queue limitation tests
to cater for testing local accounts, also testing the nolocal flag.
In addition, it also fixes a panic if local transactions exceeded
the global queue allowance (no accounts left to drop from) and also
fixes queue eviction to operate on all accounts, not just the one
being updated.
This PR polishes the EIP 100 difficulty adjustment algorithm to match
the way the Homestead algorithm was implemented, keeping the code
uniform. It also avoids a few memory allocs by reusing big1 and big2,
pulling them out of the common package and into ethash.
The commit also fixes chain maker to forward the uncle hash
when creating a simulated chain (it wasn't needed until now
so we just skipped a copy there).
With this commit, core/state's access to the underlying key/value database is
mediated through an interface. Database errors are tracked in StateDB and
returned by CommitTo or the new Error method.
Motivation for this change: We can remove the light client's duplicated copy of
core/state. The light client now supports node iteration, so tracing and storage
enumeration can work with the light client (not implemented in this commit).
* eth/downloader: separate state sync from queue
Scheduling of state node downloads hogged the downloader queue lock when
new requests were scheduled. This caused timeouts for other requests.
With this change, state sync is fully independent of all other downloads
and doesn't involve the queue at all.
State sync is started and checked on in processContent. This is slightly
awkward because processContent doesn't have a select loop. Instead, the
queue is closed by an auxiliary goroutine when state sync fails. We
tried several alternatives to this but settled on the current approach
because it's the least amount of change overall.
Handling of the pivot block has changed slightly: the queue previously
prevented import of pivot block receipts before the state of the pivot
block was available. In this commit, the receipt will be imported before
the state. This causes an annoyance where the pivot block is committed
as fast block head even when state downloads fail. Stay tuned for more
updates in this area ;)
* eth/downloader: remove cancelTimeout channel
* eth/downloader: retry state requests on timeout
* eth/downloader: improve comment
* eth/downloader: mark peers idle when state sync is done
* eth/downloader: move pivot block splitting to processContent
This change also ensures that pivot block receipts aren't imported
before the pivot block itself.
* eth/downloader: limit state node retries
* eth/downloader: improve state node error handling and retry check
* eth/downloader: remove maxStateNodeRetries
It fails the sync too much.
* eth/downloader: remove last use of cancelCh in statesync.go
Fixes TestDeliverHeadersHang*Fast and (hopefully)
the weird cancellation behaviour at the end of fast sync.
* eth/downloader: fix leak in runStateSync
* eth/downloader: don't run processFullSyncContent in LightSync mode
* eth/downloader: improve comments
* eth/downloader: fix vet, megacheck
* eth/downloader: remove unrequested tasks anyway
* eth/downloader, trie: various polishes around duplicate items
This commit explicitly tracks duplicate and unexpected state
deliveries done against a trie Sync structure, also adding
these to the import info logs.
The commit moves the db batch used to commit trie changes one
level deeper so it's flushed after every node insertion. This
is needed to avoid a lot of duplicate retrievals caused by
inconsistencies between Sync internals and the database. A better
approach is to track not-yet-written states in trie.Sync and
flush on commit, but I'm focusing on correctness first for now.
The commit fixes a regression around the pivot block fail counter.
The counter previously was reset to 1 if and only if a sync
cycle progressed (inserted at least 1 entry into the database).
The current code reset it already if a node was delivered,
which is not strong enough, because unless it ends up written
to disk, an attacker can just loop and attack ad infinitum.
The commit also fixes a regression around state deliveries
and timeouts. The old downloader tracked whether a delivery was
stale (none of the deliveries were requested), in which
case it didn't mark the node idle and did not send further
requests, since that signals a past timeout. The current code
did mark it idle even on stale deliveries, which eventually
caused two requests to be in flight at the same time, making
the deliveries always stale and mass-duplicating retrievals
between multiple peers.
* eth/downloader: fix state request leak
This commit fixes the hang seen sometimes while doing the state
sync. The cause of the hang was a rare combination of events:
request state data from a peer, the peer drops and reconnects almost
immediately. This caused a new download task to be assigned to
the peer, overwriting the old one still waiting for a timeout,
which in turn leaked the requests out, never to be retried.
The fix is to ensure that a task assignment moves any pending
one back into the retry queue.
The commit also fixes a regression with peer dropping due to
stalls. The current code considered a peer stalling if they
timed out delivering 1 item. However, the downloader never
requests only one, the minimum is 2 (attempt to fine tune
estimated latency/bandwidth). The fix is simply to drop if
a timeout is detected at 2 items.
Apart from the above bugfixes, the commit contains some code
polishes I made while debugging the hang.
* core, eth, trie: support batched trie sync db writes
* trie: rename SyncMemCache to syncMemBatch
This commit is a preparation for the upcoming metropolis hardfork. It
prepares the state, core and vm packages such that integration with
metropolis becomes less of a hassle.
* Difficulty calculation requires header instead of individual
parameters
* statedb.StartRecord renamed to statedb.Prepare and added Finalise
method required by metropolis, which removes unwanted accounts from
the state (i.e. selfdestruct)
* State keeps record of destructed objects (in addition to dirty
objects)
* core/vm pre-compiles may now return errors
* core/vm pre-compiles gas check now takes the full byte slice as argument
instead of just the size
* core/vm now keeps several hard-fork instruction tables instead of a
single instruction table and removes the need for hard-fork checks in
the instructions
* core/vm contains an empty restriction function which is added in
preparation of metropolis write-only mode operations
* Adds the bn256 curve
* Adds and sets the metropolis chain config block parameters (2^64-1)
The 'step' method is split into two parts, 'peek' and 'push'. peek
returns the next state but doesn't make it current.
The end of iteration was previously tracked by setting 'trie' to nil.
End of iteration is now tracked using the 'iteratorEnd' error, which is
slightly cleaner and requires less code.
Make it so each iterator has exactly one public constructor:
- NodeIterators can be created through a method.
- Iterators can be created through NewIterator on any NodeIterator.
In the `touch` operation, only the `touched` field has been changed. Therefore
in the related undo function, only the `touched` field should be reverted.
In addition, whether to remove this object from the dirty map should depend on
the prevDirty flag.
This commit adds pluggable consensus engines to go-ethereum. In short, it
introduces a generic consensus interface, and refactors the entire codebase to
use this interface.
This commit solves several issues concerning the genesis block:
* Genesis/ChainConfig loading was handled by cmd/geth code. This left
library users in the cold. They could specify a JSON-encoded
string and overwrite the config, but didn't get any of the additional
checks performed by geth.
* Decoding and writing of genesis JSON was conflated in
WriteGenesisBlock. This made it a lot harder to embed the genesis
block into the forthcoming config file loader. This commit changes
things so there is a single Genesis type that represents genesis
blocks. All uses of Write*Genesis* are changed to use the new type
instead.
* If the chain config supplied by the user was incompatible with the
current chain (i.e. the chain had already advanced beyond a scheduled
fork), it got overwritten. This is not an issue in practice because
previous forks have always had the highest total difficulty. It might
matter in the future though. The new code reverts the local chain to
the point of the fork when upgrading configuration.
The change to genesis block data removes compression library
dependencies from package core.
This commit makes the wrapper types more generally applicable.
encoding.TextMarshaler is supported by most codec implementations (e.g.
for yaml).
The tests now ensure that package json actually recognizes the custom
marshaler implementation irrespective of how it is implemented.
The Uint type has new tests, too. These are tricky because uint size
depends on the CPU word size. Turns out that there was one incorrect
case where decoding returned ErrUint64Range instead of ErrUintRange.
* Improved the standard evm tracer output and renamed it to WriteTrace
which now takes an io.Writer to write the logs to.
* Added WriteLogs which writes logs to the given writer in a readable
format.
* evm utility now also prints logs generated during the execution.
* common/math: optimize PaddedBigBytes, use it more
```
name              old time/op    new time/op    delta
PaddedBigBytes-8    71.1ns ± 5%    46.1ns ± 1%  -35.15%  (p=0.000 n=20+19)

name              old alloc/op   new alloc/op   delta
PaddedBigBytes-8     48.0B ± 0%     32.0B ± 0%  -33.33%  (p=0.000 n=20+20)
```
* all: unify big.Int zero checks
Various checks were in use. This commit replaces them all with Int.Sign,
which is cheaper and less code.
eg templates:
```
func before(x *big.Int) bool { return x.BitLen() == 0 }
func after(x *big.Int) bool  { return x.Sign() == 0 }

func before(x *big.Int) bool { return x.BitLen() > 0 }
func after(x *big.Int) bool  { return x.Sign() != 0 }

func before(x *big.Int) int { return x.Cmp(common.Big0) }
func after(x *big.Int) int  { return x.Sign() }
```
* common/math, crypto/secp256k1: make ReadBits public in package math
* common: remove CurrencyToString
Move denomination values to params instead.
* common: delete dead code
* common: move big integer operations to common/math
This commit consolidates all big integer operations into common/math and
adds tests and documentation.
There should be no change in semantics for BigPow, BigMin, BigMax, S256,
U256, Exp and their behaviour is now locked in by tests.
The BigD, BytesToBig and Bytes2Big functions don't provide additional
value, all uses are replaced by new(big.Int).SetBytes().
BigToBytes is now called PaddedBigBytes, its minimum output size
parameter is now specified as the number of bytes instead of bits. The
single use of this function is in the EVM's MSTORE instruction.
Big and String2Big are replaced by ParseBig, which is slightly stricter.
It previously accepted leading zeros for hexadecimal inputs but treated
decimal inputs as octal if a leading zero digit was present.
ParseUint64 is used in places where String2Big was used to decode a
uint64.
The new functions MustParseBig and MustParseUint64 are now used in many
places where parsing errors were previously ignored.
* common: delete unused big integer variables
* accounts/abi: replace uses of BytesToBig with use of encoding/binary
* common: remove BytesToBig
* common: remove Bytes2Big
* common: remove BigTrue
* cmd/utils: add BigFlag and use it for error-checked integer flags
While here, remove environment variable processing for DirectoryFlag
because we don't use it.
* core: add missing error checks in genesis block parser
* common: remove String2Big
* cmd/evm: use utils.BigFlag
* common/math: check for 256 bit overflow in ParseBig
This is supposed to prevent silent overflow/truncation of values in the
genesis block JSON. Without this check, a genesis block that set a
balance larger than 256 bits would lead to weird behaviour in the VM.
* cmd/utils: fixup import
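The 256 bit guard described in the ParseBig bullet above can be sketched roughly as follows; the helper name and exact parsing rules are assumptions, not the actual common/math code:
```
package math

import "math/big"

// parseBig256 parses s as a decimal or 0x-prefixed hexadecimal integer and
// rejects anything that does not fit in 256 bits, so oversized genesis
// balances fail loudly instead of being silently truncated in the VM.
func parseBig256(s string) (*big.Int, bool) {
	if s == "" {
		return new(big.Int), true
	}
	var bigint *big.Int
	var ok bool
	if len(s) >= 2 && (s[:2] == "0x" || s[:2] == "0X") {
		bigint, ok = new(big.Int).SetString(s[2:], 16)
	} else {
		bigint, ok = new(big.Int).SetString(s, 10)
	}
	if ok && bigint.BitLen() > 256 {
		return nil, false // would overflow 256 bits
	}
	return bigint, ok
}
```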
This PR implements a differenceIterator, which allows iterating over trie nodes
that exist in one trie but not in another. This is a prerequisite for most GC
strategies, in order to find obsolete nodes.
Reworked the EVM gas instructions to use 64bit integers rather than
arbitrary size big ints. All gas operations, be it additions,
multiplications or divisions, are checked and guarded against 64 bit
integer overflows.
In addition, most of the protocol parameters in the params package have
been converted to uint64 and are now constants rather than variables.
* common/math: added overflow check ops (see the sketch after this list)
* core: vmenv, env renamed to evm
* eth, internal/ethapi, les: unmetered eth_call and cancel methods
* core/vm: implemented big.Int pool for evm instructions
* core/vm: unexported intPool methods & verification methods
* core/vm: added memoryGasCost overflow check and test
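The overflow check ops can be pictured as helpers of the following shape; names and exact signatures are assumptions, not the actual common/math code:
```
package math

const maxUint64 = ^uint64(0)

// safeAdd returns x+y and whether the addition overflowed uint64.
func safeAdd(x, y uint64) (uint64, bool) {
	return x + y, y > maxUint64-x
}

// safeMul returns x*y and whether the multiplication overflowed uint64.
func safeMul(x, y uint64) (uint64, bool) {
	if x == 0 || y == 0 {
		return 0, false
	}
	return x * y, y > maxUint64/x
}
```
Gas computations can then bail out with a gas-overflow error whenever the reported flag is set.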
* core,eth,internal: Added `debug_getBadBlocks()` method
When bad blocks are discovered, these are stored within geth.
An RPC endpoint makes them available within the `debug`
namespace. This feature makes it easier to discover network forks.
* core, api: go format + docs
* core/blockchain: Documentation, fix minor nitpick
* core: fix failing blockchain test
The Subscription type is gone, all uses are replaced by
*TypeMuxSubscription. This change is prep-work for the
introduction of the new Subscription type in a later commit.
```
gorename -from '"github.com/ethereum/go-ethereum/event"::Event' -to TypeMuxEvent
gorename -from '"github.com/ethereum/go-ethereum/event"::muxsub' -to TypeMuxSubscription
gofmt -w -r 'Subscription -> *TypeMuxSubscription' ./event/*.go
find . -name '*.go' -and -not -regex '\./vendor/.*' | xargs gofmt -w -r 'event.Subscription -> *event.TypeMuxSubscription'
```
Removal of dead code that appeared as if we had a consensus issue. This
however is not the case, as the proper error catching happens in the vm
package instead.
* core: Made logging of reorgs more structured, also always log if reorg is > 63 blocks long
* core/blockchain: go fmt
* core/blockchain: Minor fixes to the reorg reporting
This significantly reduces the dependency closure of ethclient, which no
longer depends on core/vm as of this change.
All uses of vm.Logs are replaced by []*types.Log. NewLog is gone too,
the constructor simply returned a literal.
The run loop, which previously contained custom opcode execution, has been
removed and simplified to a few checks.
Each operation consists of 4 elements: an execution function, a gas cost
function, a stack validation function and a memory size function. The
execution function implements the operation's runtime behaviour; the gas
cost function implements the operation's gas cost and greatly depends on
the memory and stack; the stack validation function validates the stack and
makes sure that enough items can be popped off and pushed on; and the
memory size function calculates the memory required for the operation.
This commit also allows the EVM to go unmetered. This is helpful for offline
operations such as contract calls.
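A compact sketch of the four-element operation layout described above; all type and field names are illustrative stand-ins, not the exact core/vm jump table definitions:
```
package vm

// Placeholder types standing in for the real core/vm ones.
type (
	EVM      struct{}
	Contract struct{}
	Memory   struct{}
	Stack    struct{}
)

type (
	executionFunc       func(pc *uint64, evm *EVM, contract *Contract, mem *Memory, stack *Stack) ([]byte, error)
	gasFunc             func(evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error)
	stackValidationFunc func(*Stack) error
	memorySizeFunc      func(*Stack) (uint64, error)
)

// operation bundles the four elements described above; the run loop simply
// validates the stack, charges gas, expands memory and calls execute.
type operation struct {
	execute       executionFunc       // runtime behaviour of the opcode
	gasCost       gasFunc             // gas cost, typically dependent on stack and memory
	validateStack stackValidationFunc // ensures enough items can be popped/pushed
	memorySize    memorySizeFunc      // memory required by the operation
}
```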
To address increasing complexity in code that handles signatures, this PR
discards all notion of "different" signature types at the library level. Both
the crypto and accounts packages are reduced to only be able to produce plain
canonical secp256k1 signatures. This makes the crypto APIs much cleaner,
simpler and harder to abuse.
The transaction pool keeps track of the current nonce in its local pendingState. When a
new block comes in, the pendingState is reset. During the reset it fetches the current
state multiple times through the currentState callback. When a second block comes in
during the reset, it's possible that the state changes mid-reset. If that block
holds transactions that are currently in the pool, the local pendingState that is used to
determine nonces can get out of sync.
Environment is now a struct (not an interface). This
reduces a lot of tech-debt throughout the codebase where a virtual
machine environment had to be implemented in order to test or run it.
The new environment is suitable for use in the JSON tests, core
consensus and the light client.
This field used to be assigned by the filter system and returned through
the RPC API. Now that we have a Go client that uses the underlying type,
the field needs to move. It is now assigned to true when the RemovedLogs
event is generated so the filter system doesn't need to care about the
field at all.
While here, remove the log list from ChainSideEvent. There are no users
of this field right now and any potential users could subscribe to
RemovedLogsEvent instead.
* core, core/types: refactored tx chain id checking
Refactored explicit chain id checking into the Sender derivation method
* cmd/utils, params: define chain ids
This commit implements EIP158 parts 1, 2, 3 & 4
1. If an account is empty it's no longer written to the trie. An empty
account is defined as (balance=0, nonce=0, storage=0, code=0).
2. Delete an empty account if it's touched
3. An empty account is redefined as either non-existent or empty.
4. Zero value calls and zero value suicides no longer consume the 25k
creation costs.
params: moved core/config to params
Signed-off-by: Jeffrey Wilcke <jeffrey@ethereum.org>
These accessors were introduced by light client changes, but
the only method that is actually used is GetNumberU64. This
commit replaces all uses of .GetNumberU64 with .Number.Uint64.
* core: Add metrics collection for transaction events; replace/discard for pending and future queues, as well as invalid transactions
* core: change namespace for txpool metrics
* core: define more metrics (not yet used)
* core: implement more tx metrics for when transactions are dropped
* core: minor formatting tweaks (will squash later)
* core: remove superfluous meter, fix missing pending nofunds
* core, metrics: switch txpool meters to counters
This commit includes several API changes:
- The behavior of eth_sign is changed. It now accepts an arbitrary
message, prepends the well-known string
\x19Ethereum Signed Message:\n<length of message>
hashes the result using keccak256 and calculates the signature of
the hash (see the sketch after this list). This breaks backwards compatibility!
- personal_sign(hash, address [, password]) is added. It has the same
semantics as eth_sign but also accepts a password. The private key
used to sign the hash is temporarily unlocked in the scope of the
request.
- personal_recover(message, signature) is added and returns the
address for the account that created a signature.
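The prefixing scheme from the first bullet can be sketched as follows; the function name is illustrative and the real code lives in the accounts/RPC layers:
```
package main

import (
	"fmt"

	"golang.org/x/crypto/sha3"
)

// signHash prepends the well-known string together with the message length
// and hashes the result with keccak256, as described above.
func signHash(message []byte) []byte {
	prefixed := fmt.Sprintf("\x19Ethereum Signed Message:\n%d%s", len(message), message)
	hasher := sha3.NewLegacyKeccak256()
	hasher.Write([]byte(prefixed))
	return hasher.Sum(nil)
}

func main() {
	fmt.Printf("%x\n", signHash([]byte("hello world")))
}
```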
This implements 1b & 1c of EIP150 by adding a new GasTable which must be
returned from the RuleSet config method. This table is used to determine
the gas prices for the current epoch.
Please note that when the CreateBySuicide gas price is set it is assumed
that we're in the new epoch phase.
In addition this PR will serve as a temporary basis while refactoring
is being done in the EVM64 PR, which will substantially overhaul the gas
price code.
* trie: store nodes as pointers
This avoids memory copies when unwrapping node interface values.
```
name      old time/op  new time/op  delta
Get        388ns ± 8%   215ns ± 2%  -44.56%  (p=0.000 n=15+15)
GetDB      363ns ± 3%   202ns ± 2%  -44.21%  (p=0.000 n=15+15)
UpdateBE  1.57µs ± 2%  1.29µs ± 3%  -17.80%  (p=0.000 n=13+15)
UpdateLE  1.92µs ± 2%  1.61µs ± 2%  -16.25%  (p=0.000 n=14+14)
HashBE    2.16µs ± 6%  2.18µs ± 6%     ~     (p=0.436 n=15+15)
HashLE    7.43µs ± 3%  7.21µs ± 3%   -2.96%  (p=0.000 n=15+13)
```
* trie: close temporary databases in GetDB benchmark
* trie: don't keep []byte from DB load around
Nodes decoded from a DB load kept hashes and values as sub-slices of
the DB value. This can be a problem because loading from leveldb often
returns []byte with a cap that's larger than necessary, increasing
memory usage.
* trie: unload old cached nodes
* trie, core/state: use cache unloading for account trie
* trie: use explicit private flags (fixes Go 1.5 reflection issue).
* trie: fixup cachegen overflow at request of nick
* core/state: rename journal size constant
Two new tests are skipped because they're buggy. Making some newer
random state tests work required implementing the 'compressed return
value encoding'.
This commit replaces the deep-copy based state revert mechanism with a
linear complexity journal. This commit also hides several internal
StateDB methods to limit the number of ways in which calling code can
use the journal incorrectly.
As usual consultation and bug fixes to the initial implementation were
provided by @karalabe, @obscuren and @Arachnid. Thank you!
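A rough sketch of the journalling idea, assuming an entry-per-mutation design with illustrative names:
```
package state

// StateDB is a stand-in for the real state database type.
type StateDB struct{}

// journalEntry is one recorded mutation that knows how to undo itself.
type journalEntry interface {
	revert(db *StateDB)
}

// journal records mutations in order, so reverting to a snapshot costs time
// linear in the number of changes made after the snapshot was taken.
type journal struct {
	entries []journalEntry
}

func (j *journal) append(entry journalEntry) {
	j.entries = append(j.entries, entry)
}

// revert undoes every entry recorded after the given snapshot index.
func (j *journal) revert(db *StateDB, snapshot int) {
	for i := len(j.entries) - 1; i >= snapshot; i-- {
		j.entries[i].revert(db)
	}
	j.entries = j.entries[:snapshot]
}
```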
Adds a `limit` tracer option that specifies the maximum number of elements
in the `structLogs` output. This option is useful for debugging a transaction
that involves a large amount of repetition.
For example,
```
debug.traceTransaction(tx, {disableStorage: true, limit: 2})
```
shows at most the first two steps in the `structLogs`.
In this commit, core/types's types learn how to encode and decode
themselves as JSON. The encoding is very similar to what the RPC API
uses. The RPC API is missing some output fields (e.g. transaction
signature values) which will be added to the API in a later commit. Some
fields that the API generates are ignored by the decoder methods here.
This CL makes several refactors:
- Define a Tracer interface, implementing the `CaptureState` method
- Add the VM environment as the first argument of
`Tracer.CaptureState`
- Rename existing functionality `StructLogger` and make it an
implementation of `Tracer`
- Delete `StructLogCollector` and make `StructLogger` collect the logs
directly
- Change all callers to use the new `StructLogger` where necessary and
extract logs from that.
- Deletes the apparently obsolete and likely nonfunctional 'TraceCall'
from the eth API.
Callers that only wish accumulated logs can use the `StructLogger`
implementation straightforwardly. Callers that wish to efficiently
capture VM traces and operate on them without excessive copying can now
implement the `Tracer` interface to receive VM state at each step and
do with it as they wish.
This CL also removes the accumulation of logs from the vm.Environment;
this was necessary as part of the refactor, but also simplifies it by
removing a responsibility that doesn't directly belong to the
Environment.
This implements a generic approach to enabling soft forks by allowing
anyone to put in hashes of contracts that should not be interacted with.
This will help "The DAO" in their endeavour to stop any withdrawals from
any DAO contract by convincing the mining community to accept their code
hash.
Consensus rules dictate that objects can only be removed during the
finalisation of the transaction (i.e. after all calls have finished).
Thus calling a suicided contract twice from the same transaction:
A->B(S)->ret(A)->B(S) results in 2 suicides. Calling the suicided
object twice from two transactions: A->B(S), A->B, results in only one
suicide and a call to an empty object.
Our current debug tracing functionality replays all transactions that
were executed prior to the targeted transaction in order to provide
the user with an accurate trace.
As a side effect of calling StateDB.IntermediateRoot it also deletes any
suicided objects. Our tracing code never calls this function because it
isn't interested in the intermediate root. Because of this it caused a
bug in the tracing code where transactions that were sent to previously
deleted objects resulted in two suicides rather than one suicide and a
call to an empty object.
Fixes #2542
We used to have reporting of bad blocks, but it was disabled
before the Frontier release. We need it back because users
are usually unable to provide the full RLP data of a bad
block when it occurs.
A shortcoming of this particular implementation is that the
origin peer is not tracked for blocks received during eth/63
sync. No origin peer info is still better than no report at
all though.
This fixes an issue where it's theoretically possible to cause a consensus
failure when hitting the lower end of the difficulty; though practically
impossible, it's worth a fix.
Shutting down geth prints hundreds of annoying error messages in some
cases. The errors appear because the Stop method of eth.ProtocolManager,
miner.Miner and core.TxPool is asynchronous. Left over peer sessions
generate events which are processed after Stop even though the database
has already been closed.
The fix is to make Stop synchronous using sync.WaitGroup.
For eth.ProtocolManager, in order to make the use of WaitGroup safe, we need
a way to stop new peer sessions from being added while waiting on the
WaitGroup. The eth protocol Run function now selects on a signaling
channel and adds to the WaitGroup only if ProtocolManager is not
shutting down.
For miner.worker and core.TxPool the number of goroutines is static,
WaitGroup can be used in the usual way without additional
synchronisation.
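One way to obtain that guarantee is sketched below. The actual geth code selects on a signalling channel inside the protocol Run function as described above; this version uses a mutex-guarded flag purely to keep the example short, and all names are illustrative:
```
package eth

import (
	"errors"
	"sync"
)

type ProtocolManager struct {
	mu       sync.Mutex
	stopping bool
	wg       sync.WaitGroup
}

// register adds a peer session to the WaitGroup unless shutdown has begun.
func (pm *ProtocolManager) register() bool {
	pm.mu.Lock()
	defer pm.mu.Unlock()
	if pm.stopping {
		return false
	}
	pm.wg.Add(1)
	return true
}

// runPeer is what the protocol Run function would call for each session.
func (pm *ProtocolManager) runPeer() error {
	if !pm.register() {
		return errors.New("protocol manager is shutting down")
	}
	defer pm.wg.Done()
	// ... handle the peer session ...
	return nil
}

// Stop is synchronous: when it returns, no peer sessions are running and no
// new ones can start, so the database can be closed safely afterwards.
func (pm *ProtocolManager) Stop() {
	pm.mu.Lock()
	pm.stopping = true
	pm.mu.Unlock()
	pm.wg.Wait()
}
```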
This is necessary for external users of the go-ethereum code who want to, for instance, build a custom node that plays back transactions, as core.ApplyTransaction requires a ChainConfig as a parameter.
According to our own instructions the genesis config attribute should be
"config". The genesis definition in the go code, however, has a field
called `ChainConfig`. This field now has a `json:"config"` struct tag so
that the json is properly unmarshalled.
This fixes #2482
Exposes some core methods to transition and compute new state
information and adds an additional return value to the transition db
method to fetch the required gas for that particular message (excluding gas
refunds from any SSTORE[X] = 0 and SUICIDE).
Fixes #2395
The chain maker and the simulated backend now run with a homestead phase
beginning at block 0 (i.e. there's no frontier).
This commit also fixes up #2388
Added chain configuration options and write out during genesis database
insertion. If no "config" was found, nothing is written to the database.
Configurations are written on a per-genesis basis. This means
that any chain (which is identified by its genesis hash) can have its
own chain settings.
The EVM was previously initialised and created for every CALL, CALLCODE,
DELEGATECALL and CREATE. This PR changes this behaviour so that the same
EVM can be used through the session and beyond as long as the
Environment sticks around.
Added a future lock which prevents anything from being added to or removed
from the set while looping over the set of blocks. This fixes a nil
pointer in the range loop when trying to retrieve a block from the set
which was previously available but removed due to regular chain
processing.
Fixes #2305
Previously all blocks that were already in our chain were never
re-announced as potential uncle blocks (e.g. ChainSideEvent). This is
problematic during mining, where you want to gather as many possible
uncles as possible to increase the profit. This is now addressed in this
PR: during reorganisations of chains the old chain is regarded as
uncles.
Fixed #2298
Assume the following scenario: a miner has 15% of all hashing
power and the ability to exert a moderate control over the network to
the point where, if the attacker sees a message A, it can't stop A from
propagating, but what it **can** do is send a message B and ensure that
most nodes see B before A. The attacker can then selfishly mine and
augment the selfish mining strategy by giving his own blocks an advantage.
This change makes the time at which a block is received less relevant
and so the level of control an attacker has over the network no longer
makes a difference.
This change replaces the current td algorithm `B_td > C_td` with the new
algorithm `B_td > C_td || (B_td == C_td && rnd < 0.5)`.
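As a sketch, the new rule amounts to something like the following (names illustrative):
```
package core

import (
	"math/big"
	"math/rand"
)

// reorgNeeded decides whether an incoming block with total difficulty
// externTd should replace the current head with total difficulty localTd,
// implementing B_td > C_td || (B_td == C_td && rnd < 0.5).
func reorgNeeded(externTd, localTd *big.Int) bool {
	if cmp := externTd.Cmp(localTd); cmp != 0 {
		return cmp > 0 // strictly higher total difficulty always wins
	}
	return rand.Float64() < 0.5 // equal TD: accept the new block half the time
}
```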
* Removed some strange code that didn't apply state reverting properly
* Refactored code setting from vm & state transition to the executioner
* Updated tests
* change gas cost for contract creating txs
* invalidate signature with s value greater than secp256k1 N / 2
* OOG contract creation if not enough gas to store code
* new difficulty adjustment algorithm
* new DELEGATECALL op code
Pending logs are now filterable through the Go API. The Filter API changed
such that each filter type has its own bucket and adding a filter
explicitly requires you to specify the bucket to put it in.
Implemented `runtime.Call` which uses - unlike Execute - the given state
for the execution and the address of the contract you wish to execute.
Unlike `Execute`, `Call` requires a config.
The test chain generated by makeChainFork included invalid uncle
headers, crashing the generator during the state commit.
The headers were invalid because they used the iteration counter as the
block number, even though makeChainFork uses a block with number > 0 as
the parent. Fix this by introducing BlockGen.Number, which allows
accessing the actual number of the block being generated.
When a chain reorganisation occurs we collect the logs that were deleted
during the chain reorganisation. The removed logs are posted to the
event mux indicating that those were deleted during the reorg.
The runtime environment can be used for simple basic execution of
contract code without the requirement of setting up a full stack and
operates fully in memory.
This removes the burden on a single object to take care of all
validation and state processing. Now instead the validation is done by
the `core.BlockValidator` (`types.Validator`) that takes care of both
header and uncle validation through the `ValidateBlock` method and state
validation through the `ValidateState` method. The state processing is
done by a new object `core.StateProcessor` (`types.Processor`) and
accepts a new state as input and uses that to process the given block's
transactions (and uncles for rewards) to calculate the state root for
the next block (P_n + 1).
The amount of gas available for tx execution was tracked in the
StateObject representing the coinbase account. This commit makes the gas
counter a separate type in package core, which avoids unintended
consequences of intertwining the counter with state logic.
Moved the execution of instructions to the instruction itself. This
will allow for specialised instructions (e.g. segments) to be executed
in the same manner as regular instructions.
Log filtering is now using a MIPmap-like approach where addresses of
logs are added to a mapped bloom bin. The current levels for the MIP are
in ranges of 1,000,000, 500,000, 100,000, 50,000 and 1,000. Logs are
therefore filtered in batches of 1,000.
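A sketch of the bucketing, using the level sizes quoted above; the helper names are assumptions:
```
package core

// mipLevels are the bin sizes quoted above, from coarsest to finest.
var mipLevels = []uint64{1000000, 500000, 100000, 50000, 1000}

// mipBin returns the first block number of the bin that blockNum falls into
// at the given level; a log's addresses are added to the bloom stored for
// that bin, so a filter can skip whole coarse ranges before descending to
// finer levels.
func mipBin(blockNum, level uint64) uint64 {
	return blockNum / level * level
}
```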
* Moved `vm.Transfer` to `core` package and changed execution to call
`env.Transfer` instead of `core.Transfer` directly.
* core/vm: byte code VM moved to jump table instead of switch
* Byte code VM now shares the same code as the JITVM
* Renamed Context to Contract
* Changed initialiser of state transition & unexported methods
* Removed the Execution object and refactor `Call`, `CallCode` &
`Create` in to their own functions instead of being methods.
* Removed the hard dep on the state for the VM. The VM now
depends on a Database interface returned by the environment. In the
process the core now depends less on the statedb by usage of the env
* Moved `Log` from package `core/state` to package `core/vm`.
Moved the filtering system from the `event` package to the `eth/filters`
package and removed the `core.Filter` object. The `filters.Filter` object now
requires a `common.Database` rather than an `eth.Backend` and invokes
`core.GetBlockByX` directly rather than through a "manager".
This PR solves an issue with the chain manager posting a
`RemovedTransactionEvent`: the tx pool will try to
acquire the chain manager lock, which has previously been locked prior to
posting `RemovedTransactionEvent`. This results in a deadlock in the
core.
The test genesis block was not written properly, so block insertion failed
immediately.
While here, fix the panic when shutting down "geth blocktest" with
Ctrl+C. The signal handler is now installed automatically, causing
ethereum.Stop to crash because everything is already stopped.
Added a `Difference` method to `types.Transactions` which sets the
receiver to the difference of a to b (NOTE: not a **and** b).
Transaction pool subscribes to RemovedTransactionEvent adding back to
those potential missing from the chain.
When a chain re-org occurs remove any transactions that were removed
from the canonical chain during the re-org as well as the receipts that
were generated in the process.
Closes #1746
When the transaction state recovery kicked in it assigned the last
(incorrect) nonce to the pending state which caused transactions with
the same nonce to occur.
Added test for nonce recovery
Reduced big int allocation by making stack items modifiable. Instead of
adding items such as `common.Big0` to the stack, `new(big.Int)` is
added instead. One must expect that any item that is added to the stack
might change.
The running flag determines whether the chain manager is still
running. This prevents the quit channel from being closed
twice, which would result in a panic. This PR should fix this issue.
Closes #1559
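A minimal sketch of that guard, with illustrative field names:
```
package core

import "sync/atomic"

type ChainManager struct {
	running int32 // 1 while running, flipped to 0 exactly once on Stop
	quit    chan struct{}
}

// Stop is idempotent: only the first caller flips the flag and closes quit,
// so the channel can never be closed twice.
func (cm *ChainManager) Stop() {
	if !atomic.CompareAndSwapInt32(&cm.running, 1, 0) {
		return // already stopped
	}
	close(cm.quit)
}
```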
Added PutBlockReceipts, storing receipts by block. Eventually this will
require pruning during some cleanup cycle. During forks the receipts by
block are used to get the new canonical receipts and transactions.
This PR fixes #1473 by rewriting transactions and receipts from the point
where the fork occurred.
* Update => SyncIntermediate
* Added SyncObjects
SyncIntermediate only updates whatever has changed, but, as a side
effect, requires much more disk space.
SyncObjects will only sync whatever is required for a block and will not
save intermediate state to disk. As a drawback this requires more time
when more txs come in.
* Miners now verify their own header, not their state.
* Changed old putTx and putReceipts to be exported
* Moved writing of transactions and receipts out of the block processor
into the chain manager. Closes #1386
* Miner posts ChainHeadEvent & ChainEvent. Closes #1388
Changed the transaction pool to listen for ChainHeadEvent when resetting
the state instead of ChainEvent. It makes very little sense to burst
through transactions while we are catching up (e.g., have more than one
block to process)
This fixes an issue with the lru cache not being available when calling
WriteBlock. WriteBlock previously assumed it was always called from
InsertChain, where the lru cache was always created prior to calling
WriteBlock. When called from the worker this could lead to a
nil pointer exception being thrown, causing database corruption.
Removed the chain manager's select/channel approach when checking for
interrupts. Now using an atomic int32 instead, which is checked for every
block processed.
You can set the nonce of the block with `--genesisnonce`. When the
genesis nonce changes and it doesn't match with the first block in your
database it will fail. A new `datadir` must be given if the nonce of the
genesis block changes.
Moved the managed tx state from the chain manager to the transaction
pool, where it's much easier to keep track of nonces (and manage them).
The transaction pool now also uses the queue and pending txs differently:
queued txs are now moved over to the pending queue (i.e. txs ready
for processing and propagation).
* JUMPDEST analysis is faster because fewer type conversions are performed.
* The map of JUMPDEST locations is now created lazily at the first JUMP.
* The result of the analysis is kept around for recursive invocations
through CALL/CALLCODE.
Fixes #1147
* Miner should no longer generate blocks with a timestamp less than or
equal to its parent's.
* Future blocks are no longer processed and queued directly.
Closes #1118
ChainManager now uses a parallel approach to block processing where all
nonces are checked separately from the block processing. This
speeds up the process by about 3 times on my i7
* core: Added GasPriceChange event
* eth: When one of the DB flush methods errors, a fatal error log message
is given. Hopefully this will prevent corrupted databases from
occurring.
* miner: remove transactions with low gas price. Closes #906, #903
* common/compiler: solidity compiler + tests
* rpc: eth_compilers, eth_compileSolidity + tests
* fix natspec test using keystore API, notice exp dynamically changes addr, cleanup
* resolver implements registrars and needs to create reg contract (temp)
* xeth: solidity compiler. expose getter Solc() and paths setter SetSolc(solcPath)
* ethereumApi: implement compiler related RPC calls using XEth - json struct tests
* admin: make use of XEth.SetSolc to allow runtime setting of compiler paths
* cli: command line flags solc to set custom solc bin path
* js admin api with new features debug and contractInfo modules
* wiki is the doc https://github.com/ethereum/go-ethereum/wiki/Contracts-and-Transactions
Chain reorgs weren't properly handled when a chain was further ahead.
Previously we'd end up with mixed chains in our canonical numbering
sequence. Added test for this type of forking.
```
/-o-o-o A
o-C-+
\-o-o-o-o B
```
Ends up with C, A1, A2, A3, B4
* Changed CalcGasLimit to no longer need current block
* Added a gas * price + value on tx validation
* Transactions in the pool are now re-validated once every X
Transactions are now propagated to peers from which we have not yet
received the transaction. This will significantly reduce the chatter on
the network.
Moved new mined block handler to the protocol handler and moved
transaction handling to protocol handler.
Implemented a new transaction queue. Transactions with holes in their
nonce sequence are also not propagated over the network.
N: 0,1,2,5,6,7 = propagate 0..2 -- 5..N is kept in the tx pool
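A sketch of the promotion rule behind the `N: 0,1,2,5,6,7` example above, with types simplified:
```
package core

// tx is a simplified stand-in for a pooled transaction.
type tx struct {
	Nonce uint64
}

// promote returns the transactions forming a contiguous nonce run starting
// at startNonce; everything after the first hole stays queued and is not
// propagated. For nonces 0,1,2,5,6,7 with startNonce 0 it returns 0..2.
func promote(startNonce uint64, queued map[uint64]*tx) []*tx {
	var pending []*tx
	for nonce := startNonce; ; nonce++ {
		t, ok := queued[nonce]
		if !ok {
			return pending // first hole: stop promoting
		}
		pending = append(pending, t)
		delete(queued, nonce)
	}
}
```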
You can now specify `null` as a way of saying "not interested in this
topic, match all". core.Filter assumes the zero'd address to be the
wildcard. JSON rpc assumes empty strings to be wildcards.
During a split, the parent and grandparent were included in the database but
not in the canonical chain (numbered chain). Added a `merge` function
which finds the common ancestor of the chains and reinserts the missing
blocks.
- follow up locks and fix them
- chainManager: call SetQueued for parentErr future blocks, uncomment TD checks, unskip test
- make ErrIncorrectTD non-fatal to be forgiving to genuine mistaken nodes (temp) but demote them to guard against stuck best peers.
- add purging to bounded nodeCache (config nodeCacheSize)
- use nodeCache when creating blockpool entries and let non-best peers add blocks (performance boost)
- minor error in addError
- reduce idleBestPeerTimeout to 1 minute
- correct status counts and unskip status passing status test
- glogified logging
- queued bool // flag for blockpool to skip TD check
- set to true when future block queued
- in checkTD: skip check if queued
- TODO: add test (insertchain sets future block)
The transaction pool will now more easily be able to pre-determine the
validity of a transaction by checking the following:
* Account exists
* Gas limit is higher than the intrinsic gas
* Enough funds to pay the upfront costs
* Nonce check
Logs are now recorded per transaction instead of being tossed out after
each transaction. This should also fix an issue with
`eth_getFilterLogs` (#629). Also now implemented are the `transactionHash,
blockHash, transactionIndex, logIndex` fields on logs. Closes #654.
* Add params package with exported variables generated from
github.com/ethereum/common/blob/master/params.json
* Use params package variables in applicable places
* Add check for minimum gas limit in validation of block's gas limit
* Remove common/params.json from go-ethereum to avoid
outdated version of it
* Use absolute value of (block's gas limit) - (parent's gas limit)
in comparison with diff limit.
* Ensure the diff is strictly smaller than the allowed size.