There are two transaction parameter structures defined in
the codebase, albeit for different purposes, and most of
the parameters are shared. This change reduces the code
duplication by merging them together.
Co-authored-by: Martin Holst Swende <martin@swende.se>
This removes the error log message that says
Ethereum peer removal failed ... err="peer not registered"
The error happened because removePeer was called multiple
times: once to disconnect the peer, and another time when the
handler exited. With this change, removePeer now has the sole
purpose of disconnecting the peer. Unregistering happens exactly
once, when the handler exits.
* core/types, miner: create TxWithMinerFee wrapper, add EIP-1559 support to TransactionsByMinerFeeAndNonce
miner: set base fee when creating a new header, handle gas limit, log miner fees
* all: rename to NewTransactionsByPriceAndNonce
* core/types, miner: rename to NewTransactionsByPriceAndNonce + EffectiveTip
miner: activate 1559 for testGenerateBlockAndImport tests
* core,miner: revert naming to TransactionsByPriceAndTime
* core/types/transaction: update effective tip calculation logic
* miner: update aleut to london
* core/types/transaction_test: use correct signer for 1559 txs + add back sender check
* miner/worker: calculate gas target from gas limit
* core, miner: fix block gas limits for 1559
Co-authored-by: Ansgar Dietrichs <adietrichs@gmail.com>
Co-authored-by: lightclient@protonmail.com <lightclient@protonmail.com>
This change extracts the peer QoS tracking logic from eth/downloader, moving
it into the new package p2p/msgrate. The job of msgrate.Tracker is determining
suitable timeout values and request sizes per peer.
The snap sync scheduler now uses msgrate.Tracker instead of the hard-coded 15s
timeout. This should make the sync work better on network links with high latency.
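A rough sketch of the underlying idea, with hypothetical names rather than the actual msgrate API: keep a smoothed per-peer round-trip estimate and derive the timeout from it instead of hard-coding one.

```go
package main

import (
	"fmt"
	"time"
)

// rttTracker is a hypothetical, simplified per-peer tracker: it keeps an
// exponential moving average of observed round-trip times and derives a
// timeout from it, instead of using a fixed value like 15s.
type rttTracker struct {
	estimate time.Duration // smoothed RTT estimate
}

// observe folds a measured round trip into the moving average.
func (t *rttTracker) observe(rtt time.Duration) {
	if t.estimate == 0 {
		t.estimate = rtt
		return
	}
	// 90% old estimate, 10% new sample.
	t.estimate = time.Duration(0.9*float64(t.estimate) + 0.1*float64(rtt))
}

// timeout allows a generous multiple of the estimated RTT before a
// request is considered failed.
func (t *rttTracker) timeout() time.Duration {
	return 3 * t.estimate
}

func main() {
	tr := &rttTracker{}
	for _, rtt := range []time.Duration{80 * time.Millisecond, 120 * time.Millisecond, 100 * time.Millisecond} {
		tr.observe(rtt)
	}
	fmt.Println("per-peer timeout:", tr.timeout())
}
```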
This is the initial implementation of EIP-1559 in packages core/types and core.
Mining, RPC, etc. will be added in subsequent commits.
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: lightclient@protonmail.com <lightclient@protonmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This removes auto-configuration of the snap.*.ethdisco.net DNS discovery tree.
Since measurements have shown that > 75% of nodes in all.*.ethdisco.net support
snap, we have decided to retire the dedicated index for snap and just use the eth
tree instead.
The dial iterators of eth and snap now use the same DNS tree in the default configuration,
so both iterators should use the same DNS discovery client instance. This ensures that
the record cache and rate limit are shared. Records will not be requested multiple times.
While testing the change, I noticed that duplicate DNS requests do happen even
when the client instance is shared. This is because the two iterators request the tree
root, link tree root, and first levels of the tree in lockstep. To avoid this problem, the
change also adds a singleflight.Group instance in the client. When one iterator
attempts to resolve an entry which is already being resolved, the singleflight object
waits for the existing resolve call to finish and returns the entry to both places.
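A minimal sketch of the deduplication technique using golang.org/x/sync/singleflight (the resolver below is an illustrative stand-in, not the actual dnsdisc client code):

```go
package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/singleflight"
)

// resolveEntry is a stand-in for an expensive DNS TXT lookup.
func resolveEntry(name string) (string, error) {
	fmt.Println("resolving", name)
	return "entry-for-" + name, nil
}

func main() {
	var group singleflight.Group
	var wg sync.WaitGroup

	// Two "iterators" ask for the same tree root concurrently; singleflight
	// collapses concurrent calls for the same key into a single resolve and
	// hands the result to both callers.
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			v, err, shared := group.Do("tree-root", func() (interface{}, error) {
				return resolveEntry("tree-root")
			})
			fmt.Println(v, err, "shared:", shared)
		}()
	}
	wg.Wait()
}
```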
* eth/protocols/snap: generate storage trie from full dirty snap data
* eth/protocols/snap: get rid of some more dead code
* eth/protocols/snap: less frequent logs, also log during trie generation
* eth/protocols/snap: implement dirty account range stack-hashing
* eth/protocols/snap: don't loop on account trie generation
* eth/protocols/snap: fix account format in trie
* core, eth, ethdb: glue snap packets together, but not chunks
* eth/protocols/snap: print completion log for snap phase
* eth/protocols/snap: extended tests
* eth/protocols/snap: make testcase pass
* eth/protocols/snap: fix account stacktrie commit without defer
* ethdb: fix key counts on reset
* eth/protocols: fix typos
* eth/protocols/snap: make better use of delivered data (#44)
* eth/protocols/snap: make better use of delivered data
* squashme
* eth/protocols/snap: reduce chunking
* squashme
* eth/protocols/snap: reduce chunking further
* eth/protocols/snap: break out hash range calculations
* eth/protocols/snap: use sort.Search instead of looping
* eth/protocols/snap: prevent crash on storage response with no keys
* eth/protocols/snap: nitpicks all around
* eth/protocols/snap: clear heal need on 1-chunk storage completion
* eth/protocols/snap: fix range chunker, add tests
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* trie: fix test API error
* eth/protocols/snap: fix some further linter issues
* eth/protocols/snap: fix accidental batch reuse
Co-authored-by: Martin Holst Swende <martin@swende.se>
* eth/protocols, p2p/tracker: add support for req/rep rtt tracking
* p2p/tracker: sanity cap the number of pending requests
* p2p/tracker: linter <3
* p2p/tracker: disable entire tracker if no metrics are enabled
This change adds the --catalyst flag, enabling an RPC API for eth2 integration.
In this initial version, catalyst mode also disables all peer-to-peer networking.
Co-authored-by: Mikhail Kalinin <noblesse.knight@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
* all: add thousands separators for big numbers on log messages
* p2p/sentry: drop accidental file
* common, log: add fast number formatter
* common, eth/protocols/snap: simplify fancy num types
* log: handle nil big ints
* core/vm: implement AccessListTracer
* eth: implement debug.createAccessList
* core/vm: fixed nil panics in accessListTracer
* eth: better error messages for createAccessList
* eth: some fixes on CreateAccessList
* eth: allow for provided accesslists
* eth: pass accesslist by value
* eth: remove created account from accesslist
* core/vm: simplify access list tracer
* core/vm: unexport accessListTracer
* eth: return best guess if al iteration times out
* eth: return best guess if al iteration times out
* core: docstring, unexport methods
* eth: typo
* internal/ethapi: move createAccessList to eth package
* internal/ethapi: remove reexec from createAccessList
* internal/ethapi: break if al is equal to last run, not if gas is equal
* internal/web3ext: fixed arguments
* core/types: fixed equality check for accesslist
* core/types: no hardcoded vals
* core, internal: simplify access list generation, make it precise
* core/vm: fix typo
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
This removes the duplicated definition of eth_chainID
in package eth and updates the definition in internal/ethapi
to treat chain ID as a bigint.
Co-authored-by: Felix Lange <fjl@twurst.com>
The PR implements the --miner.notify.full flag that enables full pending block
notifications. When this flag is used, the block notifications sent to mining
endpoints contain the complete block header JSON instead of a work package
array.
Co-authored-by: AlexSSD7 <alexandersadovskyi7@protonmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
Fixes the CaptureStart API to include the EVM, making it possible to set the statedb early on. This PR also exposes the struct we used internally in the interpreter to encapsulate the contract, memory, stack and return stack, passing it to the tracer as a single struct, and removes the error returns on the capture methods.
This adds support for EIP-2718 typed transactions as well as EIP-2930
access list transactions (tx type 1). These EIPs are scheduled for the
Berlin fork.
There are very few changes to existing APIs in core/types, and several new APIs
to deal with access list transactions. In particular, there are two new
constructor functions for transactions: types.NewTx and types.SignNewTx.
Since the canonical encoding of typed transactions is not RLP-compatible,
Transaction now has new methods for encoding and decoding: MarshalBinary
and UnmarshalBinary.
The existing EIP-155 signer does not support the new transaction types.
All code dealing with transaction signatures should be updated to use the
newer EIP-2930 signer. To make this easier for future updates, we have
added new constructor functions for types.Signer: types.LatestSigner and
types.LatestSignerForChainID.
This change also adds support for the YoloV3 testnet.
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Ryan Schneider <ryanleeschneider@gmail.com>
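A hedged usage sketch of the constructors described above (placeholder values; based on the API names mentioned in this commit): build an access-list transaction, sign it with the latest signer, and encode it with MarshalBinary.

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	key, _ := crypto.GenerateKey()
	chainID := big.NewInt(1)

	// LatestSignerForChainID picks the most recent signer, which understands
	// both legacy and typed (EIP-2930) transactions.
	signer := types.LatestSignerForChainID(chainID)

	to := common.HexToAddress("0x000000000000000000000000000000000000dead")
	txdata := &types.AccessListTx{
		ChainID:  chainID,
		Nonce:    0,
		GasPrice: big.NewInt(1000000000),
		Gas:      21000,
		To:       &to,
		Value:    big.NewInt(1),
	}

	// SignNewTx builds (like types.NewTx) and signs the transaction in one step.
	tx, err := types.SignNewTx(key, signer, txdata)
	if err != nil {
		panic(err)
	}

	// MarshalBinary produces the canonical typed-transaction encoding,
	// which is not an RLP list.
	enc, _ := tx.MarshalBinary()
	fmt.Printf("tx type %d, %d encoded bytes\n", tx.Type(), len(enc))
}
```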
This PR adds one more CLI flag so that the les server can serve light clients even when the local node is not yet synced.
This functionality is needed in some testing environments (e.g. hive), where no blocks are imported after launching the les server, so the node is always marked as "non-synced".
This PR prevents users from submitting transactions without EIP-155 enabled. This behaviour can be overridden by specifying the flag --rpc.allow-unprotected-txs=true.
This PR optimizes the broadcast loop. Instead of iterating twice through a given set of transactions to determine which peers already have a tx and which do not before sending/announcing, we now do it in a single pass.
Prevents a situation where we (not running snap) connect with a peer running snap and get stalled waiting for snap registration to succeed (which will never happen), causing a waitgroup wait to halt shutdown.
This moves the eth config definition into a separate package, eth/ethconfig.
Packages eth and les can now import this common package instead of
importing eth from les, reducing dependencies.
Co-authored-by: Felix Lange <fjl@twurst.com>
The PR makes use of the stacktrie, which is more lenient on resource consumption than the regular trie, in cases where we only need it for DeriveSha.
* remove unneeded type conversion
* remove redundant type in composite literal
* omit explicit type where implicit
* remove unused redundant parenthesis
* remove redundant import alias duktape
Removes the yolov2 definition, adds yolov3, including EIP-2565. This PR also disables some of the erroneously generated blockchain and statetests, and adds the new genesis hash + alloc for yolov3.
This PR disables the CLI switches for yolo, since it's not complete until we merge support for 2930.
This moves the tracing RPC API implementation to package eth/tracers.
By doing so, package eth no longer depends on tracing and the duktape JS engine.
The change also enables tracing using the light client. All tracing methods work with the
light client, but it's a lot slower compared to using a full node.
This PR introduces a new config field SyncFromCheckpoint for the light client.
In some special scenarios, it's required to start synchronization from some
arbitrary checkpoint or even from scratch, so this PR offers users the
flexibility to configure the synchronization start point.
There are two relevant configs: SyncFromCheckpoint and Checkpoint.
- If SyncFromCheckpoint is true, the light client will try to sync from the
specified checkpoint.
- If Checkpoint is not configured, then the light client will sync from
scratch (from the latest header if the database is not empty).
Additional notes: these two configs are not visible in the CLI flags but only
accessible in the config file.
Example Usage:
[Eth]
SyncFromCheckpoint = true
[Eth.Checkpoint]
SectionIndex = 100
SectionHead = "0xabc"
CHTRoot = "0xabc"
BloomRoot = "0xabc"
PS: Historical checkpoints can be retrieved from a synced full node or light
client via the les_getCheckpoint API.
This changes the chainID RPC method to return an error when EIP-155 is not yet
active at the current block height. It used to simply return zero in this case, but
that's confusing.
During the snap and eth refactor, the net_version rpc call was mistakenly deprecated.
This restores the net_version RPC handler as most eth2 nodes and other software
depend on it.
This commit splits the eth package, separating the handling of eth and snap protocols. It also includes the capability to run snap sync (https://github.com/ethereum/devp2p/blob/master/caps/snap.md), but does not enable it by default.
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Martin Holst Swende <martin@swende.se>
This PR implements an unclean shutdown marker. Every time geth boots, it adds a timestamp to a list of timestamps in the database. This list is capped at 10. At a clean shutdown, the timestamp is removed again.
Thus, when geth exits uncleanly, the marker remains, and at boot up we show the most recent unclean shutdowns to the user, which makes it easier to diagnose the root causes of certain problems.
Co-authored-by: Nagy Salem <me@muhnagy.com>
In miner/worker.go, there are two goroutines using the channel w.newWorkCh: newWorkerLoop() sends to this channel, and mainLoop() receives from this channel. Only the receive operation is in a select.
However, w.exitCh may be closed by another goroutine. This is fine for the receive since it is in a select, but if the send operation is blocking, then it will block forever. This commit puts the send in a select, so it won't block even if w.exitCh is closed.
Similarly, there are two goroutines using the channel errc: the parent that runs the test receives from it, and the child created at line 573 sends to it. If the parent goroutine exits too early by calling t.Fatalf() at line 614, then the child goroutine will be blocked at line 574 forever. This commit adds a buffer of 1 to errc. Now the send will not block, and the receive is not influenced because it still needs to wait for the send.
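A minimal illustration of the select pattern described above (generic channel names, not the actual worker code): putting the send into a select with the exit channel prevents the producer goroutine from blocking forever once the consumer has stopped.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	workCh := make(chan int)      // unbuffered, like w.newWorkCh
	exitCh := make(chan struct{}) // closed on shutdown, like w.exitCh

	// Producer: without the select, the send would block forever after the
	// consumer stops. With the select, closing exitCh unblocks it.
	go func() {
		for i := 0; ; i++ {
			select {
			case workCh <- i:
			case <-exitCh:
				fmt.Println("producer: exiting")
				return
			}
			time.Sleep(10 * time.Millisecond)
		}
	}()

	// Consumer handles a couple of items, then shuts down.
	for i := 0; i < 2; i++ {
		fmt.Println("consumer got:", <-workCh)
	}
	close(exitCh)
	time.Sleep(50 * time.Millisecond)
}
```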
* all: core: split vm.Config into BlockConfig and TxConfig
* core: core/vm: reset EVM between tx in block instead of creating new
* core/vm: added docs
core/types: use stacktrie for derivesha
trie: add stacktrie file
trie: fix linter
core/types: use stacktrie for derivesha
rebased: adapt stacktrie to the newer version of DeriveSha
Co-authored-by: Martin Holst Swende <martin@swende.se>
More linter fixes
review feedback: no key offset for nodes converted to hashes
trie: use EncodeRLP for full nodes
core/types: insert txs in order in derivesha
trie: tests for derivesha with stacktrie
trie: make stacktrie use pooled hashers
trie: make stacktrie reuse tmp slice space
trie: minor polishes on stacktrie
trie/stacktrie: less rlp dancing
core/types: explain the contortions in DeriveSha
ci: fix goimport errors
trie: clear mem on subtrie hashing
squashme: linter fix
stacktrie: use pooling, less allocs (#3)
trie: in-place hex prefix, reduce allocs and add rawNode.EncodeRLP
Reintroduce the `[]node` method, add the missing `EncodeRLP` implementation for `rawNode` and calculate the hex prefix in place.
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Martin Holst Swende <martin@swende.se>
This changes how the downloader works a little bit. Previously, when block sync started,
we immediately started filling up to 8192 blocks. Usually this is fine; blocks are small
in the early part of the chain. The threshold is then lowered as we measure the size of
the blocks that are filled.
However, if the node is shut down and restarts syncing while we're in a heavy segment,
that might be bad. This PR introduces a more conservative initial threshold of 2K blocks
instead.
* "Downloader queue stats" is now a DEBUG information
I think this info is more a DEBUG related information then an INFO. If it must remains an INFO, maybe it can be slow down to one time every 5 minutes or so.
* Update queue.go
"Downloader queue stats" information is now provided once every minute instead of once every 10 seconds.
This PR significantly changes the APIs for instantiating Ethereum nodes in
a Go program. The new APIs are not backwards-compatible, but we feel that
this is made up for by the much simpler way of registering services on
node.Node. You can find more information and rationale in the design
document: https://gist.github.com/renaynay/5bec2de19fde66f4d04c535fd24f0775.
There is also a new feature in Node's Go API: it is now possible to
register arbitrary handlers on the user-facing HTTP server. In geth, this
facility is used to enable GraphQL.
There is a single minor change relevant for geth users in this PR: The
GraphQL API is no longer available separately from the JSON-RPC HTTP
server. If you want GraphQL, you need to enable it using the
./geth --http --graphql flag combination.
The --graphql.port and --graphql.addr flags are no longer available.
* init
notes
removed some mentions of eth62, bumped protocol err too old to >=63
* remove sanity checks and bump supported protocol version up to 63
* remove 62 tests, still need to add 65
* remove 65 tests
* eth/downloader: refactor downloader + queue
downloader, fetcher: throttle-metrics, fetcher filter improvements, standalone resultcache
downloader: more accurate deliverytime calculation, less mem overhead in state requests
downloader/queue: increase underlying buffer of results, new throttle mechanism
eth/downloader: updates to tests
eth/downloader: fix up some review concerns
eth/downloader/queue: minor fixes
eth/downloader: minor fixes after review call
eth/downloader: testcases for queue.go
eth/downloader: minor change, don't set progress unless progress...
eth/downloader: fix flaw which prevented useless peers from being dropped
eth/downloader: try to fix tests
eth/downloader: verify non-deliveries against advertised remote head
eth/downloader: fix flaw with checking closed-status causing hang
eth/downloader: hashing avoidance
eth/downloader: review concerns + simplify resultcache and queue
eth/downloader: add back some locks, address review concerns
downloader/queue: fix remaining lock flaw
* eth/downloader: nitpick fixes
* eth/downloader: remove the *2*3/4 throttling threshold dance
* eth/downloader: print correct throttle threshold in stats
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
This change introduces garbage collection for the light client. Historical
chain data is deleted periodically. If you want to disable the GC, use
the --light.nopruning flag.
This fixes two issues with state sync restarts:
When sync restarts with a new root, some peers can have in-flight requests.
Since all peers with active requests were marked idle when exiting sync,
the new sync would schedule more requests for those peers. When the
response for the earlier request arrived, the new sync would reject it and
mark the peer idle again, rendering the peer useless until it disconnected.
The other issue was that peers would not be marked idle when they had
delivered a response, but the response hadn't been processed before
restarting the state sync. This also made the peer useless because it
would be permanently marked busy.
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR reduces the bandwidth used by the light client to compute the
recommended gas price. The current mechanism for suggesting the price is:
- retrieve recent 20 blocks
- get the lowest gas price of these blocks
- sort the price array and return the middle (60%) one
This works for full nodes, which have all blocks available locally.
However, this is very expensive for the light client because the light
client needs to retrieve block bodies from the network.
The PR changes the default options for the light client. With the new config,
the light client only retrieves the two latest blocks, but in order to
collect more sample transactions, the 3 lowest prices are collected from
each block.
This PR also changes the behavior for empty blocks. If the block is empty,
the latest price is reused for sampling.
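A simplified sketch of the new sampling policy (illustrative code, not the actual gasprice oracle): take the few cheapest prices from each of the latest blocks and pick a percentile from the combined sample.

```go
package main

import (
	"fmt"
	"math/big"
	"sort"
)

// cheapestPrices returns up to limit lowest gas prices from one block's
// transaction price list (a stand-in for reading a block body).
func cheapestPrices(prices []*big.Int, limit int) []*big.Int {
	sorted := append([]*big.Int(nil), prices...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].Cmp(sorted[j]) < 0 })
	if len(sorted) > limit {
		sorted = sorted[:limit]
	}
	return sorted
}

func main() {
	// Two latest blocks, three cheapest samples from each.
	blocks := [][]*big.Int{
		{big.NewInt(7), big.NewInt(3), big.NewInt(9), big.NewInt(4)},
		{big.NewInt(5), big.NewInt(2), big.NewInt(8)},
	}
	var sample []*big.Int
	for _, prices := range blocks {
		sample = append(sample, cheapestPrices(prices, 3)...)
	}
	sort.Slice(sample, func(i, j int) bool { return sample[i].Cmp(sample[j]) < 0 })

	// Return the 60th-percentile entry of the combined sample.
	suggested := sample[len(sample)*60/100]
	fmt.Println("suggested gas price:", suggested)
}
```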
* eth/downloader: fixed data race between synchronize and Progress
There was a race condition between `downloader.synchronize()` and `Progress`, `syncWithPeer`, `fetchHeight`, `findAncestors` and `processHeaders`.
This PR changes the behavior of the downloader a bit.
Previously the functions `Progress` `syncWithPeer` `fetchHeight` `findAncestors` and `processHeaders` read the syncMode anew within their loops. Now they read the syncMode at the start of their function and don't change it during their runtime.
* eth/downloader: comment
* eth/downloader: added comment
* core, crypto: various allocation savings regarding tx handling
* core: reduce allocs for gas price comparison
This change reduces the allocations needed for comparing different transactions to each other.
A call to `tx.GasPrice()` copies the gas price as it has to be safe against modifications and
also needs to be threadsafe. For comparing and ordering different transactions we don't need
these guarantees.
* core: added tx.GasPriceIntCmp for comparison without allocation
adds a method to remove unneeded allocation in comparison to tx.gasPrice
* core/types: pool legacykeccak256 objects in rlpHash
rlpHash is by far the most used function in core that allocates a legacyKeccak256 object on each call.
Since it is so widely used it makes sense to add pooling here so we relieve the GC.
On my machine these changes result in > 100 MILLION fewer allocations and > 30 GB less allocated memory.
* reverted some changes
* reverted some changes
* trie: use crypto.KeccakState instead of replicating code
Co-authored-by: Martin Holst Swende <martin@swende.se>
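A hedged sketch of the pooling idea applied to rlpHash (simplified; the real function writes RLP-encoded data into the hasher): reuse Keccak-256 hasher state via sync.Pool instead of allocating a fresh hasher per call.

```go
package main

import (
	"fmt"
	"hash"
	"sync"

	"golang.org/x/crypto/sha3"
)

// hasherPool keeps Keccak-256 state objects around between calls so the GC
// doesn't have to reclaim one per hashing operation.
var hasherPool = sync.Pool{
	New: func() interface{} { return sha3.NewLegacyKeccak256() },
}

// keccak256 hashes data using a pooled hasher (stand-in for rlpHash, which
// would RLP-encode its argument into the hasher instead).
func keccak256(data []byte) [32]byte {
	h := hasherPool.Get().(hash.Hash)
	defer hasherPool.Put(h)
	h.Reset()
	h.Write(data)
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	fmt.Printf("%x\n", keccak256([]byte("hello")))
}
```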
* eth/downloader tests: fix spurious failing test due to race between receipts/headers
* miner tests: fix travis failure on arm64
* eth/downloader: tests - store td in ancients too
* core/vm: use fixed uint256 library instead of big
* core/vm: remove intpools
* core/vm: upgrade uint256, fixes uint256.NewFromBig
* core/vm: use uint256.Int by value in Stack
* core/vm: upgrade uint256 to v1.0.0
* core/vm: don't preallocate space for 1024 stack items (only 16)
Co-authored-by: Martin Holst Swende <martin@swende.se>
This PR makes use of go 1.13 error handling, wrapping errors and using
errors.Is to check a wrapped root-cause. It also removes the travis
builders for go 1.11 and go 1.12.
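For reference, the Go 1.13 error-handling pattern being adopted: wrap errors with %w and check the root cause with errors.Is (the sentinel and function below are illustrative).

```go
package main

import (
	"errors"
	"fmt"
)

// errBanned is a sentinel root cause, like the sentinel errors used around the codebase.
var errBanned = errors.New("peer is banned")

func connect(id string) error {
	// %w wraps the sentinel so callers can still detect it after annotation.
	return fmt.Errorf("connecting to %s: %w", id, errBanned)
}

func main() {
	err := connect("enode://abc")
	// errors.Is unwraps the chain and matches the wrapped root cause.
	if errors.Is(err, errBanned) {
		fmt.Println("refusing:", err)
	}
}
```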
This adds a new API method on core.BlockChain to allow interrupting
running data inserts, and calls the method before shutting down the
downloader.
The BlockChain interrupt checks are now done through a method instead
of inlining the atomic load everywhere. There is no loss of efficiency from
this and it makes the interrupt protocol a lot clearer because the check is
defined next to the method that sets the flag.
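A hedged sketch of the interrupt pattern (illustrative names, not the actual BlockChain API): the flag is set atomically by one goroutine and read through a method defined next to the setter, rather than inlining the atomic load at every call site.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// inserter mimics a long-running insert loop that can be interrupted.
type inserter struct {
	procInterrupt int32 // set to 1 to abort ongoing inserts
}

// StopInsert signals any running insert to terminate as soon as possible.
func (in *inserter) StopInsert() { atomic.StoreInt32(&in.procInterrupt, 1) }

// insertStopped is the single place the flag is read, keeping the protocol
// next to the setter instead of scattering atomic loads around.
func (in *inserter) insertStopped() bool { return atomic.LoadInt32(&in.procInterrupt) == 1 }

func (in *inserter) insertChain(blocks int) {
	for i := 0; i < blocks; i++ {
		if in.insertStopped() {
			fmt.Println("insert interrupted at block", i)
			return
		}
		time.Sleep(5 * time.Millisecond) // simulate work
	}
}

func main() {
	in := &inserter{}
	go func() {
		time.Sleep(20 * time.Millisecond)
		in.StopInsert()
	}()
	in.insertChain(100)
}
```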
This PR reimplements the light client server pool. It is also a first step
to move certain logic into a new lespay package. This package will contain
the implementation of the lespay token sale functions, the token buying and
selling logic and other components related to peer selection/prioritization
and service quality evaluation. Over the long term this package will be
reusable for incentivizing future protocols.
Since the LES peer logic is now based on enode.Iterator, it can now use
DNS-based fallback discovery to find servers.
This document describes the function of the new components:
https://gist.github.com/zsfelfoldi/3c7ace895234b7b345ab4f71dab102d4
* cmd, core, eth: init tx lookup in background
* core/rawdb: tiny log fixes to make it clearer what's happening
* core, eth: fix rebase errors
* core/rawdb: make reindexing less generic, but more optimal
* rlp: implement rlp list iterator
* core/rawdb: new implementation of tx indexing/unindex using generic tx iterator and hashing rlp-data
* core/rawdb, cmd/utils: fix review concerns
* cmd/utils: fix merge issue
* core/rawdb: add some log formatting polishes
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* all: separate consensus error and evm internal error
There are actually two types of errors that can be returned when
a transaction/message call is executed: (a) consensus error
(b) evm internal error. The former should be converted to
a consensus issue, e.g. the sender doesn't have enough funds to
purchase the gas it specifies. The latter is allowed since the
evm itself is a black box and internal errors are allowed to happen.
This PR emphasizes the difference by introducing an executionResult
structure. The evm error is embedded inside, so if any error is
returned, it indicates a consensus issue.
This PR also improves the `EstimateGas` API to return the concrete
revert reason if the transaction always fails.
* all: polish
* accounts/abi/bind/backends: add tests
* accounts/abi/bind/backends, internal: cleanup error message
* all: address comments
* core: fix lint
* accounts, core, eth, internal: address comments
* accounts, internal: resolve revert reason if possible
* accounts, internal: address comments
This new API allows reading accounts and their content by address range.
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Felix Lange <fjl@twurst.com>
* eth: improve shutdown synchronization
Most goroutines started by eth.Ethereum didn't have any shutdown sync at
all, which led to weird error messages when quitting the client.
This change improves the clean shutdown path by stopping all internal
components in dependency order and waiting for them to actually be
stopped before shutdown is considered done. In particular, we now stop
everything related to peers before stopping 'resident' parts such as
core.BlockChain.
* eth: rewrite sync controller
* eth: remove sync start debug message
* eth: notify chainSyncer about new peers after handshake
* eth: move downloader.Cancel call into chainSyncer
* eth: make post-sync block broadcast synchronous
* eth: add comments
* core: change blockchain stop message
* eth: change closeBloomHandler channel type
Prior to this change, eth_call changed the balance of the sender account in the
EVM environment to 2^256 wei to cover the gas cost of the call execution.
We've had this behavior for a long time even though it's super confusing.
This commit sets the default call gasprice to zero instead of updating the balance,
which is better because it makes eth_call semantics less surprising. Removing
the built-in balance assignment also makes balance overrides work as expected.
* node: expose config in service context
* eth: integrate p2p/dnsdisc
* cmd/geth: add some DNS flags
* eth: remove DNS URLs
* cmd/utils: configure DNS names for testnets
* params: update DNS URLs
* cmd/geth: configure mainnet DNS
* cmd/utils: rename DNS flag and fix flag processing
* cmd/utils: remove debug print
* node: fix test
* travis: Enable ARM support
* Include fixes from 20039
* Add a trace to debug the invalid lookup issue
* Try increasing the timeout to see if the arm test passes
* Investigate the resolver issue
* Increase arm64 timeout for clique test
* increase timeout in tests for arm64
* Only test the failing tests
* Review feedback: don't export epsilon
* Remove investigation tricks + include fjl's feedback
* Revert the retry ahead of using the mock resolver
* Fix rebase errors
* core/evm, contracts: avoid copying memory for input in calls + make ecrecover not modify input buffer
* core/vm: optimize mstore a bit
* core/vm: change Get -> GetCopy in vm memory access
When we flush a batch of trie nodes into the database during the state
sync, we should guarantee that all children are flushed before their
parent.
The trie node commit order is strictly children -> parent.
But when we flush all ready nodes into the db, we don't need the order
anymore since
(1) they are all ready nodes (no more dependencies)
(2) the underlying database provides write atomicity
The precompile at 0x09 wraps the BLAKE2b F compression function:
https://tools.ietf.org/html/rfc7693#section-3.2
The precompile requires 6 inputs tightly encoded, taking exactly 213
bytes, as explained below.
- `rounds` - the number of rounds - 32-bit unsigned big-endian word
- `h` - the state vector - 8 unsigned 64-bit little-endian words
- `m` - the message block vector - 16 unsigned 64-bit little-endian words
- `t_0, t_1` - offset counters - 2 unsigned 64-bit little-endian words
- `f` - the final block indicator flag - 8-bit word
[4 bytes for rounds][64 bytes for h][128 bytes for m][8 bytes for t_0]
[8 bytes for t_1][1 byte for f]
The boolean `f` parameter is considered `true` if set to `1` and `false` if
set to `0`. All other values yield an invalid encoding of `f` error.
The precompile should compute the F function as specified in the RFC
(https://tools.ietf.org/html/rfc7693#section-3.2) and return the updated
state vector `h` with unchanged encoding (little-endian).
See EIP-152 for details.
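A hedged sketch of decoding the 213-byte input layout described above (parsing only; a real precompile would then run the F compression function on the decoded values):

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

const blake2FInputLength = 213 // 4 + 64 + 128 + 8 + 8 + 1 bytes

var errInvalidInput = errors.New("invalid blake2 F input")

// parseBlake2FInput decodes rounds, h, m, t and f from the tightly packed
// input, following the layout described in the commit message above.
func parseBlake2FInput(input []byte) (rounds uint32, h [8]uint64, m [16]uint64, t [2]uint64, f bool, err error) {
	if len(input) != blake2FInputLength {
		err = errInvalidInput
		return
	}
	rounds = binary.BigEndian.Uint32(input[0:4]) // rounds is big-endian
	for i := 0; i < 8; i++ {                     // state vector h: little-endian words
		h[i] = binary.LittleEndian.Uint64(input[4+i*8:])
	}
	for i := 0; i < 16; i++ { // message block m: little-endian words
		m[i] = binary.LittleEndian.Uint64(input[68+i*8:])
	}
	t[0] = binary.LittleEndian.Uint64(input[196:]) // offset counters
	t[1] = binary.LittleEndian.Uint64(input[204:])
	switch input[212] {
	case 1:
		f = true
	case 0:
		f = false
	default:
		err = errInvalidInput // any other value is an invalid encoding of f
	}
	return
}

func main() {
	input := make([]byte, blake2FInputLength)
	input[212] = 1
	rounds, _, _, _, f, err := parseBlake2FInput(input)
	fmt.Println(rounds, f, err)
}
```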
* eth: chain config (genesis + fork) ENR entry
* core/forkid, eth: protocol independent fork ID, update to CRC32 spec
* core/forkid, eth: make forkid a struct, next uint64, enr struct, RLP
* core/forkid: change forkhash rlp encoding from int to [4]byte
* eth: fixup eth entry a bit and update it every block
* eth: fix lint
* eth: fix crash in ethclient tests
This PR adds some hardening in the lower levels of the protocol stack, to bail early on invalid data. The attacks that this PR primarily protects against are on the "annoyance" level: they would otherwise write a couple of megabytes of data into the log output, which is a bit resource intensive.
* core/state, cmd/geth: streaming json output dump cmd + optional code+storage
* dump: add option to continue even if preimages are missing
* core, evm: lint nits
* cmd: use local flags for dump, omit empty code/storage
* core/state: fix state dump test
* core, eth: some fixes for freezer
* vendor, core/rawdb, cmd/geth: add db inspector
* core, cmd/utils: check ancient store path forcibly
* cmd/geth, common, core/rawdb: a few fixes
* cmd/geth: support windows file rename and fix rename error
* core: support ancient plugin
* core, cmd: streaming file copy
* cmd, consensus, core, tests: keep genesis in leveldb
* core: write txlookup during ancient init
* core: bump database version
* all: freezer style syncing
core, eth, les, light: clean up freezer relative APIs
core, eth, les, trie, ethdb, light: clean a bit
core, eth, les, light: add unit tests
core, light: rewrite setHead function
core, eth: fix downloader unit tests
core: add receipt chain insertion test
core: use constant instead of hardcoding table name
core: fix rollback
core: fix setHead
core/rawdb: remove canonical block first and then iterate side chain
core/rawdb, ethdb: add hasAncient interface
eth/downloader: calculate ancient limit via cht first
core, eth, ethdb: lots of fixes
* eth/downloader: print ancient disable log only for fast sync
* core, eth, trie: bloom filter for trie node dedup during fast sync
* eth/downloader, trie: address review comments
* core, ethdb, trie: restart fast-sync bloom construction now and again
* eth/downloader: initialize fast sync bloom on startup
* eth: reenable eth/62 until we properly remove it
This change makes getBalance, getCode, getStorageAt, getProof,
call, getTransactionCount return an error if the block number in
the request doesn't exist. getHeaderByNumber still returns null
for missing headers.
* cmd, eth, miner: disable advance sealing if the user requires it
* cmd, console, miner, les, eth: wrap the miner config
* eth: remove todo
* cmd, miner: revert noadvance flag
The reason for this is: if the transaction execution takes even longer
than the block time, then this kind of transaction is a DoS attack.
* core/vm: remove function call for stack validation from evm runloop
* core/vm: separate gas calc into static + dynamic
* core/vm: optimize push1
* core/vm: reuse pooled bigints for ADDRESS, ORIGIN and CALLER
* core/vm: use generic error message for jump/jumpi, to avoid string interpolation
* testdata: fix tests for new error message
* core/vm: use 64-bit memory calculations
* core/vm: fix error in memory calculation
* core/vm: address review concerns
* core/vm: avoid unnecessary use of big.Int:BitLen()
This change
- implements concurrent LES request serving even for a single peer.
- replaces the request cost estimation method with a cost table based on
benchmarks which gives much more consistent results. Until now the
allowed number of light peers was just a guess which probably contributed
a lot to the fluctuating quality of available service. Everything related
to request cost is implemented in a single object, the 'cost tracker'. It
uses a fixed cost table with a global 'correction factor'. Benchmark code
is included and can be run at any time to adapt costs to low-level
implementation changes.
- reimplements flowcontrol.ClientManager in a cleaner and more efficient
way, with added capabilities: There is now control over bandwidth, which
allows using the flow control parameters for client prioritization.
Target utilization over 100 percent is now supported to model concurrent
request processing. Total serving bandwidth is reduced during block
processing to prevent database contention.
- implements an RPC API for the LES servers allowing server operators to
assign priority bandwidth to certain clients and change prioritized
status even while the client is connected. The new API is meant for
cases where server operators charge for LES using an off-protocol mechanism.
- adds a unit test for the new client manager.
- adds an end-to-end test using the network simulator that tests bandwidth
control functions through the new API.
Simplifies the transaction presence check to use a function to
determine if the transaction is present in the block provided
to trace, which originally had a redundant parenthesis and used
an `exist` flag to dictate control flow.
This changes the default location of the data directory to use the LOCALAPPDATA
environment variable, resolving issues with remote home directories and improving
compatibility with Cygwin.
Fixes #2239, #2237, #16437
* cmd, eth: Added in the flag to stop geth once synced based on input
* cmd, eth: 16400 Add an option to stop geth once in sync.
* cmd: 16400 Add an option to stop geth once in sync. WIP
* cmd/geth/main, les/fletcher: added in light mode support
* cmd/geth/main, les/fletcher: Cleaned Comments and code for light mode
* cmd: 16400 Fixed formatting issue and cleaned code
* cmd, eth, les: 16400 Fixed formatting issues
* cmd, eth, les: Performed gofmt to update formatting
* cmd, eth, les: Fixed bugs resulting formatting
* cmd/geth, eth/, les: switched to downloader event
* eth: Fixed styling and gen_config
* eth/: Fix nil error in config file
* cmd/geth: Updated countdown log
* les/fetcher.go: Removed deprecated channel
* eth/downloader.go: Removed deprecated select
* cmd/geth, cmd/utils: Fixed minor issues
* eth: Reverted config files to proper format
* eth: Fixed typo in config file
* cmd/geth, eth/down: Updated code to use header time stamp
* eth/downloader: Changed the time threshold to 10 minutes
* cmd/geth, eth/downloader: Updated downloading event to pass latest header
* cmd/geth: Updated main to use right timer object
* cmd/geth: Removed unused failed event
* cmd/geth: added in correct time field with type assertion
* cmd/geth, cmd/utils: Updated flag to use boolean
* cmd/geth, cmd/utils, eth/downloader: Cleaned up code based on recommendations
* cmd/geth: Removed unneeded import
* cmd/geth, eth/downloader: fixed event field and suggested changes
* cmd/geth, cmd/utils: Updated flag and linting issue
* geth/core/eth: implement constantinople override flag
* les: implement constantinople override flag for les clients
* cmd/geth, eth, les: fix typo, move flag to experimentals
* Rejects peers that respond with a different hash for any of the passed in block numbers.
* Meant for emergency situations when the network forks unexpectedly.
Until this commit, when sending an RPC request that called `NewEVM`, a blank `vm.Config`
would be taken so as to set some options, based on the default configuration. If some extra
configuration switches were passed to the blockchain, those would be ignored.
This PR adds a function to get the config from the blockchain, and this is what is now used
for RPC calls.
Some subsequent changes need to be made, see https://github.com/ethereum/go-ethereum/pull/17955#pullrequestreview-182237244
for the details of the discussion.
* core: speed up GenerateChain
Use a mock implementation of ChainReader instead of creating
and destroying a BlockChain object for each generated block.
* eth/downloader: speed up tests by generating chain only once
This change reworks the downloader tests so they share a common test
blockchain instead of generating a chain in every test. The tests are
roughly twice as fast now.
This adds the global accumulated refund counter to the standard
json output as a numeric json value. Previously this was not very
interesting since it was not used much, but with the new sstore
gas changes the value is a lot more interesting from a consensus
investigation perspective.
Package p2p/enode provides a generalized representation of p2p nodes
which can contain arbitrary information in key/value pairs. It is also
the new home for the node database. The "v4" identity scheme is also
moved here from p2p/enr to remove the dependency on Ethereum crypto from
that package.
Record signature handling is changed significantly. The identity scheme
registry is removed and acceptable schemes must be passed to any method
that needs identity. This means records must now be validated explicitly
after decoding.
The enode API is designed to make signature handling easy and safe: most
APIs around the codebase work with enode.Node, which is a wrapper around
a valid record. Going from enr.Record to enode.Node requires a valid
signature.
* p2p/discover: port to p2p/enode
This ports the discovery code to the new node representation in
p2p/enode. The wire protocol is unchanged, this can be considered a
refactoring change. The Kademlia table can now deal with nodes using an
arbitrary identity scheme. This requires a few incompatible API changes:
- Table.Lookup is not available anymore. It used to take a public key
as argument because v4 protocol requires one. Its replacement is
LookupRandom.
- Table.Resolve takes *enode.Node instead of NodeID. This is also for
v4 protocol compatibility because nodes cannot be looked up by ID
alone.
- Types Node and NodeID are gone. Further commits in the series will be
fixes all over the codebase to deal with those removals.
* p2p: port to p2p/enode and discovery changes
This adapts package p2p to the changes in p2p/discover. All uses of
discover.Node and discover.NodeID are replaced by their equivalents from
p2p/enode.
New API is added to retrieve the enode.Node instance of a peer. The
behavior of Server.Self with discovery disabled is improved. It now
tries much harder to report a working IP address, falling back to
127.0.0.1 if no suitable address can be determined through other means.
These changes were needed for tests of other packages later in the
series.
* p2p/simulations, p2p/testing: port to p2p/enode
No surprises here, mostly replacements of discover.Node, discover.NodeID
with their new equivalents. The 'interesting' API changes are:
- testing.ProtocolSession tracks complete nodes, not just their IDs.
- adapters.NodeConfig has a new method to create a complete node.
These changes were needed to make swarm tests work.
Note that the NodeID change makes the code incompatible with old
simulation snapshots.
* whisper/whisperv5, whisper/whisperv6: port to p2p/enode
This port was easy because whisper uses []byte for node IDs and
URL strings in the API.
* eth: port to p2p/enode
Again, easy to port because eth uses strings for node IDs and doesn't
care about node information in any way.
* les: port to p2p/enode
Apart from replacing discover.NodeID with enode.ID, most changes are in
the server pool code. It now deals with complete nodes instead
of (Pubkey, IP, Port) triples. The database format is unchanged for now,
but we should probably change it to use the node database later.
* node: port to p2p/enode
This change simply replaces discover.Node and discover.NodeID with their
new equivalents.
* swarm/network: port to p2p/enode
Swarm has its own node address representation, BzzAddr, containing both
an overlay address (the hash of a secp256k1 public key) and an underlay
address (enode:// URL).
There are no changes to the BzzAddr format in this commit, but certain
operations such as creating a BzzAddr from a node ID are now impossible
because node IDs aren't public keys anymore.
Most swarm-related changes in the series remove uses of
NewAddrFromNodeID, replacing it with NewAddr which takes a complete node
as argument. ToOverlayAddr is removed because we can just use the node
ID directly.
Interpreter initialization is left to the PRs implementing them.
Options for external interpreters are passed after a colon in the
`--vm.ewasm` and `--vm.evm` switches.
Makes the Interface interface a bit more stateless and abstract.
Obviously this change is dictated by the EVMC design. EVMC tries to keep the responsibility for EVM features totally inside the VMs, if feasible. This makes the VM "stateless" because it does not need to pass any information between executions; all information is included in the parameters of the execute function.
This PR enables the indexers to work in light client mode by
downloading a part of these tries (the Merkle proofs of the last
values of the last known section) in order to be able to add new
values and recalculate subsequent hashes. It also adds CHT data to
NodeInfo.
* consensus/ethash: start remote goroutine to handle remote mining
* consensus/ethash: expose remote miner api
* consensus/ethash: expose submitHashrate api
* miner, ethash: push empty block to sealer without waiting execution
* consensus, internal: add getHashrate API for ethash
* consensus: add three methods to the consensus interface
* miner: expose consensus engine running status to miner
* eth, miner: specify etherbase when miner created
* miner: commit new work when consensus engine is started
* consensus, miner: fix some logics
* all: delete useless interfaces
* consensus: polish a bit
The current trie memory database/cache that we do pruning on stores
trie nodes as binary rlp encoded blobs, and also stores the node
relationships/references for GC purposes. However, most of the trie
nodes (everything apart from a value node) are in essence just a
collection of references.
This PR switches out the RLP encoded trie blobs with the
collapsed-but-not-serialized trie nodes. This permits most of the
references to be recovered from within the node data structure,
avoiding the need to track them a second time (expensive memory wise).
This removes a golint warning: type name will be used as trie.TrieSync by
other packages, and that stutters; consider calling this Sync.
In hexToKeybytes len(hex) is even and (even+1)/2 == even/2, remove the +1.
* common: delete StringToAddress, StringToHash
These functions are confusing because they don't parse hex, but use the
bytes of the string. This change removes them, replacing all uses of
StringToAddress(s) by BytesToAddress([]byte(s)).
* eth/filters: remove incorrect use of common.BytesToAddress
The first should address a long term issue where we recommend a gas
price that is greater than that required for 50% of transactions in
recent blocks, which can lead to gas price inflation as people take
this figure and add a margin to it, resulting in a positive feedback
loop.
* core/types, core/vm, eth, tests: regenerate gencodec files
* Makefile: update devtools target
Install protoc-gen-go and print reminders about npm, solc and protoc.
Also switch to github.com/kevinburke/go-bindata because it's more
maintained.
* contracts/ens: update contracts and regenerate with solidity v0.4.19
The newer upstream version of the FIFSRegistrar contract doesn't set the
resolver anymore. The resolver is now deployed separately.
* contracts/release: regenerate with solidity v0.4.19
* contracts/chequebook: fix fallback and regenerate with solidity v0.4.19
The contract didn't have a fallback function, payments would be rejected
when compiled with newer solidity. References to 'mortal' and 'owned'
use the local file system so we can compile without network access.
* p2p/discv5: regenerate with recent stringer
* cmd/faucet: regenerate
* dashboard: regenerate
* eth/tracers: regenerate
* internal/jsre/deps: regenerate
* dashboard: avoid sed -i because it's not portable
* accounts/usbwallet/internal/trezor: fix go generate warnings
Updated use of Parallel and added some subtests to help isolate
them. Increased timeout in RequestHeadersByNumber so it
doesn't time out and cause other tests to break.
* cmd, consensus, eth: split ethash related config to its own
* eth, consensus: minor polish
* eth, consensus, console: compress pow testing config field into a single one
* consensus, eth: document pow mode
* eth, internal: Implement using trie diffs
* eth, internal: Changes in response to review
* eth: More fixes to getModifiedAccountsBy*
* eth: minor polishes on error capitalization
This PR implements the new LES protocol version extensions:
* new and more efficient Merkle proofs reply format (when replying to
a multiple Merkle proofs request, we just send a single set of trie
nodes containing all necessary nodes)
* BBT (BloomBitsTrie) works similarly to the existing CHT and contains
the bloombits search data to speed up log searches
* GetTxStatusMsg returns the inclusion position or the
pending/queued/unknown state of a transaction referenced by hash
* an optional signature of new block data (number/hash/td) can be
included in AnnounceMsg to provide an option for "very light
clients" (mobile/embedded devices) to skip expensive Ethash check
and accept multiple signatures of somewhat trusted servers (still a
lot better than trusting a single server completely and retrieving
everything through RPC). The new client mode is not implemented in
this PR, just the protocol extension.
When implementing the new bloombits based filter, I accidentally broke null
topics by removing the special casing of common.Hash{} filter rules, which
acted as the wildcard topic until now.
This PR fixes the regression, but instead of using the magic hash
common.Hash{} as the null wildcard, the PR reworks the code to handle nil
topics during parsing, converting a JSON null into nil []common.Hash topic.
* ethdb: add Putter interface and Has method
* ethdb: improve docs and add IdealBatchSize
* ethdb: remove memory batch lock
Batches are not safe for concurrent use.
* core: use ethdb.Putter for Write* functions
This covers the easy cases.
* core/state: simplify StateSync
* trie: optimize local node check
* ethdb: add ValueSize to Batch
* core: optimize HasHeader check
This avoids one random database read to get the block number. For many uses
of HasHeader, the expectation is that it's actually there. Using Has
avoids a load + decode of the value.
* core: write fast sync block data in batches
Collect writes into batches up to the ideal size instead of issuing many
small, concurrent writes.
* eth/downloader: commit larger state batches
Collect nodes into a batch up to the ideal size instead of committing
whenever a node is received.
* core: optimize HasBlock check
This avoids a random database read to get the number.
* core: use numberCache in HasHeader
numberCache has higher capacity, increasing the odds of finding the
header without a database lookup.
* core: write imported block data using a batch
Restore batch writes of state and add blocks, tx entries, receipts to
the same batch. The change also simplifies the miner.
This commit also removes posting of logs when a forked block is imported.
* core: fix DB write error handling
* ethdb: use RLock for Has
* core: fix HasBlock comment
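A rough sketch of the batching pattern used throughout these changes (simplified stand-in types, not the actual ethdb interfaces): accumulate writes and flush once the pending data reaches an ideal size instead of issuing many small writes.

```go
package main

import "fmt"

// batch is a minimal stand-in for ethdb's write batch: it accumulates
// key/value pairs and reports the number of value bytes queued so far.
type batch struct {
	size int
	ops  int
}

func (b *batch) Put(key, value []byte) { b.size += len(value); b.ops++ }
func (b *batch) ValueSize() int        { return b.size }
func (b *batch) Write()                { fmt.Printf("flushed %d ops (%d bytes)\n", b.ops, b.size) }
func (b *batch) Reset()                { b.size, b.ops = 0, 0 }

// idealBatchSize mirrors the idea of ethdb's IdealBatchSize: flush once the
// pending data reaches a threshold.
const idealBatchSize = 100 * 1024

func main() {
	b := &batch{}
	for i := 0; i < 1000; i++ {
		key := []byte(fmt.Sprintf("block-%d", i))
		value := make([]byte, 1024) // pretend this is block data
		b.Put(key, value)
		if b.ValueSize() >= idealBatchSize {
			b.Write()
			b.Reset()
		}
	}
	if b.ValueSize() > 0 { // flush the remainder
		b.Write()
	}
}
```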
* core: reduce txpool event loop goroutines and sync structs
* cmd, core, eth: journal local transactions to disk
* core: journal replacement pending transactions too
* core: separate transaction journal from pool
* core: remove redundant storage of transactions and receipts
* core, eth, internal: new transaction schema usage polishes
* eth: implement upgrade mechanism for db deduplication
* core, eth: drop old sequential key db upgrader
* eth: close last iterator on successful db upgrade
* core: prefix the lookup entries to make their purpose clearer
With this commit, core/state's access to the underlying key/value database is
mediated through an interface. Database errors are tracked in StateDB and
returned by CommitTo or the new Error method.
Motivation for this change: We can remove the light client's duplicated copy of
core/state. The light client now supports node iteration, so tracing and storage
enumeration can work with the light client (not implemented in this commit).
* eth/downloader: separate state sync from queue
Scheduling of state node downloads hogged the downloader queue lock when
new requests were scheduled. This caused timeouts for other requests.
With this change, state sync is fully independent of all other downloads
and doesn't involve the queue at all.
State sync is started and checked on in processContent. This is slightly
awkward because processContent doesn't have a select loop. Instead, the
queue is closed by an auxiliary goroutine when state sync fails. We
tried several alternatives to this but settled on the current approach
because it's the least amount of change overall.
Handling of the pivot block has changed slightly: the queue previously
prevented import of pivot block receipts before the state of the pivot
block was available. In this commit, the receipt will be imported before
the state. This causes an annoyance where the pivot block is committed
as fast block head even when state downloads fail. Stay tuned for more
updates in this area ;)
* eth/downloader: remove cancelTimeout channel
* eth/downloader: retry state requests on timeout
* eth/downloader: improve comment
* eth/downloader: mark peers idle when state sync is done
* eth/downloader: move pivot block splitting to processContent
This change also ensures that pivot block receipts aren't imported
before the pivot block itself.
* eth/downloader: limit state node retries
* eth/downloader: improve state node error handling and retry check
* eth/downloader: remove maxStateNodeRetries
It fails the sync too much.
* eth/downloader: remove last use of cancelCh in statesync.go
Fixes TestDeliverHeadersHang*Fast and (hopefully)
the weird cancellation behaviour at the end of fast sync.
* eth/downloader: fix leak in runStateSync
* eth/downloader: don't run processFullSyncContent in LightSync mode
* eth/downloader: improve comments
* eth/downloader: fix vet, megacheck
* eth/downloader: remove unrequested tasks anyway
* eth/downloader, trie: various polishes around duplicate items
This commit explicitly tracks duplicate and unexpected state
deliveries done against a trie Sync structure, also adding
them to import info logs.
The commit moves the db batch used to commit trie changes one
level deeper so its flushed after every node insertion. This
is needed to avoid a lot of duplicate retrievals caused by
inconsistencies between Sync internals and database. A better
approach is to track not-yet-written states in trie.Sync and
flush on commit, but I'm focusing on correctness first now.
The commit fixes a regression around pivot block fail count.
The counter previously was reset to 1 if and only if a sync
cycle progressed (inserted at least 1 entry to the database).
The current code reset it already if a node was delivered,
which is not strong enough, because unless it ends up written
to disk, an attacker can just loop and attack ad infinitum.
The commit also fixes a regression around state deliveries
and timeouts. The old downloader tracked if a delivery is
stale (none of the deliveries were requested), in which
case it didn't mark the node idle and did not send further
requests, since it signals a past timeout. The current code
did mark it idle even on stale deliveries, which eventually
caused two requests to be in flight at the same time, making
the deliveries always stale and mass duplicating retrievals
between multiple peers.
* eth/downloader: fix state request leak
This commit fixes the hang seen sometimes while doing the state
sync. The cause of the hang was a rare combination of events:
request state data from peer, peer drops and reconnects almost
immediately. This caused a new download task to be assigned to
the peer, overwriting the old one still waiting for a timeout,
which in turn leaked the requests out, never to be retried.
The fix is to ensure that a task assignment moves any pending
one back into the retry queue.
The commit also fixes a regression with peer dropping due to
stalls. The current code considered a peer stalling if they
timed out delivering 1 item. However, the downloader never
requests only one, the minimum is 2 (attempt to fine tune
estimated latency/bandwidth). The fix is simply to drop if
a timeout is detected at 2 items.
Apart from the above bugfixes, the commit contains some code
polishes I made while debugging the hang.
* core, eth, trie: support batched trie sync db writes
* trie: rename SyncMemCache to syncMemBatch
Currently http cors and websocket origins are a comma separated string in the
config object. These are replaced with string arrays that are more expressive in
case of a config file.
* p2p/discover, p2p/discv5: add marshaling methods to Node
* p2p/netutil: make Netlist decodable from TOML
* common/math: encode nil HexOrDecimal256 as 0x0
* cmd/geth: add --config file flag
* cmd/geth: add missing license header
* eth: prettify Config again, fix tests
* eth: use gasprice.Config instead of duplicating its fields
* eth/gasprice: hide nil default from dumpconfig output
* cmd/geth: hide genesis block in dumpconfig output
* node: make tests compile
* console: fix tests
* cmd/geth: make TOML keys look exactly like Go struct fields
* p2p: use discovery by default
This makes the zero Config slightly more useful. It also fixes package
node tests because Node detects reuse of the datadir through the
NodeDatabase.
* cmd/geth: make ethstats URL settable through config file
* cmd/faucet: fix configuration
* cmd/geth: dedup attach tests
* eth: add comment for DefaultConfig
* eth: pass downloader.SyncMode in Config
This removes the FastSync, LightSync flags in favour of a more
general SyncMode flag.
* cmd/utils: remove jitvm flags
* cmd/utils: make mutually exclusive flag error prettier
It now reads:
Fatal: flags --dev, --testnet can't be used at the same time
* p2p: fix typo
* node: add DefaultConfig, use it for geth
* mobile: add missing NoDiscovery option
* cmd/utils: drop MakeNode
This exposed a couple of places that needed to be updated to use
node.DefaultConfig.
* node: fix typo
* eth: make fast sync the default mode
* cmd/utils: remove IPCApiFlag (unused)
* node: remove default IPC path
Set it in the frontends instead.
* cmd/geth: add --syncmode
* cmd/utils: make --ipcdisable and --ipcpath mutually exclusive
* cmd/utils: don't enable WS, HTTP when setting addr
* cmd/utils: fix --identity
This commit adds pluggable consensus engines to go-ethereum. In short, it
introduces a generic consensus interface, and refactors the entire codebase to
use this interface.
This commit solves several issues concerning the genesis block:
* Genesis/ChainConfig loading was handled by cmd/geth code. This left
library users in the cold. They could specify a JSON-encoded
string and overwrite the config, but didn't get any of the additional
checks performed by geth.
* Decoding and writing of genesis JSON was conflated in
WriteGenesisBlock. This made it a lot harder to embed the genesis
block into the forthcoming config file loader. This commit changes
things so there is a single Genesis type that represents genesis
blocks. All uses of Write*Genesis* are changed to use the new type
instead.
* If the chain config supplied by the user was incompatible with the
current chain (i.e. the chain had already advanced beyond a scheduled
fork), it got overwritten. This is not an issue in practice because
previous forks have always had the highest total difficulty. It might
matter in the future though. The new code reverts the local chain to
the point of the fork when upgrading configuration.
The change to genesis block data removes compression library
dependencies from package core.
There is no need to depend on the old context package now that the
minimum Go version is 1.7. The move to "context" eliminates our weird
vendoring setup. Some vendored code still uses golang.org/x/net/context
and it is now vendored in the normal way.
This change triggered new vet checks around context.WithTimeout which
didn't fire with golang.org/x/net/context.
* accounts, cmd, eth, ethdb: port logs over to new system
* ethdb: drop concept of cache distribution between dbs
* eth: fix some log nitpicks to make them nicer
* common: remove CurrencyToString
Move denomination values to params instead.
* common: delete dead code
* common: move big integer operations to common/math
This commit consolidates all big integer operations into common/math and
adds tests and documentation.
There should be no change in semantics for BigPow, BigMin, BigMax, S256,
U256 and Exp; their behaviour is now locked in by tests.
The BigD, BytesToBig and Bytes2Big functions don't provide additional
value; all uses are replaced by new(big.Int).SetBytes().
BigToBytes is now called PaddedBigBytes; its minimum output size
parameter is now specified as a number of bytes instead of bits. The
single use of this function is in the EVM's MSTORE instruction.
Big and String2Big are replaced by ParseBig, which is slightly stricter:
the old functions accepted leading zeros for hexadecimal inputs but
treated decimal inputs as octal if a leading zero digit was present.
ParseUint64 is used in places where String2Big was used to decode a
uint64.
The new functions MustParseBig and MustParseUint64 are now used in many
places where parsing errors were previously ignored. (A short usage
sketch of the new helpers follows this list.)
* common: delete unused big integer variables
* accounts/abi: replace uses of BytesToBig with use of encoding/binary
* common: remove BytesToBig
* common: remove Bytes2Big
* common: remove BigTrue
* cmd/utils: add BigFlag and use it for error-checked integer flags
While here, remove environment variable processing for DirectoryFlag
because we don't use it.
* core: add missing error checks in genesis block parser
* common: remove String2Big
* cmd/evm: use utils.BigFlag
* common/math: check for 256 bit overflow in ParseBig
This is supposed to prevent silent overflow/truncation of values in the
genesis block JSON. Without this check, a genesis block that set a
balance larger than 256 bits would lead to weird behaviour in the VM.
* cmd/utils: fixup import
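A short usage sketch of the helpers named above (shown with the ParseBig256 name used by later releases; this commit introduced it as ParseBig):
```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/math"
)

func main() {
	// ParseBig256 is strict: it rejects values wider than 256 bits and never
	// treats a leading-zero decimal as octal.
	balance, ok := math.ParseBig256("0xde0b6b3a7640000")
	if !ok {
		panic("invalid or oversized balance in genesis JSON")
	}
	nonce := math.MustParseUint64("0x2a")    // panics instead of silently ignoring the error
	word := math.PaddedBigBytes(balance, 32) // output size given in bytes, as MSTORE needs
	fmt.Println(balance, nonce, len(word))
}
```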
Reworked the EVM gas instructions to use 64-bit integers rather than
arbitrary-size big ints. All gas operations, be they additions,
multiplications or divisions, are checked and guarded against 64-bit
integer overflows (a small overflow-check sketch follows the list below).
In addition, most of the protocol parameters in the params package have
been converted to uint64 and are now constants rather than variables.
* common/math: added overflow check ops
* core: vmenv, env renamed to evm
* eth, internal/ethapi, les: unmetered eth_call and cancel methods
* core/vm: implemented big.Int pool for evm instructions
* core/vm: unexported intPool methods & verification methods
* core/vm: added memoryGasCost overflow check and test
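An illustrative overflow-check sketch; the helper below is hypothetical and only mirrors the kind of guard described above, not the exact common/math additions:
```go
package main

import (
	"errors"
	"fmt"
	"math"
)

// errGasUintOverflow mirrors the kind of error such a guard would report; the
// name and helper are illustrative, not the functions added in this commit.
var errGasUintOverflow = errors.New("gas uint64 overflow")

// safeAdd adds two uint64 gas values, reporting overflow instead of wrapping.
func safeAdd(x, y uint64) (uint64, error) {
	if x > math.MaxUint64-y {
		return 0, errGasUintOverflow
	}
	return x + y, nil
}

func main() {
	gas, err := safeAdd(21000, 68*32) // e.g. intrinsic gas plus a per-byte data cost
	if err != nil {
		fmt.Println("out of gas:", err)
		return
	}
	fmt.Println("total gas:", gas)
}
```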
* core,eth,internal: Added `debug_getBadBlocks()` method
When bad blocks are discovered, they are stored within geth.
An RPC endpoint makes them available within the `debug`
namespace. This feature makes it easier to discover network forks
(a minimal client call is sketched after the list below).
* core, api: go format + docs
* core/blockchain: Documentation, fix minor nitpick
* core: fix failing blockchain test
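A minimal client call against the new endpoint, assuming the standard rpc client and an illustrative IPC path:
```go
package main

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Query the new debug namespace endpoint over IPC; the result is kept as
	// raw JSON since the exact response fields are not spelled out above.
	client, err := rpc.Dial("/tmp/geth.ipc") // path is illustrative
	if err != nil {
		panic(err)
	}
	defer client.Close()

	var badBlocks json.RawMessage
	if err := client.CallContext(context.Background(), &badBlocks, "debug_getBadBlocks"); err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", badBlocks)
}
```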
The Subscription type is gone, all uses are replaced by
*TypeMuxSubscription. This change is prep-work for the
introduction of the new Subscription type in a later commit.
gorename -from '"github.com/ethereum/go-ethereum/event"::Event' -to TypeMuxEvent
gorename -from '"github.com/ethereum/go-ethereum/event"::muxsub' -to TypeMuxSubscription
gofmt -w -r 'Subscription -> *TypeMuxSubscription' ./event/*.go
find . -name '*.go' -and -not -regex '\./vendor/.*' | xargs gofmt -w -r 'event.Subscription -> *event.TypeMuxSubscription'
Commit d3b751e accidentally deleted a crucial 'return' statement,
leading to a crash in case of an issue with node data. This change
improves the fix in PR #3591 by removing the lock entirely.
This significantly reduces the dependency closure of ethclient, which no
longer depends on core/vm as of this change.
All uses of vm.Logs are replaced by []*types.Log. NewLog is gone too;
the constructor simply returned a literal.
The run loop, which previously contained custom opcode executors, has been
removed and simplified to a few checks.
Each operation consists of 4 elements: an execution function, a gas cost
function, a stack validation function and a memory size function. The
execution function implements the operation's runtime behaviour; the gas
cost function computes the operation's gas cost and depends heavily on the
memory and stack; the stack validation function makes sure that enough items
can be popped off and pushed on the stack; and the memory size function
calculates and returns the memory required for the operation.
This commit also allows the EVM to go unmetered. This is helpful for offline
operations such as contract calls.
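A hedged sketch of the four-element layout; the placeholder types below stand in for the real core/vm definitions:
```go
package vmsketch

// EVM, Stack and Memory are placeholders standing in for the core/vm types;
// this only sketches the four-element operation layout described above.
type (
	EVM    struct{}
	Stack  struct{ data []uint64 }
	Memory struct{ store []byte }
)

type (
	executionFunc  func(pc *uint64, evm *EVM, stack *Stack, mem *Memory) ([]byte, error)
	gasFunc        func(evm *EVM, stack *Stack, mem *Memory, memorySize uint64) (uint64, error)
	stackFunc      func(stack *Stack) error
	memorySizeFunc func(stack *Stack) (uint64, error)
)

// operation bundles the four elements the simplified run loop dispatches on.
type operation struct {
	execute       executionFunc  // the opcode's runtime behaviour
	gasCost       gasFunc        // gas cost, heavily dependent on memory and stack
	validateStack stackFunc      // checks enough items can be popped and pushed
	memorySize    memorySizeFunc // memory the opcode requires
}
```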
This commit introduces a FindOnce method for filters. FindOnce finds the next block that
matches the filter and returns all matching logs from that block. If there are no further
matching logs, it returns a nil slice. This method allows callers to iterate over large
sets of logs progressively.
The changes introduce a small inefficiency relating to mipmaps: the first time a filter is
called, it acts as if all mipmaps are matched, and thus iterates several blocks near the
requested start point. This is in the interest of simplicity and avoiding duplicate mipmap
lookups each time FindOnce is called.
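A hedged sketch of driving FindOnce; only its contract (the next matching block's logs, nil when exhausted) is taken from the text, and the surrounding types are stand-ins:
```go
package filtersketch

// Log and logFilter are stand-ins for the real eth/filters types; only
// FindOnce's behaviour as described above is assumed here.
type Log struct{ BlockNumber uint64 }

type logFilter interface {
	FindOnce() []*Log
}

// collectAll drains a filter by calling FindOnce until no further block matches.
func collectAll(f logFilter) []*Log {
	var all []*Log
	for {
		logs := f.FindOnce()
		if logs == nil {
			return all // nil slice signals no further matching logs
		}
		all = append(all, logs...)
	}
}
```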
The transaction pool keeps track of the current nonce in its local pendingState. When a
new block comes in, the pendingState is reset. During the reset it fetches the current
state multiple times through the currentState callback. When a second block comes in
during the reset, it's possible that the state changes mid-reset. If that block holds
transactions that are currently in the pool, the local pendingState that is used to
determine nonces can get out of sync.
Environment is now a struct (not an interface). This
reduces a lot of tech-debt throughout the codebase where a virtual
machine environment had to be implemented in order to test or run it.
The new environment is suitable for use in the JSON tests, core
consensus and the light client.
This field used to be assigned by the filter system and returned through
the RPC API. Now that we have a Go client that uses the underlying type,
the field needs to move. It is now assigned to true when the RemovedLogs
event is generated so the filter system doesn't need to care about the
field at all.
While here, remove the log list from ChainSideEvent. There are no users
of this field right now and any potential users could subscribe to
RemovedLogsEvent instead.
The registrar was broken, unmaintained and there is a much better
replacement: ENS.
(cherry picked from commit 6ca8f57b08d550613175260cab7633adcacbe6ab)
This commit implements EIP158 parts 1, 2, 3 & 4
1. If an account is empty it's no longer written to the trie. An empty
account is defined as (balance=0, nonce=0, storage=0, code=0).
2. Delete an empty account if it's touched.
3. An empty account is redefined as either non-existent or empty.
4. Zero-value calls and zero-value suicides no longer consume the 25k
creation costs.
(A sketch of the empty-account check follows below.)
params: moved core/config to params
Signed-off-by: Jeffrey Wilcke <jeffrey@ethereum.org>
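For illustration, the empty-account test from item 1 above, written against a hypothetical account stand-in (geth's actual state objects differ):
```go
package statesketch

import "math/big"

// account is a hypothetical stand-in for geth's state object; it only carries
// the fields needed to express the empty-account rule from item 1 above.
type account struct {
	Nonce   uint64
	Balance *big.Int
	Code    []byte
}

// empty reports whether the account is skipped when writing the trie and is
// deleted once touched (balance=0, nonce=0, code=0; storage is ignored here).
func (a *account) empty() bool {
	return a.Nonce == 0 && a.Balance.Sign() == 0 && len(a.Code) == 0
}
```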
These accessors were introduced by light client changes, but
the only method that is actually used is GetNumberU64. This
commit replaces all uses of .GetNumberU64 with .Number.Uint64.