This change significantly improves the performance of RLPx message reads
and writes. In the previous implementation, reading and writing of
message frames performed multiple reads and writes on the underlying
network connection, and allocated a new []byte buffer for every read.
In the new implementation, reads and writes reuse buffers and perform
far fewer system calls on the underlying connection. This doubles the
theoretically achievable throughput on a single connection, as shown by
the benchmark result:
name          old speed      new speed       delta
Throughput-8  70.3MB/s ± 0%  155.4MB/s ± 0%  +121.11%  (p=0.000 n=9+8)
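As an illustration of the buffer-reuse idea, here is a minimal sketch of a
growable scratch buffer that keeps its allocation between frames. The type
and method names are illustrative, not the actual p2p/rlpx code.

// readBuffer is a sketch of a reusable scratch buffer: instead of allocating
// a fresh []byte for every frame, the same backing array is reused and only
// grown when a frame needs more space.
type readBuffer struct {
	data []byte
}

// grow extends the buffer by n bytes and returns the newly added slice.
// The backing array is reused whenever it has enough capacity.
func (b *readBuffer) grow(n int) []byte {
	offset := len(b.data)
	if cap(b.data)-offset < n {
		grown := make([]byte, offset, offset+n)
		copy(grown, b.data)
		b.data = grown
	}
	b.data = b.data[:offset+n]
	return b.data[offset:]
}

// reset drops the contents but keeps the allocation for the next frame.
func (b *readBuffer) reset() {
	b.data = b.data[:0]
}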
The change also removes support for the legacy, pre-EIP-8 handshake encoding.
As of May 2021, no actively maintained client sends this format.
This change grows the static integer buffer in Stream to 32 bytes,
making it possible to decode 256-bit integers without allocating a
temporary buffer.
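As a hedged usage sketch (not part of the change itself), a 256-bit integer
round-trips through package rlp like this; with the larger static buffer the
decode side no longer needs a temporary allocation:

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/rlp"
)

func main() {
	// A 256-bit value: decoding it now fits in Stream's 32-byte static buffer.
	v := new(big.Int).Lsh(big.NewInt(1), 255)

	enc, err := rlp.EncodeToBytes(v)
	if err != nil {
		panic(err)
	}
	out := new(big.Int)
	if err := rlp.DecodeBytes(enc, out); err != nil {
		panic(err)
	}
	fmt.Println(out.BitLen()) // 256
}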
In the recent commit 088da24, the Stream struct size decreased from 120
bytes to 88 bytes. This commit grows the struct again, to 112 bytes, but
the size change will not degrade performance because Stream instances are
internally cached in sync.Pool.
name             old time/op  new time/op  delta
DecodeBigInts-8  12.2µs ± 0%  8.6µs ± 4%   -29.58%  (p=0.000 n=9+10)

name             old speed     new speed     delta
DecodeBigInts-8  230MB/s ± 0%  326MB/s ± 4%  +42.04%  (p=0.000 n=9+10)
All encoding/decoding operations read the type cache to find the
writer/decoder function responsible for a type. When analyzing CPU
profiles of geth during sync, I found that the sync.RWMutex used for
cache lookups shows up prominently. It seems we are running into CPU
cache contention problems when package rlp is heavily used on all CPU
cores during sync.
This change makes the cache use atomic.Value plus a writer lock instead of
sync.RWMutex. In the common case, where the typeinfo entry is already present
in the cache, we simply fetch the map and look up the type.
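A minimal sketch of the pattern, with illustrative names rather than the
actual rlp type cache internals: readers load an immutable map lock-free
through atomic.Value, and writers copy the map under a mutex before
publishing the new version.

package main

import (
	"fmt"
	"reflect"
	"sync"
	"sync/atomic"
)

type typeinfo struct{ name string } // placeholder for the writer/decoder funcs

type typeCache struct {
	cur atomic.Value // holds map[reflect.Type]*typeinfo, read lock-free
	mu  sync.Mutex   // serializes writers only
}

func (c *typeCache) info(t reflect.Type) *typeinfo {
	// Common case: the entry is present, fetch the map and look up the type.
	if m, _ := c.cur.Load().(map[reflect.Type]*typeinfo); m != nil {
		if info, ok := m[t]; ok {
			return info
		}
	}
	// Miss: copy-on-write under the writer lock.
	c.mu.Lock()
	defer c.mu.Unlock()
	old, _ := c.cur.Load().(map[reflect.Type]*typeinfo)
	if info, ok := old[t]; ok { // re-check after taking the lock
		return info
	}
	next := make(map[reflect.Type]*typeinfo, len(old)+1)
	for k, v := range old {
		next[k] = v
	}
	info := &typeinfo{name: t.String()} // stand-in for generating encoder/decoder
	next[t] = info
	c.cur.Store(next)
	return info
}

func main() {
	var c typeCache
	fmt.Println(c.info(reflect.TypeOf(uint64(0))).name)
}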
This commit makes various cleanup changes to rlp.Stream.
* rlp: shrink Stream struct
This removes a lot of unused padding space in Stream by reordering the
fields. The size of Stream changes from 120 bytes to 88 bytes. Stream
instances are internally cached and reused using sync.Pool, so this does
not improve performance.
* rlp: simplify list stack
The list stack kept track of the size of the current list context as
well as the current offset into it. The size had to be stored in the
stack in order to subtract it from the remaining bytes of any enclosing
list in ListEnd. It seems that this can be implemented in a simpler
way: just subtract the size from the enclosing list context in List instead.
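A rough sketch of the simplified scheme (illustrative, not the actual
rlp.Stream code): each open list only needs its remaining byte count on
the stack, because the enclosing list is charged for a nested list when
the nested list is opened.

type listStack []uint64 // remaining bytes of each open list, innermost last

// enterList opens a nested list of the given encoded size.
func (s *listStack) enterList(size uint64) bool {
	if n := len(*s); n > 0 {
		if size > (*s)[n-1] {
			return false // nested list larger than its enclosing list
		}
		// Charge the enclosing list up front instead of in ListEnd.
		(*s)[n-1] -= size
	}
	*s = append(*s, size)
	return true
}

// exitList closes the innermost list once all its bytes were consumed.
func (s *listStack) exitList() bool {
	n := len(*s)
	if n == 0 || (*s)[n-1] > 0 {
		return false // no open list, or unconsumed bytes remain
	}
	*s = (*s)[:n-1]
	return true
}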
This adds support for a new struct tag "optional". Using this tag, structs used
for RLP encoding/decoding can be extended in a backwards-compatible way,
by adding new fields at the end.
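For illustration, a hypothetical struct using the new tag (field names are
made up, not from the change):

// Fields tagged "optional" may be absent from the input list; they must come
// after all required fields. Older encodings that omit them still decode.
type blockExtra struct {
	Number uint64
	Root   [32]byte
	Note   []byte  `rlp:"optional"` // added later, may be missing from old data
	Flags  *uint64 `rlp:"optional"` // also optional; must follow other optionals
}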
io.Reader may return n > 0 and io.EOF at the end of the input stream.
readFull did not handle this correctly, looking only at the error. This fixes
it to check for n == len(buf) as well.
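A sketch of the corrected behavior, written as a simplified stand-alone
version (not necessarily the exact code of the fix) that only needs the
standard io package:

// readFull fills buf completely from r. An io.Reader is allowed to return
// n > 0 together with io.EOF, so the error is only treated as fatal when
// the buffer did not get filled.
func readFull(r io.Reader, buf []byte) (err error) {
	var n int
	for n < len(buf) && err == nil {
		var nn int
		nn, err = r.Read(buf[n:])
		n += nn
	}
	if err == io.EOF {
		if n < len(buf) {
			err = io.ErrUnexpectedEOF
		} else {
			// The buffer is full; a trailing EOF is not an error here.
			err = nil
		}
	}
	return err
}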
This PR contains a minor optimization in derivesha, by exposing the RLP
int-encoding and making use of it to write integers directly to a
buffer (an RLP integer is known to never require more than 9 bytes
total). rlp.AppendUint64 might be useful in other places too.
The code assumes, just as before, that the hasher (a trie) will copy the
key internally, which it does when doing keybytesToHex(key).
Co-authored-by: Felix Lange <fjl@twurst.com>
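A hedged usage sketch of the rlp.AppendUint64 helper mentioned above; the
surrounding loop and buffer are illustrative:

// Encode indices directly into a reusable buffer. An RLP-encoded uint64
// takes at most 9 bytes, so the buffer never needs to grow.
indexBuf := make([]byte, 9)
for i := uint64(0); i < 3; i++ {
	enc := rlp.AppendUint64(indexBuf[:0], i)
	fmt.Printf("%d -> %x\n", i, enc) // 0 -> 80, 1 -> 01, 2 -> 02
}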
This change further improves the performance of RLP encoding by removing
allocations for big.Int and [...]byte types. I have added a new benchmark
that measures RLP encoding of types.Block to verify that performance is
improved.
List headers made up 11% of all allocations during sync. This change
removes most of those allocations by keeping the list header values
cached in the encoder buffer instead. Since encoder buffers are pooled,
list headers are no longer allocated in the common case where an
encoder buffer is available for reuse.
Co-authored-by: Felix Lange <fjl@twurst.com>
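A minimal sketch of the list-header caching idea described above; the type
and field names are illustrative, not the actual encoder buffer:

// List headers are recorded in a slice owned by the buffer, so reusing the
// buffer via sync.Pool also reuses the header storage instead of allocating
// a new header per list.
type listhead struct {
	offset int // position of this header in the output
	size   int // total size of the encoded list payload
}

type encbuf struct {
	str    []byte     // string data, everything except list headers
	lheads []listhead // all list headers, reused across encodings
}

func (b *encbuf) reset() {
	b.str = b.str[:0]
	b.lheads = b.lheads[:0] // keep capacity, drop contents
}

func (b *encbuf) listStart() int {
	b.lheads = append(b.lheads, listhead{offset: len(b.str)})
	return len(b.lheads) - 1
}

func (b *encbuf) listEnd(index int) {
	b.lheads[index].size = len(b.str) - b.lheads[index].offset
}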
* cmd, core, eth: init tx lookup in background
* core/rawdb: tiny log fixes to make it clearer what's happening
* core, eth: fix rebase errors
* core/rawdb: make reindexing less generic, but more optimal
* rlp: implement rlp list iterator (see the usage sketch after this list)
* core/rawdb: new implementation of tx indexing/unindexing using a generic tx iterator and hashing of RLP data
* core/rawdb, cmd/utils: fix review concerns
* cmd/utils: fix merge issue
* core/rawdb: add some log formatting polishes
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
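The rlp list iterator added in this change can be used roughly as follows
(a hedged usage sketch):

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/rlp"
)

func main() {
	// Encode a list of two strings, then walk its elements without
	// decoding them into Go values.
	enc, err := rlp.EncodeToBytes([]string{"foo", "bar"})
	if err != nil {
		panic(err)
	}
	it, err := rlp.NewListIterator(enc)
	if err != nil {
		panic(err)
	}
	for it.Next() {
		fmt.Printf("element: %x\n", it.Value()) // raw RLP of each element
	}
	if err := it.Err(); err != nil {
		panic(err)
	}
}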
* build: use golangci-lint
This changes build/ci.go to download and run golangci-lint instead
of gometalinter.
* core/state: fix unnecessary conversion
* p2p/simulations: fix lock copying (found by go vet)
* signer/core: fix unnecessary conversions
* crypto/ecies: remove unused function cmpPublic
* core/rawdb: remove unused function print
* core/state: remove unused function xTestFuzzCutter
* core/vm: disable TestWriteExpectedValues in a different way
* core/forkid: remove unused function checksum
* les: remove unused type proofsData
* cmd/utils: remove unused functions prefixedNames, prefixFor
* crypto/bn256: run goimports
* p2p/nat: fix goimports lint issue
* cmd/clef: avoid using unkeyed struct fields
* les: cancel context in testRequest
* rlp: delete unreachable code
* core: gofmt
* internal/build: simplify DownloadFile for Go 1.11 compatibility
* build: remove go test --short flag
* .travis.yml: disable build cache
* whisper/whisperv6: fix ineffectual assignment in TestWhisperIdentityManagement
* .golangci.yml: enable goconst and ineffassign linters
* build: print message when there are no lint issues
* internal/build: refactor download a bit
* rlp: improve nil pointer handling
In both encoder and decoder, the rules for encoding nil pointers were a
bit hard to understand, and didn't leave much choice. Since RLP allows
two empty values (empty list, empty string), any protocol built on RLP
must choose either of these values to represent the null value in a
certain context.
This change adds choice in the form of two new struct tags, "nilString"
and "nilList". These can be used to specify how a nil pointer value is
encoded. The "nil" tag still exists, but its implementation is now
explicit and defines exactly how nil pointers are handled in a single
place.
Another important change in this commit is how nil pointers and the
Encoder interface interact. The EncodeRLP method was previously called
even on nil values, which was supposed to give users a choice of how
their value would be handled when nil. It turns out this is a stupid
idea. If you create a network protocol containing an object defined in
another package, it's better to be able to say that the object should be
a list or string when nil in the definition of the protocol message
rather than defining the encoding of nil on the object itself.
As of this commit, the encoding rules for pointers now take precedence
over the Encoder interface rule. I think the "nil" tag will work fine
for most cases. For special kinds of objects which are a struct in Go
but strings in RLP, code using the object can specify the desired
encoding of nil using the "nilString" and "nilList" tags.
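An illustrative struct using the new tags (field names and types are made
up, not from the change):

// How a nil pointer is encoded can now be chosen per field.
type envelope struct {
	// A nil *[8]byte is written as an empty RLP string (0x80).
	Token *[8]byte `rlp:"nilString"`
	// A nil *inner is written as an empty RLP list (0xC0).
	Body *inner `rlp:"nilList"`
	// "nil" still works and picks the default based on the field's type.
	Seq *uint64 `rlp:"nil"`
}

type inner struct {
	A, B uint64
}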
* rlp: propagate struct field type errors
If a struct contained fields of undecodable type, the encoder and
decoder would panic instead of returning an error. Fix this by
propagating type errors in makeStruct{Writer,Decoder} and add a test.
These changes fix two corner cases related to internal handling of types
in package rlp:

* The "tail" struct tag can only be applied to the last field. The check
  for this was wrong and didn't allow for private fields after the field
  with the tag.
* Unsupported types (e.g. structs containing int) which implement either
  the Encoder or Decoder interface but not both couldn't be
  encoded/decoded.
Also fixes #19367
The bug can cause crashes if Read is called after EOF has been returned.
No code performs such calls right now, but hitting the bug gets more
likely as rlp.EncodeToReader gets used in more places.
Decoding did not reject byte arrays of length one with a single element
b where 55 < b < 128. Such byte arrays must be rejected because
they must be encoded as the single byte b instead.
The list size checking overflowed if the size information
for a value was bigger than the list. This is resolved by
always performing the check before reading.
The rules have changed as follows:
* When decoding into pointers, empty values no longer produce
a nil pointer. This can be overridden for struct fields using the
struct tag "nil".
* When decoding into structs, the input list must contain an element
for each field.
All integers (including size information in type tags) need to be
encoded using the smallest possible encoding. This commit expands the
stricter validation introduced for *big.Int in commit 59597d23a5
to all integer types and size tags.
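For illustration, a hedged sketch of how the stricter rules treat a few
encodings (exact error values omitted; assumes package rlp and fmt):

var x uint64

// 0x01 is the shortest encoding of 1 and decodes fine.
_ = rlp.DecodeBytes([]byte{0x01}, &x)

// 0x820001 encodes 1 with a leading zero byte and is rejected.
err := rlp.DecodeBytes([]byte{0x82, 0x00, 0x01}, &x)
fmt.Println(err) // rejected: non-canonical integer encoding

// 0x8100 encodes 0 as a one-byte string; the canonical form is 0x80.
err = rlp.DecodeBytes([]byte{0x81, 0x00}, &x)
fmt.Println(err) // also rejected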
A single zero byte carries information and should not set the pointer
to nil. This is arguably a corner case. While here, fix the comment
to explain pointer reuse.