Instead of using a limit of three nodes per message, we can pack more nodes
into each message based on ENR size. In my testing, this halves the number
of sent NODES messages, because ENR size is usually < 300 bytes.
This also adds RLP helper functions that compute the encoded size of
[]byte and string.
Co-authored-by: Martin Holst Swende <martin@swende.se>
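As an illustration of what such a size helper computes (the package and
function names below are made up, not necessarily those of the added
helpers), the RLP string rules give:

package rlpsize

// encodedBytesSize returns the number of bytes the RLP encoding of b occupies.
func encodedBytesSize(b []byte) uint64 {
	n := uint64(len(b))
	switch {
	case n == 1 && b[0] <= 0x7f:
		return 1 // a single byte below 0x80 encodes as itself
	case n < 56:
		return 1 + n // short string: one header byte plus the payload
	default:
		return 1 + intSize(n) + n // long string: header, big-endian length, payload
	}
}

// intSize returns the number of bytes needed to store i in big-endian form.
func intSize(i uint64) uint64 {
	size := uint64(1)
	for ; i >= 0x100; i >>= 8 {
		size++
	}
	return size
}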
This changes the CI / release builds to use the latest Go version. It also
upgrades golangci-lint to a newer version compatible with Go 1.19.
In Go 1.19, godoc has gained official support for links and lists. The
syntax for code blocks in doc comments has changed and now requires a
leading tab character. gofmt adapts comments to the new syntax
automatically, so there are a lot of comment re-formatting changes in this
PR. We need to apply the new format in order to pass the CI lint stage with
Go 1.19.
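For illustration, a doc comment in the new format looks roughly like this
(the package and its contents are made up):

// Package demo illustrates the Go 1.19 doc comment syntax.
//
// Links to identifiers such as [io.Reader] are supported, as are lists:
//   - first item
//   - second item
//
// Code blocks must now be introduced by a leading tab:
//
//	sum := a + b
package demo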
With the linter upgrade, I have decided to disable 'gosec' - it produces
too many false-positive warnings. The 'deadcode' and 'varcheck' linters
have also been removed because golangci-lint warns about them being
unmaintained. 'unused' provides similar coverage and we already have it
enabled, so we don't lose much with this change.
This enables the following linters:
- typecheck
- unused
- staticcheck
- bidichk
- durationcheck
- exportloopref
- gosec
With a few exceptions:
- We use a deprecated protobuf in trezor. I didn't want to mess with that, since I cannot meaningfully test any changes there.
- The deprecated TypeMux is used in a few places still, so the warning for it is silenced for now.
- Using a plain string as the key in context.WithValue is apparently wrong: one should use a custom key type to prevent collisions between different places in the hierarchy of callers (see the sketch after this list). That should be fixed at some point, but may require some attention.
- The warnings about use of a weak random number generator are suppressed, since we use randomness in many places that do not need cryptographic guarantees.
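A minimal sketch of the custom-key pattern the linter asks for (all names
here are illustrative, not code from this change):

package ctxkeys

import "context"

// nodeKey is an unexported type, so values stored under it cannot collide
// with context values set by other packages.
type nodeKey struct{}

// WithNodeName stores a name in the context under the private key type.
func WithNodeName(ctx context.Context, name string) context.Context {
	return context.WithValue(ctx, nodeKey{}, name)
}

// NodeName retrieves the name again, reporting whether it was present.
func NodeName(ctx context.Context) (string, bool) {
	name, ok := ctx.Value(nodeKey{}).(string)
	return name, ok
}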
This change speeds up trie hashing and all other activities that require
RLP encoding of trie nodes by approximately 20%. The speedup is achieved by
avoiding reflection overhead during node encoding.
The interface type trie.node now contains a method 'encode' that works with
rlp.EncoderBuffer. Management of EncoderBuffers is left to calling code.
trie.hasher, which is pooled to avoid allocations, now maintains an
EncoderBuffer. This means memory resources related to trie node encoding
are tied to the hasher pool.
Co-authored-by: Felix Lange <fjl@twurst.com>
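A rough sketch of the node-encoding pattern described above, using a
simplified stand-in for a trie node type (the real node types have more
fields and cases):

package trienc

import "github.com/ethereum/go-ethereum/rlp"

// shortNode is a simplified example node, not the actual trie type.
type shortNode struct {
	Key []byte
	Val []byte
}

// encode writes the node into the caller-provided encoder buffer,
// avoiding the reflection path of rlp.Encode.
func (n *shortNode) encode(w rlp.EncoderBuffer) {
	offset := w.List()
	w.WriteBytes(n.Key)
	w.WriteBytes(n.Val)
	w.ListEnd(offset)
}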
This change adds a code generator tool for creating EncodeRLP method
implementations. The generated methods will behave identically to the
reflect-based encoder, but run faster because there is no reflection overhead.
Package rlp now provides the EncoderBuffer type for incremental encoding. This
is used by generated code, but the new methods can also be useful for
hand-written encoders.
There is also experimental support for generating DecodeRLP, and some new
methods have been added to the existing Stream type to support this. Creating
decoders with rlpgen is not recommended at this time because the generated
methods report decoding errors very poorly.
More detail about package rlp changes:
* rlp: externalize struct field processing / validation
This adds a new package, rlp/internal/rlpstruct, in preparation for the
RLP encoder generator.
I think the struct field rules are subtle enough to warrant extracting
them into their own package, even though it means that a fair amount of
adapter code is needed for converting to/from rlpstruct.Type.
* rlp: add more decoder methods (for rlpgen)
This adds new methods on rlp.Stream:
- Uint64, Uint32, Uint16, Uint8, BigInt
- ReadBytes for decoding into []byte
- MoreDataInList - useful for optional list elements
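A small sketch of how these methods combine in a hand-written decoder (the
message layout here is made up):

package decdemo

import (
	"math/big"

	"github.com/ethereum/go-ethereum/rlp"
)

// decodeDemo reads a list of the form [nonce, balance, extra?], where the
// last element is optional.
func decodeDemo(s *rlp.Stream) (nonce uint64, balance *big.Int, extra []byte, err error) {
	if _, err = s.List(); err != nil {
		return
	}
	if nonce, err = s.Uint64(); err != nil {
		return
	}
	if balance, err = s.BigInt(); err != nil {
		return
	}
	// The optional element is only read if the list has more data.
	if s.MoreDataInList() {
		if extra, err = s.Bytes(); err != nil {
			return
		}
	}
	err = s.ListEnd()
	return
}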
* rlp: expose encoder buffer (for rlpgen)
This exposes the internal encoder buffer type for use in EncodeRLP
implementations.
The new EncoderBuffer type is a sort-of 'opaque handle' for a pointer to
encBuffer. It is implemented this way to ensure the global encBuffer pool
is handled correctly.
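For illustration, a hand-written EncodeRLP implementation using the exposed
buffer type might look like this (the struct is made up):

package encdemo

import (
	"io"

	"github.com/ethereum/go-ethereum/rlp"
)

// pair is an illustrative type, not one from go-ethereum.
type pair struct {
	Name  string
	Value uint64
}

// EncodeRLP writes the value as the list [Name, Value].
func (p *pair) EncodeRLP(w io.Writer) error {
	buf := rlp.NewEncoderBuffer(w)
	offset := buf.List()
	buf.WriteString(p.Name)
	buf.WriteUint64(p.Value)
	buf.ListEnd(offset)
	return buf.Flush() // writes the encoded data to w
}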
* core: fix warning flagging the use of DeepEqual on error
* apply the same change everywhere possible
* revert change that was committed by mistake
* fix build error
* Update config.go
* revert changes to ConfigCompatError
* review feedback
Co-authored-by: Felix Lange <fjl@twurst.com>
As per benchmark results below, these changes speed up encoding/decoding of
consensus objects a bit.
name old time/op new time/op delta
EncodeRLP/legacy-header-8 384ns ± 1% 331ns ± 3% -13.83% (p=0.000 n=7+8)
EncodeRLP/london-header-8 411ns ± 1% 359ns ± 2% -12.53% (p=0.000 n=8+8)
EncodeRLP/receipt-for-storage-8 251ns ± 0% 239ns ± 0% -4.97% (p=0.000 n=8+8)
EncodeRLP/receipt-full-8 319ns ± 0% 300ns ± 0% -5.89% (p=0.000 n=8+7)
EncodeRLP/legacy-transaction-8 389ns ± 1% 387ns ± 1% ~ (p=0.099 n=8+8)
EncodeRLP/access-transaction-8 607ns ± 0% 581ns ± 0% -4.26% (p=0.000 n=8+8)
EncodeRLP/1559-transaction-8 627ns ± 0% 606ns ± 1% -3.44% (p=0.000 n=8+8)
DecodeRLP/legacy-header-8 831ns ± 1% 813ns ± 1% -2.20% (p=0.000 n=8+8)
DecodeRLP/london-header-8 824ns ± 0% 804ns ± 1% -2.44% (p=0.000 n=8+7)
* rlp: pass length to byteArrayBytes
This makes it possible to inline byteArrayBytes. For arrays, the length is known
at encoder construction time, so the call to v.Len() can be avoided.
* rlp: avoid IsNil for pointer encoding
It's actually cheaper to use Elem first, because it performs fewer checks
on the value. If the pointer was nil, the result of Elem is 'invalid'.
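The shape of the check is roughly the following (a simplified sketch, not
the actual encoder code):

package ptrenc

import (
	"fmt"
	"reflect"
)

// describePtr shows the Elem-first pattern: calling Elem on a nil pointer
// yields an invalid Value, so no separate IsNil check is needed.
func describePtr(v reflect.Value) string {
	elem := v.Elem()
	if !elem.IsValid() {
		return "nil pointer: write the configured empty value"
	}
	return fmt.Sprintf("non-nil pointer to %s: encode the element", elem.Type())
}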
* rlp: minor optimizations for slice/array encoding
For empty slices/arrays, we can avoid storing a list header entry in the
encoder buffer. The tail check is also moved out of the encoding path, since
its result is already known at encoder construction time.
This change significantly improves the performance of RLPx message reads
and writes. In the previous implementation, reading and writing of
message frames performed multiple reads and writes on the underlying
network connection, and allocated a new []byte buffer for every read.
In the new implementation, reads and writes re-use buffers, and perform
far fewer system calls on the underlying connection. This doubles the
theoretically achievable throughput on a single connection, as shown by
the benchmark result:
name old speed new speed delta
Throughput-8 70.3MB/s ± 0% 155.4MB/s ± 0% +121.11% (p=0.000 n=9+8)
The change also removes support for the legacy, pre-EIP-8 handshake encoding.
As of May 2021, no actively maintained client sends this format.
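A simplified sketch of the buffer-reuse idea (not the actual rlpx code):

package bufreuse

import "io"

// readBuffer keeps its backing array across messages, so steady-state reads
// do not allocate.
type readBuffer struct {
	data []byte
}

// reset forgets the previous message but keeps the allocated capacity.
func (b *readBuffer) reset() {
	b.data = b.data[:0]
}

// read appends n bytes from r to the buffer and returns the newly read slice.
func (b *readBuffer) read(r io.Reader, n int) ([]byte, error) {
	offset := len(b.data)
	if cap(b.data)-offset < n {
		// Grow the backing array; this only happens for unusually large messages.
		grown := make([]byte, offset+n, offset+2*n)
		copy(grown, b.data)
		b.data = grown
	} else {
		b.data = b.data[:offset+n]
	}
	if _, err := io.ReadFull(r, b.data[offset:]); err != nil {
		return nil, err
	}
	return b.data[offset:], nil
}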
This change grows the static integer buffer in Stream to 32 bytes,
making it possible to decode 256-bit integers without allocating a
temporary buffer.
In the recent commit 088da24, the size of the Stream struct decreased from
120 bytes to 88 bytes. This commit grows the struct to 112 bytes again,
but the size change will not degrade performance because Stream
instances are internally cached in sync.Pool.
name old time/op new time/op delta
DecodeBigInts-8 12.2µs ± 0% 8.6µs ± 4% -29.58% (p=0.000 n=9+10)
name old speed new speed delta
DecodeBigInts-8 230MB/s ± 0% 326MB/s ± 4% +42.04% (p=0.000 n=9+10)
All encoding/decoding operations read the type cache to find the
writer/decoder function responsible for a type. When analyzing CPU
profiles of geth during sync, I found that the use of sync.RWMutex in
cache lookups appears in the profiles. It seems we are running into
CPU cache contention problems when package rlp is heavily used
on all CPU cores during sync.
This change makes it use atomic.Value + a writer lock instead of
sync.RWMutex. In the common case where the typeinfo entry is present in
the cache, we simply fetch the map and lookup the type.
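A condensed sketch of the pattern (the types and the entry constructor are
placeholders for the real typeinfo machinery):

package typecache

import (
	"reflect"
	"sync"
	"sync/atomic"
)

// typeinfo stands in for the cached writer/decoder pair.
type typeinfo struct{}

type typeCache struct {
	cur atomic.Value // holds the current map[reflect.Type]*typeinfo
	mu  sync.Mutex   // serializes writers only
}

// info is the hot path: a lock-free load of the current map.
func (c *typeCache) info(t reflect.Type) *typeinfo {
	if m, _ := c.cur.Load().(map[reflect.Type]*typeinfo); m != nil {
		if info := m[t]; info != nil {
			return info
		}
	}
	return c.generate(t)
}

// generate copies the map, adds the missing entry, and publishes the copy.
func (c *typeCache) generate(t reflect.Type) *typeinfo {
	c.mu.Lock()
	defer c.mu.Unlock()
	old, _ := c.cur.Load().(map[reflect.Type]*typeinfo)
	if info := old[t]; info != nil {
		return info // another writer added it while we waited for the lock
	}
	next := make(map[reflect.Type]*typeinfo, len(old)+1)
	for k, v := range old {
		next[k] = v
	}
	next[t] = new(typeinfo) // placeholder for generating the real entry
	c.cur.Store(next)
	return next[t]
}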
This commit makes various cleanup changes to rlp.Stream.
* rlp: shrink Stream struct
This removes a lot of unused padding space in Stream by reordering the
fields. The size of Stream changes from 120 bytes to 88 bytes. Stream
instances are internally cached and reused using sync.Pool, so this does
not improve performance.
* rlp: simplify list stack
The list stack kept track of the size of the current list context as
well as the current offset into it. The size had to be stored in the
stack in order to subtract it from the remaining bytes of any enclosing
list in ListEnd. It seems that this can be implemented in a simpler
way: just subtract the size from the enclosing list context in List instead.
This adds support for a new struct tag "optional". Using this tag, structs used
for RLP encoding/decoding can be extended in a backwards-compatible way,
by adding new fields at the end.
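For illustration, with a struct like the one below (field names made up),
Fee and Extra can be added in a later release without breaking decoding of
previously encoded data:

package optdemo

import "math/big"

// record is illustrative only. Optional fields must come last; when
// encoding, zero-valued optional fields at the end of the list are omitted,
// and when decoding they may be missing from the input.
type record struct {
	ID    uint64
	Owner [20]byte
	Fee   *big.Int `rlp:"optional"`
	Extra []byte   `rlp:"optional"`
}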
io.Reader may return n > 0 and io.EOF at the end of the input stream.
readFull did not handle this correctly, looking only at the error. This fixes
it to check for n == len(buf) as well.
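The corrected logic looks roughly like this (a standalone sketch; the real
code is a method on rlp.Stream):

package readfull

import "io"

// readFull fills buf completely. A reader is allowed to return the final
// bytes together with io.EOF, so a full buffer counts as success even when
// err is non-nil.
func readFull(r io.Reader, buf []byte) (err error) {
	var n int
	for n < len(buf) && err == nil {
		var nn int
		nn, err = r.Read(buf[n:])
		n += nn
	}
	if err == io.EOF {
		if n == len(buf) {
			err = nil // got everything, discard the EOF
		} else {
			err = io.ErrUnexpectedEOF
		}
	}
	return err
}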
This PR contains a minor optimization in derivesha, by exposing the RLP
int-encoding and making use of it to write integers directly to a
buffer (an RLP-encoded integer never requires more than 9 bytes in
total). rlp.AppendUint64 might be useful in other places too.
The code assumes, just as before, that the hasher (a trie) will copy the
key internally, which it does when doing keybytesToHex(key).
Co-authored-by: Felix Lange <fjl@twurst.com>
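A tiny usage sketch of the exposed helper (the loop is illustrative, not the
derivesha code itself):

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/rlp"
)

func main() {
	// 9 bytes covers the worst case: 1 header byte + 8 bytes of big-endian value.
	buf := make([]byte, 0, 9)
	for i := uint64(0); i < 3; i++ {
		buf = rlp.AppendUint64(buf[:0], i)
		fmt.Printf("index %d encodes as %x\n", i, buf)
	}
}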
This change further improves the performance of RLP encoding by removing
allocations for big.Int and [...]byte types. I have added a new benchmark
that measures RLP encoding of types.Block to verify that performance is
improved.
List headers made up 11% of all allocations during sync. This change
removes most of those allocations by keeping the list header values
cached in the encoder buffer instead. Since encoder buffers are pooled,
list headers are no longer allocated in the common case where an
encoder buffer is available for reuse.
Co-authored-by: Felix Lange <fjl@twurst.com>
* cmd, core, eth: init tx lookup in background
* core/rawdb: tiny log fixes to make it clearer what's happening
* core, eth: fix rebase errors
* core/rawdb: make reindexing less generic, but more optimal
* rlp: implement rlp list iterator
* core/rawdb: new implementation of tx indexing/unindexing using a generic tx iterator and hashing of RLP data (see the sketch after this list)
* core/rawdb, cmd/utils: fix review concerns
* cmd/utils: fix merge issue
* core/rawdb: add some log formatting polishes
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
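The idea of walking an RLP list without decoding its elements can be
sketched with rlp.Stream as follows (an illustration of the concept, not the
iterator type added in this change):

package listiter

import (
	"bytes"

	"github.com/ethereum/go-ethereum/rlp"
)

// forEachListElem calls fn with the raw RLP bytes of every element of the
// list encoded in data, without decoding the elements themselves.
func forEachListElem(data []byte, fn func(elem []byte) error) error {
	s := rlp.NewStream(bytes.NewReader(data), 0)
	if _, err := s.List(); err != nil {
		return err
	}
	for {
		elem, err := s.Raw()
		if err == rlp.EOL {
			break
		} else if err != nil {
			return err
		}
		if err := fn(elem); err != nil {
			return err
		}
	}
	return s.ListEnd()
}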
* build: use golangci-lint
This changes build/ci.go to download and run golangci-lint instead
of gometalinter.
* core/state: fix unnecessary conversion
* p2p/simulations: fix lock copying (found by go vet)
* signer/core: fix unnecessary conversions
* crypto/ecies: remove unused function cmpPublic
* core/rawdb: remove unused function print
* core/state: remove unused function xTestFuzzCutter
* core/vm: disable TestWriteExpectedValues in a different way
* core/forkid: remove unused function checksum
* les: remove unused type proofsData
* cmd/utils: remove unused functions prefixedNames, prefixFor
* crypto/bn256: run goimports
* p2p/nat: fix goimports lint issue
* cmd/clef: avoid using unkeyed struct fields
* les: cancel context in testRequest
* rlp: delete unreachable code
* core: gofmt
* internal/build: simplify DownloadFile for Go 1.11 compatibility
* build: remove go test --short flag
* .travis.yml: disable build cache
* whisper/whisperv6: fix ineffectual assignment in TestWhisperIdentityManagement
* .golangci.yml: enable goconst and ineffassign linters
* build: print message when there are no lint issues
* internal/build: refactor download a bit
* rlp: improve nil pointer handling
In both encoder and decoder, the rules for encoding nil pointers were a
bit hard to understand, and didn't leave much choice. Since RLP allows
two empty values (empty list, empty string), any protocol built on RLP
must choose one of these values to represent the null value in a
certain context.
This change adds choice in the form of two new struct tags, "nilString"
and "nilList". These can be used to specify how a nil pointer value is
encoded. The "nil" tag still exists, but its implementation is now
explicit and defines exactly how nil pointers are handled in a single
place.
Another important change in this commit is how nil pointers and the
Encoder interface interact. The EncodeRLP method was previously called
even on nil values, which was supposed to give users a choice of how
their value would be handled when nil. It turns out this is a stupid
idea. If you create a network protocol containing an object defined in
another package, it's better to be able to say that the object should be
a list or string when nil in the definition of the protocol message
rather than defining the encoding of nil on the object itself.
As of this commit, the encoding rules for pointers now take precedence
over the Encoder interface rule. I think the "nil" tag will work fine
for most cases. For special kinds of objects which are a struct in Go
but strings in RLP, code using the object can specify the desired
encoding of nil using the "nilString" and "nilList" tags.
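For illustration (types made up), the new tags are used like this:

package nildemo

// inner is an illustrative payload type.
type inner struct {
	X uint64
}

// outer shows the two new tags: a nil A encodes as the empty string (0x80),
// while a nil B encodes as the empty list (0xC0).
type outer struct {
	A *inner `rlp:"nilString"`
	B *inner `rlp:"nilList"`
}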
* rlp: propagate struct field type errors
If a struct contained fields of undecodable type, the encoder and
decoder would panic instead of returning an error. Fix this by
propagating type errors in makeStruct{Writer,Decoder} and add a test.
These changes fix two corner cases in the internal handling of types in
package rlp:
- The "tail" struct tag can only be applied to the last field. The check for this was wrong and did not allow private fields after the tagged field.
- Unsupported types (e.g. structs containing a plain int) which implement either the Encoder or the Decoder interface, but not both, could not be encoded or decoded.
Also fixes #19367
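For reference, the "tail" tag mentioned above collects all remaining list
elements into the last field, which must be a slice. An illustrative message
type:

package taildemo

import "github.com/ethereum/go-ethereum/rlp"

// msg is illustrative only. Rest receives every list element after Code,
// kept as raw RLP without further decoding.
type msg struct {
	Code uint64
	Rest []rlp.RawValue `rlp:"tail"`
}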