Compare commits


3 Commits

Author SHA1 Message Date
erikdies
3e7abda081 Create issues-notion-sync.yml 2022-06-23 14:11:38 -04:00
i-norden
f4b7bd4a14 add new target branch for github actions 2022-02-16 14:46:25 -06:00
Elizabeth
3aead03aeb Statediffing geth
* Write state diff to CSV (#2)

* port statediff from 9b7fd9af80/statediff/statediff.go; minor fixes

* integrating state diff extracting, building, and persisting into geth processes

* work towards persisting created statediffs in ipfs; based off github.com/vulcanize/eth-block-extractor

* Add a state diff service

* Remove diff extractor from blockchain

* Update imports

* Move statediff on/off check to geth cmd config

* Update starting state diff service

* Add debugging logs for creating diff

* Add statediff extractor and builder tests and small refactoring

* Start to write statediff to a CSV

* Restructure statediff directory

* Pull CSV publishing methods into their own file

* Reformatting due to go fmt

* Add gomega to vendor dir

* Remove testing focuses

* Update statediff tests to use golang test pkg

instead of ginkgo

- builder_test
- extractor_test
- publisher_test

* Use hexutil.Encode instead of deprecated common.ToHex
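
A minimal sketch of the replacement call, assuming a raw byte slice (such as a leaf key) is being rendered for output:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

func main() {
	key := []byte{0xde, 0xad, 0xbe, 0xef}
	// hexutil.Encode replaces the deprecated common.ToHex and always emits
	// a 0x-prefixed, lower-case hex string.
	fmt.Println(hexutil.Encode(key)) // 0xdeadbeef
}
```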

* Remove OldValue from DiffBigInt and DiffUint64 fields

* Update builder test

* Remove old storage value from updated accounts

* Remove old values from created/deleted accounts

* Update publisher to account for only storing current account values

* Update service loop and fetching previous block

* Update testing

- remove statediff ginkgo test suite file
- move mocks to their own dir

* Updates per go fmt

* Updates to tests

* Pass statediff mode and path in through cli

* Return filename from publisher

* Remove some duplication in builder

* Remove code field from state diff output

this is the contract byte code, and it can still be obtained by querying
the db by the codeHash
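
A hedged sketch of that lookup (not the service's actual code), assuming access to geth's state database, where ContractCode is keyed by the hashed account address and the code hash:

```go
// Sketch only: recover contract bytecode from its codeHash after the code
// field is dropped from the diff output.
package sketch

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/crypto"
)

func codeByHash(bc *core.BlockChain, addr common.Address, codeHash common.Hash) ([]byte, error) {
	// ContractCode is keyed by the hashed account address plus the code hash.
	return bc.StateCache().ContractCode(crypto.Keccak256Hash(addr.Bytes()), codeHash)
}
```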

* Consolidate acct diff structs for updated & updated/deleted accts

* Include block number in csv filename

* Clean up error logging

* Cleanup formatting, spelling, etc

* Address PR comments

* Add contract address and storage value to csv

* Refactor accumulating account row in csv publisher

* Add DiffStorage struct

* Add storage key to csv
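
An illustrative encoding/csv sketch of one published storage row; the column set follows the bullets above, but the exact order used by the publisher is an assumption:

```go
package main

import (
	"encoding/csv"
	"log"
	"os"
)

func main() {
	// Block number in the filename, per the bullets above.
	f, err := os.Create("statediff_1000000.csv")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	w := csv.NewWriter(f)
	defer w.Flush()

	// Illustrative columns: block number, contract address, storage key, storage value.
	row := []string{
		"1000000",
		"0x00000000000000000000000000000000000000aa",
		"0x0000000000000000000000000000000000000000000000000000000000000001",
		"0x05",
	}
	if err := w.Write(row); err != nil {
		log.Fatal(err)
	}
}
```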

* Address PR comments

* Fix publisher to include rows for accounts that don't have storage updates

* Update builder test after merging in release/1.8

* Update test contract to include storage on contract initialization

- so that we're able to test that storage diffing works for created and
deleted accounts (not just updated accounts).

* Factor out a common trie iterator method in builder

* Apply goimports to statediff

* Apply gosimple changes to statediff

* Gracefully exit geth command (#4)

* Statediff for full node (#6)

* Open a trie from the in-memory database

* Use a node's LeafKey as an identifier instead of the address

It was proving difficult to look the address up from a given path
with a full node (sometimes the value wouldn't exist in the disk db).
So, instead, for now we are using the node's LeafKey, which is a Keccak256
hash of the address, so if we know the address we can figure out which
LeafKey it matches up to.
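
A small sketch of that relationship using go-ethereum's crypto package; the leaf key is the Keccak-256 hash of the account address, so a known address can be matched against observed leaf keys:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	addr := common.HexToAddress("0x1234567890123456789012345678901234567890")
	// The state trie stores accounts under Keccak256(address); this value
	// is what the diff reports as the node's LeafKey.
	leafKey := crypto.Keccak256Hash(addr.Bytes())
	fmt.Println(leafKey.Hex())
}
```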

* Make sure that statediff has been processed before pruning

* Use blockchain stateCache.OpenTrie for storage diffs

* Clean up log lines and remove unnecessary fields from builder

* Apply go fmt changes

* Add a sleep to the blockchain test

* refactoring/reorganizing packages

* refactoring statediff builder and types and adjusted to relay proofs and paths (still need to make this optional)

* refactoring state diff service and adding api which allows for streaming state diff payloads over an rpc websocket subscription

* make proofs and paths optional + compress service loop into single for loop (may be missing something here)

* option to process intermediate nodes

* make state diff rlp serializable

* cli parameter to limit statediffing to select account addresses + test

* review fixes and fixes for issues ran into in integration

* review fixes; proper method signature for api; adjust service so that statediff processing is halted/paused until there is at least one subscriber listening for the results

* adjust buffering to improve stability; doc.go; fix notifier
err handling

* relay receipts with the rest of the data + review fixes/changes

* rpc method to get statediff at specific block; requires archival node or the block be within the pruning range
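
A hedged sketch of calling this from Go over the node's RPC endpoint; the method name statediff_stateDiffAt and the bare block-number argument are assumptions for illustration (the service may also expect a params object):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("ws://localhost:8546")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Method and argument shape are assumptions; on a pruned node the block
	// must still be within the pruning window.
	var payload json.RawMessage
	if err := client.Call(&payload, "statediff_stateDiffAt", 1000000); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("received %d bytes of statediff payload\n", len(payload))
}
```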

* fix linter issues

* include total difficulty to the payload

* fix state diff builder: emit actual leaf nodes instead of value nodes; diff on the leaf not on the value; emit correct path for intermediate nodes

* adjust statediff builder tests to changes and extend to test intermediate nodes; golint

* add genesis block to test; handle block 0 in StateDiffAt

* rlp files for mainnet blocks 0-3, for tests

* builder test on mainnet blocks

* common.BytesToHash(path) => crypto.Keccak256(hash) in builder; BytesToHash produces same hash for e.g. []byte{} and []byte{\x00} - prefix \x00 steps are inconsequential to the hash result
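
A short demonstration of the collision being avoided; only the go-ethereum common and crypto packages are assumed:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	empty, zero := []byte{}, []byte{0x00}

	// BytesToHash left-pads with zeros, so both inputs collapse to the
	// all-zero 32-byte value.
	fmt.Println(common.BytesToHash(empty) == common.BytesToHash(zero)) // true

	// Keccak256 distinguishes them, which is why the builder hashes the path.
	fmt.Println(bytes.Equal(crypto.Keccak256(empty), crypto.Keccak256(zero))) // false
}
```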

* complete tests for early mainnet blocks

* diff type for representing deleted accounts

* fix builder so that we handle account deletions properly and properly diff storage when an account is moved to a new path; update params

* remove cli params; moving them to subscriber defined

* remove unneeded bc methods

* update service and api; statediffing params are now defined by user through api rather than by service provider by cli

* update top level tests

* add ability to watch specific storage slots (leaf keys) only

* comments; explain logic

* update mainnet blocks test

* update api_test.go

* storage leafkey filter test

* cleanup chain maker

* adjust chain maker for tests to add an empty account in block1 and switch to EIP-158 afterwards (now we just need to generate enough accounts until one causes the empty account to be touched and removed post-EIP-158 so we can simulate and test that process...); also added 2 new blocks where more contract storage is set and old slots are set to zero so they are removed so we can test that

* found an account whose creation causes the empty account to be moved to a new path; this should count as 'touching' the empty account and cause it to be removed according to eip-158... but it doesn't

* use new contract in unit tests that has self-destruct ability, so we can test eip-158 since simply moving an account to a new path doesn't count as 'touching' it

* handle storage deletions

* tests for eip-158 account removal and storage value deletions; there is one edge case left to test where we remove 1 account when only two exist such that the remaining account is moved up and replaces the root branch node

* finish testing known edge cases

* add endpoint to fetch all state and storage nodes at a given block height; useful for generating a recent state cache/snapshot that we can diff forward from rather than needing to collect all diffs from genesis

* test for state trie builder

* if statediffing is on, lock tries in triedb until the statediffing service signals they are done using them

* fix mock blockchain; golint; bump patch

* increase maxRequestContentLength; bump patch

* log the sizes of the state objects we are sending

* CI build (#20)

* CI: run build on PR and on push to master

* CI: debug building geth

* CI: fix copying file

* CI: fix copying file v2

* CI: temporary upload file to release asset

* CI: get release upload_url by tag, upload asset to current release

* CI: fix tag name

* fix ci build on statediff_at_anyblock-1.9.11 branch

* fix publishing assets in release

* use context deadline for timeout in eth_call
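
A client-side sketch of how such a deadline is supplied, assuming ethclient; the timeout carried on the request context is what bounds the call:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	client, err := ethclient.Dial("http://localhost:8545")
	if err != nil {
		log.Fatal(err)
	}

	// The deadline on the context bounds how long the eth_call may run.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	to := common.HexToAddress("0x0000000000000000000000000000000000000000")
	msg := ethereum.CallMsg{To: &to}
	out, err := client.CallContract(ctx, msg, nil) // nil = latest block
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("eth_call returned %d bytes\n", len(out))
}
```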

* collect and emit codehash=>code mappings for state objects

* subscription endpoint for retrieving all the codehash=>code mappings that exist at provided height

* Implement WriteStateDiffAt

* Writes state diffs directly to postgres

* Adds CLI flags to configure PG

* Refactors builder output with callbacks

* Copies refactored postgres handling code from ipld-eth-indexer

* rename PostgresCIDWriter.{index->upsert}*

* rm unused

* output code & codehash iteratively

* had to rf some types for this

* prometheus metrics output

* duplicate recent eth-indexer changes

* migrations and metrics...

* [wip] prom.Init() here? another CLI flag?

* tidy & DRY

* statediff WriteLoop service + CLI flag
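
A rough sketch of the general shape of such a write loop; the write callback, quit channel, and package name are illustrative, and only the go-ethereum chain-event subscription is real API:

```go
// Sketch only: subscribe to chain events and write a diff for each new block
// until the quit channel closes.
package sketch

import (
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/log"
)

func writeLoop(bc *core.BlockChain, write func(*types.Block) error, quit <-chan struct{}) {
	events := make(chan core.ChainEvent, 16)
	sub := bc.SubscribeChainEvent(events)
	defer sub.Unsubscribe()

	for {
		select {
		case ev := <-events:
			if err := write(ev.Block); err != nil {
				log.Error("statediff write failed", "block", ev.Block.NumberU64(), "err", err)
			}
		case err := <-sub.Err():
			log.Error("chain event subscription error", "err", err)
			return
		case <-quit:
			return
		}
	}
}
```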

* [wip] update test mocks

* todo - do something meaningful to test write loop

* logging

* use geth log

* port tests to go testing

* drop ginkgo/gomega

* fix and cleanup tests

* fail before defer statement

* delete vendor/ dir

* fixes after rebase onto 1.9.23

* fix API registration

* use golang 1.15.5 version (#34)

* bump version meta; add 0.0.11 branch to actions

* bump version meta; update github actions workflows

* statediff: refactor metrics

* Remove redundant statediff/indexer/prom tooling and use existing
prometheus integration.

* "indexer" namespace for metrics

* add reporting loop for db metrics

* doc

* metrics for statediff stats

* metrics namespace/subsystem = statediff/{indexer,service}

* statediff: use a worker pool (for direct writes)
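
A generic sketch of the worker-pool pattern referenced here (not the service's exact implementation): block numbers fan out over a channel to a fixed set of writer goroutines so direct writes don't serialize behind one loop:

```go
package main

import (
	"fmt"
	"sync"
)

// process stands in for the per-block statediff build-and-write step.
func process(blockNum uint64) { fmt.Println("wrote diff for block", blockNum) }

func main() {
	const workers = 4
	jobs := make(chan uint64, workers)

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for n := range jobs {
				process(n)
			}
		}()
	}

	for n := uint64(0); n < 20; n++ {
		jobs <- n
	}
	close(jobs)
	wg.Wait()
}
```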

* fix test

* fix chain event subscription

* log tweaks

* func name

* unused import

* intermediate chain event channel for metrics

* update github actions; linting

* add poststate and status to receipt ipld indexes

* stateDiffFor endpoints for fetching or writing statediff object by blockhash; bump statediff version

* fixes after rebase on to v1.10.1

* update github actions and version meta; go fmt

* add leaf key to removed 'nodes'

* include Postgres migrations and schema

* service documentation

* touching up

* update github actions after rebase

* fix connection leak (misplaced defer) and perform proper rollback on errs

* improve error logging; handle PushBlock internal err

* build docker image and publish it to Docker Hub on release

* add access list tx to unit tests

* MarshalBinary and UnmarshalBinary methods for receipt

* fix error caused by 2718 by using MarshalBinary instead of EncodeRLP methods
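
A small sketch of the distinction, assuming go-ethereum v1.10+: MarshalBinary/UnmarshalBinary produce and consume the consensus (EIP-2718 aware) encoding, including the type-byte prefix that plain RLP of the struct lacks for typed receipts:

```go
// Sketch: round-trip a receipt through its consensus encoding.
package sketch

import "github.com/ethereum/go-ethereum/core/types"

func roundTripReceipt(rct *types.Receipt) (*types.Receipt, error) {
	enc, err := rct.MarshalBinary() // typed receipts get their type-byte prefix
	if err != nil {
		return nil, err
	}
	out := new(types.Receipt)
	// UnmarshalBinary understands both legacy and typed encodings.
	if err := out.UnmarshalBinary(enc); err != nil {
		return nil, err
	}
	return out, nil
}
```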

* ipld encoding/decoding tests

* update TxModel; add AccessListElementModel

* index tx type and access lists

* add access list metrics

* unit tests for tx_type and access list table

* unit tests for receipt marshal/unmarshal binary methods

* improve documentation of the encoding methods

* fix issue identified in linting

* update github actions and version meta after rebase

* unit test that fails nondeterministically on eip2930 txs, giving the same error we are seeing in prod

* Include genesis block state diff.

* documentation on versioning, rebasing, releasing; bump version meta

* Add geth and statediff unit test to CI.

* Set pgpassword in env.

* Added comments.

* Add new major branch to github action.

* Add support for Dynamic txn(EIP-1559).

* Update version meta to 0.0.24

* Verify block base fee in test.

* Fix base_fee type and add backward compatible test.

* Remove type definition for AccessListElementModel

* Change basefee to int64/bigint.

* block and uncle reward in PoA network = 0  (#87)

* in PoA networks there are no block or uncle rewards

* bump meta version

* (cherry picked from commit b64ca14689)

* Use Ropsten to test block reward.

* Add Makefile target to build static linux binaries.

* Strip symbol tables from static binaries.

* Fix block_fee to support NULL values.

* bump version meta.

* Add new major branch to github action.

* rename doc.go to README.md

* Create a separate table for storing logs

* Bump statediff version to 0.0.26.

* add btree index to state/storage_cids.node_type; updated schema

* Dedup receipt data.

* Fix linter errors.

* Address comments.

* Bump statediff version to 0.0.27.

* new cli flag for initializing the db the first time the service is run

* only write Removed node ipld block (on db init) and reuse constant cid and mhkey

* test new handling of Removed nodes; don't require init flag

* log metrics

* Add new major branch to github action.

* Fix build.

* Update golang version in CI.

* Use ipld-eth-db in testing.

* Remove migration from repo.

* Add new major branch to github action.

* Use `GetTd` instead of `GetTdByHash` (6289137827)

* Add new major branch to github action.

* Report DB metrics

* batch inserts to public.blocks

* v2 => v3  major refactor

* fixes and cli integration for new options

* update example command in readme

* ashwin's fix for failing pgx unit test

* update to use new schema; fix pgx driver

* indexer that writes sql stmts out to a file

* cli integration

* fix unit tests

* use node_id as PK/FK

* misc fixes/adjustments

* update README

* cleanup; more unit tests

* basefee is big.Int, it won't always fit in int64
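
A small illustration of the concern: checking IsInt64 before narrowing (or persisting the decimal string into a NUMERIC column) avoids silent overflow:

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	baseFee := new(big.Int).Lsh(big.NewInt(1), 70) // larger than any int64

	if baseFee.IsInt64() {
		fmt.Println("fits in int64:", baseFee.Int64())
	} else {
		// Store the decimal string into a NUMERIC column instead of BIGINT.
		fmt.Println("overflow, store as string:", baseFee.String())
	}
}
```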

* adjust for schema updates

* finish unit tests

* test harness for arbitrary mainnet blocks and receipts

* cache problematic block locally for quicker testing/easier CI testing

* fix issue with log/logTrie processing

* remove some unnecessary hashing operations

* handle edge case

* add more 'bad blocks' to mainnet_tests

* increase file write buffer size

* increase buffer further

* fix rct trie multicodec type

* extend testing

* log trie fk fix

* bump statediff meta version; use db v0.3.0 in compose

* skip file writing tests in CI, for now

* prevent parallel execution of tests in different pkgs (suspect this is what causes our deadlock to show up only in CI test env)
adjust write buffering

* fix rct unit tests

* fix README formatting

* port retry on deadlock detection feature
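
A hedged sketch of what such a retry wrapper can look like; detecting the deadlock via the "deadlock detected" message (SQLSTATE 40P01) is an assumption about the driver, not the indexer's exact code:

```go
package sketch

import (
	"fmt"
	"strings"
	"time"
)

// withDeadlockRetry retries fn when Postgres aborts the transaction with a
// deadlock error; any other error is returned immediately.
func withDeadlockRetry(fn func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Assumption: the driver surfaces SQLSTATE 40P01 as "deadlock detected".
		if !strings.Contains(err.Error(), "deadlock detected") {
			return err
		}
		time.Sleep(time.Duration(i+1) * 100 * time.Millisecond)
	}
	return fmt.Errorf("deadlock retries exhausted: %w", err)
}
```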

* new workflow on-'master' targets

* update version meta

* improve test coverage for logs

* fix possible race condition

* fix CI

* check tx pool state at end of unit tests

* better logging of rollbacks and deadlock retries
2022-02-16 14:40:08 -06:00
1300 changed files with 193644 additions and 201468 deletions

7
.github/CODEOWNERS vendored

@ -5,15 +5,16 @@ accounts/usbwallet @karalabe
accounts/scwallet @gballet accounts/scwallet @gballet
accounts/abi @gballet @MariusVanDerWijden accounts/abi @gballet @MariusVanDerWijden
cmd/clef @holiman cmd/clef @holiman
cmd/puppeth @karalabe
consensus @karalabe consensus @karalabe
core/ @karalabe @holiman @rjl493456442 core/ @karalabe @holiman @rjl493456442
eth/ @karalabe @holiman @rjl493456442 eth/ @karalabe @holiman @rjl493456442
eth/catalyst/ @gballet eth/catalyst/ @gballet
eth/tracers/ @s1na graphql/ @gballet
graphql/ @s1na
les/ @zsfelfoldi @rjl493456442 les/ @zsfelfoldi @rjl493456442
light/ @zsfelfoldi @rjl493456442 light/ @zsfelfoldi @rjl493456442
node/ @fjl mobile/ @karalabe @ligi
node/ @fjl @renaynay
p2p/ @fjl @zsfelfoldi p2p/ @fjl @zsfelfoldi
rpc/ @fjl @holiman rpc/ @fjl @holiman
p2p/simulations @fjl p2p/simulations @fjl


@ -35,6 +35,6 @@ and help.
## Configuration, dependencies, and tests ## Configuration, dependencies, and tests
Please see the [Developers' Guide](https://geth.ethereum.org/docs/developers/geth-developer/dev-guide) Please see the [Developers' Guide](https://geth.ethereum.org/docs/developers/devguide)
for more details on configuring your environment, managing project dependencies for more details on configuring your environment, managing project dependencies
and testing procedures. and testing procedures.


@ -9,7 +9,6 @@ assignees: ''
#### System information #### System information
Geth version: `geth version` Geth version: `geth version`
CL client & version: e.g. lighthouse/nimbus/prysm@v1.0.0
OS & Version: Windows/Linux/OSX OS & Version: Windows/Linux/OSX
Commit hash : (if `develop`) Commit hash : (if `develop`)


@ -6,10 +6,9 @@ jobs:
linter-check: linter-check:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/setup-go@v3 - uses: actions/setup-go@v2
with: with:
go-version: "1.19" go-version: '1.16.x'
check-latest: true
- uses: actions/checkout@v2 - uses: actions/checkout@v2
- name: Run linter - name: Run linter
run: go run build/ci.go lint run: go run build/ci.go lint


@ -0,0 +1,29 @@
name: Notion Sync
on:
workflow_dispatch:
issues:
types:
[
opened,
edited,
labeled,
unlabeled,
assigned,
unassigned,
milestoned,
demilestoned,
reopened,
closed,
]
jobs:
notion_job:
runs-on: ubuntu-latest
name: Add GitHub Issues to Notion
steps:
- name: Add GitHub Issues to Notion
uses: vulcanize/notion-github-action@v1.2.4-issueid
with:
notion-token: ${{ secrets.NOTION_TOKEN }}
notion-db: ${{ secrets.NOTION_DATABASE }}


@ -1,21 +0,0 @@
name: MANUAL Override Publish geth binary to release
on:
workflow_dispatch:
inputs:
giteaPublishTag:
description: 'Package to publish TO on gitea; e.g. v1.10.25-statediff-4.2.1-alpha'
required: true
cercContainerTag:
description: 'Tagged Container to extract geth binary FROM'
required: true
jobs:
build:
name: Manual override publish of geth binary FROM tagged release TO TAGGED package on git.vdb.to
runs-on: ubuntu-latest
steps:
- name: Copy ethereum binary file
run: docker run --rm --entrypoint cat git.vdb.to/cerc-io/go-ethereum/go-ethereum:${{ github.event.inputs.cercContainerTag }} /usr/local/bin/geth > geth-linux-amd64
- name: curl
uses: enflo/curl-action@master
with:
curl: --user cerccicd:${{ secrets.GITEA_TOKEN }} --upload-file geth-linux-amd64 https://git.vdb.to/api/packages/cerc-io/generic/go-ethereum/${{ github.event.inputs.giteaPublishTag }}/geth-linux-amd64


@ -1,21 +0,0 @@
name: MANUAL Override Publish geth binary to release
on:
workflow_dispatch:
inputs:
giteaPublishTag:
description: 'Package to publish TO on gitea; e.g. v1.10.25-statediff-4.2.1-alpha'
required: true
cercContainerTag:
description: 'Tagged Container to extract geth binary FROM'
required: true
jobs:
build:
name: Manual override publish of geth binary FROM tagged release TO TAGGED package on git.vdb.to
runs-on: ubuntu-latest
steps:
- name: Copy ethereum binary file
run: docker run --rm --entrypoint cat git.vdb.to/cerc-io/go-ethereum/go-ethereum:${{ github.event.inputs.cercContainerTag }} /usr/local/bin/geth > geth-linux-amd64
- name: curl
uses: enflo/curl-action@master
with:
curl: --user circcicd:${{ secrets.GITEA_TOKEN }} --upload-file geth-linux-amd64 https://git.vdb.to/api/packages/cerc-io/generic/go-ethereum/${{ github.event.inputs.giteaPublishTag }}/geth-linux-amd64

48
.github/workflows/on-master.yaml vendored Normal file

@ -0,0 +1,48 @@
name: Docker Build and publish to Github
on:
push:
branches:
- statediff
- statediff-v2
- statediff-v3
- v1.10.16-statediff-v3
- v1.10.15-statediff-v3
- v1.10.15-statediff-v2
- v1.10.14-statediff
- v1.10.13-statediff
- v1.10.12-statediff
- v1.10.11-statediff-v3
- v1.10.11-statediff
- v1.10.10-statediff
- v1.10.9-statediff
- v1.10.8-statediff
- v1.10.7-statediff
- v1.10.6-statediff
- v1.10.5-statediff
- v1.10.4-statediff
- v1.10.3-statediff
- v1.10.2-statediff
- v1.10.1-statediff
- v1.9.25-statediff
- v1.9.24-statediff
- v1.9.23-statediff
- v1.9.11-statediff
jobs:
build:
name: Run docker build and publish
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Run docker build
run: docker build -t vulcanize/go-ethereum -f Dockerfile .
- name: Get the version
id: vars
run: echo ::set-output name=sha::$(echo ${GITHUB_SHA:0:7})
- name: Tag docker image
run: docker tag vulcanize/go-ethereum docker.pkg.github.com/vulcanize/go-ethereum/go-ethereum:${{steps.vars.outputs.sha}}
- name: Docker Login
run: echo ${{ secrets.GITHUB_TOKEN }} | docker login https://docker.pkg.github.com -u vulcanize --password-stdin
- name: Docker Push
run: docker push docker.pkg.github.com/vulcanize/go-ethereum/go-ethereum:${{steps.vars.outputs.sha}}


@ -3,5 +3,64 @@ name: Build and test
on: [pull_request] on: [pull_request]
jobs: jobs:
run-tests: build:
uses: ./.github/workflows/tests.yml name: Run docker build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Run docker build
run: docker build -t vulcanize/go-ethereum .
geth-unit-test:
name: Run geth unit test
strategy:
matrix:
go-version: [ 1.16.x]
platform: [ubuntu-latest]
runs-on: ${{ matrix.platform }}
env:
GO111MODULE: on
GOPATH: /tmp/go
steps:
- name: Create GOPATH
run: mkdir -p /tmp/go
- name: Install Go
uses: actions/setup-go@v2
with:
go-version: ${{ matrix.go-version }}
- name: Checkout code
uses: actions/checkout@v2
- name: Run unit tests
run: |
make test
statediff-unit-test:
name: Run state diff unit test
env:
GOPATH: /tmp/go
strategy:
matrix:
go-version: [ 1.16.x]
platform: [ubuntu-latest]
runs-on: ${{ matrix.platform }}
steps:
- name: Create GOPATH
run: mkdir -p /tmp/go
- name: Install Go
uses: actions/setup-go@v2
with:
go-version: ${{ matrix.go-version }}
- name: Checkout code
uses: actions/checkout@v2
- name: Start database
run: docker-compose -f docker-compose.yml up -d ipld-eth-db
- name: Run unit tests
run:
make statedifftest


@ -3,34 +3,39 @@ on:
release: release:
types: [published] types: [published]
jobs: jobs:
run-tests: push_to_registries:
uses: ./.github/workflows/tests.yml name: Publish assets to Release
build:
name: Run docker build and publish
needs: run-tests
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@v2
- name: Run docker build
run: docker build -t cerc-io/go-ethereum -f Dockerfile .
- name: Get the version - name: Get the version
id: vars id: vars
run: | run: |
echo ::set-output name=sha::$(echo ${GITHUB_SHA:0:7}) echo ::set-output name=sha::$(echo ${GITHUB_SHA:0:7})
echo ::set-output name=tag::$(echo ${GITHUB_REF#refs/tags/}) echo ::set-output name=tag::$(echo ${GITHUB_REF#refs/tags/})
- name: Tag docker image - name: Docker Login to Github Registry
run: docker tag cerc-io/go-ethereum git.vdb.to/cerc-io/go-ethereum/go-ethereum:${{steps.vars.outputs.sha}} run: echo ${{ secrets.GITHUB_TOKEN }} | docker login https://docker.pkg.github.com -u vulcanize --password-stdin
- name: Tag docker image - name: Docker Pull
run: docker tag git.vdb.to/cerc-io/go-ethereum/go-ethereum:${{steps.vars.outputs.sha}} git.vdb.to/cerc-io/go-ethereum/go-ethereum:${{steps.vars.outputs.tag}} run: docker pull docker.pkg.github.com/vulcanize/go-ethereum/go-ethereum:${{steps.vars.outputs.sha}}
- name: Docker Login
run: echo ${{ secrets.GITEA_PUBLISH_TOKEN }} | docker login https://git.vdb.to -u cerccicd --password-stdin
- name: Docker Push
run: docker push git.vdb.to/cerc-io/go-ethereum/go-ethereum:${{steps.vars.outputs.sha}}
- name: Docker Push TAGGED
run: docker push git.vdb.to/cerc-io/go-ethereum/go-ethereum:${{steps.vars.outputs.tag}}
- name: Copy ethereum binary file - name: Copy ethereum binary file
run: docker run --rm --entrypoint cat git.vdb.to/cerc-io/go-ethereum/go-ethereum:${{steps.vars.outputs.sha}} /usr/local/bin/geth > geth-linux-amd64 run: docker run --rm --entrypoint cat docker.pkg.github.com/vulcanize/go-ethereum/go-ethereum:${{steps.vars.outputs.sha}} /usr/local/bin/geth > geth-linux-amd64
- name: curl - name: Docker Login to Docker Registry
uses: enflo/curl-action@master run: echo ${{ secrets.VULCANIZEJENKINS_PAT }} | docker login -u vulcanizejenkins --password-stdin
- name: Tag docker image
run: docker tag docker.pkg.github.com/vulcanize/go-ethereum/go-ethereum:${{steps.vars.outputs.sha}} vulcanize/vdb-geth:${{steps.vars.outputs.tag}}
- name: Docker Push to Docker Hub
run: docker push vulcanize/vdb-geth:${{steps.vars.outputs.tag}}
- name: Get release
id: get_release
uses: bruceadams/get-release@v1.2.0
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Upload Release Asset
id: upload-release-asset
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with: with:
curl: --user cerccicd:${{ secrets.GITEA_PUBLISH_TOKEN }} --upload-file geth-linux-amd64 https://git.vdb.to/api/packages/cerc-io/generic/go-ethereum/${{steps.vars.outputs.tag}}/geth-linux-amd64 upload_url: ${{ steps.get_release.outputs.upload_url }}
asset_path: geth-linux-amd64
asset_name: geth-linux-amd64
asset_content_type: application/octet-stream


@ -1,150 +0,0 @@
name: Tests for Geth that are used in multiple jobs.
on:
workflow_call:
env:
stack-orchestrator-ref: ${{ github.event.inputs.stack-orchestrator-ref || 'e62830c982d4dfc5f3c1c2b12c1754a7e9b538f1'}}
ipld-eth-db-ref: ${{ github.event.inputs.ipld-eth-db-ref || '1b922dbff350bfe2a9aec5fe82079e9d855ea7ed' }}
GOPATH: /tmp/go
jobs:
build:
name: Run docker build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Run docker build
run: docker build -t cerc-io/go-ethereum .
geth-unit-test:
name: Run geth unit test
runs-on: ubuntu-latest
env:
GO111MODULE: on
steps:
- name: Create GOPATH
run: mkdir -p /tmp/go
- uses: actions/setup-go@v3
with:
go-version: "1.19"
check-latest: true
- name: Checkout code
uses: actions/checkout@v2
- name: Run unit tests
run: |
make test
statediff-unit-test:
name: Run state diff unit test
runs-on: ubuntu-latest
steps:
- name: Create GOPATH
run: mkdir -p /tmp/go
- uses: actions/setup-go@v3
with:
go-version: "1.19"
check-latest: true
- name: Checkout code
uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
ref: ${{ env.ipld-eth-db-ref }}
repository: cerc-io/ipld-eth-db
path: "./ipld-eth-db/"
fetch-depth: 0
- name: Build ipld-eth-db
run: |
docker build -f ./ipld-eth-db/Dockerfile ./ipld-eth-db/ -t cerc/ipld-eth-db:local
- name: Run docker compose
run: |
docker-compose up -d
- name: Give the migration a few seconds
run: sleep 30;
- name: Run unit tests
run: make statedifftest
private-network-test:
name: Start Geth in a private network.
runs-on: ubuntu-latest
steps:
- name: Create GOPATH
run: mkdir -p /tmp/go
- uses: actions/setup-go@v3
with:
go-version: "1.19"
check-latest: true
- name: Checkout code
uses: actions/checkout@v3
with:
path: "./go-ethereum"
- uses: actions/checkout@v3
with:
ref: ${{ env.stack-orchestrator-ref }}
path: "./stack-orchestrator/"
repository: cerc-io/mshaw_stack_hack
fetch-depth: 0
- uses: actions/checkout@v3
with:
ref: ${{ env.ipld-eth-db-ref }}
repository: cerc-io/ipld-eth-db
path: "./ipld-eth-db/"
fetch-depth: 0
- name: Create config file
run: |
echo vulcanize_ipld_eth_db=$GITHUB_WORKSPACE/ipld-eth-db/ > $GITHUB_WORKSPACE/config.sh
echo vulcanize_go_ethereum=$GITHUB_WORKSPACE/go-ethereum/ >> $GITHUB_WORKSPACE/config.sh
echo db_write=true >> $GITHUB_WORKSPACE/config.sh
echo genesis_file_path=start-up-files/go-ethereum/genesis.json >> $GITHUB_WORKSPACE/config.sh
cat $GITHUB_WORKSPACE/config.sh
- name: Compile Geth
run: |
cd $GITHUB_WORKSPACE/stack-orchestrator/helper-scripts
./compile-geth.sh -e docker -p $GITHUB_WORKSPACE/config.sh
cd -
- name: Run docker compose
run: |
docker-compose \
-f "$GITHUB_WORKSPACE/stack-orchestrator/docker/local/docker-compose-go-ethereum.yml" \
-f "$GITHUB_WORKSPACE/stack-orchestrator/docker/local/docker-compose-db-sharding.yml" \
--env-file $GITHUB_WORKSPACE/config.sh \
up -d --build
- name: Make sure the /root/transaction_info/STATEFUL_TEST_DEPLOYED_ADDRESS exists within a certain time frame.
shell: bash
run: |
COUNT=0
ATTEMPTS=15
docker logs local_go-ethereum_1
docker compose -f "$GITHUB_WORKSPACE/stack-orchestrator/docker/local/docker-compose-db-sharding.yml" -f "$GITHUB_WORKSPACE/stack-orchestrator/docker/local/docker-compose-go-ethereum.yml" exec go-ethereum ps aux
until $(docker compose -f "$GITHUB_WORKSPACE/stack-orchestrator/docker/local/docker-compose-db-sharding.yml" -f "$GITHUB_WORKSPACE/stack-orchestrator/docker/local/docker-compose-go-ethereum.yml" cp go-ethereum:/root/transaction_info/STATEFUL_TEST_DEPLOYED_ADDRESS ./STATEFUL_TEST_DEPLOYED_ADDRESS) || [[ $COUNT -eq $ATTEMPTS ]]; do echo -e "$(( COUNT++ ))... \c"; sleep 10; done
[[ $COUNT -eq $ATTEMPTS ]] && echo "Could not find the successful contract deployment" && (exit 1)
cat ./STATEFUL_TEST_DEPLOYED_ADDRESS
echo "Address length: `wc ./STATEFUL_TEST_DEPLOYED_ADDRESS`"
sleep 15;
- name: Create a new transaction.
shell: bash
run: |
docker logs local_go-ethereum_1
docker compose -f "$GITHUB_WORKSPACE/stack-orchestrator/docker/local/docker-compose-db-sharding.yml" -f "$GITHUB_WORKSPACE/stack-orchestrator/docker/local/docker-compose-go-ethereum.yml" exec go-ethereum /bin/bash /root/transaction_info/NEW_TRANSACTION
echo $?
- name: Make sure we see entries in the header table
shell: bash
run: |
rows=$(docker compose -f "$GITHUB_WORKSPACE/stack-orchestrator/docker/local/docker-compose-db-sharding.yml" -f "$GITHUB_WORKSPACE/stack-orchestrator/docker/local/docker-compose-go-ethereum.yml" exec ipld-eth-db psql -U vdbm -d vulcanize_testing -AXqtc "SELECT COUNT(*) FROM eth.header_cids")
[[ "$rows" -lt "1" ]] && echo "We could not find any rows in postgres table." && (exit 1)
echo $rows
docker compose -f "$GITHUB_WORKSPACE/stack-orchestrator/docker/local/docker-compose-db-sharding.yml" -f "$GITHUB_WORKSPACE/stack-orchestrator/docker/local/docker-compose-go-ethereum.yml" exec ipld-eth-db psql -U vdbm -d vulcanize_testing -AXqtc "SELECT * FROM eth.header_cids"

16
.gitignore vendored

@ -47,19 +47,3 @@ profile.cov
/dashboard/assets/package-lock.json /dashboard/assets/package-lock.json
**/yarn-error.log **/yarn-error.log
logs/
foundry/deployments/local-private-network/geth-linux-amd64
foundry/projects/local-private-network/geth-linux-amd64
# Helpful repos
related-repositories/foundry-test/**
related-repositories/hive/**
related-repositories/ipld-eth-db/**
related-repositories/foundry-test/
related-repositories/ipld-eth-db/
# files generated by statediffing tests
statediff/indexer/database/sql/statediffing_test_file.sql
statediff/statediffing_test_file.sql
statediff/known_gaps.sql
statediff/indexer/database/file/statediffing_test


@ -12,29 +12,17 @@ run:
linters: linters:
disable-all: true disable-all: true
enable: enable:
- deadcode
- goconst - goconst
- goimports - goimports
- gosimple - gosimple
- govet - govet
- ineffassign - ineffassign
- misspell - misspell
# - staticcheck
- unconvert - unconvert
- typecheck # - unused
- unused - varcheck
- staticcheck
- bidichk
- durationcheck
- exportloopref
- whitespace
# - structcheck # lots of false positives
# - errcheck #lot of false positives
# - contextcheck
# - errchkjson # lots of false positives
# - errorlint # this check crashes
# - exhaustive # silly check
# - makezero # false positives
# - nilerr # several intentional
linters-settings: linters-settings:
gofmt: gofmt:
@ -45,20 +33,18 @@ linters-settings:
issues: issues:
exclude-rules: exclude-rules:
- path: crypto/bn256/cloudflare/optate.go - path: crypto/blake2b/
linters:
- deadcode
- path: crypto/bn256/cloudflare
linters:
- deadcode
- path: p2p/discv5/
linters:
- deadcode
- path: core/vm/instructions_test.go
linters:
- goconst
- path: cmd/faucet/
linters: linters:
- deadcode - deadcode
- staticcheck
- path: internal/build/pgp.go
text: 'SA1019: "golang.org/x/crypto/openpgp" is deprecated: this package is unmaintained except for security fixes.'
- path: core/vm/contracts.go
text: 'SA1019: "golang.org/x/crypto/ripemd160" is deprecated: RIPEMD-160 is a legacy hash and should not be used for new applications.'
- path: accounts/usbwallet/trezor.go
text: 'SA1019: "github.com/golang/protobuf/proto" is deprecated: Use the "google.golang.org/protobuf/proto" package instead.'
- path: accounts/usbwallet/trezor/
text: 'SA1019: "github.com/golang/protobuf/proto" is deprecated: Use the "google.golang.org/protobuf/proto" package instead.'
exclude:
- 'SA1019: event.TypeMux is deprecated: use Feed'
- 'SA1019: strings.Title is deprecated'
- 'SA1019: strings.Title has been deprecated since Go 1.18 and an alternative has been available since Go 1.0: The rule Title uses for word boundaries does not handle Unicode punctuation properly. Use golang.org/x/text/cases instead.'
- 'SA1029: should not use built-in type string as key for value'

274
.mailmap

@ -1,237 +1,123 @@
Aaron Buchwald <aaron.buchwald56@gmail.com> Jeffrey Wilcke <jeffrey@ethereum.org>
Jeffrey Wilcke <jeffrey@ethereum.org> <geffobscura@gmail.com>
Jeffrey Wilcke <jeffrey@ethereum.org> <obscuren@obscura.com>
Jeffrey Wilcke <jeffrey@ethereum.org> <obscuren@users.noreply.github.com>
Aaron Kumavis <kumavis@users.noreply.github.com> Viktor Trón <viktor.tron@gmail.com>
Abel Nieto <abel.nieto90@gmail.com> Joseph Goulden <joegoulden@gmail.com>
Abel Nieto <abel.nieto90@gmail.com> <anietoro@uwaterloo.ca>
Afri Schoedon <58883403+q9f@users.noreply.github.com> Nick Savers <nicksavers@gmail.com>
Afri Schoedon <5chdn@users.noreply.github.com> <58883403+q9f@users.noreply.github.com>
Alec Perseghin <aperseghin@gmail.com> Maran Hidskes <maran.hidskes@gmail.com>
Aleksey Smyrnov <i@soar.name> Taylor Gerring <taylor.gerring@gmail.com>
Taylor Gerring <taylor.gerring@gmail.com> <taylor.gerring@ethereum.org>
Alex Leverington <alex@ethdev.com>
Alex Leverington <alex@ethdev.com> <subtly@users.noreply.github.com>
Alex Pozhilenkov <alex_pozhilenkov@adoriasoft.com>
Alex Pozhilenkov <alex_pozhilenkov@adoriasoft.com> <leshiy12345678@gmail.com>
Alexey Akhunov <akhounov@gmail.com>
Alon Muroch <alonmuroch@gmail.com>
Andrey Petrov <shazow@gmail.com>
Andrey Petrov <shazow@gmail.com> <andrey.petrov@shazow.net>
Arkadiy Paronyan <arkadiy@ethdev.com>
Armin Braun <me@obrown.io>
Aron Fischer <github@aron.guru> <homotopycolimit@users.noreply.github.com>
Austin Roberts <code@ausiv.com>
Austin Roberts <code@ausiv.com> <git@ausiv.com>
Bas van Kervel <bas@ethdev.com> Bas van Kervel <bas@ethdev.com>
Bas van Kervel <bas@ethdev.com> <basvankervel@ziggo.nl> Bas van Kervel <bas@ethdev.com> <basvankervel@ziggo.nl>
Bas van Kervel <bas@ethdev.com> <basvankervel@gmail.com> Bas van Kervel <bas@ethdev.com> <basvankervel@gmail.com>
Bas van Kervel <bas@ethdev.com> <bas-vk@users.noreply.github.com> Bas van Kervel <bas@ethdev.com> <bas-vk@users.noreply.github.com>
Boqin Qin <bobbqqin@bupt.edu.cn> Sven Ehlert <sven@ethdev.com>
Boqin Qin <bobbqqin@bupt.edu.cn> <Bobbqqin@gmail.com>
Casey Detrio <cdetrio@gmail.com> Vitalik Buterin <v@buterin.com>
Cheng Li <lob4tt@gmail.com> Marian Oancea <contact@siteshop.ro>
Chris Ziogas <ziogaschr@gmail.com>
Chris Ziogas <ziogaschr@gmail.com> <ziogas_chr@hotmail.com>
Christoph Jentzsch <jentzsch.software@gmail.com> Christoph Jentzsch <jentzsch.software@gmail.com>
Diederik Loerakker <proto@protolambda.com> Heiko Hees <heiko@heiko.org>
Alex Leverington <alex@ethdev.com>
Alex Leverington <alex@ethdev.com> <subtly@users.noreply.github.com>
Zsolt Felföldi <zsfelfoldi@gmail.com>
Gavin Wood <i@gavwood.com>
Martin Becze <mjbecze@gmail.com>
Martin Becze <mjbecze@gmail.com> <wanderer@users.noreply.github.com>
Dimitry Khokhlov <winsvega@mail.ru> Dimitry Khokhlov <winsvega@mail.ru>
Domino Valdano <dominoplural@gmail.com> Roman Mandeleil <roman.mandeleil@gmail.com>
Domino Valdano <dominoplural@gmail.com> <jeff@okcupid.com>
Edgar Aroutiounian <edgar.factorial@gmail.com> Alec Perseghin <aperseghin@gmail.com>
Elliot Shepherd <elliot@identitii.com> Alon Muroch <alonmuroch@gmail.com>
Arkadiy Paronyan <arkadiy@ethdev.com>
Jae Kwon <jkwon.work@gmail.com>
Aaron Kumavis <kumavis@users.noreply.github.com>
Nick Dodson <silentcicero@outlook.com>
Jason Carver <jacarver@linkedin.com>
Jason Carver <jacarver@linkedin.com> <ut96caarrs@snkmail.com>
Joseph Chow <ethereum@outlook.com>
Joseph Chow <ethereum@outlook.com> ethers <TODO>
Enrique Fynn <enriquefynn@gmail.com> Enrique Fynn <enriquefynn@gmail.com>
Enrique Fynn <me@enriquefynn.com> Vincent G <caktux@gmail.com>
Enrique Fynn <me@enriquefynn.com> <enriquefynn@gmail.com>
Ernesto del Toro <ernesto.deltoro@gmail.com> RJ Catalano <catalanor0220@gmail.com>
Ernesto del Toro <ernesto.deltoro@gmail.com> <ernestodeltoro@users.noreply.github.com> RJ Catalano <catalanor0220@gmail.com> <rj@erisindustries.com>
Everton Fraga <ev@ethereum.org> Nchinda Nchinda <nchinda2@gmail.com>
Aron Fischer <github@aron.guru> <homotopycolimit@users.noreply.github.com>
Vlad Gluhovsky <gluk256@users.noreply.github.com>
Ville Sundell <github@solarius.fi>
Elliot Shepherd <elliot@identitii.com>
Yohann Léon <sybiload@gmail.com>
Gregg Dourgarian <greggd@tempworks.com>
Casey Detrio <cdetrio@gmail.com>
Jens Agerberg <github@agerberg.me>
Nick Johnson <arachnid@notdot.net>
Henning Diedrich <hd@eonblast.com>
Henning Diedrich <hd@eonblast.com> Drake Burroughs <wildfyre@hotmail.com>
Felix Lange <fjl@twurst.com> Felix Lange <fjl@twurst.com>
Felix Lange <fjl@twurst.com> <fjl@users.noreply.github.com> Felix Lange <fjl@twurst.com> <fjl@users.noreply.github.com>
Максим Чусовлянов <mchusovlianov@gmail.com>
Louis Holbrook <dev@holbrook.no>
Louis Holbrook <dev@holbrook.no> <nolash@users.noreply.github.com>
Thomas Bocek <tom@tomp2p.net>
Victor Tran <vu.tran54@gmail.com>
Justin Drake <drakefjustin@gmail.com>
Frank Wang <eternnoir@gmail.com> Frank Wang <eternnoir@gmail.com>
Gary Rong <garyrong0905@gmail.com> Gary Rong <garyrong0905@gmail.com>
Gavin Wood <i@gavwood.com>
Gregg Dourgarian <greggd@tempworks.com>
Guillaume Ballet <gballet@gmail.com>
Guillaume Ballet <gballet@gmail.com> <3272758+gballet@users.noreply.github.com>
Guillaume Nicolas <guin56@gmail.com> Guillaume Nicolas <guin56@gmail.com>
Hanjiang Yu <delacroix.yu@gmail.com>
Hanjiang Yu <delacroix.yu@gmail.com> <42531996+de1acr0ix@users.noreply.github.com>
Heiko Hees <heiko@heiko.org>
Henning Diedrich <hd@eonblast.com>
Henning Diedrich <hd@eonblast.com> Drake Burroughs <wildfyre@hotmail.com>
Hwanjo Heo <34005989+hwanjo@users.noreply.github.com>
Iskander (Alex) Sharipov <quasilyte@gmail.com>
Iskander (Alex) Sharipov <quasilyte@gmail.com> <i.sharipov@corp.vk.com>
Jae Kwon <jkwon.work@gmail.com>
Janoš Guljaš <janos@resenje.org> <janos@users.noreply.github.com>
Janoš Guljaš <janos@resenje.org> Janos Guljas <janos@resenje.org>
Jared Wasinger <j-wasinger@hotmail.com>
Jason Carver <jacarver@linkedin.com>
Jason Carver <jacarver@linkedin.com> <ut96caarrs@snkmail.com>
Javier Peletier <jm@epiclabs.io>
Javier Peletier <jm@epiclabs.io> <jpeletier@users.noreply.github.com>
Jeffrey Wilcke <jeffrey@ethereum.org>
Jeffrey Wilcke <jeffrey@ethereum.org> <geffobscura@gmail.com>
Jeffrey Wilcke <jeffrey@ethereum.org> <obscuren@obscura.com>
Jeffrey Wilcke <jeffrey@ethereum.org> <obscuren@users.noreply.github.com>
Jens Agerberg <github@agerberg.me>
Joseph Chow <ethereum@outlook.com>
Joseph Chow <ethereum@outlook.com> ethers <TODO>
Joseph Goulden <joegoulden@gmail.com>
Justin Drake <drakefjustin@gmail.com>
Kenso Trabing <ktrabing@acm.org>
Kenso Trabing <ktrabing@acm.org> <kenso.trabing@bloomwebsite.com>
Liang Ma <liangma@liangbit.com>
Liang Ma <liangma@liangbit.com> <liangma.ul@gmail.com>
Louis Holbrook <dev@holbrook.no>
Louis Holbrook <dev@holbrook.no> <nolash@users.noreply.github.com>
Maran Hidskes <maran.hidskes@gmail.com>
Marian Oancea <contact@siteshop.ro>
Martin Becze <mjbecze@gmail.com>
Martin Becze <mjbecze@gmail.com> <wanderer@users.noreply.github.com>
Martin Lundfall <martin.lundfall@protonmail.com>
Matt Garnett <14004106+lightclient@users.noreply.github.com>
Matthew Halpern <matthalp@gmail.com>
Matthew Halpern <matthalp@gmail.com> <matthalp@google.com>
Michael Riabzev <michael@starkware.co>
Nchinda Nchinda <nchinda2@gmail.com>
Nick Dodson <silentcicero@outlook.com>
Nick Johnson <arachnid@notdot.net>
Nick Savers <nicksavers@gmail.com>
Nishant Das <nishdas93@gmail.com>
Nishant Das <nishdas93@gmail.com> <nish1993@hotmail.com>
Olivier Hervieu <olivier.hervieu@gmail.com>
Pascal Dierich <pascal@merkleplant.xyz>
Pascal Dierich <pascal@merkleplant.xyz> <pascal@pascaldierich.com>
RJ Catalano <catalanor0220@gmail.com>
RJ Catalano <catalanor0220@gmail.com> <rj@erisindustries.com>
Ralph Caraveo <deckarep@gmail.com>
Rene Lubov <41963722+renaynay@users.noreply.github.com>
Robert Zaremba <robert@zaremba.ch>
Robert Zaremba <robert@zaremba.ch> <robert.zaremba@scale-it.pl>
Roman Mandeleil <roman.mandeleil@gmail.com>
Sorin Neacsu <sorin.neacsu@gmail.com> Sorin Neacsu <sorin.neacsu@gmail.com>
Sorin Neacsu <sorin.neacsu@gmail.com> <sorin@users.noreply.github.com> Sorin Neacsu <sorin.neacsu@gmail.com> <sorin@users.noreply.github.com>
Sven Ehlert <sven@ethdev.com>
Taylor Gerring <taylor.gerring@gmail.com>
Taylor Gerring <taylor.gerring@gmail.com> <taylor.gerring@ethereum.org>
Thomas Bocek <tom@tomp2p.net>
Tim Cooijmans <timcooijmans@gmail.com>
Valentin Wüstholz <wuestholz@gmail.com> Valentin Wüstholz <wuestholz@gmail.com>
Valentin Wüstholz <wuestholz@gmail.com> <wuestholz@users.noreply.github.com> Valentin Wüstholz <wuestholz@gmail.com> <wuestholz@users.noreply.github.com>
Victor Tran <vu.tran54@gmail.com> Armin Braun <me@obrown.io>
Viktor Trón <viktor.tron@gmail.com> Ernesto del Toro <ernesto.deltoro@gmail.com>
Ernesto del Toro <ernesto.deltoro@gmail.com> <ernestodeltoro@users.noreply.github.com>
Ville Sundell <github@solarius.fi>
Vincent G <caktux@gmail.com>
Vitalik Buterin <v@buterin.com>
Vlad Gluhovsky <gluk256@gmail.com>
Vlad Gluhovsky <gluk256@gmail.com> <gluk256@users.noreply.github.com>
Wenshao Zhong <wzhong20@uic.edu>
Wenshao Zhong <wzhong20@uic.edu> <11510383@mail.sustc.edu.cn>
Wenshao Zhong <wzhong20@uic.edu> <374662347@qq.com>
Will Villanueva <hello@willvillanueva.com>
Xiaobing Jiang <s7v7nislands@gmail.com>
Xudong Liu <33193253+r1cs@users.noreply.github.com>
Yohann Léon <sybiload@gmail.com>
Zachinquarantine <Zachinquarantine@protonmail.com>
Zachinquarantine <Zachinquarantine@protonmail.com> <zachinquarantine@yahoo.com>
Ziyuan Zhong <zzy.albert@163.com>
Zsolt Felföldi <zsfelfoldi@gmail.com>
meowsbits <b5c6@protonmail.com>
meowsbits <b5c6@protonmail.com> <45600330+meowsbits@users.noreply.github.com>
nedifi <103940716+nedifi@users.noreply.github.com>
Максим Чусовлянов <mchusovlianov@gmail.com>


@ -1,189 +0,0 @@
language: go
go_import_path: github.com/ethereum/go-ethereum
sudo: false
jobs:
allow_failures:
- stage: build
os: osx
env:
- azure-osx
include:
# This builder only tests code linters on latest version of Go
- stage: lint
os: linux
dist: bionic
go: 1.20.x
env:
- lint
git:
submodules: false # avoid cloning ethereum/tests
script:
- go run build/ci.go lint
# These builders create the Docker sub-images for multi-arch push and each
# will attempt to push the multi-arch image if they are the last builder
- stage: build
if: type = push
os: linux
arch: amd64
dist: bionic
go: 1.20.x
env:
- docker
services:
- docker
git:
submodules: false # avoid cloning ethereum/tests
before_install:
- export DOCKER_CLI_EXPERIMENTAL=enabled
script:
- go run build/ci.go docker -image -manifest amd64,arm64 -upload ethereum/client-go
- stage: build
if: type = push
os: linux
arch: arm64
dist: bionic
go: 1.20.x
env:
- docker
services:
- docker
git:
submodules: false # avoid cloning ethereum/tests
before_install:
- export DOCKER_CLI_EXPERIMENTAL=enabled
script:
- go run build/ci.go docker -image -manifest amd64,arm64 -upload ethereum/client-go
# This builder does the Linux Azure uploads
- stage: build
if: type = push
os: linux
dist: bionic
sudo: required
go: 1.20.x
env:
- azure-linux
- GO111MODULE=on
git:
submodules: false # avoid cloning ethereum/tests
addons:
apt:
packages:
- gcc-multilib
script:
# Build for the primary platforms that Trusty can manage
- go run build/ci.go install -dlgo
- go run build/ci.go archive -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
- go run build/ci.go install -dlgo -arch 386
- go run build/ci.go archive -arch 386 -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
# Switch over GCC to cross compilation (breaks 386, hence why do it here only)
- sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install gcc-arm-linux-gnueabi libc6-dev-armel-cross gcc-arm-linux-gnueabihf libc6-dev-armhf-cross gcc-aarch64-linux-gnu libc6-dev-arm64-cross
- sudo ln -s /usr/include/asm-generic /usr/include/asm
- GOARM=5 go run build/ci.go install -dlgo -arch arm -cc arm-linux-gnueabi-gcc
- GOARM=5 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
- GOARM=6 go run build/ci.go install -dlgo -arch arm -cc arm-linux-gnueabi-gcc
- GOARM=6 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
- GOARM=7 go run build/ci.go install -dlgo -arch arm -cc arm-linux-gnueabihf-gcc
- GOARM=7 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
- go run build/ci.go install -dlgo -arch arm64 -cc aarch64-linux-gnu-gcc
- go run build/ci.go archive -arch arm64 -type tar -signer LINUX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
# This builder does the OSX Azure uploads
- stage: build
if: type = push
os: osx
go: 1.20.x
env:
- azure-osx
- GO111MODULE=on
git:
submodules: false # avoid cloning ethereum/tests
script:
- go run build/ci.go install -dlgo
- go run build/ci.go archive -type tar -signer OSX_SIGNING_KEY -signify SIGNIFY_KEY -upload gethstore/builds
# These builders run the tests
- stage: build
os: linux
arch: amd64
dist: bionic
go: 1.20.x
env:
- GO111MODULE=on
script:
- go run build/ci.go test $TEST_PACKAGES
- stage: build
if: type = pull_request
os: linux
arch: arm64
dist: bionic
go: 1.19.x
env:
- GO111MODULE=on
script:
- go run build/ci.go test $TEST_PACKAGES
- stage: build
os: linux
dist: bionic
go: 1.19.x
env:
- GO111MODULE=on
script:
- go run build/ci.go test $TEST_PACKAGES
# This builder does the Ubuntu PPA nightly uploads
- stage: build
if: type = cron || (type = push && tag ~= /^v[0-9]/)
os: linux
dist: bionic
go: 1.20.x
env:
- ubuntu-ppa
- GO111MODULE=on
git:
submodules: false # avoid cloning ethereum/tests
addons:
apt:
packages:
- devscripts
- debhelper
- dput
- fakeroot
- python-bzrlib
- python-paramiko
script:
- echo '|1|7SiYPr9xl3uctzovOTj4gMwAC1M=|t6ReES75Bo/PxlOPJ6/GsGbTrM0= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0aKz5UTUndYgIGG7dQBV+HaeuEZJ2xPHo2DS2iSKvUL4xNMSAY4UguNW+pX56nAQmZKIZZ8MaEvSj6zMEDiq6HFfn5JcTlM80UwlnyKe8B8p7Nk06PPQLrnmQt5fh0HmEcZx+JU9TZsfCHPnX7MNz4ELfZE6cFsclClrKim3BHUIGq//t93DllB+h4O9LHjEUsQ1Sr63irDLSutkLJD6RXchjROXkNirlcNVHH/jwLWR5RcYilNX7S5bIkK8NlWPjsn/8Ua5O7I9/YoE97PpO6i73DTGLh5H9JN/SITwCKBkgSDWUt61uPK3Y11Gty7o2lWsBjhBUm2Y38CBsoGmBw==' >> ~/.ssh/known_hosts
- go run build/ci.go debsrc -upload ethereum/ethereum -sftp-user geth-ci -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>"
# This builder does the Azure archive purges to avoid accumulating junk
- stage: build
if: type = cron
os: linux
dist: bionic
go: 1.20.x
env:
- azure-purge
- GO111MODULE=on
git:
submodules: false # avoid cloning ethereum/tests
script:
- go run build/ci.go purge -store gethstore/builds -days 14
# This builder executes race tests
- stage: build
if: type = cron
os: linux
dist: bionic
go: 1.20.x
env:
- GO111MODULE=on
script:
- go run build/ci.go test -race $TEST_PACKAGES

260
AUTHORS

@ -1,46 +1,27 @@
# This is the official list of go-ethereum authors for copyright purposes. # This is the official list of go-ethereum authors for copyright purposes.
6543 <6543@obermui.de>
a e r t h <aerth@users.noreply.github.com> a e r t h <aerth@users.noreply.github.com>
Aaron Buchwald <aaron.buchwald56@gmail.com>
Abel Nieto <abel.nieto90@gmail.com> Abel Nieto <abel.nieto90@gmail.com>
Abel Nieto <anietoro@uwaterloo.ca>
Adam Babik <a.babik@designfortress.com> Adam Babik <a.babik@designfortress.com>
Adam Schmideg <adamschmideg@users.noreply.github.com>
Aditya <adityasripal@gmail.com> Aditya <adityasripal@gmail.com>
Aditya Arora <arora.aditya520@gmail.com>
Adrià Cidre <adria.cidre@gmail.com> Adrià Cidre <adria.cidre@gmail.com>
Afanasii Kurakin <afanasy@users.noreply.github.com>
Afri Schoedon <5chdn@users.noreply.github.com> Afri Schoedon <5chdn@users.noreply.github.com>
Agustin Armellini Fischer <armellini13@gmail.com> Agustin Armellini Fischer <armellini13@gmail.com>
Ahyun <urbanart2251@gmail.com>
Airead <fgh1987168@gmail.com> Airead <fgh1987168@gmail.com>
Alan Chen <alanchchen@users.noreply.github.com> Alan Chen <alanchchen@users.noreply.github.com>
Alejandro Isaza <alejandro.isaza@gmail.com> Alejandro Isaza <alejandro.isaza@gmail.com>
Aleksey Smyrnov <i@soar.name>
Ales Katona <ales@coinbase.com> Ales Katona <ales@coinbase.com>
Alex Beregszaszi <alex@rtfs.hu>
Alex Leverington <alex@ethdev.com> Alex Leverington <alex@ethdev.com>
Alex Mazalov <mazalov@gmail.com>
Alex Pozhilenkov <alex_pozhilenkov@adoriasoft.com>
Alex Prut <1648497+alexprut@users.noreply.github.com>
Alex Wu <wuyiding@gmail.com> Alex Wu <wuyiding@gmail.com>
Alexander van der Meij <alexandervdm@users.noreply.github.com>
Alexander Yastrebov <yastrebov.alex@gmail.com>
Alexandre Van de Sande <alex.vandesande@ethdev.com> Alexandre Van de Sande <alex.vandesande@ethdev.com>
Alexey Akhunov <akhounov@gmail.com>
Alexey Shekhirin <a.shekhirin@gmail.com>
alexwang <39109351+dipingxian2@users.noreply.github.com>
Ali Atiia <42751398+aliatiia@users.noreply.github.com>
Ali Hajimirza <Ali92hm@users.noreply.github.com> Ali Hajimirza <Ali92hm@users.noreply.github.com>
am2rican5 <am2rican5@gmail.com> am2rican5 <am2rican5@gmail.com>
AmitBRD <60668103+AmitBRD@users.noreply.github.com>
Anatole <62328077+a2br@users.noreply.github.com>
Andrea Franz <andrea@gravityblast.com> Andrea Franz <andrea@gravityblast.com>
Andrei Maiboroda <andrei@ethereum.org> Andrey Petrov <andrey.petrov@shazow.net>
Andrey Petrov <shazow@gmail.com> Andrey Petrov <shazow@gmail.com>
ANOTHEL <anothel1@naver.com> ANOTHEL <anothel1@naver.com>
Antoine Rondelet <rondelet.antoine@gmail.com> Antoine Rondelet <rondelet.antoine@gmail.com>
Antoine Toulme <atoulme@users.noreply.github.com>
Anton Evangelatov <anton.evangelatov@gmail.com> Anton Evangelatov <anton.evangelatov@gmail.com>
Antonio Salazar Cardozo <savedfastcool@gmail.com> Antonio Salazar Cardozo <savedfastcool@gmail.com>
Arba Sasmoyo <arba.sasmoyo@gmail.com> Arba Sasmoyo <arba.sasmoyo@gmail.com>
@ -48,26 +29,19 @@ Armani Ferrante <armaniferrante@berkeley.edu>
Armin Braun <me@obrown.io> Armin Braun <me@obrown.io>
Aron Fischer <github@aron.guru> Aron Fischer <github@aron.guru>
atsushi-ishibashi <atsushi.ishibashi@finatext.com> atsushi-ishibashi <atsushi.ishibashi@finatext.com>
Austin Roberts <code@ausiv.com>
ayeowch <ayeowch@gmail.com> ayeowch <ayeowch@gmail.com>
b00ris <b00ris@mail.ru> b00ris <b00ris@mail.ru>
b1ackd0t <blackd0t@protonmail.com>
bailantaotao <Edwin@maicoin.com> bailantaotao <Edwin@maicoin.com>
baizhenxuan <nkbai@163.com> baizhenxuan <nkbai@163.com>
Balaji Shetty Pachai <32358081+balajipachai@users.noreply.github.com>
Balint Gabor <balint.g@gmail.com> Balint Gabor <balint.g@gmail.com>
baptiste-b-pegasys <85155432+baptiste-b-pegasys@users.noreply.github.com>
Bas van Kervel <bas@ethdev.com> Bas van Kervel <bas@ethdev.com>
Benjamin Brent <benjamin@benjaminbrent.com> Benjamin Brent <benjamin@benjaminbrent.com>
benma <mbencun@gmail.com> benma <mbencun@gmail.com>
Benoit Verkindt <benoit.verkindt@gmail.com> Benoit Verkindt <benoit.verkindt@gmail.com>
Binacs <bin646891055@gmail.com>
bloonfield <bloonfield@163.com> bloonfield <bloonfield@163.com>
Bo <bohende@gmail.com> Bo <bohende@gmail.com>
Bo Ye <boy.e.computer.1982@outlook.com> Bo Ye <boy.e.computer.1982@outlook.com>
Bob Glickstein <bobg@users.noreply.github.com> Bob Glickstein <bobg@users.noreply.github.com>
Boqin Qin <bobbqqin@bupt.edu.cn>
Brandon Harden <b.harden92@gmail.com>
Brent <bmperrea@gmail.com> Brent <bmperrea@gmail.com>
Brian Schroeder <bts@gmail.com> Brian Schroeder <bts@gmail.com>
Bruno Škvorc <bruno@skvorc.me> Bruno Škvorc <bruno@skvorc.me>
@ -75,58 +49,36 @@ C. Brown <hackdom@majoolr.io>
Caesar Chad <BLUE.WEB.GEEK@gmail.com> Caesar Chad <BLUE.WEB.GEEK@gmail.com>
Casey Detrio <cdetrio@gmail.com> Casey Detrio <cdetrio@gmail.com>
CDsigma <cdsigma271@gmail.com> CDsigma <cdsigma271@gmail.com>
Ceelog <chenwei@ceelog.org>
Ceyhun Onur <ceyhun.onur@avalabs.org>
chabashilah <doumodoumo@gmail.com>
changhong <changhong.yu@shanbay.com> changhong <changhong.yu@shanbay.com>
Chase Wright <mysticryuujin@gmail.com> Chase Wright <mysticryuujin@gmail.com>
Chen Quan <terasum@163.com> Chen Quan <terasum@163.com>
Cheng Li <lob4tt@gmail.com>
chenglin <910372762@qq.com>
chenyufeng <yufengcode@gmail.com> chenyufeng <yufengcode@gmail.com>
Chris Pacia <ctpacia@gmail.com>
Chris Ziogas <ziogaschr@gmail.com>
Christian Muehlhaeuser <muesli@gmail.com> Christian Muehlhaeuser <muesli@gmail.com>
Christoph Jentzsch <jentzsch.software@gmail.com> Christoph Jentzsch <jentzsch.software@gmail.com>
chuwt <weitaochu@gmail.com>
cong <ackratos@users.noreply.github.com> cong <ackratos@users.noreply.github.com>
Connor Stein <connor.stein@mail.mcgill.ca>
Corey Lin <514971757@qq.com> Corey Lin <514971757@qq.com>
courtier <derinilter@gmail.com>
cpusoft <cpusoft@live.com> cpusoft <cpusoft@live.com>
Crispin Flowerday <crispin@bitso.com> Crispin Flowerday <crispin@bitso.com>
croath <croathliu@gmail.com> croath <croathliu@gmail.com>
cui <523516579@qq.com> cui <523516579@qq.com>
Dan DeGreef <dan.degreef@gmail.com>
Dan Kinsley <dan@joincivil.com> Dan Kinsley <dan@joincivil.com>
Dan Sosedoff <dan.sosedoff@gmail.com>
Daniel A. Nagy <nagy.da@gmail.com> Daniel A. Nagy <nagy.da@gmail.com>
Daniel Perez <daniel@perez.sh>
Daniel Sloof <goapsychadelic@gmail.com> Daniel Sloof <goapsychadelic@gmail.com>
Darioush Jalali <darioush.jalali@avalabs.org>
Darrel Herbst <dherbst@gmail.com> Darrel Herbst <dherbst@gmail.com>
Dave Appleton <calistralabs@gmail.com> Dave Appleton <calistralabs@gmail.com>
Dave McGregor <dave.s.mcgregor@gmail.com> Dave McGregor <dave.s.mcgregor@gmail.com>
David Cai <davidcai1993@yahoo.com>
David Huie <dahuie@gmail.com> David Huie <dahuie@gmail.com>
Denver <aeharvlee@gmail.com>
Derek Chiang <me@derekchiang.com>
Derek Gottfrid <derek@codecubed.com> Derek Gottfrid <derek@codecubed.com>
Di Peng <pendyaaa@gmail.com>
Diederik Loerakker <proto@protolambda.com>
Diego Siqueira <DiSiqueira@users.noreply.github.com> Diego Siqueira <DiSiqueira@users.noreply.github.com>
Diep Pham <mrfavadi@gmail.com> Diep Pham <mrfavadi@gmail.com>
dipingxian2 <39109351+dipingxian2@users.noreply.github.com> dipingxian2 <39109351+dipingxian2@users.noreply.github.com>
divergencetech <94644849+divergencetech@users.noreply.github.com>
dm4 <sunrisedm4@gmail.com> dm4 <sunrisedm4@gmail.com>
Dmitrij Koniajev <dimchansky@gmail.com> Dmitrij Koniajev <dimchansky@gmail.com>
Dmitry Shulyak <yashulyak@gmail.com> Dmitry Shulyak <yashulyak@gmail.com>
Dmitry Zenovich <dzenovich@gmail.com>
Domino Valdano <dominoplural@gmail.com> Domino Valdano <dominoplural@gmail.com>
Domino Valdano <jeff@okcupid.com>
Dragan Milic <dragan@netice9.com> Dragan Milic <dragan@netice9.com>
dragonvslinux <35779158+dragononcrypto@users.noreply.github.com> dragonvslinux <35779158+dragononcrypto@users.noreply.github.com>
Edgar Aroutiounian <edgar.factorial@gmail.com>
Eduard S <eduardsanou@posteo.net>
Egon Elbre <egonelbre@gmail.com> Egon Elbre <egonelbre@gmail.com>
Elad <theman@elad.im> Elad <theman@elad.im>
Eli <elihanover@yahoo.com> Eli <elihanover@yahoo.com>
@ -134,189 +86,131 @@ Elias Naur <elias.naur@gmail.com>
Elliot Shepherd <elliot@identitii.com> Elliot Shepherd <elliot@identitii.com>
Emil <mursalimovemeel@gmail.com> Emil <mursalimovemeel@gmail.com>
emile <emile@users.noreply.github.com> emile <emile@users.noreply.github.com>
Emmanuel T Odeke <odeke@ualberta.ca> Enrique Fynn <enriquefynn@gmail.com>
Eng Zer Jun <engzerjun@gmail.com>
Enrique Fynn <me@enriquefynn.com> Enrique Fynn <me@enriquefynn.com>
Enrique Ortiz <hi@enriqueortiz.dev>
EOS Classic <info@eos-classic.io> EOS Classic <info@eos-classic.io>
Erichin <erichinbato@gmail.com> Erichin <erichinbato@gmail.com>
Ernesto del Toro <ernesto.deltoro@gmail.com> Ernesto del Toro <ernesto.deltoro@gmail.com>
Ethan Buchman <ethan@coinculture.info> Ethan Buchman <ethan@coinculture.info>
ethersphere <thesw@rm.eth> ethersphere <thesw@rm.eth>
Eugene Lepeico <eugenelepeico@gmail.com>
Eugene Valeyev <evgen.povt@gmail.com> Eugene Valeyev <evgen.povt@gmail.com>
Evangelos Pappas <epappas@evalonlabs.com> Evangelos Pappas <epappas@evalonlabs.com>
Everton Fraga <ev@ethereum.org>
Evgeny <awesome.observer@yandex.com> Evgeny <awesome.observer@yandex.com>
Evgeny Danilenko <6655321@bk.ru> Evgeny Danilenko <6655321@bk.ru>
evgk <evgeniy.kamyshev@gmail.com> evgk <evgeniy.kamyshev@gmail.com>
Evolution404 <35091674+Evolution404@users.noreply.github.com>
EXEC <execvy@gmail.com>
Fabian Vogelsteller <fabian@frozeman.de> Fabian Vogelsteller <fabian@frozeman.de>
Fabio Barone <fabio.barone.co@gmail.com> Fabio Barone <fabio.barone.co@gmail.com>
Fabio Berger <fabioberger1991@gmail.com> Fabio Berger <fabioberger1991@gmail.com>
FaceHo <facehoshi@gmail.com> FaceHo <facehoshi@gmail.com>
Felipe Strozberg <48066928+FelStroz@users.noreply.github.com>
Felix Lange <fjl@twurst.com> Felix Lange <fjl@twurst.com>
Ferenc Szabo <frncmx@gmail.com> Ferenc Szabo <frncmx@gmail.com>
ferhat elmas <elmas.ferhat@gmail.com> ferhat elmas <elmas.ferhat@gmail.com>
Ferran Borreguero <ferranbt@protonmail.com>
Fiisio <liangcszzu@163.com> Fiisio <liangcszzu@163.com>
Fire Man <55934298+basdevelop@users.noreply.github.com>
flowerofdream <775654398@qq.com>
fomotrader <82184770+fomotrader@users.noreply.github.com>
ForLina <471133417@qq.com>
Frank Szendzielarz <33515470+FrankSzendzielarz@users.noreply.github.com> Frank Szendzielarz <33515470+FrankSzendzielarz@users.noreply.github.com>
Frank Wang <eternnoir@gmail.com> Frank Wang <eternnoir@gmail.com>
Franklin <mr_franklin@126.com> Franklin <mr_franklin@126.com>
Furkan KAMACI <furkankamaci@gmail.com> Furkan KAMACI <furkankamaci@gmail.com>
Fuyang Deng <dengfuyang@outlook.com>
GagziW <leon.stanko@rwth-aachen.de> GagziW <leon.stanko@rwth-aachen.de>
Gary Rong <garyrong0905@gmail.com> Gary Rong <garyrong0905@gmail.com>
Gautam Botrel <gautam.botrel@gmail.com>
George Ornbo <george@shapeshed.com> George Ornbo <george@shapeshed.com>
Giuseppe Bertone <bertone.giuseppe@gmail.com>
Greg Colvin <greg@colvin.org>
Gregg Dourgarian <greggd@tempworks.com> Gregg Dourgarian <greggd@tempworks.com>
Gregory Markou <16929357+GregTheGreek@users.noreply.github.com>
Guifel <toowik@gmail.com>
Guilherme Salgado <gsalgado@gmail.com> Guilherme Salgado <gsalgado@gmail.com>
Guillaume Ballet <gballet@gmail.com> Guillaume Ballet <gballet@gmail.com>
Guillaume Nicolas <guin56@gmail.com> Guillaume Nicolas <guin56@gmail.com>
GuiltyMorishita <morilliantblue@gmail.com> GuiltyMorishita <morilliantblue@gmail.com>
Guruprasad Kamath <48196632+gurukamath@users.noreply.github.com>
Gus <yo@soygus.com> Gus <yo@soygus.com>
Gustav Simonsson <gustav.simonsson@gmail.com> Gustav Simonsson <gustav.simonsson@gmail.com>
Gísli Kristjánsson <gislik@hamstur.is> Gísli Kristjánsson <gislik@hamstur.is>
Ha ĐANG <dvietha@gmail.com> Ha ĐANG <dvietha@gmail.com>
HackyMiner <hackyminer@gmail.com> HackyMiner <hackyminer@gmail.com>
hadv <dvietha@gmail.com> hadv <dvietha@gmail.com>
Hanjiang Yu <delacroix.yu@gmail.com>
Hao Bryan Cheng <haobcheng@gmail.com> Hao Bryan Cheng <haobcheng@gmail.com>
Hao Duan <duanhao0814@gmail.com>
HAOYUatHZ <37070449+HAOYUatHZ@users.noreply.github.com> HAOYUatHZ <37070449+HAOYUatHZ@users.noreply.github.com>
Harry Dutton <me@bytejedi.com>
haryu703 <34744512+haryu703@users.noreply.github.com>
Hendrik Hofstadt <hendrik@nexantic.com>
Henning Diedrich <hd@eonblast.com> Henning Diedrich <hd@eonblast.com>
henopied <13500516+henopied@users.noreply.github.com>
hero5512 <lvshuaino@gmail.com>
holisticode <holistic.computing@gmail.com> holisticode <holistic.computing@gmail.com>
Hongbin Mao <hello2mao@gmail.com> Hongbin Mao <hello2mao@gmail.com>
Hsien-Tang Kao <htkao@pm.me> Hsien-Tang Kao <htkao@pm.me>
hsyodyssey <47173566+hsyodyssey@users.noreply.github.com>
Husam Ibrahim <39692071+HusamIbrahim@users.noreply.github.com> Husam Ibrahim <39692071+HusamIbrahim@users.noreply.github.com>
Hwanjo Heo <34005989+hwanjo@users.noreply.github.com>
hydai <z54981220@gmail.com> hydai <z54981220@gmail.com>
Hyung-Kyu Hqueue Choi <hyungkyu.choi@gmail.com> Hyung-Kyu Hqueue Choi <hyungkyu.choi@gmail.com>
Håvard Anda Estensen <haavard.ae@gmail.com>
Ian Macalinao <me@ian.pw> Ian Macalinao <me@ian.pw>
Ian Norden <iannordenn@gmail.com> Ian Norden <iannordenn@gmail.com>
icodezjb <icodezjb@163.com>
Ikko Ashimine <eltociear@gmail.com>
Ilan Gitter <8359193+gitteri@users.noreply.github.com>
ImanSharaf <78227895+ImanSharaf@users.noreply.github.com>
Isidoro Ghezzi <isidoro.ghezzi@icloud.com> Isidoro Ghezzi <isidoro.ghezzi@icloud.com>
Iskander (Alex) Sharipov <quasilyte@gmail.com> Iskander (Alex) Sharipov <quasilyte@gmail.com>
Ivan Bogatyy <bogatyi@gmail.com>
Ivan Daniluk <ivan.daniluk@gmail.com> Ivan Daniluk <ivan.daniluk@gmail.com>
Ivo Georgiev <ivo@strem.io> Ivo Georgiev <ivo@strem.io>
jacksoom <lifengliu1994@gmail.com>
Jae Kwon <jkwon.work@gmail.com> Jae Kwon <jkwon.work@gmail.com>
James Prestwich <10149425+prestwich@users.noreply.github.com>
Jamie Pitts <james.pitts@gmail.com> Jamie Pitts <james.pitts@gmail.com>
Janoš Guljaš <janos@resenje.org> Janos Guljas <janos@resenje.org>
Jared Wasinger <j-wasinger@hotmail.com> Janoš Guljaš <janos@users.noreply.github.com>
Jason Carver <jacarver@linkedin.com> Jason Carver <jacarver@linkedin.com>
Javier Peletier <jm@epiclabs.io> Javier Peletier <jm@epiclabs.io>
Javier Peletier <jpeletier@users.noreply.github.com>
Javier Sagredo <jasataco@gmail.com> Javier Sagredo <jasataco@gmail.com>
Jay <codeholic.arena@gmail.com> Jay <codeholic.arena@gmail.com>
Jay Guo <guojiannan1101@gmail.com> Jay Guo <guojiannan1101@gmail.com>
Jaynti Kanani <jdkanani@gmail.com> Jaynti Kanani <jdkanani@gmail.com>
Jeff Prestes <jeffprestes@gmail.com> Jeff Prestes <jeffprestes@gmail.com>
Jeff R. Allen <jra@nella.org> Jeff R. Allen <jra@nella.org>
Jeff Wentworth <jeff@curvegrid.com>
Jeffery Robert Walsh <rlxrlps@gmail.com> Jeffery Robert Walsh <rlxrlps@gmail.com>
Jeffrey Wilcke <jeffrey@ethereum.org> Jeffrey Wilcke <jeffrey@ethereum.org>
Jens Agerberg <github@agerberg.me> Jens Agerberg <github@agerberg.me>
Jeremy McNevin <jeremy.mcnevin@optum.com> Jeremy McNevin <jeremy.mcnevin@optum.com>
Jeremy Schlatter <jeremy.schlatter@gmail.com> Jeremy Schlatter <jeremy.schlatter@gmail.com>
Jerzy Lasyk <jerzylasyk@gmail.com> Jerzy Lasyk <jerzylasyk@gmail.com>
Jesse Tane <jesse.tane@gmail.com>
Jia Chenhui <jiachenhui1989@gmail.com> Jia Chenhui <jiachenhui1989@gmail.com>
Jim McDonald <Jim@mcdee.net> Jim McDonald <Jim@mcdee.net>
jk-jeongkyun <45347815+jeongkyun-oh@users.noreply.github.com>
jkcomment <jkcomment@gmail.com> jkcomment <jkcomment@gmail.com>
JoeGruffins <34998433+JoeGruffins@users.noreply.github.com>
Joel Burget <joelburget@gmail.com> Joel Burget <joelburget@gmail.com>
John C. Vernaleo <john@netpurgatory.com> John C. Vernaleo <john@netpurgatory.com>
John Difool <johndifoolpi@gmail.com>
Johns Beharry <johns@peakshift.com> Johns Beharry <johns@peakshift.com>
Jonas <felberj@users.noreply.github.com> Jonas <felberj@users.noreply.github.com>
Jonathan Brown <jbrown@bluedroplet.com> Jonathan Brown <jbrown@bluedroplet.com>
Jonathan Chappelow <chappjc@users.noreply.github.com>
Jonathan Gimeno <jgimeno@gmail.com>
JoranHonig <JoranHonig@users.noreply.github.com> JoranHonig <JoranHonig@users.noreply.github.com>
Jordan Krage <jmank88@gmail.com> Jordan Krage <jmank88@gmail.com>
Jorropo <jorropo.pgm@gmail.com>
Joseph Chow <ethereum@outlook.com> Joseph Chow <ethereum@outlook.com>
Joshua Colvin <jcolvin@offchainlabs.com>
Joshua Gutow <jbgutow@gmail.com>
jovijovi <mageyul@hotmail.com>
jtakalai <juuso.takalainen@streamr.com> jtakalai <juuso.takalainen@streamr.com>
JU HYEONG PARK <dkdkajej@gmail.com> JU HYEONG PARK <dkdkajej@gmail.com>
Julian Y <jyap808@users.noreply.github.com>
Justin Clark-Casey <justincc@justincc.org> Justin Clark-Casey <justincc@justincc.org>
Justin Drake <drakefjustin@gmail.com> Justin Drake <drakefjustin@gmail.com>
Justus <jus@gtsbr.org> jwasinger <j-wasinger@hotmail.com>
Kawashima <91420903+sscodereth@users.noreply.github.com>
ken10100147 <sunhongping@kanjian.com> ken10100147 <sunhongping@kanjian.com>
Kenji Siu <kenji@isuntv.com> Kenji Siu <kenji@isuntv.com>
Kenso Trabing <kenso.trabing@bloomwebsite.com>
Kenso Trabing <ktrabing@acm.org> Kenso Trabing <ktrabing@acm.org>
Kevin <denk.kevin@web.de> Kevin <denk.kevin@web.de>
kevin.xu <cming.xu@gmail.com> kevin.xu <cming.xu@gmail.com>
KibGzr <kibgzr@gmail.com>
kiel barry <kiel.j.barry@gmail.com> kiel barry <kiel.j.barry@gmail.com>
kilic <onurkilic1004@gmail.com>
kimmylin <30611210+kimmylin@users.noreply.github.com> kimmylin <30611210+kimmylin@users.noreply.github.com>
Kitten King <53072918+kittenking@users.noreply.github.com> Kitten King <53072918+kittenking@users.noreply.github.com>
knarfeh <hejun1874@gmail.com> knarfeh <hejun1874@gmail.com>
Kobi Gurkan <kobigurk@gmail.com> Kobi Gurkan <kobigurk@gmail.com>
komika <komika@komika.org>
Konrad Feldmeier <konrad@brainbot.com> Konrad Feldmeier <konrad@brainbot.com>
Kris Shinn <raggamuffin.music@gmail.com> Kris Shinn <raggamuffin.music@gmail.com>
Kristofer Peterson <svenski123@users.noreply.github.com>
Kumar Anirudha <mail@anirudha.dev>
Kurkó Mihály <kurkomisi@users.noreply.github.com> Kurkó Mihály <kurkomisi@users.noreply.github.com>
Kushagra Sharma <ksharm01@gmail.com> Kushagra Sharma <ksharm01@gmail.com>
Kwuaint <34888408+kwuaint@users.noreply.github.com> Kwuaint <34888408+kwuaint@users.noreply.github.com>
Kyuntae Ethan Kim <ethan.kyuntae.kim@gmail.com> Kyuntae Ethan Kim <ethan.kyuntae.kim@gmail.com>
Lee Bousfield <ljbousfield@gmail.com> ledgerwatch <akhounov@gmail.com>
Lefteris Karapetsas <lefteris@refu.co> Lefteris Karapetsas <lefteris@refu.co>
Leif Jurvetson <leijurv@gmail.com> Leif Jurvetson <leijurv@gmail.com>
Leo Shklovskii <leo@thermopylae.net> Leo Shklovskii <leo@thermopylae.net>
LeoLiao <leofantast@gmail.com> LeoLiao <leofantast@gmail.com>
Lewis Marshall <lewis@lmars.net> Lewis Marshall <lewis@lmars.net>
lhendre <lhendre2@gmail.com> lhendre <lhendre2@gmail.com>
Li Dongwei <lidw1988@126.com> Liang Ma <liangma.ul@gmail.com>
Liang Ma <liangma@liangbit.com> Liang Ma <liangma@liangbit.com>
Liang ZOU <liang.d.zou@gmail.com> Liang ZOU <liang.d.zou@gmail.com>
libby kent <viskovitzzz@gmail.com>
libotony <liboliqi@gmail.com> libotony <liboliqi@gmail.com>
LieutenantRoger <dijsky_2015@hotmail.com>
ligi <ligi@ligi.de> ligi <ligi@ligi.de>
Lio李欧 <lionello@users.noreply.github.com> Lio李欧 <lionello@users.noreply.github.com>
lmittmann <lmittmann@users.noreply.github.com>
Lorenzo Manacorda <lorenzo@kinvolk.io> Lorenzo Manacorda <lorenzo@kinvolk.io>
Louis Holbrook <dev@holbrook.no> Louis Holbrook <dev@holbrook.no>
Luca Zeug <luclu@users.noreply.github.com> Luca Zeug <luclu@users.noreply.github.com>
Lucas Hendren <lhendre2@gmail.com>
lzhfromustc <43191155+lzhfromustc@users.noreply.github.com>
Magicking <s@6120.eu> Magicking <s@6120.eu>
manlio <manlio.poltronieri@gmail.com> manlio <manlio.poltronieri@gmail.com>
Maran Hidskes <maran.hidskes@gmail.com> Maran Hidskes <maran.hidskes@gmail.com>
Marek Kotewicz <marek.kotewicz@gmail.com> Marek Kotewicz <marek.kotewicz@gmail.com>
Mariano Cortesi <mcortesi@gmail.com>
Marius van der Wijden <m.vanderwijden@live.de> Marius van der Wijden <m.vanderwijden@live.de>
Mark <markya0616@gmail.com> Mark <markya0616@gmail.com>
Mark Rushakoff <mark.rushakoff@gmail.com> Mark Rushakoff <mark.rushakoff@gmail.com>
@ -324,193 +218,108 @@ mark.lin <mark@maicoin.com>
Martin Alex Philip Dawson <u1356770@gmail.com> Martin Alex Philip Dawson <u1356770@gmail.com>
Martin Holst Swende <martin@swende.se> Martin Holst Swende <martin@swende.se>
Martin Klepsch <martinklepsch@googlemail.com> Martin Klepsch <martinklepsch@googlemail.com>
Martin Lundfall <martin.lundfall@protonmail.com>
Martin Michlmayr <tbm@cyrius.com>
Martin Redmond <21436+reds@users.noreply.github.com>
Mason Fischer <mason@kissr.co>
Mateusz Morusiewicz <11313015+Ruteri@users.noreply.github.com>
Mats Julian Olsen <mats@plysjbyen.net> Mats Julian Olsen <mats@plysjbyen.net>
Matt Garnett <14004106+lightclient@users.noreply.github.com>
Matt K <1036969+mkrump@users.noreply.github.com> Matt K <1036969+mkrump@users.noreply.github.com>
Matthew Di Ferrante <mattdf@users.noreply.github.com> Matthew Di Ferrante <mattdf@users.noreply.github.com>
Matthew Halpern <matthalp@gmail.com> Matthew Halpern <matthalp@gmail.com>
Matthew Halpern <matthalp@google.com>
Matthew Wampler-Doty <matthew.wampler.doty@gmail.com> Matthew Wampler-Doty <matthew.wampler.doty@gmail.com>
Max Sistemich <mafrasi2@googlemail.com> Max Sistemich <mafrasi2@googlemail.com>
Maxim Zhiburt <zhiburt@gmail.com>
Maximilian Meister <mmeister@suse.de> Maximilian Meister <mmeister@suse.de>
me020523 <me020523@gmail.com>
Melvin Junhee Woo <melvin.woo@groundx.xyz>
meowsbits <b5c6@protonmail.com>
Micah Zoltu <micah@zoltu.net> Micah Zoltu <micah@zoltu.net>
Michael Forney <mforney@mforney.org>
Michael Riabzev <michael@starkware.co>
Michael Ruminer <michael.ruminer+github@gmail.com> Michael Ruminer <michael.ruminer+github@gmail.com>
michael1011 <me@michael1011.at>
Miguel Mota <miguelmota2@gmail.com> Miguel Mota <miguelmota2@gmail.com>
Mike Burr <mburr@nightmare.com>
Mikhail Mikheev <mmvsha73@gmail.com>
milesvant <milesvant@gmail.com>
Miro <mirokuratczyk@users.noreply.github.com>
Miya Chen <miyatlchen@gmail.com> Miya Chen <miyatlchen@gmail.com>
Mohanson <mohanson@outlook.com> Mohanson <mohanson@outlook.com>
mr_franklin <mr_franklin@126.com> mr_franklin <mr_franklin@126.com>
Mudit Gupta <guptamudit@ymail.com>
Mymskmkt <1847234666@qq.com> Mymskmkt <1847234666@qq.com>
Nalin Bhardwaj <nalinbhardwaj@nibnalin.me> Nalin Bhardwaj <nalinbhardwaj@nibnalin.me>
Natsu Kagami <natsukagami@gmail.com>
Nchinda Nchinda <nchinda2@gmail.com> Nchinda Nchinda <nchinda2@gmail.com>
nebojsa94 <nebojsa94@users.noreply.github.com>
necaremus <necaremus@gmail.com> necaremus <necaremus@gmail.com>
nedifi <103940716+nedifi@users.noreply.github.com>
needkane <604476380@qq.com> needkane <604476380@qq.com>
Nguyen Kien Trung <trung.n.k@gmail.com> Nguyen Kien Trung <trung.n.k@gmail.com>
Nguyen Sy Thanh Son <thanhson1085@gmail.com> Nguyen Sy Thanh Son <thanhson1085@gmail.com>
Nic Jansma <nic@nicj.net>
Nick Dodson <silentcicero@outlook.com> Nick Dodson <silentcicero@outlook.com>
Nick Johnson <arachnid@notdot.net> Nick Johnson <arachnid@notdot.net>
Nicolas Feignon <nfeignon@gmail.com>
Nicolas Guillaume <gunicolas@sqli.com> Nicolas Guillaume <gunicolas@sqli.com>
Nikita Kozhemyakin <enginegl.ec@gmail.com>
Nikola Madjarevic <nikola.madjarevic@gmail.com>
Nilesh Trivedi <nilesh@hypertrack.io> Nilesh Trivedi <nilesh@hypertrack.io>
Nimrod Gutman <nimrod.gutman@gmail.com> Nimrod Gutman <nimrod.gutman@gmail.com>
Nishant Das <nishdas93@gmail.com>
njupt-moon <1015041018@njupt.edu.cn> njupt-moon <1015041018@njupt.edu.cn>
nkbai <nkbai@163.com> nkbai <nkbai@163.com>
noam-alchemy <76969113+noam-alchemy@users.noreply.github.com>
nobody <ddean2009@163.com> nobody <ddean2009@163.com>
Noman <noman@noman.land> Noman <noman@noman.land>
nujabes403 <nujabes403@gmail.com>
Nye Liu <nyet@nyet.org>
Oleg Kovalov <iamolegkovalov@gmail.com> Oleg Kovalov <iamolegkovalov@gmail.com>
Oli Bye <olibye@users.noreply.github.com> Oli Bye <olibye@users.noreply.github.com>
Oliver Tale-Yazdi <oliver@perun.network>
Olivier Hervieu <olivier.hervieu@gmail.com>
Or Neeman <oneeman@gmail.com>
Osoro Bironga <fanosoro@gmail.com>
Osuke <arget-fee.free.dgm@hotmail.co.jp> Osuke <arget-fee.free.dgm@hotmail.co.jp>
Pantelis Peslis <pespantelis@gmail.com>
Pascal Dierich <pascal@merkleplant.xyz>
Patrick O'Grady <prohb125@gmail.com>
Pau <pau@dabax.net>
Paul Berg <hello@paulrberg.com> Paul Berg <hello@paulrberg.com>
Paul Litvak <litvakpol@012.net.il> Paul Litvak <litvakpol@012.net.il>
Paul-Armand Verhaegen <paularmand.verhaegen@gmail.com>
Paulo L F Casaretto <pcasaretto@gmail.com> Paulo L F Casaretto <pcasaretto@gmail.com>
Paweł Bylica <chfast@gmail.com> Paweł Bylica <chfast@gmail.com>
Pedro Gomes <otherview@gmail.com>
Pedro Pombeiro <PombeirP@users.noreply.github.com> Pedro Pombeiro <PombeirP@users.noreply.github.com>
Peter Broadhurst <peter@themumbles.net> Peter Broadhurst <peter@themumbles.net>
peter cresswell <pcresswell@gmail.com>
Peter Pratscher <pratscher@gmail.com> Peter Pratscher <pratscher@gmail.com>
Peter Simard <petesimard56@gmail.com>
Petr Mikusek <petr@mikusek.info> Petr Mikusek <petr@mikusek.info>
Philip Schlump <pschlump@gmail.com> Philip Schlump <pschlump@gmail.com>
Pierre Neter <pierreneter@gmail.com> Pierre Neter <pierreneter@gmail.com>
Pierre R <p.rousset@gmail.com>
piersy <pierspowlesland@gmail.com>
PilkyuJung <anothel1@naver.com> PilkyuJung <anothel1@naver.com>
Piotr Dyraga <piotr.dyraga@keep.network> protolambda <proto@protolambda.com>
ploui <64719999+ploui@users.noreply.github.com>
Preston Van Loon <preston@prysmaticlabs.com>
Prince Sinha <sinhaprince013@gmail.com>
Péter Szilágyi <peterke@gmail.com> Péter Szilágyi <peterke@gmail.com>
qd-ethan <31876119+qdgogogo@users.noreply.github.com> qd-ethan <31876119+qdgogogo@users.noreply.github.com>
Qian Bin <cola.tin.com@gmail.com>
Quest Henkart <qhenkart@gmail.com>
Rachel Franks <nfranks@protonmail.com>
Rafael Matias <rafael@skyle.net>
Raghav Sood <raghavsood@gmail.com> Raghav Sood <raghavsood@gmail.com>
Ralph Caraveo <deckarep@gmail.com> Ralph Caraveo <deckarep@gmail.com>
Ralph Caraveo III <deckarep@gmail.com>
Ramesh Nair <ram@hiddentao.com> Ramesh Nair <ram@hiddentao.com>
rangzen <public@l-homme.com>
reinerRubin <tolstov.georgij@gmail.com> reinerRubin <tolstov.georgij@gmail.com>
Rene Lubov <41963722+renaynay@users.noreply.github.com>
rhaps107 <dod-source@yandex.ru> rhaps107 <dod-source@yandex.ru>
Ricardo Catalinas Jiménez <r@untroubled.be> Ricardo Catalinas Jiménez <r@untroubled.be>
Ricardo Domingos <ricardohsd@gmail.com> Ricardo Domingos <ricardohsd@gmail.com>
Richard Hart <richardhart92@gmail.com> Richard Hart <richardhart92@gmail.com>
Rick <rick.no@groundx.xyz>
RJ Catalano <catalanor0220@gmail.com> RJ Catalano <catalanor0220@gmail.com>
Rob <robert@rojotek.com> Rob <robert@rojotek.com>
Rob Mulholand <rmulholand@8thlight.com> Rob Mulholand <rmulholand@8thlight.com>
Robert Zaremba <robert@zaremba.ch> Robert Zaremba <robert.zaremba@scale-it.pl>
Roc Yu <rociiu0112@gmail.com> Roc Yu <rociiu0112@gmail.com>
Roman Mazalov <83914728+gopherxyz@users.noreply.github.com>
Ross <9055337+Chadsr@users.noreply.github.com>
Runchao Han <elvisage941102@gmail.com> Runchao Han <elvisage941102@gmail.com>
Russ Cox <rsc@golang.org> Russ Cox <rsc@golang.org>
Ryan Schneider <ryanleeschneider@gmail.com> Ryan Schneider <ryanleeschneider@gmail.com>
ryanc414 <ryan@tokencard.io>
Rémy Roy <remyroy@remyroy.com> Rémy Roy <remyroy@remyroy.com>
S. Matthew English <s-matthew-english@users.noreply.github.com> S. Matthew English <s-matthew-english@users.noreply.github.com>
salanfe <salanfe@users.noreply.github.com> salanfe <salanfe@users.noreply.github.com>
Sam <39165351+Xia-Sam@users.noreply.github.com>
Sammy Libre <7374093+sammy007@users.noreply.github.com>
Samuel Marks <samuelmarks@gmail.com> Samuel Marks <samuelmarks@gmail.com>
sanskarkhare <sanskarkhare47@gmail.com>
Sarlor <kinsleer@outlook.com> Sarlor <kinsleer@outlook.com>
Sasuke1964 <neilperry1964@gmail.com> Sasuke1964 <neilperry1964@gmail.com>
Satpal <28562234+SatpalSandhu61@users.noreply.github.com>
Saulius Grigaitis <saulius@necolt.com> Saulius Grigaitis <saulius@necolt.com>
Sean <darcys22@gmail.com> Sean <darcys22@gmail.com>
Serhat Şevki Dinçer <jfcgauss@gmail.com> Sheldon <11510383@mail.sustc.edu.cn>
Shane Bammel <sjb933@gmail.com> Sheldon <374662347@qq.com>
shawn <36943337+lxex@users.noreply.github.com>
shigeyuki azuchi <azuchi@chaintope.com>
Shihao Xia <charlesxsh@hotmail.com>
Shiming <codingmylife@gmail.com>
Shintaro Kaneko <kaneshin0120@gmail.com> Shintaro Kaneko <kaneshin0120@gmail.com>
shiqinfeng1 <150627601@qq.com>
Shuai Qi <qishuai231@gmail.com> Shuai Qi <qishuai231@gmail.com>
Shude Li <islishude@gmail.com>
Shunsuke Watanabe <ww.shunsuke@gmail.com> Shunsuke Watanabe <ww.shunsuke@gmail.com>
silence <wangsai.silence@qq.com> silence <wangsai.silence@qq.com>
Simon Jentzsch <simon@slock.it> Simon Jentzsch <simon@slock.it>
Sina Mahmoodi <1591639+s1na@users.noreply.github.com>
sixdays <lj491685571@126.com>
SjonHortensius <SjonHortensius@users.noreply.github.com>
Slava Karpenko <slavikus@gmail.com>
slumber1122 <slumber1122@gmail.com> slumber1122 <slumber1122@gmail.com>
Smilenator <yurivanenko@yandex.ru> Smilenator <yurivanenko@yandex.ru>
soc1c <soc1c@users.noreply.github.com>
Sorin Neacsu <sorin.neacsu@gmail.com> Sorin Neacsu <sorin.neacsu@gmail.com>
Sparty <vignesh.crysis@gmail.com>
Stein Dekker <dekker.stein@gmail.com> Stein Dekker <dekker.stein@gmail.com>
Steve Gattuso <steve@stevegattuso.me> Steve Gattuso <steve@stevegattuso.me>
Steve Ruckdashel <steve.ruckdashel@gmail.com> Steve Ruckdashel <steve.ruckdashel@gmail.com>
Steve Waldman <swaldman@mchange.com> Steve Waldman <swaldman@mchange.com>
Steven E. Harris <seh@panix.com>
Steven Roose <stevenroose@gmail.com> Steven Roose <stevenroose@gmail.com>
stompesi <stompesi@gmail.com> stompesi <stompesi@gmail.com>
stormpang <jialinpeng@vip.qq.com> stormpang <jialinpeng@vip.qq.com>
sunxiaojun2014 <sunxiaojun-xy@360.cn> sunxiaojun2014 <sunxiaojun-xy@360.cn>
Suriyaa Sundararuban <isc.suriyaa@gmail.com>
Sylvain Laurent <s@6120.eu>
Taeik Lim <sibera21@gmail.com>
tamirms <tamir@trello.com> tamirms <tamir@trello.com>
Tangui Clairet <tangui.clairet@gmail.com>
Tatsuya Shimoda <tacoo@users.noreply.github.com>
Taylor Gerring <taylor.gerring@gmail.com> Taylor Gerring <taylor.gerring@gmail.com>
TColl <38299499+TColl@users.noreply.github.com> TColl <38299499+TColl@users.noreply.github.com>
terasum <terasum@163.com> terasum <terasum@163.com>
tgyKomgo <52910426+tgyKomgo@users.noreply.github.com>
Thad Guidry <thadguidry@gmail.com>
Thomas Bocek <tom@tomp2p.net> Thomas Bocek <tom@tomp2p.net>
thomasmodeneis <thomas.modeneis@gmail.com> thomasmodeneis <thomas.modeneis@gmail.com>
thumb8432 <thumb8432@gmail.com> thumb8432 <thumb8432@gmail.com>
Ti Zhou <tizhou1986@gmail.com> Ti Zhou <tizhou1986@gmail.com>
tia-99 <67107070+tia-99@users.noreply.github.com>
Tim Cooijmans <timcooijmans@gmail.com>
Tobias Hildebrandt <79341166+tobias-hildebrandt@users.noreply.github.com>
Tosh Camille <tochecamille@gmail.com> Tosh Camille <tochecamille@gmail.com>
tsarpaul <Litvakpol@012.net.il> tsarpaul <Litvakpol@012.net.il>
Tyler Chambers <2775339+tylerchambers@users.noreply.github.com>
tzapu <alex@tzapu.com> tzapu <alex@tzapu.com>
ucwong <ucwong@126.com>
uji <49834542+uji@users.noreply.github.com>
ult-bobonovski <alex@ultiledger.io> ult-bobonovski <alex@ultiledger.io>
Valentin Trinqué <ValentinTrinque@users.noreply.github.com>
Valentin Wüstholz <wuestholz@gmail.com> Valentin Wüstholz <wuestholz@gmail.com>
Vedhavyas Singareddi <vedhavyas.singareddi@gmail.com> Vedhavyas Singareddi <vedhavyas.singareddi@gmail.com>
Victor Farazdagi <simple.square@gmail.com> Victor Farazdagi <simple.square@gmail.com>
@ -521,71 +330,40 @@ Ville Sundell <github@solarius.fi>
vim88 <vim88vim88@gmail.com> vim88 <vim88vim88@gmail.com>
Vincent G <caktux@gmail.com> Vincent G <caktux@gmail.com>
Vincent Serpoul <vincent@serpoul.com> Vincent Serpoul <vincent@serpoul.com>
Vinod Damle <vdamle@users.noreply.github.com>
Vitalik Buterin <v@buterin.com> Vitalik Buterin <v@buterin.com>
Vitaly Bogdanov <vsbogd@gmail.com> Vitaly Bogdanov <vsbogd@gmail.com>
Vitaly V <vvelikodny@gmail.com> Vitaly V <vvelikodny@gmail.com>
Vivek Anand <vivekanand1101@users.noreply.github.com> Vivek Anand <vivekanand1101@users.noreply.github.com>
Vlad <gluk256@gmail.com>
Vlad Bokov <razum2um@mail.ru> Vlad Bokov <razum2um@mail.ru>
Vlad Gluhovsky <gluk256@gmail.com> Vlad Gluhovsky <gluk256@users.noreply.github.com>
Ward Bradt <wardbradt5@gmail.com>
Water <44689567+codeoneline@users.noreply.github.com>
wbt <wbt@users.noreply.github.com>
weimumu <934657014@qq.com> weimumu <934657014@qq.com>
Wenbiao Zheng <delweng@gmail.com> Wenbiao Zheng <delweng@gmail.com>
Wenshao Zhong <wzhong20@uic.edu>
Will Villanueva <hello@willvillanueva.com>
William Morriss <wjmelements@gmail.com>
William Setzer <bootstrapsetzer@gmail.com> William Setzer <bootstrapsetzer@gmail.com>
williambannas <wrschwartz@wpi.edu> williambannas <wrschwartz@wpi.edu>
wuff1996 <33193253+wuff1996@users.noreply.github.com>
Wuxiang <wuxiangzhou2010@gmail.com> Wuxiang <wuxiangzhou2010@gmail.com>
Xiaobing Jiang <s7v7nislands@gmail.com>
xiekeyang <xiekeyang@users.noreply.github.com> xiekeyang <xiekeyang@users.noreply.github.com>
xincaosu <xincaosu@126.com> xincaosu <xincaosu@126.com>
xinluyin <31590468+xinluyin@users.noreply.github.com>
Xudong Liu <33193253+r1cs@users.noreply.github.com>
xwjack <XWJACK@users.noreply.github.com>
yahtoo <yahtoo.ma@gmail.com> yahtoo <yahtoo.ma@gmail.com>
Yang Hau <vulxj0j8j8@gmail.com>
YaoZengzeng <yaozengzeng@zju.edu.cn> YaoZengzeng <yaozengzeng@zju.edu.cn>
YH-Zhou <yanhong.zhou05@gmail.com> YH-Zhou <yanhong.zhou05@gmail.com>
Yihau Chen <a122092487@gmail.com>
Yohann Léon <sybiload@gmail.com> Yohann Léon <sybiload@gmail.com>
Yoichi Hirai <i@yoichihirai.com> Yoichi Hirai <i@yoichihirai.com>
Yole <007yuyue@gmail.com>
Yondon Fu <yondon.fu@gmail.com> Yondon Fu <yondon.fu@gmail.com>
YOSHIDA Masanori <masanori.yoshida@gmail.com> YOSHIDA Masanori <masanori.yoshida@gmail.com>
yoza <yoza.is12s@gmail.com> yoza <yoza.is12s@gmail.com>
yumiel yoomee1313 <yumiel.ko@groundx.xyz>
Yusup <awklsgrep@gmail.com> Yusup <awklsgrep@gmail.com>
yutianwu <wzxingbupt@gmail.com>
ywzqwwt <39263032+ywzqwwt@users.noreply.github.com>
zaccoding <zaccoding725@gmail.com>
Zach <zach.ramsay@gmail.com> Zach <zach.ramsay@gmail.com>
Zachinquarantine <Zachinquarantine@protonmail.com>
zah <zahary@gmail.com> zah <zahary@gmail.com>
Zahoor Mohamed <zahoor@zahoor.in> Zahoor Mohamed <zahoor@zahoor.in>
Zak Cole <zak@beattiecole.com> Zak Cole <zak@beattiecole.com>
zcheng9 <zcheng9@hawk.iit.edu>
zer0to0ne <36526113+zer0to0ne@users.noreply.github.com> zer0to0ne <36526113+zer0to0ne@users.noreply.github.com>
zgfzgf <48779939+zgfzgf@users.noreply.github.com>
Zhang Zhuo <mycinbrin@gmail.com>
zhangsoledad <787953403@qq.com>
zhaochonghe <41711151+zhaochonghe@users.noreply.github.com>
Zhenguo Niu <Niu.ZGlinux@gmail.com> Zhenguo Niu <Niu.ZGlinux@gmail.com>
zhiqiangxu <652732310@qq.com>
Zhou Zhiyao <ZHOU0250@e.ntu.edu.sg>
Ziyuan Zhong <zzy.albert@163.com>
Zoe Nolan <github@zoenolan.org> Zoe Nolan <github@zoenolan.org>
Zou Guangxian <zouguangxian@gmail.com>
Zsolt Felföldi <zsfelfoldi@gmail.com> Zsolt Felföldi <zsfelfoldi@gmail.com>
Łukasz Kurowski <crackcomm@users.noreply.github.com> Łukasz Kurowski <crackcomm@users.noreply.github.com>
Łukasz Zimnoch <lukaszzimnoch1994@gmail.com>
ΞTHΞЯSPHΞЯΞ <{viktor.tron,nagydani,zsfelfoldi}@gmail.com> ΞTHΞЯSPHΞЯΞ <{viktor.tron,nagydani,zsfelfoldi}@gmail.com>
Максим Чусовлянов <mchusovlianov@gmail.com> Максим Чусовлянов <mchusovlianov@gmail.com>
大彬 <hz_stb@163.com> 大彬 <hz_stb@163.com>
沉风 <myself659@users.noreply.github.com>
贺鹏飞 <hpf@hackerful.cn> 贺鹏飞 <hpf@hackerful.cn>
陈佳 <chenjiablog@gmail.com>
유용환 <33824408+eric-yoo@users.noreply.github.com> 유용환 <33824408+eric-yoo@users.noreply.github.com>

Dockerfile

@ -4,17 +4,12 @@ ARG VERSION=""
ARG BUILDNUM="" ARG BUILDNUM=""
# Build Geth in a stock Go builder container # Build Geth in a stock Go builder container
FROM golang:1.20-alpine as builder FROM golang:1.17-alpine as builder
RUN apk add --no-cache gcc musl-dev linux-headers git RUN apk add --no-cache gcc musl-dev linux-headers git
# Get dependencies - will also be cached if we won't change go.mod/go.sum
COPY go.mod /go-ethereum/
COPY go.sum /go-ethereum/
RUN cd /go-ethereum && go mod download
ADD . /go-ethereum ADD . /go-ethereum
RUN cd /go-ethereum && go run build/ci.go install -static ./cmd/geth RUN cd /go-ethereum && go run build/ci.go install ./cmd/geth
# Pull Geth into a second stage deploy alpine container # Pull Geth into a second stage deploy alpine container
FROM alpine:latest FROM alpine:latest

Dockerfile.alltools

@ -4,17 +4,12 @@ ARG VERSION=""
ARG BUILDNUM="" ARG BUILDNUM=""
# Build Geth in a stock Go builder container # Build Geth in a stock Go builder container
FROM golang:1.20-alpine as builder FROM golang:1.17-alpine as builder
RUN apk add --no-cache gcc musl-dev linux-headers git RUN apk add --no-cache gcc musl-dev linux-headers git
# Get dependencies - will also be cached if we won't change go.mod/go.sum
COPY go.mod /go-ethereum/
COPY go.sum /go-ethereum/
RUN cd /go-ethereum && go mod download
ADD . /go-ethereum ADD . /go-ethereum
RUN cd /go-ethereum && go run build/ci.go install -static RUN cd /go-ethereum && go run build/ci.go install
# Pull all binaries into a second stage deploy alpine container # Pull all binaries into a second stage deploy alpine container
FROM alpine:latest FROM alpine:latest

Jenkinsfile (vendored): 50 lines changed

@ -1,50 +0,0 @@
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script{
                    docker.withRegistry('https://git.vdb.to'){
                        echo 'Building geth image...'
                        //def geth_image = docker.build("cerc-io/go-ethereum:jenkinscicd")
                        echo 'built geth image'
                    }
                }
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'cerc-io/foundation:jenkinscicd'
                    //image 'cerc-io/foundation_alpine:jenkinscicd'
                }
            }
            environment {
                GO111MODULE = "on"
                CGO_ENABLED = 1
                //GOPATH = "${JENKINS_HOME}/jobs/${JOB_NAME}/builds/${BUILD_ID}"
                //GOPATH = "/go"
                GOPATH = "/tmp/go"
                //GOMODCACHE = "/go/pkg/mod"
                GOCACHE = "${WORKSPACE}/.cache/go-build"
                GOENV = "${WORKSPACE}/.config/go/env"
                GOMODCACHE = "/tmp/go/pkg/mod"
                GOWORK=""
                //GOFLAGS=""
            }
            steps {
                echo 'Testing ...'
                //sh '/usr/local/go/bin/go test -p 1 -v ./...'
                sh 'make test'
            }
        }
        stage('Packaging') {
            steps {
                echo 'Packaging ...'
            }
        }
    }
}

Makefile

@ -26,7 +26,7 @@ PASSWORD = password
export PGPASSWORD=$(PASSWORD) export PGPASSWORD=$(PASSWORD)
#Test #Test
TEST_DB = cerc_testing TEST_DB = vulcanize_public
TEST_CONNECT_STRING = postgresql://$(USER):$(PASSWORD)@$(HOST_NAME):$(PORT)/$(TEST_DB)?sslmode=disable TEST_CONNECT_STRING = postgresql://$(USER):$(PASSWORD)@$(HOST_NAME):$(PORT)/$(TEST_DB)?sslmode=disable
geth: geth:
@ -37,6 +37,18 @@ geth:
all: all:
$(GORUN) build/ci.go install $(GORUN) build/ci.go install
android:
$(GORUN) build/ci.go aar --local
@echo "Done building."
@echo "Import \"$(GOBIN)/geth.aar\" to use the library."
@echo "Import \"$(GOBIN)/geth-sources.jar\" to add javadocs"
@echo "For more info see https://stackoverflow.com/questions/20994336/android-studio-how-to-attach-javadoc"
ios:
$(GORUN) build/ci.go xcode --local
@echo "Done building."
@echo "Import \"$(GOBIN)/Geth.framework\" to use the library."
test: all test: all
$(GORUN) build/ci.go test $(GORUN) build/ci.go test
@ -52,6 +64,7 @@ clean:
devtools: devtools:
env GOBIN= go install golang.org/x/tools/cmd/stringer@latest env GOBIN= go install golang.org/x/tools/cmd/stringer@latest
env GOBIN= go install github.com/kevinburke/go-bindata/go-bindata@latest
env GOBIN= go install github.com/fjl/gencodec@latest env GOBIN= go install github.com/fjl/gencodec@latest
env GOBIN= go install github.com/golang/protobuf/protoc-gen-go@latest env GOBIN= go install github.com/golang/protobuf/protoc-gen-go@latest
env GOBIN= go install ./cmd/abigen env GOBIN= go install ./cmd/abigen

README.md: 155 lines changed

@ -1,8 +1,10 @@
## Go Ethereum ## Go Ethereum
Official Golang execution layer implementation of the Ethereum protocol. Official Golang implementation of the Ethereum protocol.
[![API Reference](https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/68747470733a2f2f676f646f632e6f72672f6769746875622e636f6d2f676f6c616e672f6764646f3f7374617475732e737667)](https://pkg.go.dev/github.com/ethereum/go-ethereum?tab=doc) [![API Reference](
https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/68747470733a2f2f676f646f632e6f72672f6769746875622e636f6d2f676f6c616e672f6764646f3f7374617475732e737667
)](https://pkg.go.dev/github.com/ethereum/go-ethereum?tab=doc)
[![Go Report Card](https://goreportcard.com/badge/github.com/ethereum/go-ethereum)](https://goreportcard.com/report/github.com/ethereum/go-ethereum) [![Go Report Card](https://goreportcard.com/badge/github.com/ethereum/go-ethereum)](https://goreportcard.com/report/github.com/ethereum/go-ethereum)
[![Travis](https://travis-ci.com/ethereum/go-ethereum.svg?branch=master)](https://travis-ci.com/ethereum/go-ethereum) [![Travis](https://travis-ci.com/ethereum/go-ethereum.svg?branch=master)](https://travis-ci.com/ethereum/go-ethereum)
[![Discord](https://img.shields.io/badge/discord-join%20chat-blue.svg)](https://discord.gg/nthXNEv) [![Discord](https://img.shields.io/badge/discord-join%20chat-blue.svg)](https://discord.gg/nthXNEv)
@ -10,26 +12,11 @@ Official Golang execution layer implementation of the Ethereum protocol.
Automated builds are available for stable releases and the unstable master branch. Binary Automated builds are available for stable releases and the unstable master branch. Binary
archives are published at https://geth.ethereum.org/downloads/. archives are published at https://geth.ethereum.org/downloads/.
## Vulcanize Specific
This section captures components specific to vulcanize.
### Branching Structure
We currently follow the following branching structure.
1. Create a branch: `v1.10.18-statediff-vX` --> `feature/some-feature`
2. Create a PR upstream: `feature/some-feature` --> `v1.10.18-statediff-vX`
3. When a release is ready, create a release branch: `v1.10.18-statediff-vX` --> `v1.10.18-statediff-X.Y.Z`
4. When `v1.10.18-statediff-vX` is stable, merge it to `statediff`.
This process is subject to change.
## Building the source ## Building the source
For prerequisites and detailed build instructions please read the [Installation Instructions](https://geth.ethereum.org/docs/getting-started/installing-geth). For prerequisites and detailed build instructions please read the [Installation Instructions](https://geth.ethereum.org/docs/install-and-build/installing-geth).
Building `geth` requires both a Go (version 1.19 or later) and a C compiler. You can install Building `geth` requires both a Go (version 1.14 or later) and a C compiler. You can install
them using your favourite package manager. Once the dependencies are installed, run them using your favourite package manager. Once the dependencies are installed, run
```shell ```shell
@ -47,44 +34,29 @@ make all
The go-ethereum project comes with several wrappers/executables found in the `cmd` The go-ethereum project comes with several wrappers/executables found in the `cmd`
directory. directory.
| Command | Description | | Command | Description |
| :--------: | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | :-----------: | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`geth`** | Our main Ethereum CLI client. It is the entry point into the Ethereum network (main-, test- or private net), capable of running as a full node (default), archive node (retaining all historical state) or a light node (retrieving data live). It can be used by other processes as a gateway into the Ethereum network via JSON RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. `geth --help` and the [CLI page](https://geth.ethereum.org/docs/fundamentals/command-line-options) for command line options. | | **`geth`** | Our main Ethereum CLI client. It is the entry point into the Ethereum network (main-, test- or private net), capable of running as a full node (default), archive node (retaining all historical state) or a light node (retrieving data live). It can be used by other processes as a gateway into the Ethereum network via JSON RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. `geth --help` and the [CLI page](https://geth.ethereum.org/docs/interface/command-line-options) for command line options. |
| `clef` | Stand-alone signing tool, which can be used as a backend signer for `geth`. | | `clef` | Stand-alone signing tool, which can be used as a backend signer for `geth`. |
| `devp2p` | Utilities to interact with nodes on the networking layer, without running a full blockchain. | | `devp2p` | Utilities to interact with nodes on the networking layer, without running a full blockchain. |
| `abigen` | Source code generator to convert Ethereum contract definitions into easy-to-use, compile-time type-safe Go packages. It operates on plain [Ethereum contract ABIs](https://docs.soliditylang.org/en/develop/abi-spec.html) with expanded functionality if the contract bytecode is also available. However, it also accepts Solidity source files, making development much more streamlined. Please see our [Native DApps](https://geth.ethereum.org/docs/developers/dapp-developer/native-bindings) page for details. | | `abigen` | Source code generator to convert Ethereum contract definitions into easy to use, compile-time type-safe Go packages. It operates on plain [Ethereum contract ABIs](https://docs.soliditylang.org/en/develop/abi-spec.html) with expanded functionality if the contract bytecode is also available. However, it also accepts Solidity source files, making development much more streamlined. Please see our [Native DApps](https://geth.ethereum.org/docs/dapp/native-bindings) page for details. |
| `bootnode` | Stripped down version of our Ethereum client implementation that only takes part in the network node discovery protocol, but does not run any of the higher level application protocols. It can be used as a lightweight bootstrap node to aid in finding peers in private networks. | | `bootnode` | Stripped down version of our Ethereum client implementation that only takes part in the network node discovery protocol, but does not run any of the higher level application protocols. It can be used as a lightweight bootstrap node to aid in finding peers in private networks. |
| `evm` | Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode. Its purpose is to allow isolated, fine-grained debugging of EVM opcodes (e.g. `evm --code 60ff60ff --debug run`). | | `evm` | Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode. Its purpose is to allow isolated, fine-grained debugging of EVM opcodes (e.g. `evm --code 60ff60ff --debug run`). |
| `rlpdump` | Developer utility tool to convert binary RLP ([Recursive Length Prefix](https://ethereum.org/en/developers/docs/data-structures-and-encoding/rlp)) dumps (data encoding used by the Ethereum protocol both network as well as consensus wise) to user-friendlier hierarchical representation (e.g. `rlpdump --hex CE0183FFFFFFC4C304050583616263`). | | `rlpdump` | Developer utility tool to convert binary RLP ([Recursive Length Prefix](https://eth.wiki/en/fundamentals/rlp)) dumps (data encoding used by the Ethereum protocol both network as well as consensus wise) to user-friendlier hierarchical representation (e.g. `rlpdump --hex CE0183FFFFFFC4C304050583616263`). |
| `puppeth` | a CLI wizard that aids in creating a new Ethereum network. |
## Running `geth` ## Running `geth`
Going through all the possible command line flags is out of scope here (please consult our Going through all the possible command line flags is out of scope here (please consult our
[CLI Wiki page](https://geth.ethereum.org/docs/fundamentals/command-line-options)), [CLI Wiki page](https://geth.ethereum.org/docs/interface/command-line-options)),
but we've enumerated a few common parameter combos to get you up to speed quickly but we've enumerated a few common parameter combos to get you up to speed quickly
on how you can run your own `geth` instance. on how you can run your own `geth` instance.
### Hardware Requirements
Minimum:
- CPU with 2+ cores
- 4GB RAM
- 1TB free storage space to sync the Mainnet
- 8 MBit/sec download Internet service
Recommended:
* Fast CPU with 4+ cores
* 16GB+ RAM
* High-performance SSD with at least 1TB of free space
* 25+ MBit/sec download Internet service
### Full node on the main Ethereum network ### Full node on the main Ethereum network
By far the most common scenario is people wanting to simply interact with the Ethereum By far the most common scenario is people wanting to simply interact with the Ethereum
network: create accounts; transfer funds; deploy and interact with contracts. For this network: create accounts; transfer funds; deploy and interact with contracts. For this
particular use case, the user doesn't care about years-old historical data, so we can particular use-case the user doesn't care about years-old historical data, so we can
sync quickly to the current state of the network. To do so: sync quickly to the current state of the network. To do so:
```shell ```shell
@ -95,11 +67,11 @@ This command will:
* Start `geth` in snap sync mode (default, can be changed with the `--syncmode` flag), * Start `geth` in snap sync mode (default, can be changed with the `--syncmode` flag),
causing it to download more data in exchange for avoiding processing the entire history causing it to download more data in exchange for avoiding processing the entire history
of the Ethereum network, which is very CPU intensive. of the Ethereum network, which is very CPU intensive.
* Start the built-in interactive [JavaScript console](https://geth.ethereum.org/docs/interacting-with-geth/javascript-console), * Start up `geth`'s built-in interactive [JavaScript console](https://geth.ethereum.org/docs/interface/javascript-console),
(via the trailing `console` subcommand) through which you can interact using [`web3` methods](https://github.com/ChainSafe/web3.js/blob/0.20.7/DOCUMENTATION.md) (via the trailing `console` subcommand) through which you can interact using [`web3` methods](https://github.com/ChainSafe/web3.js/blob/0.20.7/DOCUMENTATION.md)
(note: the `web3` version bundled within `geth` is very old, and not up to date with official docs), (note: the `web3` version bundled within `geth` is very old, and not up to date with official docs),
as well as `geth`'s own [management APIs](https://geth.ethereum.org/docs/interacting-with-geth/rpc). as well as `geth`'s own [management APIs](https://geth.ethereum.org/docs/rpc/server).
This tool is optional and if you leave it out you can always attach it to an already running This tool is optional and if you leave it out you can always attach to an already running
`geth` instance with `geth attach`. `geth` instance with `geth attach`.
### A Full node on the Görli test network ### A Full node on the Görli test network
@ -114,12 +86,12 @@ the main network, but with play-Ether only.
$ geth --goerli console $ geth --goerli console
``` ```
The `console` subcommand has the same meaning as above and is equally The `console` subcommand has the exact same meaning as above and they are equally
useful on the testnet too. useful on the testnet too. Please, see above for their explanations if you've skipped here.
Specifying the `--goerli` flag, however, will reconfigure your `geth` instance a bit: Specifying the `--goerli` flag, however, will reconfigure your `geth` instance a bit:
* Instead of connecting to the main Ethereum network, the client will connect to the Görli * Instead of connecting the main Ethereum network, the client will connect to the Görli
test network, which uses different P2P bootnodes, different network IDs and genesis test network, which uses different P2P bootnodes, different network IDs and genesis
states. states.
* Instead of using the default data directory (`~/.ethereum` on Linux for example), `geth` * Instead of using the default data directory (`~/.ethereum` on Linux for example), `geth`
@ -130,21 +102,34 @@ Specifying the `--goerli` flag, however, will reconfigure your `geth` instance a
`geth attach <datadir>/goerli/geth.ipc`. Windows users are not affected by `geth attach <datadir>/goerli/geth.ipc`. Windows users are not affected by
this. this.
*Note: Although some internal protective measures prevent transactions from *Note: Although there are some internal protective measures to prevent transactions from
crossing over between the main network and test network, you should always crossing over between the main network and test network, you should make sure to always
use separate accounts for play and real money. Unless you manually move use separate accounts for play-money and real-money. Unless you manually move
accounts, `geth` will by default correctly separate the two networks and will not make any accounts, `geth` will by default correctly separate the two networks and will not make any
accounts available between them._ accounts available between them.*
### Full node on the Rinkeby test network ### Full node on the Rinkeby test network
Go Ethereum also supports connecting to the older proof-of-authority based test network Go Ethereum also supports connecting to the older proof-of-authority based test network
called [_Rinkeby_](https://www.rinkeby.io) which is operated by members of the community. called [*Rinkeby*](https://www.rinkeby.io) which is operated by members of the community.
```shell ```shell
$ geth --rinkeby console $ geth --rinkeby console
``` ```
### Full node on the Ropsten test network
In addition to Görli and Rinkeby, Geth also supports the ancient Ropsten testnet. The
Ropsten test network is based on the Ethash proof-of-work consensus algorithm. As such,
it has certain extra overhead and is more susceptible to reorganization attacks due to the
network's low difficulty/security.
```shell
$ geth --ropsten console
```
*Note: Older Geth configurations store the Ropsten database in the `testnet` subdirectory.*
### Configuration ### Configuration
As an alternative to passing the numerous flags to the `geth` binary, you can also pass a As an alternative to passing the numerous flags to the `geth` binary, you can also pass a
@ -154,14 +139,14 @@ configuration file via:
$ geth --config /path/to/your_config.toml $ geth --config /path/to/your_config.toml
``` ```
To get an idea of how the file should look like you can use the `dumpconfig` subcommand to To get an idea how the file should look like you can use the `dumpconfig` subcommand to
export your existing configuration: export your existing configuration:
```shell ```shell
$ geth --your-favourite-flags dumpconfig $ geth --your-favourite-flags dumpconfig
``` ```
_Note: This works only with `geth` v1.6.0 and above._ *Note: This works only with `geth` v1.6.0 and above.*
#### Docker quick start #### Docker quick start
@ -174,7 +159,7 @@ docker run -d --name ethereum-node -v /Users/alice/ethereum:/root \
ethereum/client-go ethereum/client-go
``` ```
This will start `geth` in snap-sync mode with a DB memory allowance of 1GB, as the This will start `geth` in snap-sync mode with a DB memory allowance of 1GB just as the
above command does. It will also create a persistent volume in your home directory for above command does. It will also create a persistent volume in your home directory for
saving your blockchain as well as map the default ports. There is also an `alpine` tag saving your blockchain as well as map the default ports. There is also an `alpine` tag
available for a slim version of the image. available for a slim version of the image.
@ -187,8 +172,8 @@ accessible from the outside.
As a developer, sooner rather than later you'll want to start interacting with `geth` and the As a developer, sooner rather than later you'll want to start interacting with `geth` and the
Ethereum network via your own programs and not manually through the console. To aid Ethereum network via your own programs and not manually through the console. To aid
this, `geth` has built-in support for a JSON-RPC based APIs ([standard APIs](https://ethereum.github.io/execution-apis/api-documentation/) this, `geth` has built-in support for a JSON-RPC based APIs ([standard APIs](https://eth.wiki/json-rpc/API)
and [`geth` specific APIs](https://geth.ethereum.org/docs/interacting-with-geth/rpc)). and [`geth` specific APIs](https://geth.ethereum.org/docs/rpc/server)).
These can be exposed via HTTP, WebSockets and IPC (UNIX sockets on UNIX based These can be exposed via HTTP, WebSockets and IPC (UNIX sockets on UNIX based
platforms, and named pipes on Windows). platforms, and named pipes on Windows).
@ -208,9 +193,9 @@ HTTP based JSON-RPC API options:
* `--ws.addr` WS-RPC server listening interface (default: `localhost`) * `--ws.addr` WS-RPC server listening interface (default: `localhost`)
* `--ws.port` WS-RPC server listening port (default: `8546`) * `--ws.port` WS-RPC server listening port (default: `8546`)
* `--ws.api` API's offered over the WS-RPC interface (default: `eth,net,web3`) * `--ws.api` API's offered over the WS-RPC interface (default: `eth,net,web3`)
* `--ws.origins` Origins from which to accept WebSocket requests * `--ws.origins` Origins from which to accept websockets requests
* `--ipcdisable` Disable the IPC-RPC server * `--ipcdisable` Disable the IPC-RPC server
* `--ipcapi` API's offered over the IPC-RPC interface (default: `admin,debug,eth,miner,net,personal,txpool,web3`) * `--ipcapi` API's offered over the IPC-RPC interface (default: `admin,debug,eth,miner,net,personal,shh,txpool,web3`)
* `--ipcpath` Filename for IPC socket/pipe within the datadir (explicit paths escape it) * `--ipcpath` Filename for IPC socket/pipe within the datadir (explicit paths escape it)
You'll need to use your own programming environments' capabilities (libraries, tools, etc) to You'll need to use your own programming environments' capabilities (libraries, tools, etc) to
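The paragraph above is cut off at the hunk boundary. As a rough, standalone illustration of talking to those JSON-RPC endpoints programmatically (not part of this change), a Go program can use the `ethclient` package against a locally running `geth` started with `--http`; the `http://localhost:8545` endpoint below is an assumption based on the default flags listed earlier.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Assumes a local geth started with --http on the default endpoint.
	client, err := ethclient.Dial("http://localhost:8545")
	if err != nil {
		log.Fatal(err)
	}
	// Fetch the latest block header over JSON-RPC.
	head, err := client.HeaderByNumber(context.Background(), nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("latest block:", head.Number)
}
```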
@ -296,13 +281,13 @@ $ bootnode --genkey=boot.key
$ bootnode --nodekey=boot.key $ bootnode --nodekey=boot.key
``` ```
With the bootnode online, it will display an [`enode` URL](https://ethereum.org/en/developers/docs/networking-layer/network-addresses/#enode) With the bootnode online, it will display an [`enode` URL](https://eth.wiki/en/fundamentals/enode-url-format)
that other nodes can use to connect to it and exchange peer information. Make sure to that other nodes can use to connect to it and exchange peer information. Make sure to
replace the displayed IP address information (most probably `[::]`) with your externally replace the displayed IP address information (most probably `[::]`) with your externally
accessible IP to get the actual `enode` URL. accessible IP to get the actual `enode` URL.
_Note: You could also use a full-fledged `geth` node as a bootnode, but it's the less *Note: You could also use a full-fledged `geth` node as a bootnode, but it's the less
recommended way._ recommended way.*
#### Starting up your member nodes #### Starting up your member nodes
@ -316,13 +301,17 @@ do also specify a custom `--datadir` flag.
$ geth --datadir=path/to/custom/data/folder --bootnodes=<bootnode-enode-url-from-above> $ geth --datadir=path/to/custom/data/folder --bootnodes=<bootnode-enode-url-from-above>
``` ```
_Note: Since your network will be completely cut off from the main and test networks, you'll *Note: Since your network will be completely cut off from the main and test networks, you'll
also need to configure a miner to process transactions and create new blocks for you._ also need to configure a miner to process transactions and create new blocks for you.*
#### Running a private miner #### Running a private miner
Mining on the public Ethereum network is a complex task as it's only feasible using GPUs,
requiring an OpenCL or CUDA enabled `ethminer` instance. For information on such a
setup, please consult the [EtherMining subreddit](https://www.reddit.com/r/EtherMining/)
and the [ethminer](https://github.com/ethereum-mining/ethminer) repository.
In a private network setting a single CPU miner instance is more than enough for In a private network setting, however a single CPU miner instance is more than enough for
practical purposes as it can produce a stable stream of blocks at the correct intervals practical purposes as it can produce a stable stream of blocks at the correct intervals
without needing heavy resources (consider running on a single thread, no need for multiple without needing heavy resources (consider running on a single thread, no need for multiple
ones either). To start a `geth` instance for mining, run it with all your usual flags, extended ones either). To start a `geth` instance for mining, run it with all your usual flags, extended
@ -339,7 +328,7 @@ transactions are accepted at (`--miner.gasprice`).
## Contribution ## Contribution
Thank you for considering helping out with the source code! We welcome contributions Thank you for considering to help out with the source code! We welcome contributions
from anyone on the internet, and are grateful for even the smallest of fixes! from anyone on the internet, and are grateful for even the smallest of fixes!
If you'd like to contribute to go-ethereum, please fork, fix, commit and send a pull request If you'd like to contribute to go-ethereum, please fork, fix, commit and send a pull request
@ -351,30 +340,24 @@ and merge procedures quick and simple.
Please make sure your contributions adhere to our coding guidelines: Please make sure your contributions adhere to our coding guidelines:
- Code must adhere to the official Go [formatting](https://golang.org/doc/effective_go.html#formatting) * Code must adhere to the official Go [formatting](https://golang.org/doc/effective_go.html#formatting)
guidelines (i.e. uses [gofmt](https://golang.org/cmd/gofmt/)). guidelines (i.e. uses [gofmt](https://golang.org/cmd/gofmt/)).
- Code must be documented adhering to the official Go [commentary](https://golang.org/doc/effective_go.html#commentary) * Code must be documented adhering to the official Go [commentary](https://golang.org/doc/effective_go.html#commentary)
guidelines. guidelines.
- Pull requests need to be based on and opened against the `master` branch. * Pull requests need to be based on and opened against the `master` branch.
- Commit messages should be prefixed with the package(s) they modify. * Commit messages should be prefixed with the package(s) they modify.
- E.g. "eth, rpc: make trace configs optional" * E.g. "eth, rpc: make trace configs optional"
Please see the [Developers' Guide](https://geth.ethereum.org/docs/developers/geth-developer/dev-guide) Please see the [Developers' Guide](https://geth.ethereum.org/docs/developers/devguide)
for more details on configuring your environment, managing project dependencies, and for more details on configuring your environment, managing project dependencies, and
testing procedures. testing procedures.
### Contributing to geth.ethereum.org
For contributions to the [go-ethereum website](https://geth.ethereum.org), please checkout and raise pull requests against the `website` branch.
For more detailed instructions please see the `website` branch [README](https://github.com/ethereum/go-ethereum/tree/website#readme) or the
[contributing](https://geth.ethereum.org/docs/developers/geth-developer/contributing) page of the website.
## License ## License
The go-ethereum library (i.e. all code outside of the `cmd` directory) is licensed under the The go-ethereum library (i.e. all code outside of the `cmd` directory) is licensed under the
[GNU Lesser General Public License v3.0](https://www.gnu.org/licenses/lgpl-3.0.en.html), [GNU Lesser General Public License v3.0](https://www.gnu.org/licenses/lgpl-3.0.en.html),
also included in our repository in the `COPYING.LESSER` file. also included in our repository in the `COPYING.LESSER` file.
The go-ethereum binaries (i.e. all code inside of the `cmd` directory) are licensed under the The go-ethereum binaries (i.e. all code inside of the `cmd` directory) is licensed under the
[GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html), also [GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html), also
included in our repository in the `COPYING` file. included in our repository in the `COPYING` file.

accounts/abi/abi.go

@ -87,7 +87,7 @@ func (abi ABI) getArguments(name string, data []byte) (Arguments, error) {
var args Arguments var args Arguments
if method, ok := abi.Methods[name]; ok { if method, ok := abi.Methods[name]; ok {
if len(data)%32 != 0 { if len(data)%32 != 0 {
return nil, fmt.Errorf("abi: improperly formatted output: %q - Bytes: %+v", data, data) return nil, fmt.Errorf("abi: improperly formatted output: %s - Bytes: [%+v]", string(data), data)
} }
args = method.Outputs args = method.Outputs
} }
@ -95,7 +95,7 @@ func (abi ABI) getArguments(name string, data []byte) (Arguments, error) {
args = event.Inputs args = event.Inputs
} }
if args == nil { if args == nil {
return nil, fmt.Errorf("abi: could not locate named method or event: %s", name) return nil, errors.New("abi: could not locate named method or event")
} }
return args, nil return args, nil
} }
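For orientation, `getArguments` backs the exported `Unpack` path, and the length check in the first hunk is what rejects return data that is not a whole number of 32-byte words. A minimal usage sketch under stated assumptions: the one-function ABI definition and the `get` method name below are made up for illustration and are not part of this diff.

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
	// Hypothetical single-function ABI used only for this example.
	const def = `[{"type":"function","name":"get","stateMutability":"view","inputs":[],"outputs":[{"name":"","type":"uint256"}]}]`
	parsed, err := abi.JSON(strings.NewReader(def))
	if err != nil {
		log.Fatal(err)
	}

	// One 32-byte return word encoding the value 7; any length that is not a
	// multiple of 32 trips the "improperly formatted output" error above.
	ret := make([]byte, 32)
	ret[31] = 7

	out, err := parsed.Unpack("get", ret)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out[0]) // *big.Int with value 7
}
```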
@ -164,7 +164,7 @@ func (abi *ABI) UnmarshalJSON(data []byte) error {
case "constructor": case "constructor":
abi.Constructor = NewMethod("", "", Constructor, field.StateMutability, field.Constant, field.Payable, field.Inputs, nil) abi.Constructor = NewMethod("", "", Constructor, field.StateMutability, field.Constant, field.Payable, field.Inputs, nil)
case "function": case "function":
name := ResolveNameConflict(field.Name, func(s string) bool { _, ok := abi.Methods[s]; return ok }) name := overloadedName(field.Name, func(s string) bool { _, ok := abi.Methods[s]; return ok })
abi.Methods[name] = NewMethod(name, field.Name, Function, field.StateMutability, field.Constant, field.Payable, field.Inputs, field.Outputs) abi.Methods[name] = NewMethod(name, field.Name, Function, field.StateMutability, field.Constant, field.Payable, field.Inputs, field.Outputs)
case "fallback": case "fallback":
// New introduced function type in v0.6.0, check more detail // New introduced function type in v0.6.0, check more detail
@ -184,11 +184,9 @@ func (abi *ABI) UnmarshalJSON(data []byte) error {
} }
abi.Receive = NewMethod("", "", Receive, field.StateMutability, field.Constant, field.Payable, nil, nil) abi.Receive = NewMethod("", "", Receive, field.StateMutability, field.Constant, field.Payable, nil, nil)
case "event": case "event":
name := ResolveNameConflict(field.Name, func(s string) bool { _, ok := abi.Events[s]; return ok }) name := overloadedName(field.Name, func(s string) bool { _, ok := abi.Events[s]; return ok })
abi.Events[name] = NewEvent(name, field.Name, field.Anonymous, field.Inputs) abi.Events[name] = NewEvent(name, field.Name, field.Anonymous, field.Inputs)
case "error": case "error":
// Errors cannot be overloaded or overridden but are inherited,
// no need to resolve the name conflict here.
abi.Errors[field.Name] = NewError(field.Name, field.Inputs) abi.Errors[field.Name] = NewError(field.Name, field.Inputs)
default: default:
return fmt.Errorf("abi: could not recognize type %v of field %v", field.Type, field.Name) return fmt.Errorf("abi: could not recognize type %v of field %v", field.Type, field.Name)
@ -246,13 +244,27 @@ func UnpackRevert(data []byte) (string, error) {
if !bytes.Equal(data[:4], revertSelector) { if !bytes.Equal(data[:4], revertSelector) {
return "", errors.New("invalid data for unpacking") return "", errors.New("invalid data for unpacking")
} }
typ, err := NewType("string", "", nil) typ, _ := NewType("string", "", nil)
if err != nil {
return "", err
}
unpacked, err := (Arguments{{Type: typ}}).Unpack(data[4:]) unpacked, err := (Arguments{{Type: typ}}).Unpack(data[4:])
if err != nil { if err != nil {
return "", err return "", err
} }
return unpacked[0].(string), nil return unpacked[0].(string), nil
} }
// overloadedName returns the next available name for a given thing.
// Needed since solidity allows for overloading.
//
// e.g. if the abi contains Methods send, send1
// overloadedName would return send2 for input send.
//
// overloadedName works for methods, events and errors.
func overloadedName(rawName string, isAvail func(string) bool) string {
name := rawName
ok := isAvail(name)
for idx := 0; ok; idx++ {
name = fmt.Sprintf("%s%d", rawName, idx)
ok = isAvail(name)
}
return name
}
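As a standalone sketch of the renaming rule documented in the hunk above (not the fork's actual code, just the same loop extracted so it can run in isolation):

```go
package main

import "fmt"

// pickName mirrors the overloadedName logic shown above: if the raw name is
// already taken, append 0, 1, 2, ... until a free name is found.
func pickName(rawName string, isTaken func(string) bool) string {
	name := rawName
	for idx := 0; isTaken(name); idx++ {
		name = fmt.Sprintf("%s%d", rawName, idx)
	}
	return name
}

func main() {
	existing := map[string]bool{"send": true, "send0": true}
	// With "send" and "send0" taken, the next free name is "send1".
	fmt.Println(pickName("send", func(s string) bool { return existing[s] }))
}
```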

View File

@ -165,9 +165,8 @@ func TestInvalidABI(t *testing.T) {
// TestConstructor tests a constructor function. // TestConstructor tests a constructor function.
// The test is based on the following contract: // The test is based on the following contract:
// // contract TestConstructor {
// contract TestConstructor { // constructor(uint256 a, uint256 b) public{}
// constructor(uint256 a, uint256 b) public{}
// } // }
func TestConstructor(t *testing.T) { func TestConstructor(t *testing.T) {
json := `[{ "inputs": [{"internalType": "uint256","name": "a","type": "uint256" },{ "internalType": "uint256","name": "b","type": "uint256"}],"stateMutability": "nonpayable","type": "constructor"}]` json := `[{ "inputs": [{"internalType": "uint256","name": "a","type": "uint256" },{ "internalType": "uint256","name": "b","type": "uint256"}],"stateMutability": "nonpayable","type": "constructor"}]`
@ -725,19 +724,16 @@ func TestBareEvents(t *testing.T) {
} }
// TestUnpackEvent is based on this contract: // TestUnpackEvent is based on this contract:
// // contract T {
// contract T { // event received(address sender, uint amount, bytes memo);
// event received(address sender, uint amount, bytes memo); // event receivedAddr(address sender);
// event receivedAddr(address sender); // function receive(bytes memo) external payable {
// function receive(bytes memo) external payable { // received(msg.sender, msg.value, memo);
// received(msg.sender, msg.value, memo); // receivedAddr(msg.sender);
// receivedAddr(msg.sender); // }
// } // }
// }
//
// When receive("X") is called with sender 0x00... and value 1, it produces this tx receipt: // When receive("X") is called with sender 0x00... and value 1, it produces this tx receipt:
// // receipt{status=1 cgas=23949 bloom=00000000004000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000040200000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 logs=[log: b6818c8064f645cd82d99b59a1a267d6d61117ef [75fd880d39c1daf53b6547ab6cb59451fc6452d27caa90e5b6649dd8293b9eed] 000000000000000000000000376c47978271565f56deb45495afa69e59c16ab200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000060000000000000000000000000000000000000000000000000000000000000000158 9ae378b6d4409eada347a5dc0c180f186cb62dc68fcc0f043425eb917335aa28 0 95d429d309bb9d753954195fe2d69bd140b4ae731b9b5b605c34323de162cf00 0]}
// receipt{status=1 cgas=23949 bloom=00000000004000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000040200000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 logs=[log: b6818c8064f645cd82d99b59a1a267d6d61117ef [75fd880d39c1daf53b6547ab6cb59451fc6452d27caa90e5b6649dd8293b9eed] 000000000000000000000000376c47978271565f56deb45495afa69e59c16ab200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000060000000000000000000000000000000000000000000000000000000000000000158 9ae378b6d4409eada347a5dc0c180f186cb62dc68fcc0f043425eb917335aa28 0 95d429d309bb9d753954195fe2d69bd140b4ae731b9b5b605c34323de162cf00 0]}
func TestUnpackEvent(t *testing.T) { func TestUnpackEvent(t *testing.T) {
const abiJSON = `[{"constant":false,"inputs":[{"name":"memo","type":"bytes"}],"name":"receive","outputs":[],"payable":true,"stateMutability":"payable","type":"function"},{"anonymous":false,"inputs":[{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}],"name":"received","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"name":"sender","type":"address"}],"name":"receivedAddr","type":"event"}]` const abiJSON = `[{"constant":false,"inputs":[{"name":"memo","type":"bytes"}],"name":"receive","outputs":[],"payable":true,"stateMutability":"payable","type":"function"},{"anonymous":false,"inputs":[{"indexed":false,"name":"sender","type":"address"},{"indexed":false,"name":"amount","type":"uint256"},{"indexed":false,"name":"memo","type":"bytes"}],"name":"received","type":"event"},{"anonymous":false,"inputs":[{"indexed":false,"name":"sender","type":"address"}],"name":"receivedAddr","type":"event"}]`
abi, err := JSON(strings.NewReader(abiJSON)) abi, err := JSON(strings.NewReader(abiJSON))
@ -1042,7 +1038,9 @@ func TestABI_EventById(t *testing.T) {
} }
if event == nil { if event == nil {
t.Errorf("We should find a event for topic %s, test #%d", topicID.Hex(), testnum) t.Errorf("We should find a event for topic %s, test #%d", topicID.Hex(), testnum)
} else if event.ID != topicID { }
if event.ID != topicID {
t.Errorf("Event id %s does not match topic %s, test #%d", event.ID.Hex(), topicID.Hex(), testnum) t.Errorf("Event id %s does not match topic %s, test #%d", event.ID.Hex(), topicID.Hex(), testnum)
} }
@ -1082,9 +1080,8 @@ func TestDoubleDuplicateMethodNames(t *testing.T) {
// TestDoubleDuplicateEventNames checks that if send0 already exists, there won't be a name // TestDoubleDuplicateEventNames checks that if send0 already exists, there won't be a name
// conflict and that the second send event will be renamed send1. // conflict and that the second send event will be renamed send1.
// The test runs the abi of the following contract. // The test runs the abi of the following contract.
// // contract DuplicateEvent {
// contract DuplicateEvent { // event send(uint256 a);
// event send(uint256 a);
// event send0(); // event send0();
// event send(); // event send();
// } // }
@ -1111,8 +1108,7 @@ func TestDoubleDuplicateEventNames(t *testing.T) {
// TestUnnamedEventParam checks that an event with unnamed parameters is // TestUnnamedEventParam checks that an event with unnamed parameters is
// correctly handled. // correctly handled.
// The test runs the abi of the following contract. // The test runs the abi of the following contract.
// // contract TestEvent {
// contract TestEvent {
// event send(uint256, uint256); // event send(uint256, uint256);
// } // }
func TestUnnamedEventParam(t *testing.T) { func TestUnnamedEventParam(t *testing.T) {

View File

@ -18,7 +18,6 @@ package abi
import ( import (
"encoding/json" "encoding/json"
"errors"
"fmt" "fmt"
"reflect" "reflect"
"strings" "strings"
@ -79,8 +78,8 @@ func (arguments Arguments) isTuple() bool {
// Unpack performs the operation hexdata -> Go format. // Unpack performs the operation hexdata -> Go format.
func (arguments Arguments) Unpack(data []byte) ([]interface{}, error) { func (arguments Arguments) Unpack(data []byte) ([]interface{}, error) {
if len(data) == 0 { if len(data) == 0 {
if len(arguments.NonIndexed()) != 0 { if len(arguments) != 0 {
return nil, errors.New("abi: attempting to unmarshall an empty string while arguments are expected") return nil, fmt.Errorf("abi: attempting to unmarshall an empty string while arguments are expected")
} }
return make([]interface{}, 0), nil return make([]interface{}, 0), nil
} }
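For context on the `Arguments.Unpack` path touched above, a minimal usage sketch against the exported ABI types; the value 42 is made up for illustration:

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/common"
)

func main() {
	// Describe a single uint256 output argument.
	typ, err := abi.NewType("uint256", "", nil)
	if err != nil {
		panic(err)
	}
	args := abi.Arguments{{Type: typ}}

	// ABI-encode the value 42 as one 32-byte word, then unpack it back.
	encoded := common.LeftPadBytes(big.NewInt(42).Bytes(), 32)
	vals, err := args.Unpack(encoded)
	if err != nil {
		panic(err)
	}
	fmt.Println(vals[0].(*big.Int)) // 42
}
```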
@ -91,11 +90,11 @@ func (arguments Arguments) Unpack(data []byte) ([]interface{}, error) {
func (arguments Arguments) UnpackIntoMap(v map[string]interface{}, data []byte) error { func (arguments Arguments) UnpackIntoMap(v map[string]interface{}, data []byte) error {
// Make sure map is not nil // Make sure map is not nil
if v == nil { if v == nil {
return errors.New("abi: cannot unpack into a nil map") return fmt.Errorf("abi: cannot unpack into a nil map")
} }
if len(data) == 0 { if len(data) == 0 {
if len(arguments.NonIndexed()) != 0 { if len(arguments) != 0 {
return errors.New("abi: attempting to unmarshall an empty string while arguments are expected") return fmt.Errorf("abi: attempting to unmarshall an empty string while arguments are expected")
} }
return nil // Nothing to unmarshal, return return nil // Nothing to unmarshal, return
} }
@ -116,8 +115,8 @@ func (arguments Arguments) Copy(v interface{}, values []interface{}) error {
return fmt.Errorf("abi: Unpack(non-pointer %T)", v) return fmt.Errorf("abi: Unpack(non-pointer %T)", v)
} }
if len(values) == 0 { if len(values) == 0 {
if len(arguments.NonIndexed()) != 0 { if len(arguments) != 0 {
return errors.New("abi: attempting to copy no values while arguments are expected") return fmt.Errorf("abi: attempting to copy no values while %d arguments are expected", len(arguments))
} }
return nil // Nothing to copy, return return nil // Nothing to copy, return
} }
@ -187,9 +186,6 @@ func (arguments Arguments) UnpackValues(data []byte) ([]interface{}, error) {
virtualArgs := 0 virtualArgs := 0
for index, arg := range nonIndexedArgs { for index, arg := range nonIndexedArgs {
marshalledValue, err := toGoType((index+virtualArgs)*32, arg.Type, data) marshalledValue, err := toGoType((index+virtualArgs)*32, arg.Type, data)
if err != nil {
return nil, err
}
if arg.Type.T == ArrayTy && !isDynamicType(arg.Type) { if arg.Type.T == ArrayTy && !isDynamicType(arg.Type) {
// If we have a static array, like [3]uint256, these are coded as // If we have a static array, like [3]uint256, these are coded as
// just like uint256,uint256,uint256. // just like uint256,uint256,uint256.
@ -207,6 +203,9 @@ func (arguments Arguments) UnpackValues(data []byte) ([]interface{}, error) {
// coded as just like uint256,bool,uint256 // coded as just like uint256,bool,uint256
virtualArgs += getTypeSize(arg.Type)/32 - 1 virtualArgs += getTypeSize(arg.Type)/32 - 1
} }
if err != nil {
return nil, err
}
retval = append(retval, marshalledValue) retval = append(retval, marshalledValue)
} }
return retval, nil return retval, nil

View File

@ -21,6 +21,7 @@ import (
"crypto/ecdsa" "crypto/ecdsa"
"errors" "errors"
"io" "io"
"io/ioutil"
"math/big" "math/big"
"github.com/ethereum/go-ethereum/accounts" "github.com/ethereum/go-ethereum/accounts"
@ -44,7 +45,7 @@ var ErrNotAuthorized = errors.New("not authorized to sign this account")
// Deprecated: Use NewTransactorWithChainID instead. // Deprecated: Use NewTransactorWithChainID instead.
func NewTransactor(keyin io.Reader, passphrase string) (*TransactOpts, error) { func NewTransactor(keyin io.Reader, passphrase string) (*TransactOpts, error) {
log.Warn("WARNING: NewTransactor has been deprecated in favour of NewTransactorWithChainID") log.Warn("WARNING: NewTransactor has been deprecated in favour of NewTransactorWithChainID")
json, err := io.ReadAll(keyin) json, err := ioutil.ReadAll(keyin)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -105,7 +106,7 @@ func NewKeyedTransactor(key *ecdsa.PrivateKey) *TransactOpts {
// NewTransactorWithChainID is a utility method to easily create a transaction signer from // NewTransactorWithChainID is a utility method to easily create a transaction signer from
// an encrypted json key stream and the associated passphrase. // an encrypted json key stream and the associated passphrase.
func NewTransactorWithChainID(keyin io.Reader, passphrase string, chainID *big.Int) (*TransactOpts, error) { func NewTransactorWithChainID(keyin io.Reader, passphrase string, chainID *big.Int) (*TransactOpts, error) {
json, err := io.ReadAll(keyin) json, err := ioutil.ReadAll(keyin)
if err != nil { if err != nil {
return nil, err return nil, err
} }
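A minimal sketch of how the chain-ID-aware constructor above is typically called; the key path, passphrase, and chain ID below are placeholders (1337 is the simulated backend's default chain ID):

```go
package main

import (
	"fmt"
	"log"
	"math/big"
	"os"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
)

func main() {
	// Hypothetical encrypted keystore JSON file and passphrase.
	keyJSON, err := os.Open("/path/to/keystore.json")
	if err != nil {
		log.Fatal(err)
	}
	defer keyJSON.Close()

	// The chain ID must match the target network.
	opts, err := bind.NewTransactorWithChainID(keyJSON, "secret", big.NewInt(1337))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("signer address:", opts.From.Hex())
}
```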

View File

@ -63,13 +63,11 @@ type SimulatedBackend struct {
database ethdb.Database // In memory database to store our testing data database ethdb.Database // In memory database to store our testing data
blockchain *core.BlockChain // Ethereum blockchain to handle the consensus blockchain *core.BlockChain // Ethereum blockchain to handle the consensus
mu sync.Mutex mu sync.Mutex
pendingBlock *types.Block // Currently pending block that will be imported on request pendingBlock *types.Block // Currently pending block that will be imported on request
pendingState *state.StateDB // Currently pending state that will be the active on request pendingState *state.StateDB // Currently pending state that will be the active on request
pendingReceipts types.Receipts // Currently receipts for the pending block
events *filters.EventSystem // for filtering log events live events *filters.EventSystem // Event system for filtering log events live
filterSystem *filters.FilterSystem // for filtering database logs
config *params.ChainConfig config *params.ChainConfig
} }
@ -78,27 +76,17 @@ type SimulatedBackend struct {
// and uses a simulated blockchain for testing purposes. // and uses a simulated blockchain for testing purposes.
// A simulated backend always uses chainID 1337. // A simulated backend always uses chainID 1337.
func NewSimulatedBackendWithDatabase(database ethdb.Database, alloc core.GenesisAlloc, gasLimit uint64) *SimulatedBackend { func NewSimulatedBackendWithDatabase(database ethdb.Database, alloc core.GenesisAlloc, gasLimit uint64) *SimulatedBackend {
genesis := core.Genesis{ genesis := core.Genesis{Config: params.AllEthashProtocolChanges, GasLimit: gasLimit, Alloc: alloc}
Config: params.AllEthashProtocolChanges, genesis.MustCommit(database)
GasLimit: gasLimit, blockchain, _ := core.NewBlockChain(database, nil, genesis.Config, ethash.NewFaker(), vm.Config{}, nil, nil)
Alloc: alloc,
}
blockchain, _ := core.NewBlockChain(database, nil, &genesis, nil, ethash.NewFaker(), vm.Config{}, nil, nil)
backend := &SimulatedBackend{ backend := &SimulatedBackend{
database: database, database: database,
blockchain: blockchain, blockchain: blockchain,
config: genesis.Config, config: genesis.Config,
events: filters.NewEventSystem(&filterBackend{database, blockchain}, false),
} }
backend.rollback(blockchain.CurrentBlock())
filterBackend := &filterBackend{database, blockchain, backend}
backend.filterSystem = filters.NewFilterSystem(filterBackend, filters.Config{})
backend.events = filters.NewEventSystem(backend.filterSystem, false)
header := backend.blockchain.CurrentBlock()
block := backend.blockchain.GetBlock(header.Hash(), header.Number.Uint64())
backend.rollback(block)
return backend return backend
} }
@ -117,20 +105,16 @@ func (b *SimulatedBackend) Close() error {
// Commit imports all the pending transactions as a single block and starts a // Commit imports all the pending transactions as a single block and starts a
// fresh new state. // fresh new state.
func (b *SimulatedBackend) Commit() common.Hash { func (b *SimulatedBackend) Commit() {
b.mu.Lock() b.mu.Lock()
defer b.mu.Unlock() defer b.mu.Unlock()
if _, err := b.blockchain.InsertChain([]*types.Block{b.pendingBlock}); err != nil { if _, err := b.blockchain.InsertChain([]*types.Block{b.pendingBlock}); err != nil {
panic(err) // This cannot happen unless the simulator is wrong, fail in that case panic(err) // This cannot happen unless the simulator is wrong, fail in that case
} }
blockHash := b.pendingBlock.Hash()
// Using the last inserted block here makes it possible to build on a side // Using the last inserted block here makes it possible to build on a side
// chain after a fork. // chain after a fork.
b.rollback(b.pendingBlock) b.rollback(b.pendingBlock)
return blockHash
} }
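A minimal sketch of the behaviour being changed here, assuming the variant in which Commit returns the new block's hash; the address and allocation are arbitrary test values:

```go
package main

import (
	"context"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core"
)

func main() {
	addr := common.HexToAddress("0x71562b71999873DB5b286dF957af199Ec94617F7")
	sim := backends.NewSimulatedBackend(
		core.GenesisAlloc{addr: {Balance: big.NewInt(1e18)}},
		8_000_000,
	)
	defer sim.Close()

	h1 := sim.Commit() // hash of the newly committed block (newer API)
	sim.Commit()       // extend the chain by one more block

	// Fork back to the first committed block; later commits build a side chain.
	if err := sim.Fork(context.Background(), h1); err != nil {
		log.Fatal(err)
	}
}
```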
// Rollback aborts all pending transactions, reverting to the last committed state. // Rollback aborts all pending transactions, reverting to the last committed state.
@ -138,10 +122,7 @@ func (b *SimulatedBackend) Rollback() {
b.mu.Lock() b.mu.Lock()
defer b.mu.Unlock() defer b.mu.Unlock()
header := b.blockchain.CurrentBlock() b.rollback(b.blockchain.CurrentBlock())
block := b.blockchain.GetBlock(header.Hash(), header.Number.Uint64())
b.rollback(block)
} }
func (b *SimulatedBackend) rollback(parent *types.Block) { func (b *SimulatedBackend) rollback(parent *types.Block) {
@ -180,7 +161,7 @@ func (b *SimulatedBackend) Fork(ctx context.Context, parent common.Hash) error {
// stateByBlockNumber retrieves a state by a given blocknumber. // stateByBlockNumber retrieves a state by a given blocknumber.
func (b *SimulatedBackend) stateByBlockNumber(ctx context.Context, blockNumber *big.Int) (*state.StateDB, error) { func (b *SimulatedBackend) stateByBlockNumber(ctx context.Context, blockNumber *big.Int) (*state.StateDB, error) {
if blockNumber == nil || blockNumber.Cmp(b.blockchain.CurrentBlock().Number) == 0 { if blockNumber == nil || blockNumber.Cmp(b.blockchain.CurrentBlock().Number()) == 0 {
return b.blockchain.State() return b.blockchain.State()
} }
block, err := b.blockByNumber(ctx, blockNumber) block, err := b.blockByNumber(ctx, blockNumber)
@ -309,7 +290,7 @@ func (b *SimulatedBackend) BlockByNumber(ctx context.Context, number *big.Int) (
// (associated with its hash) if found without Lock. // (associated with its hash) if found without Lock.
func (b *SimulatedBackend) blockByNumber(ctx context.Context, number *big.Int) (*types.Block, error) { func (b *SimulatedBackend) blockByNumber(ctx context.Context, number *big.Int) (*types.Block, error) {
if number == nil || number.Cmp(b.pendingBlock.Number()) == 0 { if number == nil || number.Cmp(b.pendingBlock.Number()) == 0 {
return b.blockByHash(ctx, b.blockchain.CurrentBlock().Hash()) return b.blockchain.CurrentBlock(), nil
} }
block := b.blockchain.GetBlockByNumber(uint64(number.Int64())) block := b.blockchain.GetBlockByNumber(uint64(number.Int64()))
@ -437,7 +418,7 @@ func (b *SimulatedBackend) CallContract(ctx context.Context, call ethereum.CallM
b.mu.Lock() b.mu.Lock()
defer b.mu.Unlock() defer b.mu.Unlock()
if blockNumber != nil && blockNumber.Cmp(b.blockchain.CurrentBlock().Number) != 0 { if blockNumber != nil && blockNumber.Cmp(b.blockchain.CurrentBlock().Number()) != 0 {
return nil, errBlockNumberUnsupported return nil, errBlockNumberUnsupported
} }
stateDB, err := b.blockchain.State() stateDB, err := b.blockchain.State()
@ -461,7 +442,7 @@ func (b *SimulatedBackend) PendingCallContract(ctx context.Context, call ethereu
defer b.mu.Unlock() defer b.mu.Unlock()
defer b.pendingState.RevertToSnapshot(b.pendingState.Snapshot()) defer b.pendingState.RevertToSnapshot(b.pendingState.Snapshot())
res, err := b.callContract(ctx, call, b.pendingBlock.Header(), b.pendingState) res, err := b.callContract(ctx, call, b.pendingBlock, b.pendingState)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -533,7 +514,7 @@ func (b *SimulatedBackend) EstimateGas(ctx context.Context, call ethereum.CallMs
available := new(big.Int).Set(balance) available := new(big.Int).Set(balance)
if call.Value != nil { if call.Value != nil {
if call.Value.Cmp(available) >= 0 { if call.Value.Cmp(available) >= 0 {
return 0, core.ErrInsufficientFundsForTransfer return 0, errors.New("insufficient funds for transfer")
} }
available.Sub(available, call.Value) available.Sub(available, call.Value)
} }
@ -555,7 +536,7 @@ func (b *SimulatedBackend) EstimateGas(ctx context.Context, call ethereum.CallMs
call.Gas = gas call.Gas = gas
snapshot := b.pendingState.Snapshot() snapshot := b.pendingState.Snapshot()
res, err := b.callContract(ctx, call, b.pendingBlock.Header(), b.pendingState) res, err := b.callContract(ctx, call, b.pendingBlock, b.pendingState)
b.pendingState.RevertToSnapshot(snapshot) b.pendingState.RevertToSnapshot(snapshot)
if err != nil { if err != nil {
@ -605,7 +586,7 @@ func (b *SimulatedBackend) EstimateGas(ctx context.Context, call ethereum.CallMs
// callContract implements common code between normal and pending contract calls. // callContract implements common code between normal and pending contract calls.
// state is modified during execution, make sure to copy it if necessary. // state is modified during execution, make sure to copy it if necessary.
func (b *SimulatedBackend) callContract(ctx context.Context, call ethereum.CallMsg, header *types.Header, stateDB *state.StateDB) (*core.ExecutionResult, error) { func (b *SimulatedBackend) callContract(ctx context.Context, call ethereum.CallMsg, block *types.Block, stateDB *state.StateDB) (*core.ExecutionResult, error) {
// Gas prices post 1559 need to be initialized // Gas prices post 1559 need to be initialized
if call.GasPrice != nil && (call.GasFeeCap != nil || call.GasTipCap != nil) { if call.GasPrice != nil && (call.GasFeeCap != nil || call.GasTipCap != nil) {
return nil, errors.New("both gasPrice and (maxFeePerGas or maxPriorityFeePerGas) specified") return nil, errors.New("both gasPrice and (maxFeePerGas or maxPriorityFeePerGas) specified")
@ -623,7 +604,7 @@ func (b *SimulatedBackend) callContract(ctx context.Context, call ethereum.CallM
// User specified the legacy gas field, convert to 1559 gas typing // User specified the legacy gas field, convert to 1559 gas typing
call.GasFeeCap, call.GasTipCap = call.GasPrice, call.GasPrice call.GasFeeCap, call.GasTipCap = call.GasPrice, call.GasPrice
} else { } else {
// User specified 1559 gas fields (or none), use those // User specified 1559 gas feilds (or none), use those
if call.GasFeeCap == nil { if call.GasFeeCap == nil {
call.GasFeeCap = new(big.Int) call.GasFeeCap = new(big.Int)
} }
@ -644,33 +625,20 @@ func (b *SimulatedBackend) callContract(ctx context.Context, call ethereum.CallM
if call.Value == nil { if call.Value == nil {
call.Value = new(big.Int) call.Value = new(big.Int)
} }
// Set infinite balance to the fake caller account. // Set infinite balance to the fake caller account.
from := stateDB.GetOrNewStateObject(call.From) from := stateDB.GetOrNewStateObject(call.From)
from.SetBalance(math.MaxBig256) from.SetBalance(math.MaxBig256)
// Execute the call. // Execute the call.
msg := &core.Message{ msg := callMsg{call}
From: call.From,
To: call.To,
Value: call.Value,
GasLimit: call.Gas,
GasPrice: call.GasPrice,
GasFeeCap: call.GasFeeCap,
GasTipCap: call.GasTipCap,
Data: call.Data,
AccessList: call.AccessList,
SkipAccountChecks: true,
}
txContext := core.NewEVMTxContext(msg)
evmContext := core.NewEVMBlockContext(block.Header(), b.blockchain, nil)
// Create a new environment which holds all relevant information // Create a new environment which holds all relevant information
// about the transaction and calling mechanisms. // about the transaction and calling mechanisms.
txContext := core.NewEVMTxContext(msg)
evmContext := core.NewEVMBlockContext(header, b.blockchain, nil)
vmEnv := vm.NewEVM(evmContext, txContext, stateDB, b.config, vm.Config{NoBaseFee: true}) vmEnv := vm.NewEVM(evmContext, txContext, stateDB, b.config, vm.Config{NoBaseFee: true})
gasPool := new(core.GasPool).AddGas(math.MaxUint64) gasPool := new(core.GasPool).AddGas(math.MaxUint64)
return core.ApplyMessage(vmEnv, msg, gasPool) return core.NewStateTransition(vmEnv, msg, gasPool).TransitionDb()
} }
// SendTransaction updates the pending block to include the given transaction. // SendTransaction updates the pending block to include the given transaction.
@ -694,7 +662,7 @@ func (b *SimulatedBackend) SendTransaction(ctx context.Context, tx *types.Transa
return fmt.Errorf("invalid transaction nonce: got %d, want %d", tx.Nonce(), nonce) return fmt.Errorf("invalid transaction nonce: got %d, want %d", tx.Nonce(), nonce)
} }
// Include tx in chain // Include tx in chain
blocks, receipts := core.GenerateChain(b.config, block, ethash.NewFaker(), b.database, 1, func(number int, block *core.BlockGen) { blocks, _ := core.GenerateChain(b.config, block, ethash.NewFaker(), b.database, 1, func(number int, block *core.BlockGen) {
for _, tx := range b.pendingBlock.Transactions() { for _, tx := range b.pendingBlock.Transactions() {
block.AddTxWithChain(b.blockchain, tx) block.AddTxWithChain(b.blockchain, tx)
} }
@ -704,7 +672,6 @@ func (b *SimulatedBackend) SendTransaction(ctx context.Context, tx *types.Transa
b.pendingBlock = blocks[0] b.pendingBlock = blocks[0]
b.pendingState, _ = state.New(b.pendingBlock.Root(), stateDB.Database(), nil) b.pendingState, _ = state.New(b.pendingBlock.Root(), stateDB.Database(), nil)
b.pendingReceipts = receipts[0]
return nil return nil
} }
@ -716,7 +683,7 @@ func (b *SimulatedBackend) FilterLogs(ctx context.Context, query ethereum.Filter
var filter *filters.Filter var filter *filters.Filter
if query.BlockHash != nil { if query.BlockHash != nil {
// Block filter requested, construct a single-shot filter // Block filter requested, construct a single-shot filter
filter = b.filterSystem.NewBlockFilter(*query.BlockHash, query.Addresses, query.Topics) filter = filters.NewBlockFilter(&filterBackend{b.database, b.blockchain}, *query.BlockHash, query.Addresses, query.Topics)
} else { } else {
// Initialize unset filter boundaries to run from genesis to chain head // Initialize unset filter boundaries to run from genesis to chain head
from := int64(0) from := int64(0)
@ -728,7 +695,7 @@ func (b *SimulatedBackend) FilterLogs(ctx context.Context, query ethereum.Filter
to = query.ToBlock.Int64() to = query.ToBlock.Int64()
} }
// Construct the range filter // Construct the range filter
filter = b.filterSystem.NewRangeFilter(from, to, query.Addresses, query.Topics) filter = filters.NewRangeFilter(&filterBackend{b.database, b.blockchain}, from, to, query.Addresses, query.Topics)
} }
// Run the filter and return all the logs // Run the filter and return all the logs
logs, err := filter.Logs(ctx) logs, err := filter.Logs(ctx)
@ -812,13 +779,8 @@ func (b *SimulatedBackend) AdjustTime(adjustment time.Duration) error {
if len(b.pendingBlock.Transactions()) != 0 { if len(b.pendingBlock.Transactions()) != 0 {
return errors.New("Could not adjust time on non-empty block") return errors.New("Could not adjust time on non-empty block")
} }
// Get the last block
block := b.blockchain.GetBlockByHash(b.pendingBlock.ParentHash())
if block == nil {
return fmt.Errorf("could not find parent")
}
blocks, _ := core.GenerateChain(b.config, block, ethash.NewFaker(), b.database, 1, func(number int, block *core.BlockGen) { blocks, _ := core.GenerateChain(b.config, b.blockchain.CurrentBlock(), ethash.NewFaker(), b.database, 1, func(number int, block *core.BlockGen) {
block.OffsetTime(int64(adjustment.Seconds())) block.OffsetTime(int64(adjustment.Seconds()))
}) })
stateDB, _ := b.blockchain.State() stateDB, _ := b.blockchain.State()
@ -834,51 +796,44 @@ func (b *SimulatedBackend) Blockchain() *core.BlockChain {
return b.blockchain return b.blockchain
} }
// callMsg implements core.Message to allow passing it as a transaction simulator.
type callMsg struct {
ethereum.CallMsg
}
func (m callMsg) From() common.Address { return m.CallMsg.From }
func (m callMsg) Nonce() uint64 { return 0 }
func (m callMsg) IsFake() bool { return true }
func (m callMsg) To() *common.Address { return m.CallMsg.To }
func (m callMsg) GasPrice() *big.Int { return m.CallMsg.GasPrice }
func (m callMsg) GasFeeCap() *big.Int { return m.CallMsg.GasFeeCap }
func (m callMsg) GasTipCap() *big.Int { return m.CallMsg.GasTipCap }
func (m callMsg) Gas() uint64 { return m.CallMsg.Gas }
func (m callMsg) Value() *big.Int { return m.CallMsg.Value }
func (m callMsg) Data() []byte { return m.CallMsg.Data }
func (m callMsg) AccessList() types.AccessList { return m.CallMsg.AccessList }
// filterBackend implements filters.Backend to support filtering for logs without // filterBackend implements filters.Backend to support filtering for logs without
// taking bloom-bits acceleration structures into account. // taking bloom-bits acceleration structures into account.
type filterBackend struct { type filterBackend struct {
db ethdb.Database db ethdb.Database
bc *core.BlockChain bc *core.BlockChain
backend *SimulatedBackend
} }
func (fb *filterBackend) ChainDb() ethdb.Database { return fb.db } func (fb *filterBackend) ChainDb() ethdb.Database { return fb.db }
func (fb *filterBackend) EventMux() *event.TypeMux { panic("not supported") } func (fb *filterBackend) EventMux() *event.TypeMux { panic("not supported") }
func (fb *filterBackend) HeaderByNumber(ctx context.Context, number rpc.BlockNumber) (*types.Header, error) { func (fb *filterBackend) HeaderByNumber(ctx context.Context, block rpc.BlockNumber) (*types.Header, error) {
switch number { if block == rpc.LatestBlockNumber {
case rpc.PendingBlockNumber:
if block := fb.backend.pendingBlock; block != nil {
return block.Header(), nil
}
return nil, nil
case rpc.LatestBlockNumber:
return fb.bc.CurrentHeader(), nil return fb.bc.CurrentHeader(), nil
case rpc.FinalizedBlockNumber:
return fb.bc.CurrentFinalBlock(), nil
case rpc.SafeBlockNumber:
return fb.bc.CurrentSafeBlock(), nil
default:
return fb.bc.GetHeaderByNumber(uint64(number.Int64())), nil
} }
return fb.bc.GetHeaderByNumber(uint64(block.Int64())), nil
} }
func (fb *filterBackend) HeaderByHash(ctx context.Context, hash common.Hash) (*types.Header, error) { func (fb *filterBackend) HeaderByHash(ctx context.Context, hash common.Hash) (*types.Header, error) {
return fb.bc.GetHeaderByHash(hash), nil return fb.bc.GetHeaderByHash(hash), nil
} }
func (fb *filterBackend) GetBody(ctx context.Context, hash common.Hash, number rpc.BlockNumber) (*types.Body, error) {
if body := fb.bc.GetBody(hash); body != nil {
return body, nil
}
return nil, errors.New("block body not found")
}
func (fb *filterBackend) PendingBlockAndReceipts() (*types.Block, types.Receipts) {
return fb.backend.pendingBlock, fb.backend.pendingReceipts
}
func (fb *filterBackend) GetReceipts(ctx context.Context, hash common.Hash) (types.Receipts, error) { func (fb *filterBackend) GetReceipts(ctx context.Context, hash common.Hash) (types.Receipts, error) {
number := rawdb.ReadHeaderNumber(fb.db, hash) number := rawdb.ReadHeaderNumber(fb.db, hash)
if number == nil { if number == nil {
@ -887,8 +842,19 @@ func (fb *filterBackend) GetReceipts(ctx context.Context, hash common.Hash) (typ
return rawdb.ReadReceipts(fb.db, hash, *number, fb.bc.Config()), nil return rawdb.ReadReceipts(fb.db, hash, *number, fb.bc.Config()), nil
} }
func (fb *filterBackend) GetLogs(ctx context.Context, hash common.Hash, number uint64) ([][]*types.Log, error) { func (fb *filterBackend) GetLogs(ctx context.Context, hash common.Hash) ([][]*types.Log, error) {
logs := rawdb.ReadLogs(fb.db, hash, number, fb.bc.Config()) number := rawdb.ReadHeaderNumber(fb.db, hash)
if number == nil {
return nil, nil
}
receipts := rawdb.ReadReceipts(fb.db, hash, *number, fb.bc.Config())
if receipts == nil {
return nil, nil
}
logs := make([][]*types.Log, len(receipts))
for i, receipt := range receipts {
logs[i] = receipt.Logs
}
return logs, nil return logs, nil
} }
@ -918,14 +884,6 @@ func (fb *filterBackend) ServiceFilter(ctx context.Context, ms *bloombits.Matche
panic("not supported") panic("not supported")
} }
func (fb *filterBackend) ChainConfig() *params.ChainConfig {
panic("not supported")
}
func (fb *filterBackend) CurrentHeader() *types.Header {
panic("not supported")
}
func nullSubscription() event.Subscription { func nullSubscription() event.Subscription {
return event.NewSubscription(func(quit <-chan struct{}) error { return event.NewSubscription(func(quit <-chan struct{}) error {
<-quit <-quit

View File

@ -93,18 +93,17 @@ func TestSimulatedBackend(t *testing.T) {
var testKey, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291") var testKey, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291")
// the following is based on this contract: // the following is based on this contract:
// contract T {
// event received(address sender, uint amount, bytes memo);
// event receivedAddr(address sender);
// //
// contract T { // function receive(bytes calldata memo) external payable returns (string memory res) {
// event received(address sender, uint amount, bytes memo); // emit received(msg.sender, msg.value, memo);
// event receivedAddr(address sender); // emit receivedAddr(msg.sender);
// // return "hello world";
// function receive(bytes calldata memo) external payable returns (string memory res) { // }
// emit received(msg.sender, msg.value, memo); // }
// emit receivedAddr(msg.sender);
// return "hello world";
// }
// }
const abiJSON = `[ { "constant": false, "inputs": [ { "name": "memo", "type": "bytes" } ], "name": "receive", "outputs": [ { "name": "res", "type": "string" } ], "payable": true, "stateMutability": "payable", "type": "function" }, { "anonymous": false, "inputs": [ { "indexed": false, "name": "sender", "type": "address" }, { "indexed": false, "name": "amount", "type": "uint256" }, { "indexed": false, "name": "memo", "type": "bytes" } ], "name": "received", "type": "event" }, { "anonymous": false, "inputs": [ { "indexed": false, "name": "sender", "type": "address" } ], "name": "receivedAddr", "type": "event" } ]` const abiJSON = `[ { "constant": false, "inputs": [ { "name": "memo", "type": "bytes" } ], "name": "receive", "outputs": [ { "name": "res", "type": "string" } ], "payable": true, "stateMutability": "payable", "type": "function" }, { "anonymous": false, "inputs": [ { "indexed": false, "name": "sender", "type": "address" }, { "indexed": false, "name": "amount", "type": "uint256" }, { "indexed": false, "name": "memo", "type": "bytes" } ], "name": "received", "type": "event" }, { "anonymous": false, "inputs": [ { "indexed": false, "name": "sender", "type": "address" } ], "name": "receivedAddr", "type": "event" } ]`
const abiBin = `0x608060405234801561001057600080fd5b506102a0806100206000396000f3fe60806040526004361061003b576000357c010000000000000000000000000000000000000000000000000000000090048063a69b6ed014610040575b600080fd5b6100b76004803603602081101561005657600080fd5b810190808035906020019064010000000081111561007357600080fd5b82018360208201111561008557600080fd5b803590602001918460018302840111640100000000831117156100a757600080fd5b9091929391929390505050610132565b6040518080602001828103825283818151815260200191508051906020019080838360005b838110156100f75780820151818401526020810190506100dc565b50505050905090810190601f1680156101245780820380516001836020036101000a031916815260200191505b509250505060405180910390f35b60607f75fd880d39c1daf53b6547ab6cb59451fc6452d27caa90e5b6649dd8293b9eed33348585604051808573ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff168152602001848152602001806020018281038252848482818152602001925080828437600081840152601f19601f8201169050808301925050509550505050505060405180910390a17f46923992397eac56cf13058aced2a1871933622717e27b24eabc13bf9dd329c833604051808273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200191505060405180910390a16040805190810160405280600b81526020017f68656c6c6f20776f726c6400000000000000000000000000000000000000000081525090509291505056fea165627a7a72305820ff0c57dad254cfeda48c9cfb47f1353a558bccb4d1bc31da1dae69315772d29e0029` const abiBin = `0x608060405234801561001057600080fd5b506102a0806100206000396000f3fe60806040526004361061003b576000357c010000000000000000000000000000000000000000000000000000000090048063a69b6ed014610040575b600080fd5b6100b76004803603602081101561005657600080fd5b810190808035906020019064010000000081111561007357600080fd5b82018360208201111561008557600080fd5b803590602001918460018302840111640100000000831117156100a757600080fd5b9091929391929390505050610132565b6040518080602001828103825283818151815260200191508051906020019080838360005b838110156100f75780820151818401526020810190506100dc565b50505050905090810190601f1680156101245780820380516001836020036101000a031916815260200191505b509250505060405180910390f35b60607f75fd880d39c1daf53b6547ab6cb59451fc6452d27caa90e5b6649dd8293b9eed33348585604051808573ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff168152602001848152602001806020018281038252848482818152602001925080828437600081840152601f19601f8201169050808301925050509550505050505060405180910390a17f46923992397eac56cf13058aced2a1871933622717e27b24eabc13bf9dd329c833604051808273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200191505060405180910390a16040805190810160405280600b81526020017f68656c6c6f20776f726c6400000000000000000000000000000000000000000081525090509291505056fea165627a7a72305820ff0c57dad254cfeda48c9cfb47f1353a558bccb4d1bc31da1dae69315772d29e0029`
const deployedCode = `60806040526004361061003b576000357c010000000000000000000000000000000000000000000000000000000090048063a69b6ed014610040575b600080fd5b6100b76004803603602081101561005657600080fd5b810190808035906020019064010000000081111561007357600080fd5b82018360208201111561008557600080fd5b803590602001918460018302840111640100000000831117156100a757600080fd5b9091929391929390505050610132565b6040518080602001828103825283818151815260200191508051906020019080838360005b838110156100f75780820151818401526020810190506100dc565b50505050905090810190601f1680156101245780820380516001836020036101000a031916815260200191505b509250505060405180910390f35b60607f75fd880d39c1daf53b6547ab6cb59451fc6452d27caa90e5b6649dd8293b9eed33348585604051808573ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff168152602001848152602001806020018281038252848482818152602001925080828437600081840152601f19601f8201169050808301925050509550505050505060405180910390a17f46923992397eac56cf13058aced2a1871933622717e27b24eabc13bf9dd329c833604051808273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200191505060405180910390a16040805190810160405280600b81526020017f68656c6c6f20776f726c6400000000000000000000000000000000000000000081525090509291505056fea165627a7a72305820ff0c57dad254cfeda48c9cfb47f1353a558bccb4d1bc31da1dae69315772d29e0029` const deployedCode = `60806040526004361061003b576000357c010000000000000000000000000000000000000000000000000000000090048063a69b6ed014610040575b600080fd5b6100b76004803603602081101561005657600080fd5b810190808035906020019064010000000081111561007357600080fd5b82018360208201111561008557600080fd5b803590602001918460018302840111640100000000831117156100a757600080fd5b9091929391929390505050610132565b6040518080602001828103825283818151815260200191508051906020019080838360005b838110156100f75780820151818401526020810190506100dc565b50505050905090810190601f1680156101245780820380516001836020036101000a031916815260200191505b509250505060405180910390f35b60607f75fd880d39c1daf53b6547ab6cb59451fc6452d27caa90e5b6649dd8293b9eed33348585604051808573ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff168152602001848152602001806020018281038252848482818152602001925080828437600081840152601f19601f8201169050808301925050509550505050505060405180910390a17f46923992397eac56cf13058aced2a1871933622717e27b24eabc13bf9dd329c833604051808273ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16815260200191505060405180910390a16040805190810160405280600b81526020017f68656c6c6f20776f726c6400000000000000000000000000000000000000000081525090509291505056fea165627a7a72305820ff0c57dad254cfeda48c9cfb47f1353a558bccb4d1bc31da1dae69315772d29e0029`
@ -418,13 +417,12 @@ func TestEstimateGas(t *testing.T) {
/* /*
pragma solidity ^0.6.4; pragma solidity ^0.6.4;
contract GasEstimation { contract GasEstimation {
function PureRevert() public { revert(); } function PureRevert() public { revert(); }
function Revert() public { revert("revert reason");} function Revert() public { revert("revert reason");}
function OOG() public { for (uint i = 0; ; i++) {}} function OOG() public { for (uint i = 0; ; i++) {}}
function Assert() public { assert(false);} function Assert() public { assert(false);}
function Valid() public {} function Valid() public {}
} }*/
*/
const contractAbi = "[{\"inputs\":[],\"name\":\"Assert\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"},{\"inputs\":[],\"name\":\"OOG\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"},{\"inputs\":[],\"name\":\"PureRevert\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"},{\"inputs\":[],\"name\":\"Revert\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"},{\"inputs\":[],\"name\":\"Valid\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"}]" const contractAbi = "[{\"inputs\":[],\"name\":\"Assert\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"},{\"inputs\":[],\"name\":\"OOG\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"},{\"inputs\":[],\"name\":\"PureRevert\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"},{\"inputs\":[],\"name\":\"Revert\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"},{\"inputs\":[],\"name\":\"Valid\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"}]"
const contractBin = "0x60806040523480156100115760006000fd5b50610017565b61016e806100266000396000f3fe60806040523480156100115760006000fd5b506004361061005c5760003560e01c806350f6fe3414610062578063aa8b1d301461006c578063b9b046f914610076578063d8b9839114610080578063e09fface1461008a5761005c565b60006000fd5b61006a610094565b005b6100746100ad565b005b61007e6100b5565b005b6100886100c2565b005b610092610135565b005b6000600090505b5b808060010191505061009b565b505b565b60006000fd5b565b600015156100bf57fe5b5b565b6040517f08c379a000000000000000000000000000000000000000000000000000000000815260040180806020018281038252600d8152602001807f72657665727420726561736f6e0000000000000000000000000000000000000081526020015060200191505060405180910390fd5b565b5b56fea2646970667358221220345bbcbb1a5ecf22b53a78eaebf95f8ee0eceff6d10d4b9643495084d2ec934a64736f6c63430006040033" const contractBin = "0x60806040523480156100115760006000fd5b50610017565b61016e806100266000396000f3fe60806040523480156100115760006000fd5b506004361061005c5760003560e01c806350f6fe3414610062578063aa8b1d301461006c578063b9b046f914610076578063d8b9839114610080578063e09fface1461008a5761005c565b60006000fd5b61006a610094565b005b6100746100ad565b005b61007e6100b5565b005b6100886100c2565b005b610092610135565b005b6000600090505b5b808060010191505061009b565b505b565b60006000fd5b565b600015156100bf57fe5b5b565b6040517f08c379a000000000000000000000000000000000000000000000000000000000815260040180806020018281038252600d8152602001807f72657665727420726561736f6e0000000000000000000000000000000000000081526020015060200191505060405180910390fd5b565b5b56fea2646970667358221220345bbcbb1a5ecf22b53a78eaebf95f8ee0eceff6d10d4b9643495084d2ec934a64736f6c63430006040033"
@ -657,7 +655,8 @@ func TestHeaderByNumber(t *testing.T) {
} }
if latestBlockHeader == nil { if latestBlockHeader == nil {
t.Errorf("received a nil block header") t.Errorf("received a nil block header")
} else if latestBlockHeader.Number.Uint64() != uint64(0) { }
if latestBlockHeader.Number.Uint64() != uint64(0) {
t.Errorf("expected block header number 0, instead got %v", latestBlockHeader.Number.Uint64()) t.Errorf("expected block header number 0, instead got %v", latestBlockHeader.Number.Uint64())
} }
@ -996,8 +995,7 @@ func TestCodeAt(t *testing.T) {
} }
// When receive("X") is called with sender 0x00... and value 1, it produces this tx receipt: // When receive("X") is called with sender 0x00... and value 1, it produces this tx receipt:
// // receipt{status=1 cgas=23949 bloom=00000000004000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000040200000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 logs=[log: b6818c8064f645cd82d99b59a1a267d6d61117ef [75fd880d39c1daf53b6547ab6cb59451fc6452d27caa90e5b6649dd8293b9eed] 000000000000000000000000376c47978271565f56deb45495afa69e59c16ab200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000060000000000000000000000000000000000000000000000000000000000000000158 9ae378b6d4409eada347a5dc0c180f186cb62dc68fcc0f043425eb917335aa28 0 95d429d309bb9d753954195fe2d69bd140b4ae731b9b5b605c34323de162cf00 0]}
// receipt{status=1 cgas=23949 bloom=00000000004000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000040200000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 logs=[log: b6818c8064f645cd82d99b59a1a267d6d61117ef [75fd880d39c1daf53b6547ab6cb59451fc6452d27caa90e5b6649dd8293b9eed] 000000000000000000000000376c47978271565f56deb45495afa69e59c16ab200000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000060000000000000000000000000000000000000000000000000000000000000000158 9ae378b6d4409eada347a5dc0c180f186cb62dc68fcc0f043425eb917335aa28 0 95d429d309bb9d753954195fe2d69bd140b4ae731b9b5b605c34323de162cf00 0]}
func TestPendingAndCallContract(t *testing.T) { func TestPendingAndCallContract(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey) testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr) sim := simTestBackend(testAddr)
@ -1060,27 +1058,27 @@ func TestPendingAndCallContract(t *testing.T) {
// This test is based on the following contract: // This test is based on the following contract:
/* /*
contract Reverter { contract Reverter {
function revertString() public pure{ function revertString() public pure{
require(false, "some error"); require(false, "some error");
} }
function revertNoString() public pure { function revertNoString() public pure {
require(false, ""); require(false, "");
} }
function revertASM() public pure { function revertASM() public pure {
assembly { assembly {
revert(0x0, 0x0) revert(0x0, 0x0)
} }
} }
function noRevert() public pure { function noRevert() public pure {
assembly { assembly {
// Assembles something that looks like require(false, "some error") but is not reverted // Assembles something that looks like require(false, "some error") but is not reverted
mstore(0x0, 0x08c379a000000000000000000000000000000000000000000000000000000000) mstore(0x0, 0x08c379a000000000000000000000000000000000000000000000000000000000)
mstore(0x4, 0x0000000000000000000000000000000000000000000000000000000000000020) mstore(0x4, 0x0000000000000000000000000000000000000000000000000000000000000020)
mstore(0x24, 0x000000000000000000000000000000000000000000000000000000000000000a) mstore(0x24, 0x000000000000000000000000000000000000000000000000000000000000000a)
mstore(0x44, 0x736f6d65206572726f7200000000000000000000000000000000000000000000) mstore(0x44, 0x736f6d65206572726f7200000000000000000000000000000000000000000000)
return(0x0, 0x64) return(0x0, 0x64)
} }
} }
}*/ }*/
func TestCallContractRevert(t *testing.T) { func TestCallContractRevert(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey) testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
@ -1189,7 +1187,7 @@ func TestFork(t *testing.T) {
sim.Commit() sim.Commit()
} }
// 3. // 3.
if sim.blockchain.CurrentBlock().Number.Uint64() != uint64(n) { if sim.blockchain.CurrentBlock().NumberU64() != uint64(n) {
t.Error("wrong chain length") t.Error("wrong chain length")
} }
// 4. // 4.
@ -1199,7 +1197,7 @@ func TestFork(t *testing.T) {
sim.Commit() sim.Commit()
} }
// 6. // 6.
if sim.blockchain.CurrentBlock().Number.Uint64() != uint64(n+1) { if sim.blockchain.CurrentBlock().NumberU64() != uint64(n+1) {
t.Error("wrong chain length") t.Error("wrong chain length")
} }
} }
@ -1207,11 +1205,11 @@ func TestFork(t *testing.T) {
/* /*
Example contract to test event emission: Example contract to test event emission:
pragma solidity >=0.7.0 <0.9.0; pragma solidity >=0.7.0 <0.9.0;
contract Callable { contract Callable {
event Called(); event Called();
function Call() public { emit Called(); } function Call() public { emit Called(); }
} }
*/ */
const callableAbi = "[{\"anonymous\":false,\"inputs\":[],\"name\":\"Called\",\"type\":\"event\"},{\"inputs\":[],\"name\":\"Call\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"}]" const callableAbi = "[{\"anonymous\":false,\"inputs\":[],\"name\":\"Called\",\"type\":\"event\"},{\"inputs\":[],\"name\":\"Call\",\"outputs\":[],\"stateMutability\":\"nonpayable\",\"type\":\"function\"}]"
@ -1229,7 +1227,7 @@ const callableBin = "6080604052348015600f57600080fd5b5060998061001e6000396000f3f
// 7. Mine two blocks to trigger a reorg. // 7. Mine two blocks to trigger a reorg.
// 8. Check that the event was removed. // 8. Check that the event was removed.
// 9. Re-send the transaction and mine a block. // 9. Re-send the transaction and mine a block.
// 10. Check that the event was reborn. // 10. Check that the event was reborn.
func TestForkLogsReborn(t *testing.T) { func TestForkLogsReborn(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey) testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr) sim := simTestBackend(testAddr)
@ -1338,62 +1336,3 @@ func TestForkResendTx(t *testing.T) {
t.Errorf("TX included in wrong block: %d", h) t.Errorf("TX included in wrong block: %d", h)
} }
} }
func TestCommitReturnValue(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
defer sim.Close()
startBlockHeight := sim.blockchain.CurrentBlock().Number.Uint64()
// Test if Commit returns the correct block hash
h1 := sim.Commit()
if h1 != sim.blockchain.CurrentBlock().Hash() {
t.Error("Commit did not return the hash of the last block.")
}
// Create a block in the original chain (containing a transaction to force different block hashes)
head, _ := sim.HeaderByNumber(context.Background(), nil) // Should be child's, good enough
gasPrice := new(big.Int).Add(head.BaseFee, big.NewInt(1))
_tx := types.NewTransaction(0, testAddr, big.NewInt(1000), params.TxGas, gasPrice, nil)
tx, _ := types.SignTx(_tx, types.HomesteadSigner{}, testKey)
sim.SendTransaction(context.Background(), tx)
h2 := sim.Commit()
// Create another block in the original chain
sim.Commit()
// Fork at the first bock
if err := sim.Fork(context.Background(), h1); err != nil {
t.Errorf("forking: %v", err)
}
// Test if Commit returns the correct block hash after the reorg
h2fork := sim.Commit()
if h2 == h2fork {
t.Error("The block in the fork and the original block are the same block!")
}
if sim.blockchain.GetHeader(h2fork, startBlockHeight+2) == nil {
t.Error("Could not retrieve the just created block (side-chain)")
}
}
// TestAdjustTimeAfterFork ensures that after a fork, AdjustTime uses the pending fork
// block's parent rather than the canonical head's parent.
func TestAdjustTimeAfterFork(t *testing.T) {
testAddr := crypto.PubkeyToAddress(testKey.PublicKey)
sim := simTestBackend(testAddr)
defer sim.Close()
sim.Commit() // h1
h1 := sim.blockchain.CurrentHeader().Hash()
sim.Commit() // h2
sim.Fork(context.Background(), h1)
sim.AdjustTime(1 * time.Second)
sim.Commit()
head := sim.blockchain.CurrentHeader()
if head.Number == common.Big2 && head.ParentHash != h1 {
t.Errorf("failed to build block on fork")
}
}

View File

@ -32,13 +32,6 @@ import (
"github.com/ethereum/go-ethereum/event" "github.com/ethereum/go-ethereum/event"
) )
const basefeeWiggleMultiplier = 2
var (
errNoEventSignature = errors.New("no event signature")
errEventSignatureMismatch = errors.New("event signature mismatch")
)
// SignerFn is a signer function callback when a contract requires a method to // SignerFn is a signer function callback when a contract requires a method to
// sign the transaction before submission. // sign the transaction before submission.
type SignerFn func(common.Address, *types.Transaction) (*types.Transaction, error) type SignerFn func(common.Address, *types.Transaction) (*types.Transaction, error)
@ -178,10 +171,7 @@ func (c *BoundContract) Call(opts *CallOpts, results *[]interface{}, method stri
return ErrNoPendingState return ErrNoPendingState
} }
output, err = pb.PendingCallContract(ctx, msg) output, err = pb.PendingCallContract(ctx, msg)
if err != nil { if err == nil && len(output) == 0 {
return err
}
if len(output) == 0 {
// Make sure we have a contract to operate on, and bail out otherwise. // Make sure we have a contract to operate on, and bail out otherwise.
if code, err = pb.PendingCodeAt(ctx, c.address); err != nil { if code, err = pb.PendingCodeAt(ctx, c.address); err != nil {
return err return err
@ -261,7 +251,7 @@ func (c *BoundContract) createDynamicTx(opts *TransactOpts, contract *common.Add
if gasFeeCap == nil { if gasFeeCap == nil {
gasFeeCap = new(big.Int).Add( gasFeeCap = new(big.Int).Add(
gasTipCap, gasTipCap,
new(big.Int).Mul(head.BaseFee, big.NewInt(basefeeWiggleMultiplier)), new(big.Int).Mul(head.BaseFee, big.NewInt(2)),
) )
} }
if gasFeeCap.Cmp(gasTipCap) < 0 { if gasFeeCap.Cmp(gasTipCap) < 0 {
@ -378,8 +368,6 @@ func (c *BoundContract) transact(opts *TransactOpts, contract *common.Address, i
) )
if opts.GasPrice != nil { if opts.GasPrice != nil {
rawTx, err = c.createLegacyTx(opts, contract, input) rawTx, err = c.createLegacyTx(opts, contract, input)
} else if opts.GasFeeCap != nil && opts.GasTipCap != nil {
rawTx, err = c.createDynamicTx(opts, contract, input, nil)
} else { } else {
// Only query for basefee if gasPrice not specified // Only query for basefee if gasPrice not specified
if head, errHead := c.transactor.HeaderByNumber(ensureContext(opts.Context), nil); errHead != nil { if head, errHead := c.transactor.HeaderByNumber(ensureContext(opts.Context), nil); errHead != nil {
@ -493,12 +481,8 @@ func (c *BoundContract) WatchLogs(opts *WatchOpts, name string, query ...[]inter
// UnpackLog unpacks a retrieved log into the provided output structure. // UnpackLog unpacks a retrieved log into the provided output structure.
func (c *BoundContract) UnpackLog(out interface{}, event string, log types.Log) error { func (c *BoundContract) UnpackLog(out interface{}, event string, log types.Log) error {
// Anonymous events are not supported.
if len(log.Topics) == 0 {
return errNoEventSignature
}
if log.Topics[0] != c.abi.Events[event].ID { if log.Topics[0] != c.abi.Events[event].ID {
return errEventSignatureMismatch return fmt.Errorf("event signature mismatch")
} }
if len(log.Data) > 0 { if len(log.Data) > 0 {
if err := c.abi.UnpackIntoInterface(out, event, log.Data); err != nil { if err := c.abi.UnpackIntoInterface(out, event, log.Data); err != nil {
@ -516,12 +500,8 @@ func (c *BoundContract) UnpackLog(out interface{}, event string, log types.Log)
// UnpackLogIntoMap unpacks a retrieved log into the provided map. // UnpackLogIntoMap unpacks a retrieved log into the provided map.
func (c *BoundContract) UnpackLogIntoMap(out map[string]interface{}, event string, log types.Log) error { func (c *BoundContract) UnpackLogIntoMap(out map[string]interface{}, event string, log types.Log) error {
// Anonymous events are not supported.
if len(log.Topics) == 0 {
return errNoEventSignature
}
if log.Topics[0] != c.abi.Events[event].ID { if log.Topics[0] != c.abi.Events[event].ID {
return errEventSignatureMismatch return fmt.Errorf("event signature mismatch")
} }
if len(log.Data) > 0 { if len(log.Data) > 0 {
if err := c.abi.UnpackIntoMap(out, event, log.Data); err != nil { if err := c.abi.UnpackIntoMap(out, event, log.Data); err != nil {

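A minimal usage sketch of UnpackLog as touched above; the binding bc, the Transfer event name and the transferEvent struct are hypothetical and not part of this diff:

package example

import (
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// transferEvent mirrors a standard ERC-20 Transfer(address,address,uint256).
type transferEvent struct {
	From  common.Address
	To    common.Address
	Value *big.Int
}

func decodeTransfer(bc *bind.BoundContract, log types.Log) (*transferEvent, error) {
	var ev transferEvent
	// On the side of this diff that adds the sentinel errors, the call fails
	// with "no event signature" for anonymous logs and with
	// "event signature mismatch" when topics[0] is not the Transfer event ID.
	if err := bc.UnpackLog(&ev, "Transfer", log); err != nil {
		return nil, err
	}
	return &ev, nil
}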
View File

@ -18,7 +18,6 @@ package bind_test
import ( import (
"context" "context"
"errors"
"math/big" "math/big"
"reflect" "reflect"
"strings" "strings"
@ -76,50 +75,34 @@ func (mt *mockTransactor) SendTransaction(ctx context.Context, tx *types.Transac
} }
type mockCaller struct { type mockCaller struct {
codeAtBlockNumber *big.Int codeAtBlockNumber *big.Int
callContractBlockNumber *big.Int callContractBlockNumber *big.Int
callContractBytes []byte pendingCodeAtCalled bool
callContractErr error pendingCallContractCalled bool
codeAtBytes []byte
codeAtErr error
} }
func (mc *mockCaller) CodeAt(ctx context.Context, contract common.Address, blockNumber *big.Int) ([]byte, error) { func (mc *mockCaller) CodeAt(ctx context.Context, contract common.Address, blockNumber *big.Int) ([]byte, error) {
mc.codeAtBlockNumber = blockNumber mc.codeAtBlockNumber = blockNumber
return mc.codeAtBytes, mc.codeAtErr return []byte{1, 2, 3}, nil
} }
func (mc *mockCaller) CallContract(ctx context.Context, call ethereum.CallMsg, blockNumber *big.Int) ([]byte, error) { func (mc *mockCaller) CallContract(ctx context.Context, call ethereum.CallMsg, blockNumber *big.Int) ([]byte, error) {
mc.callContractBlockNumber = blockNumber mc.callContractBlockNumber = blockNumber
return mc.callContractBytes, mc.callContractErr return nil, nil
} }
type mockPendingCaller struct { func (mc *mockCaller) PendingCodeAt(ctx context.Context, contract common.Address) ([]byte, error) {
*mockCaller
pendingCodeAtBytes []byte
pendingCodeAtErr error
pendingCodeAtCalled bool
pendingCallContractCalled bool
pendingCallContractBytes []byte
pendingCallContractErr error
}
func (mc *mockPendingCaller) PendingCodeAt(ctx context.Context, contract common.Address) ([]byte, error) {
mc.pendingCodeAtCalled = true mc.pendingCodeAtCalled = true
return mc.pendingCodeAtBytes, mc.pendingCodeAtErr return nil, nil
} }
func (mc *mockPendingCaller) PendingCallContract(ctx context.Context, call ethereum.CallMsg) ([]byte, error) { func (mc *mockCaller) PendingCallContract(ctx context.Context, call ethereum.CallMsg) ([]byte, error) {
mc.pendingCallContractCalled = true mc.pendingCallContractCalled = true
return mc.pendingCallContractBytes, mc.pendingCallContractErr return nil, nil
} }
func TestPassingBlockNumber(t *testing.T) { func TestPassingBlockNumber(t *testing.T) {
mc := &mockPendingCaller{
mockCaller: &mockCaller{ mc := &mockCaller{}
codeAtBytes: []byte{1, 2, 3},
},
}
bc := bind.NewBoundContract(common.HexToAddress("0x0"), abi.ABI{ bc := bind.NewBoundContract(common.HexToAddress("0x0"), abi.ABI{
Methods: map[string]abi.Method{ Methods: map[string]abi.Method{
@ -186,23 +169,6 @@ func TestUnpackIndexedStringTyLogIntoMap(t *testing.T) {
unpackAndCheck(t, bc, expectedReceivedMap, mockLog) unpackAndCheck(t, bc, expectedReceivedMap, mockLog)
} }
func TestUnpackAnonymousLogIntoMap(t *testing.T) {
mockLog := newMockLog(nil, common.HexToHash("0x0"))
abiString := `[{"anonymous":false,"inputs":[{"indexed":false,"name":"amount","type":"uint256"}],"name":"received","type":"event"}]`
parsedAbi, _ := abi.JSON(strings.NewReader(abiString))
bc := bind.NewBoundContract(common.HexToAddress("0x0"), parsedAbi, nil, nil, nil)
var received map[string]interface{}
err := bc.UnpackLogIntoMap(received, "received", mockLog)
if err == nil {
t.Error("unpacking anonymous event is not supported")
}
if err.Error() != "no event signature" {
t.Errorf("expected error 'no event signature', got '%s'", err)
}
}
func TestUnpackIndexedSliceTyLogIntoMap(t *testing.T) { func TestUnpackIndexedSliceTyLogIntoMap(t *testing.T) {
sliceBytes, err := rlp.EncodeToBytes([]string{"name1", "name2", "name3", "name4"}) sliceBytes, err := rlp.EncodeToBytes([]string{"name1", "name2", "name3", "name4"})
if err != nil { if err != nil {
@ -375,140 +341,3 @@ func newMockLog(topics []common.Hash, txHash common.Hash) types.Log {
Removed: false, Removed: false,
} }
} }
func TestCall(t *testing.T) {
var method, methodWithArg = "something", "somethingArrrrg"
tests := []struct {
name, method string
opts *bind.CallOpts
mc bind.ContractCaller
results *[]interface{}
wantErr bool
wantErrExact error
}{{
name: "ok not pending",
mc: &mockCaller{
codeAtBytes: []byte{0},
},
method: method,
}, {
name: "ok pending",
mc: &mockPendingCaller{
pendingCodeAtBytes: []byte{0},
},
opts: &bind.CallOpts{
Pending: true,
},
method: method,
}, {
name: "pack error, no method",
mc: new(mockCaller),
method: "else",
wantErr: true,
}, {
name: "interface error, pending but not a PendingContractCaller",
mc: new(mockCaller),
opts: &bind.CallOpts{
Pending: true,
},
method: method,
wantErrExact: bind.ErrNoPendingState,
}, {
name: "pending call canceled",
mc: &mockPendingCaller{
pendingCallContractErr: context.DeadlineExceeded,
},
opts: &bind.CallOpts{
Pending: true,
},
method: method,
wantErrExact: context.DeadlineExceeded,
}, {
name: "pending code at error",
mc: &mockPendingCaller{
pendingCodeAtErr: errors.New(""),
},
opts: &bind.CallOpts{
Pending: true,
},
method: method,
wantErr: true,
}, {
name: "no pending code at",
mc: new(mockPendingCaller),
opts: &bind.CallOpts{
Pending: true,
},
method: method,
wantErrExact: bind.ErrNoCode,
}, {
name: "call contract error",
mc: &mockCaller{
callContractErr: context.DeadlineExceeded,
},
method: method,
wantErrExact: context.DeadlineExceeded,
}, {
name: "code at error",
mc: &mockCaller{
codeAtErr: errors.New(""),
},
method: method,
wantErr: true,
}, {
name: "no code at",
mc: new(mockCaller),
method: method,
wantErrExact: bind.ErrNoCode,
}, {
name: "unpack error missing arg",
mc: &mockCaller{
codeAtBytes: []byte{0},
},
method: methodWithArg,
wantErr: true,
}, {
name: "interface unpack error",
mc: &mockCaller{
codeAtBytes: []byte{0},
},
method: method,
results: &[]interface{}{0},
wantErr: true,
}}
for _, test := range tests {
bc := bind.NewBoundContract(common.HexToAddress("0x0"), abi.ABI{
Methods: map[string]abi.Method{
method: {
Name: method,
Outputs: abi.Arguments{},
},
methodWithArg: {
Name: methodWithArg,
Outputs: abi.Arguments{abi.Argument{}},
},
},
}, test.mc, nil, nil)
err := bc.Call(test.opts, test.results, test.method)
if test.wantErr || test.wantErrExact != nil {
if err == nil {
t.Fatalf("%q expected error", test.name)
}
if test.wantErrExact != nil && !errors.Is(err, test.wantErrExact) {
t.Fatalf("%q expected error %q but got %q", test.name, test.wantErrExact, err)
}
continue
}
if err != nil {
t.Fatalf("%q unexpected error: %v", test.name, err)
}
}
}
// TestCrashers contains some strings which previously caused the abi codec to crash.
func TestCrashers(t *testing.T) {
abi.JSON(strings.NewReader(`[{"inputs":[{"type":"tuple[]","components":[{"type":"bool","name":"_1"}]}]}]`))
abi.JSON(strings.NewReader(`[{"inputs":[{"type":"tuple[]","components":[{"type":"bool","name":"&"}]}]}]`))
abi.JSON(strings.NewReader(`[{"inputs":[{"type":"tuple[]","components":[{"type":"bool","name":"----"}]}]}]`))
abi.JSON(strings.NewReader(`[{"inputs":[{"type":"tuple[]","components":[{"type":"bool","name":"foo.Bar"}]}]}]`))
}

View File

@ -22,6 +22,7 @@ package bind
import ( import (
"bytes" "bytes"
"errors"
"fmt" "fmt"
"go/format" "go/format"
"regexp" "regexp"
@ -38,45 +39,10 @@ type Lang int
const ( const (
LangGo Lang = iota LangGo Lang = iota
LangJava
LangObjC
) )
func isKeyWord(arg string) bool {
switch arg {
case "break":
case "case":
case "chan":
case "const":
case "continue":
case "default":
case "defer":
case "else":
case "fallthrough":
case "for":
case "func":
case "go":
case "goto":
case "if":
case "import":
case "interface":
case "iota":
case "map":
case "make":
case "new":
case "package":
case "range":
case "return":
case "select":
case "struct":
case "switch":
case "type":
case "var":
default:
return false
}
return true
}
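As a small illustration of the fallback this keyword check feeds in Bind below (the names and values are hypothetical, not from the generator):

package main

import "fmt"

func main() {
	// Inputs whose names are empty or collide with reserved Go words cannot be
	// used verbatim as Go parameter names, so they are renamed positionally,
	// mirroring the isKeyWord fallback used in Bind.
	reserved := map[string]bool{"type": true, "range": true, "func": true, "new": true}

	inputs := []string{"amount", "type", ""}
	for j, name := range inputs {
		if name == "" || reserved[name] {
			name = fmt.Sprintf("arg%d", j)
		}
		fmt.Println(name) // amount, arg1, arg2
	}
}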
// Bind generates a Go wrapper around a contract ABI. This wrapper isn't meant // Bind generates a Go wrapper around a contract ABI. This wrapper isn't meant
// to be used as is in client code, but rather as an intermediate struct which // to be used as is in client code, but rather as an intermediate struct which
// enforces compile time type safety and naming convention opposed to having to // enforces compile time type safety and naming convention opposed to having to
@ -133,7 +99,6 @@ func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]
// Normalize the method for capital cases and non-anonymous inputs/outputs // Normalize the method for capital cases and non-anonymous inputs/outputs
normalized := original normalized := original
normalizedName := methodNormalizer[lang](alias(aliases, original.Name)) normalizedName := methodNormalizer[lang](alias(aliases, original.Name))
// Ensure there is no duplicated identifier // Ensure there is no duplicated identifier
var identifiers = callIdentifiers var identifiers = callIdentifiers
if !original.IsConstant() { if !original.IsConstant() {
@ -143,12 +108,11 @@ func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]
return "", fmt.Errorf("duplicated identifier \"%s\"(normalized \"%s\"), use --alias for renaming", original.Name, normalizedName) return "", fmt.Errorf("duplicated identifier \"%s\"(normalized \"%s\"), use --alias for renaming", original.Name, normalizedName)
} }
identifiers[normalizedName] = true identifiers[normalizedName] = true
normalized.Name = normalizedName normalized.Name = normalizedName
normalized.Inputs = make([]abi.Argument, len(original.Inputs)) normalized.Inputs = make([]abi.Argument, len(original.Inputs))
copy(normalized.Inputs, original.Inputs) copy(normalized.Inputs, original.Inputs)
for j, input := range normalized.Inputs { for j, input := range normalized.Inputs {
if input.Name == "" || isKeyWord(input.Name) { if input.Name == "" {
normalized.Inputs[j].Name = fmt.Sprintf("arg%d", j) normalized.Inputs[j].Name = fmt.Sprintf("arg%d", j)
} }
if hasStruct(input.Type) { if hasStruct(input.Type) {
@ -188,22 +152,12 @@ func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]
eventIdentifiers[normalizedName] = true eventIdentifiers[normalizedName] = true
normalized.Name = normalizedName normalized.Name = normalizedName
used := make(map[string]bool)
normalized.Inputs = make([]abi.Argument, len(original.Inputs)) normalized.Inputs = make([]abi.Argument, len(original.Inputs))
copy(normalized.Inputs, original.Inputs) copy(normalized.Inputs, original.Inputs)
for j, input := range normalized.Inputs { for j, input := range normalized.Inputs {
if input.Name == "" || isKeyWord(input.Name) { if input.Name == "" {
normalized.Inputs[j].Name = fmt.Sprintf("arg%d", j) normalized.Inputs[j].Name = fmt.Sprintf("arg%d", j)
} }
// Event is a bit special, we need to define event struct in binding,
// ensure there is no camel-case-style name conflict.
for index := 0; ; index++ {
if !used[capitalise(normalized.Inputs[j].Name)] {
used[capitalise(normalized.Inputs[j].Name)] = true
break
}
normalized.Inputs[j].Name = fmt.Sprintf("%s%d", normalized.Inputs[j].Name, index)
}
if hasStruct(input.Type) { if hasStruct(input.Type) {
bindStructType[lang](input.Type, structs) bindStructType[lang](input.Type, structs)
} }
@ -218,9 +172,14 @@ func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]
if evmABI.HasReceive() { if evmABI.HasReceive() {
receive = &tmplMethod{Original: evmABI.Receive} receive = &tmplMethod{Original: evmABI.Receive}
} }
// There is no easy way to pass arbitrary java objects to the Go side.
if len(structs) > 0 && lang == LangJava {
return "", errors.New("java binding for tuple arguments is not supported yet")
}
contracts[types[i]] = &tmplContract{ contracts[types[i]] = &tmplContract{
Type: capitalise(types[i]), Type: capitalise(types[i]),
InputABI: strings.ReplaceAll(strippedABI, "\"", "\\\""), InputABI: strings.Replace(strippedABI, "\"", "\\\"", -1),
InputBin: strings.TrimPrefix(strings.TrimSpace(bytecodes[i]), "0x"), InputBin: strings.TrimPrefix(strings.TrimSpace(bytecodes[i]), "0x"),
Constructor: evmABI.Constructor, Constructor: evmABI.Constructor,
Calls: calls, Calls: calls,
@ -290,7 +249,8 @@ func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]
// bindType is a set of type binders that convert Solidity types to some supported // bindType is a set of type binders that convert Solidity types to some supported
// programming language types. // programming language types.
var bindType = map[Lang]func(kind abi.Type, structs map[string]*tmplStruct) string{ var bindType = map[Lang]func(kind abi.Type, structs map[string]*tmplStruct) string{
LangGo: bindTypeGo, LangGo: bindTypeGo,
LangJava: bindTypeJava,
} }
// bindBasicTypeGo converts basic solidity types(except array, slice and tuple) to Go ones. // bindBasicTypeGo converts basic solidity types(except array, slice and tuple) to Go ones.
@ -333,10 +293,86 @@ func bindTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
} }
} }
// bindBasicTypeJava converts basic solidity types(except array, slice and tuple) to Java ones.
func bindBasicTypeJava(kind abi.Type) string {
switch kind.T {
case abi.AddressTy:
return "Address"
case abi.IntTy, abi.UintTy:
// Note that uint and int (without digits) are also matched,
// these are size 256, and will translate to BigInt (the default).
parts := regexp.MustCompile(`(u)?int([0-9]*)`).FindStringSubmatch(kind.String())
if len(parts) != 3 {
return kind.String()
}
// All unsigned integers should be translated to BigInt since gomobile doesn't
// support them.
if parts[1] == "u" {
return "BigInt"
}
namedSize := map[string]string{
"8": "byte",
"16": "short",
"32": "int",
"64": "long",
}[parts[2]]
// default to BigInt
if namedSize == "" {
namedSize = "BigInt"
}
return namedSize
case abi.FixedBytesTy, abi.BytesTy:
return "byte[]"
case abi.BoolTy:
return "boolean"
case abi.StringTy:
return "String"
case abi.FunctionTy:
return "byte[24]"
default:
return kind.String()
}
}
// pluralizeJavaType explicitly converts multidimensional types to predefined
// types on the Go side.
func pluralizeJavaType(typ string) string {
switch typ {
case "boolean":
return "Bools"
case "String":
return "Strings"
case "Address":
return "Addresses"
case "byte[]":
return "Binaries"
case "BigInt":
return "BigInts"
}
return typ + "[]"
}
// bindTypeJava converts a Solidity type to a Java one. Since there is no clear mapping
// from all Solidity types to Java ones (e.g. uint17), those that cannot be exactly
// mapped will use an upscaled type (e.g. BigDecimal).
func bindTypeJava(kind abi.Type, structs map[string]*tmplStruct) string {
switch kind.T {
case abi.TupleTy:
return structs[kind.TupleRawName+kind.String()].Name
case abi.ArrayTy, abi.SliceTy:
return pluralizeJavaType(bindTypeJava(*kind.Elem, structs))
default:
return bindBasicTypeJava(kind)
}
}
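As illustrative cases of the mapping above (not taken from the diff): int8 maps to byte, int64 to long, every unsigned integer size (uint8 through uint256) to BigInt, bool to boolean, bytes and bytesN to byte[], and an array such as bool[] is pluralized to Bools.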
// bindTopicType is a set of type binders that convert Solidity types to some // bindTopicType is a set of type binders that convert Solidity types to some
// supported programming language topic types. // supported programming language topic types.
var bindTopicType = map[Lang]func(kind abi.Type, structs map[string]*tmplStruct) string{ var bindTopicType = map[Lang]func(kind abi.Type, structs map[string]*tmplStruct) string{
LangGo: bindTopicTypeGo, LangGo: bindTopicTypeGo,
LangJava: bindTopicTypeJava,
} }
// bindTopicTypeGo converts a Solidity topic type to a Go one. It is almost the same // bindTopicTypeGo converts a Solidity topic type to a Go one. It is almost the same
@ -356,10 +392,28 @@ func bindTopicTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
return bound return bound
} }
// bindTopicTypeJava converts a Solidity topic type to a Java one. It is almost the same
// functionality as for simple types, but dynamic types get converted to hashes.
func bindTopicTypeJava(kind abi.Type, structs map[string]*tmplStruct) string {
bound := bindTypeJava(kind, structs)
// todo(rjl493456442) according to the Solidity documentation, indexed event
// parameters that are not value types, i.e. arrays and structs, are not
// stored directly; instead a keccak256-hash of an encoding is stored.
//
// We only convert strings and bytes to Hash here; arrays (both fixed-size
// and dynamic-size) and structs still need to be handled.
if bound == "String" || bound == "byte[]" {
bound = "Hash"
}
return bound
}
// bindStructType is a set of type binders that convert Solidity tuple types to some supported // bindStructType is a set of type binders that convert Solidity tuple types to some supported
// programming language struct definition. // programming language struct definition.
var bindStructType = map[Lang]func(kind abi.Type, structs map[string]*tmplStruct) string{ var bindStructType = map[Lang]func(kind abi.Type, structs map[string]*tmplStruct) string{
LangGo: bindStructTypeGo, LangGo: bindStructTypeGo,
LangJava: bindStructTypeJava,
} }
// bindStructTypeGo converts a Solidity tuple type to a Go one and records the mapping // bindStructTypeGo converts a Solidity tuple type to a Go one and records the mapping
@ -378,22 +432,15 @@ func bindStructTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
if s, exist := structs[id]; exist { if s, exist := structs[id]; exist {
return s.Name return s.Name
} }
var ( var fields []*tmplField
names = make(map[string]bool)
fields []*tmplField
)
for i, elem := range kind.TupleElems { for i, elem := range kind.TupleElems {
name := capitalise(kind.TupleRawNames[i]) field := bindStructTypeGo(*elem, structs)
name = abi.ResolveNameConflict(name, func(s string) bool { return names[s] }) fields = append(fields, &tmplField{Type: field, Name: capitalise(kind.TupleRawNames[i]), SolKind: *elem})
names[name] = true
fields = append(fields, &tmplField{Type: bindStructTypeGo(*elem, structs), Name: name, SolKind: *elem})
} }
name := kind.TupleRawName name := kind.TupleRawName
if name == "" { if name == "" {
name = fmt.Sprintf("Struct%d", len(structs)) name = fmt.Sprintf("Struct%d", len(structs))
} }
name = capitalise(name)
structs[id] = &tmplStruct{ structs[id] = &tmplStruct{
Name: name, Name: name,
Fields: fields, Fields: fields,
@ -408,10 +455,74 @@ func bindStructTypeGo(kind abi.Type, structs map[string]*tmplStruct) string {
} }
} }
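A concrete (hypothetical) case for the conflict handling in bindStructTypeGo above: a tuple whose components are named value and Value would otherwise both capitalise to Value; with the names map and abi.ResolveNameConflict the second field receives a numeric suffix (e.g. Value0), so the generated Go struct still compiles.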
// bindStructTypeJava converts a Solidity tuple type to a Java one and records the mapping
// in the given map.
// Notably, this function will resolve and record nested structs recursively.
func bindStructTypeJava(kind abi.Type, structs map[string]*tmplStruct) string {
switch kind.T {
case abi.TupleTy:
// We compose a raw struct name and a canonical parameter expression
// together here. Before solidity v0.5.11, kind.TupleRawName is empty,
// so we use the canonical parameter expression to distinguish different
// struct definitions. For backward compatibility the two are concatenated,
// so that a non-empty kind.TupleRawName still yields a unique id.
id := kind.TupleRawName + kind.String()
if s, exist := structs[id]; exist {
return s.Name
}
var fields []*tmplField
for i, elem := range kind.TupleElems {
field := bindStructTypeJava(*elem, structs)
fields = append(fields, &tmplField{Type: field, Name: decapitalise(kind.TupleRawNames[i]), SolKind: *elem})
}
name := kind.TupleRawName
if name == "" {
name = fmt.Sprintf("Class%d", len(structs))
}
structs[id] = &tmplStruct{
Name: name,
Fields: fields,
}
return name
case abi.ArrayTy, abi.SliceTy:
return pluralizeJavaType(bindStructTypeJava(*kind.Elem, structs))
default:
return bindBasicTypeJava(kind)
}
}
// namedType is a set of functions that transform language specific types to // namedType is a set of functions that transform language specific types to
// named versions that may be used inside method names. // named versions that may be used inside method names.
var namedType = map[Lang]func(string, abi.Type) string{ var namedType = map[Lang]func(string, abi.Type) string{
LangGo: func(string, abi.Type) string { panic("this shouldn't be needed") }, LangGo: func(string, abi.Type) string { panic("this shouldn't be needed") },
LangJava: namedTypeJava,
}
// namedTypeJava converts some primitive data types to named variants that can
// be used as parts of method names.
func namedTypeJava(javaKind string, solKind abi.Type) string {
switch javaKind {
case "byte[]":
return "Binary"
case "boolean":
return "Bool"
default:
parts := regexp.MustCompile(`(u)?int([0-9]*)(\[[0-9]*\])?`).FindStringSubmatch(solKind.String())
if len(parts) != 4 {
return javaKind
}
switch parts[2] {
case "8", "16", "32", "64":
if parts[3] == "" {
return capitalise(fmt.Sprintf("%sint%s", parts[1], parts[2]))
}
return capitalise(fmt.Sprintf("%sint%ss", parts[1], parts[2]))
default:
return javaKind
}
}
} }
// alias returns an alias of the given string based on the aliasing rules // alias returns an alias of the given string based on the aliasing rules
@ -426,7 +537,8 @@ func alias(aliases map[string]string, n string) string {
// methodNormalizer is a name transformer that modifies Solidity method names to // methodNormalizer is a name transformer that modifies Solidity method names to
// conform to target language naming conventions. // conform to target language naming conventions.
var methodNormalizer = map[Lang]func(string) string{ var methodNormalizer = map[Lang]func(string) string{
LangGo: abi.ToCamelCase, LangGo: abi.ToCamelCase,
LangJava: decapitalise,
} }
// capitalise makes a camel-case string which starts with an upper case character. // capitalise makes a camel-case string which starts with an upper case character.

File diff suppressed because one or more lines are too long

View File

@ -75,7 +75,8 @@ type tmplStruct struct {
// tmplSource is language to template mapping containing all the supported // tmplSource is language to template mapping containing all the supported
// programming languages the package can generate to. // programming languages the package can generate to.
var tmplSource = map[Lang]string{ var tmplSource = map[Lang]string{
LangGo: tmplSourceGo, LangGo: tmplSourceGo,
LangJava: tmplSourceJava,
} }
// tmplSourceGo is the Go source template that the generated Go contract binding // tmplSourceGo is the Go source template that the generated Go contract binding
@ -109,7 +110,6 @@ var (
_ = common.Big1 _ = common.Big1
_ = types.BloomLookup _ = types.BloomLookup
_ = event.NewSubscription _ = event.NewSubscription
_ = abi.ConvertType
) )
{{$structs := .Structs}} {{$structs := .Structs}}
@ -161,7 +161,7 @@ var (
} }
{{range $pattern, $name := .Libraries}} {{range $pattern, $name := .Libraries}}
{{decapitalise $name}}Addr, _, _, _ := Deploy{{capitalise $name}}(auth, backend) {{decapitalise $name}}Addr, _, _, _ := Deploy{{capitalise $name}}(auth, backend)
{{$contract.Type}}Bin = strings.ReplaceAll({{$contract.Type}}Bin, "__${{$pattern}}$__", {{decapitalise $name}}Addr.String()[2:]) {{$contract.Type}}Bin = strings.Replace({{$contract.Type}}Bin, "__${{$pattern}}$__", {{decapitalise $name}}Addr.String()[2:], -1)
{{end}} {{end}}
address, tx, contract, err := bind.DeployContract(auth, *parsed, common.FromHex({{.Type}}Bin), backend {{range .Constructor.Inputs}}, {{.Name}}{{end}}) address, tx, contract, err := bind.DeployContract(auth, *parsed, common.FromHex({{.Type}}Bin), backend {{range .Constructor.Inputs}}, {{.Name}}{{end}})
if err != nil { if err != nil {
@ -268,11 +268,11 @@ var (
// bind{{.Type}} binds a generic wrapper to an already deployed contract. // bind{{.Type}} binds a generic wrapper to an already deployed contract.
func bind{{.Type}}(address common.Address, caller bind.ContractCaller, transactor bind.ContractTransactor, filterer bind.ContractFilterer) (*bind.BoundContract, error) { func bind{{.Type}}(address common.Address, caller bind.ContractCaller, transactor bind.ContractTransactor, filterer bind.ContractFilterer) (*bind.BoundContract, error) {
parsed, err := {{.Type}}MetaData.GetAbi() parsed, err := abi.JSON(strings.NewReader({{.Type}}ABI))
if err != nil { if err != nil {
return nil, err return nil, err
} }
return bind.NewBoundContract(address, *parsed, caller, transactor, filterer), nil return bind.NewBoundContract(address, parsed, caller, transactor, filterer), nil
} }
// Call invokes the (constant) contract method with params as input values and // Call invokes the (constant) contract method with params as input values and
@ -569,3 +569,140 @@ var (
{{end}} {{end}}
{{end}} {{end}}
` `
// tmplSourceJava is the Java source template that the generated Java contract binding
// is based on.
const tmplSourceJava = `
// This file is an automatically generated Java binding. Do not modify as any
// change will likely be lost upon the next re-generation!
package {{.Package}};
import org.ethereum.geth.*;
import java.util.*;
{{$structs := .Structs}}
{{range $contract := .Contracts}}
{{if not .Library}}public {{end}}class {{.Type}} {
// ABI is the input ABI used to generate the binding from.
public final static String ABI = "{{.InputABI}}";
{{if $contract.FuncSigs}}
// {{.Type}}FuncSigs maps the 4-byte function signature to its string representation.
public final static Map<String, String> {{.Type}}FuncSigs;
static {
Hashtable<String, String> temp = new Hashtable<String, String>();
{{range $strsig, $binsig := .FuncSigs}}temp.put("{{$binsig}}", "{{$strsig}}");
{{end}}
{{.Type}}FuncSigs = Collections.unmodifiableMap(temp);
}
{{end}}
{{if .InputBin}}
// BYTECODE is the compiled bytecode used for deploying new contracts.
public final static String BYTECODE = "0x{{.InputBin}}";
// deploy deploys a new Ethereum contract, binding an instance of {{.Type}} to it.
public static {{.Type}} deploy(TransactOpts auth, EthereumClient client{{range .Constructor.Inputs}}, {{bindtype .Type $structs}} {{.Name}}{{end}}) throws Exception {
Interfaces args = Geth.newInterfaces({{(len .Constructor.Inputs)}});
String bytecode = BYTECODE;
{{if .Libraries}}
// "link" contract to dependent libraries by deploying them first.
{{range $pattern, $name := .Libraries}}
{{capitalise $name}} {{decapitalise $name}}Inst = {{capitalise $name}}.deploy(auth, client);
bytecode = bytecode.replace("__${{$pattern}}$__", {{decapitalise $name}}Inst.Address.getHex().substring(2));
{{end}}
{{end}}
{{range $index, $element := .Constructor.Inputs}}Interface arg{{$index}} = Geth.newInterface();arg{{$index}}.set{{namedtype (bindtype .Type $structs) .Type}}({{.Name}});args.set({{$index}},arg{{$index}});
{{end}}
return new {{.Type}}(Geth.deployContract(auth, ABI, Geth.decodeFromHex(bytecode), client, args));
}
// Internal constructor used by contract deployment.
private {{.Type}}(BoundContract deployment) {
this.Address = deployment.getAddress();
this.Deployer = deployment.getDeployer();
this.Contract = deployment;
}
{{end}}
// Ethereum address where this contract is located at.
public final Address Address;
// Ethereum transaction in which this contract was deployed (if known!).
public final Transaction Deployer;
// Contract instance bound to a blockchain address.
private final BoundContract Contract;
// Creates a new instance of {{.Type}}, bound to a specific deployed contract.
public {{.Type}}(Address address, EthereumClient client) throws Exception {
this(Geth.bindContract(address, ABI, client));
}
{{range .Calls}}
{{if gt (len .Normalized.Outputs) 1}}
// {{capitalise .Normalized.Name}}Results is the output of a call to {{.Normalized.Name}}.
public class {{capitalise .Normalized.Name}}Results {
{{range $index, $item := .Normalized.Outputs}}public {{bindtype .Type $structs}} {{if ne .Name ""}}{{.Name}}{{else}}Return{{$index}}{{end}};
{{end}}
}
{{end}}
// {{.Normalized.Name}} is a free data retrieval call binding the contract method 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{.Original.String}}
public {{if gt (len .Normalized.Outputs) 1}}{{capitalise .Normalized.Name}}Results{{else if eq (len .Normalized.Outputs) 0}}void{{else}}{{range .Normalized.Outputs}}{{bindtype .Type $structs}}{{end}}{{end}} {{.Normalized.Name}}(CallOpts opts{{range .Normalized.Inputs}}, {{bindtype .Type $structs}} {{.Name}}{{end}}) throws Exception {
Interfaces args = Geth.newInterfaces({{(len .Normalized.Inputs)}});
{{range $index, $item := .Normalized.Inputs}}Interface arg{{$index}} = Geth.newInterface();arg{{$index}}.set{{namedtype (bindtype .Type $structs) .Type}}({{.Name}});args.set({{$index}},arg{{$index}});
{{end}}
Interfaces results = Geth.newInterfaces({{(len .Normalized.Outputs)}});
{{range $index, $item := .Normalized.Outputs}}Interface result{{$index}} = Geth.newInterface(); result{{$index}}.setDefault{{namedtype (bindtype .Type $structs) .Type}}(); results.set({{$index}}, result{{$index}});
{{end}}
if (opts == null) {
opts = Geth.newCallOpts();
}
this.Contract.call(opts, results, "{{.Original.Name}}", args);
{{if gt (len .Normalized.Outputs) 1}}
{{capitalise .Normalized.Name}}Results result = new {{capitalise .Normalized.Name}}Results();
{{range $index, $item := .Normalized.Outputs}}result.{{if ne .Name ""}}{{.Name}}{{else}}Return{{$index}}{{end}} = results.get({{$index}}).get{{namedtype (bindtype .Type $structs) .Type}}();
{{end}}
return result;
{{else}}{{range .Normalized.Outputs}}return results.get(0).get{{namedtype (bindtype .Type $structs) .Type}}();{{end}}
{{end}}
}
{{end}}
{{range .Transacts}}
// {{.Normalized.Name}} is a paid mutator transaction binding the contract method 0x{{printf "%x" .Original.ID}}.
//
// Solidity: {{.Original.String}}
public Transaction {{.Normalized.Name}}(TransactOpts opts{{range .Normalized.Inputs}}, {{bindtype .Type $structs}} {{.Name}}{{end}}) throws Exception {
Interfaces args = Geth.newInterfaces({{(len .Normalized.Inputs)}});
{{range $index, $item := .Normalized.Inputs}}Interface arg{{$index}} = Geth.newInterface();arg{{$index}}.set{{namedtype (bindtype .Type $structs) .Type}}({{.Name}});args.set({{$index}},arg{{$index}});
{{end}}
return this.Contract.transact(opts, "{{.Original.Name}}" , args);
}
{{end}}
{{if .Fallback}}
// Fallback is a paid mutator transaction binding the contract fallback function.
//
// Solidity: {{.Fallback.Original.String}}
public Transaction Fallback(TransactOpts opts, byte[] calldata) throws Exception {
return this.Contract.rawTransact(opts, calldata);
}
{{end}}
{{if .Receive}}
// Receive is a paid mutator transaction binding the contract receive function.
//
// Solidity: {{.Receive.Original.String}}
public Transaction Receive(TransactOpts opts) throws Exception {
return this.Contract.rawTransact(opts, null);
}
{{end}}
}
{{end}}
`

View File

@ -1,4 +1,4 @@
// Copyright 2016 The go-ethereum Authors // Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library. // This file is part of the go-ethereum library.
// //
// The go-ethereum library is free software: you can redistribute it and/or modify // The go-ethereum library is free software: you can redistribute it and/or modify
@ -30,13 +30,11 @@ type Error struct {
Name string Name string
Inputs Arguments Inputs Arguments
str string str string
// Sig contains the string signature according to the ABI spec. // Sig contains the string signature according to the ABI spec.
// e.g. error foo(uint32 a, int b) = "foo(uint32,int256)" // e.g. event foo(uint32 a, int b) = "foo(uint32,int256)"
// Please note that "int" is substitute for its canonical representation "int256" // Please note that "int" is substitute for its canonical representation "int256"
Sig string Sig string
// ID returns the canonical representation of the event's signature used by the
// ID returns the canonical representation of the error's signature used by the
// abi definition to identify event names and types. // abi definition to identify event names and types.
ID common.Hash ID common.Hash
} }

View File

@ -23,15 +23,7 @@ import (
) )
var ( var (
errBadBool = errors.New("abi: improperly encoded boolean value") errBadBool = errors.New("abi: improperly encoded boolean value")
errBadUint8 = errors.New("abi: improperly encoded uint8 value")
errBadUint16 = errors.New("abi: improperly encoded uint16 value")
errBadUint32 = errors.New("abi: improperly encoded uint32 value")
errBadUint64 = errors.New("abi: improperly encoded uint64 value")
errBadInt8 = errors.New("abi: improperly encoded int8 value")
errBadInt16 = errors.New("abi: improperly encoded int16 value")
errBadInt32 = errors.New("abi: improperly encoded int32 value")
errBadInt64 = errors.New("abi: improperly encoded int64 value")
) )
// formatSliceString formats the reflection kind with the given slice size // formatSliceString formats the reflection kind with the given slice size
@ -81,6 +73,7 @@ func typeCheck(t Type, value reflect.Value) error {
} else { } else {
return nil return nil
} }
} }
// typeErr returns a formatted type casting error. // typeErr returns a formatted type casting error.

View File

@ -29,27 +29,24 @@ import (
// don't get the signature canonical representation as the first LOG topic. // don't get the signature canonical representation as the first LOG topic.
type Event struct { type Event struct {
// Name is the event name used for internal representation. It's derived from // Name is the event name used for internal representation. It's derived from
// the raw name and a suffix will be added in the case of event overloading. // the raw name and a suffix will be added in the case of a event overload.
// //
// e.g. // e.g.
// These are two events that have the same name: // These are two events that have the same name:
// * foo(int,int) // * foo(int,int)
// * foo(uint,uint) // * foo(uint,uint)
// The event name of the first one will be resolved as foo while the second one // The event name of the first one wll be resolved as foo while the second one
// will be resolved as foo0. // will be resolved as foo0.
Name string Name string
// RawName is the raw event name parsed from ABI. // RawName is the raw event name parsed from ABI.
RawName string RawName string
Anonymous bool Anonymous bool
Inputs Arguments Inputs Arguments
str string str string
// Sig contains the string signature according to the ABI spec. // Sig contains the string signature according to the ABI spec.
// e.g. event foo(uint32 a, int b) = "foo(uint32,int256)" // e.g. event foo(uint32 a, int b) = "foo(uint32,int256)"
// Please note that "int" is substitute for its canonical representation "int256" // Please note that "int" is substitute for its canonical representation "int256"
Sig string Sig string
// ID returns the canonical representation of the event's signature used by the // ID returns the canonical representation of the event's signature used by the
// abi definition to identify event names and types. // abi definition to identify event names and types.
ID common.Hash ID common.Hash

View File

@ -161,6 +161,7 @@ func TestEventMultiValueWithArrayUnpack(t *testing.T) {
} }
func TestEventTupleUnpack(t *testing.T) { func TestEventTupleUnpack(t *testing.T) {
type EventTransfer struct { type EventTransfer struct {
Value *big.Int Value *big.Int
} }

View File

@ -25,19 +25,16 @@ import (
) )
// ConvertType converts an interface of a runtime type into a interface of the // ConvertType converts an interface of a runtime type into a interface of the
// given type, e.g. turn this code: // given type
// // e.g. turn
// var fields []reflect.StructField // var fields []reflect.StructField
// // fields = append(fields, reflect.StructField{
// fields = append(fields, reflect.StructField{ // Name: "X",
// Name: "X", // Type: reflect.TypeOf(new(big.Int)),
// Type: reflect.TypeOf(new(big.Int)), // Tag: reflect.StructTag("json:\"" + "x" + "\""),
// Tag: reflect.StructTag("json:\"" + "x" + "\""), // }
// } // into
// // type TupleT struct { X *big.Int }
// into:
//
// type TupleT struct { X *big.Int }
func ConvertType(in interface{}, proto interface{}) interface{} { func ConvertType(in interface{}, proto interface{}) interface{} {
protoType := reflect.TypeOf(proto) protoType := reflect.TypeOf(proto)
if reflect.TypeOf(in).ConvertibleTo(protoType) { if reflect.TypeOf(in).ConvertibleTo(protoType) {
@ -102,7 +99,7 @@ func mustArrayToByteSlice(value reflect.Value) reflect.Value {
func set(dst, src reflect.Value) error { func set(dst, src reflect.Value) error {
dstType, srcType := dst.Type(), src.Type() dstType, srcType := dst.Type(), src.Type()
switch { switch {
case dstType.Kind() == reflect.Interface && dst.Elem().IsValid() && (dst.Elem().Type().Kind() == reflect.Ptr || dst.Elem().CanSet()): case dstType.Kind() == reflect.Interface && dst.Elem().IsValid():
return set(dst.Elem(), src) return set(dst.Elem(), src)
case dstType.Kind() == reflect.Ptr && dstType.Elem() != reflect.TypeOf(big.Int{}): case dstType.Kind() == reflect.Ptr && dstType.Elem() != reflect.TypeOf(big.Int{}):
return set(dst.Elem(), src) return set(dst.Elem(), src)
@ -173,13 +170,11 @@ func setStruct(dst, src reflect.Value) error {
} }
// mapArgNamesToStructFields maps a slice of argument names to struct fields. // mapArgNamesToStructFields maps a slice of argument names to struct fields.
// // first round: for each Exportable field that contains a `abi:""` tag
// first round: for each Exportable field that contains a `abi:""` tag and this field name // and this field name exists in the given argument name list, pair them together.
// exists in the given argument name list, pair them together. // second round: for each argument name that has not been already linked,
// // find what variable is expected to be mapped into, if it exists and has not been
// second round: for each argument name that has not been already linked, find what // used, pair them.
// variable is expected to be mapped into, if it exists and has not been used, pair them.
//
// Note this function assumes the given value is a struct value. // Note this function assumes the given value is a struct value.
func mapArgNamesToStructFields(argNames []string, value reflect.Value) (map[string]string, error) { func mapArgNamesToStructFields(argNames []string, value reflect.Value) (map[string]string, error) {
typ := value.Type() typ := value.Type()
@ -225,6 +220,7 @@ func mapArgNamesToStructFields(argNames []string, value reflect.Value) (map[stri
// second round ~~~ // second round ~~~
for _, argName := range argNames { for _, argName := range argNames {
structFieldName := ToCamelCase(argName) structFieldName := ToCamelCase(argName)
if structFieldName == "" { if structFieldName == "" {

View File

@ -32,7 +32,7 @@ type reflectTest struct {
var reflectTests = []reflectTest{ var reflectTests = []reflectTest{
{ {
name: "OneToOneCorrespondence", name: "OneToOneCorrespondance",
args: []string{"fieldA"}, args: []string{"fieldA"},
struc: struct { struc: struct {
FieldA int `abi:"fieldA"` FieldA int `abi:"fieldA"`

View File

@ -1,176 +0,0 @@
// Copyright 2022 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"fmt"
)
type SelectorMarshaling struct {
Name string `json:"name"`
Type string `json:"type"`
Inputs []ArgumentMarshaling `json:"inputs"`
}
func isDigit(c byte) bool {
return c >= '0' && c <= '9'
}
func isAlpha(c byte) bool {
return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
}
func isIdentifierSymbol(c byte) bool {
return c == '$' || c == '_'
}
func parseToken(unescapedSelector string, isIdent bool) (string, string, error) {
if len(unescapedSelector) == 0 {
return "", "", fmt.Errorf("empty token")
}
firstChar := unescapedSelector[0]
position := 1
if !(isAlpha(firstChar) || (isIdent && isIdentifierSymbol(firstChar))) {
return "", "", fmt.Errorf("invalid token start: %c", firstChar)
}
for position < len(unescapedSelector) {
char := unescapedSelector[position]
if !(isAlpha(char) || isDigit(char) || (isIdent && isIdentifierSymbol(char))) {
break
}
position++
}
return unescapedSelector[:position], unescapedSelector[position:], nil
}
func parseIdentifier(unescapedSelector string) (string, string, error) {
return parseToken(unescapedSelector, true)
}
func parseElementaryType(unescapedSelector string) (string, string, error) {
parsedType, rest, err := parseToken(unescapedSelector, false)
if err != nil {
return "", "", fmt.Errorf("failed to parse elementary type: %v", err)
}
// handle arrays
for len(rest) > 0 && rest[0] == '[' {
parsedType = parsedType + string(rest[0])
rest = rest[1:]
for len(rest) > 0 && isDigit(rest[0]) {
parsedType = parsedType + string(rest[0])
rest = rest[1:]
}
if len(rest) == 0 || rest[0] != ']' {
return "", "", fmt.Errorf("failed to parse array: expected ']', got %c", unescapedSelector[0])
}
parsedType = parsedType + string(rest[0])
rest = rest[1:]
}
return parsedType, rest, nil
}
func parseCompositeType(unescapedSelector string) ([]interface{}, string, error) {
if len(unescapedSelector) == 0 || unescapedSelector[0] != '(' {
return nil, "", fmt.Errorf("expected '(', got %c", unescapedSelector[0])
}
parsedType, rest, err := parseType(unescapedSelector[1:])
if err != nil {
return nil, "", fmt.Errorf("failed to parse type: %v", err)
}
result := []interface{}{parsedType}
for len(rest) > 0 && rest[0] != ')' {
parsedType, rest, err = parseType(rest[1:])
if err != nil {
return nil, "", fmt.Errorf("failed to parse type: %v", err)
}
result = append(result, parsedType)
}
if len(rest) == 0 || rest[0] != ')' {
return nil, "", fmt.Errorf("expected ')', got '%s'", rest)
}
if len(rest) >= 3 && rest[1] == '[' && rest[2] == ']' {
return append(result, "[]"), rest[3:], nil
}
return result, rest[1:], nil
}
func parseType(unescapedSelector string) (interface{}, string, error) {
if len(unescapedSelector) == 0 {
return nil, "", fmt.Errorf("empty type")
}
if unescapedSelector[0] == '(' {
return parseCompositeType(unescapedSelector)
} else {
return parseElementaryType(unescapedSelector)
}
}
func assembleArgs(args []interface{}) ([]ArgumentMarshaling, error) {
arguments := make([]ArgumentMarshaling, 0)
for i, arg := range args {
// generate dummy name to avoid unmarshal issues
name := fmt.Sprintf("name%d", i)
if s, ok := arg.(string); ok {
arguments = append(arguments, ArgumentMarshaling{name, s, s, nil, false})
} else if components, ok := arg.([]interface{}); ok {
subArgs, err := assembleArgs(components)
if err != nil {
return nil, fmt.Errorf("failed to assemble components: %v", err)
}
tupleType := "tuple"
if len(subArgs) != 0 && subArgs[len(subArgs)-1].Type == "[]" {
subArgs = subArgs[:len(subArgs)-1]
tupleType = "tuple[]"
}
arguments = append(arguments, ArgumentMarshaling{name, tupleType, tupleType, subArgs, false})
} else {
return nil, fmt.Errorf("failed to assemble args: unexpected type %T", arg)
}
}
return arguments, nil
}
// ParseSelector converts a method selector into a struct that can be JSON encoded
// and consumed by other functions in this package.
// Note, although uppercase letters are not part of the ABI spec, this function
// still accepts them as the general format is valid.
func ParseSelector(unescapedSelector string) (SelectorMarshaling, error) {
name, rest, err := parseIdentifier(unescapedSelector)
if err != nil {
return SelectorMarshaling{}, fmt.Errorf("failed to parse selector '%s': %v", unescapedSelector, err)
}
args := []interface{}{}
if len(rest) >= 2 && rest[0] == '(' && rest[1] == ')' {
rest = rest[2:]
} else {
args, rest, err = parseCompositeType(rest)
if err != nil {
return SelectorMarshaling{}, fmt.Errorf("failed to parse selector '%s': %v", unescapedSelector, err)
}
}
if len(rest) > 0 {
return SelectorMarshaling{}, fmt.Errorf("failed to parse selector '%s': unexpected string '%s'", unescapedSelector, rest)
}
// Reassemble the fake ABI and construct the JSON
fakeArgs, err := assembleArgs(args)
if err != nil {
return SelectorMarshaling{}, fmt.Errorf("failed to parse selector: %v", err)
}
return SelectorMarshaling{name, "function", fakeArgs}, nil
}
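A minimal usage sketch of the parser above, assuming the side of this compare that still contains ParseSelector; the expected values are inferred from the code shown:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
	// Parse a human-readable selector into the marshaling struct above.
	sel, err := abi.ParseSelector("transfer(address,uint256)")
	if err != nil {
		panic(err)
	}
	// Name is "transfer", Type is "function", and the two inputs carry the
	// generated dummy names "name0" and "name1".
	fmt.Println(sel.Name, sel.Type, len(sel.Inputs))
}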

View File

@ -1,79 +0,0 @@
// Copyright 2022 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import (
"fmt"
"log"
"reflect"
"testing"
)
func TestParseSelector(t *testing.T) {
mkType := func(types ...interface{}) []ArgumentMarshaling {
var result []ArgumentMarshaling
for i, typeOrComponents := range types {
name := fmt.Sprintf("name%d", i)
if typeName, ok := typeOrComponents.(string); ok {
result = append(result, ArgumentMarshaling{name, typeName, typeName, nil, false})
} else if components, ok := typeOrComponents.([]ArgumentMarshaling); ok {
result = append(result, ArgumentMarshaling{name, "tuple", "tuple", components, false})
} else if components, ok := typeOrComponents.([][]ArgumentMarshaling); ok {
result = append(result, ArgumentMarshaling{name, "tuple[]", "tuple[]", components[0], false})
} else {
log.Fatalf("unexpected type %T", typeOrComponents)
}
}
return result
}
tests := []struct {
input string
name string
args []ArgumentMarshaling
}{
{"noargs()", "noargs", []ArgumentMarshaling{}},
{"simple(uint256,uint256,uint256)", "simple", mkType("uint256", "uint256", "uint256")},
{"other(uint256,address)", "other", mkType("uint256", "address")},
{"withArray(uint256[],address[2],uint8[4][][5])", "withArray", mkType("uint256[]", "address[2]", "uint8[4][][5]")},
{"singleNest(bytes32,uint8,(uint256,uint256),address)", "singleNest", mkType("bytes32", "uint8", mkType("uint256", "uint256"), "address")},
{"multiNest(address,(uint256[],uint256),((address,bytes32),uint256))", "multiNest",
mkType("address", mkType("uint256[]", "uint256"), mkType(mkType("address", "bytes32"), "uint256"))},
{"arrayNest((uint256,uint256)[],bytes32)", "arrayNest", mkType([][]ArgumentMarshaling{mkType("uint256", "uint256")}, "bytes32")},
{"multiArrayNest((uint256,uint256)[],(uint256,uint256)[])", "multiArrayNest",
mkType([][]ArgumentMarshaling{mkType("uint256", "uint256")}, [][]ArgumentMarshaling{mkType("uint256", "uint256")})},
{"singleArrayNestAndArray((uint256,uint256)[],bytes32[])", "singleArrayNestAndArray",
mkType([][]ArgumentMarshaling{mkType("uint256", "uint256")}, "bytes32[]")},
{"singleArrayNestWithArrayAndArray((uint256[],address[2],uint8[4][][5])[],bytes32[])", "singleArrayNestWithArrayAndArray",
mkType([][]ArgumentMarshaling{mkType("uint256[]", "address[2]", "uint8[4][][5]")}, "bytes32[]")},
}
for i, tt := range tests {
selector, err := ParseSelector(tt.input)
if err != nil {
t.Errorf("test %d: failed to parse selector '%v': %v", i, tt.input, err)
}
if selector.Name != tt.name {
t.Errorf("test %d: unexpected function name: '%s' != '%s'", i, selector.Name, tt.name)
}
if selector.Type != "function" {
t.Errorf("test %d: unexpected type: '%s' != '%s'", i, selector.Type, "function")
}
if !reflect.DeepEqual(selector.Inputs, tt.args) {
t.Errorf("test %d: unexpected args: '%v' != '%v'", i, selector.Inputs, tt.args)
}
}
}

View File

@ -1,4 +1,4 @@
// Copyright 2020 The go-ethereum Authors // Copyright 2019 The go-ethereum Authors
// This file is part of the go-ethereum library. // This file is part of the go-ethereum library.
// //
// The go-ethereum library is free software: you can redistribute it and/or modify // The go-ethereum library is free software: you can redistribute it and/or modify

View File

@ -23,8 +23,6 @@ import (
"regexp" "regexp"
"strconv" "strconv"
"strings" "strings"
"unicode"
"unicode/utf8"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
) )
@ -154,9 +152,6 @@ func NewType(t string, internalType string, components []ArgumentMarshaling) (ty
if varSize == 0 { if varSize == 0 {
typ.T = BytesTy typ.T = BytesTy
} else { } else {
if varSize > 32 {
return Type{}, fmt.Errorf("unsupported arg type: %s", t)
}
typ.T = FixedBytesTy typ.T = FixedBytesTy
typ.Size = varSize typ.Size = varSize
} }
@ -166,26 +161,19 @@ func NewType(t string, internalType string, components []ArgumentMarshaling) (ty
elems []*Type elems []*Type
names []string names []string
expression string // canonical parameter expression expression string // canonical parameter expression
used = make(map[string]bool)
) )
expression += "(" expression += "("
overloadedNames := make(map[string]string)
for idx, c := range components { for idx, c := range components {
cType, err := NewType(c.Type, c.InternalType, c.Components) cType, err := NewType(c.Type, c.InternalType, c.Components)
if err != nil { if err != nil {
return Type{}, err return Type{}, err
} }
name := ToCamelCase(c.Name) fieldName, err := overloadedArgName(c.Name, overloadedNames)
if name == "" {
return Type{}, errors.New("abi: purely anonymous or underscored field is not supported")
}
fieldName := ResolveNameConflict(name, func(s string) bool { return used[s] })
if err != nil { if err != nil {
return Type{}, err return Type{}, err
} }
used[fieldName] = true overloadedNames[fieldName] = fieldName
if !isValidFieldName(fieldName) {
return Type{}, fmt.Errorf("field %d has invalid name", idx)
}
fields = append(fields, reflect.StructField{ fields = append(fields, reflect.StructField{
Name: fieldName, // reflect.StructOf will panic for any exported field. Name: fieldName, // reflect.StructOf will panic for any exported field.
Type: cType.GetType(), Type: cType.GetType(),
@ -213,7 +201,7 @@ func NewType(t string, internalType string, components []ArgumentMarshaling) (ty
if internalType != "" && strings.HasPrefix(internalType, structPrefix) { if internalType != "" && strings.HasPrefix(internalType, structPrefix) {
// Foo.Bar type definition is not allowed in golang, // Foo.Bar type definition is not allowed in golang,
// convert the format to FooBar // convert the format to FooBar
typ.TupleRawName = strings.ReplaceAll(internalType[len(structPrefix):], ".", "") typ.TupleRawName = strings.Replace(internalType[len(structPrefix):], ".", "", -1)
} }
case "function": case "function":
@ -262,6 +250,20 @@ func (t Type) GetType() reflect.Type {
} }
} }
func overloadedArgName(rawName string, names map[string]string) (string, error) {
fieldName := ToCamelCase(rawName)
if fieldName == "" {
return "", errors.New("abi: purely anonymous or underscored field is not supported")
}
// Handle overloaded fieldNames
_, ok := names[fieldName]
for idx := 0; ok; idx++ {
fieldName = fmt.Sprintf("%s%d", ToCamelCase(rawName), idx)
_, ok = names[fieldName]
}
return fieldName, nil
}
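For example, if two tuple components are both named value, the first maps to the field name Value and the loop above renames the second to Value0 before the caller records it in overloadedNames.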
// String implements Stringer. // String implements Stringer.
func (t Type) String() (out string) { func (t Type) String() (out string) {
return t.stringKind return t.stringKind
@ -397,30 +399,3 @@ func getTypeSize(t Type) int {
} }
return 32 return 32
} }
// isLetter reports whether a given 'rune' is classified as a Letter.
// This method is copied from reflect/type.go
func isLetter(ch rune) bool {
return 'a' <= ch && ch <= 'z' || 'A' <= ch && ch <= 'Z' || ch == '_' || ch >= utf8.RuneSelf && unicode.IsLetter(ch)
}
// isValidFieldName checks if a string is a valid (struct) field name or not.
//
// According to the language spec, a field name should be an identifier.
//
// identifier = letter { letter | unicode_digit } .
// letter = unicode_letter | "_" .
// This method is copied from reflect/type.go
func isValidFieldName(fieldName string) bool {
for i, c := range fieldName {
if i == 0 && !isLetter(c) {
return false
}
if !(isLetter(c) || unicode.IsDigit(c)) {
return false
}
}
return len(fieldName) > 0
}

View File

@ -366,10 +366,3 @@ func TestGetTypeSize(t *testing.T) {
} }
} }
} }
func TestNewFixedBytesOver32(t *testing.T) {
_, err := NewType("bytes4096", "", nil)
if err == nil {
t.Errorf("fixed bytes with size over 32 is not spec'd")
}
}

View File

@ -19,7 +19,6 @@ package abi
import ( import (
"encoding/binary" "encoding/binary"
"fmt" "fmt"
"math"
"math/big" "math/big"
"reflect" "reflect"
@ -34,72 +33,43 @@ var (
) )
// ReadInteger reads the integer based on its kind and returns the appropriate value. // ReadInteger reads the integer based on its kind and returns the appropriate value.
func ReadInteger(typ Type, b []byte) (interface{}, error) { func ReadInteger(typ Type, b []byte) interface{} {
ret := new(big.Int).SetBytes(b)
if typ.T == UintTy { if typ.T == UintTy {
u64, isu64 := ret.Uint64(), ret.IsUint64()
switch typ.Size { switch typ.Size {
case 8: case 8:
if !isu64 || u64 > math.MaxUint8 { return b[len(b)-1]
return nil, errBadUint8
}
return byte(u64), nil
case 16: case 16:
if !isu64 || u64 > math.MaxUint16 { return binary.BigEndian.Uint16(b[len(b)-2:])
return nil, errBadUint16
}
return uint16(u64), nil
case 32: case 32:
if !isu64 || u64 > math.MaxUint32 { return binary.BigEndian.Uint32(b[len(b)-4:])
return nil, errBadUint32
}
return uint32(u64), nil
case 64: case 64:
if !isu64 { return binary.BigEndian.Uint64(b[len(b)-8:])
return nil, errBadUint64
}
return u64, nil
default: default:
// the only case left for unsigned integer is uint256. // the only case left for unsigned integer is uint256.
return ret, nil return new(big.Int).SetBytes(b)
} }
} }
// big.SetBytes can't tell if a number is negative or positive in itself.
// On EVM, if the returned number > max int256, it is negative.
// A number is > max int256 if the bit at position 255 is set.
if ret.Bit(255) == 1 {
ret.Add(MaxUint256, new(big.Int).Neg(ret))
ret.Add(ret, common.Big1)
ret.Neg(ret)
}
i64, isi64 := ret.Int64(), ret.IsInt64()
switch typ.Size { switch typ.Size {
case 8: case 8:
if !isi64 || i64 < math.MinInt8 || i64 > math.MaxInt8 { return int8(b[len(b)-1])
return nil, errBadInt8
}
return int8(i64), nil
case 16: case 16:
if !isi64 || i64 < math.MinInt16 || i64 > math.MaxInt16 { return int16(binary.BigEndian.Uint16(b[len(b)-2:]))
return nil, errBadInt16
}
return int16(i64), nil
case 32: case 32:
if !isi64 || i64 < math.MinInt32 || i64 > math.MaxInt32 { return int32(binary.BigEndian.Uint32(b[len(b)-4:]))
return nil, errBadInt32
}
return int32(i64), nil
case 64: case 64:
if !isi64 { return int64(binary.BigEndian.Uint64(b[len(b)-8:]))
return nil, errBadInt64
}
return i64, nil
default: default:
// the only case left for integer is int256 // the only case left for integer is int256
// big.SetBytes can't tell if a number is negative or positive in itself.
return ret, nil // On EVM, if the returned number > max int256, it is negative.
// A number is > max int256 if the bit at position 255 is set.
ret := new(big.Int).SetBytes(b)
if ret.Bit(255) == 1 {
ret.Add(MaxUint256, new(big.Int).Neg(ret))
ret.Add(ret, common.Big1)
ret.Neg(ret)
}
return ret
} }
} }
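
Illustrative aside, not part of the changeset: the sign handling described in the comments above (a 32-byte word decodes as a negative int256 exactly when bit 255 is set) can be shown in isolation. A minimal sketch of that two's-complement adjustment against the standard library only; readInt256 is a hypothetical helper, not a go-ethereum function.

```go
// big.Int.SetBytes always yields a non-negative value, so a 32-byte word
// with bit 255 set must be re-interpreted as a negative two's-complement
// number by subtracting 2^256. This is equivalent to the MaxUint256
// arithmetic used in ReadInteger above.
package main

import (
	"fmt"
	"math/big"
)

func readInt256(word [32]byte) *big.Int {
	ret := new(big.Int).SetBytes(word[:])
	if ret.Bit(255) == 1 {
		// value - 2^256: the negative two's-complement interpretation.
		two256 := new(big.Int).Lsh(big.NewInt(1), 256)
		ret.Sub(ret, two256)
	}
	return ret
}

func main() {
	// 32 bytes of 0xff encode -1 in two's complement.
	var word [32]byte
	for i := range word {
		word[i] = 0xff
	}
	fmt.Println(readInt256(word)) // -1
}
```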
@ -145,6 +115,7 @@ func ReadFixedBytes(t Type, word []byte) (interface{}, error) {
reflect.Copy(array, reflect.ValueOf(word[0:t.Size])) reflect.Copy(array, reflect.ValueOf(word[0:t.Size]))
return array.Interface(), nil return array.Interface(), nil
} }
// forEachUnpack iteratively unpack elements. // forEachUnpack iteratively unpack elements.
@ -153,7 +124,7 @@ func forEachUnpack(t Type, output []byte, start, size int) (interface{}, error)
return nil, fmt.Errorf("cannot marshal input to array, size is negative (%d)", size) return nil, fmt.Errorf("cannot marshal input to array, size is negative (%d)", size)
} }
if start+32*size > len(output) { if start+32*size > len(output) {
return nil, fmt.Errorf("abi: cannot marshal into go array: offset %d would go over slice boundary (len=%d)", len(output), start+32*size) return nil, fmt.Errorf("abi: cannot marshal in to go array: offset %d would go over slice boundary (len=%d)", len(output), start+32*size)
} }
// this value will become our slice or our array, depending on the type // this value will become our slice or our array, depending on the type
@ -192,9 +163,6 @@ func forTupleUnpack(t Type, output []byte) (interface{}, error) {
virtualArgs := 0 virtualArgs := 0
for index, elem := range t.TupleElems { for index, elem := range t.TupleElems {
marshalledValue, err := toGoType((index+virtualArgs)*32, *elem, output) marshalledValue, err := toGoType((index+virtualArgs)*32, *elem, output)
if err != nil {
return nil, err
}
if elem.T == ArrayTy && !isDynamicType(*elem) { if elem.T == ArrayTy && !isDynamicType(*elem) {
// If we have a static array, like [3]uint256, these are coded as // If we have a static array, like [3]uint256, these are coded as
// just like uint256,uint256,uint256. // just like uint256,uint256,uint256.
@ -212,6 +180,9 @@ func forTupleUnpack(t Type, output []byte) (interface{}, error) {
// coded as just like uint256,bool,uint256 // coded as just like uint256,bool,uint256
virtualArgs += getTypeSize(*elem)/32 - 1 virtualArgs += getTypeSize(*elem)/32 - 1
} }
if err != nil {
return nil, err
}
retval.Field(index).Set(reflect.ValueOf(marshalledValue)) retval.Field(index).Set(reflect.ValueOf(marshalledValue))
} }
return retval.Interface(), nil return retval.Interface(), nil
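
Illustrative aside, not part of the changeset: the virtualArgs bookkeeping above exists because statically encoded composites such as [3]uint256 occupy several consecutive head words. A small sketch of the offset arithmetic, assuming a tuple (uint256, uint256[3], uint256); the sizes slice stands in for getTypeSize and is not go-ethereum code.

```go
// Worked example of the virtualArgs bookkeeping: in a tuple
// (uint256, uint256[3], uint256) the static array is encoded inline as
// three consecutive 32-byte words, so each element's head offset is
// (index+virtualArgs)*32, with virtualArgs growing by size-1 words for
// every static composite that has already been passed.
package main

import "fmt"

func main() {
	// Word sizes of the elements in (uint256, uint256[3], uint256);
	// a stand-in for getTypeSize(elem)/32.
	sizes := []int{1, 3, 1}

	virtualArgs := 0
	for index, size := range sizes {
		fmt.Printf("element %d starts at byte offset %d\n", index, (index+virtualArgs)*32)
		virtualArgs += size - 1
	}
	// element 0 starts at byte offset 0
	// element 1 starts at byte offset 32
	// element 2 starts at byte offset 128
}
```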
@ -264,7 +235,7 @@ func toGoType(index int, t Type, output []byte) (interface{}, error) {
case StringTy: // variable arrays are written at the end of the return bytes case StringTy: // variable arrays are written at the end of the return bytes
return string(output[begin : begin+length]), nil return string(output[begin : begin+length]), nil
case IntTy, UintTy: case IntTy, UintTy:
return ReadInteger(t, returnOutput) return ReadInteger(t, returnOutput), nil
case BoolTy: case BoolTy:
return readBool(returnOutput) return readBool(returnOutput)
case AddressTy: case AddressTy:
@ -284,7 +255,7 @@ func toGoType(index int, t Type, output []byte) (interface{}, error) {
// lengthPrefixPointsTo interprets a 32 byte slice as an offset and then determines which indices to look to decode the type. // lengthPrefixPointsTo interprets a 32 byte slice as an offset and then determines which indices to look to decode the type.
func lengthPrefixPointsTo(index int, output []byte) (start int, length int, err error) { func lengthPrefixPointsTo(index int, output []byte) (start int, length int, err error) {
bigOffsetEnd := new(big.Int).SetBytes(output[index : index+32]) bigOffsetEnd := big.NewInt(0).SetBytes(output[index : index+32])
bigOffsetEnd.Add(bigOffsetEnd, common.Big32) bigOffsetEnd.Add(bigOffsetEnd, common.Big32)
outputLength := big.NewInt(int64(len(output))) outputLength := big.NewInt(int64(len(output)))
@ -297,9 +268,11 @@ func lengthPrefixPointsTo(index int, output []byte) (start int, length int, err
} }
offsetEnd := int(bigOffsetEnd.Uint64()) offsetEnd := int(bigOffsetEnd.Uint64())
lengthBig := new(big.Int).SetBytes(output[offsetEnd-32 : offsetEnd]) lengthBig := big.NewInt(0).SetBytes(output[offsetEnd-32 : offsetEnd])
totalSize := new(big.Int).Add(bigOffsetEnd, lengthBig) totalSize := big.NewInt(0)
totalSize.Add(totalSize, bigOffsetEnd)
totalSize.Add(totalSize, lengthBig)
if totalSize.BitLen() > 63 { if totalSize.BitLen() > 63 {
return 0, 0, fmt.Errorf("abi: length larger than int64: %v", totalSize) return 0, 0, fmt.Errorf("abi: length larger than int64: %v", totalSize)
} }
@ -314,7 +287,7 @@ func lengthPrefixPointsTo(index int, output []byte) (start int, length int, err
// tuplePointsTo resolves the location reference for dynamic tuple. // tuplePointsTo resolves the location reference for dynamic tuple.
func tuplePointsTo(index int, output []byte) (start int, err error) { func tuplePointsTo(index int, output []byte) (start int, err error) {
offset := new(big.Int).SetBytes(output[index : index+32]) offset := big.NewInt(0).SetBytes(output[index : index+32])
outputLen := big.NewInt(int64(len(output))) outputLen := big.NewInt(int64(len(output)))
if offset.Cmp(outputLen) > 0 { if offset.Cmp(outputLen) > 0 {
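
Illustrative aside, not part of the changeset: lengthPrefixPointsTo and tuplePointsTo both walk the standard ABI head/tail layout, where the head word is a byte offset into the output and, for length-prefixed types, the word at that offset holds the element count. A self-contained sketch of that walk with the same style of bounds checks; decodeDynamicBytes is a hypothetical helper, not the library function.

```go
// For a dynamic type the head word is a byte offset into the output, and
// the 32-byte word at that offset is the length, followed by the content.
package main

import (
	"fmt"
	"math/big"
)

func decodeDynamicBytes(output []byte, index int) ([]byte, error) {
	if index+32 > len(output) {
		return nil, fmt.Errorf("offset word out of range")
	}
	offset := new(big.Int).SetBytes(output[index : index+32])
	if !offset.IsInt64() || offset.Int64()+32 > int64(len(output)) {
		return nil, fmt.Errorf("offset points past end of output")
	}
	start := int(offset.Int64())
	length := new(big.Int).SetBytes(output[start : start+32])
	if !length.IsInt64() || int64(start)+32+length.Int64() > int64(len(output)) {
		return nil, fmt.Errorf("length points past end of output")
	}
	begin := start + 32
	return output[begin : begin+int(length.Int64())], nil
}

func main() {
	// Encoding of one dynamic `bytes` argument with content "abi":
	// word 0: offset 32, word 1: length 3, word 2: "abi" padded to 32 bytes.
	output := make([]byte, 96)
	output[31] = 32 // offset
	output[63] = 3  // length
	copy(output[64:], []byte("abi"))

	data, err := decodeDynamicBytes(output, 0)
	fmt.Println(string(data), err) // abi <nil>
}
```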


@ -20,7 +20,6 @@ import (
"bytes" "bytes"
"encoding/hex" "encoding/hex"
"fmt" "fmt"
"math"
"math/big" "math/big"
"reflect" "reflect"
"strconv" "strconv"
@ -202,23 +201,6 @@ var unpackTests = []unpackTest{
IntOne *big.Int IntOne *big.Int
}{big.NewInt(1)}, }{big.NewInt(1)},
}, },
{
def: `[{"type":"bool"}]`,
enc: "",
want: false,
err: "abi: attempting to unmarshall an empty string while arguments are expected",
},
{
def: `[{"type":"bytes32","indexed":true},{"type":"uint256","indexed":false}]`,
enc: "",
want: false,
err: "abi: attempting to unmarshall an empty string while arguments are expected",
},
{
def: `[{"type":"bool","indexed":true},{"type":"uint64","indexed":true}]`,
enc: "",
want: false,
},
} }
// TestLocalUnpackTests runs test specially designed only for unpacking. // TestLocalUnpackTests runs test specially designed only for unpacking.
@ -353,11 +335,6 @@ func TestMethodMultiReturn(t *testing.T) {
&[]interface{}{&expected.Int, &expected.String}, &[]interface{}{&expected.Int, &expected.String},
"", "",
"Can unpack into a slice", "Can unpack into a slice",
}, {
&[]interface{}{&bigint, ""},
&[]interface{}{&expected.Int, expected.String},
"",
"Can unpack into a slice without indirection",
}, { }, {
&[2]interface{}{&bigint, new(string)}, &[2]interface{}{&bigint, new(string)},
&[2]interface{}{&expected.Int, &expected.String}, &[2]interface{}{&expected.Int, &expected.String},
@ -430,7 +407,7 @@ func TestMultiReturnWithStringArray(t *testing.T) {
} }
buff := new(bytes.Buffer) buff := new(bytes.Buffer)
buff.Write(common.Hex2Bytes("000000000000000000000000000000000000000000000000000000005c1b78ea0000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000001a055690d9db80000000000000000000000000000ab1257528b3782fb40d7ed5f72e624b744dffb2f00000000000000000000000000000000000000000000000000000000000000c00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000008457468657265756d000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001048656c6c6f2c20457468657265756d2100000000000000000000000000000000")) buff.Write(common.Hex2Bytes("000000000000000000000000000000000000000000000000000000005c1b78ea0000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000001a055690d9db80000000000000000000000000000ab1257528b3782fb40d7ed5f72e624b744dffb2f00000000000000000000000000000000000000000000000000000000000000c00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000800000000000000000000000000000000000000000000000000000000000000008457468657265756d000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001048656c6c6f2c20457468657265756d2100000000000000000000000000000000"))
temp, _ := new(big.Int).SetString("30000000000000000000", 10) temp, _ := big.NewInt(0).SetString("30000000000000000000", 10)
ret1, ret1Exp := new([3]*big.Int), [3]*big.Int{big.NewInt(1545304298), big.NewInt(6), temp} ret1, ret1Exp := new([3]*big.Int), [3]*big.Int{big.NewInt(1545304298), big.NewInt(6), temp}
ret2, ret2Exp := new(common.Address), common.HexToAddress("ab1257528b3782fb40d7ed5f72e624b744dffb2f") ret2, ret2Exp := new(common.Address), common.HexToAddress("ab1257528b3782fb40d7ed5f72e624b744dffb2f")
ret3, ret3Exp := new([2]string), [2]string{"Ethereum", "Hello, Ethereum!"} ret3, ret3Exp := new([2]string), [2]string{"Ethereum", "Hello, Ethereum!"}
@ -944,164 +921,3 @@ func TestOOMMaliciousInput(t *testing.T) {
} }
} }
} }
func TestPackAndUnpackIncompatibleNumber(t *testing.T) {
var encodeABI Arguments
uint256Ty, err := NewType("uint256", "", nil)
if err != nil {
panic(err)
}
encodeABI = Arguments{
{Type: uint256Ty},
}
maxU64, ok := new(big.Int).SetString(strconv.FormatUint(math.MaxUint64, 10), 10)
if !ok {
panic("bug")
}
maxU64Plus1 := new(big.Int).Add(maxU64, big.NewInt(1))
cases := []struct {
decodeType string
inputValue *big.Int
err error
expectValue interface{}
}{
{
decodeType: "uint8",
inputValue: big.NewInt(math.MaxUint8 + 1),
err: errBadUint8,
},
{
decodeType: "uint8",
inputValue: big.NewInt(math.MaxUint8),
err: nil,
expectValue: uint8(math.MaxUint8),
},
{
decodeType: "uint16",
inputValue: big.NewInt(math.MaxUint16 + 1),
err: errBadUint16,
},
{
decodeType: "uint16",
inputValue: big.NewInt(math.MaxUint16),
err: nil,
expectValue: uint16(math.MaxUint16),
},
{
decodeType: "uint32",
inputValue: big.NewInt(math.MaxUint32 + 1),
err: errBadUint32,
},
{
decodeType: "uint32",
inputValue: big.NewInt(math.MaxUint32),
err: nil,
expectValue: uint32(math.MaxUint32),
},
{
decodeType: "uint64",
inputValue: maxU64Plus1,
err: errBadUint64,
},
{
decodeType: "uint64",
inputValue: maxU64,
err: nil,
expectValue: uint64(math.MaxUint64),
},
{
decodeType: "uint256",
inputValue: maxU64Plus1,
err: nil,
expectValue: maxU64Plus1,
},
{
decodeType: "int8",
inputValue: big.NewInt(math.MaxInt8 + 1),
err: errBadInt8,
},
{
decodeType: "int8",
inputValue: big.NewInt(math.MinInt8 - 1),
err: errBadInt8,
},
{
decodeType: "int8",
inputValue: big.NewInt(math.MaxInt8),
err: nil,
expectValue: int8(math.MaxInt8),
},
{
decodeType: "int16",
inputValue: big.NewInt(math.MaxInt16 + 1),
err: errBadInt16,
},
{
decodeType: "int16",
inputValue: big.NewInt(math.MinInt16 - 1),
err: errBadInt16,
},
{
decodeType: "int16",
inputValue: big.NewInt(math.MaxInt16),
err: nil,
expectValue: int16(math.MaxInt16),
},
{
decodeType: "int32",
inputValue: big.NewInt(math.MaxInt32 + 1),
err: errBadInt32,
},
{
decodeType: "int32",
inputValue: big.NewInt(math.MinInt32 - 1),
err: errBadInt32,
},
{
decodeType: "int32",
inputValue: big.NewInt(math.MaxInt32),
err: nil,
expectValue: int32(math.MaxInt32),
},
{
decodeType: "int64",
inputValue: new(big.Int).Add(big.NewInt(math.MaxInt64), big.NewInt(1)),
err: errBadInt64,
},
{
decodeType: "int64",
inputValue: new(big.Int).Sub(big.NewInt(math.MinInt64), big.NewInt(1)),
err: errBadInt64,
},
{
decodeType: "int64",
inputValue: big.NewInt(math.MaxInt64),
err: nil,
expectValue: int64(math.MaxInt64),
},
}
for i, testCase := range cases {
packed, err := encodeABI.Pack(testCase.inputValue)
if err != nil {
panic(err)
}
ty, err := NewType(testCase.decodeType, "", nil)
if err != nil {
panic(err)
}
decodeABI := Arguments{
{Type: ty},
}
decoded, err := decodeABI.Unpack(packed)
if err != testCase.err {
t.Fatalf("Expected error %v, actual error %v. case %d", testCase.err, err, i)
}
if err != nil {
continue
}
if !reflect.DeepEqual(decoded[0], testCase.expectValue) {
t.Fatalf("Expected value %v, actual value %v", testCase.expectValue, decoded[0])
}
}
}


@ -1,40 +0,0 @@
// Copyright 2022 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package abi
import "fmt"
// ResolveNameConflict returns the next available name for a given thing.
// This helper can be used for lots of purposes:
//
// - In solidity function overloading is supported, this function can fix
// the name conflicts of overloaded functions.
// - In golang binding generation, the parameter(in function, event, error,
// and struct definition) name will be converted to camelcase style which
// may eventually lead to name conflicts.
//
// Name conflicts are mostly resolved by adding number suffix. e.g. if the abi contains
// Methods "send" and "send1", ResolveNameConflict would return "send2" for input "send".
func ResolveNameConflict(rawName string, used func(string) bool) string {
name := rawName
ok := used(name)
for idx := 0; ok; idx++ {
name = fmt.Sprintf("%s%d", rawName, idx)
ok = used(name)
}
return name
}
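
Illustrative aside, not part of the changeset: a usage sketch for the helper above, with the used callback closing over a caller-maintained registry; the function body is copied verbatim so the snippet runs on its own.

```go
package main

import "fmt"

func ResolveNameConflict(rawName string, used func(string) bool) string {
	name := rawName
	ok := used(name)
	for idx := 0; ok; idx++ {
		name = fmt.Sprintf("%s%d", rawName, idx)
		ok = used(name)
	}
	return name
}

func main() {
	// Registry of names already taken, e.g. by overloaded ABI methods.
	taken := map[string]bool{"send": true, "send0": true, "send1": true}
	used := func(name string) bool { return taken[name] }

	fmt.Println(ResolveNameConflict("send", used))     // send2
	fmt.Println(ResolveNameConflict("transfer", used)) // transfer
}
```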


@ -177,8 +177,7 @@ type Backend interface {
// safely used to calculate a signature from. // safely used to calculate a signature from.
// //
// The hash is calculated as // The hash is calculated as
// // keccak256("\x19Ethereum Signed Message:\n"${message length}${message}).
// keccak256("\x19Ethereum Signed Message:\n"${message length}${message}).
// //
// This gives context to the signed message and prevents signing of transactions. // This gives context to the signed message and prevents signing of transactions.
func TextHash(data []byte) []byte { func TextHash(data []byte) []byte {
@ -190,8 +189,7 @@ func TextHash(data []byte) []byte {
// safely used to calculate a signature from. // safely used to calculate a signature from.
// //
// The hash is calculated as // The hash is calculated as
// // keccak256("\x19Ethereum Signed Message:\n"${message length}${message}).
// keccak256("\x19Ethereum Signed Message:\n"${message length}${message}).
// //
// This gives context to the signed message and prevents signing of transactions. // This gives context to the signed message and prevents signing of transactions.
func TextAndHash(data []byte) ([]byte, string) { func TextAndHash(data []byte) ([]byte, string) {
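
Illustrative aside, not part of the changeset: the prefix hash described in both doc comments can be reproduced with any Keccak-256 implementation. A sketch using the legacy Keccak from golang.org/x/crypto/sha3 rather than go-ethereum's crypto package; textHash here is a stand-in, not the accounts.TextHash function itself.

```go
// Computes keccak256("\x19Ethereum Signed Message:\n" + len(message) + message).
package main

import (
	"fmt"

	"golang.org/x/crypto/sha3"
)

func textHash(data []byte) []byte {
	msg := fmt.Sprintf("\x19Ethereum Signed Message:\n%d%s", len(data), data)
	h := sha3.NewLegacyKeccak256()
	h.Write([]byte(msg))
	return h.Sum(nil)
}

func main() {
	fmt.Printf("%x\n", textHash([]byte("hello")))
}
```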


@ -41,7 +41,8 @@ var ErrInvalidPassphrase = errors.New("invalid password")
// second time. // second time.
var ErrWalletAlreadyOpen = errors.New("wallet already open") var ErrWalletAlreadyOpen = errors.New("wallet already open")
// ErrWalletClosed is returned if a wallet is offline. // ErrWalletClosed is returned if a wallet is attempted to be opened the
// second time.
var ErrWalletClosed = errors.New("wallet closed") var ErrWalletClosed = errors.New("wallet closed")
// AuthNeededError is returned by backends for signing requests where the user // AuthNeededError is returned by backends for signing requests where the user


@ -152,6 +152,10 @@ func (api *ExternalSigner) SelfDerive(bases []accounts.DerivationPath, chain eth
log.Error("operation SelfDerive not supported on external signers") log.Error("operation SelfDerive not supported on external signers")
} }
func (api *ExternalSigner) signHash(account accounts.Account, hash []byte) ([]byte, error) {
return []byte{}, fmt.Errorf("operation not supported on external signers")
}
// SignData signs keccak256(data). The mimetype parameter describes the type of data being signed // SignData signs keccak256(data). The mimetype parameter describes the type of data being signed
func (api *ExternalSigner) SignData(account accounts.Account, mimeType string, data []byte) ([]byte, error) { func (api *ExternalSigner) SignData(account accounts.Account, mimeType string, data []byte) ([]byte, error) {
var res hexutil.Bytes var res hexutil.Bytes


@ -41,12 +41,12 @@ var DefaultBaseDerivationPath = DerivationPath{0x80000000 + 44, 0x80000000 + 60,
var LegacyLedgerBaseDerivationPath = DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0} var LegacyLedgerBaseDerivationPath = DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}
// DerivationPath represents the computer friendly version of a hierarchical // DerivationPath represents the computer friendly version of a hierarchical
// deterministic wallet account derivation path. // deterministic wallet account derivaion path.
// //
// The BIP-32 spec https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki // The BIP-32 spec https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki
// defines derivation paths to be of the form: // defines derivation paths to be of the form:
// //
// m / purpose' / coin_type' / account' / change / address_index // m / purpose' / coin_type' / account' / change / address_index
// //
// The BIP-44 spec https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki // The BIP-44 spec https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki
// defines that the `purpose` be 44' (or 0x8000002C) for crypto currencies, and // defines that the `purpose` be 44' (or 0x8000002C) for crypto currencies, and
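
Illustrative aside, not part of the changeset: the hardened components in the derivation paths above are plain indices with the 0x80000000 bit set, which is what the m/44'/60'/0'/0/0 notation encodes. A small sketch that prints a five-component Ethereum path in BIP-44 notation; formatPath is a simplified stand-in for the library's own path formatting.

```go
// Each hardened component is the index with 0x80000000 added, so
// {44', 60', 0', 0, 0} renders as m/44'/60'/0'/0/0.
package main

import (
	"fmt"
	"strings"
)

func formatPath(path []uint32) string {
	var b strings.Builder
	b.WriteString("m")
	for _, component := range path {
		if component >= 0x80000000 {
			fmt.Fprintf(&b, "/%d'", component-0x80000000)
		} else {
			fmt.Fprintf(&b, "/%d", component)
		}
	}
	return b.String()
}

func main() {
	base := []uint32{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0}
	fmt.Println(formatPath(base)) // m/44'/60'/0'/0/0
}
```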


@ -27,7 +27,7 @@ import (
"sync" "sync"
"time" "time"
mapset "github.com/deckarep/golang-set/v2" mapset "github.com/deckarep/golang-set"
"github.com/ethereum/go-ethereum/accounts" "github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
@ -79,7 +79,7 @@ func newAccountCache(keydir string) (*accountCache, chan struct{}) {
keydir: keydir, keydir: keydir,
byAddr: make(map[common.Address][]accounts.Account), byAddr: make(map[common.Address][]accounts.Account),
notify: make(chan struct{}, 1), notify: make(chan struct{}, 1),
fileC: fileCache{all: mapset.NewThreadUnsafeSet[string]()}, fileC: fileCache{all: mapset.NewThreadUnsafeSet()},
} }
ac.watcher = newWatcher(ac) ac.watcher = newWatcher(ac)
return ac, ac.notify return ac, ac.notify
@ -146,14 +146,6 @@ func (ac *accountCache) deleteByFile(path string) {
} }
} }
// watcherStarted returns true if the watcher loop started running (even if it
// has since also ended).
func (ac *accountCache) watcherStarted() bool {
ac.mu.Lock()
defer ac.mu.Unlock()
return ac.watcher.running || ac.watcher.runEnded
}
func removeAccount(slice []accounts.Account, elem accounts.Account) []accounts.Account { func removeAccount(slice []accounts.Account, elem accounts.Account) []accounts.Account {
for i := range slice { for i := range slice {
if slice[i] == elem { if slice[i] == elem {
@ -283,15 +275,16 @@ func (ac *accountCache) scanAccounts() error {
// Process all the file diffs // Process all the file diffs
start := time.Now() start := time.Now()
for _, path := range creates.ToSlice() { for _, p := range creates.ToSlice() {
if a := readAccount(path); a != nil { if a := readAccount(p.(string)); a != nil {
ac.add(*a) ac.add(*a)
} }
} }
for _, path := range deletes.ToSlice() { for _, p := range deletes.ToSlice() {
ac.deleteByFile(path) ac.deleteByFile(p.(string))
} }
for _, path := range updates.ToSlice() { for _, p := range updates.ToSlice() {
path := p.(string)
ac.deleteByFile(path) ac.deleteByFile(path)
if a := readAccount(path); a != nil { if a := readAccount(path); a != nil {
ac.add(*a) ac.add(*a)


@ -18,6 +18,7 @@ package keystore
import ( import (
"fmt" "fmt"
"io/ioutil"
"math/rand" "math/rand"
"os" "os"
"path/filepath" "path/filepath"
@ -50,48 +51,16 @@ var (
} }
) )
// waitWatcherStarts waits up to 1s for the keystore watcher to start.
func waitWatcherStart(ks *KeyStore) bool {
// On systems where file watch is not supported, just return "ok".
if !ks.cache.watcher.enabled() {
return true
}
// The watcher should start, and then exit.
for t0 := time.Now(); time.Since(t0) < 1*time.Second; time.Sleep(100 * time.Millisecond) {
if ks.cache.watcherStarted() {
return true
}
}
return false
}
func waitForAccounts(wantAccounts []accounts.Account, ks *KeyStore) error {
var list []accounts.Account
for t0 := time.Now(); time.Since(t0) < 5*time.Second; time.Sleep(200 * time.Millisecond) {
list = ks.Accounts()
if reflect.DeepEqual(list, wantAccounts) {
// ks should have also received change notifications
select {
case <-ks.changes:
default:
return fmt.Errorf("wasn't notified of new accounts")
}
return nil
}
}
return fmt.Errorf("\ngot %v\nwant %v", list, wantAccounts)
}
func TestWatchNewFile(t *testing.T) { func TestWatchNewFile(t *testing.T) {
t.Parallel() t.Parallel()
dir, ks := tmpKeyStore(t, false) dir, ks := tmpKeyStore(t, false)
defer os.RemoveAll(dir)
// Ensure the watcher is started before adding any files. // Ensure the watcher is started before adding any files.
ks.Accounts() ks.Accounts()
if !waitWatcherStart(ks) { time.Sleep(1000 * time.Millisecond)
t.Fatal("keystore watcher didn't start in time")
}
// Move in the files. // Move in the files.
wantAccounts := make([]accounts.Account, len(cachetestAccounts)) wantAccounts := make([]accounts.Account, len(cachetestAccounts))
for i := range cachetestAccounts { for i := range cachetestAccounts {
@ -105,24 +74,37 @@ func TestWatchNewFile(t *testing.T) {
} }
// ks should see the accounts. // ks should see the accounts.
if err := waitForAccounts(wantAccounts, ks); err != nil { var list []accounts.Account
t.Error(err) for d := 200 * time.Millisecond; d < 5*time.Second; d *= 2 {
list = ks.Accounts()
if reflect.DeepEqual(list, wantAccounts) {
// ks should have also received change notifications
select {
case <-ks.changes:
default:
t.Fatalf("wasn't notified of new accounts")
}
return
}
time.Sleep(d)
} }
t.Errorf("got %s, want %s", spew.Sdump(list), spew.Sdump(wantAccounts))
} }
func TestWatchNoDir(t *testing.T) { func TestWatchNoDir(t *testing.T) {
t.Parallel() t.Parallel()
// Create ks but not the directory that it watches. // Create ks but not the directory that it watches.
rand.Seed(time.Now().UnixNano())
dir := filepath.Join(os.TempDir(), fmt.Sprintf("eth-keystore-watchnodir-test-%d-%d", os.Getpid(), rand.Int())) dir := filepath.Join(os.TempDir(), fmt.Sprintf("eth-keystore-watchnodir-test-%d-%d", os.Getpid(), rand.Int()))
ks := NewKeyStore(dir, LightScryptN, LightScryptP) ks := NewKeyStore(dir, LightScryptN, LightScryptP)
list := ks.Accounts() list := ks.Accounts()
if len(list) > 0 { if len(list) > 0 {
t.Error("initial account list not empty:", list) t.Error("initial account list not empty:", list)
} }
// The watcher should start, and then exit. time.Sleep(100 * time.Millisecond)
if !waitWatcherStart(ks) {
t.Fatal("keystore watcher didn't start in time")
}
// Create the directory and copy a key file into it. // Create the directory and copy a key file into it.
os.MkdirAll(dir, 0700) os.MkdirAll(dir, 0700)
defer os.RemoveAll(dir) defer os.RemoveAll(dir)
@ -315,12 +297,31 @@ func TestCacheFind(t *testing.T) {
} }
} }
func waitForAccounts(wantAccounts []accounts.Account, ks *KeyStore) error {
var list []accounts.Account
for d := 200 * time.Millisecond; d < 8*time.Second; d *= 2 {
list = ks.Accounts()
if reflect.DeepEqual(list, wantAccounts) {
// ks should have also received change notifications
select {
case <-ks.changes:
default:
return fmt.Errorf("wasn't notified of new accounts")
}
return nil
}
time.Sleep(d)
}
return fmt.Errorf("\ngot %v\nwant %v", list, wantAccounts)
}
// TestUpdatedKeyfileContents tests that updating the contents of a keystore file // TestUpdatedKeyfileContents tests that updating the contents of a keystore file
// is noticed by the watcher, and the account cache is updated accordingly // is noticed by the watcher, and the account cache is updated accordingly
func TestUpdatedKeyfileContents(t *testing.T) { func TestUpdatedKeyfileContents(t *testing.T) {
t.Parallel() t.Parallel()
// Create a temporary keystore to test with // Create a temporary kesytore to test with
rand.Seed(time.Now().UnixNano())
dir := filepath.Join(os.TempDir(), fmt.Sprintf("eth-keystore-updatedkeyfilecontents-test-%d-%d", os.Getpid(), rand.Int())) dir := filepath.Join(os.TempDir(), fmt.Sprintf("eth-keystore-updatedkeyfilecontents-test-%d-%d", os.Getpid(), rand.Int()))
ks := NewKeyStore(dir, LightScryptN, LightScryptP) ks := NewKeyStore(dir, LightScryptN, LightScryptP)
@ -328,9 +329,8 @@ func TestUpdatedKeyfileContents(t *testing.T) {
if len(list) > 0 { if len(list) > 0 {
t.Error("initial account list not empty:", list) t.Error("initial account list not empty:", list)
} }
if !waitWatcherStart(ks) { time.Sleep(100 * time.Millisecond)
t.Fatal("keystore watcher didn't start in time")
}
// Create the directory and copy a key file into it. // Create the directory and copy a key file into it.
os.MkdirAll(dir, 0700) os.MkdirAll(dir, 0700)
defer os.RemoveAll(dir) defer os.RemoveAll(dir)
@ -348,8 +348,9 @@ func TestUpdatedKeyfileContents(t *testing.T) {
t.Error(err) t.Error(err)
return return
} }
// needed so that modTime of `file` is different to its current value after forceCopyFile // needed so that modTime of `file` is different to its current value after forceCopyFile
time.Sleep(time.Second) time.Sleep(1000 * time.Millisecond)
// Now replace file contents // Now replace file contents
if err := forceCopyFile(file, cachetestAccounts[1].URL.Path); err != nil { if err := forceCopyFile(file, cachetestAccounts[1].URL.Path); err != nil {
@ -365,7 +366,7 @@ func TestUpdatedKeyfileContents(t *testing.T) {
} }
// needed so that modTime of `file` is different to its current value after forceCopyFile // needed so that modTime of `file` is different to its current value after forceCopyFile
time.Sleep(time.Second) time.Sleep(1000 * time.Millisecond)
// Now replace file contents again // Now replace file contents again
if err := forceCopyFile(file, cachetestAccounts[2].URL.Path); err != nil { if err := forceCopyFile(file, cachetestAccounts[2].URL.Path); err != nil {
@ -380,11 +381,11 @@ func TestUpdatedKeyfileContents(t *testing.T) {
return return
} }
// needed so that modTime of `file` is different to its current value after os.WriteFile // needed so that modTime of `file` is different to its current value after ioutil.WriteFile
time.Sleep(time.Second) time.Sleep(1000 * time.Millisecond)
// Now replace file contents with crap // Now replace file contents with crap
if err := os.WriteFile(file, []byte("foo"), 0600); err != nil { if err := ioutil.WriteFile(file, []byte("foo"), 0644); err != nil {
t.Fatal(err) t.Fatal(err)
return return
} }
@ -397,9 +398,9 @@ func TestUpdatedKeyfileContents(t *testing.T) {
// forceCopyFile is like cp.CopyFile, but doesn't complain if the destination exists. // forceCopyFile is like cp.CopyFile, but doesn't complain if the destination exists.
func forceCopyFile(dst, src string) error { func forceCopyFile(dst, src string) error {
data, err := os.ReadFile(src) data, err := ioutil.ReadFile(src)
if err != nil { if err != nil {
return err return err
} }
return os.WriteFile(dst, data, 0644) return ioutil.WriteFile(dst, data, 0644)
} }


@ -17,30 +17,31 @@
package keystore package keystore
import ( import (
"io/ioutil"
"os" "os"
"path/filepath" "path/filepath"
"strings" "strings"
"sync" "sync"
"time" "time"
mapset "github.com/deckarep/golang-set/v2" mapset "github.com/deckarep/golang-set"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
) )
// fileCache is a cache of files seen during scan of keystore. // fileCache is a cache of files seen during scan of keystore.
type fileCache struct { type fileCache struct {
all mapset.Set[string] // Set of all files from the keystore folder all mapset.Set // Set of all files from the keystore folder
lastMod time.Time // Last time instance when a file was modified lastMod time.Time // Last time instance when a file was modified
mu sync.Mutex mu sync.Mutex
} }
// scan performs a new scan on the given directory, compares against the already // scan performs a new scan on the given directory, compares against the already
// cached filenames, and returns file sets: creates, deletes, updates. // cached filenames, and returns file sets: creates, deletes, updates.
func (fc *fileCache) scan(keyDir string) (mapset.Set[string], mapset.Set[string], mapset.Set[string], error) { func (fc *fileCache) scan(keyDir string) (mapset.Set, mapset.Set, mapset.Set, error) {
t0 := time.Now() t0 := time.Now()
// List all the files from the keystore folder // List all the failes from the keystore folder
files, err := os.ReadDir(keyDir) files, err := ioutil.ReadDir(keyDir)
if err != nil { if err != nil {
return nil, nil, nil, err return nil, nil, nil, err
} }
@ -50,8 +51,8 @@ func (fc *fileCache) scan(keyDir string) (mapset.Set[string], mapset.Set[string]
defer fc.mu.Unlock() defer fc.mu.Unlock()
// Iterate all the files and gather their metadata // Iterate all the files and gather their metadata
all := mapset.NewThreadUnsafeSet[string]() all := mapset.NewThreadUnsafeSet()
mods := mapset.NewThreadUnsafeSet[string]() mods := mapset.NewThreadUnsafeSet()
var newLastMod time.Time var newLastMod time.Time
for _, fi := range files { for _, fi := range files {
@ -61,14 +62,10 @@ func (fc *fileCache) scan(keyDir string) (mapset.Set[string], mapset.Set[string]
log.Trace("Ignoring file on account scan", "path", path) log.Trace("Ignoring file on account scan", "path", path)
continue continue
} }
// Gather the set of all and freshly modified files // Gather the set of all and fresly modified files
all.Add(path) all.Add(path)
info, err := fi.Info() modified := fi.ModTime()
if err != nil {
return nil, nil, nil, err
}
modified := info.ModTime()
if modified.After(fc.lastMod) { if modified.After(fc.lastMod) {
mods.Add(path) mods.Add(path)
} }
@ -92,13 +89,13 @@ func (fc *fileCache) scan(keyDir string) (mapset.Set[string], mapset.Set[string]
} }
// nonKeyFile ignores editor backups, hidden files and folders/symlinks. // nonKeyFile ignores editor backups, hidden files and folders/symlinks.
func nonKeyFile(fi os.DirEntry) bool { func nonKeyFile(fi os.FileInfo) bool {
// Skip editor backups and UNIX-style hidden files. // Skip editor backups and UNIX-style hidden files.
if strings.HasSuffix(fi.Name(), "~") || strings.HasPrefix(fi.Name(), ".") { if strings.HasSuffix(fi.Name(), "~") || strings.HasPrefix(fi.Name(), ".") {
return true return true
} }
// Skip misc special files, directories (yes, symlinks too). // Skip misc special files, directories (yes, symlinks too).
if fi.IsDir() || !fi.Type().IsRegular() { if fi.IsDir() || fi.Mode()&os.ModeType != 0 {
return true return true
} }
return false return false
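
Illustrative aside, not part of the changeset: the scan above reduces to three set operations over the previous and current directory listings (creates, deletes, updates). A dependency-free sketch of the same bookkeeping with map-based sets; diffSets is a hypothetical helper, not the fileCache method.

```go
// creates = in current but not previous; deletes = in previous but not
// current; updates = previously seen files whose modification time changed.
package main

import "fmt"

func diffSets(previous, current, modified map[string]bool) (creates, deletes, updates []string) {
	for path := range current {
		if !previous[path] {
			creates = append(creates, path)
		} else if modified[path] {
			updates = append(updates, path)
		}
	}
	for path := range previous {
		if !current[path] {
			deletes = append(deletes, path)
		}
	}
	return creates, deletes, updates
}

func main() {
	previous := map[string]bool{"UTC--key1": true, "UTC--key2": true}
	current := map[string]bool{"UTC--key2": true, "UTC--key3": true}
	modified := map[string]bool{"UTC--key2": true, "UTC--key3": true}

	creates, deletes, updates := diffSets(previous, current, modified)
	fmt.Println(creates, deletes, updates) // [UTC--key3] [UTC--key1] [UTC--key2]
}
```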


@ -23,6 +23,7 @@ import (
"encoding/json" "encoding/json"
"fmt" "fmt"
"io" "io"
"io/ioutil"
"os" "os"
"path/filepath" "path/filepath"
"strings" "strings"
@ -196,7 +197,7 @@ func writeTemporaryKeyFile(file string, content []byte) (string, error) {
} }
// Atomic write: create a temporary hidden file first // Atomic write: create a temporary hidden file first
// then move it into place. TempFile assigns mode 0600. // then move it into place. TempFile assigns mode 0600.
f, err := os.CreateTemp(filepath.Dir(file), "."+filepath.Base(file)+".tmp") f, err := ioutil.TempFile(filepath.Dir(file), "."+filepath.Base(file)+".tmp")
if err != nil { if err != nil {
return "", err return "", err
} }
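
Illustrative aside, not part of the changeset: the "Atomic write" comment above refers to the create-temp-then-rename pattern. A standard-library sketch of that pattern; writeFileAtomic is a hypothetical helper that folds the write and the rename into a single call.

```go
// Write to a temporary file in the same directory, then rename it into
// place, so readers never observe a half-written key file.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func writeFileAtomic(file string, content []byte) error {
	dir := filepath.Dir(file)
	f, err := os.CreateTemp(dir, "."+filepath.Base(file)+".tmp")
	if err != nil {
		return err
	}
	if _, err := f.Write(content); err != nil {
		f.Close()
		os.Remove(f.Name())
		return err
	}
	f.Close()
	return os.Rename(f.Name(), file)
}

func main() {
	dir, _ := os.MkdirTemp("", "atomic-write-demo")
	defer os.RemoveAll(dir)

	target := filepath.Join(dir, "keyfile.json")
	if err := writeFileAtomic(target, []byte(`{"address":"0x0"}`)); err != nil {
		panic(err)
	}
	data, _ := os.ReadFile(target)
	fmt.Println(string(data))
}
```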


@ -498,14 +498,6 @@ func (ks *KeyStore) ImportPreSaleKey(keyJSON []byte, passphrase string) (account
return a, nil return a, nil
} }
// isUpdating returns whether the event notification loop is running.
// This method is mainly meant for tests.
func (ks *KeyStore) isUpdating() bool {
ks.mu.RLock()
defer ks.mu.RUnlock()
return ks.updating
}
// zeroKey zeroes a private key in memory. // zeroKey zeroes a private key in memory.
func zeroKey(k *ecdsa.PrivateKey) { func zeroKey(k *ecdsa.PrivateKey) {
b := k.D.Bits() b := k.D.Bits()


@ -17,6 +17,7 @@
package keystore package keystore
import ( import (
"io/ioutil"
"math/rand" "math/rand"
"os" "os"
"runtime" "runtime"
@ -37,6 +38,7 @@ var testSigData = make([]byte, 32)
func TestKeyStore(t *testing.T) { func TestKeyStore(t *testing.T) {
dir, ks := tmpKeyStore(t, true) dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
a, err := ks.NewAccount("foo") a, err := ks.NewAccount("foo")
if err != nil { if err != nil {
@ -70,7 +72,8 @@ func TestKeyStore(t *testing.T) {
} }
func TestSign(t *testing.T) { func TestSign(t *testing.T) {
_, ks := tmpKeyStore(t, true) dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
pass := "" // not used but required by API pass := "" // not used but required by API
a1, err := ks.NewAccount(pass) a1, err := ks.NewAccount(pass)
@ -86,7 +89,8 @@ func TestSign(t *testing.T) {
} }
func TestSignWithPassphrase(t *testing.T) { func TestSignWithPassphrase(t *testing.T) {
_, ks := tmpKeyStore(t, true) dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
pass := "passwd" pass := "passwd"
acc, err := ks.NewAccount(pass) acc, err := ks.NewAccount(pass)
@ -113,8 +117,8 @@ func TestSignWithPassphrase(t *testing.T) {
} }
func TestTimedUnlock(t *testing.T) { func TestTimedUnlock(t *testing.T) {
t.Parallel() dir, ks := tmpKeyStore(t, true)
_, ks := tmpKeyStore(t, true) defer os.RemoveAll(dir)
pass := "foo" pass := "foo"
a1, err := ks.NewAccount(pass) a1, err := ks.NewAccount(pass)
@ -148,8 +152,8 @@ func TestTimedUnlock(t *testing.T) {
} }
func TestOverrideUnlock(t *testing.T) { func TestOverrideUnlock(t *testing.T) {
t.Parallel() dir, ks := tmpKeyStore(t, false)
_, ks := tmpKeyStore(t, false) defer os.RemoveAll(dir)
pass := "foo" pass := "foo"
a1, err := ks.NewAccount(pass) a1, err := ks.NewAccount(pass)
@ -189,8 +193,8 @@ func TestOverrideUnlock(t *testing.T) {
// This test should fail under -race if signing races the expiration goroutine. // This test should fail under -race if signing races the expiration goroutine.
func TestSignRace(t *testing.T) { func TestSignRace(t *testing.T) {
t.Parallel() dir, ks := tmpKeyStore(t, false)
_, ks := tmpKeyStore(t, false) defer os.RemoveAll(dir)
// Create a test account. // Create a test account.
a1, err := ks.NewAccount("") a1, err := ks.NewAccount("")
@ -214,33 +218,20 @@ func TestSignRace(t *testing.T) {
t.Errorf("Account did not lock within the timeout") t.Errorf("Account did not lock within the timeout")
} }
// waitForKsUpdating waits until the updating-status of the ks reaches the
// desired wantStatus.
// It waits for a maximum time of maxTime, and returns false if it does not
// finish in time
func waitForKsUpdating(t *testing.T, ks *KeyStore, wantStatus bool, maxTime time.Duration) bool {
t.Helper()
// Wait max 250 ms, then return false
for t0 := time.Now(); time.Since(t0) < maxTime; {
if ks.isUpdating() == wantStatus {
return true
}
time.Sleep(25 * time.Millisecond)
}
return false
}
// Tests that the wallet notifier loop starts and stops correctly based on the // Tests that the wallet notifier loop starts and stops correctly based on the
// addition and removal of wallet event subscriptions. // addition and removal of wallet event subscriptions.
func TestWalletNotifierLifecycle(t *testing.T) { func TestWalletNotifierLifecycle(t *testing.T) {
t.Parallel() // Create a temporary kesytore to test with
// Create a temporary keystore to test with dir, ks := tmpKeyStore(t, false)
_, ks := tmpKeyStore(t, false) defer os.RemoveAll(dir)
// Ensure that the notification updater is not running yet // Ensure that the notification updater is not running yet
time.Sleep(250 * time.Millisecond) time.Sleep(250 * time.Millisecond)
ks.mu.RLock()
updating := ks.updating
ks.mu.RUnlock()
if ks.isUpdating() { if updating {
t.Errorf("wallet notifier running without subscribers") t.Errorf("wallet notifier running without subscribers")
} }
// Subscribe to the wallet feed and ensure the updater boots up // Subscribe to the wallet feed and ensure the updater boots up
@ -250,26 +241,38 @@ func TestWalletNotifierLifecycle(t *testing.T) {
for i := 0; i < len(subs); i++ { for i := 0; i < len(subs); i++ {
// Create a new subscription // Create a new subscription
subs[i] = ks.Subscribe(updates) subs[i] = ks.Subscribe(updates)
if !waitForKsUpdating(t, ks, true, 250*time.Millisecond) {
// Ensure the notifier comes online
time.Sleep(250 * time.Millisecond)
ks.mu.RLock()
updating = ks.updating
ks.mu.RUnlock()
if !updating {
t.Errorf("sub %d: wallet notifier not running after subscription", i) t.Errorf("sub %d: wallet notifier not running after subscription", i)
} }
} }
// Close all but one sub // Unsubscribe and ensure the updater terminates eventually
for i := 0; i < len(subs)-1; i++ { for i := 0; i < len(subs); i++ {
// Close an existing subscription // Close an existing subscription
subs[i].Unsubscribe() subs[i].Unsubscribe()
}
// Check that it is still running
time.Sleep(250 * time.Millisecond)
if !ks.isUpdating() { // Ensure the notifier shuts down at and only at the last close
t.Fatal("event notifier stopped prematurely") for k := 0; k < int(walletRefreshCycle/(250*time.Millisecond))+2; k++ {
} ks.mu.RLock()
// Unsubscribe the last one and ensure the updater terminates eventually. updating = ks.updating
subs[len(subs)-1].Unsubscribe() ks.mu.RUnlock()
if !waitForKsUpdating(t, ks, false, 4*time.Second) {
t.Errorf("wallet notifier didn't terminate after unsubscribe") if i < len(subs)-1 && !updating {
t.Fatalf("sub %d: event notifier stopped prematurely", i)
}
if i == len(subs)-1 && !updating {
return
}
time.Sleep(250 * time.Millisecond)
}
} }
t.Errorf("wallet notifier didn't terminate after unsubscribe")
} }
type walletEvent struct { type walletEvent struct {
@ -280,7 +283,8 @@ type walletEvent struct {
// Tests that wallet notifications and correctly fired when accounts are added // Tests that wallet notifications and correctly fired when accounts are added
// or deleted from the keystore. // or deleted from the keystore.
func TestWalletNotifications(t *testing.T) { func TestWalletNotifications(t *testing.T) {
_, ks := tmpKeyStore(t, false) dir, ks := tmpKeyStore(t, false)
defer os.RemoveAll(dir)
// Subscribe to the wallet feed and collect events. // Subscribe to the wallet feed and collect events.
var ( var (
@ -341,7 +345,8 @@ func TestWalletNotifications(t *testing.T) {
// TestImportExport tests the import functionality of a keystore. // TestImportExport tests the import functionality of a keystore.
func TestImportECDSA(t *testing.T) { func TestImportECDSA(t *testing.T) {
_, ks := tmpKeyStore(t, true) dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
key, err := crypto.GenerateKey() key, err := crypto.GenerateKey()
if err != nil { if err != nil {
t.Fatalf("failed to generate key: %v", key) t.Fatalf("failed to generate key: %v", key)
@ -359,7 +364,8 @@ func TestImportECDSA(t *testing.T) {
// TestImportECDSA tests the import and export functionality of a keystore. // TestImportECDSA tests the import and export functionality of a keystore.
func TestImportExport(t *testing.T) { func TestImportExport(t *testing.T) {
_, ks := tmpKeyStore(t, true) dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
acc, err := ks.NewAccount("old") acc, err := ks.NewAccount("old")
if err != nil { if err != nil {
t.Fatalf("failed to create account: %v", acc) t.Fatalf("failed to create account: %v", acc)
@ -368,7 +374,8 @@ func TestImportExport(t *testing.T) {
if err != nil { if err != nil {
t.Fatalf("failed to export account: %v", acc) t.Fatalf("failed to export account: %v", acc)
} }
_, ks2 := tmpKeyStore(t, true) dir2, ks2 := tmpKeyStore(t, true)
defer os.RemoveAll(dir2)
if _, err = ks2.Import(json, "old", "old"); err == nil { if _, err = ks2.Import(json, "old", "old"); err == nil {
t.Errorf("importing with invalid password succeeded") t.Errorf("importing with invalid password succeeded")
} }
@ -382,12 +389,14 @@ func TestImportExport(t *testing.T) {
if _, err = ks2.Import(json, "new", "new"); err == nil { if _, err = ks2.Import(json, "new", "new"); err == nil {
t.Errorf("importing a key twice succeeded") t.Errorf("importing a key twice succeeded")
} }
} }
// TestImportRace tests the keystore on races. // TestImportRace tests the keystore on races.
// This test should fail under -race if importing races. // This test should fail under -race if importing races.
func TestImportRace(t *testing.T) { func TestImportRace(t *testing.T) {
_, ks := tmpKeyStore(t, true) dir, ks := tmpKeyStore(t, true)
defer os.RemoveAll(dir)
acc, err := ks.NewAccount("old") acc, err := ks.NewAccount("old")
if err != nil { if err != nil {
t.Fatalf("failed to create account: %v", acc) t.Fatalf("failed to create account: %v", acc)
@ -396,7 +405,8 @@ func TestImportRace(t *testing.T) {
if err != nil { if err != nil {
t.Fatalf("failed to export account: %v", acc) t.Fatalf("failed to export account: %v", acc)
} }
_, ks2 := tmpKeyStore(t, true) dir2, ks2 := tmpKeyStore(t, true)
defer os.RemoveAll(dir2)
var atom uint32 var atom uint32
var wg sync.WaitGroup var wg sync.WaitGroup
wg.Add(2) wg.Add(2)
@ -406,6 +416,7 @@ func TestImportRace(t *testing.T) {
if _, err := ks2.Import(json, "new", "new"); err != nil { if _, err := ks2.Import(json, "new", "new"); err != nil {
atomic.AddUint32(&atom, 1) atomic.AddUint32(&atom, 1)
} }
}() }()
} }
wg.Wait() wg.Wait()
@ -451,7 +462,10 @@ func checkEvents(t *testing.T, want []walletEvent, have []walletEvent) {
} }
func tmpKeyStore(t *testing.T, encrypted bool) (string, *KeyStore) { func tmpKeyStore(t *testing.T, encrypted bool) (string, *KeyStore) {
d := t.TempDir() d, err := ioutil.TempDir("", "eth-keystore-test")
if err != nil {
t.Fatal(err)
}
newKs := NewPlaintextKeyStore newKs := NewPlaintextKeyStore
if encrypted { if encrypted {
newKs = func(kd string) *KeyStore { return NewKeyStore(kd, veryLightScryptN, veryLightScryptP) } newKs = func(kd string) *KeyStore { return NewKeyStore(kd, veryLightScryptN, veryLightScryptP) }


@ -34,6 +34,7 @@ import (
"encoding/json" "encoding/json"
"fmt" "fmt"
"io" "io"
"io/ioutil"
"os" "os"
"path/filepath" "path/filepath"
@ -81,7 +82,7 @@ type keyStorePassphrase struct {
func (ks keyStorePassphrase) GetKey(addr common.Address, filename, auth string) (*Key, error) { func (ks keyStorePassphrase) GetKey(addr common.Address, filename, auth string) (*Key, error) {
// Load the key from the keystore and decrypt its contents // Load the key from the keystore and decrypt its contents
keyjson, err := os.ReadFile(filename) keyjson, err := ioutil.ReadFile(filename)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -138,6 +139,7 @@ func (ks keyStorePassphrase) JoinPath(filename string) string {
// Encryptdata encrypts the data given as 'data' with the password 'auth'. // Encryptdata encrypts the data given as 'data' with the password 'auth'.
func EncryptDataV3(data, auth []byte, scryptN, scryptP int) (CryptoJSON, error) { func EncryptDataV3(data, auth []byte, scryptN, scryptP int) (CryptoJSON, error) {
salt := make([]byte, 32) salt := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, salt); err != nil { if _, err := io.ReadFull(rand.Reader, salt); err != nil {
panic("reading from crypto/rand failed: " + err.Error()) panic("reading from crypto/rand failed: " + err.Error())
@ -340,6 +342,7 @@ func getKDFKey(cryptoJSON CryptoJSON, auth string) ([]byte, error) {
r := ensureInt(cryptoJSON.KDFParams["r"]) r := ensureInt(cryptoJSON.KDFParams["r"])
p := ensureInt(cryptoJSON.KDFParams["p"]) p := ensureInt(cryptoJSON.KDFParams["p"])
return scrypt.Key(authArray, salt, n, r, p, dkLen) return scrypt.Key(authArray, salt, n, r, p, dkLen)
} else if cryptoJSON.KDF == "pbkdf2" { } else if cryptoJSON.KDF == "pbkdf2" {
c := ensureInt(cryptoJSON.KDFParams["c"]) c := ensureInt(cryptoJSON.KDFParams["c"])
prf := cryptoJSON.KDFParams["prf"].(string) prf := cryptoJSON.KDFParams["prf"].(string)
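
Illustrative aside, not part of the changeset: the scrypt branch above hands the key file's kdfparams straight to scrypt.Key. A sketch of that derivation with golang.org/x/crypto/scrypt; the N/r/p/dklen values are illustrative light-client-style parameters, not read from a real key file.

```go
// Derive dkLen bytes from a password and salt with scrypt, as the
// keystore's getKDFKey does for the "scrypt" KDF.
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/scrypt"
)

func main() {
	salt := make([]byte, 32)
	if _, err := rand.Read(salt); err != nil {
		panic(err)
	}

	// Illustrative parameters; real values come from crypto.kdfparams.
	n, r, p, dkLen := 4096, 8, 6, 32

	derived, err := scrypt.Key([]byte("foo"), salt, n, r, p, dkLen)
	if err != nil {
		panic(err)
	}
	fmt.Printf("derived key: %x\n", derived)
}
```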


@ -17,7 +17,7 @@
package keystore package keystore
import ( import (
"os" "io/ioutil"
"testing" "testing"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
@ -30,7 +30,7 @@ const (
// Tests that a json key file can be decrypted and encrypted in multiple rounds. // Tests that a json key file can be decrypted and encrypted in multiple rounds.
func TestKeyEncryptDecrypt(t *testing.T) { func TestKeyEncryptDecrypt(t *testing.T) {
keyjson, err := os.ReadFile("testdata/very-light-scrypt.json") keyjson, err := ioutil.ReadFile("testdata/very-light-scrypt.json")
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -52,7 +52,7 @@ func TestKeyEncryptDecrypt(t *testing.T) {
t.Errorf("test %d: key address mismatch: have %x, want %x", i, key.Address, address) t.Errorf("test %d: key address mismatch: have %x, want %x", i, key.Address, address)
} }
// Recrypt with a new password and start over // Recrypt with a new password and start over
password += "new data appended" // nolint: gosec password += "new data appended"
if keyjson, err = EncryptKey(key, password, veryLightScryptN, veryLightScryptP); err != nil { if keyjson, err = EncryptKey(key, password, veryLightScryptN, veryLightScryptP); err != nil {
t.Errorf("test %d: failed to recrypt key %v", i, err) t.Errorf("test %d: failed to recrypt key %v", i, err)
} }


@ -20,6 +20,8 @@ import (
"crypto/rand" "crypto/rand"
"encoding/hex" "encoding/hex"
"fmt" "fmt"
"io/ioutil"
"os"
"path/filepath" "path/filepath"
"reflect" "reflect"
"strings" "strings"
@ -30,7 +32,10 @@ import (
) )
func tmpKeyStoreIface(t *testing.T, encrypted bool) (dir string, ks keyStore) { func tmpKeyStoreIface(t *testing.T, encrypted bool) (dir string, ks keyStore) {
d := t.TempDir() d, err := ioutil.TempDir("", "geth-keystore-test")
if err != nil {
t.Fatal(err)
}
if encrypted { if encrypted {
ks = &keyStorePassphrase{d, veryLightScryptN, veryLightScryptP, true} ks = &keyStorePassphrase{d, veryLightScryptN, veryLightScryptP, true}
} else { } else {
@ -40,7 +45,8 @@ func tmpKeyStoreIface(t *testing.T, encrypted bool) (dir string, ks keyStore) {
} }
func TestKeyStorePlain(t *testing.T) { func TestKeyStorePlain(t *testing.T) {
_, ks := tmpKeyStoreIface(t, false) dir, ks := tmpKeyStoreIface(t, false)
defer os.RemoveAll(dir)
pass := "" // not used but required by API pass := "" // not used but required by API
k1, account, err := storeNewKey(ks, rand.Reader, pass) k1, account, err := storeNewKey(ks, rand.Reader, pass)
@ -60,7 +66,8 @@ func TestKeyStorePlain(t *testing.T) {
} }
func TestKeyStorePassphrase(t *testing.T) { func TestKeyStorePassphrase(t *testing.T) {
_, ks := tmpKeyStoreIface(t, true) dir, ks := tmpKeyStoreIface(t, true)
defer os.RemoveAll(dir)
pass := "foo" pass := "foo"
k1, account, err := storeNewKey(ks, rand.Reader, pass) k1, account, err := storeNewKey(ks, rand.Reader, pass)
@ -80,7 +87,8 @@ func TestKeyStorePassphrase(t *testing.T) {
} }
func TestKeyStorePassphraseDecryptionFail(t *testing.T) { func TestKeyStorePassphraseDecryptionFail(t *testing.T) {
_, ks := tmpKeyStoreIface(t, true) dir, ks := tmpKeyStoreIface(t, true)
defer os.RemoveAll(dir)
pass := "foo" pass := "foo"
k1, account, err := storeNewKey(ks, rand.Reader, pass) k1, account, err := storeNewKey(ks, rand.Reader, pass)
@ -94,6 +102,7 @@ func TestKeyStorePassphraseDecryptionFail(t *testing.T) {
func TestImportPreSaleKey(t *testing.T) { func TestImportPreSaleKey(t *testing.T) {
dir, ks := tmpKeyStoreIface(t, true) dir, ks := tmpKeyStoreIface(t, true)
defer os.RemoveAll(dir)
// file content of a presale key file generated with: // file content of a presale key file generated with:
// python pyethsaletool.py genwallet // python pyethsaletool.py genwallet


@ -23,27 +23,25 @@ import (
"time" "time"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
"github.com/fsnotify/fsnotify" "github.com/rjeczalik/notify"
) )
type watcher struct { type watcher struct {
ac *accountCache ac *accountCache
running bool // set to true when runloop begins starting bool
runEnded bool // set to true when runloop ends running bool
starting bool // set to true prior to runloop starting ev chan notify.EventInfo
quit chan struct{} quit chan struct{}
} }
func newWatcher(ac *accountCache) *watcher { func newWatcher(ac *accountCache) *watcher {
return &watcher{ return &watcher{
ac: ac, ac: ac,
ev: make(chan notify.EventInfo, 10),
quit: make(chan struct{}), quit: make(chan struct{}),
} }
} }
// enabled returns false on systems not supported.
func (*watcher) enabled() bool { return true }
// starts the watcher loop in the background. // starts the watcher loop in the background.
// Start a watcher in the background if that's not already in progress. // Start a watcher in the background if that's not already in progress.
// The caller must hold w.ac.mu. // The caller must hold w.ac.mu.
@ -64,24 +62,16 @@ func (w *watcher) loop() {
w.ac.mu.Lock() w.ac.mu.Lock()
w.running = false w.running = false
w.starting = false w.starting = false
w.runEnded = true
w.ac.mu.Unlock() w.ac.mu.Unlock()
}() }()
logger := log.New("path", w.ac.keydir) logger := log.New("path", w.ac.keydir)
// Create new watcher. if err := notify.Watch(w.ac.keydir, w.ev, notify.All); err != nil {
watcher, err := fsnotify.NewWatcher() logger.Trace("Failed to watch keystore folder", "err", err)
if err != nil {
log.Error("Failed to start filesystem watcher", "err", err)
return return
} }
defer watcher.Close() defer notify.Stop(w.ev)
if err := watcher.Add(w.ac.keydir); err != nil { logger.Trace("Started watching keystore folder")
logger.Warn("Failed to watch keystore folder", "err", err)
return
}
logger.Trace("Started watching keystore folder", "folder", w.ac.keydir)
defer logger.Trace("Stopped watching keystore folder") defer logger.Trace("Stopped watching keystore folder")
w.ac.mu.Lock() w.ac.mu.Lock()
@ -105,24 +95,12 @@ func (w *watcher) loop() {
select { select {
case <-w.quit: case <-w.quit:
return return
case _, ok := <-watcher.Events: case <-w.ev:
if !ok {
return
}
// Trigger the scan (with delay), if not already triggered // Trigger the scan (with delay), if not already triggered
if !rescanTriggered { if !rescanTriggered {
debounce.Reset(debounceDuration) debounce.Reset(debounceDuration)
rescanTriggered = true rescanTriggered = true
} }
// The fsnotify library does provide more granular event-info, it
// would be possible to refresh individual affected files instead
// of scheduling a full rescan. For most cases though, the
// full rescan is quick and obviously simplest.
case err, ok := <-watcher.Errors:
if !ok {
return
}
log.Info("Filsystem watcher error", "err", err)
case <-debounce.C: case <-debounce.C:
w.ac.scanAccounts() w.ac.scanAccounts()
rescanTriggered = false rescanTriggered = false
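
Illustrative aside, not part of the changeset: the loop above coalesces bursts of filesystem events into one rescan by arming a timer on the first event of a burst and only scanning when the timer fires. A dependency-free sketch of that debounce shape, with a plain channel standing in for the fsnotify/notify event stream.

```go
// Bursts of events arm the timer once; the expensive rescan only runs
// when the timer fires, and a later burst re-arms it again.
package main

import (
	"fmt"
	"time"
)

func main() {
	const debounceDuration = 500 * time.Millisecond

	events := make(chan string, 10)
	quit := make(chan struct{})

	go func() {
		// Simulate a burst of file events, then stop.
		for i := 0; i < 3; i++ {
			events <- fmt.Sprintf("event-%d", i)
			time.Sleep(50 * time.Millisecond)
		}
		time.Sleep(time.Second)
		close(quit)
	}()

	// Start with a stopped, drained timer, as the watcher loop does.
	debounce := time.NewTimer(0)
	if !debounce.Stop() {
		<-debounce.C
	}
	defer debounce.Stop()

	rescanTriggered := false
	for {
		select {
		case <-quit:
			return
		case ev := <-events:
			fmt.Println("got", ev)
			if !rescanTriggered {
				debounce.Reset(debounceDuration)
				rescanTriggered = true
			}
		case <-debounce.C:
			fmt.Println("rescanning keystore folder (once per burst)")
			rescanTriggered = false
		}
	}
}
```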


@ -22,14 +22,8 @@
package keystore package keystore
type watcher struct { type watcher struct{ running bool }
running bool
runEnded bool
}
func newWatcher(*accountCache) *watcher { return new(watcher) } func newWatcher(*accountCache) *watcher { return new(watcher) }
func (*watcher) start() {} func (*watcher) start() {}
func (*watcher) close() {} func (*watcher) close() {}
// enabled returns false on systems not supported.
func (*watcher) enabled() bool { return false }


@ -257,7 +257,7 @@ func merge(slice []Wallet, wallets ...Wallet) []Wallet {
return slice return slice
} }
// drop is the counterpart of merge, which looks up wallets from within the sorted // drop is the couterpart of merge, which looks up wallets from within the sorted
// cache and removes the ones specified. // cache and removes the ones specified.
func drop(slice []Wallet, wallets ...Wallet) []Wallet { func drop(slice []Wallet, wallets ...Wallet) []Wallet {
for _, wallet := range wallets { for _, wallet := range wallets {


@ -34,7 +34,7 @@ package scwallet
import ( import (
"encoding/json" "encoding/json"
"io" "io/ioutil"
"os" "os"
"path/filepath" "path/filepath"
"sort" "sort"
@ -96,7 +96,7 @@ func (hub *Hub) readPairings() error {
return err return err
} }
pairingData, err := io.ReadAll(pairingFile) pairingData, err := ioutil.ReadAll(pairingFile)
if err != nil { if err != nil {
return err return err
} }

View File

@ -178,7 +178,7 @@ func (s *SecureChannelSession) mutuallyAuthenticate() error {
return err return err
} }
if response.Sw1 != 0x90 || response.Sw2 != 0x00 { if response.Sw1 != 0x90 || response.Sw2 != 0x00 {
return fmt.Errorf("got unexpected response from MUTUALLY_AUTHENTICATE: %#x%x", response.Sw1, response.Sw2) return fmt.Errorf("got unexpected response from MUTUALLY_AUTHENTICATE: 0x%x%x", response.Sw1, response.Sw2)
} }
if len(response.Data) != scSecretLength { if len(response.Data) != scSecretLength {
@ -261,7 +261,7 @@ func (s *SecureChannelSession) transmitEncrypted(cla, ins, p1, p2 byte, data []b
rapdu.deserialize(plainData) rapdu.deserialize(plainData)
if rapdu.Sw1 != sw1Ok { if rapdu.Sw1 != sw1Ok {
return nil, fmt.Errorf("unexpected response status Cla=%#x, Ins=%#x, Sw=%#x%x", cla, ins, rapdu.Sw1, rapdu.Sw2) return nil, fmt.Errorf("unexpected response status Cla=0x%x, Ins=0x%x, Sw=0x%x%x", cla, ins, rapdu.Sw1, rapdu.Sw2)
} }
return rapdu, nil return rapdu, nil


@ -99,8 +99,8 @@ const (
P1DeriveKeyFromCurrent = uint8(0x10) P1DeriveKeyFromCurrent = uint8(0x10)
statusP1WalletStatus = uint8(0x00) statusP1WalletStatus = uint8(0x00)
statusP1Path = uint8(0x01) statusP1Path = uint8(0x01)
signP1PrecomputedHash = uint8(0x00) signP1PrecomputedHash = uint8(0x01)
signP2OnlyBlock = uint8(0x00) signP2OnlyBlock = uint8(0x81)
exportP1Any = uint8(0x00) exportP1Any = uint8(0x00)
exportP2Pubkey = uint8(0x01) exportP2Pubkey = uint8(0x01)
) )
@ -167,7 +167,7 @@ func transmit(card *pcsc.Card, command *commandAPDU) (*responseAPDU, error) {
} }
if response.Sw1 != sw1Ok { if response.Sw1 != sw1Ok {
return nil, fmt.Errorf("unexpected insecure response status Cla=%#x, Ins=%#x, Sw=%#x%x", command.Cla, command.Ins, response.Sw1, response.Sw2) return nil, fmt.Errorf("unexpected insecure response status Cla=0x%x, Ins=0x%x, Sw=0x%x%x", command.Cla, command.Ins, response.Sw1, response.Sw2)
} }
return response, nil return response, nil
@ -879,7 +879,6 @@ func (s *Session) walletStatus() (*walletStatus, error) {
} }
// derivationPath fetches the wallet's current derivation path from the card. // derivationPath fetches the wallet's current derivation path from the card.
//
//lint:ignore U1000 needs to be added to the console interface //lint:ignore U1000 needs to be added to the console interface
func (s *Session) derivationPath() (accounts.DerivationPath, error) { func (s *Session) derivationPath() (accounts.DerivationPath, error) {
response, err := s.Channel.transmitEncrypted(claSCWallet, insStatus, statusP1Path, 0, nil) response, err := s.Channel.transmitEncrypted(claSCWallet, insStatus, statusP1Path, 0, nil)
@ -995,7 +994,6 @@ func (s *Session) derive(path accounts.DerivationPath) (accounts.Account, error)
} }
// keyExport contains information on an exported keypair. // keyExport contains information on an exported keypair.
//
//lint:ignore U1000 needs to be added to the console interface //lint:ignore U1000 needs to be added to the console interface
type keyExport struct { type keyExport struct {
PublicKey []byte `asn1:"tag:0"` PublicKey []byte `asn1:"tag:0"`
@ -1003,7 +1001,6 @@ type keyExport struct {
} }
// publicKey returns the public key for the current derivation path. // publicKey returns the public key for the current derivation path.
//
//lint:ignore U1000 needs to be added to the console interface //lint:ignore U1000 needs to be added to the console interface
func (s *Session) publicKey() ([]byte, error) { func (s *Session) publicKey() ([]byte, error) {
response, err := s.Channel.transmitEncrypted(claSCWallet, insExportKey, exportP1Any, exportP2Pubkey, nil) response, err := s.Channel.transmitEncrypted(claSCWallet, insExportKey, exportP1Any, exportP2Pubkey, nil)


@ -92,9 +92,10 @@ func (u *URL) UnmarshalJSON(input []byte) error {
// Cmp compares x and y and returns: // Cmp compares x and y and returns:
// //
// -1 if x < y // -1 if x < y
// 0 if x == y // 0 if x == y
// +1 if x > y // +1 if x > y
//
func (u URL) Cmp(url URL) int { func (u URL) Cmp(url URL) int {
if u.Scheme == url.Scheme { if u.Scheme == url.Scheme {
return strings.Compare(u.Path, url.Path) return strings.Compare(u.Path, url.Path)
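For illustration, the ordering defined above compares schemes first; only when the schemes match do the paths decide. A minimal sketch (field values made up):

a := URL{Scheme: "keycard", Path: "aa"}
b := URL{Scheme: "keycard", Path: "ab"}
_ = a.Cmp(b) // -1: equal schemes, so strings.Compare on the paths decides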


@ -32,10 +32,9 @@ func TestURLParsing(t *testing.T) {
t.Errorf("expected: %v, got: %v", "ethereum.org", url.Path) t.Errorf("expected: %v, got: %v", "ethereum.org", url.Path)
} }
for _, u := range []string{"ethereum.org", ""} { _, err = parseURL("ethereum.org")
if _, err = parseURL(u); err == nil { if err == nil {
t.Errorf("input %v, expected err, got: nil", u) t.Error("expected err, got: nil")
}
} }
} }


@ -71,28 +71,18 @@ type Hub struct {
// NewLedgerHub creates a new hardware wallet manager for Ledger devices. // NewLedgerHub creates a new hardware wallet manager for Ledger devices.
func NewLedgerHub() (*Hub, error) { func NewLedgerHub() (*Hub, error) {
return newHub(LedgerScheme, 0x2c97, []uint16{ return newHub(LedgerScheme, 0x2c97, []uint16{
// Device definitions taken from
// https://github.com/LedgerHQ/ledger-live/blob/38012bc8899e0f07149ea9cfe7e64b2c146bc92b/libs/ledgerjs/packages/devices/src/index.ts
// Original product IDs // Original product IDs
0x0000, /* Ledger Blue */ 0x0000, /* Ledger Blue */
0x0001, /* Ledger Nano S */ 0x0001, /* Ledger Nano S */
0x0004, /* Ledger Nano X */ 0x0004, /* Ledger Nano X */
0x0005, /* Ledger Nano S Plus */
0x0006, /* Ledger Nano FTS */
// Upcoming product IDs: https://www.ledger.com/2019/05/17/windows-10-update-sunsetting-u2f-tunnel-transport-for-ledger-devices/
0x0015, /* HID + U2F + WebUSB Ledger Blue */ 0x0015, /* HID + U2F + WebUSB Ledger Blue */
0x1015, /* HID + U2F + WebUSB Ledger Nano S */ 0x1015, /* HID + U2F + WebUSB Ledger Nano S */
0x4015, /* HID + U2F + WebUSB Ledger Nano X */ 0x4015, /* HID + U2F + WebUSB Ledger Nano X */
0x5015, /* HID + U2F + WebUSB Ledger Nano S Plus */
0x6015, /* HID + U2F + WebUSB Ledger Nano FTS */
0x0011, /* HID + WebUSB Ledger Blue */ 0x0011, /* HID + WebUSB Ledger Blue */
0x1011, /* HID + WebUSB Ledger Nano S */ 0x1011, /* HID + WebUSB Ledger Nano S */
0x4011, /* HID + WebUSB Ledger Nano X */ 0x4011, /* HID + WebUSB Ledger Nano X */
0x5011, /* HID + WebUSB Ledger Nano S Plus */
0x6011, /* HID + WebUSB Ledger Nano FTS */
}, 0xffa0, 0, newLedgerDriver) }, 0xffa0, 0, newLedgerDriver)
} }


@ -59,8 +59,6 @@ const (
ledgerP1InitTransactionData ledgerParam1 = 0x00 // First transaction data block for signing ledgerP1InitTransactionData ledgerParam1 = 0x00 // First transaction data block for signing
ledgerP1ContTransactionData ledgerParam1 = 0x80 // Subsequent transaction data block for signing ledgerP1ContTransactionData ledgerParam1 = 0x80 // Subsequent transaction data block for signing
ledgerP2DiscardAddressChainCode ledgerParam2 = 0x00 // Do not return the chain code along with the address ledgerP2DiscardAddressChainCode ledgerParam2 = 0x00 // Do not return the chain code along with the address
ledgerEip155Size int = 3 // Size of the EIP-155 chain_id,r,s in unsigned transactions
) )
// errLedgerReplyInvalidHeader is the error message returned by a Ledger data exchange // errLedgerReplyInvalidHeader is the error message returned by a Ledger data exchange
@ -197,18 +195,18 @@ func (w *ledgerDriver) SignTypedMessage(path accounts.DerivationPath, domainHash
// //
// The version retrieval protocol is defined as follows: // The version retrieval protocol is defined as follows:
// //
// CLA | INS | P1 | P2 | Lc | Le // CLA | INS | P1 | P2 | Lc | Le
// ----+-----+----+----+----+--- // ----+-----+----+----+----+---
// E0 | 06 | 00 | 00 | 00 | 04 // E0 | 06 | 00 | 00 | 00 | 04
// //
// With no input data, and the output data being: // With no input data, and the output data being:
// //
// Description | Length // Description | Length
// ---------------------------------------------------+-------- // ---------------------------------------------------+--------
// Flags 01: arbitrary data signature enabled by user | 1 byte // Flags 01: arbitrary data signature enabled by user | 1 byte
// Application major version | 1 byte // Application major version | 1 byte
// Application minor version | 1 byte // Application minor version | 1 byte
// Application patch version | 1 byte // Application patch version | 1 byte
func (w *ledgerDriver) ledgerVersion() ([3]byte, error) { func (w *ledgerDriver) ledgerVersion() ([3]byte, error) {
// Send the request and wait for the response // Send the request and wait for the response
reply, err := w.ledgerExchange(ledgerOpGetConfiguration, 0, 0, nil) reply, err := w.ledgerExchange(ledgerOpGetConfiguration, 0, 0, nil)
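Given the reply layout tabulated above (one flags byte followed by the three version bytes), reducing the 4-byte GET CONFIGURATION response to the returned [3]byte version can be sketched as follows (illustrative, not necessarily the exact in-tree code):

if len(reply) != 4 { // flags + major + minor + patch
	return [3]byte{}, errors.New("unexpected GET CONFIGURATION reply length")
}
var version [3]byte
copy(version[:], reply[1:]) // drop the flags byte, keep major/minor/patch
return version, nil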
@ -229,32 +227,32 @@ func (w *ledgerDriver) ledgerVersion() ([3]byte, error) {
// //
// The address derivation protocol is defined as follows: // The address derivation protocol is defined as follows:
// //
// CLA | INS | P1 | P2 | Lc | Le // CLA | INS | P1 | P2 | Lc | Le
// ----+-----+----+----+-----+--- // ----+-----+----+----+-----+---
// E0 | 02 | 00 return address // E0 | 02 | 00 return address
// 01 display address and confirm before returning // 01 display address and confirm before returning
// | 00: do not return the chain code // | 00: do not return the chain code
// | 01: return the chain code // | 01: return the chain code
// | var | 00 // | var | 00
// //
// Where the input data is: // Where the input data is:
// //
// Description | Length // Description | Length
// -------------------------------------------------+-------- // -------------------------------------------------+--------
// Number of BIP 32 derivations to perform (max 10) | 1 byte // Number of BIP 32 derivations to perform (max 10) | 1 byte
// First derivation index (big endian) | 4 bytes // First derivation index (big endian) | 4 bytes
// ... | 4 bytes // ... | 4 bytes
// Last derivation index (big endian) | 4 bytes // Last derivation index (big endian) | 4 bytes
// //
// And the output data is: // And the output data is:
// //
// Description | Length // Description | Length
// ------------------------+------------------- // ------------------------+-------------------
// Public Key length | 1 byte // Public Key length | 1 byte
// Uncompressed Public Key | arbitrary // Uncompressed Public Key | arbitrary
// Ethereum address length | 1 byte // Ethereum address length | 1 byte
// Ethereum address | 40 bytes hex ascii // Ethereum address | 40 bytes hex ascii
// Chain code if requested | 32 bytes // Chain code if requested | 32 bytes
func (w *ledgerDriver) ledgerDerive(derivationPath []uint32) (common.Address, error) { func (w *ledgerDriver) ledgerDerive(derivationPath []uint32) (common.Address, error) {
// Flatten the derivation path into the Ledger request // Flatten the derivation path into the Ledger request
path := make([]byte, 1+4*len(derivationPath)) path := make([]byte, 1+4*len(derivationPath))
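Per the input table above, the request starts with a count byte followed by each derivation index as a big-endian uint32. A minimal sketch of that flattening (assumes encoding/binary):

path := make([]byte, 1+4*len(derivationPath))
path[0] = byte(len(derivationPath))
for i, component := range derivationPath {
	binary.BigEndian.PutUint32(path[1+4*i:], component)
}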
@ -292,35 +290,35 @@ func (w *ledgerDriver) ledgerDerive(derivationPath []uint32) (common.Address, er
// //
// The transaction signing protocol is defined as follows: // The transaction signing protocol is defined as follows:
// //
// CLA | INS | P1 | P2 | Lc | Le // CLA | INS | P1 | P2 | Lc | Le
// ----+-----+----+----+-----+--- // ----+-----+----+----+-----+---
// E0 | 04 | 00: first transaction data block // E0 | 04 | 00: first transaction data block
// 80: subsequent transaction data block // 80: subsequent transaction data block
// | 00 | variable | variable // | 00 | variable | variable
// //
// Where the input for the first transaction block (first 255 bytes) is: // Where the input for the first transaction block (first 255 bytes) is:
// //
// Description | Length // Description | Length
// -------------------------------------------------+---------- // -------------------------------------------------+----------
// Number of BIP 32 derivations to perform (max 10) | 1 byte // Number of BIP 32 derivations to perform (max 10) | 1 byte
// First derivation index (big endian) | 4 bytes // First derivation index (big endian) | 4 bytes
// ... | 4 bytes // ... | 4 bytes
// Last derivation index (big endian) | 4 bytes // Last derivation index (big endian) | 4 bytes
// RLP transaction chunk | arbitrary // RLP transaction chunk | arbitrary
// //
// And the input for subsequent transaction blocks (first 255 bytes) are: // And the input for subsequent transaction blocks (first 255 bytes) are:
// //
// Description | Length // Description | Length
// ----------------------+---------- // ----------------------+----------
// RLP transaction chunk | arbitrary // RLP transaction chunk | arbitrary
// //
// And the output data is: // And the output data is:
// //
// Description | Length // Description | Length
// ------------+--------- // ------------+---------
// signature V | 1 byte // signature V | 1 byte
// signature R | 32 bytes // signature R | 32 bytes
// signature S | 32 bytes // signature S | 32 bytes
func (w *ledgerDriver) ledgerSign(derivationPath []uint32, tx *types.Transaction, chainID *big.Int) (common.Address, *types.Transaction, error) { func (w *ledgerDriver) ledgerSign(derivationPath []uint32, tx *types.Transaction, chainID *big.Int) (common.Address, *types.Transaction, error) {
// Flatten the derivation path into the Ledger request // Flatten the derivation path into the Ledger request
path := make([]byte, 1+4*len(derivationPath)) path := make([]byte, 1+4*len(derivationPath))
@ -349,15 +347,9 @@ func (w *ledgerDriver) ledgerSign(derivationPath []uint32, tx *types.Transaction
op = ledgerP1InitTransactionData op = ledgerP1InitTransactionData
reply []byte reply []byte
) )
// Chunk size selection to mitigate an underlying RLP deserialization issue on the ledger app.
// https://github.com/LedgerHQ/app-ethereum/issues/409
chunk := 255
for ; len(payload)%chunk <= ledgerEip155Size; chunk-- {
}
for len(payload) > 0 { for len(payload) > 0 {
// Calculate the size of the next data chunk // Calculate the size of the next data chunk
chunk := 255
if chunk > len(payload) { if chunk > len(payload) {
chunk = len(payload) chunk = len(payload)
} }
@ -400,28 +392,30 @@ func (w *ledgerDriver) ledgerSign(derivationPath []uint32, tx *types.Transaction
// //
// The signing protocol is defined as follows: // The signing protocol is defined as follows:
// //
// CLA | INS | P1 | P2 | Lc | Le // CLA | INS | P1 | P2 | Lc | Le
// ----+-----+----+-----------------------------+-----+--- // ----+-----+----+-----------------------------+-----+---
// E0 | 0C | 00 | implementation version : 00 | variable | variable // E0 | 0C | 00 | implementation version : 00 | variable | variable
// //
// Where the input is: // Where the input is:
// //
// Description | Length // Description | Length
// -------------------------------------------------+---------- // -------------------------------------------------+----------
// Number of BIP 32 derivations to perform (max 10) | 1 byte // Number of BIP 32 derivations to perform (max 10) | 1 byte
// First derivation index (big endian) | 4 bytes // First derivation index (big endian) | 4 bytes
// ... | 4 bytes // ... | 4 bytes
// Last derivation index (big endian) | 4 bytes // Last derivation index (big endian) | 4 bytes
// domain hash | 32 bytes // domain hash | 32 bytes
// message hash | 32 bytes // message hash | 32 bytes
//
//
// //
// And the output data is: // And the output data is:
// //
// Description | Length // Description | Length
// ------------+--------- // ------------+---------
// signature V | 1 byte // signature V | 1 byte
// signature R | 32 bytes // signature R | 32 bytes
// signature S | 32 bytes // signature S | 32 bytes
func (w *ledgerDriver) ledgerSignTypedMessage(derivationPath []uint32, domainHash []byte, messageHash []byte) ([]byte, error) { func (w *ledgerDriver) ledgerSignTypedMessage(derivationPath []uint32, domainHash []byte, messageHash []byte) ([]byte, error) {
// Flatten the derivation path into the Ledger request // Flatten the derivation path into the Ledger request
path := make([]byte, 1+4*len(derivationPath)) path := make([]byte, 1+4*len(derivationPath))
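Per the input table above, the request body is the flattened derivation path followed by the 32-byte domain hash and the 32-byte message hash, sent in one exchange with P1 = 00 and P2 = 00. A hedged sketch (the opcode constant name is illustrative):

payload := append(path, domainHash...)
payload = append(payload, messageHash...)
reply, err := w.ledgerExchange(ledgerOpSignTypedMessage, 0, 0, payload)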
@ -460,12 +454,12 @@ func (w *ledgerDriver) ledgerSignTypedMessage(derivationPath []uint32, domainHas
// //
// The common transport header is defined as follows: // The common transport header is defined as follows:
// //
// Description | Length // Description | Length
// --------------------------------------+---------- // --------------------------------------+----------
// Communication channel ID (big endian) | 2 bytes // Communication channel ID (big endian) | 2 bytes
// Command tag | 1 byte // Command tag | 1 byte
// Packet sequence index (big endian) | 2 bytes // Packet sequence index (big endian) | 2 bytes
// Payload | arbitrary // Payload | arbitrary
// //
// The Communication channel ID allows commands multiplexing over the same // The Communication channel ID allows commands multiplexing over the same
// physical link. It is not used for the time being, and should be set to 0101 // physical link. It is not used for the time being, and should be set to 0101
@ -479,15 +473,15 @@ func (w *ledgerDriver) ledgerSignTypedMessage(derivationPath []uint32, domainHas
// //
// APDU Command payloads are encoded as follows: // APDU Command payloads are encoded as follows:
// //
// Description | Length // Description | Length
// ----------------------------------- // -----------------------------------
// APDU length (big endian) | 2 bytes // APDU length (big endian) | 2 bytes
// APDU CLA | 1 byte // APDU CLA | 1 byte
// APDU INS | 1 byte // APDU INS | 1 byte
// APDU P1 | 1 byte // APDU P1 | 1 byte
// APDU P2 | 1 byte // APDU P2 | 1 byte
// APDU length | 1 byte // APDU length | 1 byte
// Optional APDU data | arbitrary // Optional APDU data | arbitrary
func (w *ledgerDriver) ledgerExchange(opcode ledgerOpcode, p1 ledgerParam1, p2 ledgerParam2, data []byte) ([]byte, error) { func (w *ledgerDriver) ledgerExchange(opcode ledgerOpcode, p1 ledgerParam1, p2 ledgerParam2, data []byte) ([]byte, error) {
// Construct the message payload, possibly split into multiple chunks // Construct the message payload, possibly split into multiple chunks
apdu := make([]byte, 2, 7+len(data)) apdu := make([]byte, 2, 7+len(data))
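A sketch of how the construction started on the line above continues, following the payload table (CLA E0 as in the instruction tables; illustrative):

binary.BigEndian.PutUint16(apdu, uint16(5+len(data))) // 2-byte APDU length: CLA+INS+P1+P2+Lc plus data
apdu = append(apdu, 0xe0, byte(opcode), byte(p1), byte(p2), byte(len(data)))
apdu = append(apdu, data...)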


@ -84,15 +84,15 @@ func (w *trezorDriver) Status() (string, error) {
// Open implements usbwallet.driver, attempting to initialize the connection to // Open implements usbwallet.driver, attempting to initialize the connection to
// the Trezor hardware wallet. Initializing the Trezor is a two or three phase operation: // the Trezor hardware wallet. Initializing the Trezor is a two or three phase operation:
// - The first phase is to initialize the connection and read the wallet's // * The first phase is to initialize the connection and read the wallet's
// features. This phase is invoked if the provided passphrase is empty. The // features. This phase is invoked if the provided passphrase is empty. The
// device will display the pinpad as a result and will return an appropriate // device will display the pinpad as a result and will return an appropriate
// error to notify the user that a second open phase is needed. // error to notify the user that a second open phase is needed.
// - The second phase is to unlock access to the Trezor, which is done by the // * The second phase is to unlock access to the Trezor, which is done by the
// user actually providing a passphrase mapping a keyboard keypad to the pin // user actually providing a passphrase mapping a keyboard keypad to the pin
// number of the user (shuffled according to the pinpad displayed). // number of the user (shuffled according to the pinpad displayed).
// - If needed the device will ask for passphrase which will require calling // * If needed the device will ask for passphrase which will require calling
// open again with the actual passphrase (3rd phase) // open again with the actual passphrase (3rd phase)
func (w *trezorDriver) Open(device io.ReadWriter, passphrase string) error { func (w *trezorDriver) Open(device io.ReadWriter, passphrase string) error {
w.device, w.failure = device, nil w.device, w.failure = device, nil
@ -196,10 +196,10 @@ func (w *trezorDriver) trezorDerive(derivationPath []uint32) (common.Address, er
if _, err := w.trezorExchange(&trezor.EthereumGetAddress{AddressN: derivationPath}, address); err != nil { if _, err := w.trezorExchange(&trezor.EthereumGetAddress{AddressN: derivationPath}, address); err != nil {
return common.Address{}, err return common.Address{}, err
} }
if addr := address.GetAddressBin(); len(addr) > 0 { // Older firmwares use binary formats if addr := address.GetAddressBin(); len(addr) > 0 { // Older firmwares use binary fomats
return common.BytesToAddress(addr), nil return common.BytesToAddress(addr), nil
} }
if addr := address.GetAddressHex(); len(addr) > 0 { // Newer firmwares use hexadecimal formats if addr := address.GetAddressHex(); len(addr) > 0 { // Newer firmwares use hexadecimal fomats
return common.HexToAddress(addr), nil return common.HexToAddress(addr), nil
} }
return common.Address{}, errors.New("missing derived address") return common.Address{}, errors.New("missing derived address")


@ -94,7 +94,7 @@ func (Failure_FailureType) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_aaf30d059fdbc38d, []int{1, 0} return fileDescriptor_aaf30d059fdbc38d, []int{1, 0}
} }
// * //*
// Type of button request // Type of button request
type ButtonRequest_ButtonRequestType int32 type ButtonRequest_ButtonRequestType int32
@ -175,7 +175,7 @@ func (ButtonRequest_ButtonRequestType) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_aaf30d059fdbc38d, []int{2, 0} return fileDescriptor_aaf30d059fdbc38d, []int{2, 0}
} }
// * //*
// Type of PIN request // Type of PIN request
type PinMatrixRequest_PinMatrixRequestType int32 type PinMatrixRequest_PinMatrixRequestType int32
@ -220,7 +220,7 @@ func (PinMatrixRequest_PinMatrixRequestType) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_aaf30d059fdbc38d, []int{4, 0} return fileDescriptor_aaf30d059fdbc38d, []int{4, 0}
} }
// * //*
// Response: Success of the previous request // Response: Success of the previous request
// @end // @end
type Success struct { type Success struct {
@ -262,7 +262,7 @@ func (m *Success) GetMessage() string {
return "" return ""
} }
// * //*
// Response: Failure of the previous request // Response: Failure of the previous request
// @end // @end
type Failure struct { type Failure struct {
@ -312,7 +312,7 @@ func (m *Failure) GetMessage() string {
return "" return ""
} }
// * //*
// Response: Device is waiting for HW button press. // Response: Device is waiting for HW button press.
// @auxstart // @auxstart
// @next ButtonAck // @next ButtonAck
@ -363,7 +363,7 @@ func (m *ButtonRequest) GetData() string {
return "" return ""
} }
// * //*
// Request: Computer agrees to wait for HW button press // Request: Computer agrees to wait for HW button press
// @auxend // @auxend
type ButtonAck struct { type ButtonAck struct {
@ -397,7 +397,7 @@ func (m *ButtonAck) XXX_DiscardUnknown() {
var xxx_messageInfo_ButtonAck proto.InternalMessageInfo var xxx_messageInfo_ButtonAck proto.InternalMessageInfo
// * //*
// Response: Device is asking computer to show PIN matrix and awaits PIN encoded using this matrix scheme // Response: Device is asking computer to show PIN matrix and awaits PIN encoded using this matrix scheme
// @auxstart // @auxstart
// @next PinMatrixAck // @next PinMatrixAck
@ -440,7 +440,7 @@ func (m *PinMatrixRequest) GetType() PinMatrixRequest_PinMatrixRequestType {
return PinMatrixRequest_PinMatrixRequestType_Current return PinMatrixRequest_PinMatrixRequestType_Current
} }
// * //*
// Request: Computer responds with encoded PIN // Request: Computer responds with encoded PIN
// @auxend // @auxend
type PinMatrixAck struct { type PinMatrixAck struct {
@ -482,7 +482,7 @@ func (m *PinMatrixAck) GetPin() string {
return "" return ""
} }
// * //*
// Response: Device awaits encryption passphrase // Response: Device awaits encryption passphrase
// @auxstart // @auxstart
// @next PassphraseAck // @next PassphraseAck
@ -525,7 +525,7 @@ func (m *PassphraseRequest) GetOnDevice() bool {
return false return false
} }
// * //*
// Request: Send passphrase back // Request: Send passphrase back
// @next PassphraseStateRequest // @next PassphraseStateRequest
type PassphraseAck struct { type PassphraseAck struct {
@ -575,7 +575,7 @@ func (m *PassphraseAck) GetState() []byte {
return nil return nil
} }
// * //*
// Response: Device awaits passphrase state // Response: Device awaits passphrase state
// @next PassphraseStateAck // @next PassphraseStateAck
type PassphraseStateRequest struct { type PassphraseStateRequest struct {
@ -617,7 +617,7 @@ func (m *PassphraseStateRequest) GetState() []byte {
return nil return nil
} }
// * //*
// Request: Send passphrase state back // Request: Send passphrase state back
// @auxend // @auxend
type PassphraseStateAck struct { type PassphraseStateAck struct {
@ -651,7 +651,7 @@ func (m *PassphraseStateAck) XXX_DiscardUnknown() {
var xxx_messageInfo_PassphraseStateAck proto.InternalMessageInfo var xxx_messageInfo_PassphraseStateAck proto.InternalMessageInfo
// * //*
// Structure representing BIP32 (hierarchical deterministic) node // Structure representing BIP32 (hierarchical deterministic) node
// Used for imports of private key into the device and exporting public key out of device // Used for imports of private key into the device and exporting public key out of device
// @embed // @embed


@ -21,7 +21,7 @@ var _ = math.Inf
// proto package needs to be updated. // proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
// * //*
// Request: Ask device for public key corresponding to address_n path // Request: Ask device for public key corresponding to address_n path
// @start // @start
// @next EthereumPublicKey // @next EthereumPublicKey
@ -73,7 +73,7 @@ func (m *EthereumGetPublicKey) GetShowDisplay() bool {
return false return false
} }
// * //*
// Response: Contains public key derived from device private seed // Response: Contains public key derived from device private seed
// @end // @end
type EthereumPublicKey struct { type EthereumPublicKey struct {
@ -123,7 +123,7 @@ func (m *EthereumPublicKey) GetXpub() string {
return "" return ""
} }
// * //*
// Request: Ask device for Ethereum address corresponding to address_n path // Request: Ask device for Ethereum address corresponding to address_n path
// @start // @start
// @next EthereumAddress // @next EthereumAddress
@ -175,7 +175,7 @@ func (m *EthereumGetAddress) GetShowDisplay() bool {
return false return false
} }
// * //*
// Response: Contains an Ethereum address derived from device private seed // Response: Contains an Ethereum address derived from device private seed
// @end // @end
type EthereumAddress struct { type EthereumAddress struct {
@ -225,7 +225,7 @@ func (m *EthereumAddress) GetAddressHex() string {
return "" return ""
} }
// * //*
// Request: Ask device to sign transaction // Request: Ask device to sign transaction
// All fields are optional from the protocol's point of view. Each field defaults to value `0` if missing. // All fields are optional from the protocol's point of view. Each field defaults to value `0` if missing.
// Note: the first at most 1024 bytes of data MUST be transmitted as part of this message. // Note: the first at most 1024 bytes of data MUST be transmitted as part of this message.
@ -351,7 +351,7 @@ func (m *EthereumSignTx) GetTxType() uint32 {
return 0 return 0
} }
// * //*
// Response: Device asks for more data from transaction payload, or returns the signature. // Response: Device asks for more data from transaction payload, or returns the signature.
// If data_length is set, device awaits that many more bytes of payload. // If data_length is set, device awaits that many more bytes of payload.
// Otherwise, the signature_* fields contain the computed transaction signature. All three fields will be present. // Otherwise, the signature_* fields contain the computed transaction signature. All three fields will be present.
@ -420,7 +420,7 @@ func (m *EthereumTxRequest) GetSignatureS() []byte {
return nil return nil
} }
// * //*
// Request: Transaction payload data. // Request: Transaction payload data.
// @next EthereumTxRequest // @next EthereumTxRequest
type EthereumTxAck struct { type EthereumTxAck struct {
@ -462,7 +462,7 @@ func (m *EthereumTxAck) GetDataChunk() []byte {
return nil return nil
} }
// * //*
// Request: Ask device to sign message // Request: Ask device to sign message
// @start // @start
// @next EthereumMessageSignature // @next EthereumMessageSignature
@ -514,7 +514,7 @@ func (m *EthereumSignMessage) GetMessage() []byte {
return nil return nil
} }
// * //*
// Response: Signed message // Response: Signed message
// @end // @end
type EthereumMessageSignature struct { type EthereumMessageSignature struct {
@ -572,7 +572,7 @@ func (m *EthereumMessageSignature) GetAddressHex() string {
return "" return ""
} }
// * //*
// Request: Ask device to verify message // Request: Ask device to verify message
// @start // @start
// @next Success // @next Success


@ -21,7 +21,7 @@ var _ = math.Inf
// proto package needs to be updated. // proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
// * //*
// Structure representing passphrase source // Structure representing passphrase source
type ApplySettings_PassphraseSourceType int32 type ApplySettings_PassphraseSourceType int32
@ -66,7 +66,7 @@ func (ApplySettings_PassphraseSourceType) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_0c720c20d27aa029, []int{4, 0} return fileDescriptor_0c720c20d27aa029, []int{4, 0}
} }
// * //*
// Type of recovery procedure. These should be used as bitmask, e.g., // Type of recovery procedure. These should be used as bitmask, e.g.,
// `RecoveryDeviceType_ScrambledWords | RecoveryDeviceType_Matrix` // `RecoveryDeviceType_ScrambledWords | RecoveryDeviceType_Matrix`
// listing every method supported by the host computer. // listing every method supported by the host computer.
@ -114,7 +114,7 @@ func (RecoveryDevice_RecoveryDeviceType) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_0c720c20d27aa029, []int{17, 0} return fileDescriptor_0c720c20d27aa029, []int{17, 0}
} }
// * //*
// Type of Recovery Word request // Type of Recovery Word request
type WordRequest_WordRequestType int32 type WordRequest_WordRequestType int32
@ -159,7 +159,7 @@ func (WordRequest_WordRequestType) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_0c720c20d27aa029, []int{18, 0} return fileDescriptor_0c720c20d27aa029, []int{18, 0}
} }
// * //*
// Request: Reset device to default state and ask for device details // Request: Reset device to default state and ask for device details
// @start // @start
// @next Features // @next Features
@ -210,7 +210,7 @@ func (m *Initialize) GetSkipPassphrase() bool {
return false return false
} }
// * //*
// Request: Ask for device details (no device reset) // Request: Ask for device details (no device reset)
// @start // @start
// @next Features // @next Features
@ -245,7 +245,7 @@ func (m *GetFeatures) XXX_DiscardUnknown() {
var xxx_messageInfo_GetFeatures proto.InternalMessageInfo var xxx_messageInfo_GetFeatures proto.InternalMessageInfo
// * //*
// Response: Reports various information about the device // Response: Reports various information about the device
// @end // @end
type Features struct { type Features struct {
@ -495,7 +495,7 @@ func (m *Features) GetNoBackup() bool {
return false return false
} }
// * //*
// Request: clear session (removes cached PIN, passphrase, etc). // Request: clear session (removes cached PIN, passphrase, etc).
// @start // @start
// @next Success // @next Success
@ -530,7 +530,7 @@ func (m *ClearSession) XXX_DiscardUnknown() {
var xxx_messageInfo_ClearSession proto.InternalMessageInfo var xxx_messageInfo_ClearSession proto.InternalMessageInfo
// * //*
// Request: change language and/or label of the device // Request: change language and/or label of the device
// @start // @start
// @next Success // @next Success
@ -622,7 +622,7 @@ func (m *ApplySettings) GetDisplayRotation() uint32 {
return 0 return 0
} }
// * //*
// Request: set flags of the device // Request: set flags of the device
// @start // @start
// @next Success // @next Success
@ -666,7 +666,7 @@ func (m *ApplyFlags) GetFlags() uint32 {
return 0 return 0
} }
// * //*
// Request: Starts workflow for setting/changing/removing the PIN // Request: Starts workflow for setting/changing/removing the PIN
// @start // @start
// @next Success // @next Success
@ -710,7 +710,7 @@ func (m *ChangePin) GetRemove() bool {
return false return false
} }
// * //*
// Request: Test if the device is alive, device sends back the message in Success response // Request: Test if the device is alive, device sends back the message in Success response
// @start // @start
// @next Success // @next Success
@ -777,7 +777,7 @@ func (m *Ping) GetPassphraseProtection() bool {
return false return false
} }
// * //*
// Request: Abort last operation that required user interaction // Request: Abort last operation that required user interaction
// @start // @start
// @next Failure // @next Failure
@ -812,7 +812,7 @@ func (m *Cancel) XXX_DiscardUnknown() {
var xxx_messageInfo_Cancel proto.InternalMessageInfo var xxx_messageInfo_Cancel proto.InternalMessageInfo
// * //*
// Request: Request a sample of random data generated by hardware RNG. May be used for testing. // Request: Request a sample of random data generated by hardware RNG. May be used for testing.
// @start // @start
// @next Entropy // @next Entropy
@ -856,7 +856,7 @@ func (m *GetEntropy) GetSize() uint32 {
return 0 return 0
} }
// * //*
// Response: Reply with random data generated by internal RNG // Response: Reply with random data generated by internal RNG
// @end // @end
type Entropy struct { type Entropy struct {
@ -898,7 +898,7 @@ func (m *Entropy) GetEntropy() []byte {
return nil return nil
} }
// * //*
// Request: Request device to wipe all sensitive data and settings // Request: Request device to wipe all sensitive data and settings
// @start // @start
// @next Success // @next Success
@ -934,7 +934,7 @@ func (m *WipeDevice) XXX_DiscardUnknown() {
var xxx_messageInfo_WipeDevice proto.InternalMessageInfo var xxx_messageInfo_WipeDevice proto.InternalMessageInfo
// * //*
// Request: Load seed and related internal settings from the computer // Request: Load seed and related internal settings from the computer
// @start // @start
// @next Success // @next Success
@ -1036,7 +1036,7 @@ func (m *LoadDevice) GetU2FCounter() uint32 {
return 0 return 0
} }
// * //*
// Request: Ask device to do initialization involving user interaction // Request: Ask device to do initialization involving user interaction
// @start // @start
// @next EntropyRequest // @next EntropyRequest
@ -1147,7 +1147,7 @@ func (m *ResetDevice) GetNoBackup() bool {
return false return false
} }
// * //*
// Request: Perform backup of the device seed if not backed up using ResetDevice // Request: Perform backup of the device seed if not backed up using ResetDevice
// @start // @start
// @next Success // @next Success
@ -1182,7 +1182,7 @@ func (m *BackupDevice) XXX_DiscardUnknown() {
var xxx_messageInfo_BackupDevice proto.InternalMessageInfo var xxx_messageInfo_BackupDevice proto.InternalMessageInfo
// * //*
// Response: Ask for additional entropy from host computer // Response: Ask for additional entropy from host computer
// @next EntropyAck // @next EntropyAck
type EntropyRequest struct { type EntropyRequest struct {
@ -1216,7 +1216,7 @@ func (m *EntropyRequest) XXX_DiscardUnknown() {
var xxx_messageInfo_EntropyRequest proto.InternalMessageInfo var xxx_messageInfo_EntropyRequest proto.InternalMessageInfo
// * //*
// Request: Provide additional entropy for seed generation function // Request: Provide additional entropy for seed generation function
// @next Success // @next Success
type EntropyAck struct { type EntropyAck struct {
@ -1258,7 +1258,7 @@ func (m *EntropyAck) GetEntropy() []byte {
return nil return nil
} }
// * //*
// Request: Start recovery workflow asking user for specific words of mnemonic // Request: Start recovery workflow asking user for specific words of mnemonic
// Used to recovery device safely even on untrusted computer. // Used to recovery device safely even on untrusted computer.
// @start // @start
@ -1369,7 +1369,7 @@ func (m *RecoveryDevice) GetDryRun() bool {
return false return false
} }
// * //*
// Response: Device is waiting for user to enter word of the mnemonic // Response: Device is waiting for user to enter word of the mnemonic
// Its position is shown only on device's internal display. // Its position is shown only on device's internal display.
// @next WordAck // @next WordAck
@ -1412,7 +1412,7 @@ func (m *WordRequest) GetType() WordRequest_WordRequestType {
return WordRequest_WordRequestType_Plain return WordRequest_WordRequestType_Plain
} }
// * //*
// Request: Computer replies with word from the mnemonic // Request: Computer replies with word from the mnemonic
// @next WordRequest // @next WordRequest
// @next Success // @next Success
@ -1456,7 +1456,7 @@ func (m *WordAck) GetWord() string {
return "" return ""
} }
// * //*
// Request: Set U2F counter // Request: Set U2F counter
// @start // @start
// @next Success // @next Success


@ -22,7 +22,7 @@ var _ = math.Inf
// proto package needs to be updated. // proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
// * //*
// Mapping between TREZOR wire identifier (uint) and a protobuf message // Mapping between TREZOR wire identifier (uint) and a protobuf message
type MessageType int32 type MessageType int32


@ -380,7 +380,7 @@ func (w *wallet) selfDerive() {
// of legacy-ledger, the first account on the legacy-path will // of legacy-ledger, the first account on the legacy-path will
// be shown to the user, even if we don't actively track it // be shown to the user, even if we don't actively track it
if i < len(nextAddrs)-1 { if i < len(nextAddrs)-1 {
w.log.Info("Skipping tracking first account on legacy path, use personal.deriveAccount(<url>,<path>, false) to track", w.log.Info("Skipping trakcking first account on legacy path, use personal.deriveAccount(<url>,<path>, false) to track",
"path", path, "address", nextAddrs[i]) "path", path, "address", nextAddrs[i])
break break
} }
@ -526,6 +526,7 @@ func (w *wallet) signHash(account accounts.Account, hash []byte) ([]byte, error)
// SignData signs keccak256(data). The mimetype parameter describes the type of data being signed // SignData signs keccak256(data). The mimetype parameter describes the type of data being signed
func (w *wallet) SignData(account accounts.Account, mimeType string, data []byte) ([]byte, error) { func (w *wallet) SignData(account accounts.Account, mimeType string, data []byte) ([]byte, error) {
// Unless we are doing 712 signing, simply dispatch to signHash // Unless we are doing 712 signing, simply dispatch to signHash
if !(mimeType == accounts.MimetypeTypedData && len(data) == 66 && data[0] == 0x19 && data[1] == 0x01) { if !(mimeType == accounts.MimetypeTypedData && len(data) == 66 && data[0] == 0x19 && data[1] == 0x01) {
return w.signHash(account, crypto.Keccak256(data)) return w.signHash(account, crypto.Keccak256(data))
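For reference, the 66-byte payload accepted above is the standard EIP-712 envelope: the prefix bytes 0x19 0x01, a 32-byte domain separator, then the 32-byte hash of the typed data, which is why both the length and the first two bytes are checked. Slicing it apart (illustrative):

domainHash := data[2:34]   // 32-byte domain separator
messageHash := data[34:66] // 32-byte hash of the typed struct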


@ -13,7 +13,7 @@ environment:
GETH_MINGW: 'C:\msys64\mingw32' GETH_MINGW: 'C:\msys64\mingw32'
install: install:
- git submodule update --init --depth 1 --recursive - git submodule update --init --depth 1
- go version - go version
for: for:
@ -26,7 +26,7 @@ for:
- go run build/ci.go lint - go run build/ci.go lint
- go run build/ci.go install -dlgo - go run build/ci.go install -dlgo
test_script: test_script:
- go run build/ci.go test -dlgo - go run build/ci.go test -dlgo -coverage
# linux/386 is disabled. # linux/386 is disabled.
- matrix: - matrix:
@ -54,4 +54,4 @@ for:
- go run build/ci.go archive -arch %GETH_ARCH% -type zip -signer WINDOWS_SIGNING_KEY -upload gethstore/builds - go run build/ci.go archive -arch %GETH_ARCH% -type zip -signer WINDOWS_SIGNING_KEY -upload gethstore/builds
- go run build/ci.go nsis -arch %GETH_ARCH% -signer WINDOWS_SIGNING_KEY -upload gethstore/builds - go run build/ci.go nsis -arch %GETH_ARCH% -signer WINDOWS_SIGNING_KEY -upload gethstore/builds
test_script: test_script:
- go run build/ci.go test -dlgo -arch %GETH_ARCH% -cc %GETH_CC% - go run build/ci.go test -dlgo -arch %GETH_ARCH% -cc %GETH_CC% -coverage


@ -1,87 +0,0 @@
// Copyright 2022 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package engine
import (
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/rpc"
)
// EngineAPIError is a standardized error message between consensus and execution
// clients, also containing any custom error message Geth might include.
type EngineAPIError struct {
code int
msg string
err error
}
func (e *EngineAPIError) ErrorCode() int { return e.code }
func (e *EngineAPIError) Error() string { return e.msg }
func (e *EngineAPIError) ErrorData() interface{} {
if e.err == nil {
return nil
}
return struct {
Error string `json:"err"`
}{e.err.Error()}
}
// With returns a copy of the error with a new embedded custom data field.
func (e *EngineAPIError) With(err error) *EngineAPIError {
return &EngineAPIError{
code: e.code,
msg: e.msg,
err: err,
}
}
var (
_ rpc.Error = new(EngineAPIError)
_ rpc.DataError = new(EngineAPIError)
)
var (
// VALID is returned by the engine API in the following calls:
// - newPayloadV1: if the payload was already known or was just validated and executed
// - forkchoiceUpdateV1: if the chain accepted the reorg (might ignore if it's stale)
VALID = "VALID"
// INVALID is returned by the engine API in the following calls:
// - newPayloadV1: if the payload failed to execute on top of the local chain
// - forkchoiceUpdateV1: if the new head is unknown, pre-merge, or reorg to it fails
INVALID = "INVALID"
// SYNCING is returned by the engine API in the following calls:
// - newPayloadV1: if the payload was accepted on top of an active sync
// - forkchoiceUpdateV1: if the new head was seen before, but not part of the chain
SYNCING = "SYNCING"
// ACCEPTED is returned by the engine API in the following calls:
// - newPayloadV1: if the payload was accepted, but not processed (side chain)
ACCEPTED = "ACCEPTED"
GenericServerError = &EngineAPIError{code: -32000, msg: "Server error"}
UnknownPayload = &EngineAPIError{code: -38001, msg: "Unknown payload"}
InvalidForkChoiceState = &EngineAPIError{code: -38002, msg: "Invalid forkchoice state"}
InvalidPayloadAttributes = &EngineAPIError{code: -38003, msg: "Invalid payload attributes"}
TooLargeRequest = &EngineAPIError{code: -38004, msg: "Too large request"}
InvalidParams = &EngineAPIError{code: -32602, msg: "Invalid parameters"}
STATUS_INVALID = ForkChoiceResponse{PayloadStatus: PayloadStatusV1{Status: INVALID}, PayloadID: nil}
STATUS_SYNCING = ForkChoiceResponse{PayloadStatus: PayloadStatusV1{Status: SYNCING}, PayloadID: nil}
INVALID_TERMINAL_BLOCK = PayloadStatusV1{Status: INVALID, LatestValidHash: &common.Hash{}}
)
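A minimal usage sketch for the helpers above (the lookup function and its failure path are hypothetical): wrapping a cause with With keeps the standardized code and message while exposing the cause through ErrorData.

func examplePayloadLookup(id PayloadID) error {
	// Hypothetical failure: the requested payload was never built locally.
	cause := fmt.Errorf("payload %s not found in local cache", id)
	return UnknownPayload.With(cause) // code -38001, cause surfaced via ErrorData
}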


@ -1,60 +0,0 @@
// Code generated by github.com/fjl/gencodec. DO NOT EDIT.
package engine
import (
"encoding/json"
"errors"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/types"
)
var _ = (*payloadAttributesMarshaling)(nil)
// MarshalJSON marshals as JSON.
func (p PayloadAttributes) MarshalJSON() ([]byte, error) {
type PayloadAttributes struct {
Timestamp hexutil.Uint64 `json:"timestamp" gencodec:"required"`
Random common.Hash `json:"prevRandao" gencodec:"required"`
SuggestedFeeRecipient common.Address `json:"suggestedFeeRecipient" gencodec:"required"`
Withdrawals []*types.Withdrawal `json:"withdrawals"`
}
var enc PayloadAttributes
enc.Timestamp = hexutil.Uint64(p.Timestamp)
enc.Random = p.Random
enc.SuggestedFeeRecipient = p.SuggestedFeeRecipient
enc.Withdrawals = p.Withdrawals
return json.Marshal(&enc)
}
// UnmarshalJSON unmarshals from JSON.
func (p *PayloadAttributes) UnmarshalJSON(input []byte) error {
type PayloadAttributes struct {
Timestamp *hexutil.Uint64 `json:"timestamp" gencodec:"required"`
Random *common.Hash `json:"prevRandao" gencodec:"required"`
SuggestedFeeRecipient *common.Address `json:"suggestedFeeRecipient" gencodec:"required"`
Withdrawals []*types.Withdrawal `json:"withdrawals"`
}
var dec PayloadAttributes
if err := json.Unmarshal(input, &dec); err != nil {
return err
}
if dec.Timestamp == nil {
return errors.New("missing required field 'timestamp' for PayloadAttributes")
}
p.Timestamp = uint64(*dec.Timestamp)
if dec.Random == nil {
return errors.New("missing required field 'prevRandao' for PayloadAttributes")
}
p.Random = *dec.Random
if dec.SuggestedFeeRecipient == nil {
return errors.New("missing required field 'suggestedFeeRecipient' for PayloadAttributes")
}
p.SuggestedFeeRecipient = *dec.SuggestedFeeRecipient
if dec.Withdrawals != nil {
p.Withdrawals = dec.Withdrawals
}
return nil
}


@ -1,146 +0,0 @@
// Code generated by github.com/fjl/gencodec. DO NOT EDIT.
package engine
import (
"encoding/json"
"errors"
"math/big"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/types"
)
var _ = (*executableDataMarshaling)(nil)
// MarshalJSON marshals as JSON.
func (e ExecutableData) MarshalJSON() ([]byte, error) {
type ExecutableData struct {
ParentHash common.Hash `json:"parentHash" gencodec:"required"`
FeeRecipient common.Address `json:"feeRecipient" gencodec:"required"`
StateRoot common.Hash `json:"stateRoot" gencodec:"required"`
ReceiptsRoot common.Hash `json:"receiptsRoot" gencodec:"required"`
LogsBloom hexutil.Bytes `json:"logsBloom" gencodec:"required"`
Random common.Hash `json:"prevRandao" gencodec:"required"`
Number hexutil.Uint64 `json:"blockNumber" gencodec:"required"`
GasLimit hexutil.Uint64 `json:"gasLimit" gencodec:"required"`
GasUsed hexutil.Uint64 `json:"gasUsed" gencodec:"required"`
Timestamp hexutil.Uint64 `json:"timestamp" gencodec:"required"`
ExtraData hexutil.Bytes `json:"extraData" gencodec:"required"`
BaseFeePerGas *hexutil.Big `json:"baseFeePerGas" gencodec:"required"`
BlockHash common.Hash `json:"blockHash" gencodec:"required"`
Transactions []hexutil.Bytes `json:"transactions" gencodec:"required"`
Withdrawals []*types.Withdrawal `json:"withdrawals"`
}
var enc ExecutableData
enc.ParentHash = e.ParentHash
enc.FeeRecipient = e.FeeRecipient
enc.StateRoot = e.StateRoot
enc.ReceiptsRoot = e.ReceiptsRoot
enc.LogsBloom = e.LogsBloom
enc.Random = e.Random
enc.Number = hexutil.Uint64(e.Number)
enc.GasLimit = hexutil.Uint64(e.GasLimit)
enc.GasUsed = hexutil.Uint64(e.GasUsed)
enc.Timestamp = hexutil.Uint64(e.Timestamp)
enc.ExtraData = e.ExtraData
enc.BaseFeePerGas = (*hexutil.Big)(e.BaseFeePerGas)
enc.BlockHash = e.BlockHash
if e.Transactions != nil {
enc.Transactions = make([]hexutil.Bytes, len(e.Transactions))
for k, v := range e.Transactions {
enc.Transactions[k] = v
}
}
enc.Withdrawals = e.Withdrawals
return json.Marshal(&enc)
}
// UnmarshalJSON unmarshals from JSON.
func (e *ExecutableData) UnmarshalJSON(input []byte) error {
type ExecutableData struct {
ParentHash *common.Hash `json:"parentHash" gencodec:"required"`
FeeRecipient *common.Address `json:"feeRecipient" gencodec:"required"`
StateRoot *common.Hash `json:"stateRoot" gencodec:"required"`
ReceiptsRoot *common.Hash `json:"receiptsRoot" gencodec:"required"`
LogsBloom *hexutil.Bytes `json:"logsBloom" gencodec:"required"`
Random *common.Hash `json:"prevRandao" gencodec:"required"`
Number *hexutil.Uint64 `json:"blockNumber" gencodec:"required"`
GasLimit *hexutil.Uint64 `json:"gasLimit" gencodec:"required"`
GasUsed *hexutil.Uint64 `json:"gasUsed" gencodec:"required"`
Timestamp *hexutil.Uint64 `json:"timestamp" gencodec:"required"`
ExtraData *hexutil.Bytes `json:"extraData" gencodec:"required"`
BaseFeePerGas *hexutil.Big `json:"baseFeePerGas" gencodec:"required"`
BlockHash *common.Hash `json:"blockHash" gencodec:"required"`
Transactions []hexutil.Bytes `json:"transactions" gencodec:"required"`
Withdrawals []*types.Withdrawal `json:"withdrawals"`
}
var dec ExecutableData
if err := json.Unmarshal(input, &dec); err != nil {
return err
}
if dec.ParentHash == nil {
return errors.New("missing required field 'parentHash' for ExecutableData")
}
e.ParentHash = *dec.ParentHash
if dec.FeeRecipient == nil {
return errors.New("missing required field 'feeRecipient' for ExecutableData")
}
e.FeeRecipient = *dec.FeeRecipient
if dec.StateRoot == nil {
return errors.New("missing required field 'stateRoot' for ExecutableData")
}
e.StateRoot = *dec.StateRoot
if dec.ReceiptsRoot == nil {
return errors.New("missing required field 'receiptsRoot' for ExecutableData")
}
e.ReceiptsRoot = *dec.ReceiptsRoot
if dec.LogsBloom == nil {
return errors.New("missing required field 'logsBloom' for ExecutableData")
}
e.LogsBloom = *dec.LogsBloom
if dec.Random == nil {
return errors.New("missing required field 'prevRandao' for ExecutableData")
}
e.Random = *dec.Random
if dec.Number == nil {
return errors.New("missing required field 'blockNumber' for ExecutableData")
}
e.Number = uint64(*dec.Number)
if dec.GasLimit == nil {
return errors.New("missing required field 'gasLimit' for ExecutableData")
}
e.GasLimit = uint64(*dec.GasLimit)
if dec.GasUsed == nil {
return errors.New("missing required field 'gasUsed' for ExecutableData")
}
e.GasUsed = uint64(*dec.GasUsed)
if dec.Timestamp == nil {
return errors.New("missing required field 'timestamp' for ExecutableData")
}
e.Timestamp = uint64(*dec.Timestamp)
if dec.ExtraData == nil {
return errors.New("missing required field 'extraData' for ExecutableData")
}
e.ExtraData = *dec.ExtraData
if dec.BaseFeePerGas == nil {
return errors.New("missing required field 'baseFeePerGas' for ExecutableData")
}
e.BaseFeePerGas = (*big.Int)(dec.BaseFeePerGas)
if dec.BlockHash == nil {
return errors.New("missing required field 'blockHash' for ExecutableData")
}
e.BlockHash = *dec.BlockHash
if dec.Transactions == nil {
return errors.New("missing required field 'transactions' for ExecutableData")
}
e.Transactions = make([][]byte, len(dec.Transactions))
for k, v := range dec.Transactions {
e.Transactions[k] = v
}
if dec.Withdrawals != nil {
e.Withdrawals = dec.Withdrawals
}
return nil
}
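An illustrative consequence of the generated decoder above: any field tagged gencodec:"required" that is absent from the input aborts decoding with a field-specific error.

var ed ExecutableData
err := json.Unmarshal([]byte(`{}`), &ed)
// err: missing required field 'parentHash' for ExecutableData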


@ -1,46 +0,0 @@
// Code generated by github.com/fjl/gencodec. DO NOT EDIT.
package engine
import (
"encoding/json"
"errors"
"math/big"
"github.com/ethereum/go-ethereum/common/hexutil"
)
var _ = (*executionPayloadEnvelopeMarshaling)(nil)
// MarshalJSON marshals as JSON.
func (e ExecutionPayloadEnvelope) MarshalJSON() ([]byte, error) {
type ExecutionPayloadEnvelope struct {
ExecutionPayload *ExecutableData `json:"executionPayload" gencodec:"required"`
BlockValue *hexutil.Big `json:"blockValue" gencodec:"required"`
}
var enc ExecutionPayloadEnvelope
enc.ExecutionPayload = e.ExecutionPayload
enc.BlockValue = (*hexutil.Big)(e.BlockValue)
return json.Marshal(&enc)
}
// UnmarshalJSON unmarshals from JSON.
func (e *ExecutionPayloadEnvelope) UnmarshalJSON(input []byte) error {
type ExecutionPayloadEnvelope struct {
ExecutionPayload *ExecutableData `json:"executionPayload" gencodec:"required"`
BlockValue *hexutil.Big `json:"blockValue" gencodec:"required"`
}
var dec ExecutionPayloadEnvelope
if err := json.Unmarshal(input, &dec); err != nil {
return err
}
if dec.ExecutionPayload == nil {
return errors.New("missing required field 'executionPayload' for ExecutionPayloadEnvelope")
}
e.ExecutionPayload = dec.ExecutionPayload
if dec.BlockValue == nil {
return errors.New("missing required field 'blockValue' for ExecutionPayloadEnvelope")
}
e.BlockValue = (*big.Int)(dec.BlockValue)
return nil
}


@ -1,237 +0,0 @@
// Copyright 2022 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package engine
import (
"fmt"
"math/big"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/trie"
)
//go:generate go run github.com/fjl/gencodec -type PayloadAttributes -field-override payloadAttributesMarshaling -out gen_blockparams.go
// PayloadAttributes describes the environment context in which a block should
// be built.
type PayloadAttributes struct {
Timestamp uint64 `json:"timestamp" gencodec:"required"`
Random common.Hash `json:"prevRandao" gencodec:"required"`
SuggestedFeeRecipient common.Address `json:"suggestedFeeRecipient" gencodec:"required"`
Withdrawals []*types.Withdrawal `json:"withdrawals"`
}
// JSON type overrides for PayloadAttributes.
type payloadAttributesMarshaling struct {
Timestamp hexutil.Uint64
}
//go:generate go run github.com/fjl/gencodec -type ExecutableData -field-override executableDataMarshaling -out gen_ed.go
// ExecutableData is the data necessary to execute an EL payload.
type ExecutableData struct {
ParentHash common.Hash `json:"parentHash" gencodec:"required"`
FeeRecipient common.Address `json:"feeRecipient" gencodec:"required"`
StateRoot common.Hash `json:"stateRoot" gencodec:"required"`
ReceiptsRoot common.Hash `json:"receiptsRoot" gencodec:"required"`
LogsBloom []byte `json:"logsBloom" gencodec:"required"`
Random common.Hash `json:"prevRandao" gencodec:"required"`
Number uint64 `json:"blockNumber" gencodec:"required"`
GasLimit uint64 `json:"gasLimit" gencodec:"required"`
GasUsed uint64 `json:"gasUsed" gencodec:"required"`
Timestamp uint64 `json:"timestamp" gencodec:"required"`
ExtraData []byte `json:"extraData" gencodec:"required"`
BaseFeePerGas *big.Int `json:"baseFeePerGas" gencodec:"required"`
BlockHash common.Hash `json:"blockHash" gencodec:"required"`
Transactions [][]byte `json:"transactions" gencodec:"required"`
Withdrawals []*types.Withdrawal `json:"withdrawals"`
}
// JSON type overrides for executableData.
type executableDataMarshaling struct {
Number hexutil.Uint64
GasLimit hexutil.Uint64
GasUsed hexutil.Uint64
Timestamp hexutil.Uint64
BaseFeePerGas *hexutil.Big
ExtraData hexutil.Bytes
LogsBloom hexutil.Bytes
Transactions []hexutil.Bytes
}
//go:generate go run github.com/fjl/gencodec -type ExecutionPayloadEnvelope -field-override executionPayloadEnvelopeMarshaling -out gen_epe.go
type ExecutionPayloadEnvelope struct {
ExecutionPayload *ExecutableData `json:"executionPayload" gencodec:"required"`
BlockValue *big.Int `json:"blockValue" gencodec:"required"`
}
// JSON type overrides for ExecutionPayloadEnvelope.
type executionPayloadEnvelopeMarshaling struct {
BlockValue *hexutil.Big
}
type PayloadStatusV1 struct {
Status string `json:"status"`
LatestValidHash *common.Hash `json:"latestValidHash"`
ValidationError *string `json:"validationError"`
}
type TransitionConfigurationV1 struct {
TerminalTotalDifficulty *hexutil.Big `json:"terminalTotalDifficulty"`
TerminalBlockHash common.Hash `json:"terminalBlockHash"`
TerminalBlockNumber hexutil.Uint64 `json:"terminalBlockNumber"`
}
// PayloadID is an identifier of the payload build process
type PayloadID [8]byte
func (b PayloadID) String() string {
return hexutil.Encode(b[:])
}
func (b PayloadID) MarshalText() ([]byte, error) {
return hexutil.Bytes(b[:]).MarshalText()
}
func (b *PayloadID) UnmarshalText(input []byte) error {
err := hexutil.UnmarshalFixedText("PayloadID", input, b[:])
if err != nil {
return fmt.Errorf("invalid payload id %q: %w", input, err)
}
return nil
}
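Illustrative round trip for the text encoding above: a PayloadID marshals to a 0x-prefixed, 16-hex-character string, and UnmarshalText rejects input of any other length.

var id PayloadID
_ = id.UnmarshalText([]byte("0x0011223344556677")) // ok: exactly 8 bytes
_ = id.String()                                    // "0x0011223344556677"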
type ForkChoiceResponse struct {
PayloadStatus PayloadStatusV1 `json:"payloadStatus"`
PayloadID *PayloadID `json:"payloadId"`
}
type ForkchoiceStateV1 struct {
HeadBlockHash common.Hash `json:"headBlockHash"`
SafeBlockHash common.Hash `json:"safeBlockHash"`
FinalizedBlockHash common.Hash `json:"finalizedBlockHash"`
}
func encodeTransactions(txs []*types.Transaction) [][]byte {
var enc = make([][]byte, len(txs))
for i, tx := range txs {
enc[i], _ = tx.MarshalBinary()
}
return enc
}
func decodeTransactions(enc [][]byte) ([]*types.Transaction, error) {
var txs = make([]*types.Transaction, len(enc))
for i, encTx := range enc {
var tx types.Transaction
if err := tx.UnmarshalBinary(encTx); err != nil {
return nil, fmt.Errorf("invalid transaction %d: %v", i, err)
}
txs[i] = &tx
}
return txs, nil
}
// ExecutableDataToBlock constructs a block from executable data.
// It verifies that the following fields:
//
// len(extraData) <= 32
// uncleHash = emptyUncleHash
// difficulty = 0
//
// and that the blockhash of the constructed block matches the parameters. Nil
// Withdrawals value will propagate through the returned block. Empty
// Withdrawals value must be passed via non-nil, length 0 value in params.
func ExecutableDataToBlock(params ExecutableData) (*types.Block, error) {
txs, err := decodeTransactions(params.Transactions)
if err != nil {
return nil, err
}
if len(params.ExtraData) > 32 {
return nil, fmt.Errorf("invalid extradata length: %v", len(params.ExtraData))
}
if len(params.LogsBloom) != 256 {
return nil, fmt.Errorf("invalid logsBloom length: %v", len(params.LogsBloom))
}
// Check that baseFeePerGas is not negative or too big
if params.BaseFeePerGas != nil && (params.BaseFeePerGas.Sign() == -1 || params.BaseFeePerGas.BitLen() > 256) {
return nil, fmt.Errorf("invalid baseFeePerGas: %v", params.BaseFeePerGas)
}
// Only set withdrawalsRoot if it is non-nil. This allows CLs to use
// ExecutableData before withdrawals are enabled by marshaling
// Withdrawals as the json null value.
var withdrawalsRoot *common.Hash
if params.Withdrawals != nil {
h := types.DeriveSha(types.Withdrawals(params.Withdrawals), trie.NewStackTrie(nil))
withdrawalsRoot = &h
}
header := &types.Header{
ParentHash: params.ParentHash,
UncleHash: types.EmptyUncleHash,
Coinbase: params.FeeRecipient,
Root: params.StateRoot,
TxHash: types.DeriveSha(types.Transactions(txs), trie.NewStackTrie(nil)),
ReceiptHash: params.ReceiptsRoot,
Bloom: types.BytesToBloom(params.LogsBloom),
Difficulty: common.Big0,
Number: new(big.Int).SetUint64(params.Number),
GasLimit: params.GasLimit,
GasUsed: params.GasUsed,
Time: params.Timestamp,
BaseFee: params.BaseFeePerGas,
Extra: params.ExtraData,
MixDigest: params.Random,
WithdrawalsHash: withdrawalsRoot,
}
block := types.NewBlockWithHeader(header).WithBody(txs, nil /* uncles */).WithWithdrawals(params.Withdrawals)
if block.Hash() != params.BlockHash {
return nil, fmt.Errorf("blockhash mismatch, want %x, got %x", params.BlockHash, block.Hash())
}
return block, nil
}
// BlockToExecutableData constructs the ExecutableData structure by filling the
// fields from the given block. It assumes the given block is post-merge block.
func BlockToExecutableData(block *types.Block, fees *big.Int) *ExecutionPayloadEnvelope {
data := &ExecutableData{
BlockHash: block.Hash(),
ParentHash: block.ParentHash(),
FeeRecipient: block.Coinbase(),
StateRoot: block.Root(),
Number: block.NumberU64(),
GasLimit: block.GasLimit(),
GasUsed: block.GasUsed(),
BaseFeePerGas: block.BaseFee(),
Timestamp: block.Time(),
ReceiptsRoot: block.ReceiptHash(),
LogsBloom: block.Bloom().Bytes(),
Transactions: encodeTransactions(block.Transactions()),
Random: block.MixDigest(),
ExtraData: block.Extra(),
Withdrawals: block.Withdrawals(),
}
return &ExecutionPayloadEnvelope{ExecutionPayload: data, BlockValue: fees}
}
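Illustrative sketch, not part of this diff: for a well-formed post-merge block the two constructors above are inverses, so a round trip doubles as a sanity check (again assuming the surrounding package and its imports).
func exampleBlockRoundTrip(b *types.Block) error {
	// The fee value only travels in the envelope, so zero is fine here.
	env := BlockToExecutableData(b, big.NewInt(0))
	rebuilt, err := ExecutableDataToBlock(*env.ExecutionPayload)
	if err != nil {
		return err // also triggers if the reconstructed block hash does not match
	}
	if rebuilt.Hash() != b.Hash() {
		return fmt.Errorf("round-trip hash mismatch: %x != %x", rebuilt.Hash(), b.Hash())
	}
	return nil
}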
// ExecutionPayloadBodyV1 is used in the response to GetPayloadBodiesByHashV1 and GetPayloadBodiesByRangeV1
type ExecutionPayloadBodyV1 struct {
TransactionData []hexutil.Bytes `json:"transactions"`
Withdrawals []*types.Withdrawal `json:"withdrawals"`
}

View File

@ -1,22 +0,0 @@
#!/bin/bash
set -e -x
# -- Check XCode version
xcodebuild -version
# xcrun simctl list
# -- Build for macOS and upload to Azure
go run build/ci.go install -dlgo
go run build/ci.go archive -type tar # -signer OSX_SIGNING_KEY -upload gethstore/builds
# # -- CocoaPods
# gem uninstall cocoapods -a -x
# gem install cocoapods
# mv ~/.cocoapods/repos/master ~/.cocoapods/repos/master.bak
# sed -i '.bak' 's/repo.join/!repo.join/g' $(dirname `gem which cocoapods`)/cocoapods/sources_manager.rb
# git clone --depth=1 https://github.com/CocoaPods/Specs.git ~/.cocoapods/repos/master
# pod setup --verbose
# # -- Build for iOS and upload to Azure
# go run build/ci.go xcode -signer IOS_SIGNING_KEY -upload gethstore/builds

View File

@ -1,17 +0,0 @@
#!/bin/bash
set -e -x
# Note: this script is meant to be run in a Debian/Ubuntu docker container,
# as user 'root'.
# Install the required tools for creating source packages.
apt-get -yq --no-install-suggests --no-install-recommends install\
devscripts debhelper dput fakeroot
# Add the SSH key of ppa.launchpad.net to known_hosts.
mkdir -p ~/.ssh
echo '|1|7SiYPr9xl3uctzovOTj4gMwAC1M=|t6ReES75Bo/PxlOPJ6/GsGbTrM0= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0aKz5UTUndYgIGG7dQBV+HaeuEZJ2xPHo2DS2iSKvUL4xNMSAY4UguNW+pX56nAQmZKIZZ8MaEvSj6zMEDiq6HFfn5JcTlM80UwlnyKe8B8p7Nk06PPQLrnmQt5fh0HmEcZx+JU9TZsfCHPnX7MNz4ELfZE6cFsclClrKim3BHUIGq//t93DllB+h4O9LHjEUsQ1Sr63irDLSutkLJD6RXchjROXkNirlcNVHH/jwLWR5RcYilNX7S5bIkK8NlWPjsn/8Ua5O7I9/YoE97PpO6i73DTGLh5H9JN/SITwCKBkgSDWUt61uPK3Y11Gty7o2lWsBjhBUm2Y38CBsoGmBw==' >> ~/.ssh/known_hosts
# Build the source package and upload.
go run build/ci.go debsrc -upload ethereum/ethereum -sftp-user geth-ci -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>"

View File

@ -1,46 +1,37 @@
# This file contains sha256 checksums of optional build dependencies. # This file contains sha256 checksums of optional build dependencies.
e447b498cde50215c4f7619e5124b0fc4e25fb5d16ea47271c47f278e7aa763a go1.20.3.src.tar.gz 3defb9a09bed042403195e872dcbc8c6fae1485963332279668ec52e80a95a2d go1.17.5.src.tar.gz
c1e1161d6d859deb576e6cfabeb40e3d042ceb1c6f444f617c3c9d76269c3565 go1.20.3.darwin-amd64.tar.gz 2db6a5d25815b56072465a2cacc8ed426c18f1d5fc26c1fc8c4f5a7188658264 go1.17.5.darwin-amd64.tar.gz
86b0ed0f2b2df50fa8036eea875d1cf2d76cefdacf247c44639a1464b7e36b95 go1.20.3.darwin-arm64.tar.gz 111f71166de0cb8089bb3e8f9f5b02d76e1bf1309256824d4062a47b0e5f98e0 go1.17.5.darwin-arm64.tar.gz
340e80abd047c597fdc0f50a6cc59617f06c297d62f7fc77f4a0164e2da6f7aa go1.20.3.freebsd-386.tar.gz 443c1cd9768df02085014f1eb034ebc7dbe032ffc8a9bb9f2e6617d037eee23c go1.17.5.freebsd-386.tar.gz
2169fcd8b6c94c5fbe07c0b470ccfb6001d343f6548ad49f3d9ab78e3b5753c7 go1.20.3.freebsd-amd64.tar.gz 17180bdc4126acffd0ebf86d66ef5cbc3488b6734e93374fb00eb09494e006d3 go1.17.5.freebsd-amd64.tar.gz
e12384311403f1389d14cc1c1295bfb4e0dd5ab919403b80da429f671a223507 go1.20.3.linux-386.tar.gz 4f4914303bc18f24fd137a97e595735308f5ce81323c7224c12466fd763fc59f go1.17.5.linux-386.tar.gz
979694c2c25c735755bf26f4f45e19e64e4811d661dd07b8c010f7a8e18adfca go1.20.3.linux-amd64.tar.gz bd78114b0d441b029c8fe0341f4910370925a4d270a6a590668840675b0c653e go1.17.5.linux-amd64.tar.gz
eb186529f13f901e7a2c4438a05c2cd90d74706aaa0a888469b2a4a617b6ee54 go1.20.3.linux-arm64.tar.gz 6f95ce3da40d9ce1355e48f31f4eb6508382415ca4d7413b1e7a3314e6430e7e go1.17.5.linux-arm64.tar.gz
b421e90469a83671641f81b6e20df6500f033e9523e89cbe7b7223704dd1035c go1.20.3.linux-armv6l.tar.gz aa1fb6c53b4fe72f159333362a10aca37ae938bde8adc9c6eaf2a8e87d1e47de go1.17.5.linux-armv6l.tar.gz
943c89aa1624ea544a022b31e3d6e16a037200e436370bdd5fd67f3fa60be282 go1.20.3.linux-ppc64le.tar.gz 3d4be616e568f0a02cb7f7769bcaafda4b0969ed0f9bb4277619930b96847e70 go1.17.5.linux-ppc64le.tar.gz
126cf823a5634ef2544b866db107b9d351d3ea70d9e240b0bdcfb46f4dcae54b go1.20.3.linux-s390x.tar.gz 8087d4fe991e82804e6485c26568c2e0ee0bfde00ceb9015dc86cb6bf84ef40b go1.17.5.linux-s390x.tar.gz
37e9146e1f9d681cfcaa6fee6c7b890c44c64bc50228c9588f3c4231346d33bd go1.20.3.windows-386.zip 6d7b9948ee14a906b14f5cbebdfab63cd6828b0b618160847ecd3cc3470a26fe go1.17.5.windows-386.zip
143a2837821c7dbacf7744cbb1a8421c1f48307c6fdfaeffc5f8c2f69e1b7932 go1.20.3.windows-amd64.zip 671faf99cd5d81cd7e40936c0a94363c64d654faa0148d2af4bbc262555620b9 go1.17.5.windows-amd64.zip
158cb159e00bc979f473e0f5b5a561613129c5e51067967b72b8e072e5a4db81 go1.20.3.windows-arm64.zip 45e88676b68e9cf364be469b5a27965397f4e339aa622c2f52c10433c56e5030 go1.17.5.windows-arm64.zip
fba08acc4027f69f07cef48fbff70b8a7ecdfaa1c2aba9ad3fb31d60d9f5d4bc golangci-lint-1.51.1-darwin-amd64.tar.gz d4bd25b9814eeaa2134197dd2c7671bb791eae786d42010d9d788af20dee4bfa golangci-lint-1.42.0-darwin-amd64.tar.gz
75b8f0ff3a4e68147156be4161a49d4576f1be37a0b506473f8c482140c1e7f2 golangci-lint-1.51.1-darwin-arm64.tar.gz e56859c04a2ad5390c6a497b1acb1cc9329ecb1010260c6faae9b5a4c35b35ea golangci-lint-1.42.0-darwin-arm64.tar.gz
e06b3459aaed356e1667580be00b05f41f3b2e29685d12cdee571c23e1edb414 golangci-lint-1.51.1-freebsd-386.tar.gz 14d912a3fa856830339472fc4dc341933adf15f37bdb7130bbbfcf960ecf4809 golangci-lint-1.42.0-freebsd-386.tar.gz
623ce2d0fa4d35cc2e8d69fa7334227ab592380962a13b4d9cdc77cf41db2008 golangci-lint-1.51.1-freebsd-amd64.tar.gz 337257fccc9baeb5ee1cd7e70c153e9d9f59d3afde46d631659500048afbdf80 golangci-lint-1.42.0-freebsd-amd64.tar.gz
131365feb0584cc2736c43192fa673ca50e5b6b765456990cb379ecfb787e568 golangci-lint-1.51.1-freebsd-armv6.tar.gz 6debcc266b629359fdd8eef4f4abb05a621604079d27016265afb5b4593b0eff golangci-lint-1.42.0-freebsd-armv6.tar.gz
98fb627927cbb654f5bf85dcffc5f646666b2ce96ea0fed977c9fb28abd51532 golangci-lint-1.51.1-freebsd-armv7.tar.gz 878f0e190169db2ce9dde8cefbd99adc4fe28b90b68686bbfcfcc2085e6d693e golangci-lint-1.42.0-freebsd-armv7.tar.gz
b36a99702fa762c15840261bc0fb41b4b1b16b8b19b8c0941bae98c85bb0f8b8 golangci-lint-1.51.1-linux-386.tar.gz 42c78e31faf62b225363eff1b1d2aa74f9dbcb75686c8914aa3e90d6af65cece golangci-lint-1.42.0-linux-386.tar.gz
17aeb26c76820c22efa0e1838b0ab93e90cfedef43fbfc9a2f33f27eb9e5e070 golangci-lint-1.51.1-linux-amd64.tar.gz 6937f62f8e2329e94822dc11c10b871ace5557ae1fcc4ee2f9980cd6aecbc159 golangci-lint-1.42.0-linux-amd64.tar.gz
9744bc34e7b8d82ca788b667bfb7155a39b4be9aef43bf9f10318b1372cea338 golangci-lint-1.51.1-linux-arm64.tar.gz 2cf8d23d96cd854a537b355dab2962b960b88a06b615232599f066afd233f246 golangci-lint-1.42.0-linux-arm64.tar.gz
0dda8dbeb2ff7455a044ec8e347f2fc6d655d2e99d281b3b95e88167031c673d golangci-lint-1.51.1-linux-armv6.tar.gz 08b003d1ed61367473886defc957af5301066e62338e5d96a319c34dadc4c1d1 golangci-lint-1.42.0-linux-armv6.tar.gz
0512f311b11d43b8b22989d929f0fe8a2e1e5ebe497f1eb0ff73a0fc3d188fd1 golangci-lint-1.51.1-linux-armv7.tar.gz c7c00ec4845e806a1f32685f5b150219e180bd6d6a9d584be8d27f0c41d7a1bf golangci-lint-1.42.0-linux-armv7.tar.gz
d767108dcf84a8eaa844df3454cb0f75a492f4e7102ecc2b0a3545cfe073a566 golangci-lint-1.51.1-linux-loong64.tar.gz 3650fcf29eb3d8ee326d77791a896b15259eb2d5bf77437dc72e7efe5af6bd40 golangci-lint-1.42.0-linux-mips64.tar.gz
3bd56c54daec16585b2668e0dfabb27af2c2b38cc0fdb46923e2521e1634846b golangci-lint-1.51.1-linux-mips64.tar.gz f51ae003fdbca4fef78ba73e2eb736a939c8eaa178cd452234213b489da5a420 golangci-lint-1.42.0-linux-mips64le.tar.gz
f72f5adfa2219e15d2414c9a2966f86e74556cf17a85c727a7fb7770a16cf814 golangci-lint-1.51.1-linux-mips64le.tar.gz 1b0bb7b8b22cc4ea7da44fd5ad5faaf6111d0677e01cc6f961b62a96537de2c6 golangci-lint-1.42.0-linux-ppc64le.tar.gz
e605521dac98096d8737e1997c954f41f1d0d8275b8731f62783d410c23574b9 golangci-lint-1.51.1-linux-ppc64le.tar.gz 8cb56927eb75e572450efbe0ff0f9cf3f56dc9faa81d9e8d30d6559fc1d06e6d golangci-lint-1.42.0-linux-riscv64.tar.gz
2f683217b814339e74d61ca700922d8407f15addd6d4c5e8b156fbab79f26a87 golangci-lint-1.51.1-linux-riscv64.tar.gz 5ac41cd31825a176b21505a371a7b307cd9cdf17df0f35bbb3bf1466f9356ccc golangci-lint-1.42.0-linux-s390x.tar.gz
d98528292b65971a3594e5880530e7624597dc9806fcfccdfbe39be411713d63 golangci-lint-1.51.1-linux-s390x.tar.gz e1cebd2af621ac4b64c20937df92c3819264f2174c92f51e196db1e64ae097e0 golangci-lint-1.42.0-windows-386.zip
9bb2d0fe9e692ed0aea4f2537e3e6862b2f6768fe2849a84f4a6ad09da9fd971 golangci-lint-1.51.1-netbsd-386.tar.gz 7e70fcde8e87a17cae0455df07d257ebc86669f3968d568e12727fa24bbe9883 golangci-lint-1.42.0-windows-amd64.zip
34cafdcd11ae73ae88d66c33eb8449f5c976fc3e37b44774dbe9c71caa95e592 golangci-lint-1.51.1-netbsd-amd64.tar.gz 59da7ce1bda432616bfc28ae663e52c3675adee8d9bf5959fafd657c159576ab golangci-lint-1.42.0-windows-armv6.zip
f8b4e1e47ac17caafe8a5f32f975a2b6a7cb14c27c0f73c1fb15c20ca91c2e03 golangci-lint-1.51.1-netbsd-armv6.tar.gz 65f62dda937bfcede0326ac77abe947ce1548931e6e13298ca036cb31f224db5 golangci-lint-1.42.0-windows-armv7.zip
c4f58b7e227b9fd41f0e9310dc83f4a4e7d026598e2f6e95b78761081a6d9bd2 golangci-lint-1.51.1-netbsd-armv7.tar.gz
6710e2f5375dc75521c1a17980a6cbbe6ff76c2f8b852964a8af558899a97cf5 golangci-lint-1.51.1-windows-386.zip
722d7b87b9cdda0a3835d5030b3fc5385c2eba4c107f63f6391cfb2ac35f051d golangci-lint-1.51.1-windows-amd64.zip
eb57f9bcb56646f2e3d6ccaf02ec227815fb05077b2e0b1bf9e755805acdc2b9 golangci-lint-1.51.1-windows-arm64.zip
bce02f7232723cb727755ee11f168a700a00896a25d37f87c4b173bce55596b4 golangci-lint-1.51.1-windows-armv6.zip
cf6403f84707ce8c98664736772271bc8874f2e760c2fd0f00cf3e85963507e9 golangci-lint-1.51.1-windows-armv7.zip
# This is the builder on PPA that will build Go itself (inception-y), don't modify!
d7f0013f82e6d7f862cc6cb5c8cdb48eef5f2e239b35baa97e2f1a7466043767 go1.19.6.src.tar.gz

View File

@ -24,36 +24,42 @@ Usage: go run build/ci.go <command> <command flags/arguments>
Available commands are: Available commands are:
install [ -arch architecture ] [ -cc compiler ] [ packages... ] -- builds packages and executables install [ -arch architecture ] [ -cc compiler ] [ packages... ] -- builds packages and executables
test [ -coverage ] [ packages... ] -- runs the tests test [ -coverage ] [ packages... ] -- runs the tests
lint -- runs certain pre-selected linters lint -- runs certain pre-selected linters
archive [ -arch architecture ] [ -type zip|tar ] [ -signer key-envvar ] [ -signify key-envvar ] [ -upload dest ] -- archives build artifacts archive [ -arch architecture ] [ -type zip|tar ] [ -signer key-envvar ] [ -signify key-envvar ] [ -upload dest ] -- archives build artifacts
importkeys -- imports signing keys from env importkeys -- imports signing keys from env
debsrc [ -signer key-id ] [ -upload dest ] -- creates a debian source package debsrc [ -signer key-id ] [ -upload dest ] -- creates a debian source package
nsis -- creates a Windows NSIS installer nsis -- creates a Windows NSIS installer
purge [ -store blobstore ] [ -days threshold ] -- purges old archives from the blobstore aar [ -local ] [ -sign key-id ] [-deploy repo] [ -upload dest ] -- creates an Android archive
xcode [ -local ] [ -sign key-id ] [-deploy repo] [ -upload dest ] -- creates an iOS XCode framework
purge [ -store blobstore ] [ -days threshold ] -- purges old archives from the blobstore
For all commands, -n prevents execution of external programs (dry run mode). For all commands, -n prevents execution of external programs (dry run mode).
*/ */
package main package main
import ( import (
"bufio"
"bytes" "bytes"
"encoding/base64" "encoding/base64"
"flag" "flag"
"fmt" "fmt"
"io/ioutil"
"log" "log"
"os" "os"
"os/exec" "os/exec"
"path" "path"
"path/filepath" "path/filepath"
"regexp"
"runtime" "runtime"
"strconv" "strconv"
"strings" "strings"
"time" "time"
"github.com/cespare/cp" "github.com/cespare/cp"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/crypto/signify" "github.com/ethereum/go-ethereum/crypto/signify"
"github.com/ethereum/go-ethereum/internal/build" "github.com/ethereum/go-ethereum/internal/build"
"github.com/ethereum/go-ethereum/params" "github.com/ethereum/go-ethereum/params"
@ -73,6 +79,7 @@ var (
executablePath("bootnode"), executablePath("bootnode"),
executablePath("evm"), executablePath("evm"),
executablePath("geth"), executablePath("geth"),
executablePath("puppeth"),
executablePath("rlpdump"), executablePath("rlpdump"),
executablePath("clef"), executablePath("clef"),
} }
@ -95,6 +102,10 @@ var (
BinaryName: "geth", BinaryName: "geth",
Description: "Ethereum CLI client.", Description: "Ethereum CLI client.",
}, },
{
BinaryName: "puppeth",
Description: "Ethereum private network manager.",
},
{ {
BinaryName: "rlpdump", BinaryName: "rlpdump",
Description: "Developer utility tool that prints RLP structures.", Description: "Developer utility tool that prints RLP structures.",
@ -120,15 +131,13 @@ var (
// Distros for which packages are created. // Distros for which packages are created.
// Note: vivid is unsupported because there is no golang-1.6 package for it. // Note: vivid is unsupported because there is no golang-1.6 package for it.
// Note: the following Ubuntu releases have been officially deprecated on Launchpad: // Note: the following Ubuntu releases have been officially deprecated on Launchpad:
// wily, yakkety, zesty, artful, cosmic, disco, eoan, groovy, hirsute, impish // wily, yakkety, zesty, artful, cosmic, disco, eoan, groovy
debDistroGoBoots = map[string]string{ debDistroGoBoots = map[string]string{
"trusty": "golang-1.11", // EOL: 04/2024 "trusty": "golang-1.11",
"xenial": "golang-go", // EOL: 04/2026 "xenial": "golang-go",
"bionic": "golang-go", // EOL: 04/2028 "bionic": "golang-go",
"focal": "golang-go", // EOL: 04/2030 "focal": "golang-go",
"jammy": "golang-go", // EOL: 04/2032 "hirsute": "golang-go",
"kinetic": "golang-go", // EOL: 07/2023
"lunar": "golang-go", // EOL: 01/2024
} }
debGoBootPaths = map[string]string{ debGoBootPaths = map[string]string{
@ -136,18 +145,10 @@ var (
"golang-go": "/usr/lib/go", "golang-go": "/usr/lib/go",
} }
// This is the version of Go that will be downloaded by // This is the version of go that will be downloaded by
// //
// go run ci.go install -dlgo // go run ci.go install -dlgo
dlgoVersion = "1.20.3" dlgoVersion = "1.17.5"
// This is the version of Go that will be used to bootstrap the PPA builder.
//
// This version is fine to be old and full of security holes, we just use it
// to build the latest Go. Don't change it. If it ever becomes insufficient,
// we need to switch over to a recursive builder to jump across supported
// versions.
gobootVersion = "1.19.6"
) )
var GOBIN, _ = filepath.Abs(filepath.Join("build", "bin")) var GOBIN, _ = filepath.Abs(filepath.Join("build", "bin"))
@ -162,7 +163,7 @@ func executablePath(name string) string {
func main() { func main() {
log.SetFlags(log.Lshortfile) log.SetFlags(log.Lshortfile)
if !common.FileExist(filepath.Join("build", "ci.go")) { if _, err := os.Stat(filepath.Join("build", "ci.go")); os.IsNotExist(err) {
log.Fatal("this script must be run from the root of the repository") log.Fatal("this script must be run from the root of the repository")
} }
if len(os.Args) < 2 { if len(os.Args) < 2 {
@ -183,6 +184,10 @@ func main() {
doDebianSource(os.Args[2:]) doDebianSource(os.Args[2:])
case "nsis": case "nsis":
doWindowsInstaller(os.Args[2:]) doWindowsInstaller(os.Args[2:])
case "aar":
doAndroidArchive(os.Args[2:])
case "xcode":
doXCodeFramework(os.Args[2:])
case "purge": case "purge":
doPurge(os.Args[2:]) doPurge(os.Args[2:])
default: default:
@ -194,10 +199,9 @@ func main() {
func doInstall(cmdline []string) { func doInstall(cmdline []string) {
var ( var (
dlgo = flag.Bool("dlgo", false, "Download Go and build with it") dlgo = flag.Bool("dlgo", false, "Download Go and build with it")
arch = flag.String("arch", "", "Architecture to cross build for") arch = flag.String("arch", "", "Architecture to cross build for")
cc = flag.String("cc", "", "C compiler to cross build with") cc = flag.String("cc", "", "C compiler to cross build with")
staticlink = flag.Bool("static", false, "Create statically-linked executable")
) )
flag.CommandLine.Parse(cmdline) flag.CommandLine.Parse(cmdline)
@ -208,12 +212,9 @@ func doInstall(cmdline []string) {
tc.Root = build.DownloadGo(csdb, dlgoVersion) tc.Root = build.DownloadGo(csdb, dlgoVersion)
} }
// Disable CLI markdown doc generation in release builds.
buildTags := []string{"urfave_cli_no_docs"}
// Configure the build. // Configure the build.
env := build.Env() env := build.Env()
gobuild := tc.Go("build", buildFlags(env, *staticlink, buildTags)...) gobuild := tc.Go("build", buildFlags(env)...)
// arm64 CI builders are memory-constrained and can't handle concurrent builds, // arm64 CI builders are memory-constrained and can't handle concurrent builds,
// better disable it. This check isn't the best, it should probably // better disable it. This check isn't the best, it should probably
@ -246,35 +247,25 @@ func doInstall(cmdline []string) {
} }
// buildFlags returns the go tool flags for building. // buildFlags returns the go tool flags for building.
func buildFlags(env build.Environment, staticLinking bool, buildTags []string) (flags []string) { func buildFlags(env build.Environment) (flags []string) {
var ld []string var ld []string
if env.Commit != "" { if env.Commit != "" {
ld = append(ld, "-X", "github.com/ethereum/go-ethereum/internal/version.gitCommit="+env.Commit) ld = append(ld, "-X", "main.gitCommit="+env.Commit)
ld = append(ld, "-X", "github.com/ethereum/go-ethereum/internal/version.gitDate="+env.Date) ld = append(ld, "-X", "main.gitDate="+env.Date)
} }
// Strip DWARF on darwin. This used to be required for certain things, // Strip DWARF on darwin. This used to be required for certain things,
// and there is no downside to this, so we just keep doing it. // and there is no downside to this, so we just keep doing it.
if runtime.GOOS == "darwin" { if runtime.GOOS == "darwin" {
ld = append(ld, "-s") ld = append(ld, "-s")
} }
// Enforce the stacksize to 8M, which is the case on most platforms apart from
// alpine Linux.
if runtime.GOOS == "linux" { if runtime.GOOS == "linux" {
// Enforce the stacksize to 8M, which is the case on most platforms apart from ld = append(ld, "-extldflags", "-Wl,-z,stack-size=0x800000")
// alpine Linux.
extld := []string{"-Wl,-z,stack-size=0x800000"}
if staticLinking {
extld = append(extld, "-static")
// Under static linking, use of certain glibc features must be
// disabled to avoid shared library dependencies.
buildTags = append(buildTags, "osusergo", "netgo")
}
ld = append(ld, "-extldflags", "'"+strings.Join(extld, " ")+"'")
} }
if len(ld) > 0 { if len(ld) > 0 {
flags = append(flags, "-ldflags", strings.Join(ld, " ")) flags = append(flags, "-ldflags", strings.Join(ld, " "))
} }
if len(buildTags) > 0 {
flags = append(flags, "-tags", strings.Join(buildTags, ","))
}
return flags return flags
} }
@ -341,21 +332,16 @@ func doLint(cmdline []string) {
// downloadLinter downloads and unpacks golangci-lint. // downloadLinter downloads and unpacks golangci-lint.
func downloadLinter(cachedir string) string { func downloadLinter(cachedir string) string {
const version = "1.51.1" const version = "1.42.0"
csdb := build.MustLoadChecksums("build/checksums.txt") csdb := build.MustLoadChecksums("build/checksums.txt")
arch := runtime.GOARCH arch := runtime.GOARCH
ext := ".tar.gz"
if runtime.GOOS == "windows" {
ext = ".zip"
}
if arch == "arm" { if arch == "arm" {
arch += "v" + os.Getenv("GOARM") arch += "v" + os.Getenv("GOARM")
} }
base := fmt.Sprintf("golangci-lint-%s-%s-%s", version, runtime.GOOS, arch) base := fmt.Sprintf("golangci-lint-%s-%s-%s", version, runtime.GOOS, arch)
url := fmt.Sprintf("https://github.com/golangci/golangci-lint/releases/download/v%s/%s%s", version, base, ext) url := fmt.Sprintf("https://github.com/golangci/golangci-lint/releases/download/v%s/%s.tar.gz", version, base)
archivePath := filepath.Join(cachedir, base+ext) archivePath := filepath.Join(cachedir, base+".tar.gz")
if err := csdb.DownloadFile(url, archivePath); err != nil { if err := csdb.DownloadFile(url, archivePath); err != nil {
log.Fatal(err) log.Fatal(err)
} }
@ -465,6 +451,10 @@ func maybeSkipArchive(env build.Environment) {
log.Printf("skipping archive creation because this is a PR build") log.Printf("skipping archive creation because this is a PR build")
os.Exit(0) os.Exit(0)
} }
if env.IsCronJob {
log.Printf("skipping archive creation because this is a cron job")
os.Exit(0)
}
if env.Branch != "master" && !strings.HasPrefix(env.Tag, "v1.") { if env.Branch != "master" && !strings.HasPrefix(env.Tag, "v1.") {
log.Printf("skipping archive creation because branch %q, tag %q is not on the inclusion list", env.Branch, env.Tag) log.Printf("skipping archive creation because branch %q, tag %q is not on the inclusion list", env.Branch, env.Tag)
os.Exit(0) os.Exit(0)
@ -598,7 +588,7 @@ func doDocker(cmdline []string) {
} }
if mismatch { if mismatch {
// Build numbers mismatching, retry in a short time to // Build numbers mismatching, retry in a short time to
// avoid concurrent fails in both publisher images. If // avoid concurrent failes in both publisher images. If
// however the retry failed too, it means the concurrent // however the retry failed too, it means the concurrent
// builder is still crunching, let that do the publish. // builder is still crunching, let that do the publish.
if i == 0 { if i == 0 {
@ -659,11 +649,10 @@ func doDebianSource(cmdline []string) {
gpg.Stdin = bytes.NewReader(key) gpg.Stdin = bytes.NewReader(key)
build.MustRun(gpg) build.MustRun(gpg)
} }
// Download and verify the Go source packages.
var ( // Download and verify the Go source package.
gobootbundle = downloadGoBootstrapSources(*cachedir) gobundle := downloadGoSources(*cachedir)
gobundle = downloadGoSources(*cachedir)
)
// Download all the dependencies needed to build the sources and run the ci script // Download all the dependencies needed to build the sources and run the ci script
srcdepfetch := tc.Go("mod", "download") srcdepfetch := tc.Go("mod", "download")
srcdepfetch.Env = append(srcdepfetch.Env, "GOPATH="+filepath.Join(*workdir, "modgopath")) srcdepfetch.Env = append(srcdepfetch.Env, "GOPATH="+filepath.Join(*workdir, "modgopath"))
@ -680,19 +669,12 @@ func doDebianSource(cmdline []string) {
meta := newDebMetadata(distro, goboot, *signer, env, now, pkg.Name, pkg.Version, pkg.Executables) meta := newDebMetadata(distro, goboot, *signer, env, now, pkg.Name, pkg.Version, pkg.Executables)
pkgdir := stageDebianSource(*workdir, meta) pkgdir := stageDebianSource(*workdir, meta)
// Add bootstrapper Go source code // Add Go source code
if err := build.ExtractArchive(gobootbundle, pkgdir); err != nil {
log.Fatalf("Failed to extract bootstrapper Go sources: %v", err)
}
if err := os.Rename(filepath.Join(pkgdir, "go"), filepath.Join(pkgdir, ".goboot")); err != nil {
log.Fatalf("Failed to rename bootstrapper Go source folder: %v", err)
}
// Add builder Go source code
if err := build.ExtractArchive(gobundle, pkgdir); err != nil { if err := build.ExtractArchive(gobundle, pkgdir); err != nil {
log.Fatalf("Failed to extract builder Go sources: %v", err) log.Fatalf("Failed to extract Go sources: %v", err)
} }
if err := os.Rename(filepath.Join(pkgdir, "go"), filepath.Join(pkgdir, ".go")); err != nil { if err := os.Rename(filepath.Join(pkgdir, "go"), filepath.Join(pkgdir, ".go")); err != nil {
log.Fatalf("Failed to rename builder Go source folder: %v", err) log.Fatalf("Failed to rename Go source folder: %v", err)
} }
// Add all dependency modules in compressed form // Add all dependency modules in compressed form
os.MkdirAll(filepath.Join(pkgdir, ".mod", "cache"), 0755) os.MkdirAll(filepath.Join(pkgdir, ".mod", "cache"), 0755)
@ -721,19 +703,6 @@ func doDebianSource(cmdline []string) {
} }
} }
// downloadGoBootstrapSources downloads the Go source tarball that will be used
// to bootstrap the builder Go.
func downloadGoBootstrapSources(cachedir string) string {
csdb := build.MustLoadChecksums("build/checksums.txt")
file := fmt.Sprintf("go%s.src.tar.gz", gobootVersion)
url := "https://dl.google.com/go/" + file
dst := filepath.Join(cachedir, file)
if err := csdb.DownloadFile(url, dst); err != nil {
log.Fatal(err)
}
return dst
}
// downloadGoSources downloads the Go source tarball. // downloadGoSources downloads the Go source tarball.
func downloadGoSources(cachedir string) string { func downloadGoSources(cachedir string) string {
csdb := build.MustLoadChecksums("build/checksums.txt") csdb := build.MustLoadChecksums("build/checksums.txt")
@ -759,8 +728,8 @@ func ppaUpload(workdir, ppa, sshUser string, files []string) {
var idfile string var idfile string
if sshkey := getenvBase64("PPA_SSH_KEY"); len(sshkey) > 0 { if sshkey := getenvBase64("PPA_SSH_KEY"); len(sshkey) > 0 {
idfile = filepath.Join(workdir, "sshkey") idfile = filepath.Join(workdir, "sshkey")
if !common.FileExist(idfile) { if _, err := os.Stat(idfile); os.IsNotExist(err) {
os.WriteFile(idfile, sshkey, 0600) ioutil.WriteFile(idfile, sshkey, 0600)
} }
} }
// Upload // Upload
@ -783,7 +752,7 @@ func makeWorkdir(wdflag string) string {
if wdflag != "" { if wdflag != "" {
err = os.MkdirAll(wdflag, 0744) err = os.MkdirAll(wdflag, 0744)
} else { } else {
wdflag, err = os.MkdirTemp("", "geth-build-") wdflag, err = ioutil.TempDir("", "geth-build-")
} }
if err != nil { if err != nil {
log.Fatal(err) log.Fatal(err)
@ -981,10 +950,10 @@ func doWindowsInstaller(cmdline []string) {
build.Render("build/nsis.pathupdate.nsh", filepath.Join(*workdir, "PathUpdate.nsh"), 0644, nil) build.Render("build/nsis.pathupdate.nsh", filepath.Join(*workdir, "PathUpdate.nsh"), 0644, nil)
build.Render("build/nsis.envvarupdate.nsh", filepath.Join(*workdir, "EnvVarUpdate.nsh"), 0644, nil) build.Render("build/nsis.envvarupdate.nsh", filepath.Join(*workdir, "EnvVarUpdate.nsh"), 0644, nil)
if err := cp.CopyFile(filepath.Join(*workdir, "SimpleFC.dll"), "build/nsis.simplefc.dll"); err != nil { if err := cp.CopyFile(filepath.Join(*workdir, "SimpleFC.dll"), "build/nsis.simplefc.dll"); err != nil {
log.Fatalf("Failed to copy SimpleFC.dll: %v", err) log.Fatal("Failed to copy SimpleFC.dll: %v", err)
} }
if err := cp.CopyFile(filepath.Join(*workdir, "COPYING"), "COPYING"); err != nil { if err := cp.CopyFile(filepath.Join(*workdir, "COPYING"), "COPYING"); err != nil {
log.Fatalf("Failed to copy copyright note: %v", err) log.Fatal("Failed to copy copyright note: %v", err)
} }
// Build the installer. This assumes that all the needed files have been previously // Build the installer. This assumes that all the needed files have been previously
// built (don't mix building and packaging to keep cross compilation complexity to a // built (don't mix building and packaging to keep cross compilation complexity to a
@ -993,10 +962,7 @@ func doWindowsInstaller(cmdline []string) {
if env.Commit != "" { if env.Commit != "" {
version[2] += "-" + env.Commit[:8] version[2] += "-" + env.Commit[:8]
} }
installer, err := filepath.Abs("geth-" + archiveBasename(*arch, params.ArchiveVersion(env.Commit)) + ".exe") installer, _ := filepath.Abs("geth-" + archiveBasename(*arch, params.ArchiveVersion(env.Commit)) + ".exe")
if err != nil {
log.Fatalf("Failed to convert installer file path: %v", err)
}
build.MustRunCommand("makensis.exe", build.MustRunCommand("makensis.exe",
"/DOUTPUTFILE="+installer, "/DOUTPUTFILE="+installer,
"/DMAJORVERSION="+version[0], "/DMAJORVERSION="+version[0],
@ -1011,6 +977,240 @@ func doWindowsInstaller(cmdline []string) {
} }
} }
// Android archives
func doAndroidArchive(cmdline []string) {
var (
local = flag.Bool("local", false, `Flag whether we're only doing a local build (skip Maven artifacts)`)
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. ANDROID_SIGNING_KEY)`)
signify = flag.String("signify", "", `Environment variable holding the signify signing key (e.g. ANDROID_SIGNIFY_KEY)`)
deploy = flag.String("deploy", "", `Destination to deploy the archive (usually "https://oss.sonatype.org")`)
upload = flag.String("upload", "", `Destination to upload the archive (usually "gethstore/builds")`)
)
flag.CommandLine.Parse(cmdline)
env := build.Env()
tc := new(build.GoToolchain)
// Sanity check that the SDK and NDK are installed and set
if os.Getenv("ANDROID_HOME") == "" {
log.Fatal("Please ensure ANDROID_HOME points to your Android SDK")
}
// Build gomobile.
install := tc.Install(GOBIN, "golang.org/x/mobile/cmd/gomobile@latest", "golang.org/x/mobile/cmd/gobind@latest")
install.Env = append(install.Env)
build.MustRun(install)
// Ensure all dependencies are available. This is required to make
// gomobile bind work because it expects go.sum to contain all checksums.
build.MustRun(tc.Go("mod", "download"))
// Build the Android archive and Maven resources
build.MustRun(gomobileTool("bind", "-ldflags", "-s -w", "--target", "android", "--javapkg", "org.ethereum", "-v", "github.com/ethereum/go-ethereum/mobile"))
if *local {
// If we're building locally, copy bundle to build dir and skip Maven
os.Rename("geth.aar", filepath.Join(GOBIN, "geth.aar"))
os.Rename("geth-sources.jar", filepath.Join(GOBIN, "geth-sources.jar"))
return
}
meta := newMavenMetadata(env)
build.Render("build/mvn.pom", meta.Package+".pom", 0755, meta)
// Skip Maven deploy and Azure upload for PR builds
maybeSkipArchive(env)
// Sign and upload the archive to Azure
archive := "geth-" + archiveBasename("android", params.ArchiveVersion(env.Commit)) + ".aar"
os.Rename("geth.aar", archive)
if err := archiveUpload(archive, *upload, *signer, *signify); err != nil {
log.Fatal(err)
}
// Sign and upload all the artifacts to Maven Central
os.Rename(archive, meta.Package+".aar")
if *signer != "" && *deploy != "" {
// Import the signing key into the local GPG instance
key := getenvBase64(*signer)
gpg := exec.Command("gpg", "--import")
gpg.Stdin = bytes.NewReader(key)
build.MustRun(gpg)
keyID, err := build.PGPKeyID(string(key))
if err != nil {
log.Fatal(err)
}
// Upload the artifacts to Sonatype and/or Maven Central
repo := *deploy + "/service/local/staging/deploy/maven2"
if meta.Develop {
repo = *deploy + "/content/repositories/snapshots"
}
build.MustRunCommand("mvn", "gpg:sign-and-deploy-file", "-e", "-X",
"-settings=build/mvn.settings", "-Durl="+repo, "-DrepositoryId=ossrh",
"-Dgpg.keyname="+keyID,
"-DpomFile="+meta.Package+".pom", "-Dfile="+meta.Package+".aar")
}
}
func gomobileTool(subcmd string, args ...string) *exec.Cmd {
cmd := exec.Command(filepath.Join(GOBIN, "gomobile"), subcmd)
cmd.Args = append(cmd.Args, args...)
cmd.Env = []string{
"PATH=" + GOBIN + string(os.PathListSeparator) + os.Getenv("PATH"),
}
for _, e := range os.Environ() {
if strings.HasPrefix(e, "GOPATH=") || strings.HasPrefix(e, "PATH=") || strings.HasPrefix(e, "GOBIN=") {
continue
}
cmd.Env = append(cmd.Env, e)
}
cmd.Env = append(cmd.Env, "GOBIN="+GOBIN)
return cmd
}
type mavenMetadata struct {
Version string
Package string
Develop bool
Contributors []mavenContributor
}
type mavenContributor struct {
Name string
Email string
}
func newMavenMetadata(env build.Environment) mavenMetadata {
// Collect the list of authors from the repo root
contribs := []mavenContributor{}
if authors, err := os.Open("AUTHORS"); err == nil {
defer authors.Close()
scanner := bufio.NewScanner(authors)
for scanner.Scan() {
// Skip any whitespace from the authors list
line := strings.TrimSpace(scanner.Text())
if line == "" || line[0] == '#' {
continue
}
// Split the author and insert as a contributor
re := regexp.MustCompile("([^<]+) <(.+)>")
parts := re.FindStringSubmatch(line)
if len(parts) == 3 {
contribs = append(contribs, mavenContributor{Name: parts[1], Email: parts[2]})
}
}
}
// Render the version and package strings
version := params.Version
if isUnstableBuild(env) {
version += "-SNAPSHOT"
}
return mavenMetadata{
Version: version,
Package: "geth-" + version,
Develop: isUnstableBuild(env),
Contributors: contribs,
}
}
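Illustrative sketch, not part of this diff: newMavenMetadata above and newPodMetadata below parse AUTHORS entries of the form "Name <email>" with the same regular expression; a standalone version with a hypothetical entry:
package main

import (
	"fmt"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`([^<]+) <(.+)>`)
	line := "Jane Doe <jane@example.org>" // hypothetical AUTHORS line
	if parts := re.FindStringSubmatch(line); len(parts) == 3 {
		fmt.Printf("name=%q email=%q\n", parts[1], parts[2]) // name="Jane Doe" email="jane@example.org"
	}
}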
// XCode frameworks
func doXCodeFramework(cmdline []string) {
var (
local = flag.Bool("local", false, `Flag whether we're only doing a local build (skip Maven artifacts)`)
signer = flag.String("signer", "", `Environment variable holding the signing key (e.g. IOS_SIGNING_KEY)`)
signify = flag.String("signify", "", `Environment variable holding the signify signing key (e.g. IOS_SIGNIFY_KEY)`)
deploy = flag.String("deploy", "", `Destination to deploy the archive (usually "trunk")`)
upload = flag.String("upload", "", `Destination to upload the archives (usually "gethstore/builds")`)
)
flag.CommandLine.Parse(cmdline)
env := build.Env()
tc := new(build.GoToolchain)
// Build gomobile.
build.MustRun(tc.Install(GOBIN, "golang.org/x/mobile/cmd/gomobile@latest", "golang.org/x/mobile/cmd/gobind@latest"))
// Ensure all dependencies are available. This is required to make
// gomobile bind work because it expects go.sum to contain all checksums.
build.MustRun(tc.Go("mod", "download"))
// Build the iOS XCode framework
bind := gomobileTool("bind", "-ldflags", "-s -w", "--target", "ios", "-v", "github.com/ethereum/go-ethereum/mobile")
if *local {
// If we're building locally, use the build folder and stop afterwards
bind.Dir = GOBIN
build.MustRun(bind)
return
}
// Create the archive.
maybeSkipArchive(env)
archive := "geth-" + archiveBasename("ios", params.ArchiveVersion(env.Commit))
if err := os.MkdirAll(archive, 0755); err != nil {
log.Fatal(err)
}
bind.Dir, _ = filepath.Abs(archive)
build.MustRun(bind)
build.MustRunCommand("tar", "-zcvf", archive+".tar.gz", archive)
// Sign and upload the framework to Azure
if err := archiveUpload(archive+".tar.gz", *upload, *signer, *signify); err != nil {
log.Fatal(err)
}
// Prepare and upload a PodSpec to CocoaPods
if *deploy != "" {
meta := newPodMetadata(env, archive)
build.Render("build/pod.podspec", "Geth.podspec", 0755, meta)
build.MustRunCommand("pod", *deploy, "push", "Geth.podspec", "--allow-warnings")
}
}
type podMetadata struct {
Version string
Commit string
Archive string
Contributors []podContributor
}
type podContributor struct {
Name string
Email string
}
func newPodMetadata(env build.Environment, archive string) podMetadata {
// Collect the list of authors from the repo root
contribs := []podContributor{}
if authors, err := os.Open("AUTHORS"); err == nil {
defer authors.Close()
scanner := bufio.NewScanner(authors)
for scanner.Scan() {
// Skip any whitespace from the authors list
line := strings.TrimSpace(scanner.Text())
if line == "" || line[0] == '#' {
continue
}
// Split the author and insert as a contributor
re := regexp.MustCompile("([^<]+) <(.+)>")
parts := re.FindStringSubmatch(line)
if len(parts) == 3 {
contribs = append(contribs, podContributor{Name: parts[1], Email: parts[2]})
}
}
}
version := params.Version
if isUnstableBuild(env) {
version += "-unstable." + env.Buildnum
}
return podMetadata{
Archive: archive,
Version: version,
Commit: env.Commit,
Contributors: contribs,
}
}
// Binary distribution cleanups // Binary distribution cleanups
func doPurge(cmdline []string) { func doPurge(cmdline []string) {
@ -1038,21 +1238,21 @@ func doPurge(cmdline []string) {
// Iterate over the blobs, collect and sort all unstable builds // Iterate over the blobs, collect and sort all unstable builds
for i := 0; i < len(blobs); i++ { for i := 0; i < len(blobs); i++ {
if !strings.Contains(*blobs[i].Name, "unstable") { if !strings.Contains(blobs[i].Name, "unstable") {
blobs = append(blobs[:i], blobs[i+1:]...) blobs = append(blobs[:i], blobs[i+1:]...)
i-- i--
} }
} }
for i := 0; i < len(blobs); i++ { for i := 0; i < len(blobs); i++ {
for j := i + 1; j < len(blobs); j++ { for j := i + 1; j < len(blobs); j++ {
if blobs[i].Properties.LastModified.After(*blobs[j].Properties.LastModified) { if blobs[i].Properties.LastModified.After(blobs[j].Properties.LastModified) {
blobs[i], blobs[j] = blobs[j], blobs[i] blobs[i], blobs[j] = blobs[j], blobs[i]
} }
} }
} }
// Filter out all archives more recent than the given threshold // Filter out all archives more recent than the given threshold
for i, blob := range blobs { for i, blob := range blobs {
if time.Since(*blob.Properties.LastModified) < time.Duration(*limit)*24*time.Hour { if time.Since(blob.Properties.LastModified) < time.Duration(*limit)*24*time.Hour {
blobs = blobs[:i] blobs = blobs[:i]
break break
} }
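Illustrative sketch, not part of this diff: doPurge above keeps only blobs whose name contains "unstable", orders them by modification time, and drops everything younger than the retention window. A simplified, self-contained version of that selection logic, using a local struct instead of the Azure blob type:
package main

import (
	"fmt"
	"sort"
	"strings"
	"time"
)

type blobInfo struct {
	Name         string
	LastModified time.Time
}

// purgeCandidates returns the unstable blobs older than limitDays, oldest first.
func purgeCandidates(blobs []blobInfo, limitDays int) []blobInfo {
	var unstable []blobInfo
	for _, b := range blobs {
		if strings.Contains(b.Name, "unstable") {
			unstable = append(unstable, b)
		}
	}
	sort.Slice(unstable, func(i, j int) bool {
		return unstable[i].LastModified.Before(unstable[j].LastModified)
	})
	cutoff := time.Now().Add(-time.Duration(limitDays) * 24 * time.Hour)
	var out []blobInfo
	for _, b := range unstable {
		if b.LastModified.Before(cutoff) {
			out = append(out, b)
		}
	}
	return out
}

func main() {
	fmt.Println(len(purgeCandidates(nil, 30))) // 0
}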

View File

@ -1,16 +0,0 @@
_geth_bash_autocomplete() {
if [[ "${COMP_WORDS[0]}" != "source" ]]; then
local cur opts base
COMPREPLY=()
cur="${COMP_WORDS[COMP_CWORD]}"
if [[ "$cur" == "-"* ]]; then
opts=$( ${COMP_WORDS[@]:0:$COMP_CWORD} ${cur} --generate-bash-completion )
else
opts=$( ${COMP_WORDS[@]:0:$COMP_CWORD} --generate-bash-completion )
fi
COMPREPLY=( $(compgen -W "${opts}" -- ${cur}) )
return 0
fi
}
complete -o bashdefault -o default -o nospace -F _geth_bash_autocomplete geth

View File

@ -1,18 +0,0 @@
_geth_zsh_autocomplete() {
local -a opts
local cur
cur=${words[-1]}
if [[ "$cur" == "-"* ]]; then
opts=("${(@f)$(${words[@]:0:#words[@]-1} ${cur} --generate-bash-completion)}")
else
opts=("${(@f)$(${words[@]:0:#words[@]-1} --generate-bash-completion)}")
fi
if [[ "${opts[1]}" != "" ]]; then
_describe 'values' opts
else
_files
fi
}
compdef _geth_zsh_autocomplete geth

View File

@ -5,7 +5,7 @@ Maintainer: {{.Author}}
Build-Depends: debhelper (>= 8.0.0), {{.GoBootPackage}} Build-Depends: debhelper (>= 8.0.0), {{.GoBootPackage}}
Standards-Version: 3.9.5 Standards-Version: 3.9.5
Homepage: https://ethereum.org Homepage: https://ethereum.org
Vcs-Git: https://github.com/ethereum/go-ethereum.git Vcs-Git: git://github.com/ethereum/go-ethereum.git
Vcs-Browser: https://github.com/ethereum/go-ethereum Vcs-Browser: https://github.com/ethereum/go-ethereum
Package: {{.Name}} Package: {{.Name}}

View File

@ -1,5 +1 @@
build/bin/{{.BinaryName}} usr/bin build/bin/{{.BinaryName}} usr/bin
{{- if eq .BinaryName "geth" }}
build/deb/ethereum/completions/bash/geth etc/bash_completion.d
build/deb/ethereum/completions/zsh/_geth usr/share/zsh/vendor-completions
{{end -}}

View File

@ -16,11 +16,7 @@ override_dh_auto_build:
# We can't download a fresh Go within Launchpad, so we're shipping and building # We can't download a fresh Go within Launchpad, so we're shipping and building
# one on the fly. However, we can't build it inside the go-ethereum folder as # one on the fly. However, we can't build it inside the go-ethereum folder as
# bootstrapping clashes with go modules, so build in a sibling folder. # bootstrapping clashes with go modules, so build in a sibling folder.
# (mv .go ../ && cd ../.go/src && ./make.bash)
# We're also shipping the bootstrapper as of Go 1.20, since it has a minimum
# bootstrap Go version requirement, unlike older versions of Go.
(mv .goboot ../ && cd ../.goboot/src && ./make.bash)
(mv .go ../ && cd ../.go/src && GOROOT_BOOTSTRAP=`pwd`/../../.goboot ./make.bash)
# We can't download external go modules within Launchpad, so we're shipping the # We can't download external go modules within Launchpad, so we're shipping the
# entire dependency source cache with go-ethereum. # entire dependency source cache with go-ethereum.

build/mvn.pom Normal file (57 lines)
View File

@ -0,0 +1,57 @@
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.ethereum</groupId>
<artifactId>geth</artifactId>
<version>{{.Version}}</version>
<packaging>aar</packaging>
<name>Android Ethereum Client</name>
<description>Android port of the go-ethereum libraries and node</description>
<url>https://github.com/ethereum/go-ethereum</url>
<inceptionYear>2015</inceptionYear>
<licenses>
<license>
<name>GNU Lesser General Public License, Version 3.0</name>
<url>https://www.gnu.org/licenses/lgpl-3.0.en.html</url>
<distribution>repo</distribution>
</license>
</licenses>
<organization>
<name>Ethereum</name>
<url>https://ethereum.org</url>
</organization>
<developers>
<developer>
<id>karalabe</id>
<name>Péter Szilágyi</name>
<email>peterke@gmail.com</email>
<url>https://github.com/karalabe</url>
<properties>
<picUrl>https://www.gravatar.com/avatar/2ecbf0f5b4b79eebf8c193e5d324357f?s=256</picUrl>
</properties>
</developer>
</developers>
<contributors>{{range .Contributors}}
<contributor>
<name>{{.Name}}</name>
<email>{{.Email}}</email>
</contributor>{{end}}
</contributors>
<issueManagement>
<system>GitHub Issues</system>
<url>https://github.com/ethereum/go-ethereum/issues/</url>
</issueManagement>
<scm>
<url>https://github.com/ethereum/go-ethereum</url>
</scm>
</project>

build/mvn.settings Normal file (24 lines)
View File

@ -0,0 +1,24 @@
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
http://maven.apache.org/xsd/settings-1.0.0.xsd">
<servers>
<server>
<id>ossrh</id>
<username>${env.ANDROID_SONATYPE_USERNAME}</username>
<password>${env.ANDROID_SONATYPE_PASSWORD}</password>
</server>
</servers>
<profiles>
<profile>
<id>ossrh</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<properties>
<gpg.executable>gpg</gpg.executable>
<gpg.passphrase></gpg.passphrase>
</properties>
</profile>
</profiles>
</settings>

build/pod.podspec Normal file (22 lines)
View File

@ -0,0 +1,22 @@
Pod::Spec.new do |spec|
spec.name = 'Geth'
spec.version = '{{.Version}}'
spec.license = { :type => 'GNU Lesser General Public License, Version 3.0' }
spec.homepage = 'https://github.com/ethereum/go-ethereum'
spec.authors = { {{range .Contributors}}
'{{.Name}}' => '{{.Email}}',{{end}}
}
spec.summary = 'iOS Ethereum Client'
spec.source = { :git => 'https://github.com/ethereum/go-ethereum.git', :commit => '{{.Commit}}' }
spec.platform = :ios
spec.ios.deployment_target = '9.0'
spec.ios.vendored_frameworks = 'Frameworks/Geth.framework'
spec.prepare_command = <<-CMD
curl https://gethstore.blob.core.windows.net/builds/{{.Archive}}.tar.gz | tar -xvz
mkdir Frameworks
mv {{.Archive}}/Geth.framework Frameworks
rm -rf {{.Archive}}
CMD
end

View File

@ -14,7 +14,6 @@
// You should have received a copy of the GNU Lesser General Public License // You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>. // along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
//go:build none
// +build none // +build none
/* /*
@ -40,6 +39,7 @@ import (
"bufio" "bufio"
"bytes" "bytes"
"fmt" "fmt"
"io/ioutil"
"log" "log"
"os" "os"
"os/exec" "os/exec"
@ -68,9 +68,7 @@ var (
"common/bitutil/bitutil", "common/bitutil/bitutil",
"common/prque/", "common/prque/",
"consensus/ethash/xor.go", "consensus/ethash/xor.go",
"crypto/blake2b/",
"crypto/bn256/", "crypto/bn256/",
"crypto/bls12381/",
"crypto/ecies/", "crypto/ecies/",
"graphql/graphiql.go", "graphql/graphiql.go",
"internal/jsre/deps", "internal/jsre/deps",
@ -243,7 +241,7 @@ func gitAuthors(files []string) []string {
} }
func readAuthors() []string { func readAuthors() []string {
content, err := os.ReadFile("AUTHORS") content, err := ioutil.ReadFile("AUTHORS")
if err != nil && !os.IsNotExist(err) { if err != nil && !os.IsNotExist(err) {
log.Fatalln("error reading AUTHORS:", err) log.Fatalln("error reading AUTHORS:", err)
} }
@ -307,7 +305,7 @@ func writeAuthors(files []string) {
content.WriteString("\n") content.WriteString("\n")
} }
fmt.Println("writing AUTHORS") fmt.Println("writing AUTHORS")
if err := os.WriteFile("AUTHORS", content.Bytes(), 0644); err != nil { if err := ioutil.WriteFile("AUTHORS", content.Bytes(), 0644); err != nil {
log.Fatalln(err) log.Fatalln(err)
} }
} }
@ -342,10 +340,7 @@ func isGenerated(file string) bool {
} }
defer fd.Close() defer fd.Close()
buf := make([]byte, 2048) buf := make([]byte, 2048)
n, err := fd.Read(buf) n, _ := fd.Read(buf)
if err != nil {
return false
}
buf = buf[:n] buf = buf[:n]
for _, l := range bytes.Split(buf, []byte("\n")) { for _, l := range bytes.Split(buf, []byte("\n")) {
if bytes.HasPrefix(l, []byte("// Code generated")) { if bytes.HasPrefix(l, []byte("// Code generated")) {
@ -386,7 +381,7 @@ func writeLicense(info *info) {
if err != nil { if err != nil {
log.Fatalf("error stat'ing %s: %v\n", info.file, err) log.Fatalf("error stat'ing %s: %v\n", info.file, err)
} }
content, err := os.ReadFile(info.file) content, err := ioutil.ReadFile(info.file)
if err != nil { if err != nil {
log.Fatalf("error reading %s: %v\n", info.file, err) log.Fatalf("error reading %s: %v\n", info.file, err)
} }
@ -405,7 +400,7 @@ func writeLicense(info *info) {
return return
} }
fmt.Println("writing", info.ShortLicense(), info.file) fmt.Println("writing", info.ShortLicense(), info.file)
if err := os.WriteFile(info.file, buf.Bytes(), fi.Mode()); err != nil { if err := ioutil.WriteFile(info.file, buf.Bytes(), fi.Mode()); err != nil {
log.Fatalf("error writing %s: %v", info.file, err) log.Fatalf("error writing %s: %v", info.file, err)
} }
} }

View File

@ -1,18 +1,18 @@
// Copyright 2020 The go-ethereum Authors // Copyright 2019 The go-ethereum Authors
// This file is part of go-ethereum. // This file is part of the go-ethereum library.
// //
// go-ethereum is free software: you can redistribute it and/or modify // The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by // it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or // the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version. // (at your option) any later version.
// //
// go-ethereum is distributed in the hope that it will be useful, // The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of // but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details. // GNU Lesser General Public License for more details.
// //
// You should have received a copy of the GNU General Public License // You should have received a copy of the GNU Lesser General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>. // along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package main package main

View File

@ -19,91 +19,124 @@ package main
import ( import (
"encoding/json" "encoding/json"
"fmt" "fmt"
"io" "io/ioutil"
"os" "os"
"path/filepath"
"regexp" "regexp"
"strings" "strings"
"github.com/ethereum/go-ethereum/accounts/abi"
"github.com/ethereum/go-ethereum/accounts/abi/bind" "github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/cmd/utils" "github.com/ethereum/go-ethereum/cmd/utils"
"github.com/ethereum/go-ethereum/common/compiler" "github.com/ethereum/go-ethereum/common/compiler"
"github.com/ethereum/go-ethereum/crypto" "github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/internal/flags" "github.com/ethereum/go-ethereum/internal/flags"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
"github.com/urfave/cli/v2" "gopkg.in/urfave/cli.v1"
) )
var ( var (
// Git SHA1 commit hash of the release (set via linker flags)
gitCommit = ""
gitDate = ""
app *cli.App
// Flags needed by abigen // Flags needed by abigen
abiFlag = &cli.StringFlag{ abiFlag = cli.StringFlag{
Name: "abi", Name: "abi",
Usage: "Path to the Ethereum contract ABI json to bind, - for STDIN", Usage: "Path to the Ethereum contract ABI json to bind, - for STDIN",
} }
binFlag = &cli.StringFlag{ binFlag = cli.StringFlag{
Name: "bin", Name: "bin",
Usage: "Path to the Ethereum contract bytecode (generate deploy method)", Usage: "Path to the Ethereum contract bytecode (generate deploy method)",
} }
typeFlag = &cli.StringFlag{ typeFlag = cli.StringFlag{
Name: "type", Name: "type",
Usage: "Struct name for the binding (default = package name)", Usage: "Struct name for the binding (default = package name)",
} }
jsonFlag = &cli.StringFlag{ jsonFlag = cli.StringFlag{
Name: "combined-json", Name: "combined-json",
Usage: "Path to the combined-json file generated by compiler, - for STDIN", Usage: "Path to the combined-json file generated by compiler",
} }
excFlag = &cli.StringFlag{ solFlag = cli.StringFlag{
Name: "sol",
Usage: "Path to the Ethereum contract Solidity source to build and bind",
}
solcFlag = cli.StringFlag{
Name: "solc",
Usage: "Solidity compiler to use if source builds are requested",
Value: "solc",
}
vyFlag = cli.StringFlag{
Name: "vy",
Usage: "Path to the Ethereum contract Vyper source to build and bind",
}
vyperFlag = cli.StringFlag{
Name: "vyper",
Usage: "Vyper compiler to use if source builds are requested",
Value: "vyper",
}
excFlag = cli.StringFlag{
Name: "exc", Name: "exc",
Usage: "Comma separated types to exclude from binding", Usage: "Comma separated types to exclude from binding",
} }
pkgFlag = &cli.StringFlag{ pkgFlag = cli.StringFlag{
Name: "pkg", Name: "pkg",
Usage: "Package name to generate the binding into", Usage: "Package name to generate the binding into",
} }
outFlag = &cli.StringFlag{ outFlag = cli.StringFlag{
Name: "out", Name: "out",
Usage: "Output file for the generated binding (default = stdout)", Usage: "Output file for the generated binding (default = stdout)",
} }
langFlag = &cli.StringFlag{ langFlag = cli.StringFlag{
Name: "lang", Name: "lang",
Usage: "Destination language for the bindings (go)", Usage: "Destination language for the bindings (go, java, objc)",
Value: "go", Value: "go",
} }
aliasFlag = &cli.StringFlag{ aliasFlag = cli.StringFlag{
Name: "alias", Name: "alias",
Usage: "Comma separated aliases for function and event renaming, e.g. original1=alias1, original2=alias2", Usage: "Comma separated aliases for function and event renaming, e.g. original1=alias1, original2=alias2",
} }
) )
var app = flags.NewApp("Ethereum ABI wrapper code generator")
func init() { func init() {
app.Name = "abigen" app = flags.NewApp(gitCommit, gitDate, "ethereum checkpoint helper tool")
app.Flags = []cli.Flag{ app.Flags = []cli.Flag{
abiFlag, abiFlag,
binFlag, binFlag,
typeFlag, typeFlag,
jsonFlag, jsonFlag,
solFlag,
solcFlag,
vyFlag,
vyperFlag,
excFlag, excFlag,
pkgFlag, pkgFlag,
outFlag, outFlag,
langFlag, langFlag,
aliasFlag, aliasFlag,
} }
app.Action = abigen app.Action = utils.MigrateFlags(abigen)
cli.CommandHelpTemplate = flags.OriginCommandHelpTemplate
} }
func abigen(c *cli.Context) error { func abigen(c *cli.Context) error {
utils.CheckExclusive(c, abiFlag, jsonFlag) // Only one source can be selected. utils.CheckExclusive(c, abiFlag, jsonFlag, solFlag, vyFlag) // Only one source can be selected.
if c.GlobalString(pkgFlag.Name) == "" {
if c.String(pkgFlag.Name) == "" {
utils.Fatalf("No destination package specified (--pkg)") utils.Fatalf("No destination package specified (--pkg)")
} }
var lang bind.Lang var lang bind.Lang
switch c.String(langFlag.Name) { switch c.GlobalString(langFlag.Name) {
case "go": case "go":
lang = bind.LangGo lang = bind.LangGo
case "java":
lang = bind.LangJava
case "objc":
lang = bind.LangObjC
utils.Fatalf("Objc binding generation is uncompleted")
default: default:
utils.Fatalf("Unsupported destination language \"%s\" (--lang)", c.String(langFlag.Name)) utils.Fatalf("Unsupported destination language \"%s\" (--lang)", c.GlobalString(langFlag.Name))
} }
// If the entire solidity code was specified, build and bind based on that // If the entire solidity code was specified, build and bind based on that
var ( var (
@ -114,17 +147,17 @@ func abigen(c *cli.Context) error {
libs = make(map[string]string) libs = make(map[string]string)
aliases = make(map[string]string) aliases = make(map[string]string)
) )
if c.String(abiFlag.Name) != "" { if c.GlobalString(abiFlag.Name) != "" {
// Load up the ABI, optional bytecode and type name from the parameters // Load up the ABI, optional bytecode and type name from the parameters
var ( var (
abi []byte abi []byte
err error err error
) )
input := c.String(abiFlag.Name) input := c.GlobalString(abiFlag.Name)
if input == "-" { if input == "-" {
abi, err = io.ReadAll(os.Stdin) abi, err = ioutil.ReadAll(os.Stdin)
} else { } else {
abi, err = os.ReadFile(input) abi, err = ioutil.ReadFile(input)
} }
if err != nil { if err != nil {
utils.Fatalf("Failed to read input ABI: %v", err) utils.Fatalf("Failed to read input ABI: %v", err)
@ -132,8 +165,8 @@ func abigen(c *cli.Context) error {
abis = append(abis, string(abi)) abis = append(abis, string(abi))
var bin []byte var bin []byte
if binFile := c.String(binFlag.Name); binFile != "" { if binFile := c.GlobalString(binFlag.Name); binFile != "" {
if bin, err = os.ReadFile(binFile); err != nil { if bin, err = ioutil.ReadFile(binFile); err != nil {
utils.Fatalf("Failed to read input bytecode: %v", err) utils.Fatalf("Failed to read input bytecode: %v", err)
} }
if strings.Contains(string(bin), "//") { if strings.Contains(string(bin), "//") {
@ -142,35 +175,47 @@ func abigen(c *cli.Context) error {
} }
bins = append(bins, string(bin)) bins = append(bins, string(bin))
kind := c.String(typeFlag.Name) kind := c.GlobalString(typeFlag.Name)
if kind == "" { if kind == "" {
kind = c.String(pkgFlag.Name) kind = c.GlobalString(pkgFlag.Name)
} }
types = append(types, kind) types = append(types, kind)
} else { } else {
// Generate the list of types to exclude from binding // Generate the list of types to exclude from binding
var exclude *nameFilter exclude := make(map[string]bool)
if c.IsSet(excFlag.Name) { for _, kind := range strings.Split(c.GlobalString(excFlag.Name), ",") {
var err error exclude[strings.ToLower(kind)] = true
if exclude, err = newNameFilter(strings.Split(c.String(excFlag.Name), ",")...); err != nil {
utils.Fatalf("Failed to parse excludes: %v", err)
}
} }
var err error
var contracts map[string]*compiler.Contract var contracts map[string]*compiler.Contract
if c.IsSet(jsonFlag.Name) { switch {
var ( case c.GlobalIsSet(solFlag.Name):
input = c.String(jsonFlag.Name) contracts, err = compiler.CompileSolidity(c.GlobalString(solcFlag.Name), c.GlobalString(solFlag.Name))
jsonOutput []byte
err error
)
if input == "-" {
jsonOutput, err = io.ReadAll(os.Stdin)
} else {
jsonOutput, err = os.ReadFile(input)
}
if err != nil { if err != nil {
utils.Fatalf("Failed to read combined-json: %v", err) utils.Fatalf("Failed to build Solidity contract: %v", err)
}
case c.GlobalIsSet(vyFlag.Name):
output, err := compiler.CompileVyper(c.GlobalString(vyperFlag.Name), c.GlobalString(vyFlag.Name))
if err != nil {
utils.Fatalf("Failed to build Vyper contract: %v", err)
}
contracts = make(map[string]*compiler.Contract)
for n, contract := range output {
name := n
// Sanitize the combined json names to match the
// format expected by solidity.
if !strings.Contains(n, ":") {
// Remove extra path components
name = abi.ToCamelCase(strings.TrimSuffix(filepath.Base(name), ".vy"))
}
contracts[name] = contract
}
case c.GlobalIsSet(jsonFlag.Name):
jsonOutput, err := ioutil.ReadFile(c.GlobalString(jsonFlag.Name))
if err != nil {
utils.Fatalf("Failed to read combined-json from compiler: %v", err)
} }
contracts, err = compiler.ParseCombinedJSON(jsonOutput, "", "", "", "") contracts, err = compiler.ParseCombinedJSON(jsonOutput, "", "", "", "")
if err != nil { if err != nil {
@ -179,11 +224,7 @@ func abigen(c *cli.Context) error {
} }
// Gather all non-excluded contract for binding // Gather all non-excluded contract for binding
for name, contract := range contracts { for name, contract := range contracts {
// fully qualified name is of the form <solFilePath>:<type> if exclude[strings.ToLower(name)] {
nameParts := strings.Split(name, ":")
typeName := nameParts[len(nameParts)-1]
if exclude != nil && exclude.Matches(name) {
fmt.Fprintf(os.Stderr, "excluding: %v\n", name)
continue continue
} }
abi, err := json.Marshal(contract.Info.AbiDefinition) // Flatten the compiler parse abi, err := json.Marshal(contract.Info.AbiDefinition) // Flatten the compiler parse
@ -193,39 +234,36 @@ func abigen(c *cli.Context) error {
abis = append(abis, string(abi)) abis = append(abis, string(abi))
bins = append(bins, contract.Code) bins = append(bins, contract.Code)
sigs = append(sigs, contract.Hashes) sigs = append(sigs, contract.Hashes)
types = append(types, typeName) nameParts := strings.Split(name, ":")
types = append(types, nameParts[len(nameParts)-1])
// Derive the library placeholder which is a 34 character prefix of the libPattern := crypto.Keccak256Hash([]byte(name)).String()[2:36]
// hex encoding of the keccak256 hash of the fully qualified library name. libs[libPattern] = nameParts[len(nameParts)-1]
// Note that the fully qualified library name is the path of its source
// file and the library name separated by ":".
libPattern := crypto.Keccak256Hash([]byte(name)).String()[2:36] // the first 2 chars are 0x
libs[libPattern] = typeName
} }
} }
// Extract all aliases from the flags // Extract all aliases from the flags
if c.IsSet(aliasFlag.Name) { if c.GlobalIsSet(aliasFlag.Name) {
// We support multi-versions for aliasing // We support multi-versions for aliasing
// e.g. // e.g.
// foo=bar,foo2=bar2 // foo=bar,foo2=bar2
// foo:bar,foo2:bar2 // foo:bar,foo2:bar2
re := regexp.MustCompile(`(?:(\w+)[:=](\w+))`) re := regexp.MustCompile(`(?:(\w+)[:=](\w+))`)
submatches := re.FindAllStringSubmatch(c.String(aliasFlag.Name), -1) submatches := re.FindAllStringSubmatch(c.GlobalString(aliasFlag.Name), -1)
for _, match := range submatches { for _, match := range submatches {
aliases[match[1]] = match[2] aliases[match[1]] = match[2]
} }
} }
// Generate the contract binding // Generate the contract binding
code, err := bind.Bind(types, abis, bins, sigs, c.String(pkgFlag.Name), lang, libs, aliases) code, err := bind.Bind(types, abis, bins, sigs, c.GlobalString(pkgFlag.Name), lang, libs, aliases)
if err != nil { if err != nil {
utils.Fatalf("Failed to generate ABI binding: %v", err) utils.Fatalf("Failed to generate ABI binding: %v", err)
} }
// Either flush it out to a file or display on the standard output // Either flush it out to a file or display on the standard output
if !c.IsSet(outFlag.Name) { if !c.GlobalIsSet(outFlag.Name) {
fmt.Printf("%s\n", code) fmt.Printf("%s\n", code)
return nil return nil
} }
if err := os.WriteFile(c.String(outFlag.Name), []byte(code), 0600); err != nil { if err := ioutil.WriteFile(c.GlobalString(outFlag.Name), []byte(code), 0600); err != nil {
utils.Fatalf("Failed to write ABI binding: %v", err) utils.Fatalf("Failed to write ABI binding: %v", err)
} }
return nil return nil
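The placeholder derivation in the hunk above (`crypto.Keccak256Hash([]byte(name)).String()[2:36]`) is easy to check in isolation. A minimal sketch, assuming the go-ethereum `crypto` package is available and using an invented fully qualified library name:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	// Fully qualified library name: <path of the source file>:<library name>.
	// The value below is only an illustration.
	name := "contracts/math/SafeMath.sol:SafeMath"

	// Keccak256Hash(...).String() returns "0x" followed by 64 hex characters;
	// skipping the "0x" prefix and keeping the next 34 characters yields the
	// placeholder that unlinked bytecode uses to refer to the library.
	placeholder := crypto.Keccak256Hash([]byte(name)).String()[2:36]

	fmt.Printf("library %s -> placeholder %s\n", name, placeholder)
}
```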

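For orientation, a typical invocation wiring these flags together might look like the sketch below. The flag names (`--combined-json`, `--pkg`, `--alias`, `--out`) follow the flag variables referenced in the hunk; the file names, the solc field list, and the alias pairs are placeholders, not taken from this diff.

```sh
# Compile with solc, then hand the combined JSON to abigen (names illustrative).
solc --combined-json abi,bin,bin-runtime,hashes,userdoc,devdoc,metadata Token.sol > token.json

# Aliases use the foo=bar,foo2=bar2 (or foo:bar) form matched by the regexp above.
abigen --combined-json token.json \
       --pkg token \
       --alias foo=bar,foo2=bar2 \
       --out token.go

# Without --out the generated binding is written to standard output instead.
```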

@@ -1,58 +0,0 @@
package main
import (
"fmt"
"strings"
)
type nameFilter struct {
fulls map[string]bool // path/to/contract.sol:Type
files map[string]bool // path/to/contract.sol:*
types map[string]bool // *:Type
}
func newNameFilter(patterns ...string) (*nameFilter, error) {
f := &nameFilter{
fulls: make(map[string]bool),
files: make(map[string]bool),
types: make(map[string]bool),
}
for _, pattern := range patterns {
if err := f.add(pattern); err != nil {
return nil, err
}
}
return f, nil
}
func (f *nameFilter) add(pattern string) error {
ft := strings.Split(pattern, ":")
if len(ft) != 2 {
// filenames and types must not include ':' symbol
return fmt.Errorf("invalid pattern: %s", pattern)
}
file, typ := ft[0], ft[1]
if file == "*" {
f.types[typ] = true
return nil
} else if typ == "*" {
f.files[file] = true
return nil
}
f.fulls[pattern] = true
return nil
}
func (f *nameFilter) Matches(name string) bool {
ft := strings.Split(name, ":")
if len(ft) != 2 {
// If contract names are always of the fully-qualified form
// <filePath>:<type>, then this case will never happen.
return false
}
file, typ := ft[0], ft[1]
// full paths > file paths > types
return f.fulls[name] || f.files[file] || f.types[typ]
}


@@ -1,38 +0,0 @@
package main
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestNameFilter(t *testing.T) {
_, err := newNameFilter("Foo")
require.Error(t, err)
_, err = newNameFilter("too/many:colons:Foo")
require.Error(t, err)
f, err := newNameFilter("a/path:A", "*:B", "c/path:*")
require.NoError(t, err)
for _, tt := range []struct {
name string
match bool
}{
{"a/path:A", true},
{"unknown/path:A", false},
{"a/path:X", false},
{"unknown/path:X", false},
{"any/path:B", true},
{"c/path:X", true},
{"c/path:foo:B", false},
} {
match := f.Matches(tt.name)
if tt.match {
assert.True(t, match, "expected match")
} else {
assert.False(t, match, "expected no match")
}
}
}
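The three pattern forms exercised by this test map onto abigen's exclude flag. A hedged sketch follows (the flag name `--exclude` is assumed from `excFlag`; the paths and type names are invented):

```sh
# Skip one specific contract, everything in one file, and every contract
# named Migrations regardless of file, using the newer path:Type filter.
abigen --combined-json build/combined.json \
       --exclude 'contracts/mocks/Mock.sol:MockToken,contracts/legacy/Old.sol:*,*:Migrations' \
       --pkg bindings \
       --out bindings.go
```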


@@ -40,7 +40,7 @@ func main() {
writeAddr = flag.Bool("writeaddress", false, "write out the node's public key and quit") writeAddr = flag.Bool("writeaddress", false, "write out the node's public key and quit")
nodeKeyFile = flag.String("nodekey", "", "private key filename") nodeKeyFile = flag.String("nodekey", "", "private key filename")
nodeKeyHex = flag.String("nodekeyhex", "", "private key as hex (for testing)") nodeKeyHex = flag.String("nodekeyhex", "", "private key as hex (for testing)")
natdesc = flag.String("nat", "none", "port mapping mechanism (any|none|upnp|pmp|pmp:<IP>|extip:<IP>)") natdesc = flag.String("nat", "none", "port mapping mechanism (any|none|upnp|pmp|extip:<IP>)")
netrestrict = flag.String("netrestrict", "", "restrict network communication to the given IP networks (CIDR masks)") netrestrict = flag.String("netrestrict", "", "restrict network communication to the given IP networks (CIDR masks)")
runv5 = flag.Bool("v5", false, "run a v5 topic discovery bootnode") runv5 = flag.Bool("v5", false, "run a v5 topic discovery bootnode")
verbosity = flag.Int("verbosity", int(log.LvlInfo), "log verbosity (0-5)") verbosity = flag.Int("verbosity", int(log.LvlInfo), "log verbosity (0-5)")
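A minimal way to exercise these flags is sketched below; `-genkey` and the external IP are assumptions not shown in this hunk, while the remaining flags come from the definitions above.

```sh
# Create a persistent node key, print the corresponding public key (enode ID),
# then run the bootnode behind a fixed external IP (address is a placeholder).
bootnode -genkey boot.key
bootnode -nodekey boot.key -writeaddress
bootnode -nodekey boot.key -nat extip:203.0.113.10 -verbosity 4
```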


@@ -46,7 +46,7 @@ Deploy checkpoint oracle contract. `--signers` indicates the specified trusted s
checkpoint-admin deploy --rpc <NODE_RPC_ENDPOINT> --clef <CLEF_ENDPOINT> --signer <SIGNER_TO_SIGN_TX> --signers <TRUSTED_SIGNER_LIST> --threshold 1 checkpoint-admin deploy --rpc <NODE_RPC_ENDPOINT> --clef <CLEF_ENDPOINT> --signer <SIGNER_TO_SIGN_TX> --signers <TRUSTED_SIGNER_LIST> --threshold 1
``` ```
It is worth noting that checkpoint-admin only supports clef as a signer for transactions and plain text (checkpoint). For more clef usage, please see the clef [tutorial](https://geth.ethereum.org/docs/tools/clef/tutorial). It is worth noting that checkpoint-admin only supports clef as a signer for transactions and plain text (checkpoint). For more clef usage, please see the clef [tutorial](https://geth.ethereum.org/docs/clef/tutorial).
#### Sign #### Sign
@@ -86,7 +86,7 @@ checkpoint-admin status --rpc <NODE_RPC_ENDPOINT>
### Enable checkpoint oracle in your private network ### Enable checkpoint oracle in your private network
Currently, only the Ethereum mainnet and the default supported test networks (rinkeby, goerli) activate this feature. If you want to activate this feature in your private network, you can overwrite the relevant checkpoint oracle settings through the configuration file after deploying the oracle contract. Currently, only the Ethereum mainnet and the default supported test networks (ropsten, rinkeby, goerli) activate this feature. If you want to activate this feature in your private network, you can overwrite the relevant checkpoint oracle settings through the configuration file after deploying the oracle contract.
* Get your node configuration file `geth dumpconfig OTHER_COMMAND_LINE_OPTIONS > config.toml` * Get your node configuration file `geth dumpconfig OTHER_COMMAND_LINE_OPTIONS > config.toml`
* Edit the configuration file and add the following information * Edit the configuration file and add the following information
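The diff stops at the hunk boundary, so the README's actual example is not visible here. As a rough, hedged sketch of the kind of override the bullet refers to (the section name, field names, and addresses are all assumptions, not copied from the README):

```toml
# Hypothetical checkpoint oracle override in config.toml; the section, field
# names, and values below are illustrative assumptions only.
[Eth.CheckpointOracle]
Address = "0x0000000000000000000000000000000000000001"
Signers = [
  "0x0000000000000000000000000000000000000002",
  "0x0000000000000000000000000000000000000003",
]
Threshold = 2
```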

Some files were not shown because too many files have changed in this diff.