go-ethereum/cmd/utils/flags.go
Elizabeth 235bc6a2cb Statediffing geth
* Write state diff to CSV (#2)

* port statediff from 9b7fd9af80/statediff/statediff.go; minor fixes

* integrating state diff extracting, building, and persisting into geth processes

* work towards persisting created statediffs in ipfs; based off github.com/vulcanize/eth-block-extractor

* Add a state diff service

* Remove diff extractor from blockchain

* Update imports

* Move statediff on/off check to geth cmd config

* Update starting state diff service

* Add debugging logs for creating diff

* Add statediff extractor and builder tests and small refactoring

* Start to write statediff to a CSV

* Restructure statediff directory

* Pull CSV publishing methods into their own file

* Reformatting due to go fmt

* Add gomega to vendor dir

* Remove testing focuses

* Update statediff tests to use golang test pkg

instead of ginkgo

- builder_test
- extractor_test
- publisher_test

* Use hexutil.Encode instead of deprecated common.ToHex

* Remove OldValue from DiffBigInt and DiffUint64 fields

* Update builder test

* Remove old storage value from updated accounts

* Remove old values from created/deleted accounts

* Update publisher to account for only storing current account values

* Update service loop and fetching previous block

* Update testing

- remove statediff ginkgo test suite file
- move mocks to their own dir

* Updates per go fmt

* Updates to tests

* Pass statediff mode and path in through cli

* Return filename from publisher

* Remove some duplication in builder

* Remove code field from state diff output

this is the contract byte code, and it can still be obtained by querying
the db by the codeHash

* Consolidate acct diff structs for updated & updated/deleted accts

* Include block number in csv filename

* Clean up error logging

* Cleanup formatting, spelling, etc

* Address PR comments

* Add contract address and storage value to csv

* Refactor accumulating account row in csv publisher

* Add DiffStorage struct

* Add storage key to csv

* Address PR comments

* Fix publisher to include rows for accounts that don't have store updates

* Update builder test after merging in release/1.8

* Update test contract to include storage on contract initialization

- so that we're able to test that storage diffing works for created and
deleted accounts (not just updated accounts).

* Factor out a common trie iterator method in builder

* Apply goimports to statediff

* Apply gosimple changes to statediff

* Gracefully exit geth command (#4)

* Statediff for full node (#6)

* Open a trie from the in-memory database

* Use a node's LeafKey as an identifier instead of the address

It was proving difficult to look the address up from a given path
with a full node (sometimes the value wouldn't exist in the disk db).
So, instead, for now we are using the node's LeafKey, which is a Keccak256
hash of the address, so if we know the address we can figure out which
LeafKey it matches up to.
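
For reference, the LeafKey can be recomputed from a known address with geth's crypto package; a minimal sketch (the helper name is illustrative):

    import (
        "github.com/ethereum/go-ethereum/common"
        "github.com/ethereum/go-ethereum/crypto"
    )

    // leafKeyFor returns the secure-trie leaf key for an account address,
    // i.e. the Keccak256 hash of the address bytes.
    func leafKeyFor(addr common.Address) common.Hash {
        return crypto.Keccak256Hash(addr.Bytes())
    }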

* Make sure that statediff has been processed before pruning

* Use blockchain stateCache.OpenTrie for storage diffs

* Clean up log lines and remove unnecessary fields from builder

* Apply go fmt changes

* Add a sleep to the blockchain test

* refactoring/reorganizing packages

* refactoring statediff builder and types and adjusted to relay proofs and paths (still need to make this optional)

* refactoring state diff service and adding api which allows for streaming state diff payloads over an rpc websocket subscription
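
A client-side sketch of consuming this subscription over the websocket endpoint (the subscription method name "stream" and the raw-JSON payload handling are assumptions; the actual method and payload type in this fork may differ):

    package main

    import (
        "context"
        "encoding/json"
        "log"

        "github.com/ethereum/go-ethereum/rpc"
    )

    func main() {
        // Assumes geth is running with --ws and the statediff module enabled.
        client, err := rpc.Dial("ws://127.0.0.1:8546")
        if err != nil {
            log.Fatal(err)
        }
        payloads := make(chan json.RawMessage)
        // "stream" is an assumed subscription method in the statediff namespace.
        sub, err := client.Subscribe(context.Background(), "statediff", payloads, "stream")
        if err != nil {
            log.Fatal(err)
        }
        defer sub.Unsubscribe()
        for p := range payloads {
            log.Printf("received state diff payload (%d bytes)", len(p))
        }
    }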

* make proofs and paths optional + compress service loop into single for loop (may be missing something here)

* option to process intermediate nodes

* make state diff rlp serializable

* cli parameter to limit statediffing to select account addresses + test

* review fixes and fixes for issues run into during integration

* review fixes; proper method signature for api; adjust service so that statediff processing is halted/paused until there is at least one subscriber listening for the results

* adjust buffering to improve stability; doc.go; fix notifier
err handling

* relay receipts with the rest of the data + review fixes/changes

* rpc method to get statediff at specific block; requires archival node or the block be within the pruning range

* fix linter issues

* include total difficulty to the payload

* fix state diff builder: emit actual leaf nodes instead of value nodes; diff on the leaf not on the value; emit correct path for intermediate nodes

* adjust statediff builder tests to changes and extend to test intermediate nodes; golint

* add genesis block to test; handle block 0 in StateDiffAt

* rlp files for mainnet blocks 0-3, for tests

* builder test on mainnet blocks

* common.BytesToHash(path) => crypto.Keccak256(hash) in builder; BytesToHash produces same hash for e.g. []byte{} and []byte{\x00} - prefix \x00 steps are inconsequential to the hash result

* complete tests for early mainnet blocks

* diff type for representing deleted accounts

* fix builder so that we handle account deletions properly and properly diff storage when an account is moved to a new path; update params

* remove cli params; moving them to subscriber defined

* remove unneeded bc methods

* update service and api; statediffing params are now defined by user through api rather than by service provider by cli

* update top level tests

* add ability to watch specific storage slots (leaf keys) only

* comments; explain logic

* update mainnet blocks test

* update api_test.go

* storage leafkey filter test

* cleanup chain maker

* adjust chain maker for tests to add an empty account in block1 and switch to EIP-158 afterwards (now we just need to generate enough accounts until one causes the empty account to be touched and removed post-EIP-158, so we can simulate and test that process...); also added 2 new blocks where more contract storage is set and old slots are set to zero so they are removed, letting us test that as well

* found an account whose creation causes the empty account to be moved to a new path; this should count as 'touching' the empty account and cause it to be removed according to eip-158... but it doesn't

* use new contract in unit tests that has self-destruct ability, so we can test eip-158, since simply moving an account to a new path doesn't count as 'touching' it

* handle storage deletions

* tests for eip-158 account removal and storage value deletions; there is one edge case left to test where we remove 1 account when only two exist such that the remaining account is moved up and replaces the root branch node

* finish testing known edge cases

* add endpoint to fetch all state and storage nodes at a given blockheight; useful for generating a recent state cache/snapshot that we can diff forward from rather than needing to collect all diffs from genesis

* test for state trie builder

* if statediffing is on, lock tries in triedb until the statediffing service signals they are done using them

* fix mock blockchain; golint; bump patch

* increase maxRequestContentLength; bump patch

* log the sizes of the state objects we are sending

* CI build (#20)

* CI: run build on PR and on push to master

* CI: debug building geth

* CI: fix copying file

* CI: fix copying file v2

* CI: temporary upload file to release asset

* CI: get release upload_url by tag, upload asset to current release

* CI: fix tag name

* fix ci build on statediff_at_anyblock-1.9.11 branch

* fix publishing assets in release

* use context deadline for timeout in eth_call

* collect and emit codehash=>code mappings for state objects

* subscription endpoint for retrieving all the codehash=>code mappings that exist at provided height

* Implement WriteStateDiffAt

* Writes state diffs directly to postgres

* Adds CLI flags to configure PG

* Refactors builder output with callbacks

* Copies refactored postgres handling code from ipld-eth-indexer

* rename PostgresCIDWriter.{index->upsert}*

* go.mod update

* rm unused

* cleanup

* output code & codehash iteratively

* had to rf some types for this

* prometheus metrics output

* duplicate recent eth-indexer changes

* migrations and metrics...

* [wip] prom.Init() here? another CLI flag?

* tidy & DRY

* statediff WriteLoop service + CLI flag

* [wip] update test mocks

* todo - do something meaningful to test write loop

* logging

* use geth log

* port tests to go testing

* drop ginkgo/gomega

* fix and cleanup tests

* fail before defer statement

* delete vendor/ dir

* fixes after rebase onto 1.9.23

* fix API registration

* use golang 1.15.5 version (#34)

* bump version meta; add 0.0.11 branch to actions

* bump version meta; update github actions workflows

* statediff: refactor metrics

* Remove redundant statediff/indexer/prom tooling and use existing
prometheus integration.

* "indexer" namespace for metrics

* add reporting loop for db metrics

* doc

* metrics for statediff stats

* metrics namespace/subsystem = statediff/{indexer,service}

* statediff: use a worker pool (for direct writes)

* fix test

* fix chain event subscription

* log tweaks

* func name

* unused import

* intermediate chain event channel for metrics

* update github actions; linting

* add poststate and status to receipt ipld indexes

* stateDiffFor endpoints for fetching or writing statediff object by blockhash; bump statediff version

* fixes after rebase on to v1.10.1

* update github actions and version meta; go fmt

* add leaf key to removed 'nodes'

* include Postgres migrations and schema

* service documentation

* touching up

update github actions after rebase

fix connection leak (misplaced defer) and perform proper rollback on errs

improve error logging; handle PushBlock internal err

* build docker image and publish it to Docker Hub on release

* add access list tx to unit tests

* MarshalBinary and UnmarshalBinary methods for receipt

* fix error caused by 2718 by using MarshalBinary instead of EncodeRLP methods

* ipld encoding/decoding tests

* update TxModel; add AccessListElementModel

* index tx type and access lists

* add access list metrics

* unit tests for tx_type and access list table

* unit tests for receipt marshal/unmarshal binary methods

* improve documentation of the encoding methods

* fix issue identified in linting

update github actions and version meta after rebase

unit test that fails non-deterministically on eip2930 txs, giving same error we are seeing in prod

fix bug

Include genesis block state diff.

Fix linting issue.

documentation on versioning, rebasing, releasing; bump version meta

Add geth and statediff unit test to CI.

Set pgpassword in env.

Added comments.

Add new major branch to github action.

Fix failing test.

Fix lint errors.

Add support for Dynamic txn(EIP-1559).

Update version meta to 0.0.24

Verify block base fee in test.

Fix base_fee type and add backward compatible test.

Remove type definition for AccessListElementModel

Change basefee to int64/bigint.

block and uncle reward in PoA network = 0  (#87)

* in PoA networks there are no block and uncle rewards

* bump meta version

(cherry picked from commit b64ca14689)

Use Ropsten to test block reward.

Add Makefile target to build static linux binaries.

Strip symbol tables from static binaries.

Fix block_fee to support NULL values.

bump version meta.

Add new major branch to github action.

Add new major branch to github action.

Add new major branch to github action.

Add new major branch to github action.

rename doc.go to README.md

Create a separate table for storing logs

Self review

Bump statediff version to 0.0.26.

add btree index to state/storage_cids.node_type; updated schema

Dedup receipt data.

Fix linter errors.

Address comments.

Bump statediff version to 0.0.27.

new cli flag for initializing db the first time the service is run

only write Removed node ipld block (on db init) and reuse constant cid and mhkey

linting

test new handling of Removed nodes; don't require init flag

log metrics

Add new major branch to github action.

Fix build.

Update golang version in CI.

Use ipld-eth-db in testing.

Remove migration from repo.

Add new major branch to github action.

Use `GetTd` instead of `GetTdByHash`

// Copyright 2015 The go-ethereum Authors
// This file is part of go-ethereum.
//
// go-ethereum is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// go-ethereum is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
// Package utils contains internal helper functions for go-ethereum commands.
package utils
import (
"crypto/ecdsa"
"fmt"
"io"
"io/ioutil"
"math"
"math/big"
"os"
"path/filepath"
godebug "runtime/debug"
"strconv"
"strings"
"text/tabwriter"
"text/template"
"time"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/fdlimit"
"github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/consensus/clique"
"github.com/ethereum/go-ethereum/consensus/ethash"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/eth"
"github.com/ethereum/go-ethereum/eth/downloader"
"github.com/ethereum/go-ethereum/eth/ethconfig"
"github.com/ethereum/go-ethereum/eth/gasprice"
"github.com/ethereum/go-ethereum/eth/tracers"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/ethstats"
"github.com/ethereum/go-ethereum/graphql"
"github.com/ethereum/go-ethereum/internal/ethapi"
"github.com/ethereum/go-ethereum/internal/flags"
"github.com/ethereum/go-ethereum/les"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/metrics"
"github.com/ethereum/go-ethereum/metrics/exp"
"github.com/ethereum/go-ethereum/metrics/influxdb"
"github.com/ethereum/go-ethereum/miner"
"github.com/ethereum/go-ethereum/node"
"github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/nat"
"github.com/ethereum/go-ethereum/p2p/netutil"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/statediff"
pcsclite "github.com/gballet/go-libpcsclite"
gopsutil "github.com/shirou/gopsutil/mem"
"gopkg.in/urfave/cli.v1"
)
func init() {
cli.AppHelpTemplate = `{{.Name}} {{if .Flags}}[global options] {{end}}command{{if .Flags}} [command options]{{end}} [arguments...]
VERSION:
{{.Version}}
COMMANDS:
{{range .Commands}}{{.Name}}{{with .ShortName}}, {{.}}{{end}}{{ "\t" }}{{.Usage}}
{{end}}{{if .Flags}}
GLOBAL OPTIONS:
{{range .Flags}}{{.}}
{{end}}{{end}}
`
cli.CommandHelpTemplate = flags.CommandHelpTemplate
cli.HelpPrinter = printHelp
}
func printHelp(out io.Writer, templ string, data interface{}) {
funcMap := template.FuncMap{"join": strings.Join}
t := template.Must(template.New("help").Funcs(funcMap).Parse(templ))
w := tabwriter.NewWriter(out, 38, 8, 2, ' ', 0)
err := t.Execute(w, data)
if err != nil {
panic(err)
}
w.Flush()
}
// These are all the command line flags we support.
// If you add to this list, please remember to include the
// flag in the appropriate command definition.
//
// The flags are defined here so their names and help texts
// are the same for all commands.
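//
// For example, a new flag would be declared in the var block below and then added to
// the Flags list of the relevant command in the cmd/geth package, e.g. (a purely
// illustrative, hypothetical flag):
//
//	ExampleFeatureFlag = cli.BoolFlag{
//		Name:  "example.feature",
//		Usage: "Enables a hypothetical example feature",
//	}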
var (
// General settings
DataDirFlag = DirectoryFlag{
Name: "datadir",
Usage: "Data directory for the databases and keystore",
Value: DirectoryString(node.DefaultDataDir()),
}
AncientFlag = DirectoryFlag{
Name: "datadir.ancient",
Usage: "Data directory for ancient chain segments (default = inside chaindata)",
}
MinFreeDiskSpaceFlag = DirectoryFlag{
Name: "datadir.minfreedisk",
Usage: "Minimum free disk space in MB, once reached triggers auto shut down (default = --cache.gc converted to MB, 0 = disabled)",
}
KeyStoreDirFlag = DirectoryFlag{
Name: "keystore",
Usage: "Directory for the keystore (default = inside the datadir)",
}
USBFlag = cli.BoolFlag{
Name: "usb",
Usage: "Enable monitoring and management of USB hardware wallets",
}
SmartCardDaemonPathFlag = cli.StringFlag{
Name: "pcscdpath",
Usage: "Path to the smartcard daemon (pcscd) socket file",
Value: pcsclite.PCSCDSockName,
}
NetworkIdFlag = cli.Uint64Flag{
Name: "networkid",
Usage: "Explicitly set network id (integer)(For testnets: use --ropsten, --rinkeby, --goerli instead)",
Value: ethconfig.Defaults.NetworkId,
}
MainnetFlag = cli.BoolFlag{
Name: "mainnet",
Usage: "Ethereum mainnet",
}
GoerliFlag = cli.BoolFlag{
Name: "goerli",
Usage: "Görli network: pre-configured proof-of-authority test network",
}
RinkebyFlag = cli.BoolFlag{
Name: "rinkeby",
Usage: "Rinkeby network: pre-configured proof-of-authority test network",
}
RopstenFlag = cli.BoolFlag{
Name: "ropsten",
Usage: "Ropsten network: pre-configured proof-of-work test network",
}
DeveloperFlag = cli.BoolFlag{
Name: "dev",
Usage: "Ephemeral proof-of-authority network with a pre-funded developer account, mining enabled",
}
DeveloperPeriodFlag = cli.IntFlag{
Name: "dev.period",
Usage: "Block period to use in developer mode (0 = mine only if transaction pending)",
}
IdentityFlag = cli.StringFlag{
Name: "identity",
Usage: "Custom node name",
}
DocRootFlag = DirectoryFlag{
Name: "docroot",
Usage: "Document Root for HTTPClient file scheme",
Value: DirectoryString(HomeDir()),
}
ExitWhenSyncedFlag = cli.BoolFlag{
Name: "exitwhensynced",
Usage: "Exits after block synchronisation completes",
}
IterativeOutputFlag = cli.BoolTFlag{
Name: "iterative",
Usage: "Print streaming JSON iteratively, delimited by newlines",
}
ExcludeStorageFlag = cli.BoolFlag{
Name: "nostorage",
Usage: "Exclude storage entries (save db lookups)",
}
IncludeIncompletesFlag = cli.BoolFlag{
Name: "incompletes",
Usage: "Include accounts for which we don't have the address (missing preimage)",
}
ExcludeCodeFlag = cli.BoolFlag{
Name: "nocode",
Usage: "Exclude contract code (save db lookups)",
}
StartKeyFlag = cli.StringFlag{
Name: "start",
Usage: "Start position. Either a hash or address",
Value: "0x0000000000000000000000000000000000000000000000000000000000000000",
}
DumpLimitFlag = cli.Uint64Flag{
Name: "limit",
Usage: "Max number of elements (0 = no limit)",
Value: 0,
}
defaultSyncMode = ethconfig.Defaults.SyncMode
SyncModeFlag = TextMarshalerFlag{
Name: "syncmode",
Usage: `Blockchain sync mode ("fast", "full", "snap" or "light")`,
Value: &defaultSyncMode,
}
GCModeFlag = cli.StringFlag{
Name: "gcmode",
Usage: `Blockchain garbage collection mode ("full", "archive")`,
Value: "full",
}
SnapshotFlag = cli.BoolTFlag{
Name: "snapshot",
Usage: `Enables snapshot-database mode (default = enable)`,
}
TxLookupLimitFlag = cli.Uint64Flag{
Name: "txlookuplimit",
Usage: "Number of recent blocks to maintain transactions index for (default = about one year, 0 = entire chain)",
Value: ethconfig.Defaults.TxLookupLimit,
}
LightKDFFlag = cli.BoolFlag{
Name: "lightkdf",
Usage: "Reduce key-derivation RAM & CPU usage at some expense of KDF strength",
}
WhitelistFlag = cli.StringFlag{
Name: "whitelist",
Usage: "Comma separated block number-to-hash mappings to enforce (<number>=<hash>)",
}
BloomFilterSizeFlag = cli.Uint64Flag{
Name: "bloomfilter.size",
Usage: "Megabytes of memory allocated to bloom-filter for pruning",
Value: 2048,
}
OverrideLondonFlag = cli.Uint64Flag{
Name: "override.london",
Usage: "Manually specify London fork-block, overriding the bundled setting",
}
// Light server and client settings
LightServeFlag = cli.IntFlag{
Name: "light.serve",
Usage: "Maximum percentage of time allowed for serving LES requests (multi-threaded processing allows values over 100)",
Value: ethconfig.Defaults.LightServ,
}
LightIngressFlag = cli.IntFlag{
Name: "light.ingress",
Usage: "Incoming bandwidth limit for serving light clients (kilobytes/sec, 0 = unlimited)",
Value: ethconfig.Defaults.LightIngress,
}
LightEgressFlag = cli.IntFlag{
Name: "light.egress",
Usage: "Outgoing bandwidth limit for serving light clients (kilobytes/sec, 0 = unlimited)",
Value: ethconfig.Defaults.LightEgress,
}
LightMaxPeersFlag = cli.IntFlag{
Name: "light.maxpeers",
Usage: "Maximum number of light clients to serve, or light servers to attach to",
Value: ethconfig.Defaults.LightPeers,
}
UltraLightServersFlag = cli.StringFlag{
Name: "ulc.servers",
Usage: "List of trusted ultra-light servers",
Value: strings.Join(ethconfig.Defaults.UltraLightServers, ","),
}
UltraLightFractionFlag = cli.IntFlag{
Name: "ulc.fraction",
Usage: "Minimum % of trusted ultra-light servers required to announce a new head",
Value: ethconfig.Defaults.UltraLightFraction,
}
UltraLightOnlyAnnounceFlag = cli.BoolFlag{
Name: "ulc.onlyannounce",
Usage: "Ultra light server sends announcements only",
}
LightNoPruneFlag = cli.BoolFlag{
Name: "light.nopruning",
Usage: "Disable ancient light chain data pruning",
}
LightNoSyncServeFlag = cli.BoolFlag{
Name: "light.nosyncserve",
Usage: "Enables serving light clients before syncing",
}
// Ethash settings
EthashCacheDirFlag = DirectoryFlag{
Name: "ethash.cachedir",
Usage: "Directory to store the ethash verification caches (default = inside the datadir)",
}
EthashCachesInMemoryFlag = cli.IntFlag{
Name: "ethash.cachesinmem",
Usage: "Number of recent ethash caches to keep in memory (16MB each)",
Value: ethconfig.Defaults.Ethash.CachesInMem,
}
EthashCachesOnDiskFlag = cli.IntFlag{
Name: "ethash.cachesondisk",
Usage: "Number of recent ethash caches to keep on disk (16MB each)",
Value: ethconfig.Defaults.Ethash.CachesOnDisk,
}
EthashCachesLockMmapFlag = cli.BoolFlag{
Name: "ethash.cacheslockmmap",
Usage: "Lock memory maps of recent ethash caches",
}
EthashDatasetDirFlag = DirectoryFlag{
Name: "ethash.dagdir",
Usage: "Directory to store the ethash mining DAGs",
Value: DirectoryString(ethconfig.Defaults.Ethash.DatasetDir),
}
EthashDatasetsInMemoryFlag = cli.IntFlag{
Name: "ethash.dagsinmem",
Usage: "Number of recent ethash mining DAGs to keep in memory (1+GB each)",
Value: ethconfig.Defaults.Ethash.DatasetsInMem,
}
EthashDatasetsOnDiskFlag = cli.IntFlag{
Name: "ethash.dagsondisk",
Usage: "Number of recent ethash mining DAGs to keep on disk (1+GB each)",
Value: ethconfig.Defaults.Ethash.DatasetsOnDisk,
}
EthashDatasetsLockMmapFlag = cli.BoolFlag{
Name: "ethash.dagslockmmap",
Usage: "Lock memory maps for recent ethash mining DAGs",
}
// Transaction pool settings
TxPoolLocalsFlag = cli.StringFlag{
Name: "txpool.locals",
Usage: "Comma separated accounts to treat as locals (no flush, priority inclusion)",
}
TxPoolNoLocalsFlag = cli.BoolFlag{
Name: "txpool.nolocals",
Usage: "Disables price exemptions for locally submitted transactions",
}
TxPoolJournalFlag = cli.StringFlag{
Name: "txpool.journal",
Usage: "Disk journal for local transaction to survive node restarts",
Value: core.DefaultTxPoolConfig.Journal,
}
TxPoolRejournalFlag = cli.DurationFlag{
Name: "txpool.rejournal",
Usage: "Time interval to regenerate the local transaction journal",
Value: core.DefaultTxPoolConfig.Rejournal,
}
TxPoolPriceLimitFlag = cli.Uint64Flag{
Name: "txpool.pricelimit",
Usage: "Minimum gas price limit to enforce for acceptance into the pool",
Value: ethconfig.Defaults.TxPool.PriceLimit,
}
TxPoolPriceBumpFlag = cli.Uint64Flag{
Name: "txpool.pricebump",
Usage: "Price bump percentage to replace an already existing transaction",
Value: ethconfig.Defaults.TxPool.PriceBump,
}
TxPoolAccountSlotsFlag = cli.Uint64Flag{
Name: "txpool.accountslots",
Usage: "Minimum number of executable transaction slots guaranteed per account",
Value: ethconfig.Defaults.TxPool.AccountSlots,
}
TxPoolGlobalSlotsFlag = cli.Uint64Flag{
Name: "txpool.globalslots",
Usage: "Maximum number of executable transaction slots for all accounts",
Value: ethconfig.Defaults.TxPool.GlobalSlots,
}
TxPoolAccountQueueFlag = cli.Uint64Flag{
Name: "txpool.accountqueue",
Usage: "Maximum number of non-executable transaction slots permitted per account",
Value: ethconfig.Defaults.TxPool.AccountQueue,
}
TxPoolGlobalQueueFlag = cli.Uint64Flag{
Name: "txpool.globalqueue",
Usage: "Maximum number of non-executable transaction slots for all accounts",
Value: ethconfig.Defaults.TxPool.GlobalQueue,
}
TxPoolLifetimeFlag = cli.DurationFlag{
Name: "txpool.lifetime",
Usage: "Maximum amount of time non-executable transaction are queued",
Value: ethconfig.Defaults.TxPool.Lifetime,
}
// Performance tuning settings
CacheFlag = cli.IntFlag{
Name: "cache",
Usage: "Megabytes of memory allocated to internal caching (default = 4096 mainnet full node, 128 light mode)",
Value: 1024,
}
CacheDatabaseFlag = cli.IntFlag{
Name: "cache.database",
Usage: "Percentage of cache memory allowance to use for database io",
Value: 50,
}
CacheTrieFlag = cli.IntFlag{
Name: "cache.trie",
Usage: "Percentage of cache memory allowance to use for trie caching (default = 15% full mode, 30% archive mode)",
Value: 15,
}
CacheTrieJournalFlag = cli.StringFlag{
Name: "cache.trie.journal",
Usage: "Disk journal directory for trie cache to survive node restarts",
Value: ethconfig.Defaults.TrieCleanCacheJournal,
}
CacheTrieRejournalFlag = cli.DurationFlag{
Name: "cache.trie.rejournal",
Usage: "Time interval to regenerate the trie cache journal",
Value: ethconfig.Defaults.TrieCleanCacheRejournal,
}
CacheGCFlag = cli.IntFlag{
Name: "cache.gc",
Usage: "Percentage of cache memory allowance to use for trie pruning (default = 25% full mode, 0% archive mode)",
Value: 25,
}
CacheSnapshotFlag = cli.IntFlag{
Name: "cache.snapshot",
Usage: "Percentage of cache memory allowance to use for snapshot caching (default = 10% full mode, 20% archive mode)",
Value: 10,
}
CacheNoPrefetchFlag = cli.BoolFlag{
Name: "cache.noprefetch",
Usage: "Disable heuristic state prefetch during block import (less CPU and disk IO, more time waiting for data)",
}
CachePreimagesFlag = cli.BoolFlag{
Name: "cache.preimages",
Usage: "Enable recording the SHA3/keccak preimages of trie keys",
}
// Miner settings
MiningEnabledFlag = cli.BoolFlag{
Name: "mine",
Usage: "Enable mining",
}
MinerThreadsFlag = cli.IntFlag{
Name: "miner.threads",
Usage: "Number of CPU threads to use for mining",
Value: 0,
}
MinerNotifyFlag = cli.StringFlag{
Name: "miner.notify",
Usage: "Comma separated HTTP URL list to notify of new work packages",
}
MinerNotifyFullFlag = cli.BoolFlag{
Name: "miner.notify.full",
Usage: "Notify with pending block headers instead of work packages",
}
MinerGasLimitFlag = cli.Uint64Flag{
Name: "miner.gaslimit",
Usage: "Target gas ceiling for mined blocks",
Value: ethconfig.Defaults.Miner.GasCeil,
}
MinerGasPriceFlag = BigFlag{
Name: "miner.gasprice",
Usage: "Minimum gas price for mining a transaction",
Value: ethconfig.Defaults.Miner.GasPrice,
}
MinerEtherbaseFlag = cli.StringFlag{
Name: "miner.etherbase",
Usage: "Public address for block mining rewards (default = first account)",
Value: "0",
}
MinerExtraDataFlag = cli.StringFlag{
Name: "miner.extradata",
Usage: "Block extra data set by the miner (default = client version)",
}
MinerRecommitIntervalFlag = cli.DurationFlag{
Name: "miner.recommit",
Usage: "Time interval to recreate the block being mined",
Value: ethconfig.Defaults.Miner.Recommit,
}
MinerNoVerifyFlag = cli.BoolFlag{
Name: "miner.noverify",
Usage: "Disable remote sealing verification",
}
// Account settings
UnlockedAccountFlag = cli.StringFlag{
Name: "unlock",
Usage: "Comma separated list of accounts to unlock",
Value: "",
}
PasswordFileFlag = cli.StringFlag{
Name: "password",
Usage: "Password file to use for non-interactive password input",
Value: "",
}
ExternalSignerFlag = cli.StringFlag{
Name: "signer",
Usage: "External signer (url or path to ipc file)",
Value: "",
}
VMEnableDebugFlag = cli.BoolFlag{
Name: "vmdebug",
Usage: "Record information useful for VM and contract debugging",
}
InsecureUnlockAllowedFlag = cli.BoolFlag{
Name: "allow-insecure-unlock",
Usage: "Allow insecure account unlocking when account-related RPCs are exposed by http",
}
RPCGlobalGasCapFlag = cli.Uint64Flag{
Name: "rpc.gascap",
Usage: "Sets a cap on gas that can be used in eth_call/estimateGas (0=infinite)",
Value: ethconfig.Defaults.RPCGasCap,
}
RPCGlobalEVMTimeoutFlag = cli.DurationFlag{
Name: "rpc.evmtimeout",
Usage: "Sets a timeout used for eth_call (0=infinite)",
Value: ethconfig.Defaults.RPCEVMTimeout,
}
RPCGlobalTxFeeCapFlag = cli.Float64Flag{
Name: "rpc.txfeecap",
Usage: "Sets a cap on transaction fee (in ether) that can be sent via the RPC APIs (0 = no cap)",
Value: ethconfig.Defaults.RPCTxFeeCap,
}
// Logging and debug settings
EthStatsURLFlag = cli.StringFlag{
Name: "ethstats",
Usage: "Reporting URL of a ethstats service (nodename:secret@host:port)",
}
FakePoWFlag = cli.BoolFlag{
Name: "fakepow",
Usage: "Disables proof-of-work verification",
}
NoCompactionFlag = cli.BoolFlag{
Name: "nocompaction",
Usage: "Disables db compaction after import",
}
// RPC settings
IPCDisabledFlag = cli.BoolFlag{
Name: "ipcdisable",
Usage: "Disable the IPC-RPC server",
}
IPCPathFlag = DirectoryFlag{
Name: "ipcpath",
Usage: "Filename for IPC socket/pipe within the datadir (explicit paths escape it)",
}
HTTPEnabledFlag = cli.BoolFlag{
Name: "http",
Usage: "Enable the HTTP-RPC server",
}
HTTPListenAddrFlag = cli.StringFlag{
Name: "http.addr",
Usage: "HTTP-RPC server listening interface",
Value: node.DefaultHTTPHost,
}
HTTPPortFlag = cli.IntFlag{
Name: "http.port",
Usage: "HTTP-RPC server listening port",
Value: node.DefaultHTTPPort,
}
HTTPCORSDomainFlag = cli.StringFlag{
Name: "http.corsdomain",
Usage: "Comma separated list of domains from which to accept cross origin requests (browser enforced)",
Value: "",
}
HTTPVirtualHostsFlag = cli.StringFlag{
Name: "http.vhosts",
Usage: "Comma separated list of virtual hostnames from which to accept requests (server enforced). Accepts '*' wildcard.",
Value: strings.Join(node.DefaultConfig.HTTPVirtualHosts, ","),
}
HTTPApiFlag = cli.StringFlag{
Name: "http.api",
Usage: "API's offered over the HTTP-RPC interface",
Value: "",
}
HTTPPathPrefixFlag = cli.StringFlag{
Name: "http.rpcprefix",
Usage: "HTTP path path prefix on which JSON-RPC is served. Use '/' to serve on all paths.",
Value: "",
}
GraphQLEnabledFlag = cli.BoolFlag{
Name: "graphql",
Usage: "Enable GraphQL on the HTTP-RPC server. Note that GraphQL can only be started if an HTTP server is started as well.",
}
GraphQLCORSDomainFlag = cli.StringFlag{
Name: "graphql.corsdomain",
Usage: "Comma separated list of domains from which to accept cross origin requests (browser enforced)",
Value: "",
}
GraphQLVirtualHostsFlag = cli.StringFlag{
Name: "graphql.vhosts",
Usage: "Comma separated list of virtual hostnames from which to accept requests (server enforced). Accepts '*' wildcard.",
Value: strings.Join(node.DefaultConfig.GraphQLVirtualHosts, ","),
}
WSEnabledFlag = cli.BoolFlag{
Name: "ws",
Usage: "Enable the WS-RPC server",
}
WSListenAddrFlag = cli.StringFlag{
Name: "ws.addr",
Usage: "WS-RPC server listening interface",
Value: node.DefaultWSHost,
}
WSPortFlag = cli.IntFlag{
Name: "ws.port",
Usage: "WS-RPC server listening port",
Value: node.DefaultWSPort,
}
WSApiFlag = cli.StringFlag{
Name: "ws.api",
Usage: "API's offered over the WS-RPC interface",
Value: "",
}
WSAllowedOriginsFlag = cli.StringFlag{
Name: "ws.origins",
Usage: "Origins from which to accept websockets requests",
Value: "",
}
WSPathPrefixFlag = cli.StringFlag{
Name: "ws.rpcprefix",
Usage: "HTTP path prefix on which JSON-RPC is served. Use '/' to serve on all paths.",
Value: "",
}
ExecFlag = cli.StringFlag{
Name: "exec",
Usage: "Execute JavaScript statement",
}
PreloadJSFlag = cli.StringFlag{
Name: "preload",
Usage: "Comma separated list of JavaScript files to preload into the console",
}
AllowUnprotectedTxs = cli.BoolFlag{
Name: "rpc.allow-unprotected-txs",
Usage: "Allow for unprotected (non EIP155 signed) transactions to be submitted via RPC",
}
// Network Settings
MaxPeersFlag = cli.IntFlag{
Name: "maxpeers",
Usage: "Maximum number of network peers (network disabled if set to 0)",
Value: node.DefaultConfig.P2P.MaxPeers,
}
MaxPendingPeersFlag = cli.IntFlag{
Name: "maxpendpeers",
Usage: "Maximum number of pending connection attempts (defaults used if set to 0)",
Value: node.DefaultConfig.P2P.MaxPendingPeers,
}
ListenPortFlag = cli.IntFlag{
Name: "port",
Usage: "Network listening port",
Value: 30303,
}
BootnodesFlag = cli.StringFlag{
Name: "bootnodes",
Usage: "Comma separated enode URLs for P2P discovery bootstrap",
Value: "",
}
NodeKeyFileFlag = cli.StringFlag{
Name: "nodekey",
Usage: "P2P node key file",
}
NodeKeyHexFlag = cli.StringFlag{
Name: "nodekeyhex",
Usage: "P2P node key as hex (for testing)",
}
NATFlag = cli.StringFlag{
Name: "nat",
Usage: "NAT port mapping mechanism (any|none|upnp|pmp|extip:<IP>)",
Value: "any",
}
NoDiscoverFlag = cli.BoolFlag{
Name: "nodiscover",
Usage: "Disables the peer discovery mechanism (manual peer addition)",
}
DiscoveryV5Flag = cli.BoolFlag{
Name: "v5disc",
Usage: "Enables the experimental RLPx V5 (Topic Discovery) mechanism",
}
NetrestrictFlag = cli.StringFlag{
Name: "netrestrict",
Usage: "Restricts network communication to the given IP networks (CIDR masks)",
}
DNSDiscoveryFlag = cli.StringFlag{
Name: "discovery.dns",
Usage: "Sets DNS discovery entry points (use \"\" to disable DNS)",
}
// ATM the url is left to the user and deployment to
JSpathFlag = DirectoryFlag{
Name: "jspath",
Usage: "JavaScript root path for `loadScript`",
Value: DirectoryString("."),
}
// Gas price oracle settings
GpoBlocksFlag = cli.IntFlag{
Name: "gpo.blocks",
Usage: "Number of recent blocks to check for gas prices",
Value: ethconfig.Defaults.GPO.Blocks,
}
GpoPercentileFlag = cli.IntFlag{
Name: "gpo.percentile",
Usage: "Suggested gas price is the given percentile of a set of recent transaction gas prices",
Value: ethconfig.Defaults.GPO.Percentile,
}
GpoMaxGasPriceFlag = cli.Int64Flag{
Name: "gpo.maxprice",
Usage: "Maximum transaction priority fee (or gasprice before London fork) to be recommended by gpo",
Value: ethconfig.Defaults.GPO.MaxPrice.Int64(),
}
GpoIgnoreGasPriceFlag = cli.Int64Flag{
Name: "gpo.ignoreprice",
Usage: "Gas price below which gpo will ignore transactions",
Value: ethconfig.Defaults.GPO.IgnorePrice.Int64(),
}
// Metrics flags
MetricsEnabledFlag = cli.BoolFlag{
Name: "metrics",
Usage: "Enable metrics collection and reporting",
}
MetricsEnabledExpensiveFlag = cli.BoolFlag{
Name: "metrics.expensive",
Usage: "Enable expensive metrics collection and reporting",
}
// MetricsHTTPFlag defines the endpoint for a stand-alone metrics HTTP endpoint.
// Since the pprof service enables sensitive/vulnerable behavior, this allows a user
// to enable a public-OK metrics endpoint without having to worry about ALSO exposing
// other profiling behavior or information.
MetricsHTTPFlag = cli.StringFlag{
Name: "metrics.addr",
Usage: "Enable stand-alone metrics HTTP server listening interface",
Value: metrics.DefaultConfig.HTTP,
}
MetricsPortFlag = cli.IntFlag{
Name: "metrics.port",
Usage: "Metrics HTTP server listening port",
Value: metrics.DefaultConfig.Port,
}
MetricsEnableInfluxDBFlag = cli.BoolFlag{
Name: "metrics.influxdb",
Usage: "Enable metrics export/push to an external InfluxDB database",
}
MetricsInfluxDBEndpointFlag = cli.StringFlag{
Name: "metrics.influxdb.endpoint",
Usage: "InfluxDB API endpoint to report metrics to",
Value: metrics.DefaultConfig.InfluxDBEndpoint,
}
MetricsInfluxDBDatabaseFlag = cli.StringFlag{
Name: "metrics.influxdb.database",
Usage: "InfluxDB database name to push reported metrics to",
Value: metrics.DefaultConfig.InfluxDBDatabase,
}
MetricsInfluxDBUsernameFlag = cli.StringFlag{
Name: "metrics.influxdb.username",
Usage: "Username to authorize access to the database",
Value: metrics.DefaultConfig.InfluxDBUsername,
}
MetricsInfluxDBPasswordFlag = cli.StringFlag{
Name: "metrics.influxdb.password",
Usage: "Password to authorize access to the database",
Value: metrics.DefaultConfig.InfluxDBPassword,
}
// Tags are part of every measurement sent to InfluxDB. Queries on tags are faster in InfluxDB.
// For example `host` tag could be used so that we can group all nodes and average a measurement
// across all of them, but also so that we can select a specific node and inspect its measurements.
// https://docs.influxdata.com/influxdb/v1.4/concepts/key_concepts/#tag-key
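// For example, a deployment might pass (illustrative values):
//
//	--metrics.influxdb.tags "host=node-01,region=eu"
//
// so measurements can later be grouped or filtered per host or per region.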
MetricsInfluxDBTagsFlag = cli.StringFlag{
Name: "metrics.influxdb.tags",
Usage: "Comma-separated InfluxDB tags (key/values) attached to all measurements",
Value: metrics.DefaultConfig.InfluxDBTags,
}
MetricsEnableInfluxDBV2Flag = cli.BoolFlag{
Name: "metrics.influxdbv2",
Usage: "Enable metrics export/push to an external InfluxDB v2 database",
}
MetricsInfluxDBTokenFlag = cli.StringFlag{
Name: "metrics.influxdb.token",
Usage: "Token to authorize access to the database (v2 only)",
Value: metrics.DefaultConfig.InfluxDBToken,
}
MetricsInfluxDBBucketFlag = cli.StringFlag{
Name: "metrics.influxdb.bucket",
Usage: "InfluxDB bucket name to push reported metrics to (v2 only)",
Value: metrics.DefaultConfig.InfluxDBBucket,
}
MetricsInfluxDBOrganizationFlag = cli.StringFlag{
Name: "metrics.influxdb.organization",
Usage: "InfluxDB organization name (v2 only)",
Value: metrics.DefaultConfig.InfluxDBOrganization,
}
CatalystFlag = cli.BoolFlag{
Name: "catalyst",
Usage: "Catalyst mode (eth2 integration testing)",
}
StateDiffFlag = cli.BoolFlag{
Name: "statediff",
Usage: "Enables the processing of state diffs between each block",
}
StateDiffDBFlag = cli.StringFlag{
Name: "statediff.db",
Usage: "PostgreSQL database connection string for writing state diffs",
}
StateDiffDBNodeIDFlag = cli.StringFlag{
Name: "statediff.dbnodeid",
Usage: "Node ID to use when writing state diffs to database",
}
StateDiffDBClientNameFlag = cli.StringFlag{
Name: "statediff.dbclientname",
Usage: "Client name to use when writing state diffs to database",
}
StateDiffWritingFlag = cli.BoolFlag{
Name: "statediff.writing",
Usage: "Activates progressive writing of state diffs to database as new block are synced",
}
StateDiffWorkersFlag = cli.UintFlag{
Name: "statediff.workers",
Usage: "Number of concurrent workers to use during statediff processing (0 = 1)",
}
)
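// A hypothetical invocation wiring the statediff flags together for direct database
// writes might look like the following (all values are illustrative only):
//
//	geth --syncmode=full --gcmode=archive \
//	  --statediff --statediff.writing --statediff.workers=4 \
//	  --statediff.db="postgres://user:password@localhost:5432/vulcanize?sslmode=disable" \
//	  --statediff.dbnodeid=node-id --statediff.dbclientname=client-name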
// MakeDataDir retrieves the currently requested data directory, terminating
// if none (or the empty string) is specified. If the node is starting a testnet,
// then a subdirectory of the specified datadir will be used.
func MakeDataDir(ctx *cli.Context) string {
if path := ctx.GlobalString(DataDirFlag.Name); path != "" {
if ctx.GlobalBool(RopstenFlag.Name) {
// Maintain compatibility with older Geth configurations storing the
// Ropsten database in `testnet` instead of `ropsten`.
return filepath.Join(path, "ropsten")
}
if ctx.GlobalBool(RinkebyFlag.Name) {
return filepath.Join(path, "rinkeby")
}
if ctx.GlobalBool(GoerliFlag.Name) {
return filepath.Join(path, "goerli")
}
return path
}
Fatalf("Cannot determine default data directory, please set manually (--datadir)")
return ""
}
// setNodeKey creates a node key from set command line flags, either loading it
// from a file or as a specified hex value. If neither flag is provided, this
// method returns nil and an ephemeral key is to be generated.
func setNodeKey(ctx *cli.Context, cfg *p2p.Config) {
var (
hex = ctx.GlobalString(NodeKeyHexFlag.Name)
file = ctx.GlobalString(NodeKeyFileFlag.Name)
key *ecdsa.PrivateKey
err error
)
switch {
case file != "" && hex != "":
Fatalf("Options %q and %q are mutually exclusive", NodeKeyFileFlag.Name, NodeKeyHexFlag.Name)
case file != "":
if key, err = crypto.LoadECDSA(file); err != nil {
Fatalf("Option %q: %v", NodeKeyFileFlag.Name, err)
}
cfg.PrivateKey = key
case hex != "":
if key, err = crypto.HexToECDSA(hex); err != nil {
Fatalf("Option %q: %v", NodeKeyHexFlag.Name, err)
}
cfg.PrivateKey = key
}
}
// setNodeUserIdent creates the user identifier from CLI flags.
func setNodeUserIdent(ctx *cli.Context, cfg *node.Config) {
if identity := ctx.GlobalString(IdentityFlag.Name); len(identity) > 0 {
cfg.UserIdent = identity
}
}
// setBootstrapNodes creates a list of bootstrap nodes from the command line
// flags, reverting to pre-configured ones if none have been specified.
func setBootstrapNodes(ctx *cli.Context, cfg *p2p.Config) {
urls := params.MainnetBootnodes
switch {
case ctx.GlobalIsSet(BootnodesFlag.Name):
urls = SplitAndTrim(ctx.GlobalString(BootnodesFlag.Name))
case ctx.GlobalBool(RopstenFlag.Name):
urls = params.RopstenBootnodes
case ctx.GlobalBool(RinkebyFlag.Name):
urls = params.RinkebyBootnodes
case ctx.GlobalBool(GoerliFlag.Name):
urls = params.GoerliBootnodes
case cfg.BootstrapNodes != nil:
return // already set, don't apply defaults.
}
cfg.BootstrapNodes = make([]*enode.Node, 0, len(urls))
for _, url := range urls {
if url != "" {
node, err := enode.Parse(enode.ValidSchemes, url)
if err != nil {
log.Crit("Bootstrap URL invalid", "enode", url, "err", err)
continue
}
cfg.BootstrapNodes = append(cfg.BootstrapNodes, node)
}
}
}
// setBootstrapNodesV5 creates a list of bootstrap nodes from the command line
// flags, reverting to pre-configured ones if none have been specified.
func setBootstrapNodesV5(ctx *cli.Context, cfg *p2p.Config) {
urls := params.V5Bootnodes
switch {
case ctx.GlobalIsSet(BootnodesFlag.Name):
urls = SplitAndTrim(ctx.GlobalString(BootnodesFlag.Name))
case cfg.BootstrapNodesV5 != nil:
return // already set, don't apply defaults.
}
cfg.BootstrapNodesV5 = make([]*enode.Node, 0, len(urls))
for _, url := range urls {
if url != "" {
node, err := enode.Parse(enode.ValidSchemes, url)
if err != nil {
log.Error("Bootstrap URL invalid", "enode", url, "err", err)
continue
}
cfg.BootstrapNodesV5 = append(cfg.BootstrapNodesV5, node)
}
}
}
// setListenAddress creates a TCP listening address string from set command
// line flags.
func setListenAddress(ctx *cli.Context, cfg *p2p.Config) {
if ctx.GlobalIsSet(ListenPortFlag.Name) {
cfg.ListenAddr = fmt.Sprintf(":%d", ctx.GlobalInt(ListenPortFlag.Name))
}
}
// setNAT creates a port mapper from command line flags.
func setNAT(ctx *cli.Context, cfg *p2p.Config) {
if ctx.GlobalIsSet(NATFlag.Name) {
natif, err := nat.Parse(ctx.GlobalString(NATFlag.Name))
if err != nil {
Fatalf("Option %s: %v", NATFlag.Name, err)
}
cfg.NAT = natif
}
}
// SplitAndTrim splits input separated by a comma
// and trims excessive white space from the substrings.
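// For example, SplitAndTrim(" one, two ,, three ") returns ["one", "two", "three"].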
func SplitAndTrim(input string) (ret []string) {
l := strings.Split(input, ",")
for _, r := range l {
if r = strings.TrimSpace(r); r != "" {
ret = append(ret, r)
}
}
return ret
}
// setHTTP creates the HTTP RPC listener interface string from the set
// command line flags, returning empty if the HTTP endpoint is disabled.
func setHTTP(ctx *cli.Context, cfg *node.Config) {
if ctx.GlobalBool(HTTPEnabledFlag.Name) && cfg.HTTPHost == "" {
cfg.HTTPHost = "127.0.0.1"
if ctx.GlobalIsSet(HTTPListenAddrFlag.Name) {
cfg.HTTPHost = ctx.GlobalString(HTTPListenAddrFlag.Name)
}
}
if ctx.GlobalIsSet(HTTPPortFlag.Name) {
cfg.HTTPPort = ctx.GlobalInt(HTTPPortFlag.Name)
}
if ctx.GlobalIsSet(HTTPCORSDomainFlag.Name) {
cfg.HTTPCors = SplitAndTrim(ctx.GlobalString(HTTPCORSDomainFlag.Name))
}
if ctx.GlobalIsSet(HTTPApiFlag.Name) {
cfg.HTTPModules = SplitAndTrim(ctx.GlobalString(HTTPApiFlag.Name))
}
if ctx.GlobalIsSet(HTTPVirtualHostsFlag.Name) {
cfg.HTTPVirtualHosts = SplitAndTrim(ctx.GlobalString(HTTPVirtualHostsFlag.Name))
}
if ctx.GlobalIsSet(HTTPPathPrefixFlag.Name) {
cfg.HTTPPathPrefix = ctx.GlobalString(HTTPPathPrefixFlag.Name)
}
if ctx.GlobalIsSet(AllowUnprotectedTxs.Name) {
cfg.AllowUnprotectedTxs = ctx.GlobalBool(AllowUnprotectedTxs.Name)
}
}
// setGraphQL creates the GraphQL listener interface string from the set
// command line flags, returning empty if the GraphQL endpoint is disabled.
func setGraphQL(ctx *cli.Context, cfg *node.Config) {
if ctx.GlobalIsSet(GraphQLCORSDomainFlag.Name) {
cfg.GraphQLCors = SplitAndTrim(ctx.GlobalString(GraphQLCORSDomainFlag.Name))
}
if ctx.GlobalIsSet(GraphQLVirtualHostsFlag.Name) {
cfg.GraphQLVirtualHosts = SplitAndTrim(ctx.GlobalString(GraphQLVirtualHostsFlag.Name))
}
}
// setWS creates the WebSocket RPC listener interface string from the set
// command line flags, returning empty if the HTTP endpoint is disabled.
func setWS(ctx *cli.Context, cfg *node.Config) {
if ctx.GlobalBool(WSEnabledFlag.Name) && cfg.WSHost == "" {
cfg.WSHost = "127.0.0.1"
if ctx.GlobalIsSet(WSListenAddrFlag.Name) {
cfg.WSHost = ctx.GlobalString(WSListenAddrFlag.Name)
}
}
if ctx.GlobalIsSet(WSPortFlag.Name) {
cfg.WSPort = ctx.GlobalInt(WSPortFlag.Name)
}
if ctx.GlobalIsSet(WSAllowedOriginsFlag.Name) {
cfg.WSOrigins = SplitAndTrim(ctx.GlobalString(WSAllowedOriginsFlag.Name))
}
if ctx.GlobalIsSet(WSApiFlag.Name) {
cfg.WSModules = SplitAndTrim(ctx.GlobalString(WSApiFlag.Name))
}
if ctx.GlobalIsSet(WSPathPrefixFlag.Name) {
cfg.WSPathPrefix = ctx.GlobalString(WSPathPrefixFlag.Name)
}
if ctx.GlobalBool(StateDiffFlag.Name) {
cfg.WSModules = append(cfg.WSModules, "statediff")
}
}
// setIPC creates an IPC path configuration from the set command line flags,
// returning an empty string if IPC was explicitly disabled, or the set path.
func setIPC(ctx *cli.Context, cfg *node.Config) {
CheckExclusive(ctx, IPCDisabledFlag, IPCPathFlag)
switch {
case ctx.GlobalBool(IPCDisabledFlag.Name):
cfg.IPCPath = ""
case ctx.GlobalIsSet(IPCPathFlag.Name):
cfg.IPCPath = ctx.GlobalString(IPCPathFlag.Name)
}
}
// setLes configures the les server and ultra light client settings from the command line flags.
func setLes(ctx *cli.Context, cfg *ethconfig.Config) {
if ctx.GlobalIsSet(LightServeFlag.Name) {
cfg.LightServ = ctx.GlobalInt(LightServeFlag.Name)
}
if ctx.GlobalIsSet(LightIngressFlag.Name) {
cfg.LightIngress = ctx.GlobalInt(LightIngressFlag.Name)
}
if ctx.GlobalIsSet(LightEgressFlag.Name) {
cfg.LightEgress = ctx.GlobalInt(LightEgressFlag.Name)
}
if ctx.GlobalIsSet(LightMaxPeersFlag.Name) {
cfg.LightPeers = ctx.GlobalInt(LightMaxPeersFlag.Name)
}
if ctx.GlobalIsSet(UltraLightServersFlag.Name) {
cfg.UltraLightServers = strings.Split(ctx.GlobalString(UltraLightServersFlag.Name), ",")
}
if ctx.GlobalIsSet(UltraLightFractionFlag.Name) {
cfg.UltraLightFraction = ctx.GlobalInt(UltraLightFractionFlag.Name)
}
if cfg.UltraLightFraction <= 0 || cfg.UltraLightFraction > 100 {
log.Error("Ultra light fraction is invalid", "had", cfg.UltraLightFraction, "updated", ethconfig.Defaults.UltraLightFraction)
cfg.UltraLightFraction = ethconfig.Defaults.UltraLightFraction
}
if ctx.GlobalIsSet(UltraLightOnlyAnnounceFlag.Name) {
cfg.UltraLightOnlyAnnounce = ctx.GlobalBool(UltraLightOnlyAnnounceFlag.Name)
}
if ctx.GlobalIsSet(LightNoPruneFlag.Name) {
cfg.LightNoPrune = ctx.GlobalBool(LightNoPruneFlag.Name)
}
if ctx.GlobalIsSet(LightNoSyncServeFlag.Name) {
cfg.LightNoSyncServe = ctx.GlobalBool(LightNoSyncServeFlag.Name)
}
}
// MakeDatabaseHandles raises the number of allowed file handles per process
// for Geth and returns half of the allowance to assign to the database.
func MakeDatabaseHandles() int {
limit, err := fdlimit.Maximum()
if err != nil {
Fatalf("Failed to retrieve file descriptor allowance: %v", err)
}
raised, err := fdlimit.Raise(uint64(limit))
if err != nil {
Fatalf("Failed to raise file descriptor allowance: %v", err)
}
return int(raised / 2) // Leave half for networking and other stuff
}
// MakeAddress converts an account specified directly as a hex encoded string or
// a key index in the key store to an internal account representation.
func MakeAddress(ks *keystore.KeyStore, account string) (accounts.Account, error) {
// If the specified account is a valid address, return it
if common.IsHexAddress(account) {
return accounts.Account{Address: common.HexToAddress(account)}, nil
}
// Otherwise try to interpret the account as a keystore index
index, err := strconv.Atoi(account)
if err != nil || index < 0 {
return accounts.Account{}, fmt.Errorf("invalid account address or index %q", account)
}
log.Warn("-------------------------------------------------------------------")
log.Warn("Referring to accounts by order in the keystore folder is dangerous!")
log.Warn("This functionality is deprecated and will be removed in the future!")
log.Warn("Please use explicit addresses! (can search via `geth account list`)")
log.Warn("-------------------------------------------------------------------")
accs := ks.Accounts()
if len(accs) <= index {
return accounts.Account{}, fmt.Errorf("index %d higher than number of accounts %d", index, len(accs))
}
return accs[index], nil
}
// setEtherbase retrieves the etherbase either from the directly specified
// command line flags or from the keystore if CLI indexed.
func setEtherbase(ctx *cli.Context, ks *keystore.KeyStore, cfg *ethconfig.Config) {
// Extract the current etherbase
var etherbase string
if ctx.GlobalIsSet(MinerEtherbaseFlag.Name) {
etherbase = ctx.GlobalString(MinerEtherbaseFlag.Name)
}
// Convert the etherbase into an address and configure it
if etherbase != "" {
if ks != nil {
account, err := MakeAddress(ks, etherbase)
if err != nil {
Fatalf("Invalid miner etherbase: %v", err)
}
cfg.Miner.Etherbase = account.Address
} else {
Fatalf("No etherbase configured")
}
}
}
// MakePasswordList reads password lines from the file specified by the global --password flag.
func MakePasswordList(ctx *cli.Context) []string {
path := ctx.GlobalString(PasswordFileFlag.Name)
if path == "" {
return nil
}
text, err := ioutil.ReadFile(path)
if err != nil {
Fatalf("Failed to read password file: %v", err)
}
lines := strings.Split(string(text), "\n")
// Sanitise DOS line endings.
for i := range lines {
lines[i] = strings.TrimRight(lines[i], "\r")
}
return lines
}
func SetP2PConfig(ctx *cli.Context, cfg *p2p.Config) {
setNodeKey(ctx, cfg)
setNAT(ctx, cfg)
setListenAddress(ctx, cfg)
setBootstrapNodes(ctx, cfg)
setBootstrapNodesV5(ctx, cfg)
lightClient := ctx.GlobalString(SyncModeFlag.Name) == "light"
lightServer := (ctx.GlobalInt(LightServeFlag.Name) != 0)
lightPeers := ctx.GlobalInt(LightMaxPeersFlag.Name)
if lightClient && !ctx.GlobalIsSet(LightMaxPeersFlag.Name) {
// dynamic default - for clients we use 1/10th of the default for servers
lightPeers /= 10
}
if ctx.GlobalIsSet(MaxPeersFlag.Name) {
cfg.MaxPeers = ctx.GlobalInt(MaxPeersFlag.Name)
if lightServer && !ctx.GlobalIsSet(LightMaxPeersFlag.Name) {
cfg.MaxPeers += lightPeers
}
} else {
if lightServer {
cfg.MaxPeers += lightPeers
}
if lightClient && ctx.GlobalIsSet(LightMaxPeersFlag.Name) && cfg.MaxPeers < lightPeers {
cfg.MaxPeers = lightPeers
}
}
if !(lightClient || lightServer) {
lightPeers = 0
}
ethPeers := cfg.MaxPeers - lightPeers
if lightClient {
ethPeers = 0
}
log.Info("Maximum peer count", "ETH", ethPeers, "LES", lightPeers, "total", cfg.MaxPeers)
if ctx.GlobalIsSet(MaxPendingPeersFlag.Name) {
cfg.MaxPendingPeers = ctx.GlobalInt(MaxPendingPeersFlag.Name)
}
if ctx.GlobalIsSet(NoDiscoverFlag.Name) || lightClient {
cfg.NoDiscovery = true
}
// If we're running a light client or server, force enable the v5 peer discovery
// unless it is explicitly disabled with --nodiscover. Note that explicitly specifying
// --v5disc overrides --nodiscover, in which case the latter only disables v4 discovery.
forceV5Discovery := (lightClient || lightServer) && !ctx.GlobalBool(NoDiscoverFlag.Name)
if ctx.GlobalIsSet(DiscoveryV5Flag.Name) {
cfg.DiscoveryV5 = ctx.GlobalBool(DiscoveryV5Flag.Name)
} else if forceV5Discovery {
cfg.DiscoveryV5 = true
}
if netrestrict := ctx.GlobalString(NetrestrictFlag.Name); netrestrict != "" {
list, err := netutil.ParseNetlist(netrestrict)
if err != nil {
Fatalf("Option %q: %v", NetrestrictFlag.Name, err)
}
cfg.NetRestrict = list
}
if ctx.GlobalBool(DeveloperFlag.Name) || ctx.GlobalBool(CatalystFlag.Name) {
// --dev mode can't use p2p networking.
cfg.MaxPeers = 0
cfg.ListenAddr = ""
cfg.NoDial = true
cfg.NoDiscovery = true
cfg.DiscoveryV5 = false
}
}
// SetNodeConfig applies node-related command line flags to the config.
func SetNodeConfig(ctx *cli.Context, cfg *node.Config) {
SetP2PConfig(ctx, &cfg.P2P)
setIPC(ctx, cfg)
setHTTP(ctx, cfg)
setGraphQL(ctx, cfg)
setWS(ctx, cfg)
setNodeUserIdent(ctx, cfg)
setDataDir(ctx, cfg)
setSmartCard(ctx, cfg)
if ctx.GlobalIsSet(ExternalSignerFlag.Name) {
cfg.ExternalSigner = ctx.GlobalString(ExternalSignerFlag.Name)
}
if ctx.GlobalIsSet(KeyStoreDirFlag.Name) {
cfg.KeyStoreDir = ctx.GlobalString(KeyStoreDirFlag.Name)
}
if ctx.GlobalIsSet(DeveloperFlag.Name) {
cfg.UseLightweightKDF = true
}
if ctx.GlobalIsSet(LightKDFFlag.Name) {
cfg.UseLightweightKDF = ctx.GlobalBool(LightKDFFlag.Name)
}
if ctx.GlobalIsSet(NoUSBFlag.Name) || cfg.NoUSB {
log.Warn("Option nousb is deprecated and USB is deactivated by default. Use --usb to enable")
}
if ctx.GlobalIsSet(USBFlag.Name) {
cfg.USB = ctx.GlobalBool(USBFlag.Name)
}
if ctx.GlobalIsSet(InsecureUnlockAllowedFlag.Name) {
cfg.InsecureUnlockAllowed = ctx.GlobalBool(InsecureUnlockAllowedFlag.Name)
}
}
func setSmartCard(ctx *cli.Context, cfg *node.Config) {
// Skip enabling smartcards if no path is set
path := ctx.GlobalString(SmartCardDaemonPathFlag.Name)
if path == "" {
return
}
// Sanity check that the smartcard path is valid
fi, err := os.Stat(path)
if err != nil {
log.Info("Smartcard socket not found, disabling", "err", err)
return
}
if fi.Mode()&os.ModeType != os.ModeSocket {
log.Error("Invalid smartcard daemon path", "path", path, "type", fi.Mode().String())
return
}
// Smartcard daemon path exists and is a socket, enable it
cfg.SmartCardDaemonPath = path
}
func setDataDir(ctx *cli.Context, cfg *node.Config) {
switch {
case ctx.GlobalIsSet(DataDirFlag.Name):
cfg.DataDir = ctx.GlobalString(DataDirFlag.Name)
case ctx.GlobalBool(DeveloperFlag.Name):
cfg.DataDir = "" // unless explicitly requested, use memory databases
case ctx.GlobalBool(RopstenFlag.Name) && cfg.DataDir == node.DefaultDataDir():
// Maintain compatibility with older Geth configurations storing the
// Ropsten database in `testnet` instead of `ropsten`.
legacyPath := filepath.Join(node.DefaultDataDir(), "testnet")
if _, err := os.Stat(legacyPath); !os.IsNotExist(err) {
log.Warn("Using the deprecated `testnet` datadir. Future versions will store the Ropsten chain in `ropsten`.")
cfg.DataDir = legacyPath
} else {
cfg.DataDir = filepath.Join(node.DefaultDataDir(), "ropsten")
}
case ctx.GlobalBool(RinkebyFlag.Name) && cfg.DataDir == node.DefaultDataDir():
cfg.DataDir = filepath.Join(node.DefaultDataDir(), "rinkeby")
case ctx.GlobalBool(GoerliFlag.Name) && cfg.DataDir == node.DefaultDataDir():
cfg.DataDir = filepath.Join(node.DefaultDataDir(), "goerli")
}
}
func setGPO(ctx *cli.Context, cfg *gasprice.Config, light bool) {
// If we are running the light client, apply another group of
// settings for the gas oracle.
if light {
*cfg = ethconfig.LightClientGPO
}
if ctx.GlobalIsSet(GpoBlocksFlag.Name) {
cfg.Blocks = ctx.GlobalInt(GpoBlocksFlag.Name)
}
if ctx.GlobalIsSet(GpoPercentileFlag.Name) {
cfg.Percentile = ctx.GlobalInt(GpoPercentileFlag.Name)
}
if ctx.GlobalIsSet(GpoMaxGasPriceFlag.Name) {
cfg.MaxPrice = big.NewInt(ctx.GlobalInt64(GpoMaxGasPriceFlag.Name))
}
if ctx.GlobalIsSet(GpoIgnoreGasPriceFlag.Name) {
cfg.IgnorePrice = big.NewInt(ctx.GlobalInt64(GpoIgnoreGasPriceFlag.Name))
}
}
func setTxPool(ctx *cli.Context, cfg *core.TxPoolConfig) {
if ctx.GlobalIsSet(TxPoolLocalsFlag.Name) {
locals := strings.Split(ctx.GlobalString(TxPoolLocalsFlag.Name), ",")
for _, account := range locals {
if trimmed := strings.TrimSpace(account); !common.IsHexAddress(trimmed) {
Fatalf("Invalid account in --txpool.locals: %s", trimmed)
} else {
cfg.Locals = append(cfg.Locals, common.HexToAddress(account))
}
}
}
if ctx.GlobalIsSet(TxPoolNoLocalsFlag.Name) {
cfg.NoLocals = ctx.GlobalBool(TxPoolNoLocalsFlag.Name)
}
if ctx.GlobalIsSet(TxPoolJournalFlag.Name) {
cfg.Journal = ctx.GlobalString(TxPoolJournalFlag.Name)
}
if ctx.GlobalIsSet(TxPoolRejournalFlag.Name) {
cfg.Rejournal = ctx.GlobalDuration(TxPoolRejournalFlag.Name)
}
if ctx.GlobalIsSet(TxPoolPriceLimitFlag.Name) {
cfg.PriceLimit = ctx.GlobalUint64(TxPoolPriceLimitFlag.Name)
}
if ctx.GlobalIsSet(TxPoolPriceBumpFlag.Name) {
cfg.PriceBump = ctx.GlobalUint64(TxPoolPriceBumpFlag.Name)
}
if ctx.GlobalIsSet(TxPoolAccountSlotsFlag.Name) {
cfg.AccountSlots = ctx.GlobalUint64(TxPoolAccountSlotsFlag.Name)
}
if ctx.GlobalIsSet(TxPoolGlobalSlotsFlag.Name) {
cfg.GlobalSlots = ctx.GlobalUint64(TxPoolGlobalSlotsFlag.Name)
}
if ctx.GlobalIsSet(TxPoolAccountQueueFlag.Name) {
cfg.AccountQueue = ctx.GlobalUint64(TxPoolAccountQueueFlag.Name)
}
if ctx.GlobalIsSet(TxPoolGlobalQueueFlag.Name) {
cfg.GlobalQueue = ctx.GlobalUint64(TxPoolGlobalQueueFlag.Name)
}
if ctx.GlobalIsSet(TxPoolLifetimeFlag.Name) {
cfg.Lifetime = ctx.GlobalDuration(TxPoolLifetimeFlag.Name)
}
}
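// setEthash applies ethash cache and dataset (DAG) related CLI flags to the config.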
func setEthash(ctx *cli.Context, cfg *ethconfig.Config) {
if ctx.GlobalIsSet(EthashCacheDirFlag.Name) {
cfg.Ethash.CacheDir = ctx.GlobalString(EthashCacheDirFlag.Name)
}
if ctx.GlobalIsSet(EthashDatasetDirFlag.Name) {
cfg.Ethash.DatasetDir = ctx.GlobalString(EthashDatasetDirFlag.Name)
}
if ctx.GlobalIsSet(EthashCachesInMemoryFlag.Name) {
cfg.Ethash.CachesInMem = ctx.GlobalInt(EthashCachesInMemoryFlag.Name)
}
if ctx.GlobalIsSet(EthashCachesOnDiskFlag.Name) {
cfg.Ethash.CachesOnDisk = ctx.GlobalInt(EthashCachesOnDiskFlag.Name)
}
if ctx.GlobalIsSet(EthashCachesLockMmapFlag.Name) {
cfg.Ethash.CachesLockMmap = ctx.GlobalBool(EthashCachesLockMmapFlag.Name)
}
if ctx.GlobalIsSet(EthashDatasetsInMemoryFlag.Name) {
cfg.Ethash.DatasetsInMem = ctx.GlobalInt(EthashDatasetsInMemoryFlag.Name)
}
if ctx.GlobalIsSet(EthashDatasetsOnDiskFlag.Name) {
cfg.Ethash.DatasetsOnDisk = ctx.GlobalInt(EthashDatasetsOnDiskFlag.Name)
}
if ctx.GlobalIsSet(EthashDatasetsLockMmapFlag.Name) {
cfg.Ethash.DatasetsLockMmap = ctx.GlobalBool(EthashDatasetsLockMmapFlag.Name)
}
}
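// setMiner applies mining related CLI flags to the config.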
func setMiner(ctx *cli.Context, cfg *miner.Config) {
if ctx.GlobalIsSet(MinerNotifyFlag.Name) {
cfg.Notify = strings.Split(ctx.GlobalString(MinerNotifyFlag.Name), ",")
}
cfg.NotifyFull = ctx.GlobalBool(MinerNotifyFullFlag.Name)
if ctx.GlobalIsSet(MinerExtraDataFlag.Name) {
cfg.ExtraData = []byte(ctx.GlobalString(MinerExtraDataFlag.Name))
}
if ctx.GlobalIsSet(MinerGasLimitFlag.Name) {
cfg.GasCeil = ctx.GlobalUint64(MinerGasLimitFlag.Name)
}
if ctx.GlobalIsSet(MinerGasPriceFlag.Name) {
cfg.GasPrice = GlobalBig(ctx, MinerGasPriceFlag.Name)
}
if ctx.GlobalIsSet(MinerRecommitIntervalFlag.Name) {
cfg.Recommit = ctx.GlobalDuration(MinerRecommitIntervalFlag.Name)
}
if ctx.GlobalIsSet(MinerNoVerifyFlag.Name) {
cfg.Noverify = ctx.GlobalBool(MinerNoVerifyFlag.Name)
}
if ctx.GlobalIsSet(LegacyMinerGasTargetFlag.Name) {
log.Warn("The generic --miner.gastarget flag is deprecated and will be removed in the future!")
}
}
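// setWhitelist parses the block whitelist flag, a comma-separated list of
// <number>=<hash> entries, into the config. Malformed entries abort startup.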
func setWhitelist(ctx *cli.Context, cfg *ethconfig.Config) {
whitelist := ctx.GlobalString(WhitelistFlag.Name)
if whitelist == "" {
return
}
cfg.Whitelist = make(map[uint64]common.Hash)
for _, entry := range strings.Split(whitelist, ",") {
parts := strings.Split(entry, "=")
if len(parts) != 2 {
Fatalf("Invalid whitelist entry: %s", entry)
}
number, err := strconv.ParseUint(parts[0], 0, 64)
if err != nil {
Fatalf("Invalid whitelist block number %s: %v", parts[0], err)
}
var hash common.Hash
if err = hash.UnmarshalText([]byte(parts[1])); err != nil {
Fatalf("Invalid whitelist hash %s: %v", parts[1], err)
}
cfg.Whitelist[number] = hash
}
}
// CheckExclusive verifies that only a single instance of the provided flags was
// set by the user. Each flag might optionally be followed by a string type to
// specialize it further.
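// For example, CheckExclusive(ctx, DeveloperFlag, ExternalSignerFlag) aborts when
// both flags are set, while CheckExclusive(ctx, LightServeFlag, SyncModeFlag, "light")
// only counts SyncModeFlag as set when its value is "light".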
func CheckExclusive(ctx *cli.Context, args ...interface{}) {
set := make([]string, 0, 1)
for i := 0; i < len(args); i++ {
// Make sure the next argument is a flag and skip if not set
flag, ok := args[i].(cli.Flag)
if !ok {
panic(fmt.Sprintf("invalid argument, not cli.Flag type: %T", args[i]))
}
// Check if next arg extends current and expand its name if so
name := flag.GetName()
if i+1 < len(args) {
switch option := args[i+1].(type) {
case string:
				// Extended flag check: only mark the flag as set if its value matches the passed in option
if ctx.GlobalString(flag.GetName()) == option {
name += "=" + option
set = append(set, "--"+name)
}
// shift arguments and continue
i++
continue
case cli.Flag:
default:
panic(fmt.Sprintf("invalid argument, not cli.Flag or string extension: %T", args[i+1]))
}
}
// Mark the flag if it's set
if ctx.GlobalIsSet(flag.GetName()) {
set = append(set, "--"+name)
}
}
if len(set) > 1 {
Fatalf("Flags %v can't be used at the same time", strings.Join(set, ", "))
}
}
// SetEthConfig applies eth-related command line flags to the config.
func SetEthConfig(ctx *cli.Context, stack *node.Node, cfg *ethconfig.Config) {
// Avoid conflicting network flags
CheckExclusive(ctx, MainnetFlag, DeveloperFlag, RopstenFlag, RinkebyFlag, GoerliFlag)
CheckExclusive(ctx, LightServeFlag, SyncModeFlag, "light")
CheckExclusive(ctx, DeveloperFlag, ExternalSignerFlag) // Can't use both ephemeral unlocked and external signer
if ctx.GlobalString(GCModeFlag.Name) == "archive" && ctx.GlobalUint64(TxLookupLimitFlag.Name) != 0 {
ctx.GlobalSet(TxLookupLimitFlag.Name, "0")
log.Warn("Disable transaction unindexing for archive node")
}
if ctx.GlobalIsSet(LightServeFlag.Name) && ctx.GlobalUint64(TxLookupLimitFlag.Name) != 0 {
log.Warn("LES server cannot serve old transaction status and cannot connect below les/4 protocol version if transaction lookup index is limited")
}
var ks *keystore.KeyStore
if keystores := stack.AccountManager().Backends(keystore.KeyStoreType); len(keystores) > 0 {
ks = keystores[0].(*keystore.KeyStore)
}
setEtherbase(ctx, ks, cfg)
setGPO(ctx, &cfg.GPO, ctx.GlobalString(SyncModeFlag.Name) == "light")
setTxPool(ctx, &cfg.TxPool)
setEthash(ctx, cfg)
setMiner(ctx, &cfg.Miner)
setWhitelist(ctx, cfg)
setLes(ctx, cfg)
// Cap the cache allowance and tune the garbage collector
mem, err := gopsutil.VirtualMemory()
if err == nil {
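		// 32<<(^uintptr(0)>>63) evaluates to 32 on 32-bit platforms and 64 on 64-bit
		// ones; cap the usable memory at 2GB on 32-bit architectures.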
if 32<<(^uintptr(0)>>63) == 32 && mem.Total > 2*1024*1024*1024 {
log.Warn("Lowering memory allowance on 32bit arch", "available", mem.Total/1024/1024, "addressable", 2*1024)
mem.Total = 2 * 1024 * 1024 * 1024
}
allowance := int(mem.Total / 1024 / 1024 / 3)
if cache := ctx.GlobalInt(CacheFlag.Name); cache > allowance {
log.Warn("Sanitizing cache to Go's GC limits", "provided", cache, "updated", allowance)
ctx.GlobalSet(CacheFlag.Name, strconv.Itoa(allowance))
}
}
// Ensure Go's GC ignores the database cache for trigger percentage
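	// With up to 1GB of cache the GC trigger stays at 100%; larger caches scale it
	// down proportionally, bottoming out at 20%.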
cache := ctx.GlobalInt(CacheFlag.Name)
gogc := math.Max(20, math.Min(100, 100/(float64(cache)/1024)))
log.Debug("Sanitizing Go's GC trigger", "percent", int(gogc))
godebug.SetGCPercent(int(gogc))
if ctx.GlobalIsSet(SyncModeFlag.Name) {
cfg.SyncMode = *GlobalTextMarshaler(ctx, SyncModeFlag.Name).(*downloader.SyncMode)
}
if ctx.GlobalIsSet(NetworkIdFlag.Name) {
cfg.NetworkId = ctx.GlobalUint64(NetworkIdFlag.Name)
}
if ctx.GlobalIsSet(CacheFlag.Name) || ctx.GlobalIsSet(CacheDatabaseFlag.Name) {
cfg.DatabaseCache = ctx.GlobalInt(CacheFlag.Name) * ctx.GlobalInt(CacheDatabaseFlag.Name) / 100
}
cfg.DatabaseHandles = MakeDatabaseHandles()
if ctx.GlobalIsSet(AncientFlag.Name) {
cfg.DatabaseFreezer = ctx.GlobalString(AncientFlag.Name)
}
if gcmode := ctx.GlobalString(GCModeFlag.Name); gcmode != "full" && gcmode != "archive" {
Fatalf("--%s must be either 'full' or 'archive'", GCModeFlag.Name)
}
if ctx.GlobalIsSet(GCModeFlag.Name) {
cfg.NoPruning = ctx.GlobalString(GCModeFlag.Name) == "archive"
}
if ctx.GlobalIsSet(CacheNoPrefetchFlag.Name) {
cfg.NoPrefetch = ctx.GlobalBool(CacheNoPrefetchFlag.Name)
}
	// Read the value from the flag whether it is set or not.
cfg.Preimages = ctx.GlobalBool(CachePreimagesFlag.Name)
if cfg.NoPruning && !cfg.Preimages {
cfg.Preimages = true
log.Info("Enabling recording of key preimages since archive mode is used")
}
if ctx.GlobalIsSet(TxLookupLimitFlag.Name) {
cfg.TxLookupLimit = ctx.GlobalUint64(TxLookupLimitFlag.Name)
}
if ctx.GlobalIsSet(CacheFlag.Name) || ctx.GlobalIsSet(CacheTrieFlag.Name) {
cfg.TrieCleanCache = ctx.GlobalInt(CacheFlag.Name) * ctx.GlobalInt(CacheTrieFlag.Name) / 100
}
if ctx.GlobalIsSet(CacheTrieJournalFlag.Name) {
cfg.TrieCleanCacheJournal = ctx.GlobalString(CacheTrieJournalFlag.Name)
}
if ctx.GlobalIsSet(CacheTrieRejournalFlag.Name) {
cfg.TrieCleanCacheRejournal = ctx.GlobalDuration(CacheTrieRejournalFlag.Name)
}
if ctx.GlobalIsSet(CacheFlag.Name) || ctx.GlobalIsSet(CacheGCFlag.Name) {
cfg.TrieDirtyCache = ctx.GlobalInt(CacheFlag.Name) * ctx.GlobalInt(CacheGCFlag.Name) / 100
}
if ctx.GlobalIsSet(CacheFlag.Name) || ctx.GlobalIsSet(CacheSnapshotFlag.Name) {
cfg.SnapshotCache = ctx.GlobalInt(CacheFlag.Name) * ctx.GlobalInt(CacheSnapshotFlag.Name) / 100
}
if !ctx.GlobalBool(SnapshotFlag.Name) {
// If snap-sync is requested, this flag is also required
if cfg.SyncMode == downloader.SnapSync {
log.Info("Snap sync requested, enabling --snapshot")
} else {
cfg.TrieCleanCache += cfg.SnapshotCache
cfg.SnapshotCache = 0 // Disabled
}
}
if ctx.GlobalIsSet(DocRootFlag.Name) {
cfg.DocRoot = ctx.GlobalString(DocRootFlag.Name)
}
if ctx.GlobalIsSet(VMEnableDebugFlag.Name) {
// TODO(fjl): force-enable this in --dev mode
cfg.EnablePreimageRecording = ctx.GlobalBool(VMEnableDebugFlag.Name)
}
if ctx.GlobalIsSet(RPCGlobalGasCapFlag.Name) {
cfg.RPCGasCap = ctx.GlobalUint64(RPCGlobalGasCapFlag.Name)
}
if cfg.RPCGasCap != 0 {
log.Info("Set global gas cap", "cap", cfg.RPCGasCap)
} else {
log.Info("Global gas cap disabled")
}
if ctx.GlobalIsSet(RPCGlobalEVMTimeoutFlag.Name) {
cfg.RPCEVMTimeout = ctx.GlobalDuration(RPCGlobalEVMTimeoutFlag.Name)
}
if ctx.GlobalIsSet(RPCGlobalTxFeeCapFlag.Name) {
cfg.RPCTxFeeCap = ctx.GlobalFloat64(RPCGlobalTxFeeCapFlag.Name)
}
if ctx.GlobalIsSet(NoDiscoverFlag.Name) {
cfg.EthDiscoveryURLs, cfg.SnapDiscoveryURLs = []string{}, []string{}
} else if ctx.GlobalIsSet(DNSDiscoveryFlag.Name) {
urls := ctx.GlobalString(DNSDiscoveryFlag.Name)
if urls == "" {
cfg.EthDiscoveryURLs = []string{}
} else {
cfg.EthDiscoveryURLs = SplitAndTrim(urls)
}
}
// Override any default configs for hard coded networks.
switch {
case ctx.GlobalBool(MainnetFlag.Name):
if !ctx.GlobalIsSet(NetworkIdFlag.Name) {
cfg.NetworkId = 1
}
cfg.Genesis = core.DefaultGenesisBlock()
SetDNSDiscoveryDefaults(cfg, params.MainnetGenesisHash)
case ctx.GlobalBool(RopstenFlag.Name):
if !ctx.GlobalIsSet(NetworkIdFlag.Name) {
cfg.NetworkId = 3
}
cfg.Genesis = core.DefaultRopstenGenesisBlock()
SetDNSDiscoveryDefaults(cfg, params.RopstenGenesisHash)
case ctx.GlobalBool(RinkebyFlag.Name):
if !ctx.GlobalIsSet(NetworkIdFlag.Name) {
cfg.NetworkId = 4
}
cfg.Genesis = core.DefaultRinkebyGenesisBlock()
SetDNSDiscoveryDefaults(cfg, params.RinkebyGenesisHash)
case ctx.GlobalBool(GoerliFlag.Name):
if !ctx.GlobalIsSet(NetworkIdFlag.Name) {
cfg.NetworkId = 5
}
cfg.Genesis = core.DefaultGoerliGenesisBlock()
SetDNSDiscoveryDefaults(cfg, params.GoerliGenesisHash)
case ctx.GlobalBool(DeveloperFlag.Name):
if !ctx.GlobalIsSet(NetworkIdFlag.Name) {
cfg.NetworkId = 1337
}
cfg.SyncMode = downloader.FullSync
// Create new developer account or reuse existing one
var (
developer accounts.Account
passphrase string
err error
)
if list := MakePasswordList(ctx); len(list) > 0 {
			// Just take the first value. Although the function can return multiple values,
			// and some callers iterate through them as attempts, that doesn't make sense
			// here, where we're definitely concerned with only one account.
passphrase = list[0]
}
// setEtherbase has been called above, configuring the miner address from command line flags.
if cfg.Miner.Etherbase != (common.Address{}) {
developer = accounts.Account{Address: cfg.Miner.Etherbase}
} else if accs := ks.Accounts(); len(accs) > 0 {
developer = ks.Accounts()[0]
} else {
developer, err = ks.NewAccount(passphrase)
if err != nil {
Fatalf("Failed to create developer account: %v", err)
}
}
if err := ks.Unlock(developer, passphrase); err != nil {
Fatalf("Failed to unlock developer account: %v", err)
}
log.Info("Using developer account", "address", developer.Address)
// Create a new developer genesis block or reuse existing one
cfg.Genesis = core.DeveloperGenesisBlock(uint64(ctx.GlobalInt(DeveloperPeriodFlag.Name)), developer.Address)
if ctx.GlobalIsSet(DataDirFlag.Name) {
// Check if we have an already initialized chain and fall back to
// that if so. Otherwise we need to generate a new genesis spec.
chaindb := MakeChainDatabase(ctx, stack, false) // TODO (MariusVanDerWijden) make this read only
if rawdb.ReadCanonicalHash(chaindb, 0) != (common.Hash{}) {
cfg.Genesis = nil // fallback to db content
}
chaindb.Close()
}
if !ctx.GlobalIsSet(MinerGasPriceFlag.Name) {
cfg.Miner.GasPrice = big.NewInt(1)
}
default:
if cfg.NetworkId == 1 {
SetDNSDiscoveryDefaults(cfg, params.MainnetGenesisHash)
}
}
}
// SetDNSDiscoveryDefaults configures DNS discovery with the given URL if
// no URLs are set.
func SetDNSDiscoveryDefaults(cfg *ethconfig.Config, genesis common.Hash) {
if cfg.EthDiscoveryURLs != nil {
return // already set through flags/config
}
protocol := "all"
if cfg.SyncMode == downloader.LightSync {
protocol = "les"
}
if url := params.KnownDNSNetwork(genesis, protocol); url != "" {
cfg.EthDiscoveryURLs = []string{url}
cfg.SnapDiscoveryURLs = cfg.EthDiscoveryURLs
}
}
// RegisterEthService adds an Ethereum client to the stack.
// The second return value is the full node instance, which may be nil if the
// node is running as a light client.
func RegisterEthService(stack *node.Node, cfg *ethconfig.Config) (ethapi.Backend, *eth.Ethereum) {
if cfg.SyncMode == downloader.LightSync {
backend, err := les.New(stack, cfg)
if err != nil {
Fatalf("Failed to register the Ethereum service: %v", err)
}
stack.RegisterAPIs(tracers.APIs(backend.ApiBackend))
return backend.ApiBackend, nil
}
backend, err := eth.New(stack, cfg)
if err != nil {
Fatalf("Failed to register the Ethereum service: %v", err)
}
if cfg.LightServ > 0 {
_, err := les.NewLesServer(stack, backend, cfg)
if err != nil {
Fatalf("Failed to create the LES server: %v", err)
}
}
stack.RegisterAPIs(tracers.APIs(backend.APIBackend))
return backend.APIBackend, backend
}
// RegisterLesEthService adds an Ethereum les client to the stack.
func RegisterLesEthService(stack *node.Node, cfg *eth.Config) *les.LightEthereum {
backend, err := les.New(stack, cfg)
if err != nil {
Fatalf("Failed to register the Ethereum service: %v", err)
}
return backend
}
// RegisterEthStatsService configures the Ethereum Stats daemon and adds it to
// the given node.
func RegisterEthStatsService(stack *node.Node, backend ethapi.Backend, url string) {
if err := ethstats.New(stack, backend, backend.Engine(), url); err != nil {
Fatalf("Failed to register the Ethereum Stats service: %v", err)
}
}
// RegisterGraphQLService is a utility function to construct a new service and register it against a node.
func RegisterGraphQLService(stack *node.Node, backend ethapi.Backend, cfg node.Config) {
if err := graphql.New(stack, backend, cfg.GraphQLCors, cfg.GraphQLVirtualHosts); err != nil {
Fatalf("Failed to register the GraphQL service: %v", err)
}
}
// RegisterStateDiffService configures and registers a service to stream state diff data over RPC
func RegisterStateDiffService(stack *node.Node, ethServ *eth.Ethereum, cfg *ethconfig.Config, params statediff.ServiceParams) {
if err := statediff.New(stack, ethServ, cfg, params); err != nil {
Fatalf("Failed to register the Statediff service: %v", err)
}
}
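// SetupMetrics configures metrics reporting based on the CLI flags, optionally
// enabling export to InfluxDB (v1 or v2) and a stand-alone HTTP metrics endpoint.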
func SetupMetrics(ctx *cli.Context) {
if metrics.Enabled {
log.Info("Enabling metrics collection")
var (
enableExport = ctx.GlobalBool(MetricsEnableInfluxDBFlag.Name)
enableExportV2 = ctx.GlobalBool(MetricsEnableInfluxDBV2Flag.Name)
)
if enableExport || enableExportV2 {
CheckExclusive(ctx, MetricsEnableInfluxDBFlag, MetricsEnableInfluxDBV2Flag)
v1FlagIsSet := ctx.GlobalIsSet(MetricsInfluxDBUsernameFlag.Name) ||
ctx.GlobalIsSet(MetricsInfluxDBPasswordFlag.Name)
v2FlagIsSet := ctx.GlobalIsSet(MetricsInfluxDBTokenFlag.Name) ||
ctx.GlobalIsSet(MetricsInfluxDBOrganizationFlag.Name) ||
ctx.GlobalIsSet(MetricsInfluxDBBucketFlag.Name)
if enableExport && v2FlagIsSet {
Fatalf("Flags --influxdb.metrics.organization, --influxdb.metrics.token, --influxdb.metrics.bucket are only available for influxdb-v2")
} else if enableExportV2 && v1FlagIsSet {
Fatalf("Flags --influxdb.metrics.username, --influxdb.metrics.password are only available for influxdb-v1")
}
}
var (
endpoint = ctx.GlobalString(MetricsInfluxDBEndpointFlag.Name)
database = ctx.GlobalString(MetricsInfluxDBDatabaseFlag.Name)
username = ctx.GlobalString(MetricsInfluxDBUsernameFlag.Name)
password = ctx.GlobalString(MetricsInfluxDBPasswordFlag.Name)
token = ctx.GlobalString(MetricsInfluxDBTokenFlag.Name)
bucket = ctx.GlobalString(MetricsInfluxDBBucketFlag.Name)
organization = ctx.GlobalString(MetricsInfluxDBOrganizationFlag.Name)
)
if enableExport {
tagsMap := SplitTagsFlag(ctx.GlobalString(MetricsInfluxDBTagsFlag.Name))
log.Info("Enabling metrics export to InfluxDB")
go influxdb.InfluxDBWithTags(metrics.DefaultRegistry, 10*time.Second, endpoint, database, username, password, "geth.", tagsMap)
} else if enableExportV2 {
tagsMap := SplitTagsFlag(ctx.GlobalString(MetricsInfluxDBTagsFlag.Name))
log.Info("Enabling metrics export to InfluxDB (v2)")
go influxdb.InfluxDBV2WithTags(metrics.DefaultRegistry, 10*time.Second, endpoint, token, bucket, organization, "geth.", tagsMap)
}
if ctx.GlobalIsSet(MetricsHTTPFlag.Name) {
address := fmt.Sprintf("%s:%d", ctx.GlobalString(MetricsHTTPFlag.Name), ctx.GlobalInt(MetricsPortFlag.Name))
log.Info("Enabling stand-alone metrics HTTP endpoint", "address", address)
exp.Setup(address)
}
}
}
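// SplitTagsFlag parses a comma-separated list of key=value pairs, e.g.
// "host=localhost,region=eu", into a map, silently skipping malformed entries.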
func SplitTagsFlag(tagsFlag string) map[string]string {
tags := strings.Split(tagsFlag, ",")
tagsMap := map[string]string{}
for _, t := range tags {
if t != "" {
kv := strings.Split(t, "=")
if len(kv) == 2 {
tagsMap[kv[0]] = kv[1]
}
}
}
return tagsMap
}
// MakeChainDatabase opens a LevelDB using the flags passed to the client and will hard crash if it fails.
func MakeChainDatabase(ctx *cli.Context, stack *node.Node, readonly bool) ethdb.Database {
var (
cache = ctx.GlobalInt(CacheFlag.Name) * ctx.GlobalInt(CacheDatabaseFlag.Name) / 100
handles = MakeDatabaseHandles()
err error
chainDb ethdb.Database
)
if ctx.GlobalString(SyncModeFlag.Name) == "light" {
name := "lightchaindata"
chainDb, err = stack.OpenDatabase(name, cache, handles, "", readonly)
} else {
name := "chaindata"
chainDb, err = stack.OpenDatabaseWithFreezer(name, cache, handles, ctx.GlobalString(AncientFlag.Name), "", readonly)
}
if err != nil {
Fatalf("Could not open database: %v", err)
}
return chainDb
}
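// MakeGenesis returns the genesis spec of the network selected via the CLI flags,
// or nil when no preset network was chosen. Developer chains are ephemeral and
// have no persisted genesis, so requesting one is a fatal error.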
func MakeGenesis(ctx *cli.Context) *core.Genesis {
var genesis *core.Genesis
switch {
case ctx.GlobalBool(MainnetFlag.Name):
genesis = core.DefaultGenesisBlock()
case ctx.GlobalBool(RopstenFlag.Name):
genesis = core.DefaultRopstenGenesisBlock()
case ctx.GlobalBool(RinkebyFlag.Name):
genesis = core.DefaultRinkebyGenesisBlock()
case ctx.GlobalBool(GoerliFlag.Name):
genesis = core.DefaultGoerliGenesisBlock()
case ctx.GlobalBool(DeveloperFlag.Name):
Fatalf("Developer chains are ephemeral")
}
return genesis
}
// MakeChain creates a chain manager from set command line flags.
func MakeChain(ctx *cli.Context, stack *node.Node) (chain *core.BlockChain, chainDb ethdb.Database) {
var err error
chainDb = MakeChainDatabase(ctx, stack, false) // TODO(rjl493456442) support read-only database
config, _, err := core.SetupGenesisBlock(chainDb, MakeGenesis(ctx))
if err != nil {
Fatalf("%v", err)
}
var engine consensus.Engine
if config.Clique != nil {
engine = clique.New(config.Clique, chainDb)
} else {
engine = ethash.NewFaker()
if !ctx.GlobalBool(FakePoWFlag.Name) {
engine = ethash.New(ethash.Config{
CacheDir: stack.ResolvePath(ethconfig.Defaults.Ethash.CacheDir),
CachesInMem: ethconfig.Defaults.Ethash.CachesInMem,
CachesOnDisk: ethconfig.Defaults.Ethash.CachesOnDisk,
CachesLockMmap: ethconfig.Defaults.Ethash.CachesLockMmap,
DatasetDir: stack.ResolvePath(ethconfig.Defaults.Ethash.DatasetDir),
DatasetsInMem: ethconfig.Defaults.Ethash.DatasetsInMem,
DatasetsOnDisk: ethconfig.Defaults.Ethash.DatasetsOnDisk,
DatasetsLockMmap: ethconfig.Defaults.Ethash.DatasetsLockMmap,
}, nil, false)
}
}
if gcmode := ctx.GlobalString(GCModeFlag.Name); gcmode != "full" && gcmode != "archive" {
Fatalf("--%s must be either 'full' or 'archive'", GCModeFlag.Name)
}
cache := &core.CacheConfig{
TrieCleanLimit: ethconfig.Defaults.TrieCleanCache,
TrieCleanNoPrefetch: ctx.GlobalBool(CacheNoPrefetchFlag.Name),
TrieDirtyLimit: ethconfig.Defaults.TrieDirtyCache,
TrieDirtyDisabled: ctx.GlobalString(GCModeFlag.Name) == "archive",
TrieTimeLimit: ethconfig.Defaults.TrieTimeout,
SnapshotLimit: ethconfig.Defaults.SnapshotCache,
Preimages: ctx.GlobalBool(CachePreimagesFlag.Name),
}
if cache.TrieDirtyDisabled && !cache.Preimages {
cache.Preimages = true
log.Info("Enabling recording of key preimages since archive mode is used")
}
if !ctx.GlobalBool(SnapshotFlag.Name) {
cache.SnapshotLimit = 0 // Disabled
}
if ctx.GlobalIsSet(CacheFlag.Name) || ctx.GlobalIsSet(CacheTrieFlag.Name) {
cache.TrieCleanLimit = ctx.GlobalInt(CacheFlag.Name) * ctx.GlobalInt(CacheTrieFlag.Name) / 100
}
if ctx.GlobalIsSet(CacheFlag.Name) || ctx.GlobalIsSet(CacheGCFlag.Name) {
cache.TrieDirtyLimit = ctx.GlobalInt(CacheFlag.Name) * ctx.GlobalInt(CacheGCFlag.Name) / 100
}
vmcfg := vm.Config{EnablePreimageRecording: ctx.GlobalBool(VMEnableDebugFlag.Name)}
// TODO(rjl493456442) disable snapshot generation/wiping if the chain is read only.
// Disable transaction indexing/unindexing by default.
chain, err = core.NewBlockChain(chainDb, cache, config, engine, vmcfg, nil, nil)
if err != nil {
Fatalf("Can't create BlockChain: %v", err)
}
return chain, chainDb
}
// MakeConsolePreloads retrieves the absolute paths for the console JavaScript
// scripts to preload before starting.
func MakeConsolePreloads(ctx *cli.Context) []string {
// Skip preloading if there's nothing to preload
if ctx.GlobalString(PreloadJSFlag.Name) == "" {
return nil
}
// Otherwise resolve absolute paths and return them
var preloads []string
for _, file := range strings.Split(ctx.GlobalString(PreloadJSFlag.Name), ",") {
preloads = append(preloads, strings.TrimSpace(file))
}
return preloads
}
// MigrateFlags sets the global flag from a local flag when it's set.
// This is a temporary function used for migrating old command/flags to the
// new format.
//
// e.g. geth account new --keystore /tmp/mykeystore --lightkdf
//
// is equivalent after calling this method with:
//
// geth --keystore /tmp/mykeystore --lightkdf account new
//
// This allows the use of the existing configuration functionality.
// When all flags are migrated this function can be removed and the existing
// configuration functionality must be changed so that it uses local flags.
func MigrateFlags(action func(ctx *cli.Context) error) func(*cli.Context) error {
return func(ctx *cli.Context) error {
for _, name := range ctx.FlagNames() {
if ctx.IsSet(name) {
ctx.GlobalSet(name, ctx.String(name))
}
}
return action(ctx)
}
}