go-ethereum/statediff/indexer/ipld/eth_parser.go
Elizabeth 3aead03aeb Statediffing geth
* Write state diff to CSV (#2)

* port statediff from 9b7fd9af80/statediff/statediff.go; minor fixes

* integrating state diff extracting, building, and persisting into geth processes

* work towards persisting created statediffs in ipfs; based off github.com/vulcanize/eth-block-extractor

* Add a state diff service

* Remove diff extractor from blockchain

* Update imports

* Move statediff on/off check to geth cmd config

* Update starting state diff service

* Add debugging logs for creating diff

* Add statediff extractor and builder tests and small refactoring

* Start to write statediff to a CSV

* Restructure statediff directory

* Pull CSV publishing methods into their own file

* Reformatting due to go fmt

* Add gomega to vendor dir

* Remove testing focuses

* Update statediff tests to use golang test pkg

instead of ginkgo

- builder_test
- extractor_test
- publisher_test

* Use hexutil.Encode instead of deprecated common.ToHex

* Remove OldValue from DiffBigInt and DiffUint64 fields

* Update builder test

* Remove old storage value from updated accounts

* Remove old values from created/deleted accounts

* Update publisher to account for only storing current account values

* Update service loop and fetching previous block

* Update testing

- remove statediff ginkgo test suite file
- move mocks to their own dir

* Updates per go fmt

* Updates to tests

* Pass statediff mode and path in through cli

* Return filename from publisher

* Remove some duplication in builder

* Remove code field from state diff output

this is the contract byte code, and it can still be obtained by querying
the db by the codeHash
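
For example, a minimal sketch (hedged: the helper name is ours; rawdb.ReadCode is current geth API and `db` is assumed to be an open key/value store):

    import (
        "github.com/ethereum/go-ethereum/common"
        "github.com/ethereum/go-ethereum/core/rawdb"
        "github.com/ethereum/go-ethereum/ethdb"
    )

    // codeByHash (hypothetical helper) reads the contract byte code stored
    // under the given code hash; rawdb.ReadCode returns nil if nothing is stored.
    func codeByHash(db ethdb.KeyValueReader, codeHash common.Hash) []byte {
        return rawdb.ReadCode(db, codeHash)
    }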

* Consolidate acct diff structs for updated & updated/deleted accts

* Include block number in csv filename

* Clean up error logging

* Cleanup formatting, spelling, etc

* Address PR comments

* Add contract address and storage value to csv

* Refactor accumulating account row in csv publisher

* Add DiffStorage struct

* Add storage key to csv

* Address PR comments

* Fix publisher to include rows for accounts that don't have store updates

* Update builder test after merging in release/1.8

* Update test contract to include storage on contract initialization

- so that we're able to test that storage diffing works for created and
deleted accounts (not just updated accounts).

* Factor out a common trie iterator method in builder

* Apply goimports to statediff

* Apply gosimple changes to statediff

* Gracefully exit geth command (#4)

* Statediff for full node (#6)

* Open a trie from the in-memory database

* Use a node's LeafKey as an identifier instead of the address

It was proving difficult to look the address up from a given path
with a full node (sometimes the value wouldn't exist in the disk db).
So, instead, for now we are using the node's LeafKey, which is a Keccak256
hash of the address; if we know the address we can figure out which
LeafKey it matches up to.
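
For reference, a minimal sketch of that relationship (hedged: the helper names are ours, using geth's crypto package):

    import (
        "bytes"

        "github.com/ethereum/go-ethereum/common"
        "github.com/ethereum/go-ethereum/crypto"
    )

    // leafKeyForAddress (hypothetical helper): the state trie leaf key is
    // the Keccak256 hash of the 20-byte account address.
    func leafKeyForAddress(addr common.Address) []byte {
        return crypto.Keccak256(addr.Bytes())
    }

    // matchesAddress reports whether a diffed node's LeafKey belongs to a
    // known address.
    func matchesAddress(leafKey []byte, addr common.Address) bool {
        return bytes.Equal(leafKey, leafKeyForAddress(addr))
    }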

* Make sure that statediff has been processed before pruning

* Use blockchain stateCache.OpenTrie for storage diffs

* Clean up log lines and remove unnecessary fields from builder

* Apply go fmt changes

* Add a sleep to the blockchain test

* refactoring/reorganizing packages

* refactoring statediff builder and types and adjusted to relay proofs and paths (still need to make this optional)

* refactoring state diff service and adding api which allows for streaming state diff payloads over an rpc websocket subscription

* make proofs and paths optional + compress service loop into single for loop (may be missing something here)

* option to process intermediate nodes

* make state diff rlp serializable

* cli parameter to limit statediffing to select account addresses + test

* review fixes and fixes for issues ran into in integration

* review fixes; proper method signature for api; adjust service so that statediff processing is halted/paused until there is at least one subscriber listening for the results

* adjust buffering to improve stability; doc.go; fix notifier err handling

* relay receipts with the rest of the data + review fixes/changes

* rpc method to get statediff at specific block; requires archival node or the block be within the pruning range

* fix linter issues

* include total difficulty to the payload

* fix state diff builder: emit actual leaf nodes instead of value nodes; diff on the leaf not on the value; emit correct path for intermediate nodes

* adjust statediff builder tests to changes and extend to test intermediate nodes; golint

* add genesis block to test; handle block 0 in StateDiffAt

* rlp files for mainnet blocks 0-3, for tests

* builder test on mainnet blocks

* common.BytesToHash(path) => crypto.Keccak256(hash) in builder; BytesToHash produces the same hash for e.g. []byte{} and []byte{\x00} - prefix \x00 steps are inconsequential to the hash result
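
A tiny demonstration of the collision (a hedged sketch, not code from this commit):

    package main

    import (
        "fmt"

        "github.com/ethereum/go-ethereum/common"
        "github.com/ethereum/go-ethereum/crypto"
    )

    func main() {
        // BytesToHash left-pads its input with zeros, so these collide:
        fmt.Println(common.BytesToHash([]byte{}) == common.BytesToHash([]byte{0x00})) // true
        // Keccak256 distinguishes them:
        fmt.Println(crypto.Keccak256Hash([]byte{}) == crypto.Keccak256Hash([]byte{0x00})) // false
    }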

* complete tests for early mainnet blocks

* diff type for representing deleted accounts

* fix builder so that we handle account deletions properly and properly diff storage when an account is moved to a new path; update params

* remove cli params; moving them to subscriber defined

* remove unneeded bc methods

* update service and api; statediffing params are now defined by user through api rather than by service provider by cli

* update top level tests

* add ability to watch specific storage slots (leaf keys) only

* comments; explain logic

* update mainnet blocks test

* update api_test.go

* storage leafkey filter test

* cleanup chain maker

* adjust chain maker for tests to add an empty account in block1 and switch to EIP-158 afterwards (now we just need to generate enough accounts until one causes the empty account to be touched and removed post-EIP-158 so we can simulate and test that process...); also added 2 new blocks where more contract storage is set and old slots are set to zero so they are removed so we can test that

* found an account whose creation causes the empty account to be moved to a new path; this should count as 'touching' the empty account and cause it to be removed according to eip-158... but it doesn't

* use new contract in unit tests that has self-destruct ability, so we can test eip-158, since simply moving an account to a new path doesn't count as 'touching' it

* handle storage deletions

* tests for eip-158 account removal and storage value deletions; there is one edge case left to test where we remove 1 account when only two exist such that the remaining account is moved up and replaces the root branch node

* finish testing known edge cases

* add endpoint to fetch all state and storage nodes at a given block height; useful for generating a recent state cache/snapshot that we can diff forward from rather than needing to collect all diffs from genesis

* test for state trie builder

* if statediffing is on, lock tries in triedb until the statediffing service signals they are done using them

* fix mock blockchain; golint; bump patch

* increase maxRequestContentLength; bump patch

* log the sizes of the state objects we are sending

* CI build (#20)

* CI: run build on PR and on push to master

* CI: debug building geth

* CI: fix copying file

* CI: fix copying file v2

* CI: temporary upload file to release asset

* CI: get release upload_url by tag, upload asset to current release

* CI: fix tag name

* fix ci build on statediff_at_anyblock-1.9.11 branch

* fix publishing assets in release

* use context deadline for timeout in eth_call

* collect and emit codehash=>code mappings for state objects

* subscription endpoint for retrieving all the codehash=>code mappings that exist at provided height

* Implement WriteStateDiffAt

* Writes state diffs directly to postgres

* Adds CLI flags to configure PG

* Refactors builder output with callbacks

* Copies refactored postgres handling code from ipld-eth-indexer

* rename PostgresCIDWriter.{index->upsert}*

* rm unused

* output code & codehash iteratively

* had to refactor some types for this

* prometheus metrics output

* duplicate recent eth-indexer changes

* migrations and metrics...

* [wip] prom.Init() here? another CLI flag?

* tidy & DRY

* statediff WriteLoop service + CLI flag

* [wip] update test mocks

* todo - do something meaningful to test write loop

* logging

* use geth log

* port tests to go testing

* drop ginkgo/gomega

* fix and cleanup tests

* fail before defer statement

* delete vendor/ dir

* fixes after rebase onto 1.9.23

* fix API registration

* use golang 1.15.5 version (#34)

* bump version meta; add 0.0.11 branch to actions

* bump version meta; update github actions workflows

* statediff: refactor metrics

* Remove redundant statediff/indexer/prom tooling and use existing
prometheus integration.

* "indexer" namespace for metrics

* add reporting loop for db metrics

* doc

* metrics for statediff stats

* metrics namespace/subsystem = statediff/{indexer,service}

* statediff: use a worker pool (for direct writes)

* fix test

* fix chain event subscription

* log tweaks

* func name

* unused import

* intermediate chain event channel for metrics

* update github actions; linting

* add poststate and status to receipt ipld indexes

* stateDiffFor endpoints for fetching or writing statediff object by blockhash; bump statediff version

* fixes after rebase on to v1.10.1

* update github actions and version meta; go fmt

* add leaf key to removed 'nodes'

* include Postgres migrations and schema

* service documentation

* touching up

* update github actions after rebase

* fix connection leak (misplaced defer) and perform proper rollback on errs

* improve error logging; handle PushBlock internal err

* build docker image and publish it to Docker Hub on release

* add access list tx to unit tests

* MarshalBinary and UnmarshalBinary methods for receipt

* fix error caused by 2718 by using MarshalBinary instead of EncodeRLP methods

* ipld encoding/decoding tests

* update TxModel; add AccessListElementModel

* index tx type and access lists

* add access list metrics

* unit tests for tx_type and access list table

* unit tests for receipt marshal/unmarshal binary methods

* improve documentation of the encoding methods

* fix issue identified in linting

* update github actions and version meta after rebase

* unit test that fails non-deterministically on eip2930 txs, giving the same error we are seeing in prod

* Include genesis block state diff.

* documentation on versioning, rebasing, releasing; bump version meta

* Add geth and statediff unit test to CI.

* Set pgpassword in env.

* Added comments.

* Add new major branch to github action.

* Add support for Dynamic txn (EIP-1559).

* Update version meta to 0.0.24

* Verify block base fee in test.

* Fix base_fee type and add backward compatible test.

* Remove type definition for AccessListElementModel

* Change basefee to int64/bigint.

* block and uncle reward in PoA network = 0  (#87)

* in PoA networks there are no block and uncle rewards

* bump meta version

* (cherry picked from commit b64ca14689)

* Use Ropsten to test block reward.

* Add Makefile target to build static linux binaries.

* Strip symbol tables from static binaries.

* Fix block_fee to support NULL values.

* bump version meta.

* Add new major branch to github action.

* rename doc.go to README.md

* Create a separate table for storing logs

* Bump statediff version to 0.0.26.

* add btree index to state/storage_cids.node_type; updated schema

* Dedup receipt data.

* Fix linter errors.

* Address comments.

* Bump statediff version to 0.0.27.

* new cli flag for initializing the db the first time the service is run

* only write Removed node ipld block (on db init) and reuse constant cid and mhkey

* test new handling of Removed nodes; don't require init flag

* log metrics

* Add new major branch to github action.

* Fix build.

* Update golang version in CI.

* Use ipld-eth-db in testing.

* Remove migration from repo.

* Add new major branch to github action.

* Use `GetTd` instead of `GetTdByHash` (6289137827)

* Add new major branch to github action.

* Report DB metrics

* batch inserts to public.blocks

* v2 => v3  major refactor

* fixes and cli integration for new options

* update example command in readme

* ashwin's fix for failing pgx unit test

* update to use new schema; fix pgx driver

* indexer that writes sql stmts out to a file

* cli integration

* fix unit tests

* use node_id as PK/FK

* misc fixes/adjustments

* update README

* cleanup; more unit tests

* basefee is big.Int, it won't always fit in int64

* adjust for schema updates

* finish unit tests

* test harness for arbitrary mainnet blocks and receipts

* cache problematic block locally for quicker testing/easier CI testing

* fix issue with log/logTrie processing

* remove some unnecessary hashing operations

* handle edge case

* add more 'bad blocks' to mainnet_tests

* increase file write buffer size

* increase buffer further

* fix rct trie multicodec type

* extend testing

* log trie fk fix

* bump statediff meta version; use db v0.3.0 in compose

* skip file writing tests in CI, for now

* prevent parallel execution of tests in different pkgs (suspect this is what causes our deadlock to show up only in CI test env); adjust write buffering

* fix rct unit tests

* fix README formatting

* port retry on deadlock detection feature

* new workflow on-'master' targets

* update version meta

* improve test coverage for logs

* fix possible race condition

* fix CI

* check tx pool state at end of unit tests

* better logging of rollbacks and dead lock retries

// VulcanizeDB
// Copyright © 2019 Vulcanize
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

package ipld

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"io/ioutil"

	"github.com/ipfs/go-cid"
	node "github.com/ipfs/go-ipld-format"
	"github.com/multiformats/go-multihash"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rlp"
)

// FromBlockRLP takes an RLP message representing
// an ethereum block header or body (header, ommers and txs)
// and returns it as a set of IPLD nodes for further processing.
func FromBlockRLP(r io.Reader) (*EthHeader, []*EthTx, []*EthTxTrie, error) {
	// We may want to use this stream several times
	rawdata, err := ioutil.ReadAll(r)
	if err != nil {
		return nil, nil, nil, err
	}

	// Let's try to decode the received element as a block body
	var decodedBlock types.Block
	err = rlp.Decode(bytes.NewBuffer(rawdata), &decodedBlock)
	if err != nil {
		if err.Error()[:41] != "rlp: expected input list for types.Header" {
			return nil, nil, nil, err
		}

		// Maybe it is just a header... (body sans ommers and txs)
		var decodedHeader types.Header
		err := rlp.Decode(bytes.NewBuffer(rawdata), &decodedHeader)
		if err != nil {
			return nil, nil, nil, err
		}

		c, err := RawdataToCid(MEthHeader, rawdata, multihash.KECCAK_256)
		if err != nil {
			return nil, nil, nil, err
		}

		// It was a header
		return &EthHeader{
			Header:  &decodedHeader,
			cid:     c,
			rawdata: rawdata,
		}, nil, nil, nil
	}

	// This is a block body (header + ommers + txs).
	// We'll extract the header bits here
	headerRawData := getRLP(decodedBlock.Header())
	c, err := RawdataToCid(MEthHeader, headerRawData, multihash.KECCAK_256)
	if err != nil {
		return nil, nil, nil, err
	}
	ethBlock := &EthHeader{
		Header:  decodedBlock.Header(),
		cid:     c,
		rawdata: headerRawData,
	}

	// Process the found eth-tx objects
	ethTxNodes, ethTxTrieNodes, err := processTransactions(decodedBlock.Transactions(),
		decodedBlock.Header().TxHash[:])
	if err != nil {
		return nil, nil, nil, err
	}

	return ethBlock, ethTxNodes, ethTxTrieNodes, nil
}
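
// exampleFromBlockRLP is a hedged usage sketch (illustrative only, not part
// of the upstream file): it parses a raw RLP-encoded block or header and
// returns the CID of the resulting header node.
func exampleFromBlockRLP(rawBlock []byte) (cid.Cid, error) {
	header, _, _, err := FromBlockRLP(bytes.NewReader(rawBlock))
	if err != nil {
		return cid.Undef, err
	}
	return header.Cid(), nil
}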

// FromBlockJSON takes the output of an ethereum client JSON API
// (e.g. parity or geth) and returns a set of IPLD nodes.
func FromBlockJSON(r io.Reader) (*EthHeader, []*EthTx, []*EthTxTrie, error) {
	var obj objJSONHeader
	dec := json.NewDecoder(r)
	err := dec.Decode(&obj)
	if err != nil {
		return nil, nil, nil, err
	}

	headerRawData := getRLP(obj.Result.Header)
	c, err := RawdataToCid(MEthHeader, headerRawData, multihash.KECCAK_256)
	if err != nil {
		return nil, nil, nil, err
	}
	ethBlock := &EthHeader{
		Header:  &obj.Result.Header,
		cid:     c,
		rawdata: headerRawData,
	}

	// Process the found eth-tx objects
	ethTxNodes, ethTxTrieNodes, err := processTransactions(obj.Result.Transactions,
		obj.Result.Header.TxHash[:])
	if err != nil {
		return nil, nil, nil, err
	}

	return ethBlock, ethTxNodes, ethTxTrieNodes, nil
}

// FromBlockAndReceipts takes a block and its receipts and processes them into
// a set of IPLD nodes for further processing: the header and uncle nodes, the
// tx and tx-trie nodes, the receipt and rct-trie nodes, the log-trie and log
// nodes per receipt, and the leaf-node CIDs for logs and receipts.
func FromBlockAndReceipts(block *types.Block, receipts []*types.Receipt) (*EthHeader, []*EthHeader, []*EthTx, []*EthTxTrie, []*EthReceipt, []*EthRctTrie, [][]node.Node, [][]cid.Cid, []cid.Cid, error) {
	// Process the header
	headerNode, err := NewEthHeader(block.Header())
	if err != nil {
		return nil, nil, nil, nil, nil, nil, nil, nil, nil, err
	}

	// Process the uncles
	uncleNodes := make([]*EthHeader, len(block.Uncles()))
	for i, uncle := range block.Uncles() {
		uncleNode, err := NewEthHeader(uncle)
		if err != nil {
			return nil, nil, nil, nil, nil, nil, nil, nil, nil, err
		}
		uncleNodes[i] = uncleNode
	}

	// Process the txs
	txNodes, txTrieNodes, err := processTransactions(block.Transactions(),
		block.Header().TxHash[:])
	if err != nil {
		return nil, nil, nil, nil, nil, nil, nil, nil, nil, err
	}

	// Process the receipts and logs
	rctNodes, rctTrieNodes, logTrieAndLogNodes, logLeafNodeCIDs, rctLeafNodeCIDs, err := processReceiptsAndLogs(receipts,
		block.Header().ReceiptHash[:])

	return headerNode, uncleNodes, txNodes, txTrieNodes, rctNodes, rctTrieNodes, logTrieAndLogNodes, logLeafNodeCIDs, rctLeafNodeCIDs, err
}

// processTransactions takes the transactions found in a parsed block body and
// returns IPLD node slices for eth-tx and eth-tx-trie
func processTransactions(txs []*types.Transaction, expectedTxRoot []byte) ([]*EthTx, []*EthTxTrie, error) {
	var ethTxNodes []*EthTx
	transactionTrie := newTxTrie()

	for idx, tx := range txs {
		ethTx, err := NewEthTx(tx)
		if err != nil {
			return nil, nil, err
		}
		ethTxNodes = append(ethTxNodes, ethTx)
		if err := transactionTrie.Add(idx, ethTx.RawData()); err != nil {
			return nil, nil, err
		}
	}
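
	// Sanity check: the trie we just built must reproduce the txRoot recorded
	// in the block header; a mismatch means the supplied transactions are
	// inconsistent with that header.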
	if !bytes.Equal(transactionTrie.rootHash(), expectedTxRoot) {
		return nil, nil, fmt.Errorf("wrong transaction hash computed")
	}

	txTrieNodes, err := transactionTrie.getNodes()
	return ethTxNodes, txTrieNodes, err
}

// processReceiptsAndLogs takes in receipts and returns IPLD node slices for
// eth-rct, eth-rct-trie, and the log-trie and log nodes per receipt, along
// with the leaf-node CIDs for logs (per receipt) and receipts.
func processReceiptsAndLogs(rcts []*types.Receipt, expectedRctRoot []byte) ([]*EthReceipt, []*EthRctTrie, [][]node.Node, [][]cid.Cid, []cid.Cid, error) {
	// Pre-allocate memory.
	ethRctNodes := make([]*EthReceipt, 0, len(rcts))
	ethLogleafNodeCids := make([][]cid.Cid, 0, len(rcts))
	ethLogTrieAndLogNodes := make([][]node.Node, 0, len(rcts))

	receiptTrie := NewRctTrie()
	for idx, rct := range rcts {
		// Process logs for each receipt.
		logTrieNodes, leafNodeCids, logTrieHash, err := processLogs(rct.Logs)
		if err != nil {
			return nil, nil, nil, nil, nil, err
		}
		rct.LogRoot = logTrieHash
		ethLogTrieAndLogNodes = append(ethLogTrieAndLogNodes, logTrieNodes)
		ethLogleafNodeCids = append(ethLogleafNodeCids, leafNodeCids)

		ethRct, err := NewReceipt(rct)
		if err != nil {
			return nil, nil, nil, nil, nil, err
		}
		ethRctNodes = append(ethRctNodes, ethRct)
		if err = receiptTrie.Add(idx, ethRct.RawData()); err != nil {
			return nil, nil, nil, nil, nil, err
		}
	}

	if !bytes.Equal(receiptTrie.rootHash(), expectedRctRoot) {
		return nil, nil, nil, nil, nil, fmt.Errorf("wrong receipt hash computed")
	}

	rctTrieNodes, err := receiptTrie.GetNodes()
	if err != nil {
		return nil, nil, nil, nil, nil, err
	}

	rctLeafNodes, keys, err := receiptTrie.GetLeafNodes()
	if err != nil {
		return nil, nil, nil, nil, nil, err
	}
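
	// GetLeafNodes returns the leaves in trie-iteration order, keyed by the
	// RLP-encoded receipt index, so decode each key to place every CID at the
	// slot matching its receipt's position in the block.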
	ethRctleafNodeCids := make([]cid.Cid, len(rctLeafNodes))
	for i, rln := range rctLeafNodes {
		var idx uint
		r := bytes.NewReader(keys[i].TrieKey)
		err = rlp.Decode(r, &idx)
		if err != nil {
			return nil, nil, nil, nil, nil, err
		}
		ethRctleafNodeCids[idx] = rln.Cid()
	}

	return ethRctNodes, rctTrieNodes, ethLogTrieAndLogNodes, ethLogleafNodeCids, ethRctleafNodeCids, err
}

const keccak256Length = 32

func processLogs(logs []*types.Log) ([]node.Node, []cid.Cid, common.Hash, error) {
	logTr := newLogTrie()
	shortLog := make(map[uint64]*EthLog, len(logs))
	for idx, log := range logs {
		logRaw, err := rlp.EncodeToBytes(log)
		if err != nil {
			return nil, nil, common.Hash{}, err
		}
		// If len(logRaw) <= keccak256Length, this value's "leaf node" may be
		// internalized in its parent branch node, but only if
		// len(partialPathOfTheNode) + len(logRaw) <= keccak256Length. We can't
		// tell what the partial path will be until the trie is Commit()-ed, so
		// we collect all the leaf nodes first; any index recorded in shortLog
		// that is missing a leaf node afterwards must have been internalized
		// into its parent branch node, and for those we fall back to the
		// cid.Cid cached here.
		if len(logRaw) <= keccak256Length {
			logNode, err := NewLog(log)
			if err != nil {
				return nil, nil, common.Hash{}, err
			}
			shortLog[uint64(idx)] = logNode
		}
		if err = logTr.Add(idx, logRaw); err != nil {
			return nil, nil, common.Hash{}, err
		}
	}

	logTrieNodes, err := logTr.getNodes()
	if err != nil {
		return nil, nil, common.Hash{}, err
	}

	leafNodes, keys, err := logTr.getLeafNodes()
	if err != nil {
		return nil, nil, common.Hash{}, err
	}

	leafNodeCids := make([]cid.Cid, len(logs))
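	// Map each log leaf back to its log index by decoding the RLP-encoded
	// trie key (the same technique used for the receipt leaves above).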
	for i, ln := range leafNodes {
		var idx uint
		r := bytes.NewReader(keys[i].TrieKey)
		err = rlp.Decode(r, &idx)
		if err != nil {
			return nil, nil, common.Hash{}, err
		}
		leafNodeCids[idx] = ln.Cid()
	}

	// This is where we check which logs <= keccak256Length were actually
	// internalized into a parent branch node, and replace those with the
	// cid.Cid of the raw log IPLD.
	for i, l := range shortLog {
		if !leafNodeCids[i].Defined() {
			leafNodeCids[i] = l.Cid()
			// Since the leaf node was internalized, append an IPLD for the
			// log itself to the list of IPLDs we need to publish.
			logTrieNodes = append(logTrieNodes, l)
		}
	}

	return logTrieNodes, leafNodeCids, common.BytesToHash(logTr.rootHash()), err
}
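
// exampleIndexBlock is a hedged usage sketch (illustrative only, not part of
// the upstream file): it runs FromBlockAndReceipts and reports how many IPLD
// nodes of each kind were produced for a block.
func exampleIndexBlock(block *types.Block, rcts []*types.Receipt) error {
	header, uncles, txs, txTrie, rctNodes, rctTrie, _, _, _, err := FromBlockAndReceipts(block, rcts)
	if err != nil {
		return err
	}
	fmt.Printf("header %s: %d uncles, %d txs (+%d tx-trie nodes), %d rcts (+%d rct-trie nodes)\n",
		header.Cid(), len(uncles), len(txs), len(txTrie), len(rctNodes), len(rctTrie))
	return nil
}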