Merge pull request #94 from vulcanize/documentation-updates

Documentation updates
Commit 79765c7998 by Elizabeth, 2019-05-15 13:22:15 -05:00 (committed by GitHub)
18 changed files with 269 additions and 162 deletions

View File

@@ -7,35 +7,40 @@
 ## Table of Contents
-1. [Background](../staging/README.md#background)
-1. [Dependencies](../staging/README.md#dependencies)
-1. [Install](../staging/README.md#install)
-1. [Usage](../staging/README.md#usage)
-1. [Tests](../staging/README.md#tests)
-1. [API](../staging/README.md#API)
-1. [Contributing](../staging/README.md#contributing)
-1. [License](../staging/README.md#license)
+1. [Background](#background)
+1. [Install](#install)
+1. [Usage](#usage)
+1. [Contributing](#contributing)
+1. [License](#license)
 ## Background
 The same data structures and encodings that make Ethereum an effective and trust-less distributed virtual machine
 complicate data accessibility and usability for dApp developers. VulcanizeDB improves Ethereum data accessibility by
-providing a suite of tools to ease the extraction and transformation of data into a more useful state.
+providing a suite of tools to ease the extraction and transformation of data into a more useful state, including
+allowing for exposing aggregate data from a suite of smart contracts.
+VulcanizeDB includes processes that sync, transform and expose data. Syncing involves
+querying an Ethereum node and then persisting core data into a Postgres database. Transforming focuses on using previously synced data to
+query for and transform log event and storage data for specifically configured smart contract addresses. Exposing data is a matter of getting
+data from VulcanizeDB's underlying Postgres database and making it accessible.
-## Dependencies
+![VulcanizeDB Overview Diagram](documentation/diagrams/vdb-overview.png)
+## Install
+1. [Dependencies](#dependencies)
+1. [Building the project](#building-the-project)
+1. [Setting up the database](#setting-up-the-database)
+1. [Configuring a synced Ethereum node](#configuring-a-synced-ethereum-node)
+### Dependencies
 - Go 1.11+
 - Postgres 11.2
 - Ethereum Node
   - [Go Ethereum](https://ethereum.github.io/go-ethereum/downloads/) (1.8.23+)
   - [Parity 1.8.11+](https://github.com/paritytech/parity/releases)
-## Install
-1. [Building the project](../staging/README.md#building-the-project)
-1. [Setting up the database](../staging/README.md#setting-up-the-database)
-1. [Configuring a synced Ethereum node](../staging/README.md#configuring-a-synced-ethereum-node)
 ### Building the project
 Download the codebase to your local `GOPATH` via:
@@ -68,7 +73,8 @@ It can be additionally helpful to add `$GOPATH/bin` to your shell's `$PATH`.
 * See below for configuring additional environments
-In some cases (such as recent Ubuntu systems), it may be necessary to overcome failures of password authentication from localhost. To allow access on Ubuntu, set localhost connections via hostname, ipv4, and ipv6 from peer/md5 to trust in: /etc/postgresql/<version>/pg_hba.conf
+In some cases (such as recent Ubuntu systems), it may be necessary to overcome failures of password authentication from
+localhost. To allow access on Ubuntu, set localhost connections via hostname, ipv4, and ipv6 from peer/md5 to trust in: /etc/postgresql/<version>/pg_hba.conf
 (It should be noted that trusted auth should only be enabled on systems without sensitive data in them: development and local test databases)
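For illustration, the relevant localhost entries in `/etc/postgresql/<version>/pg_hba.conf` might look like this after the change (a sketch for a development machine only; the surrounding entries in your file may differ):

```
# TYPE  DATABASE  USER  ADDRESS       METHOD
local   all       all                 trust
host    all       all   127.0.0.1/32  trust
host    all       all   ::1/128       trust
```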
@@ -80,8 +86,9 @@ In some cases (such as recent Ubuntu systems), it may be necessary to overcome f
 - The IPC file is called `geth.ipc`.
 - The geth IPC file path is printed to the console when you start geth.
 - The default location is:
-  - Mac: `<full home path>/Library/Ethereum`
+  - Mac: `<full home path>/Library/Ethereum/geth.ipc`
   - Linux: `<full home path>/ethereum/geth.ipc`
+  - Note: the geth.ipc file may not exist until you've started the geth process
 - For Parity:
   - The IPC file is called `jsonrpc.ipc`.
@@ -98,27 +105,35 @@ In some cases (such as recent Ubuntu systems), it may be necessary to overcome f
 ## Usage
-Usage is broken up into two processes:
+As mentioned above, VulcanizeDB's processes can be split into three categories: syncing, transforming and exposing data.
 ### Data syncing
-To provide data for transformations, raw Ethereum data must first be synced into vDB.
-This is accomplished through the use of the `headerSync`, `sync`, or `coldImport` commands.
-These commands are described in detail [here](../staging/documentation/sync.md).
+To provide data for transformations, raw Ethereum data must first be synced into VulcanizeDB.
+This is accomplished through the use of the `headerSync`, `fullSync`, or `coldImport` commands.
+These commands are described in detail [here](documentation/data-syncing.md).
 ### Data transformation
-Contract watchers use the raw data that has been synced into Postgres to filter out and apply transformations to specific data of interest.
-There is a built-in `contractWatcher` command which provides generic transformation of most contract data.
-The `contractWatcher` command is described further [here](../staging/documentation/contractWatcher.md).
-In many cases a custom transformer or set of transformers will need to be written to provide complete or more comprehensive coverage or to optimize other aspects of the output for a specific end-use.
-In this case we have provided the `compose`, `execute`, and `composeAndExecute` commands for running custom transformers from external repositories.
-Usage of the `compose`, `execute`, and `composeAndExecute` commands is described further [here](../staging/documentation/composeAndExecute.md).
-Documentation on how to build custom transformers to work with these commands can be found [here](../staging/documentation/transformers.md).
+Data transformation uses the raw data that has been synced into Postgres to filter out and apply transformations to
+specific data of interest. Since there are different types of data that may be useful for observing smart contracts, it
+follows that there are different ways to transform this data. We've started by categorizing this into Generic and
+Custom transformers:
+- Generic Contract Transformer: Generic contract transformation can be done using a built-in command,
+`contractWatcher`, which transforms contract events provided the contract's ABI is available. It also
+provides some state variable coverage by automating polling of public methods, with some restrictions.
+`contractWatcher` is described further [here](documentation/generic-transformer.md).
+- Custom Transformers: In many cases custom transformers will need to be written to provide
+more comprehensive coverage of contract data. In this case we have provided the `compose`, `execute`, and
+`composeAndExecute` commands for running custom transformers from external repositories. Documentation on how to write,
+build and run custom transformers as Go plugins can be found [here](documentation/custom-transformers.md).
+### Exposing the data
+[Postgraphile](https://www.graphile.org/postgraphile/) is used to expose GraphQL endpoints for our database schemas; this is described in detail [here](documentation/postgraphile.md).
-## Tests
+### Tests
 - Replace the empty `ipcPath` in the `environments/infura.toml` with a path to a full node's eth_jsonrpc endpoint (e.g. local geth node ipc path or infura url)
   - Note: integration tests require configuration with an archival node
 - `createdb vulcanize_private` will create the test db
@@ -126,15 +141,13 @@ Documentation on how to build custom transformers to work with these commands ca
 - `make test` will run the unit tests and skip the integration tests
 - `make integrationtest` will run just the integration tests
-## API
-[Postgraphile](https://www.graphile.org/postgraphile/) is used to expose GraphQL endpoints for our database schemas, this is described in detail [here](../staging/documentation/postgraphile.md).
 ## Contributing
-Contributions are welcome! For more on this, please see [here](../staging/documentation/contributing.md).
-Small note: If editing the Readme, please conform to the [standard-readme specification](https://github.com/RichardLitt/standard-readme).
+Contributions are welcome!
+VulcanizeDB follows the [Contributor Covenant Code of Conduct](https://www.contributor-covenant.org/version/1/4/code-of-conduct).
+For more information on contributing, please see [here](documentation/contributing.md).
 ## License
-[AGPL-3.0](../staging/LICENSE) © Vulcanize Inc
+[AGPL-3.0](LICENSE) © Vulcanize Inc

View File

@@ -26,7 +26,7 @@ import (
 	st "github.com/vulcanize/vulcanizedb/libraries/shared/transformer"
 	ft "github.com/vulcanize/vulcanizedb/pkg/contract_watcher/full/transformer"
-	lt "github.com/vulcanize/vulcanizedb/pkg/contract_watcher/header/transformer"
+	ht "github.com/vulcanize/vulcanizedb/pkg/contract_watcher/header/transformer"
 	"github.com/vulcanize/vulcanizedb/utils"
 )
@@ -99,7 +99,7 @@ func contractWatcher() {
 	con.PrepConfig()
 	switch mode {
 	case "header":
-		t = lt.NewTransformer(con, blockChain, &db)
+		t = ht.NewTransformer(con, blockChain, &db)
 	case "full":
 		t = ft.NewTransformer(con, blockChain, &db)
 	default:

View File

@@ -29,14 +29,14 @@ import (
 	"github.com/vulcanize/vulcanizedb/utils"
 )
-// syncCmd represents the sync command
-var syncCmd = &cobra.Command{
-	Use:   "sync",
+// fullSyncCmd represents the fullSync command
+var fullSyncCmd = &cobra.Command{
+	Use:   "fullSync",
 	Short: "Syncs VulcanizeDB with local ethereum node",
 	Long: `Syncs VulcanizeDB with local ethereum node. Populates
 Postgres with blocks, transactions, receipts, and logs.
-./vulcanizedb sync --starting-block-number 0 --config public.toml
+./vulcanizedb fullSync --starting-block-number 0 --config public.toml
 Expects ethereum node to be running and requires a .toml config:
@@ -49,14 +49,14 @@ Expects ethereum node to be running and requires a .toml config:
   ipcPath = "/Users/user/Library/Ethereum/geth.ipc"
 `,
 	Run: func(cmd *cobra.Command, args []string) {
-		sync()
+		fullSync()
 	},
 }
 func init() {
-	rootCmd.AddCommand(syncCmd)
-	syncCmd.Flags().Int64VarP(&startingBlockNumber, "starting-block-number", "s", 0, "Block number to start syncing from")
+	rootCmd.AddCommand(fullSyncCmd)
+	fullSyncCmd.Flags().Int64VarP(&startingBlockNumber, "starting-block-number", "s", 0, "Block number to start syncing from")
 }
 func backFillAllBlocks(blockchain core.BlockChain, blockRepository datastore.BlockRepository, missingBlocksPopulated chan int, startingBlockNumber int64) {
@@ -67,20 +67,20 @@ func backFillAllBlocks(blockchain core.BlockChain, blockRepository datastore.Blo
 	missingBlocksPopulated <- populated
 }
-func sync() {
+func fullSync() {
 	ticker := time.NewTicker(pollingInterval)
 	defer ticker.Stop()
 	blockChain := getBlockChain()
 	lastBlock, err := blockChain.LastBlock()
 	if err != nil {
-		log.Error("sync: Error getting last block: ", err)
+		log.Error("fullSync: Error getting last block: ", err)
 	}
 	if lastBlock.Int64() == 0 {
 		log.Fatal("geth initial: state sync not finished")
 	}
 	if startingBlockNumber > lastBlock.Int64() {
-		log.Fatal("sync: starting block number > current block number")
+		log.Fatal("fullSync: starting block number > current block number")
 	}
 	db := utils.LoadPostgres(databaseConfig, blockChain.Node())
@@ -94,7 +94,7 @@ func sync() {
 	case <-ticker.C:
 		window, err := validator.ValidateBlocks()
 		if err != nil {
-			log.Error("sync: error in validateBlocks: ", err)
+			log.Error("fullSync: error in validateBlocks: ", err)
 		}
 		log.Info(window.GetString())
 	case <-missingBlocksPopulated:

View File

@@ -1,11 +1,25 @@
 # Contribution guidelines
-Contributions are welcome! In addition to core contributions, developers are encouraged to build their own custom transformers which
+Contributions are welcome! Please open an Issue or Pull Request for any changes.
+In addition to core contributions, developers are encouraged to build their own custom transformers which
 can be run together with other custom transformers using the [composeAndExecute](../../staging/documentation/composeAndExecute.md) command.
+## Pull Requests
+- `go fmt` is run as part of `make test` and `make integrationtest`, please make sure to check in the format changes.
+- Ensure that new code is well tested, including integration testing if applicable.
+- Make sure the build is passing.
+- Update the README or any [documentation files](./) as necessary. If editing the Readme, please conform to the [standard-readme specification](https://github.com/RichardLitt/standard-readme).
+- Once a Pull Request has received two approvals it can be merged in by a core developer.
 ## Creating a new migration file
 1. `make new_migration NAME=add_columnA_to_table1`
     - This will create a new timestamped migration file in `db/migrations`
 1. Write the migration code in the created file, under the respective `goose` pragma
     - Goose automatically runs each migration in a transaction; don't add `BEGIN` and `COMMIT` statements.
-1. Core migrations should be committed in their `goose fix`ed form.
+1. Core migrations should be committed in their `goose fix`ed form. To do this, run `make version_migrations` which
+converts timestamped migrations to migrations versioned by an incremented integer.
+VulcanizeDB follows the [Contributor Covenant Code of Conduct](https://www.contributor-covenant.org/version/1/4/code-of-conduct).
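For reference, a migration file created by `make new_migration` follows goose's Up/Down pragma layout. A minimal sketch (the table and column names here are hypothetical):

```
-- +goose Up
ALTER TABLE table1
    ADD COLUMN column_a TEXT;

-- +goose Down
ALTER TABLE table1
    DROP COLUMN column_a;
```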

View File

@@ -1,44 +1,80 @@
-# composeAndExecute
-The `composeAndExecute` command is used to compose and execute over an arbitrary set of custom transformers.
-This is accomplished by generating a Go pluggin which allows the `vulcanizedb` binary to link to external transformers, so
-long as they abide by one of the standard [interfaces](../staging/libraries/shared/transformer).
-Additionally, there are separate `compose` and `execute` commands to allow pre-building and linking to a pre-built .so file.
+# Custom Transformers
+When the capabilities of the generic `contractWatcher` are not sufficient, custom transformers tailored to a specific
+purpose can be leveraged.
+Individual custom transformers can be composed together from any number of external repositories and executed as a
+single process using the `compose` and `execute` commands or the `composeAndExecute` command. This is accomplished by
+generating a Go plugin which allows the `vulcanizedb` binary to link to the external transformers, so long as they
+abide by one of the standard [interfaces](../staging/libraries/shared/transformer).
-**NOTE:**
-1. It is necessary that the .so file was built with the same exact dependencies that are present in the execution environment,
-i.e. we need to `compose` and `execute` the plugin .so file with the same exact version of vulcanizeDB.
-1. The plugin migrations are run during the plugin's composition. As such, if `execute` is used to run a prebuilt .so in a different
-environment than the one it was composed in then the migrations for that plugin will first need to be manually ran against that environment's Postgres database.
-These commands require Go 1.11+ and use [Go plugins](https://golang.org/pkg/plugin/) which only work on Unix-based systems.
-There is also an ongoing [conflict](https://github.com/golang/go/issues/20481) between Go plugins and the use vendored dependencies which
-imposes certain limitations on how the plugins are built.
-## Commands
-The `compose` and `composeAndExecute` commands assume you are in the vulcanizdb directory located at your system's `$GOPATH`,
-and that all of the transformer repositories for building the plugin are present at their `$GOPATH` directories.
-The `execute` command does not require the plugin transformer dependencies be located in their
-`$GOPATH` directories, instead it expects a prebuilt .so file (of the name specified in the config file)
-to be in `$GOPATH/src/github.com/vulcanize/vulcanizedb/plugins/` and, as noted above, also expects the plugin
-db migrations to have already been ran against the database.
-compose:
-`./vulcanizedb compose --config=./environments/config_name.toml`
-execute:
-`./vulcanizedb execute --config=./environments/config_name.toml`
-composeAndExecute:
-`./vulcanizedb composeAndExecute --config=./environments/config_name.toml`
-## Flags
-The `compose` and `composeAndExecute` commands can be passed optional flags to specify the operation of the watchers:
+## Writing custom transformers
+For help with writing different types of custom transformers please see below:
+Storage Transformers: transform data derived from contract storage tries
+* [Guide](../../staging/libraries/shared/factories/storage/README.md)
+* [Example](../../staging/libraries/shared/factories/storage/EXAMPLE.md)
+Event Transformers: transform data derived from Ethereum log events
+* [Guide](../../staging/libraries/shared/factories/event/README.md)
+* [Example 1](https://github.com/vulcanize/ens_transformers/tree/master/transformers/registar)
+* [Example 2](https://github.com/vulcanize/ens_transformers/tree/master/transformers/registry)
+* [Example 3](https://github.com/vulcanize/ens_transformers/tree/master/transformers/resolver)
+Contract Transformers: transform data derived from Ethereum log events and use it to poll public contract methods
+* [Example 1](https://github.com/vulcanize/account_transformers)
+* [Example 2](https://github.com/vulcanize/ens_transformers/tree/master/transformers/domain_records)
+## Preparing custom transformers to work as part of a plugin
+To plug in an external transformer we need to:
+1. Create a package that exports a variable `TransformerInitializer`, `StorageTransformerInitializer`, or `ContractTransformerInitializer` that is of type [TransformerInitializer](../staging/libraries/shared/transformer/event_transformer.go#L33)
+or [StorageTransformerInitializer](../../staging/libraries/shared/transformer/storage_transformer.go#L31),
+or [ContractTransformerInitializer](../../staging/libraries/shared/transformer/contract_transformer.go#L31), respectively
+2. Design the transformers to work in the context of their [event](../staging/libraries/shared/watcher/event_watcher.go#L83),
+[storage](../../staging/libraries/shared/watcher/storage_watcher.go#L53),
+or [contract](../../staging/libraries/shared/watcher/contract_watcher.go#L68) watcher execution modes
+3. Create db migrations to run against vulcanizeDB so that we can store the transformer output
+* Do not `goose fix` the transformer migrations; this is to ensure they are always run after the core vulcanizedb migrations, which are kept in their fixed form
+* Specify migration locations for each transformer in the config with the `exporter.transformer.migrations` fields
+* If the base vDB migrations occupy this path as well, they need to be in their `goose fix`ed form
+as they are [here](../../staging/db/migrations)
+To update a plugin repository with changes to the core vulcanizedb repository, run `dep ensure` to update its dependencies.
+## Building and Running Custom Transformers
+### Commands
+* The `compose`, `execute`, `composeAndExecute` commands require Go 1.11+ and use [Go plugins](https://golang.org/pkg/plugin/) which only work on Unix-based systems.
+* There is an ongoing [conflict](https://github.com/golang/go/issues/20481) between Go plugins and the use of vendored
+dependencies which imposes certain limitations on how the plugins are built.
+* Separate `compose` and `execute` commands allow pre-building and linking to the pre-built .so file. So, if
+these are run independently, instead of using `composeAndExecute`, a couple of things need to be considered:
+  * It is necessary that the .so file was built with the same exact dependencies that are present in the execution
+environment, i.e. we need to `compose` and `execute` the plugin .so file with the same exact version of vulcanizeDB.
+  * The plugin migrations are run during the plugin's composition. As such, if `execute` is used to run a prebuilt .so
+in a different environment than the one it was composed in, then the database structure will need to be loaded
+into the environment's Postgres database. This can either be done by manually loading the plugin's schema into
+Postgres, or by manually running the plugin's migrations.
+* The `compose` and `composeAndExecute` commands assume you are in the vulcanizedb directory located at your system's
+`$GOPATH`, and that the plugin dependencies are present at their `$GOPATH` directories.
+* The `execute` command does not require the plugin transformer dependencies be located in their `$GOPATH` directories;
+instead it expects a .so file (of the name specified in the config file) to be in
+`$GOPATH/src/github.com/vulcanize/vulcanizedb/plugins/` and, as noted above, also expects the plugin db migrations to
+have already been run against the database.
+* Usage:
+  * compose: `./vulcanizedb compose --config=environments/config_name.toml`
+  * execute: `./vulcanizedb execute --config=environments/config_name.toml`
+  * composeAndExecute: `./vulcanizedb composeAndExecute --config=environments/config_name.toml`
+### Flags
+The `execute` and `composeAndExecute` commands can be passed optional flags to specify the operation of the watchers:
 - `--recheck-headers`/`-r` - specifies whether to re-check headers for events after the header has already been queried for watched logs.
 Can be useful for redundancy if you suspect that your node is not always returning all desired logs on every query.
@@ -50,7 +86,7 @@ Defaults to `false`.
 Argument is expected to be a duration (integer measured in nanoseconds): e.g. `-q=10m30s` (for 10 minute, 30 second intervals).
 Defaults to `5m` (5 minutes).
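For instance, a run that enables header re-checking and a ten-and-a-half-minute polling interval might be invoked as follows (flag values here are purely illustrative):

```
./vulcanizedb composeAndExecute --config=environments/config_name.toml -r=true -q=10m30s
```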
-## Configuration
+### Configuration
 A .toml config file is specified when executing the commands.
 The config provides information for composing a set of transformers from external repositories:
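As a concrete sketch of the plugin-export step described above, an external transformer package might look roughly like the following. The `func(db *postgres.DB)` shape of the initializer is an assumption based on the linked `event_transformer.go`, and `myTransformer` is a hypothetical implementation whose interface methods are omitted:

```
// Package exporter is built by `compose` into a Go plugin (.so file);
// vulcanizedb looks up the exported TransformerInitializer symbol at load time.
package exporter

import (
	"github.com/vulcanize/vulcanizedb/libraries/shared/transformer"
	"github.com/vulcanize/vulcanizedb/pkg/datastore/postgres"
)

// myTransformer is a hypothetical type satisfying the event transformer
// interface; its Init/Execute methods are omitted from this sketch.
type myTransformer struct {
	db *postgres.DB
}

// TransformerInitializer hands vulcanizedb a constructor for the transformer;
// the signature is assumed from the interface linked in step 1 above.
var TransformerInitializer transformer.TransformerInitializer = func(db *postgres.DB) transformer.EventTransformer {
	return &myTransformer{db: db}
}
```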

View File

@@ -0,0 +1,81 @@
# Syncing commands
These commands are used to sync raw Ethereum data into Postgres, with varying levels of data granularity.
## headerSync
Syncs block headers from a running Ethereum node into the VulcanizeDB table `headers`.
- Queries the Ethereum node using RPC calls.
- Validates headers from the last 15 blocks to ensure that data is up to date.
- Useful when you want a minimal baseline from which to track targeted data on the blockchain (e.g. individual smart
contract storage values or event logs).
- Handles chain reorgs by [validating the most recent blocks' hashes](../pkg/history/header_validator.go). If the hash is
different from what we have already stored in the database, the header record will be updated.
#### Usage
- Run: `./vulcanizedb headerSync --config <config.toml> --starting-block-number <block-number>`
- The config file must be formatted as follows, and should contain an ipc path to a running Ethereum node:
```toml
[database]
name = "vulcanize_public"
hostname = "localhost"
user = "vulcanize"
password = "vulcanize"
port = 5432
[client]
ipcPath = <path to a running Ethereum node>
```
- Alternatively, the ipc path can be passed as a flag instead: `--client-ipcPath`.
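For example (the ipc path here is illustrative):

```
./vulcanizedb headerSync --config <config.toml> --starting-block-number 0 --client-ipcPath /Users/user/Library/Ethereum/geth.ipc
```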
## fullSync
Syncs blocks, transactions, receipts and logs from a running Ethereum node into VulcanizeDB tables named
`blocks`, `uncles`, `full_sync_transactions`, `full_sync_receipts` and `logs`.
- Queries the Ethereum node using RPC calls.
- Validates headers from the last 15 blocks to ensure that data is up to date.
- Useful when you want to maintain a broad cache of what's happening on the blockchain.
- Handles chain reorgs by [validating the most recent blocks' hashes](../pkg/history/header_validator.go). If the hash is
different from what we have already stored in the database, the header record will be updated.
#### Usage
- Run `./vulcanizedb fullSync --config <config.toml> --starting-block-number <block-number>`
- The config file must be formatted as follows, and should contain an ipc path to a running Ethereum node:
```toml
[database]
name = "vulcanize_public"
hostname = "localhost"
user = "vulcanize"
password = "vulcanize"
port = 5432
[client]
ipcPath = <path to a running Ethereum node>
```
- Alternatively, the ipc path can be passed as a flag instead: `--client-ipcPath`.
*Please note that if you are fast syncing your Ethereum node, you should wait for the initial sync to finish.*
## coldImport
Syncs VulcanizeDB from Geth's underlying LevelDB datastore and persists Ethereum blocks,
transactions, receipts and logs into VulcanizeDB tables named `blocks`, `uncles`,
`full_sync_transactions`, `full_sync_receipts` and `logs` respectively.
#### Usage
1. Ensure the Ethereum node you're pointing at is not running, and that it has synced to the desired block height.
1. Run `./vulcanizedb coldImport --config <config.toml>`
1. Optional flags:
- `--starting-block-number <block number>`/`-s <block number>`: block number to start syncing from
- `--ending-block-number <block number>`/`-e <block number>`: block number to sync to
- `--all`/`-a`: sync all missing blocks
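For instance, a cold import of a fixed block range might be invoked as follows (block numbers are illustrative; see the flags above):

```
./vulcanizedb coldImport --config <config.toml> -s 0 -e 1000000
```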
The config file can be formatted as follows, and must contain the LevelDB path.
```toml
[database]
name = "vulcanize_public"
hostname = "localhost"
user = "vulcanize"
password = "vulcanize"
port = 5432
[client]
leveldbpath = "/Users/user/Library/Ethereum/geth/chaindata"
```

Binary file not shown.

After

Width:  |  Height:  |  Size: 52 KiB

View File

@@ -1,4 +1,4 @@
-# contractWatcher
+# Generic Transformer
 The `contractWatcher` command is a built-in generic contract watcher. It can watch any and all events for a given contract provided the contract's ABI is available.
 It also provides some state variable coverage by automating polling of public methods, with some restrictions:
 1. The method must have 2 or less arguments

View File

@@ -9,13 +9,15 @@ As of April 30, 2019, you can run Postgraphile pointed at the default `vulcanize
 ```
 npm install -g postgraphile
-postgraphile --connection postgres://localhost/vulcanize_public --schema=public,custom --disable-default-mutations
+postgraphile --connection postgres://localhost/vulcanize_public --schema=public,custom --disable-default-mutations --no-ignore-rbac
 ```
 Arguments:
-- `--connection` specifies the database. The above command connects to the default `vulcanize_public` database defined in [the example config](../staging/environments/public.toml.example).
+- `--connection` specifies the database. The above command connects to the default `vulcanize_public` database
+defined in [the example config](../environments/public.toml.example).
 - `--schema` defines what schema(s) to expose. The above exposes the `public` schema (for core VulcanizeDB data) as well as a `custom` schema (where `custom` is the name of a schema defined in executed transformers).
 - `--disable-default-mutations` prevents Postgraphile from exposing create, update, and delete operations on your data, which are otherwise enabled by default.
+- `--no-ignore-rbac` ensures that Postgraphile will only expose the tables, columns, fields, and query functions that the user has explicit access to.
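Once running, Postgraphile serves GraphQL at its default endpoint, `http://localhost:5000/graphql`. A sketch of querying it with curl; the `allHeaders` connection is a hypothetical name, assuming the core `headers` table is exposed under Postgraphile's default inflection:

```
curl -X POST http://localhost:5000/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ allHeaders(first: 5) { nodes { blockNumber hash } } }"}'
```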
 ## Customizing Postgraphile

View File

@@ -0,0 +1,27 @@
# Repository Maintenance
## Diagrams
- Diagrams were created with [draw.io](draw.io).
- To update a diagram:
1. Go to [draw.io](draw.io).
1. Click on *File > Open from* and choose the location of the diagram you want to update.
1. Once open in draw.io, you may update it.
    1. Export the diagram to this repository's directory, then add and commit it.
## Generating the Changelog
We use [github-changelog-generator](https://github.com/github-changelog-generator/github-changelog-generator) to
generate release Changelogs. To be consistent with previous Changelogs, the following flags should be passed to the
command:
```
--user vulcanize
--project vulcanizedb
--token {YOUR_GITHUB_TOKEN}
--no-issues
--usernames-as-github-logins
--since-tag {PREVIOUS_RELEASE_TAG}
```
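Assembled into a single invocation (assuming the generator is installed as the `github_changelog_generator` gem binary), this looks like:

```
github_changelog_generator --user vulcanize --project vulcanizedb --token {YOUR_GITHUB_TOKEN} --no-issues --usernames-as-github-logins --since-tag {PREVIOUS_RELEASE_TAG}
```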
More information on why your GitHub token is needed, and how to generate it, can be found here: https://github.com/github-changelog-generator/github-changelog-generator#github-token

View File

@@ -1,26 +0,0 @@
# Syncing commands
These commands are used to sync raw Ethereum data into Postgres.
## headerSync
Syncs VulcanizeDB with the configured Ethereum node, populating only block headers.
This command is useful when you want a minimal baseline from which to track targeted data on the blockchain (e.g. individual smart contract storage values or event logs).
1. Start Ethereum node
1. In a separate terminal start VulcanizeDB:
- `./vulcanizedb headerSync --config <config.toml> --starting-block-number <block-number>`
## sync
Syncs VulcanizeDB with the configured Ethereum node, populating blocks, transactions, receipts, and logs.
This command is useful when you want to maintain a broad cache of what's happening on the blockchain.
1. Start Ethereum node (**if fast syncing your Ethereum node, wait for initial sync to finish**)
1. In a separate terminal start VulcanizeDB:
- `./vulcanizedb sync --config <config.toml> --starting-block-number <block-number>`
## coldImport
Sync VulcanizeDB from the LevelDB underlying a Geth node.
1. Assure node is not running, and that it has synced to the desired block height.
1. Start vulcanize_db
- `./vulcanizedb coldImport --config <config.toml>`
1. Optional flags:
- `--starting-block-number <block number>`/`-s <block number>`: block number to start syncing from
- `--ending-block-number <block number>`/`-e <block number>`: block number to sync to
- `--all`/`-a`: sync all missing blocks

View File

@@ -1,40 +0,0 @@
# Custom transformers
When the capabilities of the generic `contractWatcher` are not sufficient, custom transformers tailored to a specific
purpose can be leveraged.
Individual transformers can be composed together from any number of external repositories and executed as a single process using
the `compose` and `execute` commands or the `composeAndExecute` command.
## Writing custom transformers
For help with writing different types of custom transformers for the `composeAndExecute` set of commands, please see the below:
Storage Transformers
* [Guide](../../staging/libraries/shared/factories/storage/README.md)
* [Example](../../staging/libraries/shared/factories/storage/EXAMPLE.md)
Event Transformers
* [Guide](../../staging/libraries/shared/factories/event/README.md)
* [Example 1](https://github.com/vulcanize/ens_transformers/tree/master/transformers/registar)
* [Example 2](https://github.com/vulcanize/ens_transformers/tree/master/transformers/registry)
* [Example 3](https://github.com/vulcanize/ens_transformers/tree/master/transformers/resolver)
Contract Transformers
* [Example 1](https://github.com/vulcanize/account_transformers)
* [Example 2](https://github.com/vulcanize/ens_transformers/tree/master/transformers/domain_records)
## Preparing custom transformers to work as part of a plugin
To plug in an external transformer we need to:
1. Create a package that exports a variable `TransformerInitializer`, `StorageTransformerInitializer`, or `ContractTransformerInitializer` that are of type [TransformerInitializer](../staging/libraries/shared/transformer/event_transformer.go#L33)
or [StorageTransformerInitializer](../../staging/libraries/shared/transformer/storage_transformer.go#L31),
or [ContractTransformerInitializer](../../staging/libraries/shared/transformer/contract_transformer.go#L31), respectively
2. Design the transformers to work in the context of their [event](../staging/libraries/shared/watcher/event_watcher.go#L83),
[storage](../../staging/libraries/shared/watcher/storage_watcher.go#L53),
or [contract](../../staging/libraries/shared/watcher/contract_watcher.go#L68) watcher execution modes
3. Create db migrations to run against vulcanizeDB so that we can store the transformer output
* Do not `goose fix` the transformer migrations, this is to ensure they are always ran after the core vulcanizedb migrations which are kept in their fixed form
* Specify migration locations for each transformer in the config with the `exporter.transformer.migrations` fields
* If the base vDB migrations occupy this path as well, they need to be in their `goose fix`ed form
as they are [here](../../staging/db/migrations)
To update a plugin repository with changes to the core vulcanizedb repository, run `dep ensure` to update its dependencies.

View File

@@ -164,7 +164,7 @@ func (tr *Transformer) Init() error {
 // Iterates through stored, initialized contract objects
 // Iterates through contract's event filters, grabbing watched event logs
 // Uses converter to convert logs into custom log type
-// Persists converted logs into custuom postgres tables
+// Persists converted logs into custom postgres tables
 // Calls selected methods, using token holder address generated during event log conversion
 func (tr *Transformer) Execute() error {
 	if len(tr.Contracts) == 0 {

View File

@@ -30,7 +30,7 @@ type BlockChain interface {
 	GetBlockByNumber(blockNumber int64) (Block, error)
 	GetEthLogsWithCustomQuery(query ethereum.FilterQuery) ([]types.Log, error)
 	GetHeaderByNumber(blockNumber int64) (Header, error)
-	GetHeaderByNumbers(blockNumbers []int64) ([]Header, error)
+	GetHeadersByNumbers(blockNumbers []int64) ([]Header, error)
 	GetLogs(contract Contract, startingBlockNumber *big.Int, endingBlockNumber *big.Int) ([]Log, error)
 	GetTransactions(transactionHashes []common.Hash) ([]TransactionModel, error)
 	LastBlock() (*big.Int, error)

View File

@@ -98,7 +98,7 @@ func (chain *MockBlockChain) GetHeaderByNumber(blockNumber int64) (core.Header,
 	return core.Header{BlockNumber: blockNumber}, nil
 }
-func (chain *MockBlockChain) GetHeaderByNumbers(blockNumbers []int64) ([]core.Header, error) {
+func (chain *MockBlockChain) GetHeadersByNumbers(blockNumbers []int64) ([]core.Header, error) {
 	var headers []core.Header
 	for _, blockNumber := range blockNumbers {
 		var header = core.Header{BlockNumber: int64(blockNumber)}

View File

@@ -79,7 +79,7 @@ func (blockChain *BlockChain) GetHeaderByNumber(blockNumber int64) (header core.
 	return blockChain.getPOWHeader(blockNumber)
 }
-func (blockChain *BlockChain) GetHeaderByNumbers(blockNumbers []int64) (header []core.Header, err error) {
+func (blockChain *BlockChain) GetHeadersByNumbers(blockNumbers []int64) (header []core.Header, err error) {
 	if blockChain.node.NetworkID == core.KOVAN_NETWORK_ID {
 		return blockChain.getPOAHeaders(blockNumbers)
 	}

View File

@@ -93,7 +93,7 @@ var _ = Describe("Geth blockchain", func() {
 	})
 	It("fetches headers with multiple blocks", func() {
-		_, err := blockChain.GetHeaderByNumbers([]int64{100, 99})
+		_, err := blockChain.GetHeadersByNumbers([]int64{100, 99})
 		Expect(err).NotTo(HaveOccurred())
 		mockRpcClient.AssertBatchCalledWith("eth_getBlockByNumber", 2)
@@ -139,7 +139,7 @@ var _ = Describe("Geth blockchain", func() {
 		blockNumber := hexutil.Big(*big.NewInt(100))
 		mockRpcClient.SetReturnPOAHeaders([]vulcCore.POAHeader{{Number: &blockNumber}})
-		_, err := blockChain.GetHeaderByNumbers([]int64{100, 99})
+		_, err := blockChain.GetHeadersByNumbers([]int64{100, 99})
 		Expect(err).NotTo(HaveOccurred())
 		mockRpcClient.AssertBatchCalledWith("eth_getBlockByNumber", 2)

View File

@@ -24,14 +24,14 @@ import (
 	"github.com/vulcanize/vulcanizedb/pkg/datastore/postgres/repositories"
 )
-func PopulateMissingHeaders(blockchain core.BlockChain, headerRepository datastore.HeaderRepository, startingBlockNumber int64) (int, error) {
-	lastBlock, err := blockchain.LastBlock()
+func PopulateMissingHeaders(blockChain core.BlockChain, headerRepository datastore.HeaderRepository, startingBlockNumber int64) (int, error) {
+	lastBlock, err := blockChain.LastBlock()
 	if err != nil {
 		log.Error("PopulateMissingHeaders: Error getting last block: ", err)
 		return 0, err
 	}
-	blockNumbers, err := headerRepository.MissingBlockNumbers(startingBlockNumber, lastBlock.Int64(), blockchain.Node().ID)
+	blockNumbers, err := headerRepository.MissingBlockNumbers(startingBlockNumber, lastBlock.Int64(), blockChain.Node().ID)
 	if err != nil {
 		log.Error("PopulateMissingHeaders: Error getting missing block numbers: ", err)
 		return 0, err
@@ -40,7 +40,7 @@ func PopulateMissingHeaders(blockchain core.BlockChain, headerRepository datasto
 	}
 	log.Printf("Backfilling %d blocks\n\n", len(blockNumbers))
-	_, err = RetrieveAndUpdateHeaders(blockchain, headerRepository, blockNumbers)
+	_, err = RetrieveAndUpdateHeaders(blockChain, headerRepository, blockNumbers)
 	if err != nil {
 		log.Error("PopulateMissingHeaders: Error getting/updating headers:", err)
 		return 0, err
@@ -48,8 +48,8 @@ func PopulateMissingHeaders(blockchain core.BlockChain, headerRepository datasto
 	return len(blockNumbers), nil
 }
-func RetrieveAndUpdateHeaders(chain core.BlockChain, headerRepository datastore.HeaderRepository, blockNumbers []int64) (int, error) {
-	headers, err := chain.GetHeaderByNumbers(blockNumbers)
+func RetrieveAndUpdateHeaders(blockChain core.BlockChain, headerRepository datastore.HeaderRepository, blockNumbers []int64) (int, error) {
+	headers, err := blockChain.GetHeadersByNumbers(blockNumbers)
 	for _, header := range headers {
 		_, err = headerRepository.CreateOrUpdateHeader(header)
 		if err != nil {