Various README updates

This commit is contained in:
Elizabeth Engelman 2019-05-07 16:21:08 -05:00
parent c5cfa85f4c
commit b96f6c4d31
3 changed files with 38 additions and 19 deletions


@ -22,6 +22,10 @@ The same data structures and encodings that make Ethereum an effective and trust
complicate data accessibility and usability for dApp developers. VulcanizeDB improves Ethereum data accessibility by
providing a suite of tools to ease the extraction and transformation of data into a more useful state.
VulcanizeDB includes processes that sync, transform and expose data. Syncing involves
querying an Ethereum node and then persisting core data into a Postgres database. Transforming focuses on using previously synced data to
query for and transform log event and storage data for specifically configured smart contract addresses. Exposing data is a matter of getting
data from VulcanizeDB's underlying Postgres database and making it accessible.
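The sync commands described below read their node and database connection details from a TOML config file. As a minimal sketch (the exact keys and values here are illustrative assumptions modeled on the `environments/infura.toml` example mentioned under Tests, not a verbatim copy from this repo):

```toml
# Hypothetical minimal config — adjust names/paths for your setup.
[database]
name     = "vulcanize_public"  # target Postgres database
hostname = "localhost"
port     = 5432

[client]
ipcPath = "/path/to/geth.ipc"  # or an https endpoint such as an Infura URL
```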
## Dependencies
- Go 1.11+
@ -80,8 +84,9 @@ In some cases (such as recent Ubuntu systems), it may be necessary to overcome f
- The IPC file is called `geth.ipc`.
- The geth IPC file path is printed to the console when you start geth.
- The default location is:
- Mac: `<full home path>/Library/Ethereum/geth.ipc`
- Linux: `<full home path>/ethereum/geth.ipc`
- Note: the geth.ipc file may not exist until you've started the geth process
- For Parity:
- The IPC file is called `jsonrpc.ipc`.
@ -98,10 +103,10 @@ In some cases (such as recent Ubuntu systems), it may be necessary to overcome f
## Usage
As mentioned above, VulcanizeDB's processes can be split into three categories: syncing, transforming and exposing data.
### Data syncing
To provide data for transformations, raw Ethereum data must first be synced into VulcanizeDB.
This is accomplished through the use of the `headerSync`, `sync`, or `coldImport` commands.
These commands are described in detail [here](../staging/documentation/sync.md).
@ -118,6 +123,10 @@ Usage of the `compose`, `execute`, and `composeAndExecute` commands is described
Documentation on how to build custom transformers to work with these commands can be found [here](../staging/documentation/transformers.md).
### Exposing the data
[Postgraphile](https://www.graphile.org/postgraphile/) is used to expose GraphQL endpoints for our database schemas; this is described in detail [here](../staging/documentation/postgraphile.md).
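Postgraphile derives a GraphQL schema automatically from the Postgres tables. Assuming a synced `headers` table, a query could look roughly like the following (the `allHeaders`, `blockNumber` and `hash` names are assumptions based on Postgraphile's default naming conventions, not taken from this repo):

```graphql
# Hypothetical query — actual fields depend on the synced schema.
{
  allHeaders(last: 5) {
    nodes {
      blockNumber
      hash
    }
  }
}
```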
## Tests
- Replace the empty `ipcPath` in the `environments/infura.toml` with a path to a full node's eth_jsonrpc endpoint (e.g. local geth node ipc path or infura url)
- Note: integration tests require configuration with an archival node
@ -126,9 +135,6 @@ Documentation on how to build custom transformers to work with these commands ca
- `make test` will run the unit tests and skip the integration tests
- `make integrationtest` will run just the integration tests
## API
[Postgraphile](https://www.graphile.org/postgraphile/) is used to expose GraphQL endpoints for our database schemas, this is described in detail [here](../staging/documentation/postgraphile.md).
## Contributing
Contributions are welcome! For more on this, please see [here](../staging/documentation/contributing.md).


@ -1,6 +1,6 @@
# composeAndExecute
The `composeAndExecute` command is used to compose and execute over an arbitrary set of custom transformers.
This is accomplished by generating a Go plugin which allows the `vulcanizedb` binary to link to external transformers, so
long as they abide by one of the standard [interfaces](../staging/libraries/shared/transformer).
Additionally, there are separate `compose` and `execute` commands to allow pre-building and linking to a pre-built .so file.


@ -1,22 +1,35 @@
# Syncing commands
These commands are used to sync raw Ethereum data into Postgres, with varying levels of data granularity.
## headerSync
Syncs block headers from a running Ethereum node into the VulcanizeDB table `headers`.
- Queries the Ethereum node using RPC calls.
- Validates headers from the last 15 blocks to ensure that data is up to date.
- Useful when you want a minimal baseline from which to track targeted data on the blockchain (e.g. individual smart contract storage values or event logs).
##### Usage
1. Start Ethereum node.
1. In a separate terminal start VulcanizeDB:
`./vulcanizedb headerSync --config <config.toml> --starting-block-number <block-number>`
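Once `headerSync` is running, the persisted headers can be inspected directly in Postgres. For example (the column names below are assumptions about the `headers` schema, for illustration only):

```sql
-- Hypothetical query; verify column names against your actual headers table.
SELECT block_number, hash
FROM public.headers
ORDER BY block_number DESC
LIMIT 5;
```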
## sync
Syncs blocks, transactions, receipts and logs from a running Ethereum node into VulcanizeDB tables named
`blocks`, `uncles`, `full_sync_transactions`, `full_sync_receipts` and `logs`.
- Queries the Ethereum node using RPC calls.
- Validates headers from the last 15 blocks to ensure that data is up to date.
- Useful when you want to maintain a broad cache of what's happening on the blockchain.
##### Usage
1. Start Ethereum node (**if fast syncing your Ethereum node, wait for initial sync to finish**).
1. In a separate terminal start VulcanizeDB:
`./vulcanizedb sync --config <config.toml> --starting-block-number <block-number>`
## coldImport
Syncs VulcanizeDB from Geth's underlying LevelDB datastore and persists Ethereum blocks,
transactions, receipts and logs into VulcanizeDB tables named `blocks`, `uncles`,
`full_sync_transactions`, `full_sync_receipts` and `logs` respectively.
##### Usage
1. Ensure the node is not running and that it has synced to the desired block height.
1. Start VulcanizeDB:
- `./vulcanizedb coldImport --config <config.toml>`