
Vulcanize DB

Join the chat at https://gitter.im/vulcanizeio/VulcanizeDB

About

Vulcanize DB is a set of tools that make it easier for developers to write application-specific indexes and caches for dapps built on Ethereum.

Dependencies

  • Go (with a configured GOPATH)
  • golang/dep
  • Postgres
  • A synced Ethereum node (Geth or Parity)

Project Setup

Using Vulcanize for the first time requires several setup steps. The following instructions walk through the process:

  1. Fetching the project
  2. Installing dependencies
  3. Configuring shell environment
  4. Database setup
  5. Configuring synced Ethereum node integration
  6. Data syncing

Installation

In order to fetch the project codebase for local use or modification, install it to your GOPATH via:

go get github.com/vulcanize/vulcanizedb

Once fetched, dependencies can be installed via go get or (the preferred method) pinned to specific versions via golang/dep, the prototype Go package manager.
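
At the time, dep's own documentation recommended installing it via go get:

go get -u github.com/golang/dep/cmd/dep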

In order to install packages with dep, ensure you are in the project directory within your GOPATH (default location: ~/go/src/github.com/vulcanize/vulcanizedb/) and run:

dep ensure

After dep finishes, dependencies should be installed within your GOPATH at the versions specified in Gopkg.toml.

Lastly, ensure that GOPATH is defined in your shell. If necessary, GOPATH can be set in ~/.bashrc or ~/.bash_profile, depending upon your system. It can be additionally helpful to add $GOPATH/bin to your shell's $PATH.
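
For example, in ~/.bashrc:

export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin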

Setting up the Database

  1. Install Postgres

  2. Create a superuser for yourself and make sure psql --list works without prompting for a password.

  3. Execute createdb vulcanize_public

  4. Execute cd $GOPATH/src/github.com/vulcanize/vulcanizedb

  5. Run the migrations: make migrate HOST_NAME=localhost NAME=vulcanize_public PORT=<postgres port, default 5432>

    • See below for configuring additional environments; a combined example of the steps above follows this list.
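
Taken together, a typical first-time setup looks like the following. The createuser invocation is one common approach on Ubuntu (adjust for your platform), and Postgres is assumed to be listening on the default port 5432:

sudo -u postgres createuser --superuser $USER
createdb vulcanize_public
cd $GOPATH/src/github.com/vulcanize/vulcanizedb
make migrate HOST_NAME=localhost NAME=vulcanize_public PORT=5432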

In some cases (such as recent Ubuntu systems), password authentication from localhost may fail. To allow access on Ubuntu, change the localhost connection methods (hostname, IPv4, and IPv6) from peer/md5 to trust in /etc/postgresql/<version>/pg_hba.conf, as shown below.
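
The relevant lines would then end with trust instead of peer or md5:

# /etc/postgresql/<version>/pg_hba.conf
local   all   all                 trust
host    all   all   127.0.0.1/32  trust
host    all   all   ::1/128       trust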

(Note that trusted auth should only be enabled on systems without sensitive data: development and local test databases.)

Configuring Ethereum Node Integration

  • To use a local Ethereum node, copy environments/public.toml.example to environments/public.toml and update the ipcPath and levelDbPath (a sample config is sketched after this list).

    • ipcPath should match the local node's IPC filepath:

      • For Geth:

        • The IPC file is called geth.ipc.
        • The geth IPC file path is printed to the console when you start geth.
        • The default location is:
          • Mac: <full home path>/Library/Ethereum/geth.ipc
          • Linux: <full home path>/.ethereum/geth.ipc
      • For Parity:

        • The IPC file is called jsonrpc.ipc.
        • The default location is:
          • Mac: <full home path>/Library/Application\ Support/io.parity.ethereum/
          • Linux: <full home path>/.local/share/io.parity.ethereum/
    • levelDbPath should match Geth's chaindata directory path.

      • The geth LevelDB chaindata path is printed to the console when you start geth.
      • The default location is:
        • Mac: <full home path>/Library/Ethereum/geth/chaindata
        • Linux: <full home path>/.ethereum/geth/chaindata
      • levelDbPath is irrelevant (and coldImport is currently unavailable) if only running parity.
  • See environments/infura.toml to configure commands to run against Infura, if a local node is unavailable. (Support is currently experimental.)
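
As a sketch, a minimal environments/public.toml for a local Geth node could look like the following. The [database] and [client] table names are an assumption here; verify the exact keys against public.toml.example:

[database]
name     = "vulcanize_public"
hostname = "localhost"
port     = 5432

[client]
ipcPath     = "<path to geth.ipc>"
levelDbPath = "<path to chaindata>"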

Start syncing with Postgres

Syncs VulcanizeDB with the configured Ethereum node.

  1. Start the node
    • If the node is not yet fully synced, Vulcanize will not be able to operate on the fetched data; wait for the initial sync to finish.
  2. Start the vulcanize_db sync
    • Execute ./vulcanizedb sync --config <path to config.toml>
    • Or to sync from a specific block: ./vulcanizedb sync --config <config.toml> --starting-block-number <block-number>
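
For example, using the local node config from above (the starting block is purely illustrative):

./vulcanizedb sync --config environments/public.toml --starting-block-number 0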

Alternatively, sync from Geth's underlying LevelDB

Sync VulcanizeDB from the LevelDB underlying a Geth node.

  1. Ensure the node is not running and that it has synced to the desired block height.
  2. Start vulcanize_db
    • ./vulcanizedb coldImport --config <config.toml>
  3. Optional flags:
    • --starting-block-number <block number>/-s <block number>: block number to start syncing from
    • --ending-block-number <block number>/-e <block number>: block number to sync to
    • --all/-a: sync all missing blocks
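
For example (block numbers illustrative):

./vulcanizedb coldImport --config environments/public.toml -s 0 -e 5000000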

Running the Tests

In order to run the full test suite, a test database must be prepared. By default, the tests use a database named vulcanize_private. Create the database in Postgres, then run migrations on it before executing the tests.
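
First create the database:

createdb vulcanize_private

Then run the migrations: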

make migrate HOST_NAME=localhost NAME=vulcanize_private PORT=<postgres port, default 5432>

Ginkgo is declared as a dep dependency for test execution. Linting and tests can be run together via the provided make task:

make test

Tests can also be run directly via Ginkgo from the project's root directory:

ginkgo -r

Start the full environment in Docker with a single command

Geth Rinkeby

Make command          Description
rinkeby_env_up        Start geth and postgres, run migrations, then start the vulcanizedb container once migrations finish
rinkeby_env_deploy    Build and run the vulcanizedb container in the Rinkeby environment
rinkeby_env_migrate   Build and run the Rinkeby environment migrations
rinkeby_env_down      Stop and remove all Rinkeby environment containers

A successful run of the VulcanizeDB container requires a fully synced geth state. Attach to the geth console and check the sync state:

$ docker exec -it rinkeby_vulcanizedb_geth geth --rinkeby attach
...
> eth.syncing
false

If you already have full Rinkeby chaindata, you can move it into the rinkeby_vulcanizedb_geth_data Docker volume to skip the long initial sync.
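
One way to do this, assuming the containers are stopped and your existing chaindata lives under ~/.ethereum/rinkeby (the directory layout inside the volume is an assumption; inspect it before copying):

# Find where Docker stores the named volume on the host
docker volume inspect --format '{{ .Mountpoint }}' rinkeby_vulcanizedb_geth_data

# Copy the local chaindata into the volume; the destination subpath below is
# assumed and should match whatever geth created inside the volume
sudo cp -r ~/.ethereum/rinkeby/geth/chaindata <mountpoint from above>/rinkeby/geth/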