# Super Node
Vulcanizedb can act as an index for Ethereum data stored on IPFS through the use of the `syncAndPublish` and
`syncPublishScreenAndServe` commands.
## Manual Setup
These commands work in conjunction with a [state-diffing full Geth node](https://github.com/vulcanize/go-ethereum/tree/statediffing)
and IPFS.
### IPFS
To start, download and install [IPFS](https://github.com/vulcanize/go-ipfs):
`go get github.com/ipfs/go-ipfs`
`cd $GOPATH/src/github.com/ipfs/go-ipfs`
`make install`
If we want to use Postgres as our backing datastore, we need to use the vulcanize fork of go-ipfs.
Start by adding the fork and switching over to it:
`git remote add vulcanize https://github.com/vulcanize/go-ipfs.git`
`git fetch vulcanize`
`git checkout -b postgres_update vulcanize/postgres_update`
Now install this fork of ipfs (be sure to remove any previous installation first):
`make install`
Check that it is installed properly by running
`ipfs`
You should see the CLI info/help output.
And now we initialize ipfs with the `postgresds` profile.
If ipfs was previously initialized, we will need to remove the old profile first.
We also need to provide env variables for the postgres connection.
We can either set these manually, e.g.
```bash
export IPFS_PGHOST=
export IPFS_PGUSER=
export IPFS_PGDATABASE=
export IPFS_PGPORT=
export IPFS_PGPASSWORD=
```
And then run the ipfs command
`ipfs init --profile=postgresds`
Or we can use the pre-made script at `$GOPATH/src/github.com/ipfs/go-ipfs/misc/utility/ipfs_postgres.sh`,
which has usage:
`./ipfs_postgres.sh <IPFS_PGHOST> <IPFS_PGPORT> <IPFS_PGUSER> <IPFS_PGDATABASE>`
and will ask us to enter the password, avoiding storing it in an ENV variable.
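For example, with a local Postgres instance (all of these connection values are hypothetical):
`./ipfs_postgres.sh localhost 5432 postgres vulcanize_public`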
Once we have initialized ipfs, that is all we need to do with it; we do not need to run a daemon during the subsequent processes (in fact, we can't).
### Geth
For Geth, we currently *require* a special fork, and we can set this up as follows:
Begin by downloading geth and switching to the vulcanize/statediffing branch
`go get github.com/ethereum/go-ethereum`
`cd $GOPATH/src/github.com/ethereum/go-ethereum`
`git remote add vulcanize https://github.com/vulcanize/go-ethereum.git`
`git fetch vulcanize`
`git checkout -b statediffing vulcanize/statediffing`
Now, install this fork of geth (make sure any old versions have been uninstalled/binaries removed first)
`make geth`
And run the output binary with statediffing turned on:
`cd $GOPATH/src/github.com/ethereum/go-ethereum/build/bin`
`./geth --statediff --statediff.streamblock --ws --syncmode=full`
Note: other CLI options, including statediff-specific ones, can be explored with `./geth help`.
The output from geth should mention that it is `Starting statediff service` and block synchronization should begin shortly thereafter.
Note that until it receives a subscriber, the statediffing process does essentially nothing. Once a subscription is received, this
will be indicated in the output.
Also in the output will be the websocket url and ipc paths that we will use to subscribe to the statediffing process.
The default ws url is "ws://127.0.0.1:8546" and the default ipcPath (on Darwin systems only) is "/Users/user/Library/Ethereum/geth.ipc".
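To verify that the subscription endpoint is live, we can subscribe to it from a short Go program using go-ethereum's `rpc` package. This is only a minimal sketch, assuming the fork registers its statediff subscription under a `statediff` namespace with a `stream` method (check the fork's source or `./geth help` for the exact names); the payload is left as a generic `interface{}`.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Dial the ws url geth printed when the statediff service started
	client, err := rpc.Dial("ws://127.0.0.1:8546")
	if err != nil {
		log.Fatal(err)
	}
	// The payload type is left generic here; the fork defines a concrete
	// statediff payload struct that could be used instead
	payloads := make(chan interface{})
	// "statediff"/"stream" are assumptions about the fork's rpc API
	sub, err := client.Subscribe(context.Background(), "statediff", payloads, "stream")
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Unsubscribe()
	for {
		select {
		case payload := <-payloads:
			fmt.Printf("state diff payload: %+v\n", payload)
		case err := <-sub.Err():
			log.Fatal(err)
		}
	}
}
```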
### Vulcanizedb
There are two commands to choose from:
#### syncAndPublish
`syncAndPublish` performs the functions of the super node: syncing data from Geth, converting them to IPLDs,
publishing those IPLDs to IPFS, and creating a local Postgres index to relate their CIDs to useful metadata.
Usage:
`./vulcanizedb syncAndPublish --config=<config_file.toml>`
The config file for the `syncAndPublish` command looks very similar to the basic config file:
```toml
[database]
name = "vulcanize_demo"
hostname = "localhost"
port = 5432
[client]
ipcPath = "ws://127.0.0.1:8546"
ipfsPath = "/Users/user/.ipfs"
```
With two additional fields: `client.ipcPath`, which is either the ws url or the ipc path that Geth has exposed (the url and path output
when the geth sync was started), and `client.ipfsPath`, which is the path to the ipfs datastore directory.
#### syncPublishScreenAndServe
`syncPublishScreenAndServe` does everything that `syncAndPublish` does, plus it opens up an RPC server which exposes
an endpoint to allow transformers to subscribe to subsets of the sync-and-published data that are relevant to their transformations.
Usage:
`./vulcanizedb syncPublishScreenAndServe --config=<config_file.toml>`
The config file for the `syncPublishScreenAndServe` command has an additional `server` section and an optional `superNodeBackFill` section, and looks like:
```toml
[database]
name = "vulcanize_demo"
hostname = "localhost"
port = 5432
[client]
ipcPath = "ws://127.0.0.1:8546"
ipfsPath = "/Users/user/.ipfs"
[server]
ipcPath = "/Users/user/.vulcanize/vulcanize.ipc"
wsEndpoint = "127.0.0.1:80"
[superNodeBackFill]
on = false
ipcPath = ""
frequency = 5
```
The additional `server.ipcPath` and `server.wsEndpoint` fields are used to set the ipc endpoint and ws url over which
the `syncPublishScreenAndServe` rpc server will expose itself to subscribing transformers, respectively.
Any valid and available path and endpoint is acceptable, but keep in mind that this path and endpoint need to
be known by transformers for them to subscribe to the super node.
Because the super node syncs data from a geth full node as it progresses through its block synchronization, there is potential
for the super node to miss data both at the beginning of the sync, due to lag between initialization of the two processes, and
anywhere throughout the sync if the processes are interrupted. The `superNodeBackFill` config mapping is used to optionally configure
the super node with an archival geth client that exposes a `statediff.StateDiffAt` rpc endpoint, enabling it to fill in these data gaps.
`superNodeBackFill.on` turns the backfill process on, `superNodeBackFill.ipcPath` is the rpc path for the archival geth node, and `superNodeBackFill.frequency`
sets how often (in minutes) the backfill process checks for and fills in gaps.
## Dockerfile Setup
The below provides step-by-step directions for how to set up the super node using the provided Dockerfile on an AWS Linux AMI instance.
Note that the instance will need sufficient memory and storage for this to work.
1. Install basic dependencies
```
sudo yum update
sudo yum install -y curl gpg gcc gcc-c++ make git
```
2. Install Go 1.12
```
wget https://dl.google.com/go/go1.12.6.linux-amd64.tar.gz
tar -xzf go1.12.6.linux-amd64.tar.gz
sudo mv go /usr/local
```
3. Edit .bash_profile to export GOPATH
```
export GOROOT=/usr/local/go
export GOPATH=$HOME/go
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
```
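Then reload the profile so the changes take effect in the current session, e.g. `source ~/.bash_profile`.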
4. Install and set up Postgres
```
sudo yum install postgresql postgresql96-server
sudo service postgresql96 initdb
sudo service postgresql96 start
sudo -u postgres createuser -s ec2-user
sudo -u postgres createdb ec2-user
sudo su postgres
psql
ALTER USER "ec2-user" WITH SUPERUSER;
\q
exit
```
4b. Edit hba_file to trust connections
```
psql
SHOW hba_file;
\q
sudo vim {PATH_TO_FILE}
```
4c. Stop and restart the Postgres server to effect the changes
```
sudo service postgresql96 stop
sudo service postgresql96 start
```
5. Install and start Docker (exit and re-enter the ec2 instance afterwards for the changes to take effect)
```
sudo yum install -y docker
sudo service docker start
sudo usermod -aG docker ec2-user
```
6. Fetch the repository and switch to this working branch
```
go get github.com/vulcanize/vulcanizedb
cd $GOPATH/src/github.com/vulcanize/vulcanizedb
git checkout ipfs_concurrency
```
7. Create the db
```
createdb vulcanize_public
```
8. Build and run the Docker image
```
cd $GOPATH/src/github.com/vulcanize/vulcanizedb/dockerfiles/super_node
docker build .
docker run --network host -e VDB_PG_CONNECT=postgres://localhost:5432/vulcanize_public?sslmode=disable {IMAGE_ID}
```
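Note: depending on the shell, the `VDB_PG_CONNECT` value may need to be quoted, since it contains a `?`.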
## Subscribing
A transformer can subscribe to the `syncPublishScreenAndServe` service over its ipc or ws endpoints. When subscribing, the transformer
specifies which subsets of the synced data it is interested in, and the server will forward only those data.
The `streamSubscribe` command serves as a simple demonstration/example of subscribing to the super-node feed: it subscribes with a set of parameters
defined in the loaded config file and prints the streamed data to stdout. To build transformers that subscribe to and use super-node data,
the shared/libraries/streamer can be used.
Usage:
`./vulcanizedb streamSubscribe --config=<config_file.toml>`
The config for `streamSubscribe` has the `subscription` set of parameters, for example:
```toml
[subscription]
path = "ws://127.0.0.1:8080"
backfill = true
backfillOnly = false
startingBlock = 0
endingBlock = 0
[subscription.headerFilter]
off = false
uncles = false
[subscription.trxFilter]
off = false
src = [
"0xde0B295669a9FD93d5F28D9Ec85E40f4cb697BAe",
]
dst = [
"0xde0B295669a9FD93d5F28D9Ec85E40f4cb697BAe",
]
[subscription.receiptFilter]
off = false
topic0s = [
"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
"0x930a61a57a70a73c2a503615b87e2e54fe5b9cdeacda518270b852296ab1a377"
]
[subscription.stateFilter]
off = false
addresses = [
"0xde0B295669a9FD93d5F28D9Ec85E40f4cb697BAe"
]
intermediateNodes = false
[subscription.storageFilter]
off = true
addresses = [
"",
""
]
storageKeys = [
"",
""
]
intermediateNodes = false
```
`subscription.path` is used to define the ws url OR ipc endpoint over which we will subscribe to the super-node
(the `server.ipcPath` or `server.wsEndpoint` that the super-node has defined in its config file).
`subscription.backfill` specifies whether or not the super-node should look up historical data in its cache and
send that to the subscriber; if this is set to `false`, then the super-node only forwards newly synced/incoming data.
`subscription.backfillOnly` will tell the super-node to only send historical data and not stream incoming data going forward.
`subscription.startingBlock` is the starting block number for the range we want to receive data in.
`subscription.endingBlock` is the ending block number for the range we want to receive data in;
setting it to 0 means there is no end and we will continue streaming indefinitely.
`subscription.headerFilter` has two sub-options: `off` and `uncles`. Setting `off` to true tells the super-node to
not send any headers to the subscriber; setting `uncles` to true tells the super-node to send uncles in addition to normal headers.
`subscription.trxFilter` has three sub-options: `off`, `src`, and `dst`. Setting `off` to true tells the super-node to
not send any transactions to the subscriber; `src` and `dst` are string arrays which can be filled with ETH addresses we want to filter transactions for.
If they have any addresses, then the super-node will only send transactions that were sent or received by the addresses contained
in `src` and `dst`, respectively.
`subscription.receiptFilter` has two sub-options: `off` and `topic0s`. Setting `off` to true tells the super-node to
not send any receipts to the subscriber; `topic0s` is a string array which can be filled with event topics we want to filter for.
If it has any topics, then the super-node will only send receipts that contain logs which have that topic0.
`subscription.stateFilter` has three sub-options: `off`, `addresses`, and `intermediateNodes`. Setting `off` to true tells the super-node to
not send any state data to the subscriber; `addresses` is a string array which can be filled with ETH addresses we want to filter state for.
If it has any addresses, then the super-node will only send state leaf nodes (accounts) corresponding to those account addresses. By default the super-node
only sends along state leaf nodes; if we want to receive branch and extension nodes as well, `intermediateNodes` can be set to `true`.
`subscription.storageFilter` has four sub-options: `off`, `addresses`, `storageKeys`, and `intermediateNodes`. Setting `off` to true tells the super-node to
not send any storage data to the subscriber; `addresses` is a string array which can be filled with ETH addresses we want to filter storage for.
If it has any addresses, then the super-node will only send storage nodes from the storage tries at those state addresses. `storageKeys` is another string
array that can be filled with storage keys we want to filter storage data for. It is important to note that the storageKeys are the actual keccak256 hashes, whereas
the addresses in the `addresses` fields are the ETH addresses and not their keccak256 hashes that serve as the actual state keys. By default the super-node
only sends along storage leaf nodes; if we want to receive branch and extension nodes as well, `intermediateNodes` can be set to `true`.
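Putting the above together, a transformer that does not go through the provided streamer library could subscribe directly over ws with go-ethereum's `rpc` package. This is only a sketch: the `vdb` namespace, the `stream` subscription name, and the map shape used for the filter parameters are assumptions here; the shared/libraries/streamer source defines the exact call and the concrete payload and parameter types.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Dial the server.wsEndpoint the super node exposes (from its config file)
	client, err := rpc.Dial("ws://127.0.0.1:80")
	if err != nil {
		log.Fatal(err)
	}
	// The streamer library defines the concrete response payload type
	payloads := make(chan interface{})
	// Filter parameters mirroring the [subscription] settings above (hypothetical shape)
	params := map[string]interface{}{
		"backfill":      true,
		"startingBlock": 0,
		"endingBlock":   0,
	}
	// "vdb"/"stream" are assumptions about the server's rpc API
	sub, err := client.Subscribe(context.Background(), "vdb", payloads, "stream", params)
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Unsubscribe()
	for {
		select {
		case payload := <-payloads:
			fmt.Printf("super node payload: %+v\n", payload)
		case err := <-sub.Err():
			log.Fatal(err)
		}
	}
}
```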