Allow building of npm packages using locally published dependencies #86
16
README.md
@ -3,9 +3,7 @@
|
||||
Stack Orchestrator allows building and deployment of a Laconic stack on a single machine with minimal prerequisites.
|
||||
|
||||
## Setup
|
||||
### User Mode
|
||||
User mode runs the orchestrator from a "binary" single-file release and does not require special Python environment setup. Use this mode unless you plan to make changes to the orchestrator source code.
|
||||
#### Prerequisites
|
||||
### Prerequisites
|
||||
Stack Orchestrator is a Python3 CLI tool that runs on any OS with Python3 and Docker. Tested on: Ubuntu 20/22.
|
||||
|
||||
Ensure that the following are already installed:
|
||||
@ -30,11 +28,17 @@ Ensure that the following are already installed:
|
||||
# see https://docs.docker.com/compose/install/linux/#install-the-plugin-manually for further details
|
||||
# or to install for all users.
|
||||
```
|
||||
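A quick way to confirm the prerequisites are in place (a minimal sketch; it simply checks that Python3, Docker, and the Compose plugin respond):

```bash
# Each command should print a version string if the prerequisite is installed
python3 --version
docker --version
docker compose version
```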
#### Install
|
||||
|
||||
### User Mode Install
|
||||
|
||||
User mode runs the orchestrator from a "binary" single-file release and does not require special Python environment setup. Use this mode unless you plan to make changes to the orchestrator source code.
|
||||
|
||||
*NOTE: User Mode is currently broken; use "Developer mode", described below, for now.*
|
||||
|
||||
1. Download the latest release from [this page](https://github.com/cerc-io/stack-orchestrator/tags), into a suitable directory (e.g. `~/bin`):
|
||||
```
|
||||
$ cd ~/bin
|
||||
$ curl https://github.com/cerc-io/stack-orchestrator/releases/download/v1.0.3-alpha/laconic-so
|
||||
$ curl -L https://github.com/cerc-io/stack-orchestrator/releases/download/v1.0.3-alpha/laconic-so
|
||||
```
|
||||
1. Ensure `laconic-so` is on the `PATH`
|
||||
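One way to satisfy that (a minimal sketch, assuming the release file was saved as `~/bin/laconic-so` in the previous step):

```bash
# Make the downloaded release executable and put ~/bin on the PATH
chmod +x ~/bin/laconic-so
export PATH="$PATH:$HOME/bin"   # add this line to ~/.profile or ~/.bashrc to persist it
```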
1. Verify operation:
|
||||
@ -56,7 +60,7 @@ Ensure that the following are already installed:
|
||||
deploy-system deploy a stack
|
||||
setup-repositories git clone the set of repositories required to build...
|
||||
```
|
||||
### Developer mode
|
||||
### Developer mode Install
|
||||
Suitable for developers either modifying or debugging the orchestrator Python code:
|
||||
#### Prerequisites
|
||||
In addition to the binary install prerequisites listed above, the following are required:
|
||||
|
@ -13,7 +13,12 @@ cerc/laconic-cns-cli
|
||||
cerc/fixturenet-eth-geth
|
||||
cerc/fixturenet-eth-lighthouse
|
||||
cerc/watcher-mobymask
|
||||
cerc/watcher-erc20
|
||||
cerc/watcher-erc721
|
||||
cerc/watcher-uniswap-v3
|
||||
cerc/uniswap-v3-info
|
||||
cerc/test-container
|
||||
cerc/eth-probe
|
||||
cerc/builder-js
|
||||
cerc/keycloak
|
||||
cerc/tx-spammer
|
||||
|
@ -11,6 +11,10 @@ laconicd
|
||||
fixturenet-laconicd
|
||||
fixturenet-eth
|
||||
watcher-mobymask
|
||||
watcher-erc20
|
||||
watcher-erc721
|
||||
watcher-uniswap-v3
|
||||
test
|
||||
eth-probe
|
||||
keycloak
|
||||
tx-spammer
|
||||
|
@ -1,16 +1,18 @@
|
||||
vulcanize/ops
|
||||
cerc-io/ipld-eth-db
|
||||
cerc-io/go-ethereum
|
||||
cerc-io/ipld-eth-server
|
||||
cerc-io/eth-statediff-service
|
||||
vulcanize/eth-statediff-fill-service
|
||||
vulcanize/ipld-eth-db-validator
|
||||
vulcanize/ipld-eth-beacon-indexer
|
||||
vulcanize/ipld-eth-beacon-db
|
||||
cerc-io/ipld-eth-beacon-indexer
|
||||
cerc-io/ipld-eth-beacon-db
|
||||
cerc-io/laconicd
|
||||
cerc-io/laconic-sdk
|
||||
cerc-io/laconic-registry-cli
|
||||
cerc-io/mobymask-watcher
|
||||
cerc-io/watcher-ts
|
||||
vulcanize/uniswap-watcher-ts
|
||||
vulcanize/uniswap-v3-info
|
||||
vulcanize/assemblyscript
|
||||
cerc-io/eth-probe
|
||||
cerc-io/tx-spammer
|
||||
|
@ -73,7 +73,7 @@ def command(ctx, include, exclude, cluster, command, services):
|
||||
docker = DockerClient(compose_files=compose_files, compose_project_name=cluster)
|
||||
|
||||
services_list = list(services) or None
|
||||
|
||||
|
||||
if not dry_run:
|
||||
if command == "up":
|
||||
if verbose:
|
||||
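For context on the change above: converting an empty `services` tuple to `None` appears intended to let the compose `up` fall back to starting every service in the included pods when no service names are given, while still allowing a subset to be named. A usage sketch (both commands are taken from the MobyMask instructions later in this PR):

```bash
# Bring up only the watcher database service of the included pod
laconic-so deploy-system --include watcher-mobymask up mobymask-watcher-db

# Omit service names to bring up every service in the included pod
laconic-so deploy-system --include watcher-mobymask up
```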
|
@ -30,4 +30,9 @@ services:
|
||||
ports:
|
||||
- "127.0.0.1:8081:8081"
|
||||
- "127.0.0.1:8082:8082"
|
||||
|
||||
healthcheck:
|
||||
test: ["CMD", "nc", "-v", "localhost", "8081"]
|
||||
interval: 20s
|
||||
timeout: 5s
|
||||
retries: 15
|
||||
start_period: 5s
|
||||
|
18
compose/docker-compose-tx-spammer.yml
Normal file
@ -0,0 +1,18 @@
|
||||
version: '3.2'
|
||||
|
||||
services:
|
||||
tx-spammer:
|
||||
restart: always
|
||||
image: cerc/tx-spammer:local
|
||||
env_file:
|
||||
- ../config/tx-spammer/tx-spammer.env
|
||||
environment:
|
||||
ACCOUNTS_CSV_URL: http://fixturenet-eth-bootnode-geth:9898/accounts.csv
|
||||
ETH_HTTP_PATH: http://fixturenet-eth-geth-1:8545
|
||||
LOG_LEVEL: debug
|
||||
SPAMMER_COMMAND: autoSend
|
||||
depends_on:
|
||||
fixturenet-eth-bootnode-geth:
|
||||
condition: service_started
|
||||
fixturenet-eth-geth-1:
|
||||
condition: service_healthy
|
49
compose/docker-compose-watcher-erc20.yml
Normal file
@ -0,0 +1,49 @@
|
||||
version: '3.2'
|
||||
|
||||
services:
|
||||
|
||||
erc20-watcher-db:
|
||||
restart: unless-stopped
|
||||
image: postgres:14-alpine
|
||||
environment:
|
||||
- POSTGRES_USER=vdbm
|
||||
- POSTGRES_MULTIPLE_DATABASES=erc20-watcher,erc20-watcher-job-queue
|
||||
- POSTGRES_EXTENSION=erc20-watcher-job-queue:pgcrypto
|
||||
- POSTGRES_PASSWORD=password
|
||||
volumes:
|
||||
- ../config/postgresql/multiple-postgressql-databases.sh:/docker-entrypoint-initdb.d/multiple-postgressql-databases.sh
|
||||
- erc20_watcher_db_data:/var/lib/postgresql/data
|
||||
ports:
|
||||
- "0.0.0.0:15433:5432"
|
||||
healthcheck:
|
||||
test: ["CMD", "nc", "-v", "localhost", "5432"]
|
||||
interval: 20s
|
||||
timeout: 5s
|
||||
retries: 15
|
||||
start_period: 10s
|
||||
|
||||
erc20-watcher:
|
||||
restart: unless-stopped
|
||||
depends_on:
|
||||
ipld-eth-server:
|
||||
condition: service_healthy
|
||||
erc20-watcher-db:
|
||||
condition: service_healthy
|
||||
image: cerc/watcher-erc20:local
|
||||
environment:
|
||||
- ETH_RPC_URL=http://go-ethereum:8545
|
||||
command: ["sh", "-c", "yarn server"]
|
||||
volumes:
|
||||
- ../config/watcher-erc20/erc20-watcher.toml:/app/packages/erc20-watcher/environments/local.toml
|
||||
ports:
|
||||
- "0.0.0.0:3002:3001"
|
||||
- "0.0.0.0:9002:9001"
|
||||
healthcheck:
|
||||
test: ["CMD", "nc", "-v", "localhost", "3002"]
|
||||
interval: 20s
|
||||
timeout: 5s
|
||||
retries: 15
|
||||
start_period: 5s
|
||||
|
||||
volumes:
|
||||
erc20_watcher_db_data:
|
49
compose/docker-compose-watcher-erc721.yml
Normal file
@ -0,0 +1,49 @@
|
||||
version: '3.2'
|
||||
|
||||
services:
|
||||
|
||||
erc721-watcher-db:
|
||||
restart: unless-stopped
|
||||
image: postgres:14-alpine
|
||||
environment:
|
||||
- POSTGRES_USER=vdbm
|
||||
- POSTGRES_MULTIPLE_DATABASES=erc721-watcher,erc721-watcher-job-queue
|
||||
- POSTGRES_EXTENSION=erc721-watcher-job-queue:pgcrypto
|
||||
- POSTGRES_PASSWORD=password
|
||||
volumes:
|
||||
- ../config/postgresql/multiple-postgressql-databases.sh:/docker-entrypoint-initdb.d/multiple-postgressql-databases.sh
|
||||
- erc721_watcher_db_data:/var/lib/postgresql/data
|
||||
ports:
|
||||
- "0.0.0.0:15434:5432"
|
||||
healthcheck:
|
||||
test: ["CMD", "nc", "-v", "localhost", "5432"]
|
||||
interval: 20s
|
||||
timeout: 5s
|
||||
retries: 15
|
||||
start_period: 10s
|
||||
|
||||
erc721-watcher:
|
||||
restart: unless-stopped
|
||||
depends_on:
|
||||
ipld-eth-server:
|
||||
condition: service_healthy
|
||||
erc721-watcher-db:
|
||||
condition: service_healthy
|
||||
image: cerc/watcher-erc721:local
|
||||
environment:
|
||||
- ETH_RPC_URL=http://go-ethereum:8545
|
||||
command: ["sh", "-c", "yarn server"]
|
||||
volumes:
|
||||
- ../config/watcher-erc721/erc721-watcher.toml:/app/packages/erc721-watcher/environments/local.toml
|
||||
ports:
|
||||
- "0.0.0.0:3009:3009"
|
||||
- "0.0.0.0:9003:9001"
|
||||
healthcheck:
|
||||
test: ["CMD", "nc", "-v", "localhost", "3009"]
|
||||
interval: 20s
|
||||
timeout: 5s
|
||||
retries: 15
|
||||
start_period: 5s
|
||||
|
||||
volumes:
|
||||
erc721_watcher_db_data:
|
@ -4,7 +4,7 @@ version: '3.2'
|
||||
|
||||
services:
|
||||
|
||||
watcher-db:
|
||||
mobymask-watcher-db:
|
||||
restart: unless-stopped
|
||||
image: postgres:14-alpine
|
||||
environment:
|
||||
@ -14,7 +14,7 @@ services:
|
||||
- POSTGRES_PASSWORD=password
|
||||
volumes:
|
||||
- ../config/postgresql/multiple-postgressql-databases.sh:/docker-entrypoint-initdb.d/multiple-postgressql-databases.sh
|
||||
- watcher_db_data:/var/lib/postgresql/data
|
||||
- mobymask_watcher_db_data:/var/lib/postgresql/data
|
||||
ports:
|
||||
- "0.0.0.0:15432:5432"
|
||||
healthcheck:
|
||||
@ -27,12 +27,12 @@ services:
|
||||
mobymask-watcher-server:
|
||||
restart: unless-stopped
|
||||
depends_on:
|
||||
watcher-db:
|
||||
mobymask-watcher-db:
|
||||
condition: service_healthy
|
||||
image: cerc/watcher-mobymask:local
|
||||
command: ["sh", "-c", "yarn server"]
|
||||
volumes:
|
||||
- ../config/watcher-mobymask/mobymask-watcher.toml:/app/watcher-ts/packages/mobymask-watcher/environments/local.toml
|
||||
- ../config/watcher-mobymask/mobymask-watcher.toml:/app/packages/mobymask-watcher/environments/local.toml
|
||||
ports:
|
||||
- "0.0.0.0:3001:3001"
|
||||
- "0.0.0.0:9001:9001"
|
||||
@ -50,18 +50,16 @@ services:
|
||||
depends_on:
|
||||
mobymask-watcher-server:
|
||||
condition: service_healthy
|
||||
watcher-db:
|
||||
mobymask-watcher-db:
|
||||
condition: service_healthy
|
||||
image: cerc/watcher-mobymask:local
|
||||
command: ["sh", "-c", "yarn job-runner"]
|
||||
volumes:
|
||||
- ../config/watcher-mobymask/mobymask-watcher.toml:/app/watcher-ts/packages/mobymask-watcher/environments/local.toml
|
||||
- ../config/watcher-mobymask/mobymask-watcher.toml:/app/packages/mobymask-watcher/environments/local.toml
|
||||
ports:
|
||||
- "0.0.0.0:9000:9000"
|
||||
extra_hosts:
|
||||
- "ipld-eth-server:host-gateway"
|
||||
|
||||
volumes:
|
||||
indexer_db_data:
|
||||
watcher_db_data:
|
||||
|
||||
mobymask_watcher_db_data:
|
||||
|
170
compose/docker-compose-watcher-uniswap-v3.yml
Normal file
@ -0,0 +1,170 @@
|
||||
version: '3.2'
|
||||
|
||||
services:
|
||||
|
||||
uniswap-watcher-db:
|
||||
restart: unless-stopped
|
||||
image: postgres:14-alpine
|
||||
environment:
|
||||
- POSTGRES_USER=vdbm
|
||||
- POSTGRES_MULTIPLE_DATABASES=erc20-watcher,uni-watcher,uni-info-watcher,erc20-watcher-job-queue,uni-watcher-job-queue,uni-info-watcher-job-queue
|
||||
- POSTGRES_EXTENSION=erc20-watcher-job-queue:pgcrypto,uni-watcher-job-queue:pgcrypto,uni-info-watcher-job-queue:pgcrypto
|
||||
- POSTGRES_PASSWORD=password
|
||||
command: ["postgres", "-c", "shared_preload_libraries=pg_stat_statements", "-c", "pg_stat_statements.track=all", "-c", "work_mem=2GB"]
|
||||
volumes:
|
||||
- ../config/postgresql/multiple-postgressql-databases.sh:/docker-entrypoint-initdb.d/multiple-postgressql-databases.sh
|
||||
- ../config/postgresql/create-pg-stat-statements.sql:/docker-entrypoint-initdb.d/create-pg-stat-statements.sql
|
||||
- uniswap_watcher_db_data:/var/lib/postgresql/data
|
||||
ports:
|
||||
- "0.0.0.0:15435:5432"
|
||||
healthcheck:
|
||||
test: ["CMD", "nc", "-v", "localhost", "5432"]
|
||||
interval: 20s
|
||||
timeout: 5s
|
||||
retries: 15
|
||||
start_period: 10s
|
||||
shm_size: '8GB'
|
||||
|
||||
erc20-watcher-server:
|
||||
restart: unless-stopped
|
||||
depends_on:
|
||||
uniswap-watcher-db:
|
||||
condition: service_healthy
|
||||
image: cerc/watcher-uniswap-v3:local
|
||||
working_dir: /app/packages/erc20-watcher
|
||||
environment:
|
||||
- DEBUG=vulcanize:*
|
||||
command: ["node", "--enable-source-maps", "dist/server.js"]
|
||||
volumes:
|
||||
- ../config/watcher-uniswap-v3/erc20-watcher.toml:/app/packages/erc20-watcher/environments/local.toml
|
||||
ports:
|
||||
- "0.0.0.0:3005:3001"
|
||||
healthcheck:
|
||||
test: ["CMD", "nc", "-v", "localhost", "3001"]
|
||||
interval: 20s
|
||||
timeout: 5s
|
||||
retries: 15
|
||||
start_period: 5s
|
||||
extra_hosts:
|
||||
- "host.docker.internal:host-gateway"
|
||||
|
||||
uni-watcher-job-runner:
|
||||
restart: unless-stopped
|
||||
depends_on:
|
||||
uniswap-watcher-db:
|
||||
condition: service_healthy
|
||||
image: cerc/watcher-uniswap-v3:local
|
||||
working_dir: /app/packages/uni-watcher
|
||||
environment:
|
||||
- DEBUG=vulcanize:*
|
||||
command: ["sh", "-c", "./watch-contract.sh && node --enable-source-maps dist/job-runner.js"]
|
||||
volumes:
|
||||
- ../config/watcher-uniswap-v3/uni-watcher.toml:/app/packages/uni-watcher/environments/local.toml
|
||||
- ../config/watcher-uniswap-v3/watch-contract.sh:/app/packages/uni-watcher/watch-contract.sh
|
||||
ports:
|
||||
- "0.0.0.0:9004:9000"
|
||||
healthcheck:
|
||||
test: ["CMD", "nc", "-v", "localhost", "9000"]
|
||||
interval: 20s
|
||||
timeout: 5s
|
||||
retries: 15
|
||||
start_period: 5s
|
||||
extra_hosts:
|
||||
- "host.docker.internal:host-gateway"
|
||||
|
||||
uni-watcher-server:
|
||||
restart: unless-stopped
|
||||
depends_on:
|
||||
uniswap-watcher-db:
|
||||
condition: service_healthy
|
||||
uni-watcher-job-runner:
|
||||
condition: service_healthy
|
||||
image: cerc/watcher-uniswap-v3:local
|
||||
environment:
|
||||
- UNISWAP_START_BLOCK=12369621
|
||||
- DEBUG=vulcanize:*
|
||||
working_dir: /app/packages/uni-watcher
|
||||
command: ["./run.sh"]
|
||||
volumes:
|
||||
- ../config/watcher-uniswap-v3/uni-watcher.toml:/app/packages/uni-watcher/environments/local.toml
|
||||
- ../config/watcher-uniswap-v3/run.sh:/app/packages/uni-watcher/run.sh
|
||||
ports:
|
||||
- "0.0.0.0:3003:3003"
|
||||
- "0.0.0.0:9005:9001"
|
||||
healthcheck:
|
||||
test: ["CMD", "nc", "-v", "localhost", "3003"]
|
||||
interval: 20s
|
||||
timeout: 5s
|
||||
retries: 15
|
||||
start_period: 5s
|
||||
extra_hosts:
|
||||
- "host.docker.internal:host-gateway"
|
||||
|
||||
uni-info-watcher-job-runner:
|
||||
restart: unless-stopped
|
||||
depends_on:
|
||||
uniswap-watcher-db:
|
||||
condition: service_healthy
|
||||
erc20-watcher-server:
|
||||
condition: service_healthy
|
||||
uni-watcher-server:
|
||||
condition: service_healthy
|
||||
image: cerc/watcher-uniswap-v3:local
|
||||
working_dir: /app/packages/uni-info-watcher
|
||||
environment:
|
||||
- DEBUG=vulcanize:*
|
||||
command: ["node", "--enable-source-maps", "dist/job-runner.js"]
|
||||
volumes:
|
||||
- ../config/watcher-uniswap-v3/uni-info-watcher.toml:/app/packages/uni-info-watcher/environments/local.toml
|
||||
ports:
|
||||
- "0.0.0.0:9006:9002"
|
||||
healthcheck:
|
||||
test: ["CMD", "nc", "-v", "localhost", "9002"]
|
||||
interval: 20s
|
||||
timeout: 5s
|
||||
retries: 15
|
||||
start_period: 5s
|
||||
extra_hosts:
|
||||
- "host.docker.internal:host-gateway"
|
||||
|
||||
uni-info-watcher-server:
|
||||
restart: unless-stopped
|
||||
depends_on:
|
||||
uniswap-watcher-db:
|
||||
condition: service_healthy
|
||||
erc20-watcher-server:
|
||||
condition: service_healthy
|
||||
uni-watcher-server:
|
||||
condition: service_healthy
|
||||
uni-info-watcher-job-runner:
|
||||
condition: service_healthy
|
||||
image: cerc/watcher-uniswap-v3:local
|
||||
environment:
|
||||
- UNISWAP_START_BLOCK=12369621
|
||||
working_dir: /app/packages/uni-info-watcher
|
||||
command: ["./run.sh"]
|
||||
volumes:
|
||||
- ../config/watcher-uniswap-v3/uni-info-watcher.toml:/app/packages/uni-info-watcher/environments/local.toml
|
||||
- ../config/watcher-uniswap-v3/run.sh:/app/packages/uni-info-watcher/run.sh
|
||||
ports:
|
||||
- "0.0.0.0:3004:3004"
|
||||
- "0.0.0.0:9007:9003"
|
||||
healthcheck:
|
||||
test: ["CMD", "nc", "-v", "localhost", "3004"]
|
||||
interval: 20s
|
||||
timeout: 5s
|
||||
retries: 15
|
||||
start_period: 5s
|
||||
extra_hosts:
|
||||
- "host.docker.internal:host-gateway"
|
||||
|
||||
uniswap-v3-info:
|
||||
depends_on:
|
||||
uni-info-watcher-server:
|
||||
condition: service_healthy
|
||||
image: cerc/uniswap-v3-info:local
|
||||
ports:
|
||||
- "0.0.0.0:3006:3000"
|
||||
|
||||
volumes:
|
||||
uniswap_watcher_db_data:
|
1
config/postgresql/create-pg-stat-statements.sql
Normal file
@ -0,0 +1 @@
|
||||
CREATE EXTENSION pg_stat_statements;
|
2
config/tx-spammer/tx-spammer.env
Normal file
@ -0,0 +1,2 @@
|
||||
ETH_CALL_FREQ=1000
|
||||
ETH_SEND_FREQ=1000
|
41
config/watcher-erc20/erc20-watcher.toml
Normal file
@ -0,0 +1,41 @@
|
||||
[server]
|
||||
host = "0.0.0.0"
|
||||
port = 3001
|
||||
mode = "storage"
|
||||
kind = "lazy"
|
||||
|
||||
[metrics]
|
||||
host = "127.0.0.1"
|
||||
port = 9000
|
||||
[metrics.gql]
|
||||
port = 9001
|
||||
|
||||
[database]
|
||||
type = "postgres"
|
||||
host = "erc20-watcher-db"
|
||||
port = 5432
|
||||
database = "erc20-watcher"
|
||||
username = "vdbm"
|
||||
password = "password"
|
||||
synchronize = true
|
||||
logging = false
|
||||
maxQueryExecutionTime = 100
|
||||
|
||||
[upstream]
|
||||
[upstream.ethServer]
|
||||
gqlApiEndpoint = "http://ipld-eth-server:8082/graphql"
|
||||
rpcProviderEndpoint = "http://ipld-eth-server:8081"
|
||||
|
||||
[upstream.cache]
|
||||
name = "requests"
|
||||
enabled = false
|
||||
deleteOnStart = false
|
||||
|
||||
[jobQueue]
|
||||
dbConnectionString = "postgres://vdbm:password@erc20-watcher-db:5432/erc20-watcher-job-queue"
|
||||
maxCompletionLagInSecs = 300
|
||||
jobDelayInMilliSecs = 100
|
||||
eventsInBatch = 50
|
||||
blockDelayInMilliSecs = 2000
|
||||
prefetchBlocksInMem = true
|
||||
prefetchBlockCount = 10
|
56
config/watcher-erc721/erc721-watcher.toml
Normal file
@ -0,0 +1,56 @@
|
||||
[server]
|
||||
host = "0.0.0.0"
|
||||
port = 3009
|
||||
kind = "lazy"
|
||||
|
||||
# Checkpointing state.
|
||||
checkpointing = true
|
||||
|
||||
# Checkpoint interval in number of blocks.
|
||||
checkpointInterval = 2000
|
||||
|
||||
# Enable state creation
|
||||
enableState = true
|
||||
|
||||
# Boolean to filter logs by contract.
|
||||
filterLogs = false
|
||||
|
||||
# Max block range for which to return events in eventsInRange GQL query.
|
||||
# Use -1 for skipping check on block range.
|
||||
maxEventsBlockRange = 1000
|
||||
|
||||
[metrics]
|
||||
host = "127.0.0.1"
|
||||
port = 9000
|
||||
[metrics.gql]
|
||||
port = 9001
|
||||
|
||||
[database]
|
||||
type = "postgres"
|
||||
host = "erc721-watcher-db"
|
||||
port = 5432
|
||||
database = "erc721-watcher"
|
||||
username = "vdbm"
|
||||
password = "password"
|
||||
synchronize = true
|
||||
logging = false
|
||||
maxQueryExecutionTime = 100
|
||||
|
||||
[upstream]
|
||||
[upstream.ethServer]
|
||||
gqlApiEndpoint = "http://ipld-eth-server:8082/graphql"
|
||||
rpcProviderEndpoint = "http://ipld-eth-server:8081"
|
||||
|
||||
[upstream.cache]
|
||||
name = "requests"
|
||||
enabled = false
|
||||
deleteOnStart = false
|
||||
|
||||
[jobQueue]
|
||||
dbConnectionString = "postgres://vdbm:password@erc721-watcher-db:5432/erc721-watcher-job-queue"
|
||||
maxCompletionLagInSecs = 300
|
||||
jobDelayInMilliSecs = 100
|
||||
eventsInBatch = 50
|
||||
blockDelayInMilliSecs = 2000
|
||||
prefetchBlocksInMem = true
|
||||
prefetchBlockCount = 10
|
@ -27,7 +27,7 @@
|
||||
|
||||
[database]
|
||||
type = "postgres"
|
||||
host = "watcher-db"
|
||||
host = "mobymask-watcher-db"
|
||||
port = 5432
|
||||
database = "mobymask-watcher"
|
||||
username = "vdbm"
|
||||
@ -47,7 +47,7 @@
|
||||
deleteOnStart = false
|
||||
|
||||
[jobQueue]
|
||||
dbConnectionString = "postgres://vdbm:password@watcher-db/mobymask-watcher-job-queue"
|
||||
dbConnectionString = "postgres://vdbm:password@mobymask-watcher-db/mobymask-watcher-job-queue"
|
||||
maxCompletionLagInSecs = 300
|
||||
jobDelayInMilliSecs = 100
|
||||
eventsInBatch = 50
|
||||
|
39
config/watcher-uniswap-v3/erc20-watcher.toml
Normal file
@ -0,0 +1,39 @@
|
||||
[server]
|
||||
host = "0.0.0.0"
|
||||
port = 3001
|
||||
mode = "eth_call"
|
||||
kind = "lazy"
|
||||
|
||||
[metrics]
|
||||
host = "127.0.0.1"
|
||||
port = 9000
|
||||
[metrics.gql]
|
||||
port = 9001
|
||||
|
||||
[database]
|
||||
type = "postgres"
|
||||
host = "uniswap-watcher-db"
|
||||
port = 5432
|
||||
database = "erc20-watcher"
|
||||
username = "vdbm"
|
||||
password = "password"
|
||||
synchronize = true
|
||||
logging = false
|
||||
maxQueryExecutionTime = 100
|
||||
|
||||
[upstream]
|
||||
[upstream.ethServer]
|
||||
gqlApiEndpoint = "http://ipld-eth-server.example.com:8083/graphql"
|
||||
rpcProviderEndpoint = "http://ipld-eth-server.example.com:8082"
|
||||
|
||||
[upstream.cache]
|
||||
name = "requests"
|
||||
enabled = false
|
||||
deleteOnStart = false
|
||||
|
||||
[jobQueue]
|
||||
dbConnectionString = "postgres://vdbm:password@uniswap-watcher-db:5432/erc20-watcher-job-queue"
|
||||
maxCompletionLagInSecs = 300
|
||||
jobDelayInMilliSecs = 100
|
||||
eventsInBatch = 50
|
||||
blockDelayInMilliSecs = 2000
|
10
config/watcher-uniswap-v3/run.sh
Executable file
@ -0,0 +1,10 @@
|
||||
#!/bin/sh
|
||||
|
||||
set -e
|
||||
set -u
|
||||
|
||||
echo "Initializing watcher..."
|
||||
yarn fill --start-block $UNISWAP_START_BLOCK --end-block $((UNISWAP_START_BLOCK + 1))
|
||||
|
||||
echo "Running active server"
|
||||
DEBUG=vulcanize:* exec node --enable-source-maps dist/server.js
|
90
config/watcher-uniswap-v3/uni-info-watcher.toml
Normal file
@ -0,0 +1,90 @@
|
||||
[server]
|
||||
host = "0.0.0.0"
|
||||
port = 3004
|
||||
mode = "prod"
|
||||
kind = "active"
|
||||
|
||||
# Checkpointing state.
|
||||
checkpointing = true
|
||||
|
||||
# Checkpoint interval in number of blocks.
|
||||
checkpointInterval = 50000
|
||||
|
||||
# Enable state creation
|
||||
enableState = false
|
||||
|
||||
# Max block range for which to return events in eventsInRange GQL query.
|
||||
# Use -1 for skipping check on block range.
|
||||
maxEventsBlockRange = 1000
|
||||
|
||||
# Interval in number of blocks at which to clear entities cache.
|
||||
clearEntitiesCacheInterval = 1000
|
||||
|
||||
# Boolean to skip updating entity fields required in state creation and not required in the frontend.
|
||||
skipStateFieldsUpdate = false
|
||||
|
||||
# Boolean to load GQL query nested entity relations sequentially.
|
||||
loadRelationsSequential = false
|
||||
|
||||
# Max GQL API requests to process simultaneously (defaults to 1).
|
||||
maxSimultaneousRequests = 1
|
||||
|
||||
# GQL cache settings
|
||||
[server.gqlCache]
|
||||
enabled = true
|
||||
|
||||
# Max in-memory cache size (in bytes) (default 8 MB)
|
||||
# maxCacheSize
|
||||
|
||||
# GQL cache-control max-age settings (in seconds)
|
||||
maxAge = 15
|
||||
timeTravelMaxAge = 86400 # 1 day
|
||||
|
||||
[metrics]
|
||||
host = "0.0.0.0"
|
||||
port = 9002
|
||||
[metrics.gql]
|
||||
port = 9003
|
||||
|
||||
[database]
|
||||
type = "postgres"
|
||||
host = "uniswap-watcher-db"
|
||||
port = 5432
|
||||
database = "uni-info-watcher"
|
||||
username = "vdbm"
|
||||
password = "password"
|
||||
synchronize = true
|
||||
logging = false
|
||||
maxQueryExecutionTime = 100
|
||||
|
||||
[database.extra]
|
||||
# maximum number of clients the pool should contain
|
||||
max = 20
|
||||
|
||||
[upstream]
|
||||
[upstream.ethServer]
|
||||
gqlApiEndpoint = "http://ipld-eth-server.example.com:8083/graphql"
|
||||
rpcProviderEndpoint = "http://ipld-eth-server.example.com:8082"
|
||||
|
||||
[upstream.cache]
|
||||
name = "requests"
|
||||
enabled = false
|
||||
deleteOnStart = false
|
||||
|
||||
[upstream.uniWatcher]
|
||||
gqlEndpoint = "http://uni-watcher-server:3003/graphql"
|
||||
gqlSubscriptionEndpoint = "ws://uni-watcher-server:3003/graphql"
|
||||
|
||||
[upstream.tokenWatcher]
|
||||
gqlEndpoint = "http://erc20-watcher-server:3001/graphql"
|
||||
gqlSubscriptionEndpoint = "ws://erc20-watcher-server:3001/graphql"
|
||||
|
||||
[jobQueue]
|
||||
dbConnectionString = "postgres://vdbm:password@uniswap-watcher-db:5432/uni-info-watcher-job-queue"
|
||||
maxCompletionLagInSecs = 300
|
||||
jobDelayInMilliSecs = 1000
|
||||
eventsInBatch = 50
|
||||
subgraphEventsOrder = true
|
||||
blockDelayInMilliSecs = 2000
|
||||
prefetchBlocksInMem = true
|
||||
prefetchBlockCount = 10
|
41
config/watcher-uniswap-v3/uni-watcher.toml
Normal file
@ -0,0 +1,41 @@
|
||||
[server]
|
||||
host = "0.0.0.0"
|
||||
port = 3003
|
||||
kind = "active"
|
||||
|
||||
[metrics]
|
||||
host = "0.0.0.0"
|
||||
port = 9000
|
||||
[metrics.gql]
|
||||
port = 9001
|
||||
|
||||
[database]
|
||||
type = "postgres"
|
||||
host = "uniswap-watcher-db"
|
||||
port = 5432
|
||||
database = "uni-watcher"
|
||||
username = "vdbm"
|
||||
password = "password"
|
||||
synchronize = true
|
||||
logging = false
|
||||
maxQueryExecutionTime = 100
|
||||
|
||||
[upstream]
|
||||
[upstream.ethServer]
|
||||
gqlApiEndpoint = "http://ipld-eth-server.example.com:8083/graphql"
|
||||
rpcProviderEndpoint = "http://ipld-eth-server.example.com:8082"
|
||||
|
||||
[upstream.cache]
|
||||
name = "requests"
|
||||
enabled = false
|
||||
deleteOnStart = false
|
||||
|
||||
[jobQueue]
|
||||
dbConnectionString = "postgres://vdbm:password@uniswap-watcher-db:5432/uni-watcher-job-queue"
|
||||
maxCompletionLagInSecs = 300
|
||||
jobDelayInMilliSecs = 0
|
||||
eventsInBatch = 50
|
||||
lazyUpdateBlockProgress = true
|
||||
blockDelayInMilliSecs = 2000
|
||||
prefetchBlocksInMem = true
|
||||
prefetchBlockCount = 10
|
10
config/watcher-uniswap-v3/watch-contract.sh
Executable file
@ -0,0 +1,10 @@
|
||||
#!/bin/sh
|
||||
|
||||
set -e
|
||||
set -u
|
||||
|
||||
echo "Watching factory contract 0x1F98431c8aD98523631AE4a59f267346ea31F984"
|
||||
yarn watch:contract --address 0x1F98431c8aD98523631AE4a59f267346ea31F984 --kind factory --startingBlock 12369621 --checkpoint
|
||||
|
||||
echo "Watching nfpm contract 0xC36442b4a4522E871399CD717aBDD847Ab11FE88"
|
||||
yarn watch:contract --address 0xC36442b4a4522E871399CD717aBDD847Ab11FE88 --kind nfpm --startingBlock 12369651 --checkpoint
|
@ -10,7 +10,7 @@
|
||||
"petersburgBlock": 0,
|
||||
"istanbulBlock": 0,
|
||||
"clique": {
|
||||
"period": 2,
|
||||
"period": 5,
|
||||
"epoch": 3000
|
||||
}
|
||||
},
|
||||
|
3
container-build/cerc-tx-spammer/build.sh
Executable file
@ -0,0 +1,3 @@
|
||||
#!/usr/bin/env bash
|
||||
# Build cerc/tx-spammer
|
||||
docker build -t cerc/tx-spammer:local ${CERC_REPO_BASE_DIR}/tx-spammer
|
13
container-build/cerc-uniswap-v3-info/Dockerfile
Normal file
@ -0,0 +1,13 @@
|
||||
FROM node:15.3.0-alpine3.10
|
||||
|
||||
RUN apk --update --no-cache add make git
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
COPY . .
|
||||
|
||||
RUN echo "Building uniswap-v3-info" && \
|
||||
git checkout v0.1.1 && \
|
||||
yarn
|
||||
|
||||
CMD ["sh", "-c", "yarn start"]
|
7
container-build/cerc-uniswap-v3-info/build.sh
Executable file
@ -0,0 +1,7 @@
|
||||
#!/usr/bin/env bash
|
||||
# Build cerc/uniswap-v3-info
|
||||
|
||||
# See: https://stackoverflow.com/a/246128/1701505
|
||||
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
|
||||
|
||||
docker build -t cerc/uniswap-v3-info:local -f ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}/uniswap-v3-info
|
13
container-build/cerc-watcher-erc20/Dockerfile
Normal file
@ -0,0 +1,13 @@
|
||||
FROM node:16.17.1-alpine3.16
|
||||
|
||||
RUN apk --update --no-cache add git python3 alpine-sdk
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
COPY . .
|
||||
|
||||
RUN echo "Building watcher-ts" && \
|
||||
git checkout v0.2.19 && \
|
||||
yarn && yarn build
|
||||
|
||||
WORKDIR /app/packages/erc20-watcher
|
7
container-build/cerc-watcher-erc20/build.sh
Executable file
@ -0,0 +1,7 @@
|
||||
#!/usr/bin/env bash
|
||||
# Build cerc/watcher-erc20
|
||||
|
||||
# See: https://stackoverflow.com/a/246128/1701505
|
||||
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
|
||||
|
||||
docker build -t cerc/watcher-erc20:local -f ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}/watcher-ts
|
13
container-build/cerc-watcher-erc721/Dockerfile
Normal file
@ -0,0 +1,13 @@
|
||||
FROM node:16.17.1-alpine3.16
|
||||
|
||||
RUN apk --update --no-cache add git python3 alpine-sdk
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
COPY . .
|
||||
|
||||
RUN echo "Building watcher-ts" && \
|
||||
git checkout v0.2.19 && \
|
||||
yarn && yarn build
|
||||
|
||||
WORKDIR /app/packages/erc721-watcher
|
7
container-build/cerc-watcher-erc721/build.sh
Executable file
@ -0,0 +1,7 @@
|
||||
#!/usr/bin/env bash
|
||||
# Build cerc/watcher-erc721
|
||||
|
||||
# See: https://stackoverflow.com/a/246128/1701505
|
||||
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
|
||||
|
||||
docker build -t cerc/watcher-erc721:local -f ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}/watcher-ts
|
@ -5,23 +5,10 @@ RUN apk --update --no-cache add git python3 alpine-sdk
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
COPY assemblyscript assemblyscript
|
||||
COPY watcher-ts watcher-ts
|
||||
COPY . .
|
||||
|
||||
# TODO: needs branch ng-integrate-asyncify
|
||||
# We use a mixture of npm and yarn below because the upstream
|
||||
# project checked in an npm package-log.json file
|
||||
RUN echo "Building assemblyscript" && \
|
||||
cd assemblyscript && \
|
||||
npm install && npm run build && yarn link
|
||||
RUN echo "Building watcher-ts" && \
|
||||
git checkout v0.2.19 && \
|
||||
yarn && yarn build
|
||||
|
||||
# TODO: needs branch v0.2.13
|
||||
# The shenanigans below is due to yarn and lerna being a dumpster-fire
|
||||
RUN echo "Linking watcher-ts to local assemblyscript" && \
|
||||
cd watcher-ts/packages/graph-node && yarn remove @vulcanize/assemblyscript && \
|
||||
yarn add https://github.com/vulcanize/assemblyscript.git#ng-integrate-asyncify
|
||||
|
||||
RUN echo "Building watcher-tst" && \
|
||||
cd watcher-ts && yarn && yarn link "@vulcanize/assemblyscript" && yarn build
|
||||
|
||||
WORKDIR /app/watcher-ts/packages/mobymask-watcher
|
||||
WORKDIR /app/packages/mobymask-watcher
|
||||
|
@ -4,5 +4,6 @@
|
||||
# See: https://stackoverflow.com/a/246128/1701505
|
||||
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
|
||||
|
||||
# TODO: add a mechanism to pass two repos into a container rather than the parent directory as below
|
||||
docker build -t cerc/watcher-mobymask:local -f ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}
|
||||
docker build -t cerc/watcher-mobymask:local -f ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}/watcher-ts
|
||||
|
||||
# TODO: add a mechanism to pass two repos into a container rather than the parent directory
|
||||
|
11
container-build/cerc-watcher-uniswap-v3/Dockerfile
Normal file
@ -0,0 +1,11 @@
|
||||
FROM node:16.17.1-alpine3.16
|
||||
|
||||
RUN apk --update --no-cache add git python3 alpine-sdk
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
COPY . .
|
||||
|
||||
RUN echo "Building uniswap-watcher-ts" && \
|
||||
git checkout v0.3.4 && \
|
||||
yarn && yarn build && yarn build:contracts
|
7
container-build/cerc-watcher-uniswap-v3/build.sh
Executable file
@ -0,0 +1,7 @@
|
||||
#!/usr/bin/env bash
|
||||
# Build cerc/watcher-uniswap-v3
|
||||
|
||||
# See: https://stackoverflow.com/a/246128/1701505
|
||||
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
|
||||
|
||||
docker build -t cerc/watcher-uniswap-v3:local -f ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}/uniswap-watcher-ts
|
157
stacks/erc20/README.md
Normal file
@ -0,0 +1,157 @@
|
||||
# ERC20 Watcher
|
||||
|
||||
Instructions to deploy a local ERC20 watcher stack (core + watcher) for demonstration and testing purposes using [laconic-stack-orchestrator](../../README.md#setup)
|
||||
|
||||
## Setup
|
||||
|
||||
* Clone / pull required repositories:
|
||||
|
||||
```bash
|
||||
$ laconic-so setup-repositories --include cerc-io/go-ethereum,cerc-io/ipld-eth-db,cerc-io/ipld-eth-server,cerc-io/watcher-ts --pull
|
||||
```
|
||||
|
||||
* Build the core and watcher container images:
|
||||
|
||||
```bash
|
||||
$ laconic-so build-containers --include cerc/go-ethereum,cerc/go-ethereum-foundry,cerc/ipld-eth-db,cerc/ipld-eth-server,cerc/watcher-erc20
|
||||
```
|
||||
|
||||
This should create the required docker images in the local image registry.
|
||||
|
||||
* Deploy the stack:
|
||||
|
||||
```bash
|
||||
$ laconic-so deploy-system --include db,go-ethereum-foundry,ipld-eth-server,watcher-erc20 up
|
||||
```
|
||||
|
||||
## Demo
|
||||
|
||||
* Find the watcher container's id using `docker ps` and export it for later use:
|
||||
|
||||
```bash
|
||||
$ export CONTAINER_ID=<CONTAINER_ID>
|
||||
```
|
||||
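If preferred, the id can be captured directly instead of copied by hand (a sketch; the `name` filter value assumes the running container's name contains `watcher-erc20`, which may differ in your deployment):

```bash
# Capture the id of the running ERC20 watcher container
export CONTAINER_ID=$(docker ps -q --filter "name=watcher-erc20")
```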
|
||||
* Deploy an ERC20 token:
|
||||
|
||||
```bash
|
||||
$ docker exec $CONTAINER_ID yarn token:deploy:docker
|
||||
```
|
||||
|
||||
Export the address of the deployed token to a shell variable for later use:
|
||||
|
||||
```bash
|
||||
$ export TOKEN_ADDRESS=<TOKEN_ADDRESS>
|
||||
```
|
||||
|
||||
* Open `http://localhost:3002/graphql` (GraphQL Playground) in a browser window
|
||||
|
||||
* Connect MetaMask to `http://localhost:8545` (with chain ID `99`)
|
||||
|
||||
* Add the deployed token as an asset in MetaMask and check that the initial balance is zero
|
||||
|
||||
* Export your MetaMask account (second account) address to a shell variable for later use:
|
||||
|
||||
```bash
|
||||
$ export RECIPIENT_ADDRESS=<RECIPIENT_ADDRESS>
|
||||
```
|
||||
|
||||
* To get the primary account's address, run:
|
||||
|
||||
```bash
|
||||
$ docker exec $CONTAINER_ID yarn account:docker
|
||||
```
|
||||
|
||||
* To get the current block hash at any time, run:
|
||||
|
||||
```bash
|
||||
$ docker exec $CONTAINER_ID yarn block:latest:docker
|
||||
```
|
||||
|
||||
* Fire a GQL query in the playground to get the name, symbol and total supply of the deployed token:
|
||||
|
||||
```graphql
|
||||
query {
|
||||
name(
|
||||
blockHash: "LATEST_BLOCK_HASH"
|
||||
token: "TOKEN_ADDRESS"
|
||||
) {
|
||||
value
|
||||
proof {
|
||||
data
|
||||
}
|
||||
}
|
||||
|
||||
symbol(
|
||||
blockHash: "LATEST_BLOCK_HASH"
|
||||
token: "TOKEN_ADDRESS"
|
||||
) {
|
||||
value
|
||||
proof {
|
||||
data
|
||||
}
|
||||
}
|
||||
|
||||
totalSupply(
|
||||
blockHash: "LATEST_BLOCK_HASH"
|
||||
token: "TOKEN_ADDRESS"
|
||||
) {
|
||||
value
|
||||
proof {
|
||||
data
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
* Fire the following query to get balances for the primary and the recipient account at the latest block hash:
|
||||
|
||||
```graphql
|
||||
query {
|
||||
fromBalanceOf: balanceOf(
|
||||
blockHash: "LATEST_BLOCK_HASH"
|
||||
token: "TOKEN_ADDRESS",
|
||||
# primary account having all the balance initially
|
||||
owner: "PRIMARY_ADDRESS"
|
||||
) {
|
||||
value
|
||||
proof {
|
||||
data
|
||||
}
|
||||
}
|
||||
toBalanceOf: balanceOf(
|
||||
blockHash: "LATEST_BLOCK_HASH"
|
||||
token: "TOKEN_ADDRESS",
|
||||
owner: "RECIPIENT_ADDRESS"
|
||||
) {
|
||||
value
|
||||
proof {
|
||||
data
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
* The initial balance for the primary account should be `1000000000000000000000`
|
||||
* The initial balance for the recipient should be `0`
|
||||
|
||||
* Transfer tokens to the recipient account:
|
||||
|
||||
```bash
|
||||
$ docker exec $CONTAINER_ID yarn token:transfer:docker --token $TOKEN_ADDRESS --to $RECIPIENT_ADDRESS --amount 100
|
||||
```
|
||||
|
||||
* Fire the above GQL query again with the latest block hash to get updated balances for the primary (`from`) and the recipient (`to`) account:
|
||||
|
||||
* The balance for the primary account should be reduced by the transfer amount (`100`)
|
||||
* The balance for the recipient account should be equal to the transfer amount (`100`)
|
||||
|
||||
* Transfer funds between different accounts using MetaMask and use the playground to query the balance before and after the transfer.
|
||||
|
||||
## Clean up
|
||||
|
||||
* To stop all the services running in the background, run:
|
||||
|
||||
```bash
|
||||
$ laconic-so deploy-system --include db,go-ethereum-foundry,ipld-eth-server,watcher-erc20 down
|
||||
```
|
18
stacks/erc20/stack.yml
Normal file
@ -0,0 +1,18 @@
|
||||
version: "1.0"
|
||||
name: erc20-watcher
|
||||
repos:
|
||||
- cerc-io/go-ethereum
|
||||
- cerc-io/ipld-eth-db
|
||||
- cerc-io/ipld-eth-server
|
||||
- cerc-io/watcher-ts
|
||||
containers:
|
||||
- cerc/go-ethereum
|
||||
- cerc/go-ethereum-foundry
|
||||
- cerc/ipld-eth-db
|
||||
- cerc/ipld-eth-server
|
||||
- cerc/watcher-erc20
|
||||
pods:
|
||||
- go-ethereum-foundry
|
||||
- db
|
||||
- ipld-eth-server
|
||||
- watcher-erc20
|
214
stacks/erc721/README.md
Normal file
@ -0,0 +1,214 @@
|
||||
# ERC721 Watcher
|
||||
|
||||
Instructions to deploy a local ERC721 watcher stack (core + watcher) for demonstration and testing purposes using [laconic-stack-orchestrator](../../README.md#setup)
|
||||
|
||||
## Setup
|
||||
|
||||
* Clone / pull required repositories:
|
||||
|
||||
```bash
|
||||
$ laconic-so setup-repositories --include cerc-io/go-ethereum,cerc-io/ipld-eth-db,cerc-io/ipld-eth-server,cerc-io/watcher-ts --pull
|
||||
```
|
||||
|
||||
* Build the core and watcher container images:
|
||||
|
||||
```bash
|
||||
$ laconic-so build-containers --include cerc/go-ethereum,cerc/go-ethereum-foundry,cerc/ipld-eth-db,cerc/ipld-eth-server,cerc/watcher-erc721
|
||||
```
|
||||
|
||||
This should create the required docker images in the local image registry.
|
||||
|
||||
* Deploy the stack:
|
||||
|
||||
```bash
|
||||
$ laconic-so deploy-system --include db,go-ethereum-foundry,ipld-eth-server,watcher-erc721 up
|
||||
```
|
||||
|
||||
## Demo
|
||||
|
||||
* Find the watcher container's id using `docker ps` and export it for later use:
|
||||
|
||||
```bash
|
||||
$ export CONTAINER_ID=<CONTAINER_ID>
|
||||
```
|
||||
|
||||
* Deploy an ERC721 token:
|
||||
|
||||
```bash
|
||||
$ docker exec $CONTAINER_ID yarn nft:deploy:docker
|
||||
```
|
||||
|
||||
Export the address of the deployed token to a shell variable for later use:
|
||||
|
||||
```bash
|
||||
$ export NFT_ADDRESS=<NFT_ADDRESS>
|
||||
```
|
||||
|
||||
* Open `http://localhost:3009/graphql` (GraphQL Playground) in a browser window
|
||||
|
||||
* Connect MetaMask to `http://localhost:8545` (with chain ID `99`)
|
||||
|
||||
* Export your MetaMask account (second account) address to a shell variable for later use:
|
||||
|
||||
```bash
|
||||
$ export RECIPIENT_ADDRESS=<RECIPIENT_ADDRESS>
|
||||
```
|
||||
|
||||
* To get the primary account's address, run:
|
||||
|
||||
```bash
|
||||
$ docker exec $CONTAINER_ID yarn account:docker
|
||||
```
|
||||
|
||||
Export it to a shell variable for later use:
|
||||
|
||||
```bash
|
||||
$ export PRIMARY_ADDRESS=<PRIMARY_ADDRESS>
|
||||
```
|
||||
|
||||
* To get the current block hash at any time, run:
|
||||
|
||||
```bash
|
||||
$ docker exec $CONTAINER_ID yarn block:latest:docker
|
||||
```
|
||||
|
||||
* Fire the following GQL query (uses `eth_call`) in the playground:
|
||||
|
||||
```graphql
|
||||
query {
|
||||
name(
|
||||
blockHash: "LATEST_BLOCK_HASH"
|
||||
contractAddress: "NFT_ADDRESS"
|
||||
) {
|
||||
value
|
||||
proof {
|
||||
data
|
||||
}
|
||||
}
|
||||
|
||||
symbol(
|
||||
blockHash: "LATEST_BLOCK_HASH"
|
||||
contractAddress: "NFT_ADDRESS"
|
||||
) {
|
||||
value
|
||||
proof {
|
||||
data
|
||||
}
|
||||
}
|
||||
|
||||
balanceOf(
|
||||
blockHash: "LATEST_BLOCK_HASH"
|
||||
contractAddress: "NFT_ADDRESS"
|
||||
owner: "PRIMARY_ADDRESS"
|
||||
) {
|
||||
value
|
||||
proof {
|
||||
data
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Balance for the `PRIMARY_ADDRESS` should be `0` as the token is yet to be minted.
|
||||
|
||||
* Fire the following GQL query (uses `storage` calls) in the playground:
|
||||
|
||||
```graphql
|
||||
query {
|
||||
_name(
|
||||
blockHash: "LATEST_BLOCK_HASH"
|
||||
contractAddress: "NFT_ADDRESS"
|
||||
) {
|
||||
value
|
||||
proof {
|
||||
data
|
||||
}
|
||||
}
|
||||
|
||||
_symbol(
|
||||
blockHash: "LATEST_BLOCK_HASH"
|
||||
contractAddress: "NFT_ADDRESS"
|
||||
) {
|
||||
value
|
||||
proof {
|
||||
data
|
||||
}
|
||||
}
|
||||
|
||||
_balances(
|
||||
blockHash: "LATEST_BLOCK_HASH"
|
||||
contractAddress: "NFT_ADDRESS"
|
||||
key0: "PRIMARY_ADDRESS"
|
||||
) {
|
||||
value
|
||||
proof {
|
||||
data
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
* Mint the token:
|
||||
|
||||
```bash
|
||||
$ docker exec $CONTAINER_ID yarn nft:mint:docker --nft $NFT_ADDRESS --to $PRIMARY_ADDRESS --token-id 1
|
||||
```
|
||||
|
||||
Fire the GQL query above again with the latest block hash. The balance should increase to `1`.
|
||||
|
||||
* Get the latest block hash and run the following GQL query in the playground for `balanceOf` and `ownerOf` (`eth_call`):
|
||||
|
||||
```graphql
|
||||
query {
|
||||
fromBalanceOf: balanceOf(
|
||||
blockHash: "LATEST_BLOCK_HASH"
|
||||
contractAddress: "NFT_ADDRESS"
|
||||
owner: "PRIMARY_ADDRESS"
|
||||
) {
|
||||
value
|
||||
proof {
|
||||
data
|
||||
}
|
||||
}
|
||||
|
||||
toBalanceOf: balanceOf(
|
||||
blockHash: "LATEST_BLOCK_HASH"
|
||||
contractAddress: "NFT_ADDRESS"
|
||||
owner: "RECIPIENT_ADDRESS"
|
||||
) {
|
||||
value
|
||||
proof {
|
||||
data
|
||||
}
|
||||
}
|
||||
|
||||
ownerOf(
|
||||
blockHash: "LATEST_BLOCK_HASH"
|
||||
contractAddress: "NFT_ADDRESS"
|
||||
tokenId: 1
|
||||
) {
|
||||
value
|
||||
proof {
|
||||
data
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The balance should be `1` for the `PRIMARY_ADDRESS`, `0` for the `RECIPIENT_ADDRESS`, and the owner value of the token should equal the `PRIMARY_ADDRESS`.
|
||||
|
||||
* Transfer the token:
|
||||
|
||||
```bash
|
||||
$ docker exec $CONTAINER_ID yarn nft:transfer:docker --nft $NFT_ADDRESS --from $PRIMARY_ADDRESS --to $RECIPIENT_ADDRESS --token-id 1
|
||||
```
|
||||
|
||||
Fire the GQL query above again with the latest block hash. The token should be transferred to the recipient.
|
||||
|
||||
## Clean up
|
||||
|
||||
* To stop all the services running in the background:
|
||||
|
||||
```bash
|
||||
$ laconic-so deploy-system --include db,go-ethereum-foundry,ipld-eth-server,watcher-erc721 down
|
||||
```
|
18
stacks/erc721/stack.yml
Normal file
@ -0,0 +1,18 @@
|
||||
version: "1.0"
|
||||
name: erc721-watcher
|
||||
repos:
|
||||
- cerc-io/go-ethereum
|
||||
- cerc-io/ipld-eth-db
|
||||
- cerc-io/ipld-eth-server
|
||||
- cerc-io/watcher-ts
|
||||
containers:
|
||||
- cerc/go-ethereum
|
||||
- cerc/go-ethereum-foundry
|
||||
- cerc/ipld-eth-db
|
||||
- cerc/ipld-eth-server
|
||||
- cerc/watcher-erc721
|
||||
pods:
|
||||
- go-ethereum-foundry
|
||||
- db
|
||||
- ipld-eth-server
|
||||
- watcher-erc721
|
98
stacks/fixturenet-eth/README.md
Normal file
@ -0,0 +1,98 @@
|
||||
# fixturenet-eth
|
||||
|
||||
Instructions for deploying a local geth + lighthouse blockchain "fixturenet" for development and testing purposes using laconic-stack-orchestrator (the installation of which is covered [here](https://github.com/cerc-io/stack-orchestrator#user-mode)):
|
||||
|
||||
## Clone required repositories
|
||||
```
|
||||
$ laconic-so setup-repositories --include cerc-io/go-ethereum
|
||||
```
|
||||
|
||||
## Build the fixturenet-eth containers
|
||||
```
|
||||
$ laconic-so build-containers --include cerc/go-ethereum,cerc/lighthouse,cerc/fixturenet-eth-geth,cerc/fixturenet-eth-lighthouse
|
||||
```
|
||||
This should create several container images in the local image registry:
|
||||
|
||||
* cerc/go-ethereum
|
||||
* cerc/lighthouse
|
||||
* cerc/fixturenet-eth-geth
|
||||
* cerc/fixturenet-eth-lighthouse
|
||||
|
||||
## Deploy the stack
|
||||
```
|
||||
$ laconic-so deploy-system --include fixturenet-eth up
|
||||
```
|
||||
|
||||
## Check status
|
||||
|
||||
```
|
||||
$ container-build/cerc-fixturenet-eth-lighthouse/scripts/status.sh
|
||||
Waiting for geth to generate DAG ..................................................... DONE!
|
||||
Waiting for beacon phase0 .... DONE!
|
||||
Waiting for beacon altair .... DONE!
|
||||
Waiting for beacon bellatrix pre-merge .... DONE!
|
||||
Waiting for beacon bellatrix merge .... DONE!
|
||||
|
||||
$ docker ps -f 'name=laconic' --format 'table {{.Names}}\t{{.Ports}}' | cut -d'-' -f3- | sort
|
||||
NAMES PORTS
|
||||
fixturenet-eth-bootnode-geth-1 8545-8546/tcp, 30303/udp, 0.0.0.0:55847->9898/tcp, 0.0.0.0:55848->30303/tcp
|
||||
fixturenet-eth-bootnode-lighthouse-1
|
||||
fixturenet-eth-geth-1-1 8546/tcp, 30303/tcp, 30303/udp, 0.0.0.0:55851->8545/tcp
|
||||
fixturenet-eth-geth-2-1 8545-8546/tcp, 30303/tcp, 30303/udp
|
||||
fixturenet-eth-lighthouse-1-1 0.0.0.0:55858->8001/tcp
|
||||
fixturenet-eth-lighthouse-2-1
|
||||
```
|
||||
|
||||
## Additional pieces
|
||||
|
||||
Several other containers can be used with the basic `fixturenet-eth`:
|
||||
|
||||
* `ipld-eth-db` (enables statediffing)
|
||||
* `ipld-eth-server` (GQL and Ethereum API server, requires `ipld-eth-db`)
|
||||
* `ipld-eth-beacon-db` and `ipld-eth-beacon-indexer` (for indexing Beacon chain blocks)
|
||||
* `eth-probe` (captures eth1 tx gossip)
|
||||
* `keycloak` (nginx proxy with keycloak auth for API authentication)
|
||||
* `tx-spammer` (generates and sends automated transactions to the fixturenet)
|
||||
|
||||
It is not necessary to use them all at once, but a complete example follows:
|
||||
|
||||
```
|
||||
# Setup
|
||||
$ laconic-so setup-repositories --include cerc-io/go-ethereum,cerc-io/ipld-eth-db,cerc-io/ipld-eth-server,cerc-io/ipld-eth-beacon-db,cerc-io/ipld-eth-beacon-indexer,cerc-io/eth-probe,cerc-io/tx-spammer
|
||||
|
||||
# Build
|
||||
$ laconic-so build-containers --include cerc/go-ethereum,cerc/lighthouse,cerc/fixturenet-eth-geth,cerc/fixturenet-eth-lighthouse,cerc/ipld-eth-db,cerc/ipld-eth-server,cerc/ipld-eth-beacon-db,cerc/ipld-eth-beacon-indexer,cerc/eth-probe,cerc/keycloak,cerc/tx-spammer
|
||||
|
||||
# Deploy
|
||||
$ laconic-so deploy-system --include db,fixturenet-eth,ipld-eth-server,ipld-eth-beacon-db,ipld-eth-beacon-indexer,eth-probe,keycloak,tx-spammer up
|
||||
|
||||
# Status
|
||||
|
||||
$ container-build/cerc-fixturenet-eth-lighthouse/scripts/status.sh
|
||||
Waiting for geth to generate DAG.... DONE!
|
||||
Waiting for beacon phase0.... DONE!
|
||||
Waiting for beacon altair.... DONE!
|
||||
Waiting for beacon bellatrix pre-merge.... DONE!
|
||||
Waiting for beacon bellatrix merge.... DONE!
|
||||
|
||||
$ docker ps -f 'name=laconic' --format 'table {{.Names}}\t{{.Ports}}' | cut -d'-' -f3- | sort
|
||||
NAMES PORTS
|
||||
eth-probe-db-1 0.0.0.0:55849->5432/tcp
|
||||
eth-probe-mq-1
|
||||
eth-probe-probe-1
|
||||
fixturenet-eth-bootnode-geth-1 8545-8546/tcp, 30303/udp, 0.0.0.0:55847->9898/tcp, 0.0.0.0:55848->30303/tcp
|
||||
fixturenet-eth-bootnode-lighthouse-1
|
||||
fixturenet-eth-geth-1-1 8546/tcp, 30303/tcp, 30303/udp, 0.0.0.0:55851->8545/tcp
|
||||
fixturenet-eth-geth-2-1 8545-8546/tcp, 30303/tcp, 30303/udp
|
||||
fixturenet-eth-lighthouse-1-1 0.0.0.0:55858->8001/tcp
|
||||
fixturenet-eth-lighthouse-2-1
|
||||
ipld-eth-beacon-db-1 127.0.0.1:8076->5432/tcp
|
||||
ipld-eth-beacon-indexer-1
|
||||
ipld-eth-db-1 127.0.0.1:8077->5432/tcp
|
||||
ipld-eth-server-1 127.0.0.1:8081-8082->8081-8082/tcp
|
||||
keycloak-1 8443/tcp, 0.0.0.0:55857->8080/tcp
|
||||
keycloak-db-1 0.0.0.0:55850->5432/tcp
|
||||
keycloak-nginx-1 0.0.0.0:55859->80/tcp
|
||||
migrations-1
|
||||
tx-spammer-1
|
||||
```
|
12
stacks/fixturenet-eth/stack.yml
Normal file
@ -0,0 +1,12 @@
|
||||
version: "1.0"
|
||||
name: fixturenet-eth
|
||||
repos:
|
||||
- cerc-io/go-ethereum
|
||||
- cerc-io/lighthouse
|
||||
containers:
|
||||
- cerc/go-ethereum
|
||||
- cerc/lighthouse
|
||||
- cerc/fixturenet-eth-geth
|
||||
- cerc/fixturenet-eth-lighthouse
|
||||
pods:
|
||||
- fixturenet-eth
|
@ -3,36 +3,51 @@
|
||||
The MobyMask watcher is a Laconic Network component that provides efficient access to MobyMask contract data from Ethereum, along with evidence allowing users to verify the correctness of that data. The watcher source code is available in [this repository](https://github.com/cerc-io/watcher-ts/tree/main/packages/mobymask-watcher) and a developer-oriented Docker Compose setup for the watcher can be found [here](https://github.com/cerc-io/mobymask-watcher). The watcher can be deployed automatically using the Laconic Stack Orchestrator tool as detailed below:
|
||||
|
||||
## Deploy the MobyMask Watcher
|
||||
|
||||
The instructions below show how to deploy a MobyMask watcher using laconic-stack-orchestrator (the installation of which is covered [here](https://github.com/cerc-io/stack-orchestrator#user-mode)).
|
||||
|
||||
This deployment expects that ipld-eth-server's endpoints are available on the local machine at http://ipld-eth-server.example.com:8083/graphql and http://ipld-eth-server.example.com:8082. More advanced configurations are supported by modifying the watcher's [config file](../../config/watcher-mobymask/mobymask-watcher.toml).
|
||||
|
||||
## Clone required repositories
|
||||
|
||||
```
|
||||
$ laconic-so setup-repositories --include vulcanize/assemblyscript,cerc-io/watcher-ts
|
||||
```
|
||||
Checkout required branches for the current release:
|
||||
```
|
||||
$ cd ~/cerc/assemblyscript
|
||||
$ git checkout ng-integrate-asyncify
|
||||
$ cd ~/cerc/watcher-ts
|
||||
$ git checkout v0.2.13
|
||||
$ laconic-so setup-repositories --include cerc-io/watcher-ts
|
||||
```
|
||||
|
||||
## Build the watcher container
|
||||
|
||||
```
|
||||
$ laconic-sh build-containers --include cerc/watcher-mobymask
|
||||
$ laconic-so build-containers --include cerc/watcher-mobymask
|
||||
```
|
||||
|
||||
This should create a container with tag `cerc/watcher-mobymask` in the local image registry.
|
||||
|
||||
## Deploy the stack
|
||||
First the watcher database has to be initialized. Start only the watcher-db service:
|
||||
|
||||
First the watcher database has to be initialized. Start only the mobymask-watcher-db service:
|
||||
|
||||
```
|
||||
$ laconic-so deploy-system --include watcher-mobymask up watcher-db
|
||||
$ laconic-so deploy-system --include watcher-mobymask up mobymask-watcher-db
|
||||
```
|
||||
|
||||
Next find the container's id using `docker ps` then run the following command to initialize the database:
|
||||
|
||||
```
|
||||
$ docker exec -i <watcher-db-container> psql -U vdbm mobymask-watcher < config/watcher-mobymask/mobymask-watcher-db.sql
|
||||
$ docker exec -i <mobymask-watcher-db-container> psql -U vdbm mobymask-watcher < config/watcher-mobymask/mobymask-watcher-db.sql
|
||||
```
|
||||
|
||||
Finally start the remaining containers:
|
||||
|
||||
```
|
||||
$ laconic-so deploy-system --include watcher-mobymask
|
||||
$ laconic-so deploy-system --include watcher-mobymask up
|
||||
```
|
||||
|
||||
Correct operation should be verified by following the instructions [here](https://github.com/cerc-io/mobymask-watcher/tree/main/mainnet-watcher-only#run), checking GraphQL queries return valid results in the watcher's [playground](http://127.0.0.1:3001/graphql).
|
||||
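A quick sanity check from the command line (a minimal sketch; it sends a generic GraphQL introspection query rather than any watcher-specific field, and assumes introspection is enabled):

```bash
# Expect a JSON response naming the schema's query type if the watcher is serving GraphQL
curl -s -X POST http://127.0.0.1:3001/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ __schema { queryType { name } } }"}'
```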
|
||||
## Clean up
|
||||
|
||||
Stop all the services running in the background:
|
||||
|
||||
```bash
|
||||
$ laconic-so deploy-system --include watcher-mobymask down
|
||||
```
|
||||
|
@ -1,8 +1,7 @@
|
||||
version: "1.0"
|
||||
name: mobymask-watcher
|
||||
repos:
|
||||
- cerc-io/watcher-ts/v0.2.13
|
||||
- vulcanize/assemblyscript/ng-integrate-asyncify
|
||||
- cerc-io/watcher-ts/v0.2.19
|
||||
containers:
|
||||
- cerc/watcher-mobymask
|
||||
pods:
|
||||
|
83
stacks/uniswap-v3/README.md
Normal file
@ -0,0 +1,83 @@
|
||||
# Uniswap v3
|
||||
|
||||
Instructions to deploy Uniswap v3 watcher stack (watcher + uniswap-v3-info frontend app) using [laconic-stack-orchestrator](../../README.md#setup)
|
||||
|
||||
## Prerequisites
|
||||
|
||||
* Access to [uniswap-watcher-ts](https://github.com/vulcanize/uniswap-watcher-ts).
|
||||
|
||||
* This deployment expects core services to be running; specifically, it requires `ipld-eth-server` RPC and GQL endpoints. Update the `upstream.ethServer` endpoints in the [watcher config files](../../config/watcher-uniswap-v3) accordingly:
|
||||
|
||||
```toml
|
||||
[upstream]
|
||||
[upstream.ethServer]
|
||||
gqlApiEndpoint = "http://ipld-eth-server.example.com:8083/graphql"
|
||||
rpcProviderEndpoint = "http://ipld-eth-server.example.com:8082"
|
||||
```
|
||||
|
||||
* `uni-watcher` and `uni-info-watcher` database dumps (optional).
|
||||
|
||||
## Setup
|
||||
|
||||
* Clone / pull required repositories:
|
||||
|
||||
```bash
|
||||
$ laconic-so setup-repositories --include vulcanize/uniswap-watcher-ts,vulcanize/uniswap-v3-info --git-ssh --pull
|
||||
```
|
||||
|
||||
* Build watcher and info app container images:
|
||||
|
||||
```bash
|
||||
$ laconic-so build-containers --include cerc/watcher-uniswap-v3,cerc/uniswap-v3-info
|
||||
```
|
||||
|
||||
This should create the required docker images in the local image registry.
|
||||
|
||||
## Deploy
|
||||
|
||||
* (Optional) Initialize the watcher database with existing database dumps if available:
|
||||
|
||||
* Start the watcher database to be initialized:
|
||||
|
||||
```bash
|
||||
$ laconic-so deploy-system --include watcher-uniswap-v3 up uniswap-watcher-db
|
||||
```
|
||||
|
||||
* Find the watcher database container's id using `docker ps` and export it for later use:
|
||||
|
||||
```bash
|
||||
$ export CONTAINER_ID=<CONTAINER_ID>
|
||||
```
|
||||
|
||||
* Load watcher database dumps:
|
||||
|
||||
```bash
|
||||
# uni-watcher database
|
||||
$ docker exec -i $CONTAINER_ID psql -U vdbm uni-watcher < UNI_WATCHER_DB_DUMP_FILE_PATH.sql
|
||||
|
||||
# uni-info-watcher database
|
||||
$ docker exec -i $CONTAINER_ID psql -U vdbm uni-info-watcher < UNI_INFO_WATCHER_DB_DUMP_FILE_PATH.sql
|
||||
```
|
||||
|
||||
* Start all the watcher and info app services:
|
||||
|
||||
```bash
|
||||
$ laconic-so deploy-system --include watcher-uniswap-v3 up
|
||||
```
|
||||
|
||||
* Check that all the services are up and healthy:
|
||||
|
||||
```bash
|
||||
$ docker ps
|
||||
```
|
||||
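To narrow the listing to containers that have passed their healthchecks (a sketch; relies on the standard `health` filter of `docker ps`):

```bash
# List only containers currently reporting a healthy status
docker ps --filter "health=healthy"
```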
|
||||
* The `uni-info-watcher` GraphQL Playground can be accessed at `http://localhost:3004/graphql`
|
||||
* The frontend app can be accessed at `http://localhost:3006`
|
||||
|
||||
## Clean up
|
||||
|
||||
* To stop all the services running in the background:
|
||||
|
||||
```bash
|
||||
$ laconic-so deploy-system --include watcher-uniswap-v3 down
|
||||
```
|
10
stacks/uniswap-v3/stack.yml
Normal file
@ -0,0 +1,10 @@
|
||||
version: "1.0"
|
||||
name: uniswap-v3
|
||||
repos:
|
||||
- vulcanize/uniswap-watcher-ts
|
||||
- vulcanize/uniswap-v3-info
|
||||
containers:
|
||||
- cerc/watcher-uniswap-v3
|
||||
- cerc/uniswap-v3-info
|
||||
pods:
|
||||
- watcher-uniswap-v3
|