Update subgraph watcher versions and instructions to use deployments (#775)
All checks were successful
Fixturenet-Laconicd-Test / Run a Laconicd fixturenet test (push) Successful in 9m39s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 53m46s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 55m59s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m44s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m32s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 4m19s
Lint Checks / Run linter (push) Successful in 38s
Publish / Build and publish (push) Successful in 1m16s
Webapp Test / Run webapp test suite (push) Successful in 4m10s
Deploy Test / Run deploy test suite (push) Successful in 5m30s
Smoke Test / Run basic test suite (push) Successful in 4m53s
Part of https://www.notion.so/Setup-watchers-on-sandman-34b5514a10634c6fbf3ec338967c871c

Reviewed-on: #775
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
This commit is contained in:
parent 523b5779be, commit 17e860d6e4
@@ -16,26 +16,55 @@ laconic-so --stack merkl-sushiswap-v3 build-containers
 
 ## Deploy
 
-### Configuration
-
-Create and update an env file to be used in the next step:
-
-  ```bash
-  # External Filecoin (ETH RPC) endpoint to point the watcher
-  CERC_ETH_RPC_ENDPOINT=
-  ```
-
 ### Deploy the stack
 
+Create a spec file for the deployment:
+
 ```bash
-laconic-so --stack merkl-sushiswap-v3 deploy --cluster merkl_sushiswap_v3 --env-file <PATH_TO_ENV_FILE> up
+laconic-so --stack merkl-sushiswap-v3 deploy init --output merkl-sushiswap-v3-spec.yml
 ```
+
+### Ports
+
+Edit `network` in the spec file to map container ports to host ports as required:
+
+```
+...
+network:
+  ports:
+    merkl-sushiswap-v3-watcher-db:
+      - '5432'
+    merkl-sushiswap-v3-watcher-job-runner:
+      - 9002:9000
+    merkl-sushiswap-v3-watcher-server:
+      - 127.0.0.1:3007:3008
+      - 9003:9001
+```
+
+### Create a deployment
+
+Create a deployment from the spec file:
+
+```bash
+laconic-so --stack merkl-sushiswap-v3 deploy create --spec-file merkl-sushiswap-v3-spec.yml --deployment-dir merkl-sushiswap-v3-deployment
+```
+
+### Configuration
+
+Inside the deployment directory, open the `config.env` file and set the following env variables:
+
+```bash
+# External Filecoin (ETH RPC) endpoint to point the watcher to
+CERC_ETH_RPC_ENDPOINT=https://example-lotus-endpoint/rpc/v1
+```
+
+### Start the deployment
+
+```bash
+laconic-so deployment --dir merkl-sushiswap-v3-deployment start
+```
 
 * To list down and monitor the running containers:
 
   ```bash
   laconic-so --stack merkl-sushiswap-v3 deploy --cluster merkl_sushiswap_v3 ps
 
+  # With status
   docker ps -a
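The port entries in the spec file above use Docker-style short syntax: a lone `'5432'` publishes the container port on a random host port, `9002:9000` maps host port 9002 to container port 9000, and `127.0.0.1:3007:3008` additionally pins the binding to one host interface. A minimal illustrative parser (not part of laconic-so) shows how the shorthand breaks down:

```python
def parse_port(spec: str):
    """Split a Docker-style port string into (host_ip, host_port, container_port).

    Illustrative only; mirrors the shorthand used in the spec file above.
    """
    parts = str(spec).split(":")
    if len(parts) == 1:   # '5432' -> host port chosen at runtime
        return (None, None, parts[0])
    if len(parts) == 2:   # '9002:9000' -> host 9002 to container 9000
        return (None, parts[0], parts[1])
    if len(parts) == 3:   # '127.0.0.1:3007:3008' -> bound to one interface
        return (parts[0], parts[1], parts[2])
    raise ValueError(f"unrecognized port spec: {spec}")

print(parse_port("5432"))                 # (None, None, '5432')
print(parse_port("9002:9000"))            # (None, '9002', '9000')
print(parse_port("127.0.0.1:3007:3008"))  # ('127.0.0.1', '3007', '3008')
```

This is why the watcher server above is reachable only from localhost (it binds 127.0.0.1), while the job-runner metrics port is exposed on all interfaces.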
@@ -46,6 +75,7 @@ laconic-so --stack merkl-sushiswap-v3 deploy --cluster merkl_sushiswap_v3 --env-
 * Open the GQL playground at http://localhost:3007/graphql
 
   ```graphql
+  # Example query
   {
     _meta {
       block {
@@ -54,7 +84,7 @@ laconic-so --stack merkl-sushiswap-v3 deploy --cluster merkl_sushiswap_v3 --env-
       }
       hasIndexingErrors
     }
 
     factories {
       id
       poolCount
@@ -64,18 +94,21 @@ laconic-so --stack merkl-sushiswap-v3 deploy --cluster merkl_sushiswap_v3 --env-
 
 ## Clean up
 
-Stop all the services running in background:
+Stop all the merkl-sushiswap-v3 services running in the background:
 
 ```bash
-laconic-so --stack merkl-sushiswap-v3 deploy --cluster merkl_sushiswap_v3 down
+# Only stop the docker containers
+laconic-so deployment --dir merkl-sushiswap-v3-deployment stop
+
+# Run 'start' to restart the deployment
 ```
 
-Clear volumes created by this stack:
+To stop all the merkl-sushiswap-v3 services and also delete data:
 
 ```bash
-# List all relevant volumes
-docker volume ls -q --filter "name=merkl_sushiswap_v3"
+# Stop the docker containers
+laconic-so deployment --dir merkl-sushiswap-v3-deployment stop --delete-volumes
 
-# Remove all the listed volumes
-docker volume rm $(docker volume ls -q --filter "name=merkl_sushiswap_v3")
+# Remove deployment directory (deployment will have to be recreated for a re-run)
+rm -r merkl-sushiswap-v3-deployment
 ```
@@ -2,7 +2,7 @@ version: "1.0"
 name: merkl-sushiswap-v3
 description: "SushiSwap v3 watcher stack"
 repos:
-  - github.com/cerc-io/merkl-sushiswap-v3-watcher-ts@v0.1.6
+  - github.com/cerc-io/merkl-sushiswap-v3-watcher-ts@v0.1.7
 containers:
   - cerc/watcher-merkl-sushiswap-v3
 pods:
@@ -55,7 +55,7 @@ ports:
 Create deployment:
 
 ```bash
-laconic-so deploy create --spec-file sushiswap-subgraph-spec.yml --deployment-dir sushiswap-subgraph-deployment
+laconic-so --stack sushiswap-subgraph deploy create --spec-file sushiswap-subgraph-spec.yml --deployment-dir sushiswap-subgraph-deployment
 ```
 
 ## Start the stack
@@ -16,26 +16,55 @@ laconic-so --stack sushiswap-v3 build-containers
 
 ## Deploy
 
-### Configuration
-
-Create and update an env file to be used in the next step:
-
-  ```bash
-  # External Filecoin (ETH RPC) endpoint to point the watcher
-  CERC_ETH_RPC_ENDPOINT=
-  ```
-
 ### Deploy the stack
 
+Create a spec file for the deployment:
+
 ```bash
-laconic-so --stack sushiswap-v3 deploy --cluster sushiswap_v3 --env-file <PATH_TO_ENV_FILE> up
+laconic-so --stack sushiswap-v3 deploy init --output sushiswap-v3-spec.yml
 ```
+
+### Ports
+
+Edit `network` in the spec file to map container ports to host ports as required:
+
+```
+...
+network:
+  ports:
+    sushiswap-v3-watcher-db:
+      - '5432'
+    sushiswap-v3-watcher-job-runner:
+      - 9000:9000
+    sushiswap-v3-watcher-server:
+      - 127.0.0.1:3008:3008
+      - 9001:9001
+```
+
+### Create a deployment
+
+Create a deployment from the spec file:
+
+```bash
+laconic-so --stack sushiswap-v3 deploy create --spec-file sushiswap-v3-spec.yml --deployment-dir sushiswap-v3-deployment
+```
+
+### Configuration
+
+Inside the deployment directory, open the `config.env` file and set the following env variables:
+
+```bash
+# External Filecoin (ETH RPC) endpoint to point the watcher to
+CERC_ETH_RPC_ENDPOINT=https://example-lotus-endpoint/rpc/v1
+```
+
+### Start the deployment
+
+```bash
+laconic-so deployment --dir sushiswap-v3-deployment start
+```
 
 * To list down and monitor the running containers:
 
   ```bash
   laconic-so --stack sushiswap-v3 deploy --cluster sushiswap_v3 ps
 
+  # With status
   docker ps -a
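The `config.env` files written in the Configuration steps use plain `KEY=value` lines. A small illustrative parser (not part of laconic-so; real env-file handling is done by the deployment tooling) shows how such a file resolves to environment variables:

```python
def parse_env(text: str) -> dict:
    """Parse KEY=value lines, skipping blanks and '#' comments.

    Illustrative sketch only: quoting, `export` prefixes, and variable
    interpolation are ignored here.
    """
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")  # split on the first '=' only
        env[key.strip()] = value.strip()
    return env

sample = """
# External Filecoin (ETH RPC) endpoint to point the watcher to
CERC_ETH_RPC_ENDPOINT=https://example-lotus-endpoint/rpc/v1
"""
print(parse_env(sample))
```

Splitting on the first `=` only matters here, since the endpoint value itself contains no `=` but URLs often do in query strings.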
@@ -43,20 +72,43 @@ laconic-so --stack sushiswap-v3 deploy --cluster sushiswap_v3 --env-file <PATH_T
   docker logs -f <CONTAINER_ID>
   ```
 
 * Open the GQL playground at http://localhost:3008/graphql
 
   ```graphql
+  # Example query
   {
     _meta {
       block {
         number
         timestamp
       }
       hasIndexingErrors
     }
 
     factories {
       id
       poolCount
     }
   }
   ```
 
 ## Clean up
 
-Stop all the services running in background:
+Stop all the sushiswap-v3 services running in the background:
 
 ```bash
-laconic-so --stack sushiswap-v3 deploy --cluster sushiswap_v3 down
+# Only stop the docker containers
+laconic-so deployment --dir sushiswap-v3-deployment stop
+
+# Run 'start' to restart the deployment
 ```
 
-Clear volumes created by this stack:
+To stop all the sushiswap-v3 services and also delete data:
 
 ```bash
-# List all relevant volumes
-docker volume ls -q --filter "name=sushiswap_v3"
+# Stop the docker containers
+laconic-so deployment --dir sushiswap-v3-deployment stop --delete-volumes
 
-# Remove all the listed volumes
-docker volume rm $(docker volume ls -q --filter "name=sushiswap_v3")
+# Remove deployment directory (deployment will have to be recreated for a re-run)
+rm -r sushiswap-v3-deployment
 ```
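The playground query shown above can also be issued programmatically as a standard GraphQL-over-HTTP POST. A hedged Python sketch (the endpoint URL and use of `urllib` are assumptions for illustration, not part of the stack docs):

```python
import json
import urllib.request

# The _meta/factories query from the example above.
QUERY = """
{
  _meta {
    block { number timestamp }
    hasIndexingErrors
  }
  factories { id poolCount }
}
"""

def build_request(endpoint: str) -> urllib.request.Request:
    """Package the GraphQL query as a JSON POST request."""
    body = json.dumps({"query": QUERY}).encode()
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_request("http://localhost:3008/graphql")
# urllib.request.urlopen(req) would return the JSON response once the
# watcher server is running; omitted so the sketch stays self-contained.
```

`hasIndexingErrors` in the response is a quick health check that the watcher is indexing cleanly before you query `factories`.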
@@ -2,7 +2,7 @@ version: "1.0"
 name: sushiswap-v3
 description: "SushiSwap v3 watcher stack"
 repos:
-  - github.com/cerc-io/sushiswap-v3-watcher-ts@v0.1.6
+  - github.com/cerc-io/sushiswap-v3-watcher-ts@v0.1.7
 containers:
   - cerc/watcher-sushiswap-v3
 pods: