
# Azimuth Watcher

Instructions to set up and deploy the Azimuth Watcher stack.

## Setup

Prerequisite: External RPC endpoints

Clone required repositories:

```bash
laconic-so --stack azimuth setup-repositories --pull
```

NOTE: If the repository already exists and is checked out to a different version, the `setup-repositories` command will throw an error. To get around this, remove the `azimuth-watcher-ts` repository and run the command again.

Build the container images:

```bash
laconic-so --stack azimuth build-containers
```

This should create the required Docker images in the local image registry.
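
To verify, you can list the freshly built images; a minimal check is sketched below. The `cerc` filter is an assumption based on the stack's image naming convention, so adjust it to match what `build-containers` actually reports:

```bash
# List locally built images; the "cerc" prefix is an assumed naming
# convention -- adjust the filter to match the build output
docker images | grep cerc
```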

## Create a deployment

First, create a spec file for the deployment, which will map the stack's ports and volumes to the host:

```bash
laconic-so --stack azimuth deploy init --output azimuth-spec.yml
```

### Ports

Edit the `network` section of the spec file to map the container ports to the same ports on the host:

```yml
...
network:
  ports:
    watcher-db:
      - 0.0.0.0:15432:5432
    azimuth-watcher-job-runner:
      - 0.0.0.0:9000:9000
    azimuth-watcher-server:
      - 0.0.0.0:3001:3001
    censures-watcher-job-runner:
      - 0.0.0.0:9002:9002
    censures-watcher-server:
      - 0.0.0.0:3002:3002
    claims-watcher-job-runner:
      - 0.0.0.0:9004:9004
    claims-watcher-server:
      - 0.0.0.0:3003:3003
    conditional-star-release-watcher-job-runner:
      - 0.0.0.0:9006:9006
    conditional-star-release-watcher-server:
      - 0.0.0.0:3004:3004
    delegated-sending-watcher-job-runner:
      - 0.0.0.0:9008:9008
    delegated-sending-watcher-server:
      - 0.0.0.0:3005:3005
    ecliptic-watcher-job-runner:
      - 0.0.0.0:9010:9010
    ecliptic-watcher-server:
      - 0.0.0.0:3006:3006
    linear-star-release-watcher-job-runner:
      - 0.0.0.0:9012:9012
    linear-star-release-watcher-server:
      - 0.0.0.0:3007:3007
    polls-watcher-job-runner:
      - 0.0.0.0:9014:9014
    polls-watcher-server:
      - 0.0.0.0:3008:3008
    gateway-server:
      - 0.0.0.0:4000:4000
...
```

### Data volumes

Container data volumes are bind-mounted to specified paths in the host filesystem. The default setup (generated by `laconic-so deploy init`) places the volumes in the `./data` subdirectory of the deployment directory. The default mappings can be customized by editing the spec file generated by `laconic-so deploy init`.
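
For example, the volume mappings in the spec file look roughly like the sketch below; the `watcher_db_data` name is illustrative, so use whatever names `deploy init` actually generated:

```yml
volumes:
  # Illustrative entry: maps a named volume to a host path.
  # Replace "watcher_db_data" with the names generated by deploy init.
  watcher_db_data: ./data/watcher_db_data
```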


Once you've made any needed changes to the spec file, create a deployment from it:

```bash
laconic-so --stack azimuth deploy create --spec-file azimuth-spec.yml --deployment-dir azimuth-deployment
```

### Set env variables

Inside the deployment directory, open the `config.env` file and set the variable for the RPC endpoints:

```bash
# External RPC endpoints
CERC_ETH_RPC_ENDPOINTS=https://example-rpc-endpoint-1,https://example-rpc-endpoint-2
```
NOTE: If the RPC endpoint is on the host machine, use `host.docker.internal` as the hostname to access the host port, or use the `ip a` command to find the IP address of the `docker0` interface (this will usually be something like `172.17.0.1` or `172.18.0.1`).
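
Before starting the stack, it can help to confirm that each endpoint responds; a minimal JSON-RPC check (using the placeholder endpoint URL from above) looks like:

```bash
# Query the current block number from the RPC endpoint
# (replace the URL with your actual endpoint)
curl -s -X POST https://example-rpc-endpoint-1 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```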

## Start the stack

Start the deployment:

```bash
laconic-so deployment --dir azimuth-deployment start
```
- List and check the health status of all the containers using `docker ps`, and wait for them to be healthy (see the sketch below)
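
One way to watch the health status (assuming a standard Docker setup) is:

```bash
# Re-run "docker ps" every few seconds until every container shows "(healthy)"
watch -n 5 'docker ps --format "table {{.Names}}\t{{.Status}}"'
```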

## Clean up

To stop all Azimuth services running in the background while preserving chain data:

```bash
laconic-so deployment --dir azimuth-deployment stop
```

To stop all Azimuth services and also delete the data:

```bash
laconic-so deployment --dir azimuth-deployment stop --delete-volumes

# Remove deployment directory (deployment will have to be recreated for a re-run)
rm -r azimuth-deployment
```