Compare commits

...

27 Commits

Author SHA1 Message Date
a2d6201be9 Merge pull request 'Remove quotes from git config' (#870) from dboreham/fix-git-config-command into main
Some checks failed
Publish / Build and publish (push) Successful in 1m19s
Deploy Test / Run deploy test suite (push) Successful in 5m12s
Webapp Test / Run webapp test suite (push) Successful in 4m37s
Smoke Test / Run basic test suite (push) Successful in 4m0s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m11s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m17s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h4m0s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h3m59s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m35s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 4m10s
External Stack Test / Run external stack test suite (push) Successful in 4m43s
Lint Checks / Run linter (push) Successful in 34s
Reviewed-on: #870
2024-07-05 15:56:12 +00:00
62f7825ec2 Remove quotes from git config
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 42s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m57s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m26s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 8m12s
Smoke Test / Run basic test suite (pull_request) Successful in 4m28s
2024-07-05 09:55:14 -06:00
6d24d4a7e6 Set github auth token if present (#868)
Some checks failed
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h9m59s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h10m0s
Lint Checks / Run linter (push) Successful in 33s
Publish / Build and publish (push) Successful in 1m10s
Deploy Test / Run deploy test suite (push) Successful in 4m33s
Webapp Test / Run webapp test suite (push) Successful in 4m20s
Smoke Test / Run basic test suite (push) Successful in 3m52s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m6s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m27s
Reviewed-on: #868
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-07-05 12:27:22 +00:00
f06e5f9a2a Don't try to tag remote images (#866)
All checks were successful
Lint Checks / Run linter (push) Successful in 37s
Publish / Build and publish (push) Successful in 1m14s
Webapp Test / Run webapp test suite (push) Successful in 4m18s
Smoke Test / Run basic test suite (push) Successful in 4m3s
Deploy Test / Run deploy test suite (push) Successful in 4m53s
Reviewed-on: #866
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-07-04 23:51:06 +00:00
c3a1402042 Derive the webapp host container id from stable data (#865)
All checks were successful
Lint Checks / Run linter (push) Successful in 34s
Publish / Build and publish (push) Successful in 1m20s
Webapp Test / Run webapp test suite (push) Successful in 4m32s
Smoke Test / Run basic test suite (push) Successful in 4m5s
Deploy Test / Run deploy test suite (push) Successful in 4m50s
Reviewed-on: #865
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-07-04 22:38:07 +00:00
ca5fffaed5 Add logging to webapp deployer (#863)
All checks were successful
Lint Checks / Run linter (push) Successful in 32s
Publish / Build and publish (push) Successful in 1m7s
Deploy Test / Run deploy test suite (push) Successful in 4m46s
Webapp Test / Run webapp test suite (push) Successful in 4m17s
Smoke Test / Run basic test suite (push) Successful in 3m59s
Helps with diagnosing failures and odd behavior seen in the deployer in production.

Reviewed-on: #863
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-07-04 19:46:42 +00:00
df776c1b4c Install git for apps that want to get their commit sha (#854)
Some checks failed
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m15s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m10s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h10m0s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h9m59s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m16s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m33s
External Stack Test / Run external stack test suite (push) Successful in 4m31s
Lint Checks / Run linter (push) Successful in 35s
Publish / Build and publish (push) Successful in 1m9s
Deploy Test / Run deploy test suite (push) Successful in 4m40s
Webapp Test / Run webapp test suite (push) Successful in 4m13s
Smoke Test / Run basic test suite (push) Successful in 3m51s
Reviewed-on: #854
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
Co-authored-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-committed-by: David Boreham <dboreham@noreply.git.vdb.to>
2024-06-25 05:03:49 +00:00
48a3e79e6a Merge pull request 'Fix argument errors in command code' (#853) from dboreham/fix-laconic-mainnet-command into main
All checks were successful
Lint Checks / Run linter (push) Successful in 34s
Publish / Build and publish (push) Successful in 1m20s
Webapp Test / Run webapp test suite (push) Successful in 4m31s
Smoke Test / Run basic test suite (push) Successful in 3m54s
Deploy Test / Run deploy test suite (push) Successful in 4m58s
Reviewed-on: #853
2024-06-24 20:21:08 +00:00
0eaa5b8f09 Fix argument errors in command code
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 35s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m24s
Smoke Test / Run basic test suite (pull_request) Successful in 4m0s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m54s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 7m38s
2024-06-24 14:15:46 -06:00
8980ac2581 Merge pull request 'Fixes for current SO objects' (#852) from dboreham/fix-mainnet-laconic into main
All checks were successful
Lint Checks / Run linter (push) Successful in 41s
Publish / Build and publish (push) Successful in 1m18s
Webapp Test / Run webapp test suite (push) Successful in 4m17s
Smoke Test / Run basic test suite (push) Successful in 3m48s
Deploy Test / Run deploy test suite (push) Successful in 4m44s
Reviewed-on: #852
2024-06-24 19:54:04 +00:00
fd15252c3f Fixes for current SO objects
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 33s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m23s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m56s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 7m27s
Smoke Test / Run basic test suite (pull_request) Successful in 3m56s
2024-06-24 13:41:15 -06:00
b4e82ebc19 Merge pull request 'Fix mainnet laconic deploy setup' (#850) from dboreham/fix-mainnet-laconic-init into main
Some checks failed
Lint Checks / Run linter (push) Successful in 48s
Publish / Build and publish (push) Successful in 1m15s
Webapp Test / Run webapp test suite (push) Successful in 4m19s
Smoke Test / Run basic test suite (push) Successful in 3m52s
Deploy Test / Run deploy test suite (push) Successful in 4m47s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m19s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 56m43s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m23s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 1h12m31s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m46s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m36s
External Stack Test / Run external stack test suite (push) Successful in 4m30s
Reviewed-on: #850
2024-06-22 01:25:07 +00:00
2364924a59 Fix mainnet laconic deploy setup
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 31s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m6s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 8m9s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m33s
Smoke Test / Run basic test suite (pull_request) Successful in 4m9s
2024-06-21 19:24:33 -06:00
a223797b4a Update graph-node dashboard to show individual subgraph increase in query count (#846)
Some checks failed
Lint Checks / Run linter (push) Successful in 35s
Publish / Build and publish (push) Successful in 1m10s
Deploy Test / Run deploy test suite (push) Successful in 4m42s
Webapp Test / Run webapp test suite (push) Successful in 4m18s
Smoke Test / Run basic test suite (push) Successful in 3m53s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m13s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 51m15s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 55m9s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m20s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m4s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m35s
External Stack Test / Run external stack test suite (push) Successful in 4m24s
Part of [Metrics and logging for GQL queries in watcher](https://www.notion.so/Metrics-and-logging-for-GQL-queries-in-watcher-928c692292b140a2a4f52cda9795df5e)

Reviewed-on: #846
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Co-committed-by: Nabarun <nabarun@deepstacksoft.com>
2024-06-20 09:27:23 +00:00
b8004e9870 Add Grafana panels for graph-node subgraph GQL queries (#845)
Some checks failed
Lint Checks / Run linter (push) Successful in 42s
Publish / Build and publish (push) Successful in 1m22s
Deploy Test / Run deploy test suite (push) Successful in 5m12s
Webapp Test / Run webapp test suite (push) Successful in 4m34s
Smoke Test / Run basic test suite (push) Successful in 3m59s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m17s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 51m47s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m40s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 1h38m30s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m22s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m49s
External Stack Test / Run external stack test suite (push) Successful in 4m50s
Part of [Deploy v2 and updated v3 sushiswap subgraphs](https://www.notion.so/Deploy-v2-and-updated-v3-sushiswap-subgraphs-e331945fdeea487c890706fc22c6cc94)

Reviewed-on: #845
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Co-committed-by: Nabarun <nabarun@deepstacksoft.com>
2024-06-19 10:40:54 +00:00
3fd99a1522 Handle race condition in laconic registry CLI tests (#843)
All checks were successful
Lint Checks / Run linter (push) Successful in 33s
Publish / Build and publish (push) Successful in 1m16s
Deploy Test / Run deploy test suite (push) Successful in 5m11s
Webapp Test / Run webapp test suite (push) Successful in 4m9s
Smoke Test / Run basic test suite (push) Successful in 3m46s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m37s
Part of cerc-io/laconic-registry-cli#63

Reviewed-on: #843
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-06-19 06:41:58 +00:00
842d999792 Add alert rules for secured-finance subgraph watcher (#842)
Some checks failed
Lint Checks / Run linter (push) Successful in 38s
Publish / Build and publish (push) Successful in 1m22s
Webapp Test / Run webapp test suite (push) Successful in 4m44s
Smoke Test / Run basic test suite (push) Successful in 4m34s
Deploy Test / Run deploy test suite (push) Successful in 5m25s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m30s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 53m10s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 1h1m56s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m37s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m23s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m56s
External Stack Test / Run external stack test suite (push) Successful in 4m46s
Part of [Generate secured-finance subgraph watcher with codegen](https://www.notion.so/Generate-secured-finance-subgraph-watcher-with-codegen-2923413e0af54ea787c5435d6966f3bb)
- Update watcher dashboard labels
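
The added rule mirrors the existing head-tracking alerts: it compares the external chain head against the watcher's latest indexed block and fires when the gap grows too large. A trimmed sketch of the new entry, taken from the alert-rules diff further down (the full rule also carries the threshold expression and alert annotations):

```yaml
# Grafana alerting provisioning entry (trimmed); see the full diff below
- uid: secured_finance_diff_external
  title: secured_finance_watcher_head_tracking
  condition: condition
  data:
    - refId: diff
      model:
        # External chain head minus the watcher's latest indexed block
        expr: latest_block_number{instance="external"} - on(chain) group_right sync_status_block_number{job="secured-finance", instance="secured-finance", kind="latest_indexed"}
```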

Reviewed-on: #842
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Co-committed-by: Nabarun <nabarun@deepstacksoft.com>
2024-06-18 12:28:02 +00:00
d6a1fb3279 Merge pull request 'Fix image tag name' (#841) from dboreham/fix-remote-image-tags into main
Some checks failed
Lint Checks / Run linter (push) Successful in 32s
Publish / Build and publish (push) Successful in 1m30s
Deploy Test / Run deploy test suite (push) Successful in 5m13s
Webapp Test / Run webapp test suite (push) Successful in 4m53s
Smoke Test / Run basic test suite (push) Successful in 5m48s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m11s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 50m51s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m23s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h8m0s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m16s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m55s
External Stack Test / Run external stack test suite (push) Successful in 4m48s
Reviewed-on: #841
2024-06-13 14:32:26 +00:00
bf1eccd486 Fix image tag name
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 50s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m36s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m27s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 8m33s
Smoke Test / Run basic test suite (pull_request) Successful in 4m18s
2024-06-13 08:31:45 -06:00
3fb025b5c9 Make remote image tags unique to the deployment (#838)
Some checks failed
Lint Checks / Run linter (push) Successful in 34s
Publish / Build and publish (push) Successful in 1m22s
Deploy Test / Run deploy test suite (push) Successful in 4m41s
Webapp Test / Run webapp test suite (push) Successful in 4m24s
Smoke Test / Run basic test suite (push) Successful in 3m49s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m45s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 55m4s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h8m0s
Reviewed-on: #838
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-06-13 03:26:58 +00:00
4acb06325b Update watcher dashboard and config templates (#835)
Some checks failed
Lint Checks / Run linter (push) Successful in 33s
Publish / Build and publish (push) Successful in 1m21s
Webapp Test / Run webapp test suite (push) Successful in 4m11s
Smoke Test / Run basic test suite (push) Successful in 5m24s
Deploy Test / Run deploy test suite (push) Successful in 6m59s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m40s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 52m22s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m5s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h8m0s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m24s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m44s
External Stack Test / Run external stack test suite (push) Successful in 4m28s
Part of [Metrics and logging for GQL queries in watcher](https://www.notion.so/Metrics-and-logging-for-GQL-queries-in-watcher-928c692292b140a2a4f52cda9795df5e)

- Update watcher config templates after config refactoring
- Mount watcher GQL query log files on volumes
- Update watcher dashboard to
  - add a panel to show latest processed block number
  - use latest processed block from sync status for diff values
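
The GQL query-log mount follows the same pattern across watchers, as the compose diffs below show; a minimal sketch for the ajna watcher, with the service name assumed for illustration:

```yaml
services:
  ajna-watcher-server:   # service name assumed; the mounts match the compose diff below
    volumes:
      - ../config/watcher-ajna/watcher-config-template.toml:/app/environments/watcher-config-template.toml
      - ../config/watcher-ajna/start-server.sh:/app/start-server.sh
      - ajna_watcher_gql_logs_data:/app/gql-logs   # new named volume holding GQL request logs

volumes:
  ajna_watcher_db_data:
  ajna_watcher_gql_logs_data:
```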

Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Reviewed-on: #835
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-06-12 11:52:51 +00:00
b80b647fa4 Merge pull request 'Remove files migrated to external repo' (#836) from dboreham/remove-snowball-stack into main
Some checks failed
Lint Checks / Run linter (push) Successful in 43s
Publish / Build and publish (push) Successful in 1m16s
Webapp Test / Run webapp test suite (push) Successful in 4m43s
Deploy Test / Run deploy test suite (push) Successful in 5m10s
Smoke Test / Run basic test suite (push) Successful in 4m30s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m15s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 54m48s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m28s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 2h38m9s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m33s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m36s
External Stack Test / Run external stack test suite (push) Successful in 4m36s
Reviewed-on: #836
2024-06-07 03:17:48 +00:00
9a1d3bb0f1 Remove files migrated to external repo
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 34s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m29s
Smoke Test / Run basic test suite (pull_request) Successful in 4m21s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m26s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 8m24s
2024-06-06 20:44:51 -06:00
abc0c2423f Add panels for GQL metrics to watcher dashboard (#834)
Some checks failed
Lint Checks / Run linter (push) Successful in 43s
Publish / Build and publish (push) Successful in 1m22s
Webapp Test / Run webapp test suite (push) Successful in 4m22s
Deploy Test / Run deploy test suite (push) Successful in 5m1s
Smoke Test / Run basic test suite (push) Successful in 4m20s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m6s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 52m19s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m10s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h12m59s
Database Test / Run database hosting test on kind/k8s (push) Successful in 8m43s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m39s
External Stack Test / Run external stack test suite (push) Successful in 4m37s
Part of [Metrics and logging for GQL queries in watcher](https://www.notion.so/Metrics-and-logging-for-GQL-queries-in-watcher-928c692292b140a2a4f52cda9795df5e)

Reviewed-on: #834
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-06-06 11:47:18 +00:00
a322d6eed4 Add dashboard for graph-node subgraphs (#832)
Some checks failed
Lint Checks / Run linter (push) Successful in 1m5s
Publish / Build and publish (push) Successful in 2m27s
Webapp Test / Run webapp test suite (push) Successful in 7m1s
Smoke Test / Run basic test suite (push) Successful in 7m30s
Deploy Test / Run deploy test suite (push) Successful in 9m23s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 24m12s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 51m33s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 6m56s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 1h40m25s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m3s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 4m4s
External Stack Test / Run external stack test suite (push) Successful in 4m31s
Part of [Deploy v2 and updated v3 sushiswap subgraphs](https://www.notion.so/Deploy-v2-and-updated-v3-sushiswap-subgraphs-e331945fdeea487c890706fc22c6cc94)

- Add param `GRAPH_ETHEREUM_BLOCK_INGESTOR_MAX_CONCURRENT_JSON_RPC_CALLS_FOR_TXN_RECEIPTS` in graph-node stack
  - <https://github.com/graphprotocol/graph-node/blob/v0.31.0/docs/environment-variables.md#json-rpc-configuration-for-evm-chains>
- Add dashboard for subgraph deployments in graph-node
  - Show subgraph names in dashboard
- Add watcher dashboard panel for showing watcher release version and commit hash
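
The new ingestor parameter is passed through the graph-node compose file with a default of 1000, as the compose diff further down shows; a minimal sketch (graph-node service name assumed):

```yaml
services:
  graph-node:   # service name assumed for illustration
    environment:
      # Max concurrent JSON-RPC calls for fetching transaction receipts during block ingestion
      GRAPH_ETHEREUM_BLOCK_INGESTOR_MAX_CONCURRENT_JSON_RPC_CALLS_FOR_TXN_RECEIPTS: ${GRAPH_ETHEREUM_BLOCK_INGESTOR_MAX_CONCURRENT_JSON_RPC_CALLS_FOR_TXN_RECEIPTS:-1000}
```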

Reviewed-on: #832
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Co-committed-by: Nabarun <nabarun@deepstacksoft.com>
2024-06-04 07:21:27 +00:00
ed8914b8d3 Upgrade watchers and their config (#827)
Some checks failed
Lint Checks / Run linter (push) Successful in 51s
Publish / Build and publish (push) Successful in 1m28s
Webapp Test / Run webapp test suite (push) Successful in 4m29s
Smoke Test / Run basic test suite (push) Successful in 4m32s
Deploy Test / Run deploy test suite (push) Successful in 5m26s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 22m46s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Failing after 9m10s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h13m0s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h13m0s
Database Test / Run database hosting test on kind/k8s (push) Successful in 18m47s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 5m12s
External Stack Test / Run external stack test suite (push) Successful in 6m6s
Part of [Investigate subgraph watchers lagging behind head](https://www.notion.so/Investigate-subgraph-watchers-lagging-behind-head-01b72294ca8e4f658e4c0e86b36d19e2)

Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Reviewed-on: #827
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-05-23 04:12:31 +00:00
fef7649683 Refactor SPA check. (#831)
All checks were successful
Lint Checks / Run linter (push) Successful in 40s
Publish / Build and publish (push) Successful in 1m18s
Webapp Test / Run webapp test suite (push) Successful in 4m23s
Smoke Test / Run basic test suite (push) Successful in 4m23s
Deploy Test / Run deploy test suite (push) Successful in 5m15s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 4m16s
External Stack Test / Run external stack test suite (push) Successful in 4m53s
Reviewed-on: #831
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-05-22 19:00:42 +00:00
48 changed files with 3396 additions and 576 deletions

View File

@ -4,3 +4,4 @@ Trigger
Trigger
Trigger
Trigger
Trigger

View File

@ -18,6 +18,8 @@ services:
"/update-grafana-alerts-config.sh && /run.sh"
ports:
- "3000"
extra_hosts:
- "host.docker.internal:host-gateway"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3000"]
interval: 30s

View File

@ -22,6 +22,7 @@ services:
GRAPH_ETHEREUM_JSON_RPC_TIMEOUT: ${GRAPH_ETHEREUM_JSON_RPC_TIMEOUT:-180}
GRAPH_ETHEREUM_REQUEST_RETRIES: ${GRAPH_ETHEREUM_REQUEST_RETRIES:-10}
GRAPH_ETHEREUM_MAX_BLOCK_RANGE_SIZE: ${GRAPH_ETHEREUM_MAX_BLOCK_RANGE_SIZE:-2000}
GRAPH_ETHEREUM_BLOCK_INGESTOR_MAX_CONCURRENT_JSON_RPC_CALLS_FOR_TXN_RECEIPTS: ${GRAPH_ETHEREUM_BLOCK_INGESTOR_MAX_CONCURRENT_JSON_RPC_CALLS_FOR_TXN_RECEIPTS:-1000}
entrypoint: ["bash", "-c"]
# Wait for ETH RPC endpoint to be up when running with fixturenet-lotus
command: |
@ -31,6 +32,7 @@ services:
- "8001"
- "8020"
- "8030"
- "8040"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "8020"]
interval: 30s

View File

@ -28,15 +28,37 @@ services:
extra_hosts:
- "host.docker.internal:host-gateway"
chain-head-exporter:
ethereum-chain-head-exporter:
image: cerc/watcher-ts:local
restart: always
working_dir: /app/packages/cli
environment:
ETH_RPC_ENDPOINT: ${CERC_ETH_RPC_ENDPOINT}
FIL_RPC_ENDPOINT: ${CERC_FIL_RPC_ENDPOINT}
ETH_RPC_ENDPOINT: ${CERC_ETH_RPC_ENDPOINT:-https://mainnet.infura.io/v3}
ETH_RPC_API_KEY: ${CERC_INFURA_KEY}
PORT: ${CERC_METRICS_PORT}
command: ["sh", "-c", "yarn export-metrics:chain-heads"]
ports:
- '5000'
extra_hosts:
- "host.docker.internal:host-gateway"
filecoin-chain-head-exporter:
image: cerc/watcher-ts:local
restart: always
working_dir: /app/packages/cli
environment:
ETH_RPC_ENDPOINT: ${CERC_FIL_RPC_ENDPOINT:-https://api.node.glif.io/rpc/v1}
command: ["sh", "-c", "yarn export-metrics:chain-heads"]
ports:
- '5000'
extra_hosts:
- "host.docker.internal:host-gateway"
graph-node-upstream-head-exporter:
image: cerc/watcher-ts:local
restart: always
working_dir: /app/packages/cli
environment:
ETH_RPC_ENDPOINT: ${GRAPH_NODE_RPC_ENDPOINT}
command: ["sh", "-c", "yarn export-metrics:chain-heads"]
ports:
- '5000'

View File

@ -1,13 +0,0 @@
services:
snowballtools-base-backend:
image: cerc/snowballtools-base-backend:local
restart: always
volumes:
- data:/data
- config:/config:ro
ports:
- 8000
volumes:
data:
config:

View File

@ -60,6 +60,7 @@ services:
volumes:
- ../config/watcher-ajna/watcher-config-template.toml:/app/environments/watcher-config-template.toml
- ../config/watcher-ajna/start-server.sh:/app/start-server.sh
- ajna_watcher_gql_logs_data:/app/gql-logs
ports:
- "3008"
- "9001"
@ -74,3 +75,4 @@ services:
volumes:
ajna_watcher_db_data:
ajna_watcher_gql_logs_data:

View File

@ -32,8 +32,8 @@ services:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
CERC_ETH_RPC_ENDPOINTS: ${CERC_ETH_RPC_ENDPOINTS}
CERC_IPLD_ETH_GQL_ENDPOINT: ${CERC_IPLD_ETH_GQL_ENDPOINT}
CERC_HISTORICAL_BLOCK_RANGE: 500
CONTRACT_ADDRESS: 0x223c067F8CF28ae173EE5CafEa60cA44C335fecB
CONTRACT_NAME: Azimuth
@ -47,7 +47,7 @@ services:
ports:
- "9000"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "9000"]
test: ["CMD", "nc", "-vz", "127.0.0.1", "9000"]
interval: 20s
timeout: 5s
retries: 15
@ -66,18 +66,20 @@ services:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
CERC_ETH_RPC_ENDPOINTS: ${CERC_ETH_RPC_ENDPOINTS}
CERC_IPLD_ETH_GQL_ENDPOINT: ${CERC_IPLD_ETH_GQL_ENDPOINT}
working_dir: /app/packages/azimuth-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/azimuth-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/azimuth-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/azimuth-watcher/start-server.sh
- azimuth_watcher_gql_logs_data:/app/packages/azimuth-watcher/gql-logs
ports:
- "3001"
- "9001"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3001"]
test: ["CMD", "nc", "-vz", "127.0.0.1", "3001"]
interval: 20s
timeout: 5s
retries: 15
@ -94,8 +96,8 @@ services:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
CERC_ETH_RPC_ENDPOINTS: ${CERC_ETH_RPC_ENDPOINTS}
CERC_IPLD_ETH_GQL_ENDPOINT: ${CERC_IPLD_ETH_GQL_ENDPOINT}
CONTRACT_ADDRESS: 0x325f68d32BdEe6Ed86E7235ff2480e2A433D6189
CONTRACT_NAME: Censures
STARTING_BLOCK: 6784954
@ -108,7 +110,7 @@ services:
ports:
- "9002"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "9002"]
test: ["CMD", "nc", "-vz", "127.0.0.1", "9002"]
interval: 20s
timeout: 5s
retries: 15
@ -127,18 +129,20 @@ services:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
CERC_ETH_RPC_ENDPOINTS: ${CERC_ETH_RPC_ENDPOINTS}
CERC_IPLD_ETH_GQL_ENDPOINT: ${CERC_IPLD_ETH_GQL_ENDPOINT}
working_dir: /app/packages/censures-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/censures-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/censures-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/censures-watcher/start-server.sh
- censures_watcher_gql_logs_data:/app/packages/censures-watcher/gql-logs
ports:
- "3002"
- "9003"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3002"]
test: ["CMD", "nc", "-vz", "127.0.0.1", "3002"]
interval: 20s
timeout: 5s
retries: 15
@ -155,8 +159,8 @@ services:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
CERC_ETH_RPC_ENDPOINTS: ${CERC_ETH_RPC_ENDPOINTS}
CERC_IPLD_ETH_GQL_ENDPOINT: ${CERC_IPLD_ETH_GQL_ENDPOINT}
CONTRACT_ADDRESS: 0xe7e7f69b34D7d9Bd8d61Fb22C33b22708947971A
CONTRACT_NAME: Claims
STARTING_BLOCK: 6784941
@ -169,7 +173,7 @@ services:
ports:
- "9004"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "9004"]
test: ["CMD", "nc", "-vz", "127.0.0.1", "9004"]
interval: 20s
timeout: 5s
retries: 15
@ -188,18 +192,20 @@ services:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
CERC_ETH_RPC_ENDPOINTS: ${CERC_ETH_RPC_ENDPOINTS}
CERC_IPLD_ETH_GQL_ENDPOINT: ${CERC_IPLD_ETH_GQL_ENDPOINT}
working_dir: /app/packages/claims-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/claims-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/claims-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/claims-watcher/start-server.sh
- claims_watcher_gql_logs_data:/app/packages/claims-watcher/gql-logs
ports:
- "3003"
- "9005"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3003"]
test: ["CMD", "nc", "-vz", "127.0.0.1", "3003"]
interval: 20s
timeout: 5s
retries: 15
@ -216,8 +222,8 @@ services:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
CERC_ETH_RPC_ENDPOINTS: ${CERC_ETH_RPC_ENDPOINTS}
CERC_IPLD_ETH_GQL_ENDPOINT: ${CERC_IPLD_ETH_GQL_ENDPOINT}
CONTRACT_ADDRESS: 0x8C241098C3D3498Fe1261421633FD57986D74AeA
CONTRACT_NAME: ConditionalStarRelease
STARTING_BLOCK: 6828004
@ -230,7 +236,7 @@ services:
ports:
- "9006"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "9006"]
test: ["CMD", "nc", "-vz", "127.0.0.1", "9006"]
interval: 20s
timeout: 5s
retries: 15
@ -249,18 +255,20 @@ services:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
CERC_ETH_RPC_ENDPOINTS: ${CERC_ETH_RPC_ENDPOINTS}
CERC_IPLD_ETH_GQL_ENDPOINT: ${CERC_IPLD_ETH_GQL_ENDPOINT}
working_dir: /app/packages/conditional-star-release-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/conditional-star-release-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/conditional-star-release-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/conditional-star-release-watcher/start-server.sh
- conditional_star_release_watcher_gql_logs_data:/app/packages/conditional-star-release-watcher/gql-logs
ports:
- "3004"
- "9007"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3004"]
test: ["CMD", "nc", "-vz", "127.0.0.1", "3004"]
interval: 20s
timeout: 5s
retries: 15
@ -277,8 +285,8 @@ services:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
CERC_ETH_RPC_ENDPOINTS: ${CERC_ETH_RPC_ENDPOINTS}
CERC_IPLD_ETH_GQL_ENDPOINT: ${CERC_IPLD_ETH_GQL_ENDPOINT}
CONTRACT_ADDRESS: 0xf6b461fE1aD4bd2ce25B23Fe0aff2ac19B3dFA76
CONTRACT_NAME: DelegatedSending
STARTING_BLOCK: 6784956
@ -291,7 +299,7 @@ services:
ports:
- "9008"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "9008"]
test: ["CMD", "nc", "-vz", "127.0.0.1", "9008"]
interval: 20s
timeout: 5s
retries: 15
@ -310,18 +318,20 @@ services:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
CERC_ETH_RPC_ENDPOINTS: ${CERC_ETH_RPC_ENDPOINTS}
CERC_IPLD_ETH_GQL_ENDPOINT: ${CERC_IPLD_ETH_GQL_ENDPOINT}
working_dir: /app/packages/delegated-sending-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/delegated-sending-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/delegated-sending-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/delegated-sending-watcher/start-server.sh
- delegated_sending_watcher_gql_logs_data:/app/packages/delegated-sending-watcher/gql-logs
ports:
- "3005"
- "9009"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3005"]
test: ["CMD", "nc", "-vz", "127.0.0.1", "3005"]
interval: 20s
timeout: 5s
retries: 15
@ -338,8 +348,8 @@ services:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
CERC_ETH_RPC_ENDPOINTS: ${CERC_ETH_RPC_ENDPOINTS}
CERC_IPLD_ETH_GQL_ENDPOINT: ${CERC_IPLD_ETH_GQL_ENDPOINT}
CONTRACT_ADDRESS: 0x33EeCbf908478C10614626A9D304bfe18B78DD73
CONTRACT_NAME: Ecliptic
STARTING_BLOCK: 13692129
@ -352,7 +362,7 @@ services:
ports:
- "9010"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "9010"]
test: ["CMD", "nc", "-vz", "127.0.0.1", "9010"]
interval: 20s
timeout: 5s
retries: 15
@ -371,18 +381,20 @@ services:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
CERC_ETH_RPC_ENDPOINTS: ${CERC_ETH_RPC_ENDPOINTS}
CERC_IPLD_ETH_GQL_ENDPOINT: ${CERC_IPLD_ETH_GQL_ENDPOINT}
working_dir: /app/packages/ecliptic-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/ecliptic-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/ecliptic-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/ecliptic-watcher/start-server.sh
- ecliptic_watcher_gql_logs_data:/app/packages/ecliptic-watcher/gql-logs
ports:
- "3006"
- "9011"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3006"]
test: ["CMD", "nc", "-vz", "127.0.0.1", "3006"]
interval: 20s
timeout: 5s
retries: 15
@ -399,8 +411,8 @@ services:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
CERC_ETH_RPC_ENDPOINTS: ${CERC_ETH_RPC_ENDPOINTS}
CERC_IPLD_ETH_GQL_ENDPOINT: ${CERC_IPLD_ETH_GQL_ENDPOINT}
CONTRACT_ADDRESS: 0x86cd9cd0992F04231751E3761De45cEceA5d1801
CONTRACT_NAME: LinearStarRelease
STARTING_BLOCK: 6784943
@ -413,7 +425,7 @@ services:
ports:
- "9012"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "9012"]
test: ["CMD", "nc", "-vz", "127.0.0.1", "9012"]
interval: 20s
timeout: 5s
retries: 15
@ -432,18 +444,20 @@ services:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
CERC_ETH_RPC_ENDPOINTS: ${CERC_ETH_RPC_ENDPOINTS}
CERC_IPLD_ETH_GQL_ENDPOINT: ${CERC_IPLD_ETH_GQL_ENDPOINT}
working_dir: /app/packages/linear-star-release-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/linear-star-release-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/linear-star-release-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/linear-star-release-watcher/start-server.sh
- linear_star_release_watcher_gql_logs_data:/app/packages/linear-star-release-watcher/gql-logs
ports:
- "3007"
- "9013"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3007"]
test: ["CMD", "nc", "-vz", "127.0.0.1", "3007"]
interval: 20s
timeout: 5s
retries: 15
@ -460,8 +474,8 @@ services:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
CERC_ETH_RPC_ENDPOINTS: ${CERC_ETH_RPC_ENDPOINTS}
CERC_IPLD_ETH_GQL_ENDPOINT: ${CERC_IPLD_ETH_GQL_ENDPOINT}
CONTRACT_ADDRESS: 0x7fEcaB617c868Bb5996d99D95200D2Fa708218e4
CONTRACT_NAME: Polls
STARTING_BLOCK: 6784912
@ -474,7 +488,7 @@ services:
ports:
- "9014"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "9014"]
test: ["CMD", "nc", "-vz", "127.0.0.1", "9014"]
interval: 20s
timeout: 5s
retries: 15
@ -493,18 +507,20 @@ services:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
CERC_ETH_RPC_ENDPOINTS: ${CERC_ETH_RPC_ENDPOINTS}
CERC_IPLD_ETH_GQL_ENDPOINT: ${CERC_IPLD_ETH_GQL_ENDPOINT}
working_dir: /app/packages/polls-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/polls-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/polls-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/polls-watcher/start-server.sh
- polls_watcher_gql_logs_data:/app/packages/polls-watcher/gql-logs
ports:
- "3008"
- "9015"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3008"]
test: ["CMD", "nc", "-vz", "127.0.0.1", "3008"]
interval: 20s
timeout: 5s
retries: 15
@ -542,7 +558,7 @@ services:
ports:
- "0.0.0.0:4000:4000"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "4000"]
test: ["CMD", "nc", "-vz", "127.0.0.1", "4000"]
interval: 20s
timeout: 5s
retries: 15
@ -552,3 +568,11 @@ services:
volumes:
watcher_db_data:
azimuth_watcher_gql_logs_data:
censures_watcher_gql_logs_data:
claims_watcher_gql_logs_data:
conditional_star_release_watcher_gql_logs_data:
delegated_sending_watcher_gql_logs_data:
ecliptic_watcher_gql_logs_data:
linear_star_release_watcher_gql_logs_data:
polls_watcher_gql_logs_data:

View File

@ -60,6 +60,7 @@ services:
volumes:
- ../config/watcher-merkl-sushiswap-v3/watcher-config-template.toml:/app/environments/watcher-config-template.toml
- ../config/watcher-merkl-sushiswap-v3/start-server.sh:/app/start-server.sh
- merkl_sushiswap_v3_watcher_gql_logs_data:/app/gql-logs
ports:
- "127.0.0.1:3007:3008"
- "9003:9001"
@ -74,3 +75,4 @@ services:
volumes:
merkl_sushiswap_v3_watcher_db_data:
merkl_sushiswap_v3_watcher_gql_logs_data:

View File

@ -60,6 +60,7 @@ services:
volumes:
- ../config/watcher-sushiswap-v3/watcher-config-template.toml:/app/environments/watcher-config-template.toml
- ../config/watcher-sushiswap-v3/start-server.sh:/app/start-server.sh
- sushiswap_v3_watcher_gql_logs_data:/app/gql-logs
ports:
- "127.0.0.1:3008:3008"
- "9001:9001"
@ -74,3 +75,4 @@ services:
volumes:
sushiswap_v3_watcher_db_data:
sushiswap_v3_watcher_gql_logs_data:

View File

@ -0,0 +1,20 @@
apiVersion: 1
datasources:
- name: Graph Node Postgres
type: postgres
jsonData:
database: graph-node
sslmode: 'disable'
maxOpenConns: 100
maxIdleConns: 100
maxIdleConnsAuto: true
connMaxLifetime: 14400
postgresVersion: 1411 # 903=9.3, 1000=10, 1411=14.11
timescaledb: false
user: graph-node
# # Add URL for graph-node database
# url: graph-node-db:5432
# # Set password for graph-node database
# secureJsonData:
# password: 'password'

View File

@ -45,7 +45,18 @@ scrape_configs:
metrics_path: /metrics
scheme: http
static_configs:
- targets: ['chain-head-exporter:5000']
- targets: ['ethereum-chain-head-exporter:5000']
labels:
instance: 'external'
chain: 'ethereum'
- targets: ['filecoin-chain-head-exporter:5000']
labels:
instance: 'external'
chain: 'filecoin'
- targets: ['graph-node-upstream-head-exporter:5000']
labels:
instance: 'graph-node'
chain: 'filecoin'
- job_name: 'postgres'
scrape_interval: 30s
@ -74,3 +85,11 @@ scrape_configs:
# - targets: ['example-host:1317']
params:
format: ['prometheus']
- job_name: graph-node
metrics_path: /metrics
scrape_interval: 30s
scheme: http
static_configs:
# Add graph-node targets to be monitored below
# - targets: ['graph-node:8040']

View File

@ -4,4 +4,6 @@ echo Using CERC_GRAFANA_ALERTS_SUBGRAPH_IDS ${CERC_GRAFANA_ALERTS_SUBGRAPH_IDS}
# Replace subgraph ids in subgraph alerting config
# Note: Requires the grafana container to be run with user root
sed -i "s/REPLACE_WITH_SUBGRAPH_IDS/$CERC_GRAFANA_ALERTS_SUBGRAPH_IDS/g" /etc/grafana/provisioning/alerting/subgraph-alert-rules.yml
if [ -n "$CERC_GRAFANA_ALERTS_SUBGRAPH_IDS" ]; then
sed -i "s/REPLACE_WITH_SUBGRAPH_IDS/$CERC_GRAFANA_ALERTS_SUBGRAPH_IDS/g" /etc/grafana/provisioning/alerting/subgraph-alert-rules.yml
fi

View File

@ -24,7 +24,7 @@ groups:
uid: PBFA97CFB590B2093
disableTextWrap: false
editorMode: code
expr: latest_block_number - on(chain) group_right sync_status_block_number{job="azimuth", instance="azimuth", kind="latest_indexed"}
expr: latest_block_number{instance="external"} - on(chain) group_right sync_status_block_number{job="azimuth", instance="azimuth", kind="latest_indexed"}
fullMetaSearch: false
includeNullMetadata: true
instant: true
@ -100,7 +100,7 @@ groups:
uid: PBFA97CFB590B2093
disableTextWrap: false
editorMode: code
expr: latest_block_number - on(chain) group_right sync_status_block_number{job="azimuth", instance="censures", kind="latest_indexed"}
expr: latest_block_number{instance="external"} - on(chain) group_right sync_status_block_number{job="azimuth", instance="censures", kind="latest_indexed"}
fullMetaSearch: false
includeNullMetadata: true
instant: true
@ -176,7 +176,7 @@ groups:
uid: PBFA97CFB590B2093
disableTextWrap: false
editorMode: code
expr: latest_block_number - on(chain) group_right sync_status_block_number{job="azimuth", instance="claims", kind="latest_indexed"}
expr: latest_block_number{instance="external"} - on(chain) group_right sync_status_block_number{job="azimuth", instance="claims", kind="latest_indexed"}
fullMetaSearch: false
includeNullMetadata: true
instant: true
@ -252,7 +252,7 @@ groups:
uid: PBFA97CFB590B2093
disableTextWrap: false
editorMode: code
expr: latest_block_number - on(chain) group_right sync_status_block_number{job="azimuth", instance="conditional_star_release", kind="latest_indexed"}
expr: latest_block_number{instance="external"} - on(chain) group_right sync_status_block_number{job="azimuth", instance="conditional_star_release", kind="latest_indexed"}
fullMetaSearch: false
includeNullMetadata: true
instant: true
@ -328,7 +328,7 @@ groups:
uid: PBFA97CFB590B2093
disableTextWrap: false
editorMode: code
expr: latest_block_number - on(chain) group_right sync_status_block_number{job="azimuth", instance="delegated_sending", kind="latest_indexed"}
expr: latest_block_number{instance="external"} - on(chain) group_right sync_status_block_number{job="azimuth", instance="delegated_sending", kind="latest_indexed"}
fullMetaSearch: false
includeNullMetadata: true
instant: true
@ -404,7 +404,7 @@ groups:
uid: PBFA97CFB590B2093
disableTextWrap: false
editorMode: code
expr: latest_block_number - on(chain) group_right sync_status_block_number{job="azimuth", instance="ecliptic", kind="latest_indexed"}
expr: latest_block_number{instance="external"} - on(chain) group_right sync_status_block_number{job="azimuth", instance="ecliptic", kind="latest_indexed"}
fullMetaSearch: false
includeNullMetadata: true
instant: true
@ -480,7 +480,7 @@ groups:
uid: PBFA97CFB590B2093
disableTextWrap: false
editorMode: code
expr: latest_block_number - on(chain) group_right sync_status_block_number{job="azimuth", instance="linear_star_release", kind="latest_indexed"}
expr: latest_block_number{instance="external"} - on(chain) group_right sync_status_block_number{job="azimuth", instance="linear_star_release", kind="latest_indexed"}
fullMetaSearch: false
includeNullMetadata: true
instant: true
@ -556,7 +556,7 @@ groups:
uid: PBFA97CFB590B2093
disableTextWrap: false
editorMode: code
expr: latest_block_number - on(chain) group_right sync_status_block_number{job="azimuth", instance="polls", kind="latest_indexed"}
expr: latest_block_number{instance="external"} - on(chain) group_right sync_status_block_number{job="azimuth", instance="polls", kind="latest_indexed"}
fullMetaSearch: false
includeNullMetadata: true
instant: true
@ -634,7 +634,7 @@ groups:
uid: PBFA97CFB590B2093
disableTextWrap: false
editorMode: code
expr: latest_block_number - on(chain) group_right sync_status_block_number{job="sushi", instance="sushiswap", kind="latest_indexed"}
expr: latest_block_number{instance="external"} - on(chain) group_right sync_status_block_number{job="sushi", instance="sushiswap", kind="latest_indexed"}
fullMetaSearch: false
includeNullMetadata: true
instant: true
@ -710,7 +710,7 @@ groups:
uid: PBFA97CFB590B2093
disableTextWrap: false
editorMode: code
expr: latest_block_number - on(chain) group_right sync_status_block_number{job="sushi", instance="merkl_sushiswap", kind="latest_indexed"}
expr: latest_block_number{instance="external"} - on(chain) group_right sync_status_block_number{job="sushi", instance="merkl_sushiswap", kind="latest_indexed"}
fullMetaSearch: false
includeNullMetadata: true
instant: true
@ -788,7 +788,85 @@ groups:
uid: PBFA97CFB590B2093
disableTextWrap: false
editorMode: code
expr: latest_block_number - on(chain) group_right sync_status_block_number{job="ajna", instance="ajna", kind="latest_indexed"}
expr: latest_block_number{instance="external"} - on(chain) group_right sync_status_block_number{job="ajna", instance="ajna", kind="latest_indexed"}
fullMetaSearch: false
includeNullMetadata: true
instant: true
intervalMs: 1000
legendFormat: __auto
maxDataPoints: 43200
range: false
refId: diff
useBackend: false
- refId: latest_external
relativeTimeRange:
from: 600
to: 0
datasourceUid: PBFA97CFB590B2093
model:
datasource:
type: prometheus
uid: PBFA97CFB590B2093
editorMode: code
expr: latest_block_number{chain="filecoin"}
hide: false
instant: true
legendFormat: __auto
range: false
refId: latest_external
- refId: condition
relativeTimeRange:
from: 600
to: 0
datasourceUid: __expr__
model:
conditions:
- evaluator:
params:
- 0
- 0
type: gt
operator:
type: and
query:
params: []
reducer:
params: []
type: avg
type: query
datasource:
name: Expression
type: __expr__
uid: __expr__
expression: ${diff} >= 16
intervalMs: 1000
maxDataPoints: 43200
refId: condition
type: math
noDataState: Alerting
execErrState: Alerting
for: 15m
annotations:
summary: Watcher {{ index $labels "instance" }} of group {{ index $labels "job" }} is falling behind external head by {{ index $values "diff" }}
isPaused: false
# Secured Finance
- uid: secured_finance_diff_external
title: secured_finance_watcher_head_tracking
condition: condition
data:
- refId: diff
relativeTimeRange:
from: 600
to: 0
datasourceUid: PBFA97CFB590B2093
model:
datasource:
type: prometheus
uid: PBFA97CFB590B2093
disableTextWrap: false
editorMode: code
expr: latest_block_number{instance="external"} - on(chain) group_right sync_status_block_number{job="secured-finance", instance="secured-finance", kind="latest_indexed"}
fullMetaSearch: false
includeNullMetadata: true
instant: true

View File

@ -2,7 +2,6 @@
host = "0.0.0.0"
port = 3008
kind = "active"
gqlPath = "/"
# Checkpointing state.
checkpointing = true
@ -22,23 +21,30 @@
# Interval in number of blocks at which to clear entities cache.
clearEntitiesCacheInterval = 1000
# Max block range for which to return events in eventsInRange GQL query.
# Use -1 for skipping check on block range.
maxEventsBlockRange = 1000
# Flag to specify whether RPC endpoint supports block hash as block tag parameter
rpcSupportsBlockHashParam = false
# GQL cache settings
[server.gqlCache]
enabled = true
# Server GQL config
[server.gql]
path = "/"
# Max in-memory cache size (in bytes) (default 8 MB)
# maxCacheSize
# Max block range for which to return events in eventsInRange GQL query.
# Use -1 for skipping check on block range.
maxEventsBlockRange = 1000
# GQL cache-control max-age settings (in seconds)
maxAge = 15
timeTravelMaxAge = 86400 # 1 day
# Log directory for GQL requests
logDir = "./gql-logs"
# GQL cache settings
[server.gql.cache]
enabled = true
# Max in-memory cache size (in bytes) (default 8 MB)
# maxCacheSize
# GQL cache-control max-age settings (in seconds)
maxAge = 15
timeTravelMaxAge = 86400 # 1 day
[metrics]
host = "0.0.0.0"
@ -67,9 +73,9 @@
isFEVM = true
# Boolean flag to filter event logs by contracts
filterLogsByAddresses = false
filterLogsByAddresses = true
# Boolean flag to filter event logs by topics
filterLogsByTopics = false
filterLogsByTopics = true
[upstream.cache]
name = "requests"
@ -85,6 +91,9 @@
# Filecoin block time: https://docs.filecoin.io/basics/the-blockchain/blocks-and-tipsets#blocktime
blockDelayInMilliSecs = 30000
# Number of blocks by which block processing lags behind head
blockProcessingOffset = 0
# Boolean to switch between modes of processing events when starting the server.
# Setting to true will fetch filtered events and required blocks in a range of blocks and then process them.
# Setting to false will fetch blocks consecutively with its events and then process them (Behaviour is followed in realtime processing near head).

View File

@ -4,16 +4,19 @@ if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
echo "Using IPLD ETH RPC endpoint ${CERC_IPLD_ETH_RPC}"
echo "Using IPLD GQL endpoint ${CERC_IPLD_ETH_GQL}"
echo "Using ETH RPC endpoints ${CERC_ETH_RPC_ENDPOINTS}"
echo "Using IPLD GQL endpoint ${CERC_IPLD_ETH_GQL_ENDPOINT}"
echo "Using historicalLogsBlockRange ${CERC_HISTORICAL_BLOCK_RANGE:-2000}"
# Convert the comma-separated list in CERC_ETH_RPC_ENDPOINTS to a JSON array
RPC_ENDPOINTS_ARRAY=$(echo "$CERC_ETH_RPC_ENDPOINTS" | tr ',' '\n' | awk '{print "\"" $0 "\""}' | paste -sd, - | sed 's/^/[/; s/$/]/')
# Replace env variables in template TOML file
# Read in the config template TOML file and modify it
WATCHER_CONFIG_TEMPLATE=$(cat environments/watcher-config-template.toml)
WATCHER_CONFIG=$(echo "$WATCHER_CONFIG_TEMPLATE" | \
sed -E "s|REPLACE_WITH_CERC_IPLD_ETH_RPC|${CERC_IPLD_ETH_RPC}|g; \
s|REPLACE_WITH_CERC_IPLD_ETH_GQL|${CERC_IPLD_ETH_GQL}|g; \
sed -E "s|REPLACE_WITH_CERC_ETH_RPC_ENDPOINTS|${RPC_ENDPOINTS_ARRAY}|g; \
s|REPLACE_WITH_CERC_IPLD_ETH_GQL_ENDPOINT|${CERC_IPLD_ETH_GQL_ENDPOINT}|g; \
s|REPLACE_WITH_CERC_HISTORICAL_BLOCK_RANGE|${CERC_HISTORICAL_BLOCK_RANGE:-2000}| ")
# Write the modified content to a new file

View File

@ -4,16 +4,19 @@ if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
echo "Using IPLD ETH RPC endpoint ${CERC_IPLD_ETH_RPC}"
echo "Using IPLD GQL endpoint ${CERC_IPLD_ETH_GQL}"
echo "Using ETH RPC endpoints ${CERC_ETH_RPC_ENDPOINTS}"
echo "Using IPLD GQL endpoint ${CERC_IPLD_ETH_GQL_ENDPOINT}"
echo "Using historicalLogsBlockRange ${CERC_HISTORICAL_BLOCK_RANGE:-2000}"
# Convert the comma-separated list in CERC_ETH_RPC_ENDPOINTS to a JSON array
RPC_ENDPOINTS_ARRAY=$(echo "$CERC_ETH_RPC_ENDPOINTS" | tr ',' '\n' | awk '{print "\"" $0 "\""}' | paste -sd, - | sed 's/^/[/; s/$/]/')
# Replace env variables in template TOML file
# Read in the config template TOML file and modify it
WATCHER_CONFIG_TEMPLATE=$(cat environments/watcher-config-template.toml)
WATCHER_CONFIG=$(echo "$WATCHER_CONFIG_TEMPLATE" | \
sed -E "s|REPLACE_WITH_CERC_IPLD_ETH_RPC|${CERC_IPLD_ETH_RPC}|g; \
s|REPLACE_WITH_CERC_IPLD_ETH_GQL|${CERC_IPLD_ETH_GQL}|g; \
sed -E "s|REPLACE_WITH_CERC_ETH_RPC_ENDPOINTS|${RPC_ENDPOINTS_ARRAY}|g; \
s|REPLACE_WITH_CERC_IPLD_ETH_GQL_ENDPOINT|${CERC_IPLD_ETH_GQL_ENDPOINT}|g; \
s|REPLACE_WITH_CERC_HISTORICAL_BLOCK_RANGE|${CERC_HISTORICAL_BLOCK_RANGE:-2000}| ")
# Write the modified content to a new file

View File

@ -1,6 +1,7 @@
[server]
host = "0.0.0.0"
maxSimultaneousRequests = -1
[server.gql]
maxSimultaneousRequests = -1
[metrics]
host = "0.0.0.0"
@ -13,8 +14,8 @@
[upstream]
[upstream.ethServer]
gqlApiEndpoint = "REPLACE_WITH_CERC_IPLD_ETH_GQL"
rpcProviderEndpoint = "REPLACE_WITH_CERC_IPLD_ETH_RPC"
gqlApiEndpoint = "REPLACE_WITH_CERC_IPLD_ETH_GQL_ENDPOINT"
rpcProviderEndpoints = REPLACE_WITH_CERC_ETH_RPC_ENDPOINTS
[jobQueue]
historicalLogsBlockRange = REPLACE_WITH_CERC_HISTORICAL_BLOCK_RANGE

View File

@ -2,7 +2,6 @@
host = "0.0.0.0"
port = 3008
kind = "active"
gqlPath = '/'
# Checkpointing state.
checkpointing = true
@ -22,23 +21,30 @@
# Interval in number of blocks at which to clear entities cache.
clearEntitiesCacheInterval = 1000
# Max block range for which to return events in eventsInRange GQL query.
# Use -1 for skipping check on block range.
maxEventsBlockRange = 1000
# Flag to specify whether RPC endpoint supports block hash as block tag parameter
rpcSupportsBlockHashParam = false
# GQL cache settings
[server.gqlCache]
enabled = true
# Server GQL config
[server.gql]
path = "/"
# Max in-memory cache size (in bytes) (default 8 MB)
# maxCacheSize
# Max block range for which to return events in eventsInRange GQL query.
# Use -1 for skipping check on block range.
maxEventsBlockRange = 1000
# GQL cache-control max-age settings (in seconds)
maxAge = 15
timeTravelMaxAge = 86400 # 1 day
# Log directory for GQL requests
logDir = "./gql-logs"
# GQL cache settings
[server.gql.cache]
enabled = true
# Max in-memory cache size (in bytes) (default 8 MB)
# maxCacheSize
# GQL cache-control max-age settings (in seconds)
maxAge = 15
timeTravelMaxAge = 86400 # 1 day
[metrics]
host = "0.0.0.0"
@ -67,9 +73,9 @@
isFEVM = true
# Boolean flag to filter event logs by contracts
filterLogsByAddresses = false
filterLogsByAddresses = true
# Boolean flag to filter event logs by topics
filterLogsByTopics = false
filterLogsByTopics = true
[upstream.cache]
name = "requests"
@ -85,6 +91,9 @@
# Filecoin block time: https://docs.filecoin.io/basics/the-blockchain/blocks-and-tipsets#blocktime
blockDelayInMilliSecs = 30000
# Number of blocks by which block processing lags behind head
blockProcessingOffset = 0
# Boolean to switch between modes of processing events when starting the server.
# Setting to true will fetch filtered events and required blocks in a range of blocks and then process them.
# Setting to false will fetch blocks consecutively with its events and then process them (Behaviour is followed in realtime processing near head).

View File

@ -2,7 +2,6 @@
host = "0.0.0.0"
port = 3008
kind = "active"
gqlPath = "/"
# Checkpointing state.
checkpointing = true
@ -22,23 +21,30 @@
# Interval in number of blocks at which to clear entities cache.
clearEntitiesCacheInterval = 1000
# Max block range for which to return events in eventsInRange GQL query.
# Use -1 for skipping check on block range.
maxEventsBlockRange = 1000
# Flag to specify whether RPC endpoint supports block hash as block tag parameter
rpcSupportsBlockHashParam = false
# GQL cache settings
[server.gqlCache]
enabled = true
# Server GQL config
[server.gql]
path = "/"
# Max in-memory cache size (in bytes) (default 8 MB)
# maxCacheSize
# Max block range for which to return events in eventsInRange GQL query.
# Use -1 for skipping check on block range.
maxEventsBlockRange = 1000
# GQL cache-control max-age settings (in seconds)
maxAge = 15
timeTravelMaxAge = 86400 # 1 day
# Log directory for GQL requests
logDir = "./gql-logs"
# GQL cache settings
[server.gql.cache]
enabled = true
# Max in-memory cache size (in bytes) (default 8 MB)
# maxCacheSize
# GQL cache-control max-age settings (in seconds)
maxAge = 15
timeTravelMaxAge = 86400 # 1 day
[metrics]
host = "0.0.0.0"
@ -67,9 +73,9 @@
isFEVM = true
# Boolean flag to filter event logs by contracts
filterLogsByAddresses = false
filterLogsByAddresses = true
# Boolean flag to filter event logs by topics
filterLogsByTopics = false
filterLogsByTopics = true
[upstream.cache]
name = "requests"
@ -85,6 +91,9 @@
# Filecoin block time: https://docs.filecoin.io/basics/the-blockchain/blocks-and-tipsets#blocktime
blockDelayInMilliSecs = 30000
# Number of blocks by which block processing lags behind head
blockProcessingOffset = 0
# Boolean to switch between modes of processing events when starting the server.
# Setting to true will fetch filtered events and required blocks in a range of blocks and then process them.
# Setting to false will fetch blocks consecutively with their events and then process them (this behaviour is followed in realtime processing near the head).

View File

@ -1,6 +0,0 @@
FROM cerc/snowballtools-base-backend-base:local
WORKDIR /app/packages/backend
COPY run.sh .
ENTRYPOINT ["./run.sh"]

View File

@ -1,26 +0,0 @@
FROM ubuntu:22.04 as builder
RUN apt update && \
apt install -y --no-install-recommends --no-install-suggests \
ca-certificates curl gnupg
# Node
ARG NODE_MAJOR=20
RUN curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg && \
echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | tee /etc/apt/sources.list.d/nodesource.list && \
apt update && apt install -y nodejs
# npm setup
RUN npm config set @cerc-io:registry https://git.vdb.to/api/packages/cerc-io/npm/ && npm install -g yarn
COPY . /app/
WORKDIR /app/
RUN find . -name 'node_modules' | xargs -n1 rm -rf
RUN yarn && yarn build --ignore frontend
FROM cerc/webapp-base:local
COPY --from=builder /app /app
WORKDIR /app/packages/backend

View File

@ -1,10 +0,0 @@
#!/usr/bin/env bash
# Build cerc/webapp-deployer-backend
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
# See: https://stackoverflow.com/a/246128/1701505
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/snowballtools-base-backend-base:local ${build_command_args} -f ${SCRIPT_DIR}/Dockerfile-base ${CERC_REPO_BASE_DIR}/snowballtools-base
docker build -t cerc/snowballtools-base-backend:local ${build_command_args} ${SCRIPT_DIR}

View File

@ -1,19 +0,0 @@
#!/bin/bash
LACONIC_HOSTED_CONFIG_FILE=${LACONIC_HOSTED_CONFIG_FILE}
if [ -z "${LACONIC_HOSTED_CONFIG_FILE}" ]; then
if [ -f "/config/laconic-hosted-config.yml" ]; then
LACONIC_HOSTED_CONFIG_FILE="/config/laconic-hosted-config.yml"
elif [ -f "/config/config.yml" ]; then
LACONIC_HOSTED_CONFIG_FILE="/config/config.yml"
fi
fi
if [ -f "${LACONIC_HOSTED_CONFIG_FILE}" ]; then
/scripts/apply-webapp-config.sh $LACONIC_HOSTED_CONFIG_FILE "`pwd`/dist"
fi
/scripts/apply-runtime-env.sh "`pwd`/dist"
yarn start

View File

@ -6,5 +6,10 @@ WORKDIR /app
COPY . .
# Get the latest Git commit hash and set it in package.json
RUN COMMIT_HASH=$(git rev-parse HEAD) && \
jq --arg commitHash "$COMMIT_HASH" '.commitHash = $commitHash' package.json > tmp.json && \
mv tmp.json package.json
RUN echo "Installing dependencies and building ajna-watcher-ts" && \
yarn && yarn build

View File

@ -1,11 +1,20 @@
FROM node:18.16.0-alpine3.16
RUN apk --update --no-cache add git python3 alpine-sdk
RUN apk --update --no-cache add git python3 alpine-sdk jq
WORKDIR /app
COPY . .
# Get the latest Git commit hash and set it in package.json of all watchers
RUN COMMIT_HASH=$(git rev-parse HEAD) && \
find . -name 'package.json' -exec sh -c ' \
for packageFile; do \
jq --arg commitHash "$0" ".commitHash = \$commitHash" "$packageFile" > "$packageFile.tmp" && \
mv "$packageFile.tmp" "$packageFile"; \
done \
' "$COMMIT_HASH" {} \;
RUN echo "Building azimuth-watcher-ts" && \
yarn && yarn build
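For reference, a single-file equivalent of the `find`/`sh -c` loop above, as a standalone sketch: the commit hash is passed to the inline shell as `$0`, and each matched `package.json` path arrives as a positional argument that `for packageFile` iterates over (the `./package.json` path here is illustrative).

```bash
COMMIT_HASH=$(git rev-parse HEAD)
sh -c '
  for packageFile; do
    # "$0" holds the commit hash passed in after the script body
    jq --arg commitHash "$0" ".commitHash = \$commitHash" "$packageFile" > "$packageFile.tmp" && \
      mv "$packageFile.tmp" "$packageFile"
  done
' "$COMMIT_HASH" ./package.json
```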

View File

@ -6,5 +6,10 @@ WORKDIR /app
COPY . .
# Get the latest Git commit hash and set it in package.json
RUN COMMIT_HASH=$(git rev-parse HEAD) && \
jq --arg commitHash "$COMMIT_HASH" '.commitHash = $commitHash' package.json > tmp.json && \
mv tmp.json package.json
RUN echo "Installing dependencies and building merkl-sushiswap-v3-watcher-ts" && \
yarn && yarn build

View File

@ -6,5 +6,10 @@ WORKDIR /app
COPY . .
# Get the latest Git commit hash and set it in package.json
RUN COMMIT_HASH=$(git rev-parse HEAD) && \
jq --arg commitHash "$COMMIT_HASH" '.commitHash = $commitHash' package.json > tmp.json && \
mv tmp.json package.json
RUN echo "Installing dependencies and building sushiswap-v3-watcher-ts" && \
yarn && yarn build

View File

@ -32,7 +32,7 @@ RUN \
# [Optional] Uncomment this section to install additional OS packages.
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
&& apt-get -y install --no-install-recommends jq gettext-base
&& apt-get -y install --no-install-recommends jq gettext-base git
# [Optional] Uncomment if you want to install an additional version of node using nvm
# ARG EXTRA_NODE_VERSION=10

View File

@ -8,11 +8,13 @@ CERC_WEBAPP_FILES_DIR="${CERC_WEBAPP_FILES_DIR:-/data}"
CERC_ENABLE_CORS="${CERC_ENABLE_CORS:-false}"
CERC_SINGLE_PAGE_APP="${CERC_SINGLE_PAGE_APP}"
if [ -z "${CERC_SINGLE_PAGE_APP}" ] && [ 1 -eq $(find "${CERC_WEBAPP_FILES_DIR}" -name '*.html' | wc -l) ]; then
# If there only one HTML file, we assume an SPA.
CERC_SINGLE_PAGE_APP=true
else
CERC_SINGLE_PAGE_APP=false
if [ -z "${CERC_SINGLE_PAGE_APP}" ]; then
# If there is only one HTML file, assume an SPA.
if [ 1 -eq $(find "${CERC_WEBAPP_FILES_DIR}" -name '*.html' | wc -l) ]; then
CERC_SINGLE_PAGE_APP=true
else
CERC_SINGLE_PAGE_APP=false
fi
fi
# ${var,,} is a lower-case comparison
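The `${var,,}` comment above refers to bash lower-casing a variable before comparison; a small sketch using a variable from this script (the surrounding check is illustrative):

```bash
# ${CERC_ENABLE_CORS,,} lower-cases the value, so "TRUE", "True" and "true" all match
if [ "true" = "${CERC_ENABLE_CORS,,}" ]; then
  echo "CORS will be enabled"
fi
```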

View File

@ -2,7 +2,7 @@ version: "1.0"
name: ajna
description: "Ajna watcher stack"
repos:
- git.vdb.to/cerc-io/ajna-watcher-ts@v0.1.6
- git.vdb.to/cerc-io/ajna-watcher-ts@v0.1.13
containers:
- cerc/watcher-ajna
pods:

View File

@ -4,7 +4,7 @@ Instructions to setup and deploy Azimuth Watcher stack
## Setup
Prerequisite: `ipld-eth-server` RPC and GQL endpoints
Prerequisite: External RPC endpoints
Clone required repositories:
@ -44,34 +44,42 @@ network:
- 0.0.0.0:9000:9000
azimuth-watcher-server:
- 0.0.0.0:3001:3001
- 0.0.0.0:9001:9001
censures-watcher-job-runner:
- 0.0.0.0:9002:9002
censures-watcher-server:
- 0.0.0.0:3002:3002
- 0.0.0.0:9003:9003
claims-watcher-job-runner:
- 0.0.0.0:9004:9004
claims-watcher-server:
- 0.0.0.0:3003:3003
- 0.0.0.0:9005:9005
conditional-star-release-watcher-job-runner:
- 0.0.0.0:9006:9006
conditional-star-release-watcher-server:
- 0.0.0.0:3004:3004
- 0.0.0.0:9007:9007
delegated-sending-watcher-job-runner:
- 0.0.0.0:9008:9008
delegated-sending-watcher-server:
- 0.0.0.0:3005:3005
- 0.0.0.0:9009:9009
ecliptic-watcher-job-runner:
- 0.0.0.0:9010:9010
ecliptic-watcher-server:
- 0.0.0.0:3006:3006
- 0.0.0.0:9011:9011
linear-star-release-watcher-job-runner:
- 0.0.0.0:9012:9012
linear-star-release-watcher-server:
- 0.0.0.0:3007:3007
- 0.0.0.0:9013:9013
polls-watcher-job-runner:
- 0.0.0.0:9014:9014
polls-watcher-server:
- 0.0.0.0:3008:3008
- 0.0.0.0:9015:9015
gateway-server:
- 0.0.0.0:4000:4000
...
@ -94,7 +102,7 @@ Inside the deployment directory, open the file `config.env` and add variable to
```bash
# External RPC endpoints
CERC_IPLD_ETH_RPC=
CERC_ETH_RPC_ENDPOINTS=https://example-rpc-endpoint-1,https://example-rpc-endpoint-2
```
* NOTE: If the RPC endpoint is on the host machine, use `host.docker.internal` as the hostname to access the host port, or use the `ip a` command to find the IP address of the `docker0` interface (usually something like `172.17.0.1` or `172.18.0.1`)
@ -120,4 +128,7 @@ To stop all azimuth services and also delete data:
```bash
laconic-so deployment --dir azimuth-deployment stop --delete-volumes
# Remove deployment directory (deployment will have to be recreated for a re-run)
rm -r azimuth-deployment
```

View File

@ -1,7 +1,7 @@
version: "1.0"
name: azimuth
repos:
- github.com/cerc-io/azimuth-watcher-ts@v0.1.3
- github.com/cerc-io/azimuth-watcher-ts@0.1.6
containers:
- cerc/watcher-azimuth
pods:

View File

@ -43,16 +43,18 @@ customized by editing the "spec" file generated by `laconic-so deploy init`.
```
$ cat graph-node-spec.yml
stack: graph-node
ports:
graph-node:
- '8000:8000'
- '8001'
- '8020:8020'
- '8030'
ipfs:
- '8080'
- '4001'
- '5001:5001'
network:
ports:
graph-node:
- '8000:8000'
- '8001'
- '8020:8020'
- '8030'
- '8040'
ipfs:
- '8080'
- '4001'
- '5001:5001'
...
```
@ -64,7 +66,7 @@ laconic-so --stack graph-node deploy create --spec-file graph-node-spec.yml --de
## Start the stack
Create an env file with the following values to be set before starting the stack:
Update the `config.env` file inside the deployment directory with the following values before starting the stack:
```bash
# Set ETH RPC endpoint the graph node will use
@ -88,10 +90,13 @@ export GRAPH_ETHEREUM_REQUEST_RETRIES=
# Maximum number of blocks to scan for triggers in each request (default: 2000)
export GRAPH_ETHEREUM_MAX_BLOCK_RANGE_SIZE=
# Maximum number of concurrent requests made against Ethereum for requesting transaction receipts during block ingestion (default: 1000)
export GRAPH_ETHEREUM_BLOCK_INGESTOR_MAX_CONCURRENT_JSON_RPC_CALLS_FOR_TXN_RECEIPTS=
# Ref: https://git.vdb.to/cerc-io/graph-node/src/branch/master/docs/environment-variables.md
```
Example env file:
Example `config.env` file:
```bash
export ETH_RPC_HOST=filecoin.chainup.net
@ -104,12 +109,6 @@ export GRAPH_ETHEREUM_REQUEST_RETRIES=5
export GRAPH_ETHEREUM_MAX_BLOCK_RANGE_SIZE=50
```
Set the environment variables:
```bash
source <PATH_TO_ENV_FILE>
```
Deploy the stack:
```bash

View File

@ -3,7 +3,9 @@ name: graph-node
description: "Stack for running graph-node"
repos:
- github.com/graphprotocol/graph-node
- github.com/cerc-io/watcher-ts@v0.2.92
containers:
- cerc/graph-node
- cerc/watcher-ts
pods:
- graph-node

View File

@ -14,10 +14,11 @@
# along with this program. If not, see <http:#www.gnu.org/licenses/>.
from stack_orchestrator.util import get_yaml
from stack_orchestrator.deploy.deploy_types import DeployCommandContext, LaconicStackSetupCommand, DeploymentContext
from stack_orchestrator.deploy.deploy_types import DeployCommandContext, LaconicStackSetupCommand
from stack_orchestrator.deploy.deployment_context import DeploymentContext
from stack_orchestrator.deploy.stack_state import State
from stack_orchestrator.deploy.deploy_util import VolumeMapping, run_container_command
from stack_orchestrator.command_types import CommandOptions
from stack_orchestrator.opts import opts
from enum import Enum
from pathlib import Path
from shutil import copyfile, copytree
@ -61,7 +62,7 @@ def _get_node_moniker_from_config(network_dir: Path):
return moniker
def _get_node_key_from_gentx(options: CommandOptions, gentx_file_name: str):
def _get_node_key_from_gentx(gentx_file_name: str):
gentx_file_path = Path(gentx_file_name)
if gentx_file_path.exists():
with open(Path(gentx_file_name), "rb") as f:
@ -76,24 +77,24 @@ def _comma_delimited_to_list(list_str: str):
return list_str.split(",") if list_str else []
def _get_node_keys_from_gentx_files(options: CommandOptions, gentx_file_list: str):
def _get_node_keys_from_gentx_files(gentx_file_list: str):
node_keys = []
gentx_files = _comma_delimited_to_list(gentx_file_list)
for gentx_file in gentx_files:
node_key = _get_node_key_from_gentx(options, gentx_file)
node_key = _get_node_key_from_gentx(gentx_file)
if node_key:
node_keys.append(node_key)
return node_keys
def _copy_gentx_files(options: CommandOptions, network_dir: Path, gentx_file_list: str):
def _copy_gentx_files(network_dir: Path, gentx_file_list: str):
gentx_files = _comma_delimited_to_list(gentx_file_list)
for gentx_file in gentx_files:
gentx_file_path = Path(gentx_file)
copyfile(gentx_file_path, os.path.join(network_dir, "config", "gentx", os.path.basename(gentx_file_path)))
def _remove_persistent_peers(options: CommandOptions, network_dir: Path):
def _remove_persistent_peers(network_dir: Path):
config_file_path = _config_toml_path(network_dir)
if not config_file_path.exists():
print("Error: config.toml not found")
@ -107,7 +108,7 @@ def _remove_persistent_peers(options: CommandOptions, network_dir: Path):
output_file.write(config_file_content)
def _insert_persistent_peers(options: CommandOptions, config_dir: Path, new_persistent_peers: str):
def _insert_persistent_peers(config_dir: Path, new_persistent_peers: str):
config_file_path = config_dir.joinpath("config.toml")
if not config_file_path.exists():
print("Error: config.toml not found")
@ -150,7 +151,7 @@ def _phase_from_params(parameters):
def setup(command_context: DeployCommandContext, parameters: LaconicStackSetupCommand, extra_args):
options = command_context.cluster_context.options
options = opts.o
currency = "stake" # Does this need to be a parameter?
@ -237,7 +238,7 @@ def setup(command_context: DeployCommandContext, parameters: LaconicStackSetupCo
print("Error: --gentx-files must be supplied")
sys.exit(1)
# First look in the supplied gentx files for the other nodes' keys
other_node_keys = _get_node_keys_from_gentx_files(options, parameters.gentx_file_list)
other_node_keys = _get_node_keys_from_gentx_files(parameters.gentx_file_list)
# Add those keys to our genesis, with balances we determine here (why?)
for other_node_key in other_node_keys:
outputk, statusk = run_container_command(
@ -246,7 +247,7 @@ def setup(command_context: DeployCommandContext, parameters: LaconicStackSetupCo
if options.debug:
print(f"Command output: {outputk}")
# Copy the gentx json files into our network dir
_copy_gentx_files(options, network_dir, parameters.gentx_file_list)
_copy_gentx_files(network_dir, parameters.gentx_file_list)
# Now we can run collect-gentxs
output1, status1 = run_container_command(
command_context, "laconicd", f"laconicd collect-gentxs --home {laconicd_home_path_in_container}", mounts)
@ -255,7 +256,7 @@ def setup(command_context: DeployCommandContext, parameters: LaconicStackSetupCo
print(f"Generated genesis file, please copy to other nodes as required: \
{os.path.join(network_dir, 'config', 'genesis.json')}")
# Last thing, collect-gentxs puts a likely bogus set of persistent_peers in config.toml so we remove that now
_remove_persistent_peers(options, network_dir)
_remove_persistent_peers(network_dir)
# In both cases we validate the genesis file now
output2, status1 = run_container_command(
command_context, "laconicd", f"laconicd validate-genesis --home {laconicd_home_path_in_container}", mounts)
@ -266,7 +267,7 @@ def setup(command_context: DeployCommandContext, parameters: LaconicStackSetupCo
sys.exit(1)
def create(context: DeploymentContext, extra_args):
def create(deployment_context: DeploymentContext, extra_args):
network_dir = extra_args[0]
if network_dir is None:
print("Error: --network-dir must be supplied")
@ -285,15 +286,15 @@ def create(context: DeploymentContext, extra_args):
sys.exit(1)
# Copy the network directory contents into our deployment
# TODO: change this to work with non local paths
deployment_config_dir = context.deployment_dir.joinpath("data", "laconicd-config")
deployment_config_dir = deployment_context.deployment_dir.joinpath("data", "laconicd-config")
copytree(config_dir_path, deployment_config_dir, dirs_exist_ok=True)
# If supplied, add the initial persistent peers to the config file
if extra_args[1]:
initial_persistent_peers = extra_args[1]
_insert_persistent_peers(context.command_context.cluster_context.options, deployment_config_dir, initial_persistent_peers)
_insert_persistent_peers(deployment_config_dir, initial_persistent_peers)
# Copy the data directory contents into our deployment
# TODO: change this to work with non local paths
deployment_data_dir = context.deployment_dir.joinpath("data", "laconicd-data")
deployment_data_dir = deployment_context.deployment_dir.joinpath("data", "laconicd-data")
copytree(data_dir_path, deployment_data_dir, dirs_exist_ok=True)
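The changes above drop the `options: CommandOptions` parameter that was threaded through the helper functions in favour of the global options object; a minimal sketch of the resulting pattern, using the module and attribute names from the diff (the helper itself is illustrative):

```python
from stack_orchestrator.opts import opts

def _some_helper(network_dir):
    # Read the global options instead of receiving an options argument
    if opts.o.debug:
        print(f"Working in {network_dir}")
```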

View File

@ -2,7 +2,7 @@ version: "1.0"
name: merkl-sushiswap-v3
description: "SushiSwap v3 watcher stack"
repos:
- github.com/cerc-io/merkl-sushiswap-v3-watcher-ts@v0.1.10
- github.com/cerc-io/merkl-sushiswap-v3-watcher-ts@v0.1.14
containers:
- cerc/watcher-merkl-sushiswap-v3
pods:

View File

@ -134,6 +134,29 @@ Note: Use `host.docker.internal` as host to access ports on the host machine
Place the dashboard JSON files in the grafana dashboards config directory (`monitoring-deployment/config/monitoring/grafana/dashboards`) in the deployment folder
#### Graph Node Config
For the graph-node dashboard, a postgres datasource needs to be set up in `monitoring-deployment/config/monitoring/grafana/provisioning/datasources/graph-node-postgres.yml` (in the deployment folder)
```yml
# graph-node-postgres.yml
...
datasources:
- name: Graph Node Postgres
type: postgres
jsonData:
# Set name to remote graph-node database name
database: graph-node
...
# Set user to remote graph-node database username
user: graph-node
# Add URL for remote graph-node database
url: graph-node-db:5432
# Set password for graph-node database
secureJsonData:
password: 'password'
```
### Env
Set the following env variables in the deployment env config file (`monitoring-deployment/config.env`):
@ -156,6 +179,11 @@ Set the following env variables in the deployment env config file (`monitoring-d
# Grafana server host URL (used in various links in alerts, etc.)
# (Optional, default: http://localhost:3000)
GF_SERVER_ROOT_URL=
# RPC endpoint used by graph-node for upstream head metric
# (Optional, default: https://mainnet.infura.io/v3)
GRAPH_NODE_RPC_ENDPOINT=
```
## Start the stack

View File

@ -57,35 +57,35 @@ Add the following scrape configs to prometheus config file (`monitoring-watchers
metrics_path: /metrics
scheme: http
static_configs:
- targets: ['AZIMUTH_WATCHER_HOST:AZIMUTH_WATCHER_PORT']
- targets: ['AZIMUTH_WATCHER_HOST:AZIMUTH_WATCHER_METRICS_PORT', 'AZIMUTH_WATCHER_HOST:AZIMUTH_WATCHER_GQL_METRICS_PORT']
labels:
instance: 'azimuth'
chain: 'ethereum'
- targets: ['CENSURES_WATCHER_HOST:CENSURES_WATCHER_PORT']
- targets: ['CENSURES_WATCHER_HOST:CENSURES_WATCHER_METRICS_PORT', 'CENSURES_WATCHER_HOST:CENSURES_WATCHER_GQL_METRICS_PORT']
labels:
instance: 'censures'
chain: 'ethereum'
- targets: ['CLAIMS_WATCHER_HOST:CLAIMS_WATCHER_PORT']
- targets: ['CLAIMS_WATCHER_HOST:CLAIMS_WATCHER_METRICS_PORT', 'CLAIMS_WATCHER_HOST:CLAIMS_WATCHER_GQL_METRICS_PORT']
labels:
instance: 'claims'
chain: 'ethereum'
- targets: ['CONDITIONAL_STAR_RELEASE_WATCHER_HOST:CONDITIONAL_STAR_RELEASE_WATCHER_PORT']
- targets: ['CONDITIONAL_STAR_RELEASE_WATCHER_HOST:CONDITIONAL_STAR_RELEASE_WATCHER_METRICS_PORT', 'CONDITIONAL_STAR_RELEASE_WATCHER_HOST:CONDITIONAL_STAR_RELEASE_WATCHER_GQL_METRICS_PORT']
labels:
instance: 'conditional_star_release'
chain: 'ethereum'
- targets: ['DELEGATED_SENDING_WATCHER_HOST:DELEGATED_SENDING_WATCHER_PORT']
- targets: ['DELEGATED_SENDING_WATCHER_HOST:DELEGATED_SENDING_WATCHER_METRICS_PORT', 'DELEGATED_SENDING_WATCHER_HOST:DELEGATED_SENDING_WATCHER_GQL_METRICS_PORT']
labels:
instance: 'delegated_sending'
chain: 'ethereum'
- targets: ['ECLIPTIC_WATCHER_HOST:ECLIPTIC_WATCHER_PORT']
- targets: ['ECLIPTIC_WATCHER_HOST:ECLIPTIC_WATCHER_METRICS_PORT', 'ECLIPTIC_WATCHER_HOST:ECLIPTIC_WATCHER_GQL_METRICS_PORT']
labels:
instance: 'ecliptic'
chain: 'ethereum'
- targets: ['LINEAR_STAR_WATCHER_HOST:LINEAR_STAR_WATCHER_PORT']
- targets: ['LINEAR_STAR_WATCHER_HOST:LINEAR_STAR_WATCHER_METRICS_PORT', 'LINEAR_STAR_WATCHER_HOST:LINEAR_STAR_WATCHER_GQL_METRICS_PORT']
labels:
instance: 'linear_star_release'
chain: 'ethereum'
- targets: ['POLLS_WATCHER_HOST:POLLS_WATCHER_PORT']
- targets: ['POLLS_WATCHER_HOST:POLLS_WATCHER_METRICS_PORT', 'POLLS_WATCHER_HOST:POLLS_WATCHER_GQL_METRICS_PORT']
labels:
instance: 'polls'
chain: 'ethereum'
@ -95,11 +95,11 @@ Add the following scrape configs to prometheus config file (`monitoring-watchers
metrics_path: /metrics
scheme: http
static_configs:
- targets: ['SUSHISWAP_WATCHER_HOST:SUSHISWAP_WATCHER_PORT']
- targets: ['SUSHISWAP_WATCHER_HOST:SUSHISWAP_WATCHER_METRICS_PORT', 'SUSHISWAP_WATCHER_HOST:SUSHISWAP_WATCHER_GQL_METRICS_PORT']
labels:
instance: 'sushiswap'
chain: 'filecoin'
- targets: ['MERKLE_SUSHISWAP_WATCHER_HOST:MERKLE_SUSHISWAP_WATCHER_PORT']
- targets: ['MERKLE_SUSHISWAP_WATCHER_HOST:MERKLE_SUSHISWAP_WATCHER_METRICS_PORT', 'MERKLE_SUSHISWAP_WATCHER_HOST:MERKLE_SUSHISWAP_WATCHER_GQL_METRICS_PORT']
labels:
instance: 'merkl_sushiswap'
chain: 'filecoin'
@ -109,7 +109,7 @@ Add the following scrape configs to prometheus config file (`monitoring-watchers
metrics_path: /metrics
scheme: http
static_configs:
- targets: ['AJNA_WATCHER_HOST:AJNA_WATCHER_PORT']
- targets: ['AJNA_WATCHER_HOST:AJNA_WATCHER_METRICS_PORT', 'AJNA_WATCHER_HOST:AJNA_WATCHER_GQL_METRICS_PORT']
labels:
instance: 'ajna'
chain: 'filecoin'

View File

@ -1,7 +1,7 @@
version: "0.1"
name: monitoring
repos:
- github.com/cerc-io/watcher-ts@v0.2.81
- github.com/cerc-io/watcher-ts@v0.2.92
containers:
- cerc/watcher-ts
pods:

View File

@ -1,10 +0,0 @@
version: "1.0"
name: snowballtools-base-backend
description: "snowballtools-base-backend"
repos:
- github.com/snowball-tools/snowballtools-base
containers:
- cerc/webapp-base
- cerc/snowballtools-base-backend
pods:
- snowballtools-base-backend

View File

@ -2,7 +2,7 @@ version: "1.0"
name: sushiswap-v3
description: "SushiSwap v3 watcher stack"
repos:
- github.com/cerc-io/sushiswap-v3-watcher-ts@v0.1.10
- github.com/cerc-io/sushiswap-v3-watcher-ts@v0.1.14
containers:
- cerc/watcher-sushiswap-v3
pods:

View File

@ -29,16 +29,29 @@ def _image_needs_pushed(image: str):
return image.endswith(":local")
def _remote_tag_for_image(image: str, remote_repo_url: str):
# Turns image tags of the form: foo/bar:local into remote.repo/org/bar:deploy
major_parts = image.split("/", 2)
image_name_with_version = major_parts[1] if 2 == len(major_parts) else major_parts[0]
(image_name, image_version) = image_name_with_version.split(":")
if image_version == "local":
return f"{remote_repo_url}/{image_name}:deploy"
else:
return image
# Note: do not add any calls to this function
def remote_image_exists(remote_repo_url: str, local_tag: str):
docker = DockerClient()
try:
remote_tag = remote_tag_for_image(local_tag, remote_repo_url)
remote_tag = _remote_tag_for_image(local_tag, remote_repo_url)
result = docker.manifest.inspect(remote_tag)
return True if result else False
except Exception: # noqa: E722
return False
# Note: do not add any calls to this function
def add_tags_to_image(remote_repo_url: str, local_tag: str, *additional_tags):
if not additional_tags:
return
@ -47,18 +60,20 @@ def add_tags_to_image(remote_repo_url: str, local_tag: str, *additional_tags):
raise Exception(f"{local_tag} does not exist in {remote_repo_url}")
docker = DockerClient()
remote_tag = remote_tag_for_image(local_tag, remote_repo_url)
new_remote_tags = [remote_tag_for_image(tag, remote_repo_url) for tag in additional_tags]
remote_tag = _remote_tag_for_image(local_tag, remote_repo_url)
new_remote_tags = [_remote_tag_for_image(tag, remote_repo_url) for tag in additional_tags]
docker.buildx.imagetools.create(sources=[remote_tag], tags=new_remote_tags)
def remote_tag_for_image(image: str, remote_repo_url: str):
def remote_tag_for_image_unique(image: str, remote_repo_url: str, deployment_id: str):
# Turns image tags of the form: foo/bar:local into remote.repo/org/bar:deploy
major_parts = image.split("/", 2)
image_name_with_version = major_parts[1] if 2 == len(major_parts) else major_parts[0]
(image_name, image_version) = image_name_with_version.split(":")
if image_version == "local":
return f"{remote_repo_url}/{image_name}:deploy"
# Salt the tag with part of the deployment id to make it unique to this deployment
deployment_tag = deployment_id[-8:]
return f"{remote_repo_url}/{image_name}:deploy-{deployment_tag}"
else:
return image
@ -73,14 +88,14 @@ def push_images_operation(command_context: DeployCommandContext, deployment_cont
docker = DockerClient()
for image in images:
if _image_needs_pushed(image):
remote_tag = remote_tag_for_image(image, remote_repo_url)
remote_tag = remote_tag_for_image_unique(image, remote_repo_url, deployment_context.id)
if opts.o.verbose:
print(f"Tagging {image} to {remote_tag}")
docker.image.tag(image, remote_tag)
# Run docker push commands to upload
for image in images:
if _image_needs_pushed(image):
remote_tag = remote_tag_for_image(image, remote_repo_url)
remote_tag = remote_tag_for_image_unique(image, remote_repo_url, deployment_context.id)
if opts.o.verbose:
print(f"Pushing image {remote_tag}")
docker.image.push(remote_tag)
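For illustration, the expected behaviour of `remote_tag_for_image_unique()` as defined above (the registry URL and deployment id are made-up values):

```python
remote_repo_url = "registry.example.com/my-org"
deployment_id = "laconic-0123456789abcdef"

# A ":local" image is re-tagged for the remote repo, salted with the last
# 8 characters of the deployment id:
remote_tag_for_image_unique("cerc/my-app:local", remote_repo_url, deployment_id)
# -> "registry.example.com/my-org/my-app:deploy-89abcdef"

# Images not tagged ":local" are returned unchanged:
remote_tag_for_image_unique("nginx:1.25", remote_repo_url, deployment_id)
# -> "nginx:1.25"
```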

View File

@ -26,7 +26,7 @@ from stack_orchestrator.deploy.k8s.helpers import envs_from_environment_variable
from stack_orchestrator.deploy.deploy_util import parsed_pod_files_map_from_file_names, images_for_deployment
from stack_orchestrator.deploy.deploy_types import DeployEnvVars
from stack_orchestrator.deploy.spec import Spec, Resources, ResourceLimits
from stack_orchestrator.deploy.images import remote_tag_for_image
from stack_orchestrator.deploy.images import remote_tag_for_image_unique
DEFAULT_VOLUME_RESOURCES = Resources({
"reservations": {"storage": "2Gi"}
@ -326,8 +326,11 @@ class ClusterInfo:
if opts.o.debug:
print(f"Merged envs: {envs}")
# Re-write the image tag for remote deployment
image_to_use = remote_tag_for_image(
image, self.spec.get_image_registry()) if self.spec.get_image_registry() is not None else image
# Note self.app_name has the same value as deployment_id
image_to_use = remote_tag_for_image_unique(
image,
self.spec.get_image_registry(),
self.app_name) if self.spec.get_image_registry() is not None else image
volume_mounts = volume_mounts_for_service(self.parsed_pod_yaml_map, service_name)
container = client.V1Container(
name=container_name,

View File

@ -24,7 +24,7 @@ import uuid
import click
from stack_orchestrator.deploy.images import remote_image_exists, add_tags_to_image
from stack_orchestrator.deploy.images import remote_image_exists
from stack_orchestrator.deploy.webapp import deploy_webapp
from stack_orchestrator.deploy.webapp.util import (LaconicRegistryClient, TimedLogger,
build_container_image, push_container_image,
@ -99,44 +99,62 @@ def process_app_deployment_request(
deployment_record = laconic.get_record(app_deployment_crn)
deployment_dir = os.path.join(deployment_parent_dir, fqdn)
# At present we use this to generate a unique but stable ID for the app's host container
# TODO: implement support to derive this transparently from the already-unique deployment id
unique_deployment_id = hashlib.md5(fqdn.encode()).hexdigest()[:16]
deployment_config_file = os.path.join(deployment_dir, "config.env")
# TODO: Is there any reason not to simplify the hash input to the app_deployment_crn?
deployment_container_tag = "laconic-webapp/%s:local" % hashlib.md5(deployment_dir.encode()).hexdigest()
deployment_container_tag = "laconic-webapp/%s:local" % unique_deployment_id
app_image_shared_tag = f"laconic-webapp/{app.id}:local"
# b. check for deployment directory (create if necessary)
if not os.path.exists(deployment_dir):
if deployment_record:
raise Exception("Deployment record %s exists, but not deployment dir %s. Please remove name." %
(app_deployment_crn, deployment_dir))
print("deploy_webapp", deployment_dir)
logger.log(f"Creating webapp deployment in: {deployment_dir} with container id: {deployment_container_tag}")
deploy_webapp.create_deployment(ctx, deployment_dir, deployment_container_tag,
f"https://{fqdn}", kube_config, image_registry, env_filename)
elif env_filename:
shutil.copyfile(env_filename, deployment_config_file)
needs_k8s_deploy = False
if force_rebuild:
logger.log("--force-rebuild is enabled so the container will always be built now, even if nothing has changed in the app")
# 6. build container (if needed)
if not deployment_record or deployment_record.attributes.application != app.id:
# TODO: add a comment that explains what this code is doing (not clear to me)
if not deployment_record or deployment_record.attributes.application != app.id or force_rebuild:
needs_k8s_deploy = True
# check if the image already exists
shared_tag_exists = remote_image_exists(image_registry, app_image_shared_tag)
# Note: in the code below, calls to add_tags_to_image() won't work at present.
# This is because SO deployment code in general re-names the container image
# to be unique to the deployment. This is done transparently
# and so when we call add_tags_to_image() here and try to add tags to the remote image,
# we get the image name wrong. Accordingly I've disabled the relevant code for now.
# This is safe because we are running with --force-rebuild at present
if shared_tag_exists and not force_rebuild:
# simply add our unique tag to the existing image and we are done
logger.log(f"Using existing app image {app_image_shared_tag} for {deployment_container_tag}")
add_tags_to_image(image_registry, app_image_shared_tag, deployment_container_tag)
logger.log(
f"(SKIPPED) Existing image found for this app: {app_image_shared_tag} "
"tagging it with: {deployment_container_tag} to use in this deployment"
)
# add_tags_to_image(image_registry, app_image_shared_tag, deployment_container_tag)
logger.log("Tag complete")
else:
extra_build_args = [] # TODO: pull from request
logger.log(f"Building container image {deployment_container_tag}")
logger.log(f"Building container image: {deployment_container_tag}")
build_container_image(app, deployment_container_tag, extra_build_args, logger)
logger.log("Build complete")
logger.log(f"Pushing container image {deployment_container_tag}")
logger.log(f"Pushing container image: {deployment_container_tag}")
push_container_image(deployment_dir, logger)
logger.log("Push complete")
# The build/push commands above will use the unique deployment tag, so now we need to add the shared tag.
logger.log(f"Updating app image tag {app_image_shared_tag} from build of {deployment_container_tag}")
add_tags_to_image(image_registry, deployment_container_tag, app_image_shared_tag)
logger.log(
f"(SKIPPED) Adding global app image tag: {app_image_shared_tag} to newly built image: {deployment_container_tag}"
)
# add_tags_to_image(image_registry, deployment_container_tag, app_image_shared_tag)
logger.log("Tag complete")
else:
logger.log("Requested app is already deployed, skipping build and image push")
# 7. update config (if needed)
if not deployment_record or file_hash(deployment_config_file) != deployment_record.attributes.meta.config:
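A small sketch of how the stable container id and image tags above are derived (the fqdn and app id values are made-up):

```python
import hashlib

fqdn = "my-app.example.com"
unique_deployment_id = hashlib.md5(fqdn.encode()).hexdigest()[:16]

deployment_container_tag = "laconic-webapp/%s:local" % unique_deployment_id
app_id = "example-app-record-id"
app_image_shared_tag = f"laconic-webapp/{app_id}:local"
```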

View File

@ -242,6 +242,7 @@ def determine_base_container(clone_dir, app_type="webapp"):
def build_container_image(app_record, tag, extra_build_args=[], logger=None):
tmpdir = tempfile.mkdtemp()
# TODO: determine if this code could be calling into the Python git library like setup-repositories
try:
record_id = app_record["id"]
ref = app_record.attributes.repository_ref
@ -249,6 +250,16 @@ def build_container_image(app_record, tag, extra_build_args=[], logger=None):
clone_dir = os.path.join(tmpdir, record_id)
logger.log(f"Cloning repository {repo} to {clone_dir} ...")
# Set github credentials if present running a command like:
# git config --global url."https://${TOKEN}:@github.com/".insteadOf "https://github.com/"
github_token = os.environ.get("DEPLOYER_GITHUB_TOKEN")
if github_token:
logger.log("Github token detected, setting it in the git environment")
git_config_args = [
"git", "config", "--global", f"url.https://{github_token}:@github.com/.insteadOf", "https://github.com/"
]
result = subprocess.run(git_config_args, stdout=logger.file, stderr=logger.file)
result.check_returncode()
if ref:
# TODO: Determine branch or hash, and use depth 1 if we can.
git_env = dict(os.environ.copy())
@ -265,6 +276,7 @@ def build_container_image(app_record, tag, extra_build_args=[], logger=None):
logger.log(f"git checkout failed. Does ref {ref} exist?")
raise e
else:
# TODO: why is this code different vs the branch above (run vs check_call, and no prompt disable)?
result = subprocess.run(["git", "clone", "--depth", "1", repo, clone_dir], stdout=logger.file, stderr=logger.file)
result.check_returncode()
@ -299,11 +311,12 @@ def push_container_image(deployment_dir, logger):
def deploy_to_k8s(deploy_record, deployment_dir, logger):
if not deploy_record:
command = "up"
command = "start"
else:
command = "update"
logger.log("Deploying to k8s ...")
logger.log(f"Running {command} command on deployment dir: {deployment_dir}")
result = subprocess.run([sys.argv[0], "deployment", "--dir", deployment_dir, command],
stdout=logger.file, stderr=logger.file)
result.check_returncode()

View File

@ -22,16 +22,16 @@ echo "$(date +"%Y-%m-%d %T"): Stack started"
# Verify that the fixturenet is up and running
$TEST_TARGET_SO --stack fixturenet-laconicd deploy --cluster laconicd ps
# Wait for the laconicd endpoint to come up
echo "Waiting for the RPC endpoint to come up"
docker exec laconicd-laconicd-1 sh -c "curl --retry 20 --retry-delay 3 --retry-connrefused http://127.0.0.1:9473/api"
# Get the fixturenet account address
laconicd_account_address=$(docker exec laconicd-laconicd-1 laconicd keys list | awk '/- address:/ {print $3}')
# Copy over config
docker exec laconicd-cli-1 cp config.yml laconic-registry-cli/
# Wait for the laconicd endpoint to come up
echo "Waiting for the RPC endpoint to come up"
docker exec laconicd-laconicd-1 sh -c "curl --retry 20 --retry-delay 3 --retry-connrefused http://127.0.0.1:9473/api"
# Run the tests
echo "Running the tests"
docker exec -e TEST_ACCOUNT=$laconicd_account_address laconicd-cli-1 sh -c 'cd laconic-registry-cli && yarn && yarn test'