Compare commits

..

70 Commits

Author SHA1 Message Date
68132c3305 Reduce alert pending period for watcher alerts 2024-04-22 18:48:56 +05:30
cfc411bfe4 Use classic condition for watcher alert rules 2024-04-22 18:47:34 +05:30
abf4d39a22 Add http status code check to alert rule 2024-04-15 18:42:26 +05:30
23d527720f Add a custom label to blackbox metrics and update instructions 2024-04-15 17:51:17 +05:30
1746f7366c Add alerts on blackbox metrics for monitoring endpoints 2024-04-15 17:02:50 +05:30
345d200873 Add a laconicd Grafana dashboard to monitoring stack (#799)
Part of https://www.notion.so/Monitoring-and-alerting-for-laconicd-86727c3b4dde4dc993d87d6e29f935fe

- Add a laconicd Grafana dashboard
  - Update fixturenet-laconicd script to expose metrics
- Upgrade Grafana version to avoid errors while saving changes made to a dashboard (see [thread](https://community.grafana.com/t/error-cannot-add-property-ishandled-object-is-not-extensible/109268))
- Add an alert rule for Ajna watcher

Reviewed-on: cerc-io/stack-orchestrator#799
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-04-11 07:59:36 +00:00
87fffca358 fixturenet-plugeth Deneb/Cancun upgrade (#789)
Updates fixturenet-plugeth stack for the Deneb fork based on Geth v1.13.x:

- updates the genesis generator tool and simplifies the config: the default from `ethereum-genesis-generator` can be used for a from-genesis Merged chain.

Reviewed-on: cerc-io/stack-orchestrator#789
Reviewed-by: jonathanface <jonathanface@noreply.git.vdb.to>
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
2024-04-11 03:21:36 +00:00
66b92df498 Merge pull request 'Blast stack' (#777) from blast-stack into main
Reviewed-on: cerc-io/stack-orchestrator#777
2024-04-08 15:51:10 +00:00
108a5a3440 Merge branch 'main' into blast-stack 2024-04-08 15:21:03 +00:00
4b04a39faf linted 2024-04-08 15:17:44 +00:00
40f362511b Run CI alert steps only on main (#797)
Part of https://www.notion.so/Alerting-for-failing-CI-jobs-d0183b65453947aeab11dbddf989d9c0

- Run CI alert steps only on main to avoid alerts for in-progress PRs
- The Slack alerts will be sent on a CI job failure (see the guard sketch after this list) if:
  - A commit is pushed directly to main
  - A PR gets merged into main
  - A scheduled job runs on main
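
For reference, the guard shared by all the notify steps in this changeset pairs `always()` with a branch check, so the step is evaluated even after a failing step but only fires on main:

```
- name: Notify Vulcanize Slack on CI failure
  if: ${{ always() && github.ref_name == 'main' }}
  uses: ravsamhq/notify-slack-action@v2
  with:
    status: ${{ job.status }}
    notify_when: 'failure'   # alert only when the job actually failed
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
```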

Reviewed-on: cerc-io/stack-orchestrator#797
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-04-05 09:26:08 +00:00
9cd34ffebb Add Slack alerts for failures on CI workflows (#793)
Part of https://www.notion.so/Alerting-for-failing-CI-jobs-d0183b65453947aeab11dbddf989d9c0

Reviewed-on: cerc-io/stack-orchestrator#793
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-04-05 08:27:46 +00:00
515f6d16f5 Fix laconic registry CLI tests (#792)
Part of https://www.notion.so/Test-registry-cli-in-SO-fixturenet-laconicd-CI-ef1f497678264362931bd12643ba8a17

Reviewed-on: cerc-io/stack-orchestrator#792
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-04-04 10:50:02 +00:00
105805cb9b Run registry CLI tests as part of laconicd fixturenet tests (#791)
Part of https://www.notion.so/Test-registry-cli-in-SO-fixturenet-laconicd-CI-ef1f497678264362931bd12643ba8a17

Co-authored-by: neeraj <neeraj.rtly@gmail.com>
Reviewed-on: cerc-io/stack-orchestrator#791
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-04-04 07:16:46 +00:00
jonathan@vulcanize.io 62f1962546 metrics on op-node 2024-04-02 17:35:13 +00:00
jonathan@vulcanize.io 2a24e71c92 added metrics addr flag 2024-04-02 16:20:25 +00:00
jonathan@vulcanize.io c789b82782 metrics 2024-04-01 19:38:00 +00:00
d2442bcc9b revert 5308ab1e4e (#788)
revert Blind commit to fix laconic CLI calls after rename. (#784)

`laconic cns` got renamed to `laconic registry`, which breaks all the scripts and commands that use it.

Reviewed-on: cerc-io/stack-orchestrator#784
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>

Reviewed-on: cerc-io/stack-orchestrator#788
2024-03-27 20:55:03 +00:00
jonathan@vulcanize.io b3bc5a19ae blast testnet, initial commit 2024-03-27 15:03:30 +00:00
44faf36837 Update ajna-watcher-ts version for using new subgraph (#786)
Part of https://www.notion.so/Run-ajna-finance-subgraph-watcher-87748d78cd7a471b8d71f50d5fdc2657

Reviewed-on: cerc-io/stack-orchestrator#786
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Co-committed-by: Nabarun <nabarun@deepstacksoft.com>
2024-03-26 14:52:57 +00:00
18b006468d Update GQL path for ajna subgraph watcher server (#785)
Part of https://www.notion.so/Run-ajna-finance-subgraph-watcher-87748d78cd7a471b8d71f50d5fdc2657

Reviewed-on: cerc-io/stack-orchestrator#785
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Co-committed-by: Nabarun <nabarun@deepstacksoft.com>
2024-03-26 11:50:05 +00:00
5308ab1e4e Blind commit to fix laconic CLI calls after rename. (#784)
`laconic cns` got renamed to `laconic registry`, which breaks all the scripts and commands that use it.

Reviewed-on: cerc-io/stack-orchestrator#784
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-03-25 19:09:26 +00:00
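
Illustrative only (command names per the rename described above; any script using the old form needs the new one):

```
# before the rename
laconic cns status
# after the rename
laconic registry status
```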
cd50832038 Add an Ajna watcher stack (#781)
Part of https://www.notion.so/Generate-ajna-finance-subgraph-watcher-with-codegen-5b80ac149b3f449fb138f5d92cc5485e

Reviewed-on: cerc-io/stack-orchestrator#781
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-03-21 07:17:01 +00:00
jonathan@vulcanize.io 7f9e1da8ba removed keycloak 2024-03-15 15:15:15 +00:00
jonathan@vulcanize.io 0149346927 adding trustrpc flag to op-node 2024-03-13 18:18:57 +00:00
jonathan@vulcanize.io 06de4fe485 follow established naming convention 2024-03-13 16:21:29 +00:00
aeddc82ebc Remove latest indexed block value from watcher alerts data (#780)
Part of https://www.notion.so/Setup-grafana-SO-stack-for-monitoring-watchers-7e23042c296c4de6b8676f1f604aa03c

Reviewed-on: cerc-io/stack-orchestrator#780
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-03-13 07:16:15 +00:00
jonathan@vulcanize.io 821d401575 fixed missing rollup 2024-03-13 04:28:16 +00:00
jonathan@vulcanize.io 5123111db0 copy whether absolute path or local 2024-03-13 03:56:17 +00:00
jonathan@vulcanize.io 02c33cb229 working state 2024-03-13 02:49:21 +00:00
17e860d6e4 Update subgraph watcher versions and instructions to use deployments (#775)
Part of https://www.notion.so/Setup-watchers-on-sandman-34b5514a10634c6fbf3ec338967c871c

Reviewed-on: cerc-io/stack-orchestrator#775
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-03-12 05:32:55 +00:00
jonathan@vulcanize.io b4eda902ea fixed optimum deployment 2024-03-08 19:27:36 +00:00
jonathan@vulcanize.io b4df8104c8 tweaking yml 2024-03-08 17:20:44 +00:00
jonathan@vulcanize.io 07282cdd6e minimal build 2024-03-08 02:56:23 +00:00
jonathan@vulcanize.io e7c935fb78 integration testing, I think 2024-03-07 22:24:46 +00:00
523b5779be Auto-detect which certificate to use (including wildcards). (#779)
Rather than always requesting a certificate, attempt to re-use an existing certificate if it already exists in the k8s cluster. This includes matching against a wildcard certificate.

Reviewed-on: cerc-io/stack-orchestrator#779
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-03-07 17:38:36 +00:00
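
A minimal sketch of the wildcard-matching idea, with invented names (the actual implementation lives in the k8s deploy code and is not shown in this compare):

```
def cert_covers(cert_host: str, fqdn: str) -> bool:
    # Hypothetical helper: a wildcard host like "*.example.com"
    # covers exactly one extra label, per TLS wildcard rules.
    if cert_host.startswith("*."):
        head, _, tail = fqdn.partition(".")
        return bool(head) and tail == cert_host[2:]
    return cert_host == fqdn

# cert_covers("*.example.com", "app.example.com")  -> True
# cert_covers("*.example.com", "a.b.example.com")  -> False
```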
jonathan@vulcanize.io 1a636799a6 filepath 2024-03-06 19:34:25 +00:00
jonathan@vulcanize.io 0aa4b350bd keycloak implementation 2024-03-06 19:21:29 +00:00
62f7ce649d Exit non-0 if docker build fails. (#778)
Make sure to check the exit code of the docker build and bubble it back up to laconic-so.

Reviewed-on: cerc-io/stack-orchestrator#778
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-03-06 18:38:30 +00:00
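
The pattern of the fix (the real change is in the build code shown later in this compare, which checks `build_result.returncode`); a minimal sketch:

```
import subprocess
import sys

# hypothetical wrapper: run the build and propagate a non-zero exit code
result = subprocess.run("docker build -t example:local .", shell=True)
if result.returncode != 0:
    sys.exit(result.returncode)  # bubble the failure up to laconic-so
```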
jonathan@vulcanize.io 2252252072 comment format 2024-03-05 20:42:53 +00:00
jonathan@vulcanize.io c92f15f47c comment format 2024-03-05 20:38:09 +00:00
jonathan@vulcanize.io fee32ec703 copying genesis.json to /data/blast-data for blast 2024-03-05 20:32:56 +00:00
fb55c1425e Beginnings of blast stack 2024-03-04 15:05:31 -07:00
cc541ac20f Use -slim variant for Dockerfile (#773)
This saves about 1GB of space in the image.

Reviewed-on: cerc-io/stack-orchestrator#773
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-28 04:37:57 +00:00
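
Illustrative Dockerfile change (base image name and version are assumptions, not taken from the diff):

```
# before
FROM python:3.11
# after: the -slim variant omits compilers and docs, saving about 1GB
FROM python:3.11-slim
```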
10e2311a8b Add timed logging for the webapp build (#771)
Add lots of log and timer output to webapp builds.

Reviewed-on: cerc-io/stack-orchestrator#771
2024-02-28 00:38:11 +00:00
f32bbf9e48 Merge pull request 'Doc for fetch-containers command' (#772) from dboreham/fetch-containers-doc into main
Reviewed-on: cerc-io/stack-orchestrator#772
2024-02-27 18:45:18 +00:00
0302153162 Doc for fetch-containers command 2024-02-27 11:44:28 -07:00
01e4437b62 Merge pull request 'Sort order was backwards' (#770) from dboreham/fix-container-age-sort into main
Reviewed-on: cerc-io/stack-orchestrator#770
2024-02-27 16:01:17 +00:00
64cec163b3 Sort order was backwards 2024-02-27 09:00:36 -07:00
170ad71397 fetch-containers-fixes (#769)
Reviewed-on: cerc-io/stack-orchestrator#769
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-02-27 15:53:05 +00:00
da1ff609fe fetch-images command (#768)
Implementation of a command to fetch pre-built images from a remote registry, complementing the --push-images option already present on build-containers.

The two subcommands used together allow a stack to be deployed without needing to build its images, provided they have already been built and pushed to the specified container image registry.

This implementation simply picks the newest image with the right name and platform (matches against the platform Python is running on, so watch out for scenarios where Python is an x86 binary on M1 macs).

Reviewed-on: cerc-io/stack-orchestrator#768
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-02-27 15:15:08 +00:00
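
Together with `--push-images` on `build-containers` (added elsewhere in this compare), the intended build-once/deploy-many flow looks like this (stack name and registry are placeholders):

```
# build machine: build and publish
laconic-so --stack <stack> build-containers --publish-images --image-registry git.vdb.to/cerc-io

# deploy machine: fetch instead of building
laconic-so --stack <stack> fetch-containers --image-registry git.vdb.to/cerc-io --registry-username <user> --registry-token <token>
```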
21eb9f036f Add support for pnpm as a webapp build tool. (#767)
This adds support for auto-detecting pnpm as a build tool for webapps.

Reviewed-on: cerc-io/stack-orchestrator#767
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-26 23:31:52 +00:00
a0413659f7 Check for existing tag in remote repo before building. (#764)
webapps are meant to be build-once/deploy-many, but we were rebuilding them for every request. This changes that, so that we rebuild only once per unique ApplicationRecord.

When we push the image, we now tag it according to its ApplicationRecord.

We don't want to use that tag directly in the compose file for the deployment, however, as the deployment needs to be able to adjust to new builds w/o re-writing the file all the time. Instead, we use a per-deployment unique tag (same as before); we just update what image it references as needed.

Reviewed-on: cerc-io/stack-orchestrator#764
2024-02-24 03:22:49 +00:00
a16fc657bf clarify uniswap urbit readme (#766)
Co-authored-by: zramsay <zach@bluecollarcoding.ca>
Reviewed-on: cerc-io/stack-orchestrator#766
2024-02-24 00:15:53 +00:00
704c42c404 Use a catchall for single page apps. (#763)
This creates a new environment variable, CERC_SINGLE_PAGE_APP, which controls whether a catchall redirection back to / is applied.

If the value is not explicitly set, we try to detect if the page looks like a single-page app.

Reviewed-on: cerc-io/stack-orchestrator#763
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-23 20:32:24 +00:00
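
For illustration, a catchall of this kind in nginx-style config (an assumption for the sketch; the diff does not show which server the webapp container runs):

```
location / {
    # serve real files if present, otherwise fall back to the single-page app entry point
    try_files $uri $uri/ /index.html;
}
```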
202f187172 Fix copy/paste error 2024-02-23 13:15:37 -07:00
aaed356d32 Simple container image publication (#762)
Reviewed-on: cerc-io/stack-orchestrator#762
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-02-23 19:57:47 +00:00
2af6ffce77 Tweaks for running the container registry in k8s (#760)
Minor tweaks for running the container-registry in k8s.  The big change is not requiring --image-registry.

Reviewed-on: cerc-io/stack-orchestrator#760
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
2024-02-22 21:11:06 +00:00
7bb86cf35e Merge pull request 'Add ARM fixturenet-plugeth test job' (#759) from dboreham/arm-fixturenet-test into main
Reviewed-on: cerc-io/stack-orchestrator#759
2024-02-22 20:45:26 +00:00
9e0892cb6b No need to start docker now 2024-02-22 13:36:00 -07:00
cf9cf6346f Add an arm-specific version of the plugeth fixturenet test 2024-02-22 13:34:37 -07:00
642c0ead0d Add test for two config parameters (#758)
Reviewed-on: cerc-io/stack-orchestrator#758
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-02-22 19:35:55 +00:00
6bd77c893a Even more logging fixes (#757)
Hopefully the last one for a bit.

This only outputs the cmdline if log_file is present (i.e., not to stdout). It also fixes a bug where the log_file was not passed in one line.

Reviewed-on: cerc-io/stack-orchestrator#757
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-22 01:24:44 +00:00
4a4d48ddb9 Fix error when logging exception. (#756)
Reviewed-on: cerc-io/stack-orchestrator#756
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-22 00:11:06 +00:00
08438b1cd5 More logging for webapp deployments (#755)
Reviewed-on: cerc-io/stack-orchestrator#755
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-21 23:48:52 +00:00
9f1dd284a5 Better error logging for registry deployments. (#754)
We were missing errors related to registration; this should fix that.

Reviewed-on: cerc-io/stack-orchestrator#754
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-21 20:12:52 +00:00
5985242b8c Merge pull request 'Update doc for fixturenet-laconic-loaded stack' (#753) from dboreham/update-laconic-stack-doc into main
Reviewed-on: cerc-io/stack-orchestrator#753
2024-02-21 17:33:04 +00:00
d4152b7ce3 Update doc for fixturenet-laconic-loaded stack 2024-02-21 10:29:15 -07:00
db4986dcc6 snowballtools-base backend stack (#751)
This adds a stack for the backend from snowball/snowballtools-base.

Reviewed-on: cerc-io/stack-orchestrator#751
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-21 04:45:46 +00:00
65f05ea80c Run a manual build script, if present. (#750)
If the tree has a 'build-webapp.sh' script, use that.

Reviewed-on: cerc-io/stack-orchestrator#750
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-21 00:20:50 +00:00
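
A hypothetical minimal `build-webapp.sh` (the script contract beyond "run it if present" is not specified in this compare):

```
#!/usr/bin/env bash
# Custom build steps for this webapp, run by laconic-so in place of the default build
set -e
npm install
npm run build
```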
110 changed files with 35858 additions and 704 deletions


@@ -0,0 +1,61 @@
name: Fixturenet-Eth-Plugeth-Arm-Test

on:
  push:
    branches: '*'
    paths:
      - '!**'
      - '.gitea/workflows/triggers/fixturenet-eth-plugeth-arm-test'
  schedule: # Note: coordinate with other tests to not overload runners at the same time of day
    - cron: '2 14 * * *'

# Needed until we can incorporate docker startup into the executor container
env:
  DOCKER_HOST: unix:///var/run/dind.sock

jobs:
  test:
    name: "Run an Ethereum plugeth fixturenet test"
    runs-on: ubuntu-latest-arm
    steps:
      - name: "Clone project repository"
        uses: actions/checkout@v3
      # At present the stock setup-python action fails on Linux/aarch64
      # Conditional steps below workaround this by using deadsnakes for that case only
      - name: "Install Python for ARM on Linux"
        if: ${{ runner.arch == 'arm64' && runner.os == 'Linux' }}
        uses: deadsnakes/action@v3.0.1
        with:
          python-version: '3.8'
      - name: "Install Python cases other than ARM on Linux"
        if: ${{ ! (runner.arch == 'arm64' && runner.os == 'Linux') }}
        uses: actions/setup-python@v4
        with:
          python-version: '3.8'
      - name: "Print Python version"
        run: python3 --version
      - name: "Install shiv"
        run: pip install shiv
      - name: "Generate build version file"
        run: ./scripts/create_build_tag_file.sh
      - name: "Build local shiv package"
        run: ./scripts/build_shiv_package.sh
      - name: "Run fixturenet-eth tests"
        run: ./tests/fixturenet-eth-plugeth/run-test.sh
      - name: Notify Vulcanize Slack on CI failure
        if: ${{ always() && github.ref_name == 'main' }}
        uses: ravsamhq/notify-slack-action@v2
        with:
          status: ${{ job.status }}
          notify_when: 'failure'
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
      - name: Notify DeepStack Slack on CI failure
        if: ${{ always() && github.ref_name == 'main' }}
        uses: ravsamhq/notify-slack-action@v2
        with:
          status: ${{ job.status }}
          notify_when: 'failure'
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}


@@ -47,3 +47,19 @@ jobs:
           sleep 5
       - name: "Run fixturenet-eth tests"
         run: ./tests/fixturenet-eth-plugeth/run-test.sh
+      - name: Notify Vulcanize Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
+      - name: Notify DeepStack Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}


@@ -45,4 +45,19 @@ jobs:
           sleep 5
       - name: "Run fixturenet-eth tests"
         run: ./tests/fixturenet-eth/run-test.sh
+      - name: Notify Vulcanize Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
+      - name: Notify DeepStack Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}


@@ -11,7 +11,7 @@ on:
 jobs:
   test:
-    name: "Run an Laconicd fixturenet test"
+    name: "Run Laconicd fixturenet and Laconic CLI tests"
     runs-on: ubuntu-latest
     steps:
       - name: 'Update'
@@ -46,3 +46,21 @@ jobs:
         run: ./scripts/build_shiv_package.sh
       - name: "Run fixturenet-laconicd tests"
         run: ./tests/fixturenet-laconicd/run-test.sh
+      - name: "Run laconic CLI tests"
+        run: ./tests/fixturenet-laconicd/run-cli-test.sh
+      - name: Notify Vulcanize Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
+      - name: Notify DeepStack Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}


@@ -19,3 +19,19 @@ jobs:
           python-version: '3.8'
       - name : "Run flake8"
         uses: py-actions/flake8@v2
+      - name: Notify Vulcanize Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
+      - name: Notify DeepStack Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}


@@ -54,3 +54,19 @@ jobs:
           # Hack using endsWith to workaround Gitea sometimes sending "publish-test" vs "refs/heads/publish-test"
           draft: ${{ endsWith('publish-test', github.ref ) }}
           files: ./laconic-so
+      - name: Notify Vulcanize Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
+      - name: Notify DeepStack Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}


@@ -51,4 +51,19 @@ jobs:
           source /opt/bash-utils/cgroup-helper.sh
           join_cgroup
           ./tests/container-registry/run-test.sh
+      - name: Notify Vulcanize Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
+      - name: Notify DeepStack Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}


@@ -49,4 +49,19 @@ jobs:
           source /opt/bash-utils/cgroup-helper.sh
           join_cgroup
           ./tests/database/run-test.sh
+      - name: Notify Vulcanize Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
+      - name: Notify DeepStack Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}


@@ -47,3 +47,19 @@ jobs:
           sleep 5
       - name: "Run deploy tests"
         run: ./tests/deploy/run-deploy-test.sh
+      - name: Notify Vulcanize Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
+      - name: Notify DeepStack Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}


@@ -51,4 +51,19 @@ jobs:
           source /opt/bash-utils/cgroup-helper.sh
           join_cgroup
           ./tests/k8s-deploy/run-deploy-test.sh
+      - name: Notify Vulcanize Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
+      - name: Notify DeepStack Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}


@@ -49,3 +49,19 @@ jobs:
           sleep 5
       - name: "Run webapp tests"
         run: ./tests/webapp-test/run-webapp-test.sh
+      - name: Notify Vulcanize Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
+      - name: Notify DeepStack Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}


@@ -47,5 +47,19 @@ jobs:
           sleep 5
       - name: "Run smoke tests"
         run: ./tests/smoke-test/run-smoke-test.sh
+      - name: Notify Vulcanize Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
+      - name: Notify DeepStack Slack on CI failure
+        if: ${{ always() && github.ref_name == 'main' }}
+        uses: ravsamhq/notify-slack-action@v2
+        with:
+          status: ${{ job.status }}
+          notify_when: 'failure'
+        env:
+          SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}


@@ -0,0 +1,2 @@
Change this file to trigger running the fixturenet-eth-plugeth-arm-test CI job


@@ -1,3 +1,6 @@
 Change this file to trigger running the fixturenet-laconicd-test CI job
 Trigger
 Trigger
+Trigger
+Trigger
+Trigger


@@ -0,0 +1,9 @@
# Fetching pre-built container images

When Stack Orchestrator deploys a stack containing a suite of one or more containers, it expects images for those containers to be on the local machine with a tag of the form `<image-name>:local`. Images for these containers can be built from source (and optionally from base container images from public registries) with the `build-containers` subcommand.

However, the task of building a large number of containers from source may consume considerable time and machine resources. This is where the `fetch-containers` subcommand steps in: it works exactly like `build-containers`, but instead fetches pre-built images from an image registry and re-tags them for deployment. It can be used in place of `build-containers` for any stack, provided the necessary containers, built for the local machine architecture (e.g. arm64 or x86-64), have already been published to an image registry.

## Usage

To use `fetch-containers`, provide an image registry path, a username and token/password with read access to the registry, and optionally specify `--force-local-overwrite`. If this argument is not specified and there is already a locally built or previously fetched image for a stack container on the machine, it will not be overwritten and a warning will be issued.

```
$ laconic-so --stack mobymask-v3-demo fetch-containers --image-registry git.vdb.to/cerc-io --registry-username <registry-user> --registry-token <registry-token> --force-local-overwrite
```


@@ -15,12 +15,11 @@ To avoid hiccups on Mac M1/M2 and any local machine nuances that may affect the
 16 GB Memory / 8 Intel vCPUs / 160 GB Disk.
 1. Login to the droplet as root (either by SSH key or password set in the DO console)
    ```
    ssh root@IP
    ```
-2. Get the install script, give it executable permissions, and run it:
+1. Get the install script, give it executable permissions, and run it:
    ```
    curl -o install.sh https://raw.githubusercontent.com/cerc-io/stack-orchestrator/main/scripts/quick-install-linux.sh
@@ -32,7 +31,7 @@ chmod +x install.sh
    bash install.sh
    ```
-3. Confirm docker was installed and activate the changes in `~/.profile`:
+1. Confirm docker was installed and activate the changes in `~/.profile`:
    ```
    docker run hello-world
@@ -41,7 +40,7 @@ docker run hello-world
    source ~/.profile
    ```
-4. Verify installation:
+1. Verify installation:
    ```
    laconic-so version
@@ -55,13 +54,7 @@ laconic-so version
    laconic-so --stack fixturenet-laconic-loaded setup-repositories --include git.vdb.to/cerc-io/laconicd,git.vdb.to/cerc-io/laconic-sdk,git.vdb.to/cerc-io/laconic-registry-cli,git.vdb.to/cerc-io/laconic-console
    ```
-2. Set this environment variable to the Laconic self-hosted Gitea instance:
-   ```
-   export CERC_NPM_REGISTRY_URL=https://git.vdb.to/api/packages/cerc-io/npm/
-   ```
-3. Build the containers:
+1. Build the containers:
    ```
    laconic-so --stack fixturenet-laconic-loaded build-containers
@@ -69,22 +62,34 @@ laconic-so --stack fixturenet-laconic-loaded build-containers
    It's possible to run into an `ESOCKETTIMEDOUT` error, e.g., `error An unexpected error occurred: "https://registry.yarnpkg.com/@material-ui/icons/-/icons-4.11.3.tgz: ESOCKETTIMEDOUT"`. This may happen even if you have a great internet connection. In that case, re-run the `build-containers` command.
-4. Set this environment variable to your droplet's IP address:
+1. Set this environment variable to your droplet's IP address or fully qualified DNS host name if it has one:
    ```
-   export LACONIC_HOSTED_ENDPOINT=http://<your-IP>
+   export BACKEND_ENDPOINT=http://<your-IP-or-hostname>:9473
+   ```
+   e.g.
+   ```
+   export BACKEND_ENDPOINT=http://my-test-server.example.com:9473
    ```
-5. Deploy the stack:
+1. Create a deployment directory for the stack:
+   ```
+   laconic-so --stack fixturenet-laconic-loaded deploy init --output laconic-loaded.spec --map-ports-to-host any-same --config LACONIC_HOSTED_ENDPOINT=$BACKEND_ENDPOINT
+   ```
+   ```
+   laconic-so --stack fixturenet-laconic-loaded deploy create --deployment-dir laconic-loaded-deployment --spec-file laconic-loaded.spec
+   ```
+2. Start the stack:
    ```
-   laconic-so --stack fixturenet-laconic-loaded deploy up
+   laconic-so deployment --dir laconic-loaded-deployment start
    ```
-6. Check the logs:
+3. Check the logs:
    ```
-   laconic-so --stack fixturenet-laconic-loaded deploy logs
+   laconic-so deployment --dir laconic-loaded-deployment logs
    ```
    You'll see output from `laconicd` and the block height should be >1 to confirm it is running:
@@ -102,10 +107,10 @@ laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF committ
    laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF indexed block exents height=13 module=txindex server=node
    ```
-7. Confirm operation of the registry CLI:
+4. Confirm operation of the registry CLI:
    ```
-   laconic-so --stack fixturenet-laconic-loaded deploy exec cli "laconic cns status"
+   laconic-so deployment --dir laconic-loaded-deployment exec cli "laconic cns status"
    ```
    ```
@@ -141,6 +146,8 @@ laconic-so --stack fixturenet-laconic-loaded deploy exec cli "laconic cns status
 ## Configure Digital Ocean firewall
+(Note this step may not be necessary depending on the droplet image used)
 Let's open some ports.
 1. In the Digital Ocean web console, navigate to your droplet's main page. Select the "Networking" tab and scroll down to "Firewall".


@@ -25,10 +25,13 @@ import sys
 from decouple import config
 import subprocess
 import click
-import importlib.resources
 from pathlib import Path
-from stack_orchestrator.util import include_exclude_check, get_parsed_stack_config, stack_is_external, warn_exit
+from stack_orchestrator.opts import opts
+from stack_orchestrator.util import include_exclude_check, stack_is_external, error_exit
 from stack_orchestrator.base import get_npm_registry_url
+from stack_orchestrator.build.build_types import BuildContext
+from stack_orchestrator.build.publish import publish_image
+from stack_orchestrator.build.build_util import get_containers_in_scope

 # TODO: find a place for this
 # epilog="Config provided either in .env or settings.ini or env vars: CERC_REPO_BASE_DIR (defaults to ~/cerc)"
@@ -59,69 +62,58 @@ def make_container_build_env(dev_root_path: str,
     return container_build_env

-def process_container(stack: str,
-                      container,
-                      container_build_dir: str,
-                      container_build_env: dict,
-                      dev_root_path: str,
-                      quiet: bool,
-                      verbose: bool,
-                      dry_run: bool,
-                      continue_on_error: bool,
-                      ):
-    if not quiet:
-        print(f"Building: {container}")
+def process_container(build_context: BuildContext) -> bool:
+    if not opts.o.quiet:
+        print(f"Building: {build_context.container}")

-    default_container_tag = f"{container}:local"
-    container_build_env.update({"CERC_DEFAULT_CONTAINER_IMAGE_TAG": default_container_tag})
+    default_container_tag = f"{build_context.container}:local"
+    build_context.container_build_env.update({"CERC_DEFAULT_CONTAINER_IMAGE_TAG": default_container_tag})

     # Check if this is in an external stack
-    if stack_is_external(stack):
-        container_parent_dir = Path(stack).joinpath("container-build")
-        temp_build_dir = container_parent_dir.joinpath(container.replace("/", "-"))
+    if stack_is_external(build_context.stack):
+        container_parent_dir = Path(build_context.stack).joinpath("container-build")
+        temp_build_dir = container_parent_dir.joinpath(build_context.container.replace("/", "-"))
         temp_build_script_filename = temp_build_dir.joinpath("build.sh")
         # Now check if the container exists in the external stack.
         if not temp_build_script_filename.exists():
             # If not, revert to building an internal container
-            container_parent_dir = container_build_dir
+            container_parent_dir = build_context.container_build_dir
     else:
-        container_parent_dir = container_build_dir
+        container_parent_dir = build_context.container_build_dir

-    build_dir = container_parent_dir.joinpath(container.replace("/", "-"))
+    build_dir = container_parent_dir.joinpath(build_context.container.replace("/", "-"))
     build_script_filename = build_dir.joinpath("build.sh")

-    if verbose:
+    if opts.o.verbose:
         print(f"Build script filename: {build_script_filename}")
     if os.path.exists(build_script_filename):
         build_command = build_script_filename.as_posix()
     else:
-        if verbose:
+        if opts.o.verbose:
             print(f"No script file found: {build_script_filename}, using default build script")
-        repo_dir = container.split('/')[1]
+        repo_dir = build_context.container.split('/')[1]
         # TODO: make this less of a hack -- should be specified in some metadata somewhere
         # Check if we have a repo for this container. If not, set the context dir to the container-build subdir
-        repo_full_path = os.path.join(dev_root_path, repo_dir)
+        repo_full_path = os.path.join(build_context.dev_root_path, repo_dir)
         repo_dir_or_build_dir = repo_full_path if os.path.exists(repo_full_path) else build_dir
-        build_command = os.path.join(container_build_dir,
+        build_command = os.path.join(build_context.container_build_dir,
                                      "default-build.sh") + f" {default_container_tag} {repo_dir_or_build_dir}"
-    if not dry_run:
+    if not opts.o.dry_run:
         # No PATH at all causes failures with podman.
-        if "PATH" not in container_build_env:
-            container_build_env["PATH"] = os.environ["PATH"]
-        if verbose:
-            print(f"Executing: {build_command} with environment: {container_build_env}")
-        build_result = subprocess.run(build_command, shell=True, env=container_build_env)
-        if verbose:
+        if "PATH" not in build_context.container_build_env:
+            build_context.container_build_env["PATH"] = os.environ["PATH"]
+        if opts.o.verbose:
+            print(f"Executing: {build_command} with environment: {build_context.container_build_env}")
+        build_result = subprocess.run(build_command, shell=True, env=build_context.container_build_env)
+        if opts.o.verbose:
             print(f"Return code is: {build_result.returncode}")
         if build_result.returncode != 0:
-            print(f"Error running build for {container}")
-            if not continue_on_error:
-                print("FATAL Error: container build failed and --continue-on-error not set, exiting")
-                sys.exit(1)
-            else:
-                print("****** Container Build Error, continuing because --continue-on-error is set")
+            return False
+        else:
+            return True
     else:
         print("Skipped")
+    return True

 @click.command()
@@ -129,17 +121,14 @@ def process_container(stack: str,
 @click.option('--exclude', help="don't build these containers")
 @click.option("--force-rebuild", is_flag=True, default=False, help="Override dependency checking -- always rebuild")
 @click.option("--extra-build-args", help="Supply extra arguments to build")
+@click.option("--publish-images", is_flag=True, default=False, help="Publish the built images in the specified image registry")
+@click.option("--image-registry", help="Specify the image registry for --publish-images")
 @click.pass_context
-def command(ctx, include, exclude, force_rebuild, extra_build_args):
+def command(ctx, include, exclude, force_rebuild, extra_build_args, publish_images, image_registry):
     '''build the set of containers required for a complete stack'''

-    quiet = ctx.obj.quiet
-    verbose = ctx.obj.verbose
-    dry_run = ctx.obj.dry_run
-    debug = ctx.obj.debug
     local_stack = ctx.obj.local_stack
     stack = ctx.obj.stack
-    continue_on_error = ctx.obj.continue_on_error

     # See: https://stackoverflow.com/questions/25389095/python-get-path-of-root-project-structure
     container_build_dir = Path(__file__).absolute().parent.parent.joinpath("data", "container-build")
@@ -150,41 +139,45 @@ def command(ctx, include, exclude, force_rebuild, extra_build_args):
     else:
         dev_root_path = os.path.expanduser(config("CERC_REPO_BASE_DIR", default="~/cerc"))

-    if not quiet:
+    if not opts.o.quiet:
         print(f'Dev Root is: {dev_root_path}')

     if not os.path.isdir(dev_root_path):
         print('Dev root directory doesn\'t exist, creating')

-    # See: https://stackoverflow.com/a/20885799/1701505
-    from stack_orchestrator import data
-    with importlib.resources.open_text(data, "container-image-list.txt") as container_list_file:
-        all_containers = container_list_file.read().splitlines()
+    if publish_images:
+        if not image_registry:
+            error_exit("--image-registry must be supplied with --publish-images")

-    containers_in_scope = []
-    if stack:
-        stack_config = get_parsed_stack_config(stack)
-        if "containers" not in stack_config or stack_config["containers"] is None:
-            warn_exit(f"stack {stack} does not define any containers")
-        containers_in_scope = stack_config['containers']
-    else:
-        containers_in_scope = all_containers
-
-    if verbose:
-        print(f'Containers: {containers_in_scope}')
-        if stack:
-            print(f"Stack: {stack}")
+    containers_in_scope = get_containers_in_scope(stack)

     container_build_env = make_container_build_env(dev_root_path,
                                                    container_build_dir,
-                                                   debug,
+                                                   opts.o.debug,
                                                    force_rebuild,
                                                    extra_build_args)

     for container in containers_in_scope:
         if include_exclude_check(container, include, exclude):
-            process_container(stack, container, container_build_dir, container_build_env,
-                              dev_root_path, quiet, verbose, dry_run, continue_on_error)
+            build_context = BuildContext(
+                stack,
+                container,
+                container_build_dir,
+                container_build_env,
+                dev_root_path
+            )
+            result = process_container(build_context)
+            if result:
+                if publish_images:
+                    publish_image(f"{container}:local", image_registry)
+            else:
+                print(f"Error running build for {build_context.container}")
+                if not opts.o.continue_on_error:
+                    error_exit("container build failed and --continue-on-error not set, exiting")
+                    sys.exit(1)
+                else:
+                    print("****** Container Build Error, continuing because --continue-on-error is set")
         else:
-            if verbose:
+            if opts.o.verbose:
                 print(f"Excluding: {container}")


@@ -0,0 +1,29 @@
# Copyright © 2024 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from dataclasses import dataclass
from pathlib import Path
from typing import Mapping


@dataclass
class BuildContext:
    stack: str
    container: str
    container_build_dir: Path
    container_build_env: Mapping[str, str]
    dev_root_path: str


@@ -0,0 +1,43 @@
# Copyright © 2024 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import importlib.resources

from stack_orchestrator.opts import opts
from stack_orchestrator.util import get_parsed_stack_config, warn_exit


def get_containers_in_scope(stack: str):
    # See: https://stackoverflow.com/a/20885799/1701505
    from stack_orchestrator import data
    with importlib.resources.open_text(data, "container-image-list.txt") as container_list_file:
        all_containers = container_list_file.read().splitlines()

    containers_in_scope = []
    if stack:
        stack_config = get_parsed_stack_config(stack)
        if "containers" not in stack_config or stack_config["containers"] is None:
            warn_exit(f"stack {stack} does not define any containers")
        containers_in_scope = stack_config['containers']
    else:
        containers_in_scope = all_containers

    if opts.o.verbose:
        print(f'Containers: {containers_in_scope}')
        if stack:
            print(f"Stack: {stack}")

    return containers_in_scope


@@ -21,11 +21,14 @@
 # TODO: display the available list of containers; allow re-build of either all or specific containers

 import os
+import sys
 from decouple import config
 import click
 from pathlib import Path
 from stack_orchestrator.build import build_containers
-from stack_orchestrator.deploy.webapp.util import determine_base_container
+from stack_orchestrator.deploy.webapp.util import determine_base_container, TimedLogger
+from stack_orchestrator.build.build_types import BuildContext

 @click.command()
@@ -37,26 +40,25 @@ from stack_orchestrator.deploy.webapp.util import determine_base_container
 @click.pass_context
 def command(ctx, base_container, source_repo, force_rebuild, extra_build_args, tag):
     '''build the specified webapp container'''
+    logger = TimedLogger()

     quiet = ctx.obj.quiet
-    verbose = ctx.obj.verbose
-    dry_run = ctx.obj.dry_run
     debug = ctx.obj.debug
+    verbose = ctx.obj.verbose
     local_stack = ctx.obj.local_stack
     stack = ctx.obj.stack
-    continue_on_error = ctx.obj.continue_on_error

     # See: https://stackoverflow.com/questions/25389095/python-get-path-of-root-project-structure
     container_build_dir = Path(__file__).absolute().parent.parent.joinpath("data", "container-build")

     if local_stack:
         dev_root_path = os.getcwd()[0:os.getcwd().rindex("stack-orchestrator")]
-        print(f'Local stack dev_root_path (CERC_REPO_BASE_DIR) overridden to: {dev_root_path}')
+        logger.log(f'Local stack dev_root_path (CERC_REPO_BASE_DIR) overridden to: {dev_root_path}')
     else:
         dev_root_path = os.path.expanduser(config("CERC_REPO_BASE_DIR", default="~/cerc"))

-    if not quiet:
-        print(f'Dev Root is: {dev_root_path}')
+    if verbose:
+        logger.log(f'Dev Root is: {dev_root_path}')

     if not base_container:
         base_container = determine_base_container(source_repo)
@@ -65,8 +67,23 @@ def command(ctx, base_container, source_repo, force_rebuild, extra_build_args, t
     container_build_env = build_containers.make_container_build_env(dev_root_path, container_build_dir, debug,
                                                                     force_rebuild, extra_build_args)

-    build_containers.process_container(None, base_container, container_build_dir, container_build_env, dev_root_path, quiet,
-                                       verbose, dry_run, continue_on_error)
+    if verbose:
+        logger.log(f"Building base container: {base_container}")
+
+    build_context_1 = BuildContext(
+        stack,
+        base_container,
+        container_build_dir,
+        container_build_env,
+        dev_root_path,
+    )
+    ok = build_containers.process_container(build_context_1)
+    if not ok:
+        logger.log("ERROR: Build failed.")
+        sys.exit(1)
+
+    if verbose:
+        logger.log(f"Base container {base_container} build finished.")

     # Now build the target webapp. We use the same build script, but with a different Dockerfile and work dir.
     container_build_env["CERC_WEBAPP_BUILD_RUNNING"] = "true"
@@ -76,9 +93,25 @@ def command(ctx, base_container, source_repo, force_rebuild, extra_build_args, t
                                                          "Dockerfile.webapp")
     if not tag:
         webapp_name = os.path.abspath(source_repo).split(os.path.sep)[-1]
-        container_build_env["CERC_CONTAINER_BUILD_TAG"] = f"cerc/{webapp_name}:local"
-    else:
-        container_build_env["CERC_CONTAINER_BUILD_TAG"] = tag
+        tag = f"cerc/{webapp_name}:local"
+    container_build_env["CERC_CONTAINER_BUILD_TAG"] = tag

-    build_containers.process_container(None, base_container, container_build_dir, container_build_env, dev_root_path, quiet,
-                                       verbose, dry_run, continue_on_error)
+    if verbose:
+        logger.log(f"Building app container: {tag}")
+
+    build_context_2 = BuildContext(
+        stack,
+        base_container,
+        container_build_dir,
+        container_build_env,
+        dev_root_path,
+    )
+    ok = build_containers.process_container(build_context_2)
+    if not ok:
+        logger.log("ERROR: Build failed.")
+        sys.exit(1)
+
+    if verbose:
+        logger.log(f"App container {base_container} build finished.")
+
+    logger.log("build-webapp complete", show_step_time=False, show_total_time=True)


@ -0,0 +1,195 @@
# Copyright © 2024 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http:#www.gnu.org/licenses/>.
import click
from dataclasses import dataclass
import json
import platform
from python_on_whales import DockerClient
from python_on_whales.components.manifest.cli_wrapper import ManifestCLI, ManifestList
from python_on_whales.utils import run
import requests
from typing import List
from stack_orchestrator.opts import opts
from stack_orchestrator.util import include_exclude_check, error_exit
from stack_orchestrator.build.build_util import get_containers_in_scope
# Experimental fetch-container command
@dataclass
class RegistryInfo:
registry: str
registry_username: str
registry_token: str
# Extending this code to support the --verbose option, cnosider contributing upstream
# https://github.com/gabrieldemarmiesse/python-on-whales/blob/master/python_on_whales/components/manifest/cli_wrapper.py#L129
class ExtendedManifestCLI(ManifestCLI):
def inspect_verbose(self, x: str) -> ManifestList:
"""Returns a Docker manifest list object."""
json_str = run(self.docker_cmd + ["manifest", "inspect", "--verbose", x])
return json.loads(json_str)
def _local_tag_for(container: str):
return f"{container}:local"
# See: https://docker-docs.uclv.cu/registry/spec/api/
# Emulate this:
# $ curl -u "my-username:my-token" -X GET "https://<container-registry-hostname>/v2/cerc-io/cerc/test-container/tags/list"
# {"name":"cerc-io/cerc/test-container","tags":["202402232130","202402232208"]}
def _get_tags_for_container(container: str, registry_info: RegistryInfo) -> List[str]:
# registry looks like: git.vdb.to/cerc-io
registry_parts = registry_info.registry.split("/")
url = f"https://{registry_parts[0]}/v2/{registry_parts[1]}/{container}/tags/list"
if opts.o.debug:
print(f"Fetching tags from: {url}")
    response = requests.get(url, auth=(registry_info.registry_username, registry_info.registry_token))
    if response.status_code == 200:
        tag_info = response.json()
        if opts.o.debug:
            print(f"container tags list: {tag_info}")
        tags_array = tag_info["tags"]
        return tags_array
    else:
        error_exit(f"failed to fetch tags from image registry, status code: {response.status_code}")


def _find_latest(candidate_tags: List[str]):
    # Lex sort should give us the latest first
    sorted_candidates = sorted(candidate_tags)
    if opts.o.debug:
        print(f"sorted candidates: {sorted_candidates}")
    return sorted_candidates[-1]


def _filter_for_platform(container: str,
                         registry_info: RegistryInfo,
                         tag_list: List[str]) -> List[str]:
    filtered_tags = []
    this_machine = platform.machine()
    # Translate between Python and docker platform names
    if this_machine == "x86_64":
        this_machine = "amd64"
    if this_machine == "aarch64":
        this_machine = "arm64"
    if opts.o.debug:
        print(f"Python says the architecture is: {this_machine}")
    docker = DockerClient()
    for tag in tag_list:
        remote_tag = f"{registry_info.registry}/{container}:{tag}"
        manifest_cmd = ExtendedManifestCLI(docker.client_config)
        manifest = manifest_cmd.inspect_verbose(remote_tag)
        if opts.o.debug:
            print(f"manifest: {manifest}")
        image_architecture = manifest["Descriptor"]["platform"]["architecture"]
        if opts.o.debug:
            print(f"image_architecture: {image_architecture}")
        if this_machine == image_architecture:
            filtered_tags.append(tag)
    if opts.o.debug:
        print(f"Tags filtered for platform: {filtered_tags}")
    return filtered_tags


def _get_latest_image(container: str, registry_info: RegistryInfo):
    all_tags = _get_tags_for_container(container, registry_info)
    tags_for_platform = _filter_for_platform(container, registry_info, all_tags)
    if len(tags_for_platform) > 0:
        latest_tag = _find_latest(tags_for_platform)
        return f"{container}:{latest_tag}"
    else:
        return None


def _fetch_image(tag: str, registry_info: RegistryInfo):
    docker = DockerClient()
    remote_tag = f"{registry_info.registry}/{tag}"
    if opts.o.debug:
        print(f"Attempting to pull this image: {remote_tag}")
    docker.image.pull(remote_tag)


def _exists_locally(container: str):
    docker = DockerClient()
    return docker.image.exists(_local_tag_for(container))


def _add_local_tag(remote_tag: str, registry: str, local_tag: str):
    docker = DockerClient()
    docker.image.tag(f"{registry}/{remote_tag}", local_tag)


@click.command()
@click.option('--include', help="only fetch these containers")
@click.option('--exclude', help="don't fetch these containers")
@click.option("--force-local-overwrite", is_flag=True, default=False, help="Overwrite a locally built image, if present")
@click.option("--image-registry", required=True, help="Specify the image registry to fetch from")
@click.option("--registry-username", required=True, help="Specify the image registry username")
@click.option("--registry-token", required=True, help="Specify the image registry access token")
@click.pass_context
def command(ctx, include, exclude, force_local_overwrite, image_registry, registry_username, registry_token):
    '''EXPERIMENTAL: fetch the images for a stack from a remote registry'''
    registry_info = RegistryInfo(image_registry, registry_username, registry_token)
    docker = DockerClient()
    if not opts.o.quiet:
        print("Logging into container registry:")
    docker.login(registry_info.registry, registry_info.registry_username, registry_info.registry_token)
    # Generate list of target containers
    stack = ctx.obj.stack
    containers_in_scope = get_containers_in_scope(stack)
    all_containers_found = True
    for container in containers_in_scope:
        local_tag = _local_tag_for(container)
        if include_exclude_check(container, include, exclude):
            if opts.o.debug:
                print(f"Processing: {container}")
            # For each container, attempt to find the latest of a set of
            # images with the correct name and platform in the specified registry
            image_to_fetch = _get_latest_image(container, registry_info)
            if not image_to_fetch:
                print(f"Warning: no image found to fetch for container: {container}")
                all_containers_found = False
                continue
            if opts.o.debug:
                print(f"Fetching: {image_to_fetch}")
            _fetch_image(image_to_fetch, registry_info)
            # Now check if the target container image already exists locally
            if _exists_locally(container):
                if not opts.o.quiet:
                    print(f"Container image {container} already exists locally")
                # If so, skip it unless the user specified --force-local-overwrite
                if force_local_overwrite:
                    # In that case overwrite the existing :local tag
                    if not opts.o.quiet:
                        print(f"Warning: overwriting local tag from this image: {container} because "
                              "--force-local-overwrite was specified")
                else:
                    if not opts.o.quiet:
                        print(f"Skipping local tagging for this image: {container} because that would "
                              "overwrite an existing :local tagged image; use --force-local-overwrite to do so.")
                    continue
            # Tag the fetched image with the :local tag
            _add_local_tag(image_to_fetch, image_registry, local_tag)
        else:
            if opts.o.verbose:
                print(f"Excluding: {container}")
    if not all_containers_found:
        print("Warning: couldn't find usable images for one or more containers; this stack will not deploy")


@ -0,0 +1,48 @@
# Copyright © 2024 Vulcanize

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.

# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from datetime import datetime

from python_on_whales import DockerClient

from stack_orchestrator.opts import opts
from stack_orchestrator.util import error_exit


def _publish_tag_for_image(local_image_tag: str, remote_repo: str, version: str):
    # Turns image tags of the form: foo/bar:local into remote.repo/org/bar:deploy
    (image_name, image_version) = local_image_tag.split(":")
    if image_version == "local":
        return f"{remote_repo}/{image_name}:{version}"
    else:
        error_exit("Asked to publish a non-locally built image")


def publish_image(local_tag, registry):
    if opts.o.verbose:
        print(f"Publishing this image: {local_tag} to this registry: {registry}")
    docker = DockerClient()
    # Figure out the target image tag
    # Eventually this version will be generated from the source repo state
    # Using a timestamp is an intermediate step
    version = datetime.now().strftime("%Y%m%d%H%M")
    remote_tag = _publish_tag_for_image(local_tag, registry, version)
    # Tag the image accordingly
    if opts.o.debug:
        print(f"Tagging {local_tag} to {remote_tag}")
    docker.image.tag(local_tag, remote_tag)
    # Push it to the desired registry
    if opts.o.verbose:
        print(f"Pushing image {remote_tag}")
    docker.image.push(remote_tag)
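
Note: a quick sanity check of the tag-mapping helper above; the image and registry names are illustrative, not taken from any stack:

assert _publish_tag_for_image("cerc/foo:local", "git.vdb.to/cerc-io", "202404221848") \
    == "git.vdb.to/cerc-io/cerc/foo:202404221848"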


@ -5,9 +5,11 @@ services:
    environment:
      REGISTRY_LOG_LEVEL: ${REGISTRY_LOG_LEVEL}
    volumes:
+     - config:/config:ro
      - registry-data:/var/lib/registry
    ports:
      - "5000"
volumes:
+ config:
  registry-data:


@ -0,0 +1,80 @@
# From: https://raw.githubusercontent.com/blast-io/deployment/master/docker-compose.yml
services:
# generate jwt.txt if it's absent
generate-jwt:
image: blastio/openssl
volumes:
- blast-data:/blast:rw
command: >
sh -c "[ ! -f /blast/jwt.txt ] && openssl rand -hex 32 | tr -d '\n' > /blast/jwt.txt || exit 0"
# initialise geth db
geth-init:
image: blastio/blast-geth:${NETWORK:-testnet-sepolia}
volumes:
- blast-data:/blast:rw
- ../config/fixturenet-blast/genesis.json:/blast/genesis.json
entrypoint: /bin/sh
command: >
-c "[ ! -d /blast/${GETH_DATA_DIR:-blast-geth-data}/geth ] && /usr/local/bin/geth init --datadir=/blast/${GETH_DATA_DIR:-blast-geth-data} /blast/genesis.json || exit 0"
depends_on:
generate-jwt:
condition: service_completed_successfully
env_file:
- ../config/fixturenet-blast/${NETWORK:-fixturenet}.config
blast-geth:
image: blastio/blast-geth:${NETWORK:-testnet-sepolia}
volumes:
- blast-data:/blast
ports:
- "9545"
- "9546"
command: >
--datadir=/blast/${GETH_DATA_DIR:-blast-geth-data}
--http
--http.corsdomain="*"
--http.vhosts="*"
--http.addr=0.0.0.0
--http.port=9545
--http.api=web3,debug,eth,txpool,net,engine
--ws
--ws.addr=0.0.0.0
--ws.port=9546
--ws.origins="*"
--ws.api=debug,eth,txpool,net,engine
--authrpc.addr="0.0.0.0"
--authrpc.port="8551"
--authrpc.vhosts="*"
--authrpc.jwtsecret=/blast/jwt.txt
--syncmode=full
--gcmode=archive
--nodiscover
--maxpeers=0
--rollup.disabletxpoolgossip=true
env_file:
- ../config/fixturenet-blast/${NETWORK:-fixturenet}.config
depends_on:
geth-init:
condition: service_completed_successfully
op-node:
image: blastio/blast-optimism:${NETWORK:-testnet-sepolia}
volumes:
- blast-data:/blast
- ../config/fixturenet-blast/rollup.json:/blast/rollup.json
ports:
- "9003"
command: >
op-node
--l1="${CERC_L1_RPC}"
--l1.rpckind="any"
--l1.trustrpc=true
--l2="http://blast-geth:8551"
--l2.jwt-secret=/blast/jwt.txt
--rollup.config="/blast/rollup.json"
depends_on:
- blast-geth
env_file:
- ../config/fixturenet-blast/${NETWORK:-fixturenet}.config
volumes:
blast-data:
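
Note: the generate-jwt one-shot service writes a 32-byte hex secret only when /blast/jwt.txt is absent, so container restarts never rotate a secret that blast-geth and op-node already share. A rough Python equivalent of that guard (the path is the container path from the compose file above):

import secrets
from pathlib import Path

def ensure_jwt_secret(path: str = "/blast/jwt.txt") -> None:
    """Create the JWT secret once; later runs are no-ops."""
    p = Path(path)
    if not p.exists():
        p.write_text(secrets.token_hex(32))  # 64 hex chars, like `openssl rand -hex 32`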


@ -3,6 +3,9 @@ services:
    restart: unless-stopped
    image: cerc/laconicd:local
    command: ["sh", "/docker-entrypoint-scripts.d/create-fixturenet.sh"]
+   environment:
+     TEST_AUCTION_ENABLED: ${TEST_AUCTION_ENABLED}
+     TEST_REGISTRY_EXPIRY: ${TEST_REGISTRY_EXPIRY}
    volumes:
      # The cosmos-sdk node's database directory:
      - laconicd-data:/root/.laconicd
@ -25,6 +28,7 @@ services:
    image: cerc/laconic-registry-cli:local
    volumes:
      - ../config/fixturenet-laconicd/registry-cli-config-template.yml:/registry-cli-config-template.yml
+     - ${BASE_DIR:-~/cerc}/laconic-registry-cli:/laconic-registry-cli
volumes:
  laconicd-data:


@ -6,6 +6,7 @@ services:
      - ../config/fixturenet-eth/fixturenet-eth.env
    environment:
      RUN_BOOTNODE: "true"
+     CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
    image: cerc/fixturenet-plugeth-plugeth:local
    volumes:
      - fixturenet_plugeth_bootnode_geth_data:/root/ethdata


@ -2,7 +2,7 @@ version: "3.7"
services:
  grafana:
-   image: grafana/grafana:10.2.2
+   image: grafana/grafana:10.2.3
    restart: always
    environment:
      GF_SERVER_ROOT_URL: ${GF_SERVER_ROOT_URL}


@ -0,0 +1,83 @@
# From: https://raw.githubusercontent.com/blast-io/deployment/master/docker-compose.yml
services:
# generate jwt.txt if it's absent
generate-jwt:
image: blastio/openssl
volumes:
- blast-data:/blast:rw
command: >
sh -c "[ ! -f /blast/jwt.txt ] && openssl rand -hex 32 | tr -d '\n' > /blast/jwt.txt || exit 0"
# initialise geth db
geth-init:
image: blastio/blast-geth:${NETWORK:-mainnet}
volumes:
- blast-data:/blast:rw
entrypoint: /bin/sh
command: >
-c "[ ! -d /blast/${GETH_DATA_DIR:-blast-geth-data}/geth ] && /usr/local/bin/geth init --datadir=/blast/${GETH_DATA_DIR:-blast-geth-data} /blast/genesis.json || exit 0"
depends_on:
generate-jwt:
condition: service_completed_successfully
env_file:
- ../config/mainnet-blast/${NETWORK:-mainnet}.config
blast-geth:
image: blastio/blast-geth:${NETWORK:-mainnet}
volumes:
- blast-data:/blast
ports:
- "9545"
- "9546"
- "6060"
command: >
--datadir=/blast/${GETH_DATA_DIR:-blast-geth-data}
--http
--http.corsdomain="*"
--http.vhosts="*"
--http.addr=0.0.0.0
--http.port=9545
--http.api=web3,debug,eth,txpool,net,engine
--ws
--ws.addr=0.0.0.0
--ws.port=9546
--ws.origins="*"
--ws.api=debug,eth,txpool,net,engine
--authrpc.addr="0.0.0.0"
--authrpc.port="8551"
--authrpc.vhosts="*"
--authrpc.jwtsecret=/blast/jwt.txt
--syncmode=full
--metrics
--metrics.addr=0.0.0.0
--gcmode=archive
--nodiscover
--maxpeers=0
--rollup.disabletxpoolgossip=true
env_file:
- ../config/mainnet-blast/${NETWORK:-mainnet}.config
depends_on:
geth-init:
condition: service_completed_successfully
op-node:
image: blastio/blast-optimism:${NETWORK:-mainnet}
volumes:
- blast-data:/blast
ports:
- "9003"
- "7300"
command: >
op-node
--l1="https://eth-mainnet-1.vdb.to/"
--metrics.enabled
--l1.rpckind="any"
--l1.trustrpc=true
--l2="http://blast-geth:8551"
--l2.jwt-secret=/blast/jwt.txt
--rollup.config="/blast/rollup.json"
depends_on:
- blast-geth
env_file:
- ../config/mainnet-blast/${NETWORK:-mainnet}.config
volumes:
blast-data:


@ -0,0 +1,13 @@
services:
snowballtools-base-backend:
image: cerc/snowballtools-base-backend:local
restart: always
volumes:
- data:/data
- config:/config:ro
ports:
- 8000
volumes:
data:
config:


@ -6,6 +6,7 @@ services:
      CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
      CERC_TEST_PARAM_1: ${CERC_TEST_PARAM_1:-FAILED}
      CERC_TEST_PARAM_2: "CERC_TEST_PARAM_2_VALUE"
+     CERC_TEST_PARAM_3: ${CERC_TEST_PARAM_3:-FAILED}
    volumes:
      - test-data-bind:/data
      - test-data-auto:/data2


@ -0,0 +1,76 @@
version: '3.2'
services:
ajna-watcher-db:
restart: unless-stopped
image: postgres:14-alpine
environment:
- POSTGRES_USER=vdbm
- POSTGRES_MULTIPLE_DATABASES=ajna-watcher,ajna-watcher-job-queue
- POSTGRES_EXTENSION=ajna-watcher-job-queue:pgcrypto
- POSTGRES_PASSWORD=password
volumes:
- ../config/postgresql/multiple-postgressql-databases.sh:/docker-entrypoint-initdb.d/multiple-postgressql-databases.sh
- ajna_watcher_db_data:/var/lib/postgresql/data
ports:
- "5432"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "5432"]
interval: 20s
timeout: 5s
retries: 15
start_period: 10s
ajna-watcher-job-runner:
restart: unless-stopped
depends_on:
ajna-watcher-db:
condition: service_healthy
image: cerc/watcher-ajna:local
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_ETH_RPC_ENDPOINT: ${CERC_ETH_RPC_ENDPOINT}
command: ["bash", "./start-job-runner.sh"]
volumes:
- ../config/watcher-ajna/watcher-config-template.toml:/app/environments/watcher-config-template.toml
- ../config/watcher-ajna/start-job-runner.sh:/app/start-job-runner.sh
ports:
- "9000"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "9000"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
ajna-watcher-server:
restart: unless-stopped
depends_on:
ajna-watcher-db:
condition: service_healthy
ajna-watcher-job-runner:
condition: service_healthy
image: cerc/watcher-ajna:local
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_ETH_RPC_ENDPOINT: ${CERC_ETH_RPC_ENDPOINT}
command: ["bash", "./start-server.sh"]
volumes:
- ../config/watcher-ajna/watcher-config-template.toml:/app/environments/watcher-config-template.toml
- ../config/watcher-ajna/start-server.sh:/app/start-server.sh
ports:
- "3008"
- "9001"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "3008"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
ajna_watcher_db_data:


@ -0,0 +1,2 @@
GETH_ROLLUP_SEQUENCERHTTP=https://sequencer.s2.testblast.io
OP_NODE_P2P_BOOTNODES=enr:-J-4QM3GLUFfKMSJQuP1UvuKQe8DyovE7Eaiit0l6By4zjTodkR4V8NWXJxNmlg8t8rP-Q-wp3jVmeAOml8cjMj__ROGAYznzb_HgmlkgnY0gmlwhA-cZ_eHb3BzdGFja4X947FQAIlzZWNwMjU2azGhAiuDqvB-AsVSRmnnWr6OHfjgY8YfNclFy9p02flKzXnOg3RjcIJ2YYN1ZHCCdmE,enr:-J-4QDCVpByqQ8nFqCS9aHicqwUfXgzFDslvpEyYz19lvkHLIdtcIGp2d4q5dxHdjRNTO6HXCsnIKxUeuZSPcEbyVQCGAYznzz0RgmlkgnY0gmlwhANiQfuHb3BzdGFja4X947FQAIlzZWNwMjU2azGhAy3AtF2Jh_aPdOohg506Hjmtx-fQ1AKmu71C7PfkWAw9g3RjcIJ2YYN1ZHCCdmE


@ -0,0 +1,57 @@
{
"config": {
"chainId": 608943043,
"homesteadBlock": 0,
"eip150Block": 0,
"eip155Block": 0,
"eip158Block": 0,
"byzantiumBlock": 0,
"constantinopleBlock": 0,
"petersburgBlock": 0,
"istanbulBlock": 0,
"muirGlacierBlock": 0,
"berlinBlock": 0,
"londonBlock": 0,
"arrowGlacierBlock": 0,
"grayGlacierBlock": 0,
"mergeNetsplitBlock": 0,
"shanghaiTime": 0,
"bedrockBlock": 0,
"regolithTime": 0,
"canyonTime": 0,
"terminalTotalDifficulty": 0,
"terminalTotalDifficultyPassed": true,
"optimism": {
"eip1559Elasticity": 6,
"eip1559Denominator": 50,
"eip1559DenominatorCanyon": 250
}
},
"alloc": {
"0000000000000000000000000000000000000000": {
"balance": "0x1"
},
"4200000000000000000000000000000000000000": {
"code": "0x60806040526004361061004e5760003560e01c80633659cfe6146100655780634f1ef286146100855780635c60da1b146100ae5780638f283970146100db578063f851a440146100fb5761005d565b3661005d5761005b610110565b005b61005b610110565b34801561007157600080fd5b5061005b610080366004610521565b6101c8565b61009861009336600461053c565b61020e565b6040516100a591906105bf565b60405180910390f35b3480156100ba57600080fd5b506100c361033e565b6040516001600160a01b0390911681526020016100a5565b3480156100e757600080fd5b5061005b6100f6366004610521565b6103a9565b34801561010757600080fd5b506100c36103e4565b600061013a7f360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc5490565b90506001600160a01b0381166101a55760405162461bcd60e51b815260206004820152602560248201527f50726f78793a20696d706c656d656e746174696f6e206e6f7420696e697469616044820152641b1a5e995960da1b60648201526084015b60405180910390fd5b3660008037600080366000845af43d6000803e806101c2573d6000fd5b503d6000f35b600080516020610625833981519152546001600160a01b0316336001600160a01b031614806101f5575033155b156102065761020381610432565b50565b610203610110565b60606102266000805160206106258339815191525490565b6001600160a01b0316336001600160a01b03161480610243575033155b1561032f5761025184610432565b600080856001600160a01b0316858560405161026e929190610614565b600060405180830381855af49150503d80600081146102a9576040519150601f19603f3d011682016040523d82523d6000602084013e6102ae565b606091505b5091509150816103265760405162461bcd60e51b815260206004820152603960248201527f50726f78793a2064656c656761746563616c6c20746f206e657720696d706c6560448201527f6d656e746174696f6e20636f6e7472616374206661696c656400000000000000606482015260840161019c565b91506103379050565b610337610110565b9392505050565b60006103566000805160206106258339815191525490565b6001600160a01b0316336001600160a01b03161480610373575033155b1561039e57507f360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc5490565b6103a6610110565b90565b600080516020610625833981519152546001600160a01b0316336001600160a01b031614806103d6575033155b15610206576102038161048e565b60006103fc6000805160206106258339815191525490565b6001600160a01b0316336001600160a01b03161480610419575033155b1561039e57506000805160206106258339815191525490565b7f360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc8181556040516001600160a01b038316907fbc7cd75a20ee27fd9adebab32041f755214dbc6bffa90cc0225b39da2e5c2d3b90600090a25050565b60006104a66000805160206106258339815191525490565b600080516020610625833981519152838155604080516001600160a01b0380851682528616602082015292935090917f7e644d79422f17c01e4894b5f4f588d331ebfa28653d42ae832dc59e38c9798f910160405180910390a1505050565b80356001600160a01b038116811461051c57600080fd5b919050565b60006020828403121561053357600080fd5b61033782610505565b60008060006040848603121561055157600080fd5b61055a84610505565b9250602084013567ffffffffffffffff8082111561057757600080fd5b818601915086601f83011261058b57600080fd5b81358181111561059a57600080fd5b8760208285010111156105ac57600080fd5b6020830194508093505050509250925092565b600060208083528351808285015260005b818110156105ec578581018301518582016040015282016105d0565b818111156105fe576000604083870101525b50601f01601f1916929092016040019392505050565b818382376000910190815291905056feb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d6103a164736f6c634300080f000a",
"storage": {
"0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc": "0x000000000000000000000000c0d3c0d3c0d3c0d3c0d3c0d3c0d3c0d3c0d30000",
"0xb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d6103": "0x0000000000000000000000004200000000000000000000000000000000000018"
},
"balance": "0x0",
"flags": 1
}
},
"nonce": "0x0",
"timestamp": "0x659b7460",
"extraData": "0x424544524f434b",
"gasLimit": "0x1c9c380",
"difficulty": "0x0",
"mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"coinbase": "0x4200000000000000000000000000000000000011",
"number": "0x0",
"gasUsed": "0x0",
"parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"baseFeePerGas": "0x3b9aca00",
"excessBlobGas": null,
"blobGasUsed": null
}


@ -0,0 +1,31 @@
{
"genesis": {
"l1": {
"hash": "0x17728cf4d8e0b4f292d2390a869fd7c632d39e72efb00ca3462b4387c6aa2437",
"number": 5044255
},
"l2": {
"hash": "0x26a1c0faad7b041f34569a1bb383f00ab74b335883a44bed53e9f41ced5fd906",
"number": 0
},
"l2_time": 1704686688,
"system_config": {
"batcherAddr": "0xba26fee2fa917443e05e65de8d4350bcd2f59222",
"overhead": "0x00000000000000000000000000000000000000000000000000000000000000bc",
"scalar": "0x00000000000000000000000000000000000000000000000000000000000a6fe0",
"gasLimit": 30000000
}
},
"block_time": 2,
"max_sequencer_drift": 600,
"seq_window_size": 3600,
"channel_timeout": 300,
"l1_chain_id": 11155111,
"l2_chain_id": 608943043,
"regolith_time": 0,
"canyon_time": 0,
"batch_inbox_address": "0x1c3b85a2108784eab6a4bf56cdd6f722e415b331",
"deposit_contract_address": "0x2757e4430e694f27b73ec9c02257cab3a498c8c5",
"l1_system_config_address": "0x329faf078c364a316e08bf6a17b7eee6ae75a613",
"protocol_versions_address": "0x0000000000000000000000000000000000000000"
}


@ -23,3 +23,6 @@ CERC_STATEDIFF_WORKERS=2
CERC_GETH_VMODULE="statediff/*=5,rpc/*=5"
CERC_GETH_VERBOSITY=${CERC_GETH_VERBOSITY:-3}
+
+ # Used by Lighthouse
+ SECONDS_PER_ETH1_BLOCK=${SECONDS_PER_ETH1_BLOCK:-3}


@ -102,6 +102,17 @@ if [ "$1" == "clean" ] || [ ! -d "$HOME/.laconicd/data/blockstore.db" ]; then
  fi
fi
+
+ # Enable telemetry (prometheus metrics: http://localhost:1317/metrics?format=prometheus)
+ if [[ "$OSTYPE" == "darwin"* ]]; then
+   sed -i '' 's/enabled = false/enabled = true/g' $HOME/.laconicd/config/app.toml
+   sed -i '' 's/prometheus-retention-time = 0/prometheus-retention-time = 60/g' $HOME/.laconicd/config/app.toml
+   sed -i '' 's/prometheus = false/prometheus = true/g' $HOME/.laconicd/config/config.toml
+ else
+   sed -i 's/enabled = false/enabled = true/g' $HOME/.laconicd/config/app.toml
+   sed -i 's/prometheus-retention-time = 0/prometheus-retention-time = 60/g' $HOME/.laconicd/config/app.toml
+   sed -i 's/prometheus = false/prometheus = true/g' $HOME/.laconicd/config/config.toml
+ fi
# Allocate genesis accounts (cosmos formatted addresses)
laconicd add-genesis-account $KEY 100000000000000000000000000aphoton --keyring-backend $KEYRING
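
Note: the darwin branch exists because BSD sed requires an explicit (here empty) backup-suffix argument after -i, while GNU sed takes -i with no argument. A portable Python sketch performing the same three substitutions without forking sed (paths as in the script above):

from pathlib import Path

def enable_telemetry(home: str) -> None:
    app = Path(home) / ".laconicd/config/app.toml"
    cfg = Path(home) / ".laconicd/config/config.toml"
    app.write_text(app.read_text()
                   .replace("enabled = false", "enabled = true")
                   .replace("prometheus-retention-time = 0", "prometheus-retention-time = 60"))
    cfg.write_text(cfg.read_text().replace("prometheus = false", "prometheus = true"))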

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large


@ -0,0 +1,32 @@
POSTGRES_DB=keycloak
POSTGRES_USER=keycloak
POSTGRES_PASSWORD=keycloak
# Don't change this unless you also change the healthcheck in docker-compose-mainnet-eth-keycloak.yml
PGPORT=35432
KC_DB=postgres
KC_DB_URL_HOST=keycloak-db
KC_DB_URL_PORT=${PGPORT}
KC_DB_URL_DATABASE=${POSTGRES_DB}
KC_DB_USERNAME=${POSTGRES_USER}
KC_DB_PASSWORD=${POSTGRES_PASSWORD}
KC_DB_SCHEMA=public
KC_HOSTNAME=localhost
KC_HTTP_ENABLED="true"
KC_HTTP_RELATIVE_PATH="/auth"
KC_HOSTNAME_STRICT_HTTPS="false"
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=admin
X_API_CHECK_REALM=cerc
X_API_CHECK_CLIENT_ID="%user_id%"
# keycloak-reg-api
CERC_KCUSERREG_LISTEN_PORT=9292
CERC_KCUSERREG_LISTEN_ADDR='0.0.0.0'
CERC_KCUSERREG_API_URL='http://keycloak:8080/auth'
CERC_KCUSERREG_REG_USER="${KEYCLOAK_ADMIN}"
CERC_KCUSERREG_REG_PW="${KEYCLOAK_ADMIN_PASSWORD}"
CERC_KCUSERREG_REG_CLIENT_ID='admin-cli'
CERC_KCUSERREG_TARGET_REALM=cerc
CERC_KCUSERREG_TARGET_GROUPS=eth
CERC_KCUSERREG_CREATE_ENABLED=true


@ -0,0 +1,33 @@
# Enable startup script debug output.
CERC_SCRIPT_DEBUG=false
# Specify any other lighthouse CLI options.
LIGHTHOUSE_OPTS=""
# Override the advertised public IP (optional)
# --enr-address
#LIGHTHOUSE_ENR_ADDRESS=""
# --checkpoint-sync-url
LIGHTHOUSE_CHECKPOINT_SYNC_URL="https://beaconstate.ethstaker.cc"
# --checkpoint-sync-url-timeout
LIGHTHOUSE_CHECKPOINT_SYNC_URL_TIMEOUT=300
# --datadir
LIGHTHOUSE_DATADIR=/data
# --debug-level
LIGHTHOUSE_DEBUG_LEVEL=info
# --http-port
LIGHTHOUSE_HTTP_PORT=5052
# --execution-jwt
LIGHTHOUSE_JWTSECRET=/etc/mainnet-eth/jwtsecret
# --metrics-port
LIGHTHOUSE_METRICS_PORT=5054
# --port --enr-udp-port --enr-tcp-port
LIGHTHOUSE_NETWORK_PORT=9000


@ -0,0 +1,2 @@
GETH_ROLLUP_SEQUENCERHTTP=https://sequencer.blast.io
OP_NODE_P2P_BOOTNODES=enr:-J64QGwHl9uYLfC_cnmxSA6wQH811nkOWJDWjzxqkEUlJoZHWvI66u-BXgVcPCeMUmg0dBpFQAPotFchG67FHJMZ9OSGAY3d6wevgmlkgnY0gmlwhANizeSHb3BzdGFja4Sx_AQAiXNlY3AyNTZrMaECg4pk0cskPAyJ7pOmo9E6RqGBwV-Lex4VS9a3MQvu7PWDdGNwgnZhg3VkcIJ2YQ,enr:-J64QDge2jYBQtcNEpRqmKfci5E5BHAhNBjgv4WSdwH1_wPqbueq2bDj38-TSW8asjy5lJj1Xftui6Or8lnaYFCqCI-GAY3d6wf3gmlkgnY0gmlwhCO2D9yHb3BzdGFja4Sx_AQAiXNlY3AyNTZrMaEDo4aCTq7pCEN8om9U5n_VyWdambGnQhwHNwKc8o-OicaDdGNwgnZhg3VkcIJ2YQ


@ -0,0 +1,32 @@
{
"genesis": {
"l1": {
"hash": "0xfcfb8d586bdae763f1189988789211c69eb893a895e7ba48be3ca6289f0941b7",
"number": 19300102
},
"l2": {
"hash": "0xb689b35ef29d0bec5816938e0e52683c7257d2e325420ea69b739a2be4754b89",
"number": 0
},
"l2_time": 1708809815,
"system_config": {
"batcherAddr": "0x415c8893d514f9bc5211d36eeda4183226b84aa7",
"overhead": "0x00000000000000000000000000000000000000000000000000000000000000bc",
"scalar": "0x00000000000000000000000000000000000000000000000000000000000a6fe0",
"gasLimit": 30000000
}
},
"block_time": 2,
"max_sequencer_drift": 600,
"seq_window_size": 3600,
"channel_timeout": 300,
"l1_chain_id": 1,
"l2_chain_id": 81457,
"regolith_time": 0,
"canyon_time": 0,
"batch_inbox_address": "0xff00000000000000000000000000000000081457",
"deposit_contract_address": "0x0ec68c5b10f21effb74f2a5c61dfe6b08c0db6cb",
"l1_system_config_address": "0x5531dcff39ec1ec727c4c5d2fc49835368f805a9",
"protocol_versions_address": "0x0000000000000000000000000000000000000000"
}


@ -0,0 +1,30 @@
#!/bin/bash
if [[ "true" == "$CERC_SCRIPT_DEBUG" ]]; then
set -x
fi
ENR_OPTS=""
if [[ -n "$LIGHTHOUSE_ENR_ADDRESS" ]]; then
ENR_OPTS="--enr-address $LIGHTHOUSE_ENR_ADDRESS"
fi
exec lighthouse bn \
--checkpoint-sync-url "$LIGHTHOUSE_CHECKPOINT_SYNC_URL" \
--checkpoint-sync-url-timeout ${LIGHTHOUSE_CHECKPOINT_SYNC_URL_TIMEOUT} \
--datadir "$LIGHTHOUSE_DATADIR" \
--debug-level $LIGHTHOUSE_DEBUG_LEVEL \
--disable-deposit-contract-sync \
--disable-upnp \
--enr-tcp-port $LIGHTHOUSE_NETWORK_PORT \
--enr-udp-port $LIGHTHOUSE_NETWORK_PORT \
--execution-endpoint "$LIGHTHOUSE_EXECUTION_ENDPOINT" \
--execution-jwt /etc/mainnet-eth/jwtsecret \
--http \
--http-address 0.0.0.0 \
--http-port $LIGHTHOUSE_HTTP_PORT \
--metrics \
--metrics-address=0.0.0.0 \
--metrics-port $LIGHTHOUSE_METRICS_PORT \
--network mainnet \
--port $LIGHTHOUSE_NETWORK_PORT \
$ENR_OPTS $LIGHTHOUSE_OPTS


@ -0,0 +1,147 @@
apiVersion: 1
groups:
- orgId: 1
name: blackbox
folder: BlackboxAlerts
interval: 30s
rules:
# Azimuth Gateway endpoint
- uid: azimuth_gateway
title: azimuth_gateway_endpoint_tracking
condition: condition
data:
- refId: probe
relativeTimeRange:
from: 600
to: 0
datasourceUid: PBFA97CFB590B2093
model:
editorMode: code
expr: probe_success{destination="azimuth_gateway"}
instant: true
intervalMs: 1000
legendFormat: __auto
maxDataPoints: 43200
range: false
refId: probe
- refId: http_status_code
relativeTimeRange:
from: 600
to: 0
datasourceUid: PBFA97CFB590B2093
model:
editorMode: code
expr: probe_http_status_code{destination="azimuth_gateway"}
instant: true
intervalMs: 1000
legendFormat: __auto
maxDataPoints: 43200
range: false
refId: http_status_code
- refId: condition
relativeTimeRange:
from: 600
to: 0
datasourceUid: __expr__
model:
conditions:
- evaluator:
params:
- 0
- 0
type: gt
operator:
type: and
query:
params: []
reducer:
params: []
type: avg
type: query
datasource:
name: Expression
type: __expr__
uid: __expr__
expression: ${probe} != 1 || ${http_status_code} != 200
intervalMs: 1000
maxDataPoints: 43200
refId: condition
type: math
noDataState: Alerting
execErrState: Alerting
for: 5m
annotations:
summary: Probe failed for Azimuth gateway endpoint
labels:
probe_success: '{{ index $values "probe" }}'
isPaused: false
# Laconicd GQL endpoint
- uid: laconicd_gql
title: laconicd_gql_endpoint_tracking
condition: condition
data:
- refId: probe
relativeTimeRange:
from: 600
to: 0
datasourceUid: PBFA97CFB590B2093
model:
editorMode: code
expr: probe_success{destination="laconicd_gql"}
instant: true
intervalMs: 1000
legendFormat: __auto
maxDataPoints: 43200
range: false
refId: probe
- refId: http_status_code
relativeTimeRange:
from: 600
to: 0
datasourceUid: PBFA97CFB590B2093
model:
editorMode: code
expr: probe_http_status_code{destination="laconicd_gql"}
instant: true
intervalMs: 1000
legendFormat: __auto
maxDataPoints: 43200
range: false
refId: http_status_code
- refId: condition
relativeTimeRange:
from: 600
to: 0
datasourceUid: __expr__
model:
conditions:
- evaluator:
params:
- 0
- 0
type: gt
operator:
type: and
query:
params: []
reducer:
params: []
type: avg
type: query
datasource:
name: Expression
type: __expr__
uid: __expr__
expression: ${probe} != 1 || ${http_status_code} != 200
intervalMs: 1000
maxDataPoints: 43200
refId: condition
type: math
noDataState: Alerting
execErrState: Alerting
for: 5m
annotations:
summary: Probe failed for Laconicd GQL endpoint
labels:
probe_success: '{{ index $values "probe" }}'
isPaused: false
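
Note: both rules share one math expression: the alert fires when the probe fails or the endpoint returns a non-200 status. The predicate, restated as plain Python for clarity:

def blackbox_alert_fires(probe_success: int, http_status_code: int) -> bool:
    """Mirror of the expression ${probe} != 1 || ${http_status_code} != 200."""
    return probe_success != 1 or http_status_code != 200

assert not blackbox_alert_fires(1, 200)  # healthy endpoint
assert blackbox_alert_fires(0, 200)      # probe failed
assert blackbox_alert_fires(1, 503)      # bad status code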


@ -24,9 +24,10 @@ scrape_configs:
    params:
      module: [http_2xx]
    static_configs:
-     # Add URLs to be monitored below
-     - targets:
-       # - https://github.com
+     # Add URLs for targets to be monitored below
+     # - targets: [https://github.com]
+     #   labels:
+     #     destination: 'github'
    relabel_configs:
      - source_labels: [__address__]
        regex: (.*)(:80)?
@ -65,3 +66,12 @@ scrape_configs:
        target_label: instance
      - target_label: __address__
        replacement: postgres-exporter:9187
+
+ - job_name: laconicd
+   metrics_path: /metrics
+   scrape_interval: 30s
+   static_configs:
+     # Add laconicd REST endpoint target with host and port (1317)
+     # - targets: ['example-host:1317']
+   params:
+     format: ['prometheus']
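
Note: the params entry makes Prometheus append ?format=prometheus to every scrape, which the Cosmos SDK telemetry endpoint expects. The equivalent manual request, as a sketch ('example-host' is a placeholder, matching the commented-out target above):

import requests

resp = requests.get("http://example-host:1317/metrics",
                    params={"format": "prometheus"}, timeout=10)
print(resp.text[:200])  # Prometheus text exposition format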


@ -50,22 +50,6 @@ groups:
          legendFormat: __auto
          range: false
          refId: latest_external
- refId: latest_indexed
relativeTimeRange:
from: 600
to: 0
datasourceUid: PBFA97CFB590B2093
model:
datasource:
type: prometheus
uid: PBFA97CFB590B2093
editorMode: code
expr: sync_status_block_number{job="azimuth", instance="azimuth", kind="latest_indexed"}
hide: false
instant: true
legendFormat: __auto
range: false
refId: latest_indexed
      - refId: condition
        relativeTimeRange:
          from: 600
@ -75,29 +59,29 @@
          conditions:
            - evaluator:
                params:
-                 - 0
+                 - 15
                  - 0
                type: gt
              operator:
-               type: and
+               type: when
              query:
-               params: []
+               params:
+                 - diff
              reducer:
                params: []
-               type: avg
+               type: last
            type: query
          datasource:
            name: Expression
            type: __expr__
            uid: __expr__
-         expression: ${diff} >= 16
-         intervalMs: 1000
-         maxDataPoints: 43200
+         expression: ""
+         hide: false
          refId: condition
-         type: math
+         type: classic_conditions
    noDataState: Alerting
    execErrState: Alerting
-   for: 15m
+   for: 5m
    annotations:
      summary: Watcher {{ index $labels "instance" }} of group {{ index $labels "job" }} is falling behind external head by {{ index $values "diff" }}
    isPaused: false
@ -142,22 +126,6 @@ groups:
          legendFormat: __auto
          range: false
          refId: latest_external
- refId: latest_indexed
relativeTimeRange:
from: 600
to: 0
datasourceUid: PBFA97CFB590B2093
model:
datasource:
type: prometheus
uid: PBFA97CFB590B2093
editorMode: code
expr: sync_status_block_number{job="azimuth", instance="censures", kind="latest_indexed"}
hide: false
instant: true
legendFormat: __auto
range: false
refId: latest_indexed
      - refId: condition
        relativeTimeRange:
          from: 600
@ -167,29 +135,29 @@
          conditions:
            - evaluator:
                params:
-                 - 0
+                 - 15
                  - 0
                type: gt
              operator:
-               type: and
+               type: when
              query:
-               params: []
+               params:
+                 - diff
              reducer:
                params: []
-               type: avg
+               type: last
            type: query
          datasource:
            name: Expression
            type: __expr__
            uid: __expr__
-         expression: ${diff} >= 16
-         intervalMs: 1000
-         maxDataPoints: 43200
+         expression: ""
+         hide: false
          refId: condition
-         type: math
+         type: classic_conditions
    noDataState: Alerting
    execErrState: Alerting
-   for: 15m
+   for: 5m
    annotations:
      summary: Watcher {{ index $labels "instance" }} of group {{ index $labels "job" }} is falling behind external head by {{ index $values "diff" }}
    isPaused: false
@ -234,22 +202,6 @@ groups:
          legendFormat: __auto
          range: false
          refId: latest_external
- refId: latest_indexed
relativeTimeRange:
from: 600
to: 0
datasourceUid: PBFA97CFB590B2093
model:
datasource:
type: prometheus
uid: PBFA97CFB590B2093
editorMode: code
expr: sync_status_block_number{job="azimuth", instance="claims", kind="latest_indexed"}
hide: false
instant: true
legendFormat: __auto
range: false
refId: latest_indexed
      - refId: condition
        relativeTimeRange:
          from: 600
@ -259,29 +211,29 @@
          conditions:
            - evaluator:
                params:
-                 - 0
+                 - 15
                  - 0
                type: gt
              operator:
-               type: and
+               type: when
              query:
-               params: []
+               params:
+                 - diff
              reducer:
                params: []
-               type: avg
+               type: last
            type: query
          datasource:
            name: Expression
            type: __expr__
            uid: __expr__
-         expression: ${diff} >= 16
-         intervalMs: 1000
-         maxDataPoints: 43200
+         expression: ""
+         hide: false
          refId: condition
-         type: math
+         type: classic_conditions
    noDataState: Alerting
    execErrState: Alerting
-   for: 15m
+   for: 5m
    annotations:
      summary: Watcher {{ index $labels "instance" }} of group {{ index $labels "job" }} is falling behind external head by {{ index $values "diff" }}
    isPaused: false
@ -326,22 +278,6 @@ groups:
          legendFormat: __auto
          range: false
          refId: latest_external
- refId: latest_indexed
relativeTimeRange:
from: 600
to: 0
datasourceUid: PBFA97CFB590B2093
model:
datasource:
type: prometheus
uid: PBFA97CFB590B2093
editorMode: code
expr: sync_status_block_number{job="azimuth", instance="conditional_star_release", kind="latest_indexed"}
hide: false
instant: true
legendFormat: __auto
range: false
refId: latest_indexed
      - refId: condition
        relativeTimeRange:
          from: 600
@ -351,29 +287,29 @@
          conditions:
            - evaluator:
                params:
-                 - 0
+                 - 15
                  - 0
                type: gt
              operator:
-               type: and
+               type: when
              query:
-               params: []
+               params:
+                 - diff
              reducer:
                params: []
-               type: avg
+               type: last
            type: query
          datasource:
            name: Expression
            type: __expr__
            uid: __expr__
-         expression: ${diff} >= 16
-         intervalMs: 1000
-         maxDataPoints: 43200
+         expression: ""
+         hide: false
          refId: condition
-         type: math
+         type: classic_conditions
    noDataState: Alerting
    execErrState: Alerting
-   for: 15m
+   for: 5m
    annotations:
      summary: Watcher {{ index $labels "instance" }} of group {{ index $labels "job" }} is falling behind external head by {{ index $values "diff" }}
    isPaused: false
@ -418,22 +354,6 @@ groups:
          legendFormat: __auto
          range: false
          refId: latest_external
- refId: latest_indexed
relativeTimeRange:
from: 600
to: 0
datasourceUid: PBFA97CFB590B2093
model:
datasource:
type: prometheus
uid: PBFA97CFB590B2093
editorMode: code
expr: sync_status_block_number{job="azimuth", instance="delegated_sending", kind="latest_indexed"}
hide: false
instant: true
legendFormat: __auto
range: false
refId: latest_indexed
      - refId: condition
        relativeTimeRange:
          from: 600
@ -443,29 +363,29 @@
          conditions:
            - evaluator:
                params:
-                 - 0
+                 - 15
                  - 0
                type: gt
              operator:
-               type: and
+               type: when
              query:
-               params: []
+               params:
+                 - diff
              reducer:
                params: []
-               type: avg
+               type: last
            type: query
          datasource:
            name: Expression
            type: __expr__
            uid: __expr__
-         expression: ${diff} >= 16
-         intervalMs: 1000
-         maxDataPoints: 43200
+         expression: ""
+         hide: false
          refId: condition
-         type: math
+         type: classic_conditions
    noDataState: Alerting
    execErrState: Alerting
-   for: 15m
+   for: 5m
    annotations:
      summary: Watcher {{ index $labels "instance" }} of group {{ index $labels "job" }} is falling behind external head by {{ index $values "diff" }}
    isPaused: false
@ -510,22 +430,6 @@ groups:
          legendFormat: __auto
          range: false
          refId: latest_external
- refId: latest_indexed
relativeTimeRange:
from: 600
to: 0
datasourceUid: PBFA97CFB590B2093
model:
datasource:
type: prometheus
uid: PBFA97CFB590B2093
editorMode: code
expr: sync_status_block_number{job="azimuth", instance="ecliptic", kind="latest_indexed"}
hide: false
instant: true
legendFormat: __auto
range: false
refId: latest_indexed
      - refId: condition
        relativeTimeRange:
          from: 600
@ -535,29 +439,29 @@
          conditions:
            - evaluator:
                params:
-                 - 0
+                 - 15
                  - 0
                type: gt
              operator:
-               type: and
+               type: when
              query:
-               params: []
+               params:
+                 - diff
              reducer:
                params: []
-               type: avg
+               type: last
            type: query
          datasource:
            name: Expression
            type: __expr__
            uid: __expr__
-         expression: ${diff} >= 16
-         intervalMs: 1000
-         maxDataPoints: 43200
+         expression: ""
+         hide: false
          refId: condition
-         type: math
+         type: classic_conditions
    noDataState: Alerting
    execErrState: Alerting
-   for: 15m
+   for: 5m
    annotations:
      summary: Watcher {{ index $labels "instance" }} of group {{ index $labels "job" }} is falling behind external head by {{ index $values "diff" }}
    isPaused: false
@ -602,22 +506,6 @@ groups:
          legendFormat: __auto
          range: false
          refId: latest_external
- refId: latest_indexed
relativeTimeRange:
from: 600
to: 0
datasourceUid: PBFA97CFB590B2093
model:
datasource:
type: prometheus
uid: PBFA97CFB590B2093
editorMode: code
expr: sync_status_block_number{job="azimuth", instance="azimuth", kind="latest_indexed"}
hide: false
instant: true
legendFormat: __auto
range: false
refId: latest_indexed
      - refId: condition
        relativeTimeRange:
          from: 600
@ -627,29 +515,29 @@
          conditions:
            - evaluator:
                params:
-                 - 0
+                 - 15
                  - 0
                type: gt
              operator:
-               type: and
+               type: when
              query:
-               params: []
+               params:
+                 - diff
              reducer:
                params: []
-               type: avg
+               type: last
            type: query
          datasource:
            name: Expression
            type: __expr__
            uid: __expr__
-         expression: ${diff} >= 16
-         intervalMs: 1000
-         maxDataPoints: 43200
+         expression: ""
+         hide: false
          refId: condition
-         type: math
+         type: classic_conditions
    noDataState: Alerting
    execErrState: Alerting
-   for: 15m
+   for: 5m
    annotations:
      summary: Watcher {{ index $labels "instance" }} of group {{ index $labels "job" }} is falling behind external head by {{ index $values "diff" }}
    isPaused: false
@ -694,22 +582,6 @@ groups:
          legendFormat: __auto
          range: false
          refId: latest_external
- refId: latest_indexed
relativeTimeRange:
from: 600
to: 0
datasourceUid: PBFA97CFB590B2093
model:
datasource:
type: prometheus
uid: PBFA97CFB590B2093
editorMode: code
expr: sync_status_block_number{job="azimuth", instance="polls", kind="latest_indexed"}
hide: false
instant: true
legendFormat: __auto
range: false
refId: latest_indexed
      - refId: condition
        relativeTimeRange:
          from: 600
@ -719,29 +591,29 @@
          conditions:
            - evaluator:
                params:
-                 - 0
+                 - 15
                  - 0
                type: gt
              operator:
-               type: and
+               type: when
              query:
-               params: []
+               params:
+                 - diff
              reducer:
                params: []
-               type: avg
+               type: last
            type: query
          datasource:
            name: Expression
            type: __expr__
            uid: __expr__
-         expression: ${diff} >= 16
-         intervalMs: 1000
-         maxDataPoints: 43200
+         expression: ""
+         hide: false
          refId: condition
-         type: math
+         type: classic_conditions
    noDataState: Alerting
    execErrState: Alerting
-   for: 15m
+   for: 5m
    annotations:
      summary: Watcher {{ index $labels "instance" }} of group {{ index $labels "job" }} is falling behind external head by {{ index $values "diff" }}
    isPaused: false
@ -788,22 +660,6 @@ groups:
          legendFormat: __auto
          range: false
          refId: latest_external
- refId: latest_indexed
relativeTimeRange:
from: 600
to: 0
datasourceUid: PBFA97CFB590B2093
model:
datasource:
type: prometheus
uid: PBFA97CFB590B2093
editorMode: code
expr: sync_status_block_number{job="sushi", instance="sushiswap", kind="latest_indexed"}
hide: false
instant: true
legendFormat: __auto
range: false
refId: latest_indexed
      - refId: condition
        relativeTimeRange:
          from: 600
@ -813,29 +669,29 @@
          conditions:
            - evaluator:
                params:
-                 - 0
+                 - 15
                  - 0
                type: gt
              operator:
-               type: and
+               type: when
              query:
-               params: []
+               params:
+                 - diff
              reducer:
                params: []
-               type: avg
+               type: last
            type: query
          datasource:
            name: Expression
            type: __expr__
            uid: __expr__
-         expression: ${diff} >= 16
-         intervalMs: 1000
-         maxDataPoints: 43200
+         expression: ""
+         hide: false
          refId: condition
-         type: math
+         type: classic_conditions
    noDataState: Alerting
    execErrState: Alerting
-   for: 15m
+   for: 5m
    annotations:
      summary: Watcher {{ index $labels "instance" }} of group {{ index $labels "job" }} is falling behind external head by {{ index $values "diff" }}
    isPaused: false
@ -880,22 +736,6 @@ groups:
          legendFormat: __auto
          range: false
          refId: latest_external
- refId: latest_indexed
relativeTimeRange:
from: 600
to: 0
datasourceUid: PBFA97CFB590B2093
model:
datasource:
type: prometheus
uid: PBFA97CFB590B2093
editorMode: code
expr: sync_status_block_number{job="sushi", instance="merkl_sushiswap", kind="latest_indexed"}
hide: false
instant: true
legendFormat: __auto
range: false
refId: latest_indexed
      - refId: condition
        relativeTimeRange:
          from: 600
@ -905,29 +745,107 @@
          conditions:
            - evaluator:
                params:
-                 - 0
+                 - 15
                  - 0
                type: gt
              operator:
-               type: and
+               type: when
              query:
-               params: []
+               params:
+                 - diff
              reducer:
                params: []
-               type: avg
+               type: last
            type: query
          datasource:
            name: Expression
            type: __expr__
            uid: __expr__
-         expression: ${diff} >= 16
-         intervalMs: 1000
-         maxDataPoints: 43200
+         expression: ""
+         hide: false
          refId: condition
-         type: math
+         type: classic_conditions
    noDataState: Alerting
    execErrState: Alerting
-   for: 15m
+   for: 5m
annotations:
summary: Watcher {{ index $labels "instance" }} of group {{ index $labels "job" }} is falling behind external head by {{ index $values "diff" }}
isPaused: false
# Ajna
- uid: ajna_diff_external
title: ajna_watcher_head_tracking
condition: condition
data:
- refId: diff
relativeTimeRange:
from: 600
to: 0
datasourceUid: PBFA97CFB590B2093
model:
datasource:
type: prometheus
uid: PBFA97CFB590B2093
disableTextWrap: false
editorMode: code
expr: latest_block_number - on(chain) group_right sync_status_block_number{job="ajna", instance="ajna", kind="latest_indexed"}
fullMetaSearch: false
includeNullMetadata: true
instant: true
intervalMs: 1000
legendFormat: __auto
maxDataPoints: 43200
range: false
refId: diff
useBackend: false
- refId: latest_external
relativeTimeRange:
from: 600
to: 0
datasourceUid: PBFA97CFB590B2093
model:
datasource:
type: prometheus
uid: PBFA97CFB590B2093
editorMode: code
expr: latest_block_number{chain="filecoin"}
hide: false
instant: true
legendFormat: __auto
range: false
refId: latest_external
- refId: condition
relativeTimeRange:
from: 600
to: 0
datasourceUid: __expr__
model:
conditions:
- evaluator:
params:
- 15
- 0
type: gt
operator:
type: when
query:
params:
- diff
reducer:
params: []
type: last
type: query
datasource:
name: Expression
type: __expr__
uid: __expr__
expression: ""
hide: false
refId: condition
type: classic_conditions
noDataState: Alerting
execErrState: Alerting
for: 5m
    annotations:
      summary: Watcher {{ index $labels "instance" }} of group {{ index $labels "job" }} is falling behind external head by {{ index $values "diff" }}
    isPaused: false
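
Note: after this change every watcher rule evaluates the same classic condition: take the last value of the diff query (external chain head minus the watcher's latest indexed block) and alert when it exceeds 15, with the pending period reduced from 15m to 5m. A sketch of that evaluation (threshold taken from the rules above):

def watcher_alert_fires(diff_samples: list, threshold: int = 15) -> bool:
    """Classic condition: WHEN last() OF query(diff) IS ABOVE threshold."""
    return bool(diff_samples) and diff_samples[-1] > threshold

assert not watcher_alert_fires([3, 7, 12])  # within 15 blocks of head
assert watcher_alert_fires([10, 18, 42])    # falling behind; alert fires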


@ -0,0 +1,20 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
set -u
echo "Using ETH RPC endpoint ${CERC_ETH_RPC_ENDPOINT}"
# Read in the config template TOML file and modify it
WATCHER_CONFIG_TEMPLATE=$(cat environments/watcher-config-template.toml)
WATCHER_CONFIG=$(echo "$WATCHER_CONFIG_TEMPLATE" | \
sed -E "s|REPLACE_WITH_CERC_ETH_RPC_ENDPOINT|${CERC_ETH_RPC_ENDPOINT}| ")
# Write the modified content to a new file
echo "$WATCHER_CONFIG" > environments/local.toml
echo "Running job-runner..."
DEBUG=vulcanize:* exec node --enable-source-maps dist/job-runner.js
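
Note: both watcher entrypoints render the TOML template the same way: a single sed substitution of a REPLACE_WITH_* placeholder. The same step as a Python sketch (file names as in the script above):

import os

template = open("environments/watcher-config-template.toml").read()
config = template.replace("REPLACE_WITH_CERC_ETH_RPC_ENDPOINT",
                          os.environ["CERC_ETH_RPC_ENDPOINT"])
with open("environments/local.toml", "w") as f:
    f.write(config)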


@ -0,0 +1,20 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
set -u
echo "Using ETH RPC endpoint ${CERC_ETH_RPC_ENDPOINT}"
# Read in the config template TOML file and modify it
WATCHER_CONFIG_TEMPLATE=$(cat environments/watcher-config-template.toml)
WATCHER_CONFIG=$(echo "$WATCHER_CONFIG_TEMPLATE" | \
sed -E "s|REPLACE_WITH_CERC_ETH_RPC_ENDPOINT|${CERC_ETH_RPC_ENDPOINT}| ")
# Write the modified content to a new file
echo "$WATCHER_CONFIG" > environments/local.toml
echo "Running server..."
DEBUG=vulcanize:* exec node --enable-source-maps dist/server.js


@ -0,0 +1,98 @@
[server]
host = "0.0.0.0"
port = 3008
kind = "active"
gqlPath = "/"
# Checkpointing state.
checkpointing = true
# Checkpoint interval in number of blocks.
checkpointInterval = 2000
# Enable state creation
# CAUTION: Disable only if state creation is not desired or can be filled subsequently
enableState = false
subgraphPath = "./subgraph-build"
# Interval to restart wasm instance periodically
wasmRestartBlocksInterval = 20
# Interval in number of blocks at which to clear entities cache.
clearEntitiesCacheInterval = 1000
# Max block range for which to return events in eventsInRange GQL query.
# Use -1 for skipping check on block range.
maxEventsBlockRange = 1000
# Flag to specify whether RPC endpoint supports block hash as block tag parameter
rpcSupportsBlockHashParam = false
# GQL cache settings
[server.gqlCache]
enabled = true
# Max in-memory cache size (in bytes) (default 8 MB)
# maxCacheSize
# GQL cache-control max-age settings (in seconds)
maxAge = 15
timeTravelMaxAge = 86400 # 1 day
[metrics]
host = "0.0.0.0"
port = 9000
[metrics.gql]
port = 9001
[database]
type = "postgres"
host = "ajna-watcher-db"
port = 5432
database = "ajna-watcher"
username = "vdbm"
password = "password"
synchronize = true
logging = false
[upstream]
[upstream.ethServer]
rpcProviderEndpoint = "REPLACE_WITH_CERC_ETH_RPC_ENDPOINT"
# Boolean flag to specify if rpc-eth-client should be used for RPC endpoint instead of ipld-eth-client (ipld-eth-server GQL client)
rpcClient = true
# Boolean flag to specify if rpcProviderEndpoint is an FEVM RPC endpoint
isFEVM = true
# Boolean flag to filter event logs by contracts
filterLogsByAddresses = true
# Boolean flag to filter event logs by topics
filterLogsByTopics = true
[upstream.cache]
name = "requests"
enabled = false
deleteOnStart = false
[jobQueue]
dbConnectionString = "postgres://vdbm:password@ajna-watcher-db/ajna-watcher-job-queue"
maxCompletionLagInSecs = 300
jobDelayInMilliSecs = 100
eventsInBatch = 50
subgraphEventsOrder = true
# Filecoin block time: https://docs.filecoin.io/basics/the-blockchain/blocks-and-tipsets#blocktime
blockDelayInMilliSecs = 30000
# Boolean to switch between modes of processing events when starting the server.
# Setting to true will fetch filtered events and required blocks in a range of blocks and then process them.
# Setting to false will fetch blocks consecutively with its events and then process them (Behaviour is followed in realtime processing near head).
useBlockRanges = true
# Block range in which logs are fetched during historical blocks processing
historicalLogsBlockRange = 2000
# Max block range of historical processing after which it waits for completion of events processing
# If set to -1 historical processing does not wait for events processing and completes till latest canonical block
historicalMaxFetchAhead = 10000
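
Note: with useBlockRanges = true the job runner walks history in windows of historicalLogsBlockRange blocks, pausing after historicalMaxFetchAhead blocks for event processing to catch up. A simplified sketch of the windowing only (the catch-up wait is omitted):

def historical_ranges(start: int, head: int, window: int = 2000):
    """Yield (from_block, to_block) fetch windows over [start, head]."""
    block = start
    while block <= head:
        end = min(block + window - 1, head)
        yield block, end
        block = end + 1

assert list(historical_ranges(0, 4999)) == [(0, 1999), (2000, 3999), (4000, 4999)]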


@ -1,26 +1,23 @@
- FROM skylenet/ethereum-genesis-generator@sha256:210353ce7c898686bc5092f16c61220a76d357f51eff9c451e9ad1b9ad03d4d3 AS ethgen
+ FROM ethpandaops/ethereum-genesis-generator:3.0.0 AS ethgen
FROM golang:1.20-alpine as builder
- RUN apk add --no-cache python3 py3-pip
+ RUN apk add --no-cache python3 py3-pip make bash envsubst jq
- COPY genesis /opt/genesis
# Install ethereum-genesis-generator tools
- COPY --from=ethgen /usr/local/bin/eth2-testnet-genesis /usr/local/bin/
- COPY --from=ethgen /usr/local/bin/eth2-val-tools /usr/local/bin/
COPY --from=ethgen /apps /apps
RUN cd /apps/el-gen && pip3 install --break-system-packages -r requirements.txt
- # web3==5.24.0 used by el-gen is broken on python 3.11
- RUN pip3 install --break-system-packages --upgrade "web3==6.5.0"
+ RUN pip3 install --break-system-packages --upgrade "web3==v6.15.1"
RUN pip3 install --break-system-packages --upgrade "typing-extensions"
+ # Install tool to generate initial block
+ RUN go install github.com/cerc-io/eth-dump-genblock@b29516740fc01cf1d1d623acbfd0e9a2b6440a96
# Build genesis config
- RUN apk add --no-cache make bash envsubst jq
+ COPY genesis /opt/genesis
RUN cd /opt/genesis && make genesis-el
# Snag the genesis block info.
- RUN go install github.com/cerc-io/eth-dump-genblock@latest
RUN eth-dump-genblock /opt/genesis/build/el/geth.json > /opt/genesis/build/el/genesis_block.json
FROM alpine:latest


@ -9,32 +9,8 @@ mkdir -p ../build/el
tmp_dir=$(mktemp -d -t ci-XXXXXXXXXX)
envsubst < el-config.yaml > $tmp_dir/genesis-config.yaml
- ttd=`cat $tmp_dir/genesis-config.yaml | grep terminal_total_difficulty | awk '{ print $2 }'`
- homestead_block=`cat $tmp_dir/genesis-config.yaml | grep homestead_block | awk '{ print $2 }'`
- eip150_block=`cat $tmp_dir/genesis-config.yaml | grep eip150_block | awk '{ print $2 }'`
- eip155_block=`cat $tmp_dir/genesis-config.yaml | grep eip155_block | awk '{ print $2 }'`
- eip158_block=`cat $tmp_dir/genesis-config.yaml | grep eip158_block | awk '{ print $2 }'`
- byzantium_block=`cat $tmp_dir/genesis-config.yaml | grep byzantium_block | awk '{ print $2 }'`
- constantinople_block=`cat $tmp_dir/genesis-config.yaml | grep constantinople_block | awk '{ print $2 }'`
- petersburg_block=`cat $tmp_dir/genesis-config.yaml | grep petersburg_block | awk '{ print $2 }'`
- istanbul_block=`cat $tmp_dir/genesis-config.yaml | grep istanbul_block | awk '{ print $2 }'`
- berlin_block=`cat $tmp_dir/genesis-config.yaml | grep berlin_block | awk '{ print $2 }'`
- london_block=`cat $tmp_dir/genesis-config.yaml | grep london_block | awk '{ print $2 }'`
- merge_fork_block=`cat $tmp_dir/genesis-config.yaml | grep merge_fork_block | awk '{ print $2 }'`
python3 /apps/el-gen/genesis_geth.py $tmp_dir/genesis-config.yaml | \
- jq ".config.terminalTotalDifficulty=$ttd" | \
- jq ".config.homesteadBlock=$homestead_block" | \
- jq ".config.eip150Block=$eip150_block" | \
- jq ".config.eip155Block=$eip155_block" | \
- jq ".config.eip158Block=$eip158_block" | \
- jq ".config.byzantiumBlock=$byzantium_block" | \
- jq ".config.constantinopleBlock=$constantinople_block" | \
- jq ".config.petersburgBlock=$petersburg_block" | \
- jq ".config.istanbulBlock=$istanbul_block" | \
- jq ".config.berlinBlock=$berlin_block" | \
- jq ".config.londonBlock=$london_block" | \
- jq ".config.mergeForkBlock=$merge_fork_block" | \
- jq ".config.mergeNetsplitBlock=$merge_fork_block" \
+ jq 'del(.config.pragueTime)' \
  > ../build/el/geth.json
python3 ../accounts/mnemonic_to_csv.py $tmp_dir/genesis-config.yaml > ../build/el/accounts.csv


@ -10,22 +10,8 @@ el_premine_addrs: {}
chain_id: 1212
deposit_contract_address: "0x1212121212121212121212121212121212121212"
genesis_timestamp: 0
- terminal_total_difficulty: 1000
- homestead_block: 1
- eip150_block: 1
- eip155_block: 1
- eip158_block: 1
- byzantium_block: 1
- constantinople_block: 1
- petersburg_block: 1
- istanbul_block: 1
- berlin_block: 1
- london_block: 1
- merge_fork_block: 1
- clique:
-   enabled: false
-   signers:
-     - 36d56343bc308d4ffaac2f793d121aba905fa6cc
-     - 5e762d4a3847cadaf40a4b0c39574b0ff6698c78
-     - 15d7acc1019fdf8ab4f0f7bd31ec1487ecb5a2bd
+ genesis_delay: 0
+ deneb_fork_epoch: 0
+ # note: only needed as workaround https://github.com/ethpandaops/ethereum-genesis-generator/pull/105
+ electra_fork_epoch: 0
+ slot_duration_in_seconds: 3


@ -6,7 +6,7 @@ fi
ETHERBASE=`cat /opt/testnet/build/el/accounts.csv | head -1 | cut -d',' -f2`
NETWORK_ID=`cat /opt/testnet/el/el-config.yaml | grep 'chain_id' | awk '{ print $2 }'`
- NETRESTRICT=`ip addr | grep inet | grep -v '127.0' | awk '{print $2}'`
+ NETRESTRICT=`ip addr | grep -w inet | grep -v '127.0' | awk '{print $2}'`
CERC_ETH_DATADIR="${CERC_ETH_DATADIR:-$HOME/ethdata}"
CERC_PLUGINS_DIR="${CERC_PLUGINS_DIR:-/usr/local/lib/plugeth}"
@ -102,6 +102,13 @@ else
  fi
fi
+
+ OTHER_OPTS=""
+ # miner options were removed in v1.12
+ GETH_VERSION=$(geth --version | grep -io '[0-9][0-9a-z.-]*')
+ if echo -e "$GETH_VERSION\n1.12" | sort -Vc; then
+   OTHER_OPTS="--miner.threads=1"
+ fi
$START_CMD \
  --datadir="${CERC_ETH_DATADIR}" \
  --bootnodes="${ENODE}" \
@ -126,12 +133,12 @@ else
  --cache.preimages \
  --syncmode=full \
  --mine \
-   --miner.threads=1 \
  --metrics \
  --metrics.addr="0.0.0.0" \
  --verbosity=${CERC_GETH_VERBOSITY:-3} \
  --log.vmodule="${CERC_GETH_VMODULE:-statediff/*=5}" \
  --miner.etherbase="${ETHERBASE}" \
+   ${OTHER_OPTS} \
  ${STATEDIFF_OPTS} \
  &
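
Note: the sort -Vc pipeline succeeds when the detected geth version sorts at or before 1.12, i.e. only older geths still accept --miner.threads. A close Python approximation comparing major.minor (the exact sort -V edge cases differ slightly):

def needs_miner_threads(geth_version: str) -> bool:
    major_minor = [int(x) for x in geth_version.split(".")[:2]]
    return major_minor <= [1, 12]

assert needs_miner_threads("1.11.6")
assert not needs_miner_threads("1.13.5")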


@ -1,5 +1,4 @@
FROM cerc/lighthouse-cli:local AS lcli
- FROM skylenet/ethereum-genesis-generator@sha256:210353ce7c898686bc5092f16c61220a76d357f51eff9c451e9ad1b9ad03d4d3 AS ethgen
FROM cerc/fixturenet-eth-genesis:local AS fnetgen
FROM cerc/lighthouse:local
@ -12,16 +11,13 @@ RUN apt-get update && apt-get -y upgrade && apt-get install -y --no-install-reco
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
- COPY genesis /opt/testnet
- COPY run-cl.sh /opt/testnet/run.sh
COPY --from=lcli /usr/local/bin/lcli /usr/local/bin/lcli
- COPY --from=ethgen /usr/local/bin/eth2-testnet-genesis /usr/local/bin/eth2-testnet-genesis
- COPY --from=ethgen /usr/local/bin/eth2-val-tools /usr/local/bin/eth2-val-tools
- COPY --from=ethgen /apps /apps
COPY --from=fnetgen /opt/genesis/el /opt/testnet/el
COPY --from=fnetgen /opt/genesis/build/el /opt/testnet/build/el
+ COPY genesis /opt/testnet
+ COPY run-cl.sh /opt/testnet/run.sh
RUN cd /opt/testnet && make genesis-cl
# Work around some bugs in lcli where the default path is always used. # Work around some bugs in lcli where the default path is always used.

View File

@@ -10,7 +10,6 @@ set -Eeuo pipefail

 source ./vars.env

 SUBSCRIBE_ALL_SUBNETS=
-DEBUG_LEVEL=${DEBUG_LEVEL:-debug}

 # Get positional arguments
 data_dir=$DATADIR/node_${NODE_NUMBER}

View File

@@ -9,8 +9,6 @@ set -Eeuo pipefail

 source ./vars.env

-DEBUG_LEVEL=${1:-info}
-
 echo "Starting bootnode"

 # Clean up existing ENR dir to avoid node connectivity issues on a restart

View File

@@ -1,5 +1,6 @@
 #!/usr/bin/env bash
+# See https://github.com/sigp/lighthouse/scripts/local_testnet/setup.sh
 #
 # Deploys the deposit contract and makes deposits for $VALIDATOR_COUNT insecure deterministic validators.
 # Produces a testnet specification and a genesis state where the genesis time

@@ -24,23 +25,26 @@ echo "(Note: errors of the form 'WARN: Scrypt parameters are too weak...' below
 lcli \
   new-testnet \
   --spec $SPEC_PRESET \
-  --deposit-contract-address $ETH1_DEPOSIT_CONTRACT_ADDRESS \
   --testnet-dir $TESTNET_DIR \
+  --deposit-contract-address $ETH1_DEPOSIT_CONTRACT_ADDRESS \
   --min-genesis-active-validator-count $GENESIS_VALIDATOR_COUNT \
   --validator-count $VALIDATOR_COUNT \
   --min-genesis-time $GENESIS_TIME \
   --genesis-delay $GENESIS_DELAY \
   --genesis-fork-version $GENESIS_FORK_VERSION \
   --altair-fork-epoch $ALTAIR_FORK_EPOCH \
-  --bellatrix-fork-epoch $MERGE_FORK_EPOCH \
+  --bellatrix-fork-epoch $BELLATRIX_FORK_EPOCH \
+  --capella-fork-epoch $CAPELLA_FORK_EPOCH \
+  --deneb-fork-epoch $DENEB_FORK_EPOCH \
   --eth1-id $ETH1_CHAIN_ID \
   --eth1-block-hash $ETH1_BLOCK_HASH \
   --eth1-follow-distance 1 \
   --seconds-per-slot $SECONDS_PER_SLOT \
   --seconds-per-eth1-block $SECONDS_PER_ETH1_BLOCK \
+  --interop-genesis-state \
   --force

-echo Specification generated at $TESTNET_DIR.
+echo Specification and genesis.ssz generated at $TESTNET_DIR.

 echo "Generating $VALIDATOR_COUNT validators concurrently... (this may take a while)"

 lcli \

@@ -50,13 +54,3 @@ lcli \
   --node-count $BN_COUNT

 echo Validators generated with keystore passwords at $DATADIR.
-
-echo "Building genesis state... (this might take a while)"
-lcli \
-  interop-genesis \
-  --spec $SPEC_PRESET \
-  --genesis-time $GENESIS_TIME \
-  --testnet-dir $TESTNET_DIR \
-  $GENESIS_VALIDATOR_COUNT
-
-echo Created genesis state in $TESTNET_DIR

View File

@@ -8,8 +8,6 @@ set -Eeuo pipefail

 source ./vars.env

-DEBUG_LEVEL=info
-
 BUILDER_PROPOSALS=

 # Get options

View File

@@ -25,7 +25,9 @@ BOOTNODE_PORT=${BOOTNODE_PORT:-4242}

 # Hard fork configuration
 ALTAIR_FORK_EPOCH=${ALTAIR_FORK_EPOCH:-0}
-MERGE_FORK_EPOCH=${MERGE_FORK_EPOCH:-0}
+BELLATRIX_FORK_EPOCH=${BELLATRIX_FORK_EPOCH:-0}
+CAPELLA_FORK_EPOCH=${CAPELLA_FORK_EPOCH:-0}
+DENEB_FORK_EPOCH=${DENEB_FORK_EPOCH:-0}

 # Spec version (mainnet or minimal)
 SPEC_PRESET=${SPEC_PRESET:-mainnet}

@@ -51,3 +53,6 @@ ETH1_TTD=${ETH1_TTD:-`cat $ETH1_GENESIS_JSON | jq -r '.config.terminalTotalDiffi
 ETH1_DEPOSIT_CONTRACT_ADDRESS=${ETH1_DEPOSIT_CONTRACT_ADDRESS:-`cat $ETH1_CONFIG_YAML | grep 'deposit_contract_address' | awk '{ print $2 }' | sed 's/"//g'`}
 ETH1_DEPOSIT_CONTRACT_BLOCK=${ETH1_DEPOSIT_CONTRACT_BLOCK:-0x0}
 SUGGESTED_FEE_RECIPIENT=`cat ../build/el/accounts.csv | head -1 | cut -d',' -f2`
+
+# --debug-level
+DEBUG_LEVEL=${LIGHTHOUSE_DEBUG_LEVEL:-debug}

View File

@@ -1,9 +1,10 @@
-ARG TAG_SUFFIX="-modern"
-FROM sigp/lighthouse:v4.3.0${TAG_SUFFIX}
+FROM sigp/lighthouse:v5.1.2

-RUN apt-get update; apt-get install bash netcat curl less jq wget -y;
+RUN apt-get update && apt-get -y upgrade \
+  && apt-get -y install bash netcat curl less jq wget \
+  && apt-get clean && rm -rf /var/lib/apt/lists/*

-WORKDIR /root/
+WORKDIR /root
 ADD start-lighthouse.sh .

 ENTRYPOINT [ "./start-lighthouse.sh" ]

View File

@@ -6,4 +6,4 @@ source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
 # See: https://stackoverflow.com/a/246128/1701505
 SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )

-docker build -t cerc/lighthouse:local ${build_command_args} --build-arg TAG_SUFFIX="" ${SCRIPT_DIR}
+docker build -t cerc/lighthouse:local ${build_command_args} ${SCRIPT_DIR}

View File

@@ -26,6 +26,8 @@ RUN \
   && su ${USERNAME} -c "umask 0002 && npm install -g eslint" \
   # Install semver
   && su ${USERNAME} -c "umask 0002 && npm install -g semver" \
+  # Install pnpm
+  && su ${USERNAME} -c "umask 0002 && npm install -g pnpm" \
   && npm cache clean --force > /dev/null 2>&1

 # [Optional] Uncomment this section to install additional OS packages.

@@ -35,6 +37,9 @@ RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
 # [Optional] Uncomment if you want to install more global node modules
 # RUN su node -c "npm install -g <your-package-list-here>"

+# We do this to get a yq binary from the published container, for the correct architecture we're building here
+COPY --from=docker.io/mikefarah/yq:latest /usr/bin/yq /usr/local/bin/yq
+
 # Expose port for http
 EXPOSE 80

View File

@@ -11,8 +11,14 @@ CERC_CONTAINER_BUILD_DOCKERFILE=${CERC_CONTAINER_BUILD_DOCKERFILE:-$SCRIPT_DIR/D
 CERC_CONTAINER_BUILD_TAG=${CERC_CONTAINER_BUILD_TAG:-cerc/nextjs-base:local}

 docker build -t $CERC_CONTAINER_BUILD_TAG ${build_command_args} -f $CERC_CONTAINER_BUILD_DOCKERFILE $CERC_CONTAINER_BUILD_WORK_DIR
+rc=$?

-if [ $? -eq 0 ] && [ "$CERC_CONTAINER_BUILD_TAG" != "cerc/nextjs-base:local" ]; then
+if [ $rc -ne 0 ]; then
+  echo "BUILD FAILED" 1>&2
+  exit $rc
+fi
+
+if [ "$CERC_CONTAINER_BUILD_TAG" != "cerc/nextjs-base:local" ]; then
   cat <<EOF
 #################################################################

View File

@@ -10,10 +10,12 @@ TRG_DIR="${3:-.next-r}"

 CERC_BUILD_TOOL="${CERC_BUILD_TOOL}"
 if [ -z "$CERC_BUILD_TOOL" ]; then
-  if [ -f "yarn.lock" ]; then
-    CERC_BUILD_TOOL=npm
-  else
-    CERC_BUILD_TOOL=yarn
+  if [ -f "pnpm-lock.yaml" ]; then
+    CERC_BUILD_TOOL=pnpm
+  elif [ -f "yarn.lock" ]; then
+    CERC_BUILD_TOOL=yarn
+  else
+    CERC_BUILD_TOOL=npm
   fi
 fi

View File

@@ -9,7 +9,9 @@ CERC_MIN_NEXTVER=13.4.2

 CERC_NEXT_VERSION="${CERC_NEXT_VERSION:-keep}"
 CERC_BUILD_TOOL="${CERC_BUILD_TOOL}"
 if [ -z "$CERC_BUILD_TOOL" ]; then
-  if [ -f "yarn.lock" ]; then
+  if [ -f "pnpm-lock.yaml" ]; then
+    CERC_BUILD_TOOL=pnpm
+  elif [ -f "yarn.lock" ]; then
     CERC_BUILD_TOOL=yarn
   else
     CERC_BUILD_TOOL=npm

@@ -113,7 +115,7 @@ if [ "$CERC_NEXT_VERSION" != "keep" ] && [ "$CUR_NEXT_VERSION" != "$CERC_NEXT_VE
   mv package.json.$$ package.json
 fi

-$CERC_BUILD_TOOL install || exit 1
+time $CERC_BUILD_TOOL install || exit 1

 CUR_NEXT_VERSION=`jq -r '.version' node_modules/next/package.json`

@@ -136,9 +138,9 @@ to use for the build with:
 EOF
   cat package.json | jq ".dependencies.next = \"^$CERC_MIN_NEXTVER\"" > package.json.$$
   mv package.json.$$ package.json
-  $CERC_BUILD_TOOL install || exit 1
+  time $CERC_BUILD_TOOL install || exit 1
 fi

-$CERC_BUILD_TOOL run cerc_compile || exit 1
+time $CERC_BUILD_TOOL run cerc_compile || exit 1

 exit 0

View File

@@ -16,7 +16,9 @@ trap ctrl_c INT

 CERC_BUILD_TOOL="${CERC_BUILD_TOOL}"
 if [ -z "$CERC_BUILD_TOOL" ]; then
-  if [ -f "yarn.lock" ] && [ ! -f "package-lock.json" ]; then
+  if [ -f "pnpm-lock.yaml" ]; then
+    CERC_BUILD_TOOL=pnpm
+  elif [ -f "yarn.lock" ]; then
     CERC_BUILD_TOOL=yarn
   else
     CERC_BUILD_TOOL=npm

View File

@@ -0,0 +1,6 @@
FROM cerc/snowballtools-base-backend-base:local
WORKDIR /app/packages/backend
COPY run.sh .
ENTRYPOINT ["./run.sh"]

View File

@@ -0,0 +1,26 @@
FROM ubuntu:22.04 as builder
RUN apt update && \
apt install -y --no-install-recommends --no-install-suggests \
ca-certificates curl gnupg
# Node
ARG NODE_MAJOR=20
RUN curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg && \
echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | tee /etc/apt/sources.list.d/nodesource.list && \
apt update && apt install -y nodejs
# npm setup
RUN npm config set @cerc-io:registry https://git.vdb.to/api/packages/cerc-io/npm/ && npm install -g yarn
COPY . /app/
WORKDIR /app/
RUN find . -name 'node_modules' | xargs -n1 rm -rf
RUN yarn && yarn build --ignore frontend
FROM cerc/webapp-base:local
COPY --from=builder /app /app
WORKDIR /app/packages/backend

View File

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
# Build cerc/webapp-deployer-backend
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
# See: https://stackoverflow.com/a/246128/1701505
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/snowballtools-base-backend-base:local ${build_command_args} -f ${SCRIPT_DIR}/Dockerfile-base ${CERC_REPO_BASE_DIR}/snowballtools-base
docker build -t cerc/snowballtools-base-backend:local ${build_command_args} ${SCRIPT_DIR}

View File

@@ -0,0 +1,19 @@
#!/bin/bash
LACONIC_HOSTED_CONFIG_FILE=${LACONIC_HOSTED_CONFIG_FILE}
if [ -z "${LACONIC_HOSTED_CONFIG_FILE}" ]; then
if [ -f "/config/laconic-hosted-config.yml" ]; then
LACONIC_HOSTED_CONFIG_FILE="/config/laconic-hosted-config.yml"
elif [ -f "/config/config.yml" ]; then
LACONIC_HOSTED_CONFIG_FILE="/config/config.yml"
fi
fi
if [ -f "${LACONIC_HOSTED_CONFIG_FILE}" ]; then
/scripts/apply-webapp-config.sh $LACONIC_HOSTED_CONFIG_FILE "`pwd`/dist"
fi
/scripts/apply-runtime-env.sh "`pwd`/dist"
yarn start

View File

@@ -39,6 +39,15 @@ fi
 if [ -n "$CERC_TEST_PARAM_2" ]; then
   echo "Test-param-2: ${CERC_TEST_PARAM_2}"
 fi
+if [ -n "$CERC_TEST_PARAM_3" ]; then
+  echo "Test-param-3: ${CERC_TEST_PARAM_3}"
+fi
+if [ -n "$CERC_TEST_PARAM_4" ]; then
+  echo "Test-param-4: ${CERC_TEST_PARAM_4}"
+fi
+if [ -n "$CERC_TEST_PARAM_5" ]; then
+  echo "Test-param-5: ${CERC_TEST_PARAM_5}"
+fi

 if [ -d "/config" ]; then
   echo "/config: EXISTS"

View File

@@ -0,0 +1,10 @@
FROM node:18.17.1-alpine3.18
RUN apk --update --no-cache add git python3 alpine-sdk bash curl jq
WORKDIR /app
COPY . .
RUN echo "Installing dependencies and building ajna-watcher-ts" && \
yarn && yarn build

View File

@@ -0,0 +1,9 @@
#!/usr/bin/env bash
# Build cerc/watcher-ajna
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
# See: https://stackoverflow.com/a/246128/1701505
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/watcher-ajna:local -f ${SCRIPT_DIR}/Dockerfile ${build_command_args} ${CERC_REPO_BASE_DIR}/ajna-watcher-ts

View File

@@ -1,6 +1,6 @@
 # Originally from: https://github.com/devcontainers/images/blob/main/src/javascript-node/.devcontainer/Dockerfile
 # [Choice] Node.js version (use -bullseye variants on local arm64/Apple Silicon): 18, 16, 14, 18-bullseye, 16-bullseye, 14-bullseye, 18-buster, 16-buster, 14-buster
-ARG VARIANT=20-bullseye
+ARG VARIANT=20-bullseye-slim
 FROM node:${VARIANT}

 ARG USERNAME=node

@@ -24,6 +24,10 @@ RUN \
   && su ${USERNAME} -c "npm config -g set prefix ${NPM_GLOBAL}" \
   # Install eslint
   && su ${USERNAME} -c "umask 0002 && npm install -g eslint" \
+  # Install semver
+  && su ${USERNAME} -c "umask 0002 && npm install -g semver" \
+  # Install pnpm
+  && su ${USERNAME} -c "umask 0002 && npm install -g pnpm" \
   && npm cache clean --force > /dev/null 2>&1

 # [Optional] Uncomment this section to install additional OS packages.

@@ -43,7 +47,6 @@ COPY scripts /scripts
 # RUN su node -c "npm install -g <your-package-list-here>"

 RUN mkdir -p /config
-COPY ./config.yml /config

 # Install simple web server for now (use nginx perhaps later)
 RUN yarn global add http-server

View File

@@ -1,11 +1,12 @@
 FROM cerc/webapp-base:local as builder

 ARG CERC_BUILD_TOOL
+ARG CERC_BUILD_OUTPUT_DIR

 WORKDIR /app
 COPY . .
-RUN rm -rf node_modules build .next*
-RUN /scripts/build-app.sh /app build /data
+RUN rm -rf node_modules build dist .next*
+RUN /scripts/build-app.sh /app /data

 FROM cerc/webapp-base:local
 COPY --from=builder /data /data

View File

@@ -11,8 +11,14 @@ CERC_CONTAINER_BUILD_DOCKERFILE=${CERC_CONTAINER_BUILD_DOCKERFILE:-$SCRIPT_DIR/D
 CERC_CONTAINER_BUILD_TAG=${CERC_CONTAINER_BUILD_TAG:-cerc/webapp-base:local}

 docker build -t $CERC_CONTAINER_BUILD_TAG ${build_command_args} -f $CERC_CONTAINER_BUILD_DOCKERFILE $CERC_CONTAINER_BUILD_WORK_DIR
+rc=$?

-if [ $? -eq 0 ] && [ "$CERC_CONTAINER_BUILD_TAG" != "cerc/webapp-base:local" ]; then
+if [ $rc -ne 0 ]; then
+  echo "BUILD FAILED" 1>&2
+  exit $rc
+fi
+
+if [ "$CERC_CONTAINER_BUILD_TAG" != "cerc/webapp-base:local" ]; then
   cat <<EOF
 #################################################################

View File

@@ -1 +0,0 @@
-# Put config here.

View File

@@ -7,27 +7,46 @@ if [ -n "$CERC_SCRIPT_DEBUG" ]; then
 fi

 CERC_BUILD_TOOL="${CERC_BUILD_TOOL}"
-WORK_DIR="${1:-/app}"
-OUTPUT_DIR="${2:-build}"
-DEST_DIR="${3:-/data}"
+CERC_BUILD_OUTPUT_DIR="${CERC_BUILD_OUTPUT_DIR}"

-if [ -f "${WORK_DIR}/package.json" ]; then
+WORK_DIR="${1:-/app}"
+DEST_DIR="${2:-/data}"
+
+if [ -f "${WORK_DIR}/build-webapp.sh" ]; then
+  echo "Building webapp with ${WORK_DIR}/build-webapp.sh ..."
+  cd "${WORK_DIR}" || exit 1
+
+  rm -rf "${DEST_DIR}"
+  ./build-webapp.sh "${DEST_DIR}" || exit 1
+elif [ -f "${WORK_DIR}/package.json" ]; then
   echo "Building node-based webapp ..."
   cd "${WORK_DIR}" || exit 1

   if [ -z "$CERC_BUILD_TOOL" ]; then
-    if [ -f "yarn.lock" ]; then
+    if [ -f "pnpm-lock.yaml" ]; then
+      CERC_BUILD_TOOL=pnpm
+    elif [ -f "yarn.lock" ]; then
       CERC_BUILD_TOOL=yarn
     else
       CERC_BUILD_TOOL=npm
     fi
   fi

-  $CERC_BUILD_TOOL install || exit 1
-  $CERC_BUILD_TOOL build || exit 1
+  time $CERC_BUILD_TOOL install || exit 1
+  time $CERC_BUILD_TOOL build || exit 1

   rm -rf "${DEST_DIR}"
-  mv "${WORK_DIR}/${OUTPUT_DIR}" "${DEST_DIR}"
+  if [ -z "${CERC_BUILD_OUTPUT_DIR}" ]; then
+    if [ -d "${WORK_DIR}/dist" ]; then
+      CERC_BUILD_OUTPUT_DIR="${WORK_DIR}/dist"
+    elif [ -d "${WORK_DIR}/build" ]; then
+      CERC_BUILD_OUTPUT_DIR="${WORK_DIR}/build"
+    else
+      echo "ERROR: Unable to locate build output. Set with --extra-build-args \"--build-arg CERC_BUILD_OUTPUT_DIR=path\"" 1>&2
+      exit 1
+    fi
+  fi
+  mv "${CERC_BUILD_OUTPUT_DIR}" "${DEST_DIR}"
 else
   echo "Copying static app ..."
   mv "${WORK_DIR}" "${DEST_DIR}"

View File

@@ -3,13 +3,40 @@ if [ -n "$CERC_SCRIPT_DEBUG" ]; then
   set -x
 fi

+CERC_LISTEN_PORT=${CERC_LISTEN_PORT:-80}
 CERC_WEBAPP_FILES_DIR="${CERC_WEBAPP_FILES_DIR:-/data}"
 CERC_ENABLE_CORS="${CERC_ENABLE_CORS:-false}"
+CERC_SINGLE_PAGE_APP="${CERC_SINGLE_PAGE_APP}"
+
+if [ -z "${CERC_SINGLE_PAGE_APP}" ]; then
+  if [ 1 -eq $(find "${CERC_WEBAPP_FILES_DIR}" -name '*.html' | wc -l) ] && [ -d "${CERC_WEBAPP_FILES_DIR}/static" ]; then
+    CERC_SINGLE_PAGE_APP=true
+  else
+    CERC_SINGLE_PAGE_APP=false
+  fi
+fi

 if [ "true" == "$CERC_ENABLE_CORS" ]; then
   CERC_HTTP_EXTRA_ARGS="$CERC_HTTP_EXTRA_ARGS --cors"
 fi

-/scripts/apply-webapp-config.sh /config/config.yml ${CERC_WEBAPP_FILES_DIR}
+if [ "true" == "$CERC_SINGLE_PAGE_APP" ]; then
+  # Create a catchall redirect back to /
+  CERC_HTTP_EXTRA_ARGS="$CERC_HTTP_EXTRA_ARGS --proxy http://localhost:${CERC_LISTEN_PORT}?"
+fi
+
+LACONIC_HOSTED_CONFIG_FILE=${LACONIC_HOSTED_CONFIG_FILE}
+if [ -z "${LACONIC_HOSTED_CONFIG_FILE}" ]; then
+  if [ -f "/config/laconic-hosted-config.yml" ]; then
+    LACONIC_HOSTED_CONFIG_FILE="/config/laconic-hosted-config.yml"
+  elif [ -f "/config/config.yml" ]; then
+    LACONIC_HOSTED_CONFIG_FILE="/config/config.yml"
+  fi
+fi
+
+if [ -f "${LACONIC_HOSTED_CONFIG_FILE}" ]; then
+  /scripts/apply-webapp-config.sh $LACONIC_HOSTED_CONFIG_FILE "${CERC_WEBAPP_FILES_DIR}"
+fi
+
 /scripts/apply-runtime-env.sh ${CERC_WEBAPP_FILES_DIR}
-http-server $CERC_HTTP_EXTRA_ARGS -p ${CERC_LISTEN_PORT:-80} ${CERC_WEBAPP_FILES_DIR}
+http-server $CERC_HTTP_EXTRA_ARGS -p ${CERC_LISTEN_PORT} "${CERC_WEBAPP_FILES_DIR}"
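A note on the `--proxy http://localhost:${CERC_LISTEN_PORT}?` argument assembled above: http-server forwards any request it cannot resolve from the file tree to the proxy URL, and pointing that URL back at the server itself (with the trailing `?` folding the unresolved path into a query string) produces the catch-all redirect back to `/` that the inline comment describes, which is the usual requirement for single-page-app client-side routing. A standalone equivalent, with paths and port assumed:

```bash
# Serve /data on port 80; unresolved paths fall back to the SPA entry point.
http-server -p 80 --proxy "http://localhost:80?" /data
```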

View File

@@ -0,0 +1,118 @@
# Ajna Watcher
## Setup
Clone required repositories:
```bash
laconic-so --stack ajna setup-repositories --git-ssh --pull
```
Build the container images:
```bash
laconic-so --stack ajna build-containers
```
## Deploy
Create a spec file for the deployment:
```bash
laconic-so --stack ajna deploy init --output ajna-spec.yml
```
### Ports
Edit `network` in the spec file to map container ports to host ports as required:
```yml
...
network:
ports:
ajna-watcher-db:
- 15432:5432
ajna-watcher-job-runner:
- 9000:9000
ajna-watcher-server:
- 3008:3008
- 9001:9001
```
### Create a deployment
Create a deployment from the spec file:
```bash
laconic-so --stack ajna deploy create --spec-file ajna-spec.yml --deployment-dir ajna-deployment
```
### Configuration
Inside the deployment directory, open the `config.env` file and set the following env variables:
```bash
# External Filecoin (ETH RPC) endpoint to point the watcher to
CERC_ETH_RPC_ENDPOINT=https://example-lotus-endpoint/rpc/v1
```
### Start the deployment
```bash
laconic-so deployment --dir ajna-deployment start
```
* To list and monitor the running containers:
```bash
# With status
docker ps -a
# Check logs for a container
docker logs -f <CONTAINER_ID>
```
* Open the GQL playground at <http://localhost:3008/graphql>
```graphql
# Example query
query {
_meta {
block {
hash
number
timestamp
}
deployment
hasIndexingErrors
}
accounts {
id
txCount
tokensDelegated
rewardsClaimed
}
}
```
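The same query can also be issued without the playground, for example with plain curl against the port mapped above (a standard GraphQL-over-HTTP POST; adjust the port if you mapped it differently):
```bash
curl -s http://localhost:3008/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ _meta { block { hash number timestamp } deployment hasIndexingErrors } }"}'
```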
## Clean up
Stop all the ajna services running in the background:
```bash
# Only stop the docker containers
laconic-so deployment --dir ajna-deployment stop
# Run 'start' to restart the deployment
```
To stop all the ajna services and also delete data:
```bash
# Stop the docker containers
laconic-so deployment --dir ajna-deployment stop --delete-volumes
# Remove deployment directory (deployment will have to be recreated for a re-run)
rm -r ajna-deployment
```

View File

@@ -0,0 +1,9 @@
version: "1.0"
name: ajna
description: "Ajna watcher stack"
repos:
- git.vdb.to/cerc-io/ajna-watcher-ts@v0.1.1
containers:
- cerc/watcher-ajna
pods:
- watcher-ajna

View File

@@ -0,0 +1,26 @@
# Blast stack
## Clone required repositories
```
$ laconic-so --stack fixturenet-blast setup-repositories
```
## Build the stack's containers
```
$ laconic-so --stack fixturenet-blast build-containers
```
## Create a deployment of the stack
```
$ laconic-so --stack fixturenet-blast deploy init --map-ports-to-host any-same --output blast-spec.yml
```
[Insert details on how to configure the stack]
```
$ laconic-so --stack fixturenet-blast deploy create --deployment-dir blast-deployment --spec-file blast-spec.yml
```
## Start the stack
```
$ laconic-so deployment --dir blast-deployment start
```
Check logs:
```
$ laconic-so deployment --dir blast-deployment logs
```

View File

@@ -0,0 +1,17 @@
version: "1.0"
name: fixturenet-blast
description: "A blast devnet stack"
repos:
- github.com/blast-io/blast
- git.vdb.to/cerc-io/lighthouse
containers:
- cerc/webapp-base
- cerc/lighthouse
- cerc/lighthouse-cli
- cerc/foundry
- cerc/fixturenet-eth-lighthouse
pods:
- fixturenet-blast
- foundry

View File

@@ -0,0 +1,26 @@
# Blast stack
## Clone required repositories
```
$ laconic-so --stack mainnet-blast setup-repositories
```
## Build the stack's containers
```
$ laconic-so --stack mainnet-blast build-containers
```
## Create a deployment of the stack
```
$ laconic-so --stack mainnet-blast deploy init --map-ports-to-host any-same --output blast-spec.yml
```
[Insert details on how to configure the stack]
```
$ laconic-so --stack mainnet-blast deploy create --deployment-dir blast-deployment --spec-file blast-spec.yml
```
## Start the stack
```
$ laconic-so deployment --dir blast-deployment start
```
Check logs:
```
$ laconic-so deployment --dir blast-deployment logs
```

View File

@@ -0,0 +1,39 @@
# Copyright © 2023 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http:#www.gnu.org/licenses/>.
from pathlib import Path
from shutil import copy
import yaml
def create(context, extra_args):
# Our goal here is just to copy the json files for blast
yml_path = context.deployment_dir.joinpath("spec.yml")
with open(yml_path, 'r') as file:
data = yaml.safe_load(file)
mount_point = data['volumes']['blast-data']
if mount_point[0] == "/":
deploy_dir = Path(mount_point)
else:
deploy_dir = context.deployment_dir.joinpath(mount_point)
command_context = extra_args[2]
compose_file = [f for f in command_context.cluster_context.compose_files if "mainnet-blast" in f][0]
source_config_file = Path(compose_file).parent.parent.joinpath("config", "mainnet-blast", "genesis.json")
copy(source_config_file, deploy_dir)
source_config_file = Path(compose_file).parent.parent.joinpath("config", "mainnet-blast", "rollup.json")
copy(source_config_file, deploy_dir)

View File

@@ -0,0 +1,12 @@
version: "1.0"
name: mainnet-blast
description: "A blast stack"
repos:
- github.com/blast-io/blast
- git.vdb.to/cerc-io/lighthouse
containers:
- cerc/webapp-base
- cerc/lighthouse
- cerc/lighthouse-cli
pods:
- mainnet-blast

View File

@@ -16,26 +16,55 @@ laconic-so --stack merkl-sushiswap-v3 build-containers
 ## Deploy

-### Configuration
-Create and update an env file to be used in the next step:
+Create a spec file for the deployment:
 ```bash
-# External Filecoin (ETH RPC) endpoint to point the watcher
-CERC_ETH_RPC_ENDPOINT=
+laconic-so --stack merkl-sushiswap-v3 deploy init --output merkl-sushiswap-v3-spec.yml
 ```

-### Deploy the stack
+### Ports
+
+Edit `network` in the spec file to map container ports to host ports as required:
+```
+...
+network:
+  ports:
+    merkl-sushiswap-v3-watcher-db:
+      - '5432'
+    merkl-sushiswap-v3-watcher-job-runner:
+      - 9002:9000
+    merkl-sushiswap-v3-watcher-server:
+      - 127.0.0.1:3007:3008
+      - 9003:9001
+```
+
+### Create a deployment
+
+Create a deployment from the spec file:
+```bash
+laconic-so --stack merkl-sushiswap-v3 deploy create --spec-file merkl-sushiswap-v3-spec.yml --deployment-dir merkl-sushiswap-v3-deployment
+```
+
+### Configuration
+
+Inside the deployment directory, open the `config.env` file and set the following env variables:
+```bash
+# External Filecoin (ETH RPC) endpoint to point the watcher to
+CERC_ETH_RPC_ENDPOINT=https://example-lotus-endpoint/rpc/v1
+```
+
+### Start the deployment
 ```bash
-laconic-so --stack merkl-sushiswap-v3 deploy --cluster merkl_sushiswap_v3 --env-file <PATH_TO_ENV_FILE> up
+laconic-so deployment --dir merkl-sushiswap-v3-deployment start
 ```

 * To list and monitor the running containers:
 ```bash
-laconic-so --stack merkl-sushiswap-v3 deploy --cluster merkl_sushiswap_v3 ps
 # With status
 docker ps -a

@@ -46,6 +75,7 @@ laconic-so --stack merkl-sushiswap-v3 deploy --cluster merkl_sushiswap_v3 --env-
 * Open the GQL playground at http://localhost:3007/graphql
 ```graphql
+# Example query
 {
   _meta {
     block {

@@ -64,18 +94,21 @@ laconic-so --stack merkl-sushiswap-v3 deploy --cluster merkl_sushiswap_v3 --env-
 ## Clean up

-Stop all the services running in background:
+Stop all the merkl-sushiswap-v3 services running in the background:
 ```bash
-laconic-so --stack merkl-sushiswap-v3 deploy --cluster merkl_sushiswap_v3 down
+# Only stop the docker containers
+laconic-so deployment --dir merkl-sushiswap-v3-deployment stop
+# Run 'start' to restart the deployment
 ```

-Clear volumes created by this stack:
+To stop all the merkl-sushiswap-v3 services and also delete data:
 ```bash
-# List all relevant volumes
-docker volume ls -q --filter "name=merkl_sushiswap_v3"
-# Remove all the listed volumes
-docker volume rm $(docker volume ls -q --filter "name=merkl_sushiswap_v3")
+# Stop the docker containers
+laconic-so deployment --dir merkl-sushiswap-v3-deployment stop --delete-volumes
+# Remove deployment directory (deployment will have to be recreated for a re-run)
+rm -r merkl-sushiswap-v3-deployment
 ```

View File

@@ -2,7 +2,7 @@ version: "1.0"
 name: merkl-sushiswap-v3
 description: "SushiSwap v3 watcher stack"
 repos:
-  - github.com/cerc-io/merkl-sushiswap-v3-watcher-ts@v0.1.6
+  - github.com/cerc-io/merkl-sushiswap-v3-watcher-ts@v0.1.7
 containers:
   - cerc/watcher-merkl-sushiswap-v3
 pods:

View File

@@ -4,6 +4,7 @@
 * Comes with the following built-in exporters / dashboards:
   * Chain Head Exporter - for tracking chain heads given external ETH RPC endpoints
     * Watchers dashboard
+    * laconicd dashboard
   * [Prometheus Blackbox](https://grafana.com/grafana/dashboards/7587-prometheus-blackbox-exporter/) - for tracking HTTP endpoints
   * [NodeJS Application Dashboard](https://grafana.com/grafana/dashboards/11159-nodejs-application-dashboard/) - for default NodeJS metrics
   * [PostgreSQL Database](https://grafana.com/grafana/dashboards/9628-postgresql-database/) - for monitoring Postgres dbs

@@ -99,6 +100,7 @@ laconic-so --stack monitoring deploy create --spec-file monitoring-spec.yml --de
       - targets:
         - <HTTP_ENDPOINT_1>
         - <HTTP_ENDPOINT_2>
+        - <LACONICD_GQL_ENDPOINT>
   ```
 * Postgres (in-stack exporter):

@@ -116,6 +118,17 @@ laconic-so --stack monitoring deploy create --spec-file monitoring-spec.yml --de
   ```
   * Add database credentials to be used in `auth_modules` in the postgres-exporter config file (`monitoring-deployment/config/monitoring/postgres-exporter.yml`)
+* laconicd: update the `laconicd` job with a laconicd node's REST endpoint host and port:
+  ```yml
+  ...
+  - job_name: laconicd
+    ...
+    static_configs:
+      - targets: ['example-host:1317']
+  ...
+  ```

 Note: Use `host.docker.internal` as host to access ports on the host machine
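Once the `laconicd` job is in place, whether the target is actually being scraped can be checked against the Prometheus HTTP API using the built-in `up` metric (the Prometheus port here is assumed to be the default 9090; adjust to your deployment):

```bash
# 1 = target scraped successfully, 0 = scrape failing
curl -s 'http://localhost:9090/api/v1/query?query=up{job="laconicd"}'
```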
### Grafana Config

View File

@@ -44,8 +44,18 @@ Add the following scrape configs to prometheus config file (`monitoring-watchers
   - job_name: 'blackbox'
     ...
     static_configs:
-      - targets:
-        - <AZIMUTH_GATEWAY_GQL_ENDPOINT>
+      - targets: [<AZIMUTH_GATEWAY_GQL_ENDPOINT>]
+        labels:
+          # Add destination label for pre-configured alerts
+          destination: 'azimuth_gateway'
+      - targets: [<LACONICD_GQL_ENDPOINT>]
+        labels:
+          destination: 'laconicd_gql'
+  ...
+  - job_name: laconicd
+    ...
+    static_configs:
+      - targets: ['LACONICD_REST_HOST:LACONICD_REST_PORT']
   ...
   - job_name: azimuth
     scrape_interval: 10s

@@ -98,16 +108,28 @@ Add the following scrape configs to prometheus config file (`monitoring-watchers
       labels:
         instance: 'merkl_sushiswap'
         chain: 'filecoin'
+
+  - job_name: ajna
+    scrape_interval: 20s
+    metrics_path: /metrics
+    scheme: http
+    static_configs:
+      - targets: ['AJNA_WATCHER_HOST:AJNA_WATCHER_PORT']
+        labels:
+          instance: 'ajna'
+          chain: 'filecoin'
 ```

 Add a scrape config as done above for any additional watcher to add it to the Watchers dashboard.

 ### Grafana alerts config

-Place the pre-configured watcher alert rules in the Grafana provisioning directory:
+Place the pre-configured watcher and blackbox endpoint alert rules in the Grafana provisioning directory:
 ```bash
 cp monitoring-watchers-deployment/config/monitoring/watcher-alert-rules.yml monitoring-watchers-deployment/config/monitoring/grafana/provisioning/alerting/
+cp monitoring-watchers-deployment/config/monitoring/blackbox-alert-rules.yml monitoring-watchers-deployment/config/monitoring/grafana/provisioning/alerting/
 ```

 Update the alerting contact points config (`monitoring-watchers-deployment/config/monitoring/grafana/provisioning/alerting/contactpoints.yml`) with the desired contact points.

@@ -120,7 +142,7 @@ Add corresponding routes to the notification policies config (`monitoring-watche
       - receiver: SlackNotifier
         object_matchers:
           # Add matchers below
-          - ['grafana_folder', '=', 'WatcherAlerts']
+          - ['grafana_folder', '=~', 'WatcherAlerts|BlackboxAlerts']
 ```

 ### Env

View File

@@ -1,7 +1,7 @@
 version: "0.1"
 name: monitoring
 repos:
-  - github.com/cerc-io/watcher-ts@v0.2.79
+  - github.com/cerc-io/watcher-ts@v0.2.81
 containers:
   - cerc/watcher-ts
 pods:

View File

@@ -0,0 +1,10 @@
version: "1.0"
name: snowballtools-base-backend
description: "snowballtools-base-backend"
repos:
- github.com/snowball-tools/snowballtools-base
containers:
- cerc/webapp-base
- cerc/snowballtools-base-backend
pods:
- snowballtools-base-backend

View File

@@ -55,7 +55,7 @@ ports:
 Create deployment:

 ```bash
-laconic-so deploy create --spec-file sushiswap-subgraph-spec.yml --deployment-dir sushiswap-subgraph-deployment
+laconic-so --stack sushiswap-subgraph deploy create --spec-file sushiswap-subgraph-spec.yml --deployment-dir sushiswap-subgraph-deployment
 ```

 ## Start the stack

View File

@@ -16,26 +16,55 @@ laconic-so --stack sushiswap-v3 build-containers
 ## Deploy

-### Configuration
-Create and update an env file to be used in the next step:
+Create a spec file for the deployment:
 ```bash
-# External Filecoin (ETH RPC) endpoint to point the watcher
-CERC_ETH_RPC_ENDPOINT=
+laconic-so --stack sushiswap-v3 deploy init --output sushiswap-v3-spec.yml
 ```

-### Deploy the stack
+### Ports
+
+Edit `network` in the spec file to map container ports to host ports as required:
+```
+...
+network:
+  ports:
+    sushiswap-v3-watcher-db:
+      - '5432'
+    sushiswap-v3-watcher-job-runner:
+      - 9000:9000
+    sushiswap-v3-watcher-server:
+      - 127.0.0.1:3008:3008
+      - 9001:9001
+```
+
+### Create a deployment
+
+Create a deployment from the spec file:
+```bash
+laconic-so --stack sushiswap-v3 deploy create --spec-file sushiswap-v3-spec.yml --deployment-dir sushiswap-v3-deployment
+```
+
+### Configuration
+
+Inside the deployment directory, open the `config.env` file and set the following env variables:
+```bash
+# External Filecoin (ETH RPC) endpoint to point the watcher to
+CERC_ETH_RPC_ENDPOINT=https://example-lotus-endpoint/rpc/v1
+```
+
+### Start the deployment
 ```bash
-laconic-so --stack sushiswap-v3 deploy --cluster sushiswap_v3 --env-file <PATH_TO_ENV_FILE> up
+laconic-so deployment --dir sushiswap-v3-deployment start
 ```

 * To list and monitor the running containers:
 ```bash
-laconic-so --stack sushiswap-v3 deploy --cluster sushiswap_v3 ps
 # With status
 docker ps -a

@@ -43,20 +72,43 @@ laconic-so --stack sushiswap-v3 deploy --cluster sushiswap_v3 --env-file <PATH_T
 docker logs -f <CONTAINER_ID>
 ```

+* Open the GQL playground at http://localhost:3008/graphql
+```graphql
+# Example query
+{
+  _meta {
+    block {
+      number
+      timestamp
+    }
+    hasIndexingErrors
+  }
+  factories {
+    id
+    poolCount
+  }
+}
+```
+
 ## Clean up

-Stop all the services running in background:
+Stop all the sushiswap-v3 services running in the background:
 ```bash
-laconic-so --stack sushiswap-v3 deploy --cluster sushiswap_v3 down
+# Only stop the docker containers
+laconic-so deployment --dir sushiswap-v3-deployment stop
+# Run 'start' to restart the deployment
 ```

-Clear volumes created by this stack:
+To stop all the sushiswap-v3 services and also delete data:
 ```bash
-# List all relevant volumes
-docker volume ls -q --filter "name=sushiswap_v3"
-# Remove all the listed volumes
-docker volume rm $(docker volume ls -q --filter "name=sushiswap_v3")
+# Stop the docker containers
+laconic-so deployment --dir sushiswap-v3-deployment stop --delete-volumes
+# Remove deployment directory (deployment will have to be recreated for a re-run)
+rm -r sushiswap-v3-deployment
 ```

View File

@@ -2,7 +2,7 @@ version: "1.0"
 name: sushiswap-v3
 description: "SushiSwap v3 watcher stack"
 repos:
-  - github.com/cerc-io/sushiswap-v3-watcher-ts@v0.1.6
+  - github.com/cerc-io/sushiswap-v3-watcher-ts@v0.1.7
 containers:
   - cerc/watcher-sushiswap-v3
 pods:

View File

@@ -33,23 +33,29 @@ laconic-so --stack uniswap-urbit-app deploy init --output uniswap-urbit-app-spec
 ### Ports

-Edit `network` in spec file to map container ports to same ports in host:
+Edit `uniswap-urbit-app-spec.yml` such that it looks like:
 ```yml
-...
+stack: uniswap-urbit-app
+deploy-to: compose
 network:
   ports:
-    urbit-fake-ship:
-      - '8080:80'
     proxy-server:
       - '4000:4000'
+    urbit-fake-ship:
+      - '8080:80'
     ipfs:
-      - '4001'
       - '8081:8080'
-      - '5001:5001'
-...
+      - 0.0.0.0:5001:5001
+volumes:
+  urbit_app_builds: ./data/urbit_app_builds
+  urbit_data: ./data/urbit_data
+  ipfs-import: ./data/ipfs-import
+  ipfs-data: ./data/ipfs-data
 ```

-Note: Skip the `ipfs` ports if need to use an externally running IPFS node
+Note: Skip the `ipfs` ports if using an externally running IPFS node, set via `config.env`, below.

 ### Data volumes
### Data volumes ### Data volumes

View File

@@ -327,12 +327,14 @@ def init_operation(deploy_command_context, stack, deployer_type, config,
     default_spec_file_content = call_stack_deploy_init(deploy_command_context)
     spec_file_content = {"stack": stack, constants.deploy_to_key: deployer_type}
     if deployer_type == "k8s":
-        if kube_config is None:
-            error_exit("--kube-config must be supplied with --deploy-to k8s")
-        if image_registry is None:
-            error_exit("--image-registry must be supplied with --deploy-to k8s")
-        spec_file_content.update({constants.kube_config_key: kube_config})
-        spec_file_content.update({constants.image_registry_key: image_registry})
+        if kube_config:
+            spec_file_content.update({constants.kube_config_key: kube_config})
+        else:
+            error_exit("--kube-config must be supplied with --deploy-to k8s")
+        if image_registry:
+            spec_file_content.update({constants.image_registry_key: image_registry})
+        else:
+            print("WARNING: --image-registry not specified, only default container registries (eg, Docker Hub) will be available")
     else:
         # Check for --kube-config supplied for non-relevant deployer types
         if kube_config is not None:
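With this change, a k8s deploy init still requires `--kube-config` but only warns when `--image-registry` is omitted. A hypothetical invocation (stack name and exact flag placement assumed):

```bash
laconic-so --stack test deploy --deploy-to k8s init \
  --kube-config ~/.kube/config \
  --output test-spec.yml
# Without --image-registry this now prints a warning instead of exiting.
```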

View File

@@ -29,6 +29,29 @@ def _image_needs_pushed(image: str):
     return image.endswith(":local")


+def remote_image_exists(remote_repo_url: str, local_tag: str):
+    docker = DockerClient()
+    try:
+        remote_tag = remote_tag_for_image(local_tag, remote_repo_url)
+        result = docker.manifest.inspect(remote_tag)
+        return True if result else False
+    except Exception:  # noqa: E722
+        return False
+
+
+def add_tags_to_image(remote_repo_url: str, local_tag: str, *additional_tags):
+    if not additional_tags:
+        return
+    if not remote_image_exists(remote_repo_url, local_tag):
+        raise Exception(f"{local_tag} does not exist in {remote_repo_url}")
+    docker = DockerClient()
+    remote_tag = remote_tag_for_image(local_tag, remote_repo_url)
+    new_remote_tags = [remote_tag_for_image(tag, remote_repo_url) for tag in additional_tags]
+    docker.buildx.imagetools.create(sources=[remote_tag], tags=new_remote_tags)
+
+
 def remote_tag_for_image(image: str, remote_repo_url: str):
     # Turns image tags of the form: foo/bar:local into remote.repo/org/bar:deploy
     major_parts = image.split("/", 2)
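For context, `docker buildx imagetools create` (which the helper above invokes via python_on_whales) re-tags entirely on the registry side, so no image layers are pulled or re-pushed. A manual CLI equivalent, with a hypothetical registry and tags:

```bash
# Attach extra tags to an existing remote manifest without pulling the image.
docker buildx imagetools create \
  --tag registry.example.com/org/test-container:v1.2.3 \
  --tag registry.example.com/org/test-container:stable \
  registry.example.com/org/test-container:deploy
```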

Some files were not shown because too many files have changed in this diff.