Compare commits


364 Commits

Author SHA1 Message Date
39df4683ac Allow payment reuse for same app LRN (#961)
All checks were successful
Lint Checks / Run linter (push) Successful in 33s
Publish / Build and publish (push) Successful in 1m12s
Deploy Test / Run deploy test suite (push) Successful in 4m54s
Smoke Test / Run basic test suite (push) Successful in 3m52s
Webapp Test / Run webapp test suite (push) Successful in 4m38s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 19m6s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m11s
Database Test / Run database hosting test on kind/k8s (push) Successful in 8m51s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m33s
External Stack Test / Run external stack test suite (push) Successful in 4m31s
Part of [Service provider auctions for web deployments](https://www.notion.so/Service-provider-auctions-for-web-deployments-104a6b22d47280dbad51d28aa3a91d75)

Reviewed-on: #961
Reviewed-by: ashwin <ashwin@noreply.git.vdb.to>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-10-29 11:30:03 +00:00
23ca4c4341 Allow payment reuse for application redeployment (#960)
All checks were successful
Lint Checks / Run linter (push) Successful in 39s
Publish / Build and publish (push) Successful in 1m10s
Smoke Test / Run basic test suite (push) Successful in 3m54s
Webapp Test / Run webapp test suite (push) Successful in 4m40s
Deploy Test / Run deploy test suite (push) Successful in 4m51s
Part of [Service provider auctions for web deployments](https://www.notion.so/Service-provider-auctions-for-web-deployments-104a6b22d47280dbad51d28aa3a91d75)

Reviewed-on: #960
Reviewed-by: ashwin <ashwin@noreply.git.vdb.to>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-10-29 06:51:48 +00:00
f64ef5d128 Use file existence for registry mutex (#959)
All checks were successful
Lint Checks / Run linter (push) Successful in 1m1s
Publish / Build and publish (push) Successful in 1m27s
Webapp Test / Run webapp test suite (push) Successful in 4m59s
Smoke Test / Run basic test suite (push) Successful in 4m10s
Deploy Test / Run deploy test suite (push) Successful in 5m33s
Part of [Service provider auctions for web deployments](https://www.notion.so/Service-provider-auctions-for-web-deployments-104a6b22d47280dbad51d28aa3a91d75)

Reviewed-on: #959
Reviewed-by: ashwin <ashwin@noreply.git.vdb.to>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-10-29 04:05:35 +00:00
5f8e809b2d Add mutex lock file path to registry CLI wrapper class (#958)
All checks were successful
Lint Checks / Run linter (push) Successful in 33s
Publish / Build and publish (push) Successful in 1m24s
Deploy Test / Run deploy test suite (push) Successful in 4m53s
Webapp Test / Run webapp test suite (push) Successful in 4m39s
Smoke Test / Run basic test suite (push) Successful in 3m58s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 19m38s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 6m59s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m59s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m49s
External Stack Test / Run external stack test suite (push) Successful in 4m38s
Part of [Service provider auctions for web deployments](https://www.notion.so/Service-provider-auctions-for-web-deployments-104a6b22d47280dbad51d28aa3a91d75)
Follows #957

Reviewed-on: #958
Reviewed-by: ashwin <ashwin@noreply.git.vdb.to>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-10-28 06:03:13 +00:00
4a7df2de33 Use a mutex for registry CLI txs in webapp deployment commands (#957)
All checks were successful
Lint Checks / Run linter (push) Successful in 37s
Publish / Build and publish (push) Successful in 1m19s
Webapp Test / Run webapp test suite (push) Successful in 4m45s
Smoke Test / Run basic test suite (push) Successful in 4m16s
Deploy Test / Run deploy test suite (push) Successful in 4m58s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 19m17s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m33s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m41s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m36s
External Stack Test / Run external stack test suite (push) Successful in 4m43s
Part of [Service provider auctions for web deployments](https://www.notion.so/Service-provider-auctions-for-web-deployments-104a6b22d47280dbad51d28aa3a91d75) and #948

- Add a registry mutex decorator over tx methods in `LaconicRegistryClient` wrapper
- Required to allow multiple processes to run webapp deployment tooling without running into account sequence errors when sending laconicd txs
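
A minimal sketch of the idea, assuming a file-lock based mutex; the decorator name, lock attribute, and locking details here are illustrative, not the actual `LaconicRegistryClient` code:

```python
# Hypothetical sketch of a file-lock mutex decorator for registry tx methods;
# names (registry_mutex, mutex_lock_file) are assumptions for illustration.
import fcntl
from functools import wraps


def registry_mutex(func):
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        # self.mutex_lock_file is assumed to be configured on the wrapper class
        with open(self.mutex_lock_file, "w") as lock_file:
            fcntl.flock(lock_file, fcntl.LOCK_EX)   # block until no other process holds the lock
            try:
                return func(self, *args, **kwargs)  # send the laconicd tx
            finally:
                fcntl.flock(lock_file, fcntl.LOCK_UN)
    return wrapper
```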

Reviewed-on: #957
Reviewed-by: ashwin <ashwin@noreply.git.vdb.to>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-10-25 08:40:54 +00:00
0c47da42fe Integrate SP auctions in webapp deployment flow (#950)
All checks were successful
Lint Checks / Run linter (push) Successful in 39s
Publish / Build and publish (push) Successful in 1m15s
Smoke Test / Run basic test suite (push) Successful in 4m16s
Webapp Test / Run webapp test suite (push) Successful in 4m47s
Deploy Test / Run deploy test suite (push) Successful in 5m2s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 19m41s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m51s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m30s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m54s
External Stack Test / Run external stack test suite (push) Successful in 4m52s
Part of [Service provider auctions for web deployments](https://www.notion.so/Service-provider-auctions-for-web-deployments-104a6b22d47280dbad51d28aa3a91d75) and #948

- Add a command `publish-deployment-auction` to create and publish an app deployment auction
- Add a command `handle-deployment-auction` to handle auctions on deployer side
- Update `request-webapp-deployment` command to allow using an auction id in deployment requests
- Update `deploy-webapp-from-registry` command to handle deployment requests with auction
- Add a command `request-webapp-undeployment` to request an application undeployment

Reviewed-on: #950
Reviewed-by: ashwin <ashwin@noreply.git.vdb.to>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-10-21 07:02:06 +00:00
e290c62aca Pin shiv version to resolve failing CI (#956)
All checks were successful
Lint Checks / Run linter (push) Successful in 41s
Publish / Build and publish (push) Successful in 1m15s
Webapp Test / Run webapp test suite (push) Successful in 5m25s
K8s Deployment Control Test / Run deployment control suite on kind/k8s (push) Successful in 9m4s
Smoke Test / Run basic test suite (push) Successful in 6m11s
Deploy Test / Run deploy test suite (push) Successful in 7m5s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 19m32s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m35s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m42s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m20s
External Stack Test / Run external stack test suite (push) Successful in 4m30s
Part of #955
- Using `shiv` version 1.0.6

Reviewed-on: #956
Reviewed-by: ashwin <ashwin@noreply.git.vdb.to>
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Co-committed-by: Nabarun <nabarun@deepstacksoft.com>
2024-10-17 06:37:32 +00:00
f1fdc48aaa Work around this bug: https://github.com/python/cpython/pull/14064 (#941)
Some checks failed
Lint Checks / Run linter (push) Successful in 38s
Publish / Build and publish (push) Successful in 1m30s
Smoke Test / Run basic test suite (push) Successful in 4m18s
Webapp Test / Run webapp test suite (push) Successful in 5m2s
Deploy Test / Run deploy test suite (push) Successful in 5m20s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 19m7s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Failing after 30s
Database Test / Run database hosting test on kind/k8s (push) Failing after 32s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Failing after 34s
External Stack Test / Run external stack test suite (push) Failing after 31s
Otherwise we sometimes see errors like:

```
cerc-webapp-deployer:   File "/root/.shiv/laconic-so_0f937aa98c2748ef9af8585d6f441dbc01546ace0d6660cbb159d1e5040aeddf/site-packages/stack_orchestrator/deploy/webapp/deploy_webapp_from_registry.py", line 671, in command
cerc-webapp-deployer:     shutil.rmtree(tempdir)
cerc-webapp-deployer:   File "/usr/lib/python3.10/shutil.py", line 725, in rmtree
cerc-webapp-deployer:     _rmtree_safe_fd(fd, path, onerror)
cerc-webapp-deployer:   File "/usr/lib/python3.10/shutil.py", line 681, in _rmtree_safe_fd
cerc-webapp-deployer:     onerror(os.unlink, fullname, sys.exc_info())
cerc-webapp-deployer:   File "/usr/lib/python3.10/shutil.py", line 679, in _rmtree_safe_fd
cerc-webapp-deployer:     os.unlink(entry.name, dir_fd=topfd)
cerc-webapp-deployer: FileNotFoundError: [Errno 2] No such file or directory: 'S.gpg-agent.extra'
```
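
A sketch of the kind of workaround this implies, assuming the fix is to tolerate files that vanish while the temporary directory is being removed (the actual change in this commit may differ):

```python
# Illustrative sketch only: ignore files that disappear mid-removal
# (e.g. gpg-agent sockets), re-raise everything else.
import shutil


def rmtree_ignore_missing(path):
    def on_error(func, target, exc_info):
        if not isinstance(exc_info[1], FileNotFoundError):
            raise exc_info[1]
    shutil.rmtree(path, onerror=on_error)
```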

Reviewed-on: #941
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-08-28 23:17:13 +00:00
a54072de6c Add --config-ref flag. (#939)
All checks were successful
Lint Checks / Run linter (push) Successful in 43s
Publish / Build and publish (push) Successful in 1m15s
Smoke Test / Run basic test suite (push) Successful in 3m55s
Webapp Test / Run webapp test suite (push) Successful in 4m38s
Deploy Test / Run deploy test suite (push) Successful in 4m53s
Database Test / Run database hosting test on kind/k8s (push) Successful in 8m46s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m24s
External Stack Test / Run external stack test suite (push) Successful in 4m32s
Add a flag to re-use config.

Reviewed-on: #939
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-08-28 17:32:52 +00:00
fa21ff2627 Support uploaded config, add 'publish-webapp-deployer' and 'request-webapp-deployment' commands (#938)
All checks were successful
Lint Checks / Run linter (push) Successful in 36s
Publish / Build and publish (push) Successful in 1m6s
Smoke Test / Run basic test suite (push) Successful in 3m53s
Webapp Test / Run webapp test suite (push) Successful in 4m33s
Deploy Test / Run deploy test suite (push) Successful in 4m39s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m10s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m25s
This adds two new commands: `publish-webapp-deployer` and `request-webapp-deployment`.

`publish-webapp-deployer` creates a `WebappDeployer` record, which provides requestors with information such as the API URL, minimum required payment, payment address, and the public key to use for encrypting config.

```
$ laconic-so publish-deployer-to-registry \
  --laconic-config ~/.laconic/laconic.yml \
  --api-url https://webapp-deployer-api.dev.vaasl.io \
  --public-key-file webapp-deployer-api.dev.vaasl.io.pgp.pub  \
  --lrn lrn://laconic/deployers/webapp-deployer-api.dev.vaasl.io  \
  --min-required-payment 100000
```

`request-webapp-deployment` simplifies publishing a `WebappDeploymentRequest` and can also handle automatic payment, as well as encryption and upload of configuration.

```
$ laconic-so request-webapp-deployment \
  --laconic-config ~/.laconic/laconic.yml \
  --deployer lrn://laconic/deployers/webapp-deployer-api.dev.vaasl.io \
  --app lrn://cerc-io/applications/webapp-hello-world@0.1.3 \
  --env-file ~/yaml/hello.env \
  --make-payment auto
```

Related changes are included in the deploy/undeploy commands for decrypting and using config, using the payment address from the `WebappDeployer` record, etc.

Reviewed-on: #938
2024-08-27 19:55:06 +00:00
33d395e213 Add package registry stack instructions (#937)
All checks were successful
Lint Checks / Run linter (push) Successful in 36s
Publish / Build and publish (push) Successful in 1m9s
Smoke Test / Run basic test suite (push) Successful in 4m0s
Webapp Test / Run webapp test suite (push) Successful in 4m33s
Deploy Test / Run deploy test suite (push) Successful in 4m55s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m59s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m42s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m52s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m37s
External Stack Test / Run external stack test suite (push) Successful in 4m46s
- The instructions to `Deploy Gitea Package Registry` from build-support [readme](https://git.vdb.to/deep-stack/stack-orchestrator/src/branch/pm-update-registry-steps/stack_orchestrator/data/stacks/build-support#2-deploy-gitea-package-registry) don't seem to be in a working state
- Updated `package-registry` stack instructions to use deployment pattern

Reviewed-on: #937
Reviewed-by: ashwin <ashwin@noreply.git.vdb.to>
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-08-23 09:42:44 +00:00
75ff60752a Require payment for app deployment requests. (#928)
All checks were successful
Lint Checks / Run linter (push) Successful in 35s
Publish / Build and publish (push) Successful in 1m18s
Smoke Test / Run basic test suite (push) Successful in 3m58s
Webapp Test / Run webapp test suite (push) Successful in 4m45s
Deploy Test / Run deploy test suite (push) Successful in 5m10s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m5s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m19s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m33s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m44s
External Stack Test / Run external stack test suite (push) Successful in 4m39s
Adds three new options for deployment/undeployment:

```
    "--min-required-payment",
    help="Requests must have a minimum payment to be processed",

    "--payment-address",
    help="The address to which payments should be made.  Default is the current laconic account.",

    "--all-requests",
    help="Handle requests addressed to anyone (by default only requests to my payment address are examined).",
```

In this mode, requests should be designated for a particular address using the `to` attribute and include a `payment` attribute containing the tx hash of the payment.

The deployer will confirm the payment (to the right account, right amount, not used before, etc.) and then proceed with the deployment or undeployment.
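
For illustration only, the checks described above amount to something like the following; the names and the transaction fields are assumptions, not the deployer's actual code:

```python
# Hypothetical sketch of validating the `payment` tx referenced by a request.
def payment_is_valid(tx, payment_address, min_required_payment, used_tx_hashes):
    if tx is None:                                  # tx hash not found on chain
        return False
    if tx["recipient"] != payment_address:          # paid to the right account?
        return False
    if int(tx["amount"]) < min_required_payment:    # paid at least the minimum?
        return False
    if tx["hash"] in used_tx_hashes:                # not reused from an earlier request?
        return False
    return True
```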

Reviewed-on: #928
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-08-21 14:39:20 +00:00
44b9709717 Use Laconic version of ping-pub (#930)
All checks were successful
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m33s
Lint Checks / Run linter (push) Successful in 49s
Publish / Build and publish (push) Successful in 1m23s
Smoke Test / Run basic test suite (push) Successful in 4m32s
Deploy Test / Run deploy test suite (push) Successful in 5m20s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m34s
Webapp Test / Run webapp test suite (push) Successful in 5m9s
External Stack Test / Run external stack test suite (push) Successful in 4m34s
Database Test / Run database hosting test on kind/k8s (push) Successful in 8m58s
Reviewed-on: #930
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-08-20 17:44:00 +00:00
e56da7dcc1 Add support for k8s pod to node affinity and taint toleration (#917)
All checks were successful
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m12s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m24s
Lint Checks / Run linter (push) Successful in 38s
Publish / Build and publish (push) Successful in 1m15s
Smoke Test / Run basic test suite (push) Successful in 4m40s
Webapp Test / Run webapp test suite (push) Successful in 5m5s
Deploy Test / Run deploy test suite (push) Successful in 5m42s
K8s Deployment Control Test / Run deployment control suite on kind/k8s (push) Successful in 6m16s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m22s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m30s
External Stack Test / Run external stack test suite (push) Successful in 4m31s
Reviewed-on: #917
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-08-15 20:32:58 +00:00
60d34217f8 More logging for webapp deployment (#923)
All checks were successful
Lint Checks / Run linter (push) Successful in 37s
Publish / Build and publish (push) Successful in 1m11s
Smoke Test / Run basic test suite (push) Successful in 3m57s
Webapp Test / Run webapp test suite (push) Successful in 4m31s
Deploy Test / Run deploy test suite (push) Successful in 4m50s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m14s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m37s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m28s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 4m22s
External Stack Test / Run external stack test suite (push) Successful in 5m5s
```
cerc-webapp-deployer: ############ DEPLOY #############
cerc-webapp-deployer: 2024-08-15 02:13:08.321991 -  - 0:00:00.000031 (step): Discovering deployment requests...
cerc-webapp-deployer: laconic -c /etc/config/laconic.yml registry record list --all --type ApplicationDeploymentRequest
cerc-webapp-deployer: 2024-08-15 02:13:08.815428 -  - 0:00:00.493420 (step): Loading known requests from /srv/deployments/autodeploy.state...
cerc-webapp-deployer: 2024-08-15 02:13:08.815626 -  - 0:00:00.000158 (step): BEGIN: Examining request bafyreigiltcdscwt7rqldnilo4ohrhgoulrlfceixde5ycewsym64sefgi
cerc-webapp-deployer: 2024-08-15 02:13:08.815645 -  - 0:00:00.000008 (step): Skipping request bafyreigiltcdscwt7rqldnilo4ohrhgoulrlfceixde5ycewsym64sefgi, we've already seen it.
cerc-webapp-deployer: 2024-08-15 02:13:08.815653 -  - 0:00:00.000005 (step): DONE Examining request bafyreigiltcdscwt7rqldnilo4ohrhgoulrlfceixde5ycewsym64sefgi with result SKIP.
cerc-webapp-deployer: 2024-08-15 02:13:08.815664 -  - 0:00:00.000005 (step): BEGIN: Examining request bafyreicoxippgdwab6cz72py4rgv63rvvbsea73y62hashlhqpcsxyfkue
cerc-webapp-deployer: 2024-08-15 02:13:08.815674 -  - 0:00:00.000006 (step): Skipping request bafyreicoxippgdwab6cz72py4rgv63rvvbsea73y62hashlhqpcsxyfkue, we've already seen it.
cerc-webapp-deployer: 2024-08-15 02:13:08.815684 -  - 0:00:00.000004 (step): DONE Examining request bafyreicoxippgdwab6cz72py4rgv63rvvbsea73y62hashlhqpcsxyfkue with result SKIP.
cerc-webapp-deployer: 2024-08-15 02:13:08.815692 -  - 0:00:00.000005 (step): BEGIN: Examining request bafyreih3gt44pvahnbg7ag26mlk3iie4s5m5znhygajja5dcovheti72ne
cerc-webapp-deployer: 2024-08-15 02:13:08.815705 -  - 0:00:00.000007 (step): Skipping request bafyreih3gt44pvahnbg7ag26mlk3iie4s5m5znhygajja5dcovheti72ne, we've already seen it.
cerc-webapp-deployer: 2024-08-15 02:13:08.815714 -  - 0:00:00.000005 (step): DONE Examining request bafyreih3gt44pvahnbg7ag26mlk3iie4s5m5znhygajja5dcovheti72ne with result SKIP.
cerc-webapp-deployer: 2024-08-15 02:13:08.815724 -  - 0:00:00.000004 (step): BEGIN: Examining request bafyreigjnbio47rug6x5tufzc6cwfcqpl3ck3xldzotrlz5bt663dh2pua
cerc-webapp-deployer: 2024-08-15 02:13:08.815733 -  - 0:00:00.000005 (step): Skipping request bafyreigjnbio47rug6x5tufzc6cwfcqpl3ck3xldzotrlz5bt663dh2pua, we've already seen it.
cerc-webapp-deployer: 2024-08-15 02:13:08.815743 -  - 0:00:00.000005 (step): DONE Examining request bafyreigjnbio47rug6x5tufzc6cwfcqpl3ck3xldzotrlz5bt663dh2pua with result SKIP.
cerc-webapp-deployer: 2024-08-15 02:13:08.815751 -  - 0:00:00.000004 (step): BEGIN: Examining request bafyreihsfno4s6lkxcp5a7g7pjj7kklrp3xaqo57mr2pz76nk3h4jukayy
cerc-webapp-deployer: 2024-08-15 02:13:08.815761 -  - 0:00:00.000006 (step): Skipping request bafyreihsfno4s6lkxcp5a7g7pjj7kklrp3xaqo57mr2pz76nk3h4jukayy, we've already seen it.
cerc-webapp-deployer: 2024-08-15 02:13:08.815770 -  - 0:00:00.000005 (step): DONE Examining request bafyreihsfno4s6lkxcp5a7g7pjj7kklrp3xaqo57mr2pz76nk3h4jukayy with result SKIP.
cerc-webapp-deployer: 2024-08-15 02:13:08.815779 -  - 0:00:00.000005 (step): BEGIN: Examining request bafyreicyfyj4ncmtuy5pain2rvc67v645cg2bbsiakizvhdiwvkx7asvdy
cerc-webapp-deployer: 2024-08-15 02:13:08.815791 -  - 0:00:00.000007 (step): Skipping request bafyreicyfyj4ncmtuy5pain2rvc67v645cg2bbsiakizvhdiwvkx7asvdy, we've already seen it.
cerc-webapp-deployer: 2024-08-15 02:13:08.815800 -  - 0:00:00.000004 (step): DONE Examining request bafyreicyfyj4ncmtuy5pain2rvc67v645cg2bbsiakizvhdiwvkx7asvdy with result SKIP.
cerc-webapp-deployer: 2024-08-15 02:13:08.815808 -  - 0:00:00.000004 (step): Discovering existing app deployments...
cerc-webapp-deployer: laconic -c /etc/config/laconic.yml registry record list --all --type ApplicationDeploymentRecord
cerc-webapp-deployer: 2024-08-15 02:13:09.330655 -  - 0:00:00.514858 (step): Discovering deployment removal and cancellation requests...
cerc-webapp-deployer: laconic -c /etc/config/laconic.yml registry record list --all --type ApplicationDeploymentRemovalRequest
cerc-webapp-deployer: 2024-08-15 02:13:09.825145 -  - 0:00:00.494460 (step): Found 0 unsatisfied request(s) to process.
cerc-webapp-deployer: ############ DEPLOY SUCCESS #############
```

Reviewed-on: #923
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-08-15 02:57:47 +00:00
952389abb0 Add option to recreate deployments rather than update them. (#920)
All checks were successful
Lint Checks / Run linter (push) Successful in 48s
Publish / Build and publish (push) Successful in 1m21s
Smoke Test / Run basic test suite (push) Successful in 4m46s
Webapp Test / Run webapp test suite (push) Successful in 5m16s
Deploy Test / Run deploy test suite (push) Successful in 5m41s
cherry-pick from #912

Reviewed-on: #920
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
2024-08-14 20:14:40 +00:00
5c275aa622 Defensively handle errors examining app requests. (#922)
All checks were successful
Lint Checks / Run linter (push) Successful in 34s
Publish / Build and publish (push) Successful in 1m7s
Smoke Test / Run basic test suite (push) Successful in 4m0s
Webapp Test / Run webapp test suite (push) Successful in 4m42s
Deploy Test / Run deploy test suite (push) Successful in 4m58s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m2s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m44s
External Stack Test / Run external stack test suite (push) Successful in 4m26s
Related to cerc-io/webapp-deployment-status-api#10

There are two issues here. One is that the output has probably changed recently, whether in the client or the server, when no matching record is found by ID. (Note this is specific to `laconic record get --id <v>` and does not seem to apply to the similar command for retrieving a record by name, `laconic name resolve <n>`.)

Rather than returning `[]` it now returns `[ null ]`. This caused us to think there *was* an application record found, and we attempted to treat the `null` entry like an Application object. That's fixed by filtering out null responses, which is a good precaution for the deployer, though I think it makes sense to ask whether this new client/server behavior is correct. It seems suspicious.
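
The null-filtering amounts to something like this sketch (variable names are illustrative):

```python
# Sketch: drop null entries returned by `laconic record get --id <v>`
raw_records = [None]   # what the client now returns when nothing matches
records = [r for r in raw_records if r is not None]
if not records:
    # no application record was actually found, so skip this request
    pass
```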

The other issue is that all the defensive checks we had in place to deal with broken/bad AppDeploymentRequests were around the _build_.  This error was coming much earlier, merely when parsing and examining the request to see if it needed to be handled at all.

I have now added similar defensive error handling around that portion of the code.

Reviewed-on: #922
Reviewed-by: zramsay <zramsay@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-08-14 18:04:31 +00:00
8576137557 Convert port to string. (#919)
All checks were successful
Lint Checks / Run linter (push) Successful in 40s
Publish / Build and publish (push) Successful in 1m19s
Smoke Test / Run basic test suite (push) Successful in 4m15s
Webapp Test / Run webapp test suite (push) Successful in 4m41s
Deploy Test / Run deploy test suite (push) Successful in 4m59s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m12s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m48s
The `str` type check doesn't work if the port is a `ruamel.yaml.scalarstring.SingleQuotedScalarString` or a `ruamel.yaml.scalarstring.DoubleQuotedScalarString`.
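
A minimal sketch of the fix implied here, assuming the port is simply coerced to a plain string whatever YAML type it parsed as:

```python
# Illustrative only: normalize a port value parsed by ruamel.yaml, whether it is
# an int, a plain str, or a Single/DoubleQuotedScalarString, to a plain string.
def normalize_port(port):
    return str(port)

# normalize_port(3000) == "3000"
# normalize_port(SingleQuotedScalarString("3000")) == "3000"
```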

Reviewed-on: #919
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-08-14 00:25:35 +00:00
65c1cdf6b1 Fix crash if port has int type in yaml (#918)
All checks were successful
Lint Checks / Run linter (push) Successful in 38s
Publish / Build and publish (push) Successful in 1m18s
Deploy Test / Run deploy test suite (push) Successful in 4m36s
Webapp Test / Run webapp test suite (push) Successful in 4m24s
Smoke Test / Run basic test suite (push) Successful in 3m52s
Reviewed-on: #918
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-08-13 20:47:09 +00:00
265699bc38 Allow to disable kind cluster management for testing (#915)
All checks were successful
Lint Checks / Run linter (push) Successful in 37s
Publish / Build and publish (push) Successful in 1m9s
Smoke Test / Run basic test suite (push) Successful in 4m23s
Webapp Test / Run webapp test suite (push) Successful in 4m38s
Deploy Test / Run deploy test suite (push) Successful in 5m0s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m49s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m37s
External Stack Test / Run external stack test suite (push) Successful in 4m54s
Reviewed-on: #915
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-08-13 17:48:14 +00:00
4a7670a5d6 Open the json-rpc port (#916)
Some checks failed
Lint Checks / Run linter (push) Has been cancelled
Deploy Test / Run deploy test suite (push) Has been cancelled
Publish / Build and publish (push) Has been cancelled
Webapp Test / Run webapp test suite (push) Has been cancelled
Smoke Test / Run basic test suite (push) Has been cancelled
Reviewed-on: #916
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-08-13 17:47:56 +00:00
6087e1cd31 Copy config under a volume for Docker (similar to a ConfigMap for K8S). (#914)
All checks were successful
Lint Checks / Run linter (push) Successful in 42s
Publish / Build and publish (push) Successful in 1m15s
Deploy Test / Run deploy test suite (push) Successful in 4m43s
Webapp Test / Run webapp test suite (push) Successful in 4m40s
Smoke Test / Run basic test suite (push) Successful in 3m49s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m9s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m33s
External Stack Test / Run external stack test suite (push) Successful in 4m34s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m43s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m19s
This emulates the K8S ConfigMap behavior on Docker by using a regular volume.

If a directory exists under `config/` which matches a named volume, the contents will be copied to the volume on `create` (provided the destination volume is empty).  That is, rather than a ConfigMap, it is essentially a "config volume".

For example, with a compose file like:

```
version: '3.7'
services:
  caddy:
    image: cerc/caddy-ethcache:local
    restart: always
    volumes:
      - caddyconfig:/etc/caddy:ro
volumes:
  caddyconfig:
```

And a directory:

```
❯ ls stack-orchestrator/config/caddyconfig/
Caddyfile
```

After `laconic-so deploy create --spec-file caddy.yml --deployment-dir /srv/caddy` there will be:

```
❯ ls /srv/caddy/data/caddyconfig
Caddyfile
```

Mounted at `/etc/caddy`
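
The copy-on-create behavior described above can be pictured roughly as follows; the paths and helper name are illustrative, not the actual stack-orchestrator implementation:

```python
# Rough sketch: copy config/<volume_name> from the stack into the named volume's
# host directory on `create`, but only if the destination is still empty.
import os
import shutil


def populate_config_volume(stack_config_dir, volume_name, volume_host_dir):
    src = os.path.join(stack_config_dir, volume_name)
    if not os.path.isdir(src):
        return                          # no matching directory under config/
    if os.listdir(volume_host_dir):
        return                          # destination volume already has content
    shutil.copytree(src, volume_host_dir, dirs_exist_ok=True)
```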

Reviewed-on: #914
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-08-10 02:32:21 +00:00
1def279d26 Support multiple NodePorts, static NodePort mapping, and add 'replicas' spec option (#913)
All checks were successful
Lint Checks / Run linter (push) Successful in 33s
Publish / Build and publish (push) Successful in 1m7s
Smoke Test / Run basic test suite (push) Successful in 3m51s
Webapp Test / Run webapp test suite (push) Successful in 4m30s
Deploy Test / Run deploy test suite (push) Successful in 4m42s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 12m52s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m27s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m35s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m54s
External Stack Test / Run external stack test suite (push) Successful in 5m19s
NodePort example:

```
network:
  ports:
    caddy:
     - 1234
     - 32020:2020
```

Replicas example:

```
replicas: 2
```

This also adds an optimization for k8s where, if a directory matching the name of a configmap exists beneath `config/` in the stack, its contents will be copied into the corresponding configmap.

For example:

```
# Config files in the stack
❯ ls stack-orchestrator/config/caddyconfig
Caddyfile  Caddyfile.one-req-per-upstream-example

# ConfigMap in the spec
❯ cat foo.yml | grep config
...
configmaps:
  caddyconfig: ./configmaps/caddyconfig

# Create the deployment
❯ laconic-so --stack ~/cerc/caddy-ethcache/stack-orchestrator/stacks/caddy-ethcache deploy create --spec-file foo.yml

# The files from beneath config/<config_map_name> have been copied to the ConfigMap directory from the spec.
❯ ls deployment-001/configmaps/caddyconfig
Caddyfile  Caddyfile.one-req-per-upstream-example
```

Reviewed-on: #913
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-08-09 02:32:06 +00:00
64691bd206 Merge pull request 'Allow gentx-files to be omitted' (#911) from dboreham/allow-zero-gentx into main
All checks were successful
Lint Checks / Run linter (push) Successful in 43s
Publish / Build and publish (push) Successful in 1m21s
Deploy Test / Run deploy test suite (push) Successful in 5m20s
Webapp Test / Run webapp test suite (push) Successful in 5m0s
Smoke Test / Run basic test suite (push) Successful in 4m18s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m5s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m22s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m15s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m40s
External Stack Test / Run external stack test suite (push) Successful in 4m29s
Reviewed-on: #911
2024-08-07 20:13:40 +00:00
aef5986135 Allow gentx-files to be omitted
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 34s
Webapp Test / Run webapp test suite (pull_request) Successful in 5m3s
Smoke Test / Run basic test suite (pull_request) Successful in 4m36s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m13s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 8m9s
2024-08-07 14:11:06 -06:00
6f8f0340d3 Merge pull request 'Add stage 1 support' (#910) from dboreham/stage1-support into main
All checks were successful
Lint Checks / Run linter (push) Successful in 38s
Publish / Build and publish (push) Successful in 1m19s
Webapp Test / Run webapp test suite (push) Successful in 5m1s
Deploy Test / Run deploy test suite (push) Successful in 5m42s
Smoke Test / Run basic test suite (push) Successful in 4m32s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m23s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 4m9s
External Stack Test / Run external stack test suite (push) Successful in 4m52s
Reviewed-on: #910
2024-08-07 17:44:28 +00:00
7590d6e237 Add stage 1 support
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 37s
Webapp Test / Run webapp test suite (pull_request) Successful in 6m10s
Smoke Test / Run basic test suite (pull_request) Successful in 6m9s
Deploy Test / Run deploy test suite (pull_request) Successful in 7m7s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 9m57s
2024-08-07 11:28:10 -06:00
573f99dbbe Listen on 0.0.0.0 (#909)
All checks were successful
Lint Checks / Run linter (push) Successful in 45s
Publish / Build and publish (push) Successful in 1m18s
Smoke Test / Run basic test suite (push) Successful in 4m8s
Webapp Test / Run webapp test suite (push) Successful in 4m46s
Deploy Test / Run deploy test suite (push) Successful in 4m59s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m41s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m42s
External Stack Test / Run external stack test suite (push) Successful in 4m30s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m30s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m58s
Reviewed-on: #909
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-08-02 14:06:06 +00:00
8052c1c25e Merge pull request 'Laconicd needs to be told its currency' (#908) from dboreham/mainnet-laconic-specify-currency into main
All checks were successful
Lint Checks / Run linter (push) Successful in 34s
Publish / Build and publish (push) Successful in 1m13s
Deploy Test / Run deploy test suite (push) Successful in 4m59s
Webapp Test / Run webapp test suite (push) Successful in 4m53s
Smoke Test / Run basic test suite (push) Successful in 4m8s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 12m49s
Reviewed-on: #908
2024-08-02 03:10:34 +00:00
a674d13493 Laconicd needs to be told its currency
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 34s
Smoke Test / Run basic test suite (pull_request) Successful in 4m23s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m57s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m17s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 8m7s
2024-08-01 21:09:30 -06:00
0d4f4509c8 Remove Eth fixturenet workflows (#906)
All checks were successful
Lint Checks / Run linter (push) Successful in 41s
Publish / Build and publish (push) Successful in 1m14s
Smoke Test / Run basic test suite (push) Successful in 4m21s
Webapp Test / Run webapp test suite (push) Successful in 4m37s
Deploy Test / Run deploy test suite (push) Successful in 4m49s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m8s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m39s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m43s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m34s
External Stack Test / Run external stack test suite (push) Successful in 4m31s
Deletes the now-failing CI workflows for the old `fixturenet-eth` and `fixturenet-plugeth` stacks.

Part of #905.

Reviewed-on: #906
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
2024-08-01 02:28:05 +00:00
5af27b1b3a Merge pull request 'Fix for sh as shell not bash' (#907) from dboreham/fix-script-for-ubuntu into main
All checks were successful
Lint Checks / Run linter (push) Successful in 49s
Publish / Build and publish (push) Successful in 1m31s
Smoke Test / Run basic test suite (push) Successful in 4m13s
Deploy Test / Run deploy test suite (push) Successful in 5m8s
Webapp Test / Run webapp test suite (push) Successful in 4m55s
Reviewed-on: #907
2024-07-31 20:39:37 +00:00
6c91b87348 Fix for sh as shell not bash
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 32s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m39s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 7m55s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m59s
Smoke Test / Run basic test suite (pull_request) Successful in 4m30s
2024-07-31 14:30:53 -06:00
7d18334953 Mainnet-laconic stack fixes for laconicd2 (#904)
Some checks failed
Lint Checks / Run linter (push) Successful in 39s
Publish / Build and publish (push) Successful in 1m13s
Smoke Test / Run basic test suite (push) Successful in 4m7s
Webapp Test / Run webapp test suite (push) Successful in 4m35s
Deploy Test / Run deploy test suite (push) Successful in 4m58s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 9m58s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 10m55s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m44s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m15s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m52s
External Stack Test / Run external stack test suite (push) Successful in 4m48s
Reviewed-on: #904
2024-07-31 13:51:28 +00:00
79c1c5ed99 Update fixturenet-laconicd stack to use alnt denom (#902)
All checks were successful
Lint Checks / Run linter (push) Successful in 45s
Publish / Build and publish (push) Successful in 1m10s
Deploy Test / Run deploy test suite (push) Successful in 5m26s
Smoke Test / Run basic test suite (push) Successful in 4m5s
Webapp Test / Run webapp test suite (push) Successful in 4m49s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m32s
Part of [laconicd testnet validator enrollment](https://www.notion.so/laconicd-testnet-validator-enrollment-6fc1d3cafcc64fef8c5ed3affa27c675)

Reviewed-on: #902
Reviewed-by: ashwin <ashwin@noreply.git.vdb.to>
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-07-31 13:27:54 +00:00
dfedd9e9ff rename laconic-sdk to registry-sdk (#897)
Some checks failed
Lint Checks / Run linter (push) Successful in 38s
Publish / Build and publish (push) Successful in 1m21s
Deploy Test / Run deploy test suite (push) Successful in 4m53s
Smoke Test / Run basic test suite (push) Successful in 3m47s
Webapp Test / Run webapp test suite (push) Successful in 4m41s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Failing after 12m28s
Co-authored-by: zramsay <zach@bluecollarcoding.ca>
Reviewed-on: #897
Reviewed-by: ashwin <ashwin@noreply.git.vdb.to>
Co-authored-by: zramsay <zramsay@noreply.git.vdb.to>
Co-committed-by: zramsay <zramsay@noreply.git.vdb.to>
2024-07-31 08:01:02 +00:00
f64683f03b Use a more flexible mechanism to inject config into next.config.js for runtime env. (#901)
All checks were successful
Lint Checks / Run linter (push) Successful in 32s
Publish / Build and publish (push) Successful in 1m11s
Smoke Test / Run basic test suite (push) Successful in 4m1s
Webapp Test / Run webapp test suite (push) Successful in 4m39s
Deploy Test / Run deploy test suite (push) Successful in 4m44s
Instead of attempting to rewrite the nextConfig file directly, inject a helper function that adds the config we need.

Reviewed-on: #901
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-07-31 03:22:23 +00:00
913c3a8557 Back to v2 now that we have a working webapp deployer build again. (#896)
Some checks failed
Lint Checks / Run linter (push) Successful in 46s
Publish / Build and publish (push) Successful in 1m27s
Smoke Test / Run basic test suite (push) Successful in 4m2s
Deploy Test / Run deploy test suite (push) Successful in 5m10s
Webapp Test / Run webapp test suite (push) Successful in 4m40s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 12m47s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 10m31s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 11m40s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m40s
Database Test / Run database hosting test on kind/k8s (push) Successful in 11m4s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 4m7s
External Stack Test / Run external stack test suite (push) Successful in 5m25s
Reviewed-on: #896
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-07-27 18:59:42 +00:00
2f5b0cdd13 Revert recent laconicd deployment changes to restore production webapp deployer function. (#895)
All checks were successful
Lint Checks / Run linter (push) Successful in 49s
Publish / Build and publish (push) Successful in 1m26s
Smoke Test / Run basic test suite (push) Successful in 4m31s
Webapp Test / Run webapp test suite (push) Successful in 4m55s
Deploy Test / Run deploy test suite (push) Successful in 5m22s
Database Test / Run database hosting test on kind/k8s (push) Successful in 8m42s
Reviewed-on: #895
2024-07-27 17:04:03 +00:00
432bd4113d 881: Support next.config.mjs (#890)
Some checks failed
Lint Checks / Run linter (push) Successful in 49s
Publish / Build and publish (push) Successful in 1m24s
Deploy Test / Run deploy test suite (push) Successful in 5m49s
Webapp Test / Run webapp test suite (push) Successful in 5m7s
Smoke Test / Run basic test suite (push) Successful in 4m44s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m3s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m41s
External Stack Test / Run external stack test suite (push) Successful in 4m42s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m37s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 7m28s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 11m11s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m4s
Reviewed-on: #890
2024-07-25 16:47:17 +00:00
b26698b756 Update fixturenet-laconicd stack for renaming changes (#891)
Some checks failed
Lint Checks / Run linter (push) Successful in 49s
Publish / Build and publish (push) Successful in 1m31s
Deploy Test / Run deploy test suite (push) Successful in 5m54s
Webapp Test / Run webapp test suite (push) Successful in 5m10s
Smoke Test / Run basic test suite (push) Successful in 4m34s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m16s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m47s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h7m0s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h6m59s
Part of [Rename laconic2d to laconicd](https://www.notion.so/Rename-laconic2d-to-laconicd-9028d0c020d24d1288e92ebcb773d7a7)
Handles #882, #889

Reviewed-on: #891
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-07-25 08:50:15 +00:00
01deac78c4 880: Support new compile/generate syntax for next >=14.2.0 (#886)
All checks were successful
Lint Checks / Run linter (push) Successful in 34s
Publish / Build and publish (push) Successful in 1m17s
Deploy Test / Run deploy test suite (push) Successful in 4m34s
Smoke Test / Run basic test suite (push) Successful in 3m50s
Webapp Test / Run webapp test suite (push) Successful in 4m39s
Fix for #880 to support the new `next` compile/generate syntax.

Reviewed-on: #886
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-07-24 20:20:16 +00:00
40ccd47857 Merge pull request 'fixes for deployer & SP documentation' (#887) from zach/cns-to-registry into main
Some checks failed
Lint Checks / Run linter (push) Successful in 40s
Publish / Build and publish (push) Successful in 1m18s
Smoke Test / Run basic test suite (push) Successful in 3m45s
Webapp Test / Run webapp test suite (push) Successful in 4m36s
Deploy Test / Run deploy test suite (push) Successful in 4m44s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Has started running
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m17s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h10m59s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h11m0s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m37s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m50s
External Stack Test / Run external stack test suite (push) Successful in 4m31s
Reviewed-on: #887
2024-07-24 00:29:14 +00:00
zramsay
80cff73344 crn --> lrn
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 45s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m6s
Smoke Test / Run basic test suite (pull_request) Successful in 4m23s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m48s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 7m43s
2024-07-23 20:20:01 -04:00
zramsay
21b1270d27 fix lint
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 36s
Smoke Test / Run basic test suite (pull_request) Successful in 4m19s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m56s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m9s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 8m46s
2024-07-23 20:16:16 -04:00
zramsay
008389dcd8 cns --> registry and other fixes
Some checks failed
Lint Checks / Run linter (pull_request) Failing after 33s
Smoke Test / Run basic test suite (pull_request) Successful in 4m12s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m44s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m4s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 7m49s
2024-07-23 20:10:06 -04:00
c81fb9581a Fix stack path check (#877)
Some checks failed
Lint Checks / Run linter (push) Successful in 37s
Publish / Build and publish (push) Successful in 1m21s
Webapp Test / Run webapp test suite (push) Successful in 4m47s
Smoke Test / Run basic test suite (push) Successful in 4m37s
Deploy Test / Run deploy test suite (push) Successful in 5m29s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m16s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Failing after 3h12m0s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h11m0s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h11m0s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m42s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m44s
External Stack Test / Run external stack test suite (push) Successful in 4m34s
Reviewed-on: #877
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-07-19 17:16:40 +00:00
83397bbae4 Enable cors in laconicd http services (#875)
Some checks failed
Webapp Test / Run webapp test suite (push) Successful in 4m33s
Smoke Test / Run basic test suite (push) Successful in 4m2s
Lint Checks / Run linter (push) Successful in 37s
Publish / Build and publish (push) Successful in 1m17s
Deploy Test / Run deploy test suite (push) Successful in 4m58s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m6s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m53s
External Stack Test / Run external stack test suite (push) Successful in 4m42s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m19s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m16s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h13m59s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h14m0s
Reviewed-on: #875
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-07-15 05:23:18 +00:00
17c21464ab Merge pull request 'Work around explorer host name sensitivity' (#874) from dboreham/fix-explorer-testnet-hostname into main
All checks were successful
Lint Checks / Run linter (push) Successful in 34s
Publish / Build and publish (push) Successful in 1m13s
Deploy Test / Run deploy test suite (push) Successful in 4m52s
Webapp Test / Run webapp test suite (push) Successful in 5m3s
Smoke Test / Run basic test suite (push) Successful in 4m20s
Reviewed-on: #874
2024-07-15 02:56:33 +00:00
7fb9ccdfd8 Work around explorer host name sensitivity
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 35s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m9s
Smoke Test / Run basic test suite (pull_request) Successful in 4m48s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m58s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 8m29s
2024-07-14 20:51:11 -06:00
2bad59dfcd Add missing file to explorer container (#873)
All checks were successful
Lint Checks / Run linter (push) Successful in 39s
Publish / Build and publish (push) Successful in 1m21s
Webapp Test / Run webapp test suite (push) Successful in 4m47s
Smoke Test / Run basic test suite (push) Successful in 4m10s
Deploy Test / Run deploy test suite (push) Successful in 5m11s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m45s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m44s
External Stack Test / Run external stack test suite (push) Successful in 4m45s
Reviewed-on: #873
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-07-14 17:40:35 +00:00
13d04e9256 Integrate ping-pub explorer (#872)
Some checks failed
Lint Checks / Run linter (push) Successful in 31s
Publish / Build and publish (push) Successful in 1m34s
Deploy Test / Run deploy test suite (push) Successful in 5m27s
Webapp Test / Run webapp test suite (push) Successful in 5m11s
Smoke Test / Run basic test suite (push) Successful in 4m26s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m24s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m39s
External Stack Test / Run external stack test suite (push) Successful in 4m41s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m32s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m30s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h5m0s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h5m0s
Reviewed-on: #872
2024-07-13 14:24:23 +00:00
1f017c9066 increase CERC_MAX_GENERATE_TIME for webapps (#857)
Some checks failed
Lint Checks / Run linter (push) Successful in 35s
Publish / Build and publish (push) Successful in 1m10s
Smoke Test / Run basic test suite (push) Successful in 3m53s
Webapp Test / Run webapp test suite (push) Successful in 4m32s
Deploy Test / Run deploy test suite (push) Successful in 4m46s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m37s
External Stack Test / Run external stack test suite (push) Successful in 4m33s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h1m0s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h1m0s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m23s
sort of addresses #856

Co-authored-by: zramsay <zach@bluecollarcoding.ca>
Co-authored-by: David Boreham <dboreham@noreply.git.vdb.to>
Reviewed-on: #857
Co-authored-by: zramsay <zramsay@noreply.git.vdb.to>
Co-committed-by: zramsay <zramsay@noreply.git.vdb.to>
2024-07-12 19:01:35 +00:00
3b9422095c Add support for bun as a webapp package manager (#800)
Some checks failed
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m28s
Lint Checks / Run linter (push) Successful in 39s
Publish / Build and publish (push) Successful in 1m13s
Deploy Test / Run deploy test suite (push) Successful in 4m41s
Smoke Test / Run basic test suite (push) Successful in 3m47s
Webapp Test / Run webapp test suite (push) Successful in 4m27s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h1m0s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h1m0s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m7s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 4m5s
External Stack Test / Run external stack test suite (push) Successful in 4m37s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m36s
This builds on pull request "[Add support for pnpm as a webapp build tool. #767](https://git.vdb.to/cerc-io/stack-orchestrator/pulls/767/files)", which adds `pnpm` package manager support for `nextjs` & `webapps`.

`bun`'s default build output directory (defined as `CERC_BUILD_OUTPUT_DIR`) is `dist`, which should already be handled by the `pnpm` support in the previously mentioned [pull request](https://git.vdb.to/cerc-io/stack-orchestrator/pulls/767/files).

Installing `bun` using `npm` following our previous `pnpm` installation

```zsh
npm install -g bun
```

We use `bun` as a package manager that works with `Node.js` projects, as described in bun's [docs](https://bun.sh/docs/cli/install):

> The bun CLI contains a Node.js-compatible package manager designed to be a dramatically faster replacement for npm, yarn, and pnpm. It's a standalone tool that will work in pre-existing Node.js projects; if your project has a package.json, bun install can help you speed up your workflow.

To test `next.js` apps using `node.js` and compatibility with all four package managers -- `npm`, `yarn`, `pnpm`, and `bun` -- use the branches of snowball's [nextjs-package-manager-example-app](https://git.vdb.to/snowball/nextjs-package-manager-example-app) repo: `nextjs-package-manager/npm`, `nextjs-package-manager/yarn`, `nextjs-package-manager/pnpm`, `nextjs-package-manager/bun`.
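
As an illustration only (not the actual build script), here is a minimal sketch of how a webapp build step might select the package manager from the lockfile present in the app source; the helper names and fallback order are assumptions:

```python
# Hypothetical sketch: choose a package manager based on lockfiles present in the app.
import os
import subprocess

LOCKFILE_TO_MANAGER = {
    "bun.lockb": "bun",
    "pnpm-lock.yaml": "pnpm",
    "yarn.lock": "yarn",
    "package-lock.json": "npm",
}

def detect_package_manager(app_dir: str) -> str:
    # Prefer bun, then pnpm, yarn, npm; fall back to npm if no lockfile is found.
    for lockfile, manager in LOCKFILE_TO_MANAGER.items():
        if os.path.exists(os.path.join(app_dir, lockfile)):
            return manager
    return "npm"

def install_and_build(app_dir: str) -> None:
    manager = detect_package_manager(app_dir)
    subprocess.run([manager, "install"], cwd=app_dir, check=True)
    subprocess.run([manager, "run", "build"], cwd=app_dir, check=True)
```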

Co-authored-by: Vivian Phung <dev+github@vivianphung.com>
Co-authored-by: David Boreham <dboreham@noreply.git.vdb.to>
Reviewed-on: https://git.vdb.to/cerc-io/stack-orchestrator/pulls/800
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: VPhung24 <vphung24@noreply.git.vdb.to>
Co-committed-by: VPhung24 <vphung24@noreply.git.vdb.to>
2024-07-09 18:00:14 +00:00
36d4969b2d Fixes for external stack deployment (#851)
All checks were successful
Lint Checks / Run linter (push) Successful in 37s
Publish / Build and publish (push) Successful in 1m10s
Deploy Test / Run deploy test suite (push) Successful in 5m1s
Smoke Test / Run basic test suite (push) Successful in 4m1s
Webapp Test / Run webapp test suite (push) Successful in 4m40s
Fixes:
- stack path resolution for `build`
- external stack path resolution for deployments (a sketch of the resolution logic follows below)
- "extra" config detection
- `deployment ports` command
- `version` command in dist or source install (without build_tag.txt)
- `setup-repos`, so it no longer fails when an existing repo is not at a branch or exact tag

Used in cerc-io/fixturenet-eth-stacks#14
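
A minimal sketch of external stack path resolution under a simplified model, where internal stacks live under the tool's own data directory and anything containing a path separator is treated as an external stack; the names and layout here are illustrative, not the actual implementation:

```python
# Illustrative only: resolve a stack reference to a directory on disk.
from pathlib import Path

INTERNAL_STACKS_DIR = Path(__file__).parent / "data" / "stacks"  # hypothetical location

def resolve_stack_path(stack: str) -> Path:
    candidate = Path(stack)
    # Anything that looks like a filesystem path is treated as an external stack.
    if candidate.is_absolute() or len(candidate.parts) > 1:
        if not candidate.is_dir():
            raise FileNotFoundError(f"External stack not found: {candidate}")
        return candidate.resolve()
    # Otherwise fall back to a named stack bundled with the tool.
    return INTERNAL_STACKS_DIR / stack
```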

Reviewed-on: #851
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
2024-07-09 15:37:35 +00:00
a2d6201be9 Merge pull request 'Remove quotes from git config' (#870) from dboreham/fix-git-config-command into main
Some checks failed
Publish / Build and publish (push) Successful in 1m19s
Deploy Test / Run deploy test suite (push) Successful in 5m12s
Webapp Test / Run webapp test suite (push) Successful in 4m37s
Smoke Test / Run basic test suite (push) Successful in 4m0s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h4m0s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h4m0s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m37s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m35s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m3s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m10s
External Stack Test / Run external stack test suite (push) Successful in 4m43s
Lint Checks / Run linter (push) Successful in 34s
Reviewed-on: #870
2024-07-05 15:56:12 +00:00
62f7825ec2 Remove quotes from git config
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 42s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m57s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m26s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 8m12s
Smoke Test / Run basic test suite (pull_request) Successful in 4m28s
2024-07-05 09:55:14 -06:00
6d24d4a7e6 Set github auth token if present (#868)
Some checks failed
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h9m59s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h10m0s
Lint Checks / Run linter (push) Successful in 33s
Publish / Build and publish (push) Successful in 1m10s
Deploy Test / Run deploy test suite (push) Successful in 4m33s
Webapp Test / Run webapp test suite (push) Successful in 4m20s
Smoke Test / Run basic test suite (push) Successful in 3m52s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m6s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m27s
Reviewed-on: #868
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-07-05 12:27:22 +00:00
f06e5f9a2a Don't try to tag remote images (#866)
All checks were successful
Lint Checks / Run linter (push) Successful in 37s
Publish / Build and publish (push) Successful in 1m14s
Webapp Test / Run webapp test suite (push) Successful in 4m18s
Smoke Test / Run basic test suite (push) Successful in 4m3s
Deploy Test / Run deploy test suite (push) Successful in 4m53s
Reviewed-on: #866
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-07-04 23:51:06 +00:00
c3a1402042 Derive the webapp host container id from stable data (#865)
All checks were successful
Lint Checks / Run linter (push) Successful in 34s
Publish / Build and publish (push) Successful in 1m20s
Webapp Test / Run webapp test suite (push) Successful in 4m32s
Smoke Test / Run basic test suite (push) Successful in 4m5s
Deploy Test / Run deploy test suite (push) Successful in 4m50s
Reviewed-on: #865
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-07-04 22:38:07 +00:00
ca5fffaed5 Add logging to webapp deployer (#863)
All checks were successful
Lint Checks / Run linter (push) Successful in 32s
Publish / Build and publish (push) Successful in 1m7s
Deploy Test / Run deploy test suite (push) Successful in 4m46s
Webapp Test / Run webapp test suite (push) Successful in 4m17s
Smoke Test / Run basic test suite (push) Successful in 3m59s
Helps with diagnosing failures and odd behavior seen in the deployer in production.

Reviewed-on: #863
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-07-04 19:46:42 +00:00
df776c1b4c Install git for apps that want to get their commit sha (#854)
Some checks failed
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m15s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m10s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h10m0s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h9m59s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m16s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 3m33s
External Stack Test / Run external stack test suite (push) Successful in 4m31s
Lint Checks / Run linter (push) Successful in 35s
Publish / Build and publish (push) Successful in 1m9s
Deploy Test / Run deploy test suite (push) Successful in 4m40s
Webapp Test / Run webapp test suite (push) Successful in 4m13s
Smoke Test / Run basic test suite (push) Successful in 3m51s
Reviewed-on: #854
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
Co-authored-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-committed-by: David Boreham <dboreham@noreply.git.vdb.to>
2024-06-25 05:03:49 +00:00
48a3e79e6a Merge pull request 'Fix argument errors in command code' (#853) from dboreham/fix-laconic-mainnet-command into main
All checks were successful
Lint Checks / Run linter (push) Successful in 34s
Publish / Build and publish (push) Successful in 1m20s
Webapp Test / Run webapp test suite (push) Successful in 4m31s
Smoke Test / Run basic test suite (push) Successful in 3m54s
Deploy Test / Run deploy test suite (push) Successful in 4m58s
Reviewed-on: #853
2024-06-24 20:21:08 +00:00
0eaa5b8f09 Fix argument errors in command code
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 35s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m24s
Smoke Test / Run basic test suite (pull_request) Successful in 4m0s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m54s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 7m38s
2024-06-24 14:15:46 -06:00
8980ac2581 Merge pull request 'Fixes for current SO objects' (#852) from dboreham/fix-mainnet-laconic into main
All checks were successful
Lint Checks / Run linter (push) Successful in 41s
Publish / Build and publish (push) Successful in 1m18s
Webapp Test / Run webapp test suite (push) Successful in 4m17s
Smoke Test / Run basic test suite (push) Successful in 3m48s
Deploy Test / Run deploy test suite (push) Successful in 4m44s
Reviewed-on: #852
2024-06-24 19:54:04 +00:00
fd15252c3f Fixes for current SO objects
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 33s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m23s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m56s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 7m27s
Smoke Test / Run basic test suite (pull_request) Successful in 3m56s
2024-06-24 13:41:15 -06:00
b4e82ebc19 Merge pull request 'Fix mainnet laconic deploy setup' (#850) from dboreham/fix-mainnet-laconic-init into main
Some checks failed
Lint Checks / Run linter (push) Successful in 48s
Publish / Build and publish (push) Successful in 1m15s
Webapp Test / Run webapp test suite (push) Successful in 4m19s
Smoke Test / Run basic test suite (push) Successful in 3m52s
Deploy Test / Run deploy test suite (push) Successful in 4m47s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m19s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 56m43s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m23s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 1h12m31s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m46s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 3m36s
External Stack Test / Run external stack test suite (push) Successful in 4m30s
Reviewed-on: #850
2024-06-22 01:25:07 +00:00
2364924a59 Fix mainnet laconic deploy setup
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 31s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m6s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 8m9s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m33s
Smoke Test / Run basic test suite (pull_request) Successful in 4m9s
2024-06-21 19:24:33 -06:00
a223797b4a Update graph-node dashboard to show individual subgraph increase in query count (#846)
Some checks failed
Lint Checks / Run linter (push) Successful in 35s
Publish / Build and publish (push) Successful in 1m10s
Deploy Test / Run deploy test suite (push) Successful in 4m42s
Webapp Test / Run webapp test suite (push) Successful in 4m18s
Smoke Test / Run basic test suite (push) Successful in 3m53s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m13s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 51m15s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 55m9s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m20s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m4s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 3m35s
External Stack Test / Run external stack test suite (push) Successful in 4m24s
Part of [Metrics and logging for GQL queries in watcher](https://www.notion.so/Metrics-and-logging-for-GQL-queries-in-watcher-928c692292b140a2a4f52cda9795df5e)

Reviewed-on: #846
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Co-committed-by: Nabarun <nabarun@deepstacksoft.com>
2024-06-20 09:27:23 +00:00
b8004e9870 Add Grafana panels for graph-node subgraph GQL queries (#845)
Some checks failed
Lint Checks / Run linter (push) Successful in 42s
Publish / Build and publish (push) Successful in 1m22s
Deploy Test / Run deploy test suite (push) Successful in 5m12s
Webapp Test / Run webapp test suite (push) Successful in 4m34s
Smoke Test / Run basic test suite (push) Successful in 3m59s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m17s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 51m47s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m40s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 1h38m30s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m22s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 3m49s
External Stack Test / Run external stack test suite (push) Successful in 4m50s
Part of [Deploy v2 and updated v3 sushiswap subgraphs](https://www.notion.so/Deploy-v2-and-updated-v3-sushiswap-subgraphs-e331945fdeea487c890706fc22c6cc94)

Reviewed-on: #845
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Co-committed-by: Nabarun <nabarun@deepstacksoft.com>
2024-06-19 10:40:54 +00:00
3fd99a1522 Handle race condition in laconic registry CLI tests (#843)
All checks were successful
Lint Checks / Run linter (push) Successful in 33s
Publish / Build and publish (push) Successful in 1m16s
Deploy Test / Run deploy test suite (push) Successful in 5m11s
Webapp Test / Run webapp test suite (push) Successful in 4m9s
Smoke Test / Run basic test suite (push) Successful in 3m46s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m37s
Part of cerc-io/laconic-registry-cli#63

Reviewed-on: #843
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-06-19 06:41:58 +00:00
842d999792 Add alert rules for secured-finance subgraph watcher (#842)
Some checks failed
Lint Checks / Run linter (push) Successful in 38s
Publish / Build and publish (push) Successful in 1m22s
Webapp Test / Run webapp test suite (push) Successful in 4m44s
Smoke Test / Run basic test suite (push) Successful in 4m34s
Deploy Test / Run deploy test suite (push) Successful in 5m25s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m30s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 53m10s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 1h1m56s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m37s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m23s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 3m56s
External Stack Test / Run external stack test suite (push) Successful in 4m46s
Part of [Generate secured-finance subgraph watcher with codegen](https://www.notion.so/Generate-secured-finance-subgraph-watcher-with-codegen-2923413e0af54ea787c5435d6966f3bb)
- Update watcher dashboard labels

Reviewed-on: #842
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Co-committed-by: Nabarun <nabarun@deepstacksoft.com>
2024-06-18 12:28:02 +00:00
d6a1fb3279 Merge pull request 'Fix image tag name' (#841) from dboreham/fix-remote-image-tags into main
Some checks failed
Lint Checks / Run linter (push) Successful in 32s
Publish / Build and publish (push) Successful in 1m30s
Deploy Test / Run deploy test suite (push) Successful in 5m13s
Webapp Test / Run webapp test suite (push) Successful in 4m53s
Smoke Test / Run basic test suite (push) Successful in 5m48s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m11s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 50m51s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m23s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h8m0s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m16s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 3m55s
External Stack Test / Run external stack test suite (push) Successful in 4m48s
Reviewed-on: #841
2024-06-13 14:32:26 +00:00
bf1eccd486 Fix image tag name
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 50s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m36s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m27s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 8m33s
Smoke Test / Run basic test suite (pull_request) Successful in 4m18s
2024-06-13 08:31:45 -06:00
3fb025b5c9 Make remote image tags unique to the deployment (#838)
Some checks failed
Lint Checks / Run linter (push) Successful in 34s
Publish / Build and publish (push) Successful in 1m22s
Deploy Test / Run deploy test suite (push) Successful in 4m41s
Webapp Test / Run webapp test suite (push) Successful in 4m24s
Smoke Test / Run basic test suite (push) Successful in 3m49s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m45s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 55m4s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h8m0s
Reviewed-on: #838
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-06-13 03:26:58 +00:00
4acb06325b Update watcher dashboard and config templates (#835)
Some checks failed
Lint Checks / Run linter (push) Successful in 33s
Publish / Build and publish (push) Successful in 1m21s
Webapp Test / Run webapp test suite (push) Successful in 4m11s
Smoke Test / Run basic test suite (push) Successful in 5m24s
Deploy Test / Run deploy test suite (push) Successful in 6m59s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m40s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 52m22s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m5s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h8m0s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m24s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 3m44s
External Stack Test / Run external stack test suite (push) Successful in 4m28s
Part of [Metrics and logging for GQL queries in watcher](https://www.notion.so/Metrics-and-logging-for-GQL-queries-in-watcher-928c692292b140a2a4f52cda9795df5e)

- Update watcher config templates after config refactoring
- Mount watcher GQL query log files on volumes
- Update watcher dashboard to
  - add a panel to show latest processed block number
  - use latest processed block from sync status for diff values

Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Reviewed-on: #835
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-06-12 11:52:51 +00:00
b80b647fa4 Merge pull request 'Remove files migrated to external repo' (#836) from dboreham/remove-snowball-stack into main
Some checks failed
Lint Checks / Run linter (push) Successful in 43s
Publish / Build and publish (push) Successful in 1m16s
Webapp Test / Run webapp test suite (push) Successful in 4m43s
Deploy Test / Run deploy test suite (push) Successful in 5m10s
Smoke Test / Run basic test suite (push) Successful in 4m30s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m15s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 54m48s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m28s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 2h38m9s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m33s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 3m36s
External Stack Test / Run external stack test suite (push) Successful in 4m36s
Reviewed-on: #836
2024-06-07 03:17:48 +00:00
9a1d3bb0f1 Remove files migrated to external repo
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 34s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m29s
Smoke Test / Run basic test suite (pull_request) Successful in 4m21s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m26s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 8m24s
2024-06-06 20:44:51 -06:00
abc0c2423f Add panels for GQL metrics to watcher dashboard (#834)
Some checks failed
Lint Checks / Run linter (push) Successful in 43s
Publish / Build and publish (push) Successful in 1m22s
Webapp Test / Run webapp test suite (push) Successful in 4m22s
Deploy Test / Run deploy test suite (push) Successful in 5m1s
Smoke Test / Run basic test suite (push) Successful in 4m20s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m6s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 52m19s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m10s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h12m59s
Database Test / Run database hosting test on kind/k8s (push) Successful in 8m43s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 3m39s
External Stack Test / Run external stack test suite (push) Successful in 4m37s
Part of [Metrics and logging for GQL queries in watcher](https://www.notion.so/Metrics-and-logging-for-GQL-queries-in-watcher-928c692292b140a2a4f52cda9795df5e)

Reviewed-on: #834
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-06-06 11:47:18 +00:00
a322d6eed4 Add dashboard for graph-node subgraphs (#832)
Some checks failed
Lint Checks / Run linter (push) Successful in 1m5s
Publish / Build and publish (push) Successful in 2m27s
Webapp Test / Run webapp test suite (push) Successful in 7m1s
Smoke Test / Run basic test suite (push) Successful in 7m30s
Deploy Test / Run deploy test suite (push) Successful in 9m23s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 24m12s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 51m33s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 6m56s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 1h40m25s
Database Test / Run database hosting test on kind/k8s (push) Successful in 9m3s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m4s
External Stack Test / Run external stack test suite (push) Successful in 4m31s
Part of [Deploy v2 and updated v3 sushiswap subgraphs](https://www.notion.so/Deploy-v2-and-updated-v3-sushiswap-subgraphs-e331945fdeea487c890706fc22c6cc94)

- Add param `GRAPH_ETHEREUM_BLOCK_INGESTOR_MAX_CONCURRENT_JSON_RPC_CALLS_FOR_TXN_RECEIPTS` in graph-node stack
  - <https://github.com/graphprotocol/graph-node/blob/v0.31.0/docs/environment-variables.md#json-rpc-configuration-for-evm-chains>
- Add dashboard for subgraphs deployment in graph-node
  -  Show subgraph names in dashboard
- Add watcher dashboard panel for showing watcher release version and commit hash

Reviewed-on: #832
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Co-committed-by: Nabarun <nabarun@deepstacksoft.com>
2024-06-04 07:21:27 +00:00
ed8914b8d3 Upgrade watchers and their config (#827)
Some checks failed
Lint Checks / Run linter (push) Successful in 51s
Publish / Build and publish (push) Successful in 1m28s
Webapp Test / Run webapp test suite (push) Successful in 4m29s
Smoke Test / Run basic test suite (push) Successful in 4m32s
Deploy Test / Run deploy test suite (push) Successful in 5m26s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 22m46s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Failing after 9m10s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h13m0s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h13m0s
Database Test / Run database hosting test on kind/k8s (push) Successful in 18m47s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 5m12s
External Stack Test / Run external stack test suite (push) Successful in 6m6s
Part of [Investigate subgraph watchers lagging behind head](https://www.notion.so/Investigate-subgraph-watchers-lagging-behind-head-01b72294ca8e4f658e4c0e86b36d19e2)

Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Reviewed-on: #827
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-05-23 04:12:31 +00:00
fef7649683 Refactor SPA check. (#831)
All checks were successful
Lint Checks / Run linter (push) Successful in 40s
Publish / Build and publish (push) Successful in 1m18s
Webapp Test / Run webapp test suite (push) Successful in 4m23s
Smoke Test / Run basic test suite (push) Successful in 4m23s
Deploy Test / Run deploy test suite (push) Successful in 5m15s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m16s
External Stack Test / Run external stack test suite (push) Successful in 4m53s
Reviewed-on: #831
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-05-22 19:00:42 +00:00
579b402f2f Make the SPA detection even simpler. (#830)
All checks were successful
Lint Checks / Run linter (push) Successful in 1m1s
Publish / Build and publish (push) Successful in 1m33s
Webapp Test / Run webapp test suite (push) Successful in 4m46s
Smoke Test / Run basic test suite (push) Successful in 5m3s
Deploy Test / Run deploy test suite (push) Successful in 5m18s
Reviewed-on: #830
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-05-22 18:31:57 +00:00
25d0bc8a98 Case insensitive comparison (#829)
All checks were successful
Lint Checks / Run linter (push) Successful in 38s
Publish / Build and publish (push) Successful in 1m22s
Webapp Test / Run webapp test suite (push) Successful in 3m59s
Deploy Test / Run deploy test suite (push) Successful in 4m59s
Smoke Test / Run basic test suite (push) Successful in 4m42s
Reviewed-on: #829
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-05-22 18:18:27 +00:00
855288368c Add messages around the SPA auto-detect. (#828)
All checks were successful
Lint Checks / Run linter (push) Successful in 45s
Publish / Build and publish (push) Successful in 1m28s
Deploy Test / Run deploy test suite (push) Successful in 4m35s
Webapp Test / Run webapp test suite (push) Successful in 4m55s
Smoke Test / Run basic test suite (push) Successful in 4m55s
Database Test / Run database hosting test on kind/k8s (push) Successful in 11m48s
Tweak the auto-detection logic slightly for single-page apps, and also print the results.
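
A rough sketch of what such single-page-app auto-detection with printed results might look like; the heuristic, file names, and function name below are assumptions for illustration, not the deployer's actual detection code:

```python
# Hypothetical SPA auto-detection sketch with printed results.
from pathlib import Path

def detect_spa(build_output_dir: str) -> bool:
    out = Path(build_output_dir)
    has_index = (out / "index.html").exists()
    has_server_entry = any((out / name).exists() for name in ("server.js", "server"))
    is_spa = has_index and not has_server_entry
    print(f"SPA auto-detect: index.html={has_index}, "
          f"server entrypoint={has_server_entry} -> treating app as SPA: {is_spa}")
    return is_spa
```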

Reviewed-on: #828
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-05-22 17:42:28 +00:00
8f2da38183 Add alerts for graph-node subgraphs (#821)
Some checks failed
Lint Checks / Run linter (push) Successful in 45s
Publish / Build and publish (push) Successful in 1m27s
Webapp Test / Run webapp test suite (push) Successful in 3m50s
Deploy Test / Run deploy test suite (push) Successful in 5m23s
Smoke Test / Run basic test suite (push) Successful in 4m40s
Database Test / Run database hosting test on kind/k8s (push) Successful in 12m11s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m34s
External Stack Test / Run external stack test suite (push) Successful in 4m51s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 15m20s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 11m10s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h12m58s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h13m0s
Part of [Deploy v2 and updated v3 sushiswap subgraphs](https://www.notion.so/Deploy-v2-and-updated-v3-sushiswap-subgraphs-e331945fdeea487c890706fc22c6cc94)

Reviewed-on: #821
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-05-17 04:01:41 +00:00
254f95e59f Merge pull request 'Remove legacy commands for docker startup' (#826) from dboreham/remove-old-docker-ci into main
Some checks failed
Lint Checks / Run linter (push) Successful in 50s
Publish / Build and publish (push) Successful in 1m31s
Webapp Test / Run webapp test suite (push) Successful in 4m12s
Deploy Test / Run deploy test suite (push) Successful in 5m29s
Smoke Test / Run basic test suite (push) Successful in 5m9s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 14m48s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 10m13s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h12m49s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h12m50s
Database Test / Run database hosting test on kind/k8s (push) Successful in 11m6s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m39s
External Stack Test / Run external stack test suite (push) Successful in 4m54s
Reviewed-on: #826
2024-05-15 16:09:09 +00:00
0acb6ea6bc Remove legacy commands for docker startup
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 1m14s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m25s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m40s
Smoke Test / Run basic test suite (pull_request) Successful in 4m55s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 10m8s
2024-05-15 09:04:19 -06:00
b9369a13e6 Update watcher dashboard with panels for ETH RPC request failures and durations (#825)
Some checks failed
Lint Checks / Run linter (push) Successful in 53s
Publish / Build and publish (push) Successful in 1m33s
Webapp Test / Run webapp test suite (push) Successful in 4m48s
Smoke Test / Run basic test suite (push) Successful in 4m28s
Deploy Test / Run deploy test suite (push) Successful in 5m36s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m31s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h13m0s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h13m0s
Part of [Regenerate ajna watcher with updated subgraph config](https://www.notion.so/Regenerate-ajna-watcher-with-updated-subgraph-config-c9bbecb033024c13a7515c7f1efc3363)
Requires [Add metrics to monitor errors and duration for ETH RPC requests](https://github.com/cerc-io/watcher-ts/pull/507)

Reviewed-on: #825
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-05-15 13:52:18 +00:00
0b1eb8eb0f Update ajna-watcher version in ajna stack (#824)
Some checks failed
Lint Checks / Run linter (push) Successful in 47s
Publish / Build and publish (push) Successful in 1m37s
Deploy Test / Run deploy test suite (push) Successful in 4m56s
Webapp Test / Run webapp test suite (push) Successful in 5m4s
Smoke Test / Run basic test suite (push) Successful in 4m46s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 9m29s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h13m0s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h12m59s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m11s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m11s
External Stack Test / Run external stack test suite (push) Successful in 5m9s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m54s
Part of [Regenerate ajna watcher with updated subgraph config](https://www.notion.so/Regenerate-ajna-watcher-with-updated-subgraph-config-c9bbecb033024c13a7515c7f1efc3363)

Reviewed-on: #824
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-05-14 08:27:59 +00:00
78092f5793 Update subgraph watcher stacks to configure multiple RPC endpoints (#822)
Some checks failed
Lint Checks / Run linter (push) Successful in 53s
Publish / Build and publish (push) Successful in 1m30s
Webapp Test / Run webapp test suite (push) Successful in 4m38s
Smoke Test / Run basic test suite (push) Successful in 4m49s
Deploy Test / Run deploy test suite (push) Successful in 5m55s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 13m54s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m49s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h13m0s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h13m0s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m43s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m40s
External Stack Test / Run external stack test suite (push) Successful in 5m16s
Part of [Ability to configure watchers with multiple RPC endpoints](https://www.notion.so/Ability-to-configure-watchers-with-multiple-RPC-endpoints-dc8d3ff4d647404ab718dfd5a4c9035c)

Reviewed-on: #822
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Co-committed-by: Nabarun <nabarun@deepstacksoft.com>
2024-05-10 04:58:30 +00:00
247dbdd2f0 Update subgraph watcher versions for improved eth_getLogs calls (#820)
Some checks failed
Lint Checks / Run linter (push) Successful in 55s
Publish / Build and publish (push) Successful in 1m36s
Webapp Test / Run webapp test suite (push) Successful in 5m13s
Deploy Test / Run deploy test suite (push) Successful in 5m32s
Smoke Test / Run basic test suite (push) Successful in 5m3s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 15m22s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 9m20s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h13m0s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h12m59s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m50s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m15s
External Stack Test / Run external stack test suite (push) Successful in 4m54s
Part of [Investigate subgraph watchers lagging behind head](https://www.notion.so/Investigate-subgraph-watchers-lagging-behind-head-01b72294ca8e4f658e4c0e86b36d19e2)

Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Reviewed-on: #820
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-05-07 04:18:45 +00:00
30db1f58d0 Refactor for new external stack directory layout under common parent (#815)
Some checks failed
Lint Checks / Run linter (push) Successful in 48s
Publish / Build and publish (push) Successful in 1m29s
Deploy Test / Run deploy test suite (push) Successful in 6m36s
Webapp Test / Run webapp test suite (push) Successful in 5m14s
Smoke Test / Run basic test suite (push) Successful in 6m49s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 14m34s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 9m14s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h13m0s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h12m59s
Database Test / Run database hosting test on kind/k8s (push) Successful in 11m45s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m37s
External Stack Test / Run external stack test suite (push) Successful in 5m45s
Reviewed-on: #815
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-04-29 23:03:20 +00:00
09a1cbb966 Additional env configuration in graph-node stack (#812)
Some checks failed
Webapp Test / Run webapp test suite (push) Successful in 4m14s
Lint Checks / Run linter (push) Failing after 0s
Deploy Test / Run deploy test suite (push) Failing after 3s
Smoke Test / Run basic test suite (push) Failing after 1s
Publish / Build and publish (push) Successful in 1m20s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 17m23s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m57s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h13m0s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h13m0s
Database Test / Run database hosting test on kind/k8s (push) Successful in 11m37s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m59s
External Stack Test / Run external stack test suite (push) Successful in 5m6s
Part of [Deploy v2 and updated v3 sushiswap subgraphs](https://www.notion.so/Deploy-v2-and-updated-v3-sushiswap-subgraphs-e331945fdeea487c890706fc22c6cc94)

Reviewed-on: #812
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-04-26 05:45:00 +00:00
13ce521d84 Fix config dir processing for external stacks (#810)
Some checks failed
Lint Checks / Run linter (push) Successful in 30s
Publish / Build and publish (push) Successful in 1m16s
Webapp Test / Run webapp test suite (push) Successful in 3m46s
Deploy Test / Run deploy test suite (push) Successful in 5m17s
Smoke Test / Run basic test suite (push) Successful in 4m41s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 18m56s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 1s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 0s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m31s
Database Test / Run database hosting test on kind/k8s (push) Failing after 1s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m25s
External Stack Test / Run external stack test suite (push) Successful in 6m20s
Reviewed-on: #810
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-04-23 21:47:20 +00:00
6e4dae9777 Add external stack support (#806)
Some checks failed
Database Test / Run database hosting test on kind/k8s (push) Successful in 8m33s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 3m45s
External Stack Test / Run external stack test suite (push) Successful in 4m41s
Lint Checks / Run linter (push) Successful in 41s
Publish / Build and publish (push) Successful in 1m22s
Deploy Test / Run deploy test suite (push) Successful in 4m58s
Webapp Test / Run webapp test suite (push) Successful in 4m27s
Smoke Test / Run basic test suite (push) Successful in 5m8s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 14m11s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 1s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Failing after 1s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h8m0s
Reviewed-on: #806
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-04-18 21:22:47 +00:00
9043a67c7c Skip checks on requests we've already seen (#805)
Some checks failed
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 15m11s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 1s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 9m5s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h10m0s
Database Test / Run database hosting test on kind/k8s (push) Successful in 11m11s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m18s
Lint Checks / Run linter (push) Successful in 47s
Publish / Build and publish (push) Successful in 1m20s
Webapp Test / Run webapp test suite (push) Successful in 4m20s
Deploy Test / Run deploy test suite (push) Successful in 7m3s
Smoke Test / Run basic test suite (push) Successful in 6m23s
Reviewed-on: #805
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-04-15 20:27:35 +00:00
7f84a45cfd Switch repo to cerc-io org. (#804)
All checks were successful
Lint Checks / Run linter (push) Successful in 39s
Publish / Build and publish (push) Successful in 1m21s
Webapp Test / Run webapp test suite (push) Successful in 4m33s
Smoke Test / Run basic test suite (push) Successful in 4m23s
Deploy Test / Run deploy test suite (push) Successful in 5m8s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m27s
Update stack to track moved repo.

Reviewed-on: #804
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
2024-04-15 18:59:08 +00:00
4126f2fc43 Add --fqdn-policy option to deploy-webapp-from-registry. (#802)
Some checks failed
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 1s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 9m38s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h10m0s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m11s
Lint Checks / Run linter (push) Successful in 57s
Publish / Build and publish (push) Successful in 1m34s
Webapp Test / Run webapp test suite (push) Successful in 5m8s
Deploy Test / Run deploy test suite (push) Successful in 6m20s
Smoke Test / Run basic test suite (push) Successful in 5m30s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 14m49s
This adds a new option, `--fqdn-policy`, to `deploy-webapp-from-registry`.

The default policy, `prohibit`, means that `ApplicationDeploymentRequests` which specify a FQDN will be rejected. The `allow` policy causes them to be processed. The `preexisting` policy only processes them if a `DnsRecord` already exists in the registry with the correct ownership.

The latter would be useful in conjunction with a pre-checking scheme in the UI (e.g., that the DNS entry is properly configured, the domain is under the control of the requestor, etc.). Only after all the checks were successful would the `DnsRecord` be created, allowing `ApplicationDeploymentRequests` to use it.
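
A minimal sketch of how the three policies described above could be applied when a request names a FQDN; the registry lookup helper and record fields are assumptions for illustration, not the deployer's actual API:

```python
# Illustrative FQDN policy check; the lookup helper and record fields are assumptions.
from typing import Optional

def fqdn_allowed(policy: str, fqdn: Optional[str], requestor: str, registry) -> bool:
    if not fqdn:
        return True  # No FQDN requested, nothing to police.
    if policy == "prohibit":
        return False
    if policy == "allow":
        return True
    if policy == "preexisting":
        # Accept only if a DnsRecord for this FQDN already exists and is owned by the requestor.
        record = registry.get_dns_record(fqdn)  # hypothetical lookup
        return record is not None and record.owner == requestor
    raise ValueError(f"Unknown --fqdn-policy value: {policy}")
```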

Reviewed-on: #802
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-04-15 12:20:35 +00:00
345d200873 Add a laconicd Grafana dashboard to monitoring stack (#799)
Some checks failed
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 14m44s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 1s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m59s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 3h10m0s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m55s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m11s
Lint Checks / Run linter (push) Successful in 1m20s
Publish / Build and publish (push) Successful in 1m23s
Deploy Test / Run deploy test suite (push) Successful in 5m11s
Webapp Test / Run webapp test suite (push) Successful in 5m19s
Smoke Test / Run basic test suite (push) Successful in 5m15s
Part of https://www.notion.so/Monitoring-and-alerting-for-laconicd-86727c3b4dde4dc993d87d6e29f935fe

- Add a laconicd Grafana dashboard
  - Update fixturenet-laconicd script to expose metrics
- Upgrade Grafana version to avoid errors while saving changes made to a dashboard (see [thread](https://community.grafana.com/t/error-cannot-add-property-ishandled-object-is-not-extensible/109268))
-  Add an alert rule for Ajna watcher

Reviewed-on: #799
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-04-11 07:59:36 +00:00
87fffca358 fixturenet-plugeth Deneb/Cancun upgrade (#789)
All checks were successful
Lint Checks / Run linter (push) Successful in 47s
Publish / Build and publish (push) Successful in 1m35s
Webapp Test / Run webapp test suite (push) Successful in 4m54s
Smoke Test / Run basic test suite (push) Successful in 4m42s
Deploy Test / Run deploy test suite (push) Successful in 5m48s
Updates fixturenet-plugeth stack for the Deneb fork based on Geth v1.13.x:

- updates the genesis generator tool and simplifies the config: the default from `ethereum-genesis-generator` can be used for a from-genesis Merged chain.

Reviewed-on: #789
Reviewed-by: jonathanface <jonathanface@noreply.git.vdb.to>
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
2024-04-11 03:21:36 +00:00
66b92df498 Merge pull request 'Blast stack' (#777) from blast-stack into main
Some checks failed
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 16m16s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 1s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 1h6m51s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 10m19s
Lint Checks / Run linter (push) Successful in 44s
Publish / Build and publish (push) Successful in 1m25s
Webapp Test / Run webapp test suite (push) Successful in 4m27s
Database Test / Run database hosting test on kind/k8s (push) Successful in 12m39s
Deploy Test / Run deploy test suite (push) Successful in 5m37s
Smoke Test / Run basic test suite (push) Successful in 5m4s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m42s
Reviewed-on: #777
2024-04-08 15:51:10 +00:00
108a5a3440 Merge branch 'main' into blast-stack
All checks were successful
Lint Checks / Run linter (push) Successful in 42s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 5m22s
Lint Checks / Run linter (pull_request) Successful in 1m12s
Deploy Test / Run deploy test suite (pull_request) Successful in 6m10s
Database Test / Run database hosting test on kind/k8s (push) Successful in 13m48s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 11m12s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 16m57s
Smoke Test / Run basic test suite (pull_request) Successful in 4m56s
Webapp Test / Run webapp test suite (pull_request) Successful in 5m7s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 10m2s
2024-04-08 15:21:03 +00:00
4b04a39faf linted
Some checks failed
Lint Checks / Run linter (pull_request) Successful in 44s
Lint Checks / Run linter (push) Successful in 49s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m36s
Webapp Test / Run webapp test suite (pull_request) Failing after 4m51s
Smoke Test / Run basic test suite (pull_request) Successful in 5m52s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 10m43s
2024-04-08 15:17:44 +00:00
40f362511b Run CI alert steps only on main (#797)
Some checks failed
Lint Checks / Run linter (push) Successful in 43s
Publish / Build and publish (push) Successful in 1m24s
Deploy Test / Run deploy test suite (push) Successful in 6m9s
Webapp Test / Run webapp test suite (push) Successful in 5m12s
Smoke Test / Run basic test suite (push) Successful in 4m55s
Database Test / Run database hosting test on kind/k8s (push) Successful in 11m55s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m33s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 16m7s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 1s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m52s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 59m34s
Part of https://www.notion.so/Alerting-for-failing-CI-jobs-d0183b65453947aeab11dbddf989d9c0

- Run CI alert steps only on main to avoid alerts for in-progress PRs
- The Slack alerts will be sent on a CI job failure if
  - A commit is pushed directly to main
  - A PR gets merged into main
  - A scheduled job runs on main

Reviewed-on: #797
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-04-05 09:26:08 +00:00
9cd34ffebb Add Slack alerts for failures on CI workflows (#793)
Some checks failed
Lint Checks / Run linter (push) Successful in 40s
Publish / Build and publish (push) Successful in 1m30s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m45s
Deploy Test / Run deploy test suite (push) Successful in 5m45s
Webapp Test / Run webapp test suite (push) Successful in 4m36s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 19m3s
Database Test / Run database hosting test on kind/k8s (push) Successful in 17m30s
Smoke Test / Run basic test suite (push) Successful in 4m50s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Has been cancelled
Part of https://www.notion.so/Alerting-for-failing-CI-jobs-d0183b65453947aeab11dbddf989d9c0

Reviewed-on: #793
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-04-05 08:27:46 +00:00
515f6d16f5 Fix laconic registry CLI tests (#792)
Some checks failed
Lint Checks / Run linter (push) Successful in 45s
Publish / Build and publish (push) Successful in 1m28s
Webapp Test / Run webapp test suite (push) Successful in 4m50s
Deploy Test / Run deploy test suite (push) Successful in 6m24s
Smoke Test / Run basic test suite (push) Successful in 5m34s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Successful in 14m15s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 0s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 56m6s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 9m34s
Database Test / Run database hosting test on kind/k8s (push) Successful in 11m25s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m25s
Part of https://www.notion.so/Test-registry-cli-in-SO-fixturenet-laconicd-CI-ef1f497678264362931bd12643ba8a17

Reviewed-on: #792
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-04-04 10:50:02 +00:00
105805cb9b Run registry CLI tests as part of laconicd fixturenet tests (#791)
Some checks failed
Lint Checks / Run linter (push) Successful in 47s
Publish / Build and publish (push) Successful in 1m34s
Webapp Test / Run webapp test suite (push) Successful in 5m14s
Deploy Test / Run deploy test suite (push) Successful in 6m22s
Smoke Test / Run basic test suite (push) Successful in 5m32s
Fixturenet-Laconicd-Test / Run Laconicd fixturenet and Laconic CLI tests (push) Failing after 10m4s
Part of https://www.notion.so/Test-registry-cli-in-SO-fixturenet-laconicd-CI-ef1f497678264362931bd12643ba8a17

Co-authored-by: neeraj <neeraj.rtly@gmail.com>
Reviewed-on: #791
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-04-04 07:16:46 +00:00
jonathan@vulcanize.io
62f1962546 metrics on op-node
Some checks failed
Lint Checks / Run linter (push) Failing after 38s
Lint Checks / Run linter (pull_request) Failing after 39s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m42s
Webapp Test / Run webapp test suite (pull_request) Failing after 3m56s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 9m3s
Smoke Test / Run basic test suite (pull_request) Successful in 4m29s
2024-04-02 17:35:13 +00:00
jonathan@vulcanize.io
2a24e71c92 added metrics addr flag
Some checks failed
Lint Checks / Run linter (pull_request) Failing after 39s
Lint Checks / Run linter (push) Failing after 51s
Webapp Test / Run webapp test suite (pull_request) Failing after 4m44s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m50s
Smoke Test / Run basic test suite (pull_request) Successful in 5m27s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 9m44s
2024-04-02 16:20:25 +00:00
jonathan@vulcanize.io
c789b82782 metrics
Some checks failed
Lint Checks / Run linter (push) Failing after 42s
Lint Checks / Run linter (pull_request) Failing after 41s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m50s
Webapp Test / Run webapp test suite (pull_request) Successful in 5m8s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 9m11s
Smoke Test / Run basic test suite (pull_request) Successful in 5m1s
2024-04-01 19:38:00 +00:00
d2442bcc9b revert 5308ab1e4e (#788)
Some checks failed
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 2s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 56m21s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m53s
Database Test / Run database hosting test on kind/k8s (push) Successful in 11m3s
Container Registry Test / Run contaier registry hosting test on kind/k8s (push) Successful in 4m15s
Lint Checks / Run linter (push) Successful in 37s
Publish / Build and publish (push) Successful in 1m7s
Webapp Test / Run webapp test suite (push) Successful in 4m58s
Deploy Test / Run deploy test suite (push) Successful in 5m38s
Smoke Test / Run basic test suite (push) Successful in 5m0s
Fixturenet-Laconicd-Test / Run an Laconicd fixturenet test (push) Successful in 10m5s
revert Blind commit to fix laconic CLI calls after rename. (#784)

`laconic cns` got renamed to `laconic registry` which breaks all the scripts and commands that use it.

Reviewed-on: #784
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>

Reviewed-on: #788
2024-03-27 20:55:03 +00:00
jonathan@vulcanize.io
b3bc5a19ae blast testnet, initial commit
Some checks failed
Lint Checks / Run linter (push) Failing after 40s
Lint Checks / Run linter (pull_request) Failing after 43s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m57s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m29s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 10m34s
Smoke Test / Run basic test suite (pull_request) Successful in 5m1s
2024-03-27 15:03:30 +00:00
44faf36837 Update ajna-watcher-ts version for using new subgraph (#786)
Some checks failed
Lint Checks / Run linter (push) Successful in 45s
Publish / Build and publish (push) Successful in 1m35s
Webapp Test / Run webapp test suite (push) Successful in 4m29s
Deploy Test / Run deploy test suite (push) Successful in 5m29s
Smoke Test / Run basic test suite (push) Successful in 4m51s
Fixturenet-Laconicd-Test / Run a Laconicd fixturenet test (push) Successful in 9m18s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 33s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 53m4s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 10m6s
Database Test / Run database hosting test on kind/k8s (push) Successful in 11m4s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 4m14s
Part of https://www.notion.so/Run-ajna-finance-subgraph-watcher-87748d78cd7a471b8d71f50d5fdc2657

Reviewed-on: #786
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Co-committed-by: Nabarun <nabarun@deepstacksoft.com>
2024-03-26 14:52:57 +00:00
18b006468d Update GQL path for ajna subgraph watcher server (#785)
Some checks failed
Lint Checks / Run linter (push) Successful in 35s
Publish / Build and publish (push) Successful in 1m7s
Deploy Test / Run deploy test suite (push) Successful in 4m35s
Webapp Test / Run webapp test suite (push) Successful in 4m17s
Smoke Test / Run basic test suite (push) Successful in 4m18s
Fixturenet-Laconicd-Test / Run a Laconicd fixturenet test (push) Successful in 9m27s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 33s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 53m7s
Part of https://www.notion.so/Run-ajna-finance-subgraph-watcher-87748d78cd7a471b8d71f50d5fdc2657

Reviewed-on: #785
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Co-committed-by: Nabarun <nabarun@deepstacksoft.com>
2024-03-26 11:50:05 +00:00
5308ab1e4e Blind commit to fix laconic CLI calls after rename. (#784)
All checks were successful
Lint Checks / Run linter (push) Successful in 42s
Publish / Build and publish (push) Successful in 1m25s
Deploy Test / Run deploy test suite (push) Successful in 5m20s
Webapp Test / Run webapp test suite (push) Successful in 4m32s
Smoke Test / Run basic test suite (push) Successful in 5m47s
`laconic cns` got renamed to `laconic registry`, which breaks all the scripts and commands that use it.

Reviewed-on: #784
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-03-25 19:09:26 +00:00
cd50832038 Add a Ajna watcher stack (#781)
All checks were successful
Fixturenet-Laconicd-Test / Run a Laconicd fixturenet test (push) Successful in 11m14s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 51m28s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 51m45s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m38s
Database Test / Run database hosting test on kind/k8s (push) Successful in 11m46s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 4m57s
Lint Checks / Run linter (push) Successful in 41s
Publish / Build and publish (push) Successful in 1m20s
Deploy Test / Run deploy test suite (push) Successful in 4m36s
Webapp Test / Run webapp test suite (push) Successful in 4m11s
Smoke Test / Run basic test suite (push) Successful in 4m29s
Part of https://www.notion.so/Generate-ajna-finance-subgraph-watcher-with-codegen-5b80ac149b3f449fb138f5d92cc5485e

Reviewed-on: #781
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-03-21 07:17:01 +00:00
jonathan@vulcanize.io
7f9e1da8ba removed keycloak
Some checks failed
Lint Checks / Run linter (push) Failing after 46s
Lint Checks / Run linter (pull_request) Failing after 56s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m40s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m40s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 9m19s
Smoke Test / Run basic test suite (pull_request) Successful in 5m8s
2024-03-15 15:15:15 +00:00
jonathan@vulcanize.io
0149346927 adding trustrpc flag to op-node
Some checks failed
Lint Checks / Run linter (push) Failing after 45s
Lint Checks / Run linter (pull_request) Failing after 44s
Webapp Test / Run webapp test suite (pull_request) Successful in 5m15s
Deploy Test / Run deploy test suite (pull_request) Successful in 6m30s
Smoke Test / Run basic test suite (pull_request) Successful in 5m55s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 9m50s
2024-03-13 18:18:57 +00:00
jonathan@vulcanize.io
06de4fe485 follow established naming convention
Some checks failed
Lint Checks / Run linter (pull_request) Failing after 34s
Lint Checks / Run linter (push) Failing after 39s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m38s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 8m33s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m11s
Smoke Test / Run basic test suite (pull_request) Successful in 4m20s
2024-03-13 16:21:29 +00:00
aeddc82ebc Remove latest indexed block value from watcher alerts data (#780)
All checks were successful
Lint Checks / Run linter (push) Successful in 40s
Publish / Build and publish (push) Successful in 1m21s
Webapp Test / Run webapp test suite (push) Successful in 4m34s
Deploy Test / Run deploy test suite (push) Successful in 5m42s
Smoke Test / Run basic test suite (push) Successful in 4m50s
Fixturenet-Laconicd-Test / Run a Laconicd fixturenet test (push) Successful in 8m55s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 50m50s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 54m1s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 9m5s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m40s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 4m20s
Part of https://www.notion.so/Setup-grafana-SO-stack-for-monitoring-watchers-7e23042c296c4de6b8676f1f604aa03c

Reviewed-on: #780
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-03-13 07:16:15 +00:00
jonathan@vulcanize.io
821d401575 fixed missing rollup
Some checks failed
Lint Checks / Run linter (pull_request) Failing after 43s
Lint Checks / Run linter (push) Failing after 45s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m38s
Webapp Test / Run webapp test suite (pull_request) Successful in 5m11s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 9m23s
Smoke Test / Run basic test suite (pull_request) Successful in 4m47s
2024-03-13 04:28:16 +00:00
jonathan@vulcanize.io
5123111db0 copy whether absolute path or local
Some checks failed
Lint Checks / Run linter (push) Failing after 39s
Lint Checks / Run linter (pull_request) Failing after 39s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m5s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m25s
Smoke Test / Run basic test suite (pull_request) Successful in 4m44s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 8m56s
2024-03-13 03:56:17 +00:00
jonathan@vulcanize.io
02c33cb229 working state
All checks were successful
Lint Checks / Run linter (push) Successful in 35s
Lint Checks / Run linter (pull_request) Successful in 33s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m32s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 9m31s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m59s
Smoke Test / Run basic test suite (pull_request) Successful in 4m42s
2024-03-13 02:49:21 +00:00
17e860d6e4 Update subgraph watcher versions and instructions to use deployments (#775)
All checks were successful
Fixturenet-Laconicd-Test / Run a Laconicd fixturenet test (push) Successful in 9m39s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 53m46s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 55m59s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m44s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m32s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 4m19s
Lint Checks / Run linter (push) Successful in 38s
Publish / Build and publish (push) Successful in 1m16s
Webapp Test / Run webapp test suite (push) Successful in 4m10s
Deploy Test / Run deploy test suite (push) Successful in 5m30s
Smoke Test / Run basic test suite (push) Successful in 4m53s
Part of https://www.notion.so/Setup-watchers-on-sandman-34b5514a10634c6fbf3ec338967c871c

Reviewed-on: #775
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-03-12 05:32:55 +00:00
jonathan@vulcanize.io
b4eda902ea fixed optimum deployment
All checks were successful
Lint Checks / Run linter (push) Successful in 42s
Lint Checks / Run linter (pull_request) Successful in 41s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m25s
Smoke Test / Run basic test suite (pull_request) Successful in 4m47s
Webapp Test / Run webapp test suite (pull_request) Successful in 5m6s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 9m17s
2024-03-08 19:27:36 +00:00
jonathan@vulcanize.io
b4df8104c8 tweaking yml
All checks were successful
Lint Checks / Run linter (push) Successful in 41s
Lint Checks / Run linter (pull_request) Successful in 46s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m32s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 9m21s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m42s
Smoke Test / Run basic test suite (pull_request) Successful in 4m28s
2024-03-08 17:20:44 +00:00
jonathan@vulcanize.io
07282cdd6e minimal build
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 43s
Lint Checks / Run linter (push) Successful in 46s
Deploy Test / Run deploy test suite (pull_request) Successful in 6m1s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 9m31s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m27s
Smoke Test / Run basic test suite (pull_request) Successful in 4m31s
2024-03-08 02:56:23 +00:00
jonathan@vulcanize.io
e7c935fb78 integration testing, I think
All checks were successful
Lint Checks / Run linter (push) Successful in 40s
Lint Checks / Run linter (pull_request) Successful in 37s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m49s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 8m37s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m22s
Smoke Test / Run basic test suite (pull_request) Successful in 4m50s
2024-03-07 22:24:46 +00:00
523b5779be Auto-detect which certificate to use (including wildcards). (#779)
Some checks failed
Lint Checks / Run linter (push) Successful in 43s
Publish / Build and publish (push) Successful in 1m9s
Deploy Test / Run deploy test suite (push) Successful in 4m49s
Webapp Test / Run webapp test suite (push) Successful in 4m23s
Smoke Test / Run basic test suite (push) Successful in 5m5s
Fixturenet-Laconicd-Test / Run a Laconicd fixturenet test (push) Successful in 9m18s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 54m22s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 55m24s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 7m53s
Database Test / Run database hosting test on kind/k8s (push) Successful in 11m14s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 4m22s
Rather than always requesting a new certificate, attempt to reuse an existing certificate if one already exists in the k8s cluster. This includes matching against a wildcard certificate.
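
As an illustration of the matching described above (not the actual stack-orchestrator code), here is a minimal Python sketch, assuming certificates are tracked as a mapping of certificate name to the hostnames they cover; all names below are made up:

```python
def cert_covers(cert_host: str, request_host: str) -> bool:
    # Exact hostname match.
    if cert_host == request_host:
        return True
    # Wildcard match: "*.example.org" covers "app.example.org"
    # but not "example.org" or "a.b.example.org".
    if cert_host.startswith("*."):
        suffix = cert_host[2:]
        prefix, _, rest = request_host.partition(".")
        return bool(prefix) and rest == suffix
    return False


def find_existing_cert(existing_certs: dict, request_host: str):
    # Return the name of the first certificate whose hosts cover the request.
    for name, hosts in existing_certs.items():
        if any(cert_covers(h, request_host) for h in hosts):
            return name
    return None


# Example: an existing wildcard certificate is reused for a new hostname.
certs = {"wildcard-example-org": ["*.example.org"]}
print(find_existing_cert(certs, "app.example.org"))  # -> wildcard-example-org
```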

Reviewed-on: #779
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-03-07 17:38:36 +00:00
jonathan@vulcanize.io
1a636799a6 filepath
All checks were successful
Lint Checks / Run linter (push) Successful in 31s
Lint Checks / Run linter (pull_request) Successful in 44s
Webapp Test / Run webapp test suite (pull_request) Successful in 3m15s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m45s
Smoke Test / Run basic test suite (pull_request) Successful in 5m2s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 6m14s
2024-03-06 19:34:25 +00:00
jonathan@vulcanize.io
0aa4b350bd keycloak implementation
All checks were successful
Lint Checks / Run linter (push) Successful in 45s
Lint Checks / Run linter (pull_request) Successful in 44s
Deploy Test / Run deploy test suite (pull_request) Successful in 3m38s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m59s
Smoke Test / Run basic test suite (pull_request) Successful in 2m59s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 9m12s
2024-03-06 19:21:29 +00:00
62f7ce649d Exit non-0 if docker build fails. (#778)
All checks were successful
Fixturenet-Laconicd-Test / Run a Laconicd fixturenet test (push) Successful in 10m22s
Lint Checks / Run linter (push) Successful in 27s
Publish / Build and publish (push) Successful in 52s
Webapp Test / Run webapp test suite (push) Successful in 2m49s
Deploy Test / Run deploy test suite (push) Successful in 3m37s
Smoke Test / Run basic test suite (push) Successful in 2m43s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 37m29s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 2m52s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m43s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 50m50s
Make sure to check the exit code of the docker build and bubble it back up to laconic-so.
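
A minimal Python sketch of that behaviour, assuming the build is driven through `subprocess`; the function and tag names are illustrative, not the real laconic-so code:

```python
import subprocess
import sys


def run_docker_build(context_dir: str, tag: str) -> int:
    # Run the build and return its exit code rather than ignoring it.
    result = subprocess.run(["docker", "build", "-t", tag, context_dir])
    return result.returncode


if __name__ == "__main__":
    rc = run_docker_build(".", "example/webapp:local")
    if rc != 0:
        print(f"docker build failed with exit code {rc}", file=sys.stderr)
    # Bubble the failure back up to the caller.
    sys.exit(rc)
```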

Reviewed-on: #778
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-03-06 18:38:30 +00:00
jonathan@vulcanize.io
2252252072 comment format
All checks were successful
Lint Checks / Run linter (push) Successful in 32s
Lint Checks / Run linter (pull_request) Successful in 27s
Webapp Test / Run webapp test suite (pull_request) Successful in 3m40s
Deploy Test / Run deploy test suite (pull_request) Successful in 6m37s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 7m44s
Smoke Test / Run basic test suite (pull_request) Successful in 3m15s
2024-03-05 20:42:53 +00:00
jonathan@vulcanize.io
c92f15f47c comment format
Some checks failed
Lint Checks / Run linter (push) Failing after 24s
Lint Checks / Run linter (pull_request) Failing after 32s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m13s
Webapp Test / Run webapp test suite (pull_request) Successful in 3m6s
Smoke Test / Run basic test suite (pull_request) Successful in 3m41s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 11m9s
2024-03-05 20:38:09 +00:00
jonathan@vulcanize.io
fee32ec703 copying genesis.json to /data/blast-data for blast
Some checks failed
Lint Checks / Run linter (pull_request) Failing after 25s
Lint Checks / Run linter (push) Failing after 28s
Webapp Test / Run webapp test suite (pull_request) Successful in 2m53s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m55s
Smoke Test / Run basic test suite (pull_request) Successful in 3m42s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 10m48s
2024-03-05 20:32:56 +00:00
fb55c1425e Beginnings of blast stack
All checks were successful
Lint Checks / Run linter (push) Successful in 39s
Lint Checks / Run linter (pull_request) Successful in 38s
Webapp Test / Run webapp test suite (pull_request) Successful in 3m7s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m15s
Smoke Test / Run basic test suite (pull_request) Successful in 3m2s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 8m47s
2024-03-04 15:05:31 -07:00
cc541ac20f Use -slim variant for Dockerfile (#773)
Some checks failed
Lint Checks / Run linter (push) Successful in 44s
Publish / Build and publish (push) Successful in 1m12s
Smoke Test / Run basic test suite (push) Successful in 2m48s
Webapp Test / Run webapp test suite (push) Successful in 5m40s
Deploy Test / Run deploy test suite (push) Successful in 6m23s
Fixturenet-Laconicd-Test / Run a Laconicd fixturenet test (push) Successful in 9m42s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 4s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 35m37s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 5m21s
Database Test / Run database hosting test on kind/k8s (push) Successful in 7m12s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 2m50s
This saves about 1GB of space in the image.

Reviewed-on: #773
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-28 04:37:57 +00:00
10e2311a8b Add timed logging for the webapp build (#771)
All checks were successful
Lint Checks / Run linter (push) Successful in 32s
Publish / Build and publish (push) Successful in 59s
Smoke Test / Run basic test suite (push) Successful in 3m1s
Webapp Test / Run webapp test suite (push) Successful in 4m47s
Deploy Test / Run deploy test suite (push) Successful in 5m40s
Add lots of log and timer output to webapp builds.

Reviewed-on: #771
2024-02-28 00:38:11 +00:00
f32bbf9e48 Merge pull request 'Doc for fetch-containers command' (#772) from dboreham/fetch-containers-doc into main
All checks were successful
Lint Checks / Run linter (push) Successful in 31s
Publish / Build and publish (push) Successful in 1m15s
Webapp Test / Run webapp test suite (push) Successful in 4m9s
Smoke Test / Run basic test suite (push) Successful in 4m3s
Deploy Test / Run deploy test suite (push) Successful in 5m48s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 4m26s
Reviewed-on: #772
2024-02-27 18:45:18 +00:00
0302153162 Doc for fetch-containers command
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 29s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m23s
Webapp Test / Run webapp test suite (pull_request) Successful in 5m1s
Smoke Test / Run basic test suite (pull_request) Successful in 3m39s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 9m49s
2024-02-27 11:44:28 -07:00
01e4437b62 Merge pull request 'Sort order was backwards' (#770) from dboreham/fix-container-age-sort into main
All checks were successful
Lint Checks / Run linter (push) Successful in 30s
Publish / Build and publish (push) Successful in 1m3s
Deploy Test / Run deploy test suite (push) Successful in 3m51s
Webapp Test / Run webapp test suite (push) Successful in 2m58s
Smoke Test / Run basic test suite (push) Successful in 4m53s
Database Test / Run database hosting test on kind/k8s (push) Successful in 7m20s
Reviewed-on: #770
2024-02-27 16:01:17 +00:00
64cec163b3 Sort order was backwards
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 32s
Webapp Test / Run webapp test suite (pull_request) Successful in 3m1s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m34s
Smoke Test / Run basic test suite (pull_request) Successful in 3m39s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 9m0s
2024-02-27 09:00:36 -07:00
170ad71397 fetch-containers-fixes (#769)
Some checks failed
Lint Checks / Run linter (push) Successful in 31s
Publish / Build and publish (push) Successful in 1m24s
Smoke Test / Run basic test suite (push) Has been cancelled
Deploy Test / Run deploy test suite (push) Has been cancelled
Webapp Test / Run webapp test suite (push) Has been cancelled
Reviewed-on: #769
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-02-27 15:53:05 +00:00
da1ff609fe fetch-images command (#768)
All checks were successful
Lint Checks / Run linter (push) Successful in 35s
Publish / Build and publish (push) Successful in 1m33s
Webapp Test / Run webapp test suite (push) Successful in 3m44s
Smoke Test / Run basic test suite (push) Successful in 4m9s
Deploy Test / Run deploy test suite (push) Successful in 7m3s
Implementation of a command to fetch pre-built images from a remote registry, complementing the --push-images option already present on build-containers.

The two subcommands used together allow a stack to be deployed without needing to build its images, provided they have already been built and pushed to the specified container image registry.

This implementation simply picks the newest image with the right name and platform (it matches against the platform Python is running on, so watch out for scenarios where Python is an x86 binary on M1 Macs).
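
A minimal Python sketch of that selection rule, assuming the registry has already been queried into a list of dicts with `name`, `platform`, and `created` fields (the data shape is illustrative, not the actual implementation):

```python
import platform
from datetime import datetime


def pick_image(candidates: list, wanted_name: str):
    # Match against the platform Python itself is running on, as noted above.
    wanted_platform = platform.machine()  # e.g. "x86_64" or "arm64"
    matching = [
        c for c in candidates
        if c["name"] == wanted_name and c["platform"] == wanted_platform
    ]
    # Of the matches, pick the newest one.
    return max(matching, key=lambda c: c["created"]) if matching else None


images = [
    {"name": "cerc/webapp", "platform": platform.machine(),
     "created": datetime(2024, 2, 20)},
    {"name": "cerc/webapp", "platform": platform.machine(),
     "created": datetime(2024, 2, 27)},
]
print(pick_image(images, "cerc/webapp")["created"])  # the newer build wins
```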

Reviewed-on: #768
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-02-27 15:15:08 +00:00
21eb9f036f Add support for pnpm as a webapp build tool. (#767)
All checks were successful
Lint Checks / Run linter (push) Successful in 26s
Publish / Build and publish (push) Successful in 1m13s
Webapp Test / Run webapp test suite (push) Successful in 2m48s
Deploy Test / Run deploy test suite (push) Successful in 3m46s
Smoke Test / Run basic test suite (push) Successful in 5m1s
Fixturenet-Laconicd-Test / Run a Laconicd fixturenet test (push) Successful in 9m43s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 48m33s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 55m22s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m48s
This adds support for auto-detecting pnpm as a build tool for webapps.
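
A minimal Python sketch of lockfile-based auto-detection, assuming the presence of `pnpm-lock.yaml` is what marks a pnpm project; the exact detection used by the stack may differ:

```python
from pathlib import Path


def detect_build_tool(app_dir: str) -> str:
    # Prefer the most specific lockfile that is present.
    root = Path(app_dir)
    if (root / "pnpm-lock.yaml").exists():
        return "pnpm"
    if (root / "yarn.lock").exists():
        return "yarn"
    return "npm"


print(detect_build_tool("."))
```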

Reviewed-on: #767
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-26 23:31:52 +00:00
a0413659f7 Check for existing tag in remote repo before building. (#764)
Some checks failed
Lint Checks / Run linter (push) Successful in 59s
Publish / Build and publish (push) Successful in 44s
Webapp Test / Run webapp test suite (push) Successful in 2m49s
Deploy Test / Run deploy test suite (push) Successful in 5m41s
Smoke Test / Run basic test suite (push) Successful in 4m48s
Fixturenet-Laconicd-Test / Run a Laconicd fixturenet test (push) Successful in 6m26s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Failing after 27m20s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 55m51s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m39s
Database Test / Run database hosting test on kind/k8s (push) Successful in 6m47s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 3m53s
Webapps are meant to be build-once/deploy-many, but we were rebuilding them for every request. This changes that, so that we rebuild only once for each unique ApplicationRecord.

When we push the image, we now tag it according to its ApplicationRecord.

We don't want to use that tag directly in the compose file for the deployment, however, as the deployment needs to be able to adjust to new builds without rewriting the file all the time. Instead, we use a per-deployment unique tag (same as before); we just update which image it references as needed.
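
A minimal Python sketch of the re-pointing idea, assuming plain `docker tag` is used to move the stable per-deployment tag onto whichever ApplicationRecord build should be live; the tag names are hypothetical:

```python
import subprocess


def point_deployment_at_build(app_record_tag: str, deployment_tag: str) -> None:
    # The image is built and tagged once per unique ApplicationRecord; the
    # deployment keeps its own stable tag and is simply re-pointed at the
    # build it should run, so the compose file never needs rewriting.
    subprocess.run(["docker", "tag", app_record_tag, deployment_tag], check=True)


# Example (hypothetical tag names):
# point_deployment_at_build("cerc/webapp:app-record-abc123",
#                           "cerc/webapp:deployment-xyz789")
```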

Reviewed-on: #764
2024-02-24 03:22:49 +00:00
a16fc657bf clarify uniswap urbit readme (#766)
All checks were successful
Lint Checks / Run linter (push) Successful in 29s
Publish / Build and publish (push) Successful in 1m7s
Smoke Test / Run basic test suite (push) Successful in 3m27s
Deploy Test / Run deploy test suite (push) Successful in 4m28s
Webapp Test / Run webapp test suite (push) Successful in 4m30s
Co-authored-by: zramsay <zach@bluecollarcoding.ca>
Reviewed-on: #766
2024-02-24 00:15:53 +00:00
704c42c404 Use a catchall for single page apps. (#763)
All checks were successful
Lint Checks / Run linter (push) Successful in 27s
Publish / Build and publish (push) Successful in 56s
Webapp Test / Run webapp test suite (push) Successful in 2m50s
Deploy Test / Run deploy test suite (push) Successful in 4m1s
Smoke Test / Run basic test suite (push) Successful in 6m18s
This creates a new environment variable, CERC_SINGLE_PAGE_APP, which controls whether a catchall redirection back to / is applied.

If the value is not explicitly set, we try to detect if the page looks like a single-page app.
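
A minimal Python sketch of such a heuristic, assuming a static build directory where a lone top-level `index.html` suggests a single-page app; the real detection may differ:

```python
import os
from pathlib import Path


def looks_like_single_page_app(build_dir: str) -> bool:
    # Heuristic: a single-page app typically ships one index.html and handles
    # routing client-side, so there are no other top-level *.html pages.
    html_files = list(Path(build_dir).glob("*.html"))
    return len(html_files) == 1 and html_files[0].name == "index.html"


def catchall_enabled(build_dir: str) -> bool:
    # An explicit CERC_SINGLE_PAGE_APP setting wins; otherwise use the heuristic.
    explicit = os.environ.get("CERC_SINGLE_PAGE_APP")
    if explicit is not None:
        return explicit.lower() in ("1", "true", "yes")
    return looks_like_single_page_app(build_dir)
```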

Reviewed-on: #763
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-23 20:32:24 +00:00
202f187172 Fix copy/paste error
All checks were successful
Lint Checks / Run linter (push) Successful in 24s
Publish / Build and publish (push) Successful in 1m15s
Webapp Test / Run webapp test suite (push) Successful in 3m9s
Deploy Test / Run deploy test suite (push) Successful in 4m12s
Smoke Test / Run basic test suite (push) Successful in 5m15s
2024-02-23 13:15:37 -07:00
aaed356d32 Simple container image publication (#762)
All checks were successful
Lint Checks / Run linter (push) Successful in 53s
Publish / Build and publish (push) Successful in 1m22s
Deploy Test / Run deploy test suite (push) Successful in 4m23s
Smoke Test / Run basic test suite (push) Successful in 2m57s
Webapp Test / Run webapp test suite (push) Successful in 4m47s
Reviewed-on: #762
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-02-23 19:57:47 +00:00
2af6ffce77 Tweaks for running the container registry in k8s (#760)
All checks were successful
Lint Checks / Run linter (push) Successful in 23s
Publish / Build and publish (push) Successful in 1m30s
Webapp Test / Run webapp test suite (push) Successful in 3m11s
Deploy Test / Run deploy test suite (push) Successful in 4m22s
Smoke Test / Run basic test suite (push) Successful in 4m57s
Minor tweaks for running the container-registry in k8s.  The big change is not requiring --image-registry.

Reviewed-on: #760
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
2024-02-22 21:11:06 +00:00
7bb86cf35e Merge pull request 'Add ARM fixturenet-plugeth test job' (#759) from dboreham/arm-fixturenet-test into main
All checks were successful
Lint Checks / Run linter (push) Successful in 44s
Publish / Build and publish (push) Successful in 1m16s
Smoke Test / Run basic test suite (push) Successful in 3m13s
Webapp Test / Run webapp test suite (push) Successful in 4m59s
Deploy Test / Run deploy test suite (push) Successful in 5m54s
Fixturenet-Eth-Plugeth-Arm-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 43m19s
Reviewed-on: #759
2024-02-22 20:45:26 +00:00
9e0892cb6b No need to start docker now
Some checks failed
Lint Checks / Run linter (pull_request) Successful in 57s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m44s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Failing after 5m41s
Webapp Test / Run webapp test suite (pull_request) Successful in 5m48s
Smoke Test / Run basic test suite (pull_request) Successful in 5m38s
2024-02-22 13:36:00 -07:00
cf9cf6346f Add an arm-specific version of the plugeth fixturenet test 2024-02-22 13:34:37 -07:00
642c0ead0d Add test for two config parameters (#758)
All checks were successful
Lint Checks / Run linter (push) Successful in 49s
Publish / Build and publish (push) Successful in 1m8s
Smoke Test / Run basic test suite (push) Successful in 3m8s
Deploy Test / Run deploy test suite (push) Successful in 4m33s
Webapp Test / Run webapp test suite (push) Successful in 4m23s
Reviewed-on: #758
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-02-22 19:35:55 +00:00
6bd77c893a Even more logging fixes (#757)
All checks were successful
Lint Checks / Run linter (push) Successful in 57s
Publish / Build and publish (push) Successful in 1m40s
Deploy Test / Run deploy test suite (push) Successful in 3m46s
Webapp Test / Run webapp test suite (push) Successful in 4m48s
Smoke Test / Run basic test suite (push) Successful in 5m7s
Hopefully the last one for a bit.

This only outputs the cmdline if log_file is present (i.e., not to stdout). It also fixes a bug where the log_file was not passed on one line.
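
A minimal Python sketch of that logging rule; the function name is illustrative:

```python
def log_command(cmdline: str, log_file=None) -> None:
    # Echo the command line only when a log file is given, so plain
    # stdout runs stay uncluttered.
    if log_file is not None:
        print(f"Running: {cmdline}", file=log_file, flush=True)


# Usage:
# with open("build.log", "a") as f:
#     log_command("docker build -t example .", log_file=f)
```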

Reviewed-on: #757
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-22 01:24:44 +00:00
4a4d48ddb9 Fix error when logging exception. (#756)
All checks were successful
Lint Checks / Run linter (push) Successful in 40s
Publish / Build and publish (push) Successful in 56s
Smoke Test / Run basic test suite (push) Successful in 3m53s
Webapp Test / Run webapp test suite (push) Successful in 4m39s
Deploy Test / Run deploy test suite (push) Successful in 5m33s
Reviewed-on: #756
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-22 00:11:06 +00:00
08438b1cd5 More logging for webapp deployments (#755)
All checks were successful
Lint Checks / Run linter (push) Successful in 50s
Publish / Build and publish (push) Successful in 55s
Smoke Test / Run basic test suite (push) Successful in 2m42s
Webapp Test / Run webapp test suite (push) Successful in 4m55s
Deploy Test / Run deploy test suite (push) Successful in 5m50s
Reviewed-on: #755
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-21 23:48:52 +00:00
9f1dd284a5 Better error logging for registry deployments. (#754)
All checks were successful
Lint Checks / Run linter (push) Successful in 46s
Publish / Build and publish (push) Successful in 50s
Smoke Test / Run basic test suite (push) Successful in 2m57s
Webapp Test / Run webapp test suite (push) Successful in 4m27s
Deploy Test / Run deploy test suite (push) Successful in 5m44s
We were missing errors related to registration; this should fix that.

Reviewed-on: #754
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-21 20:12:52 +00:00
5985242b8c Merge pull request 'Update doc for fixturenet-laconic-loaded stack' (#753) from dboreham/update-laconic-stack-doc into main
All checks were successful
Lint Checks / Run linter (push) Successful in 35s
Publish / Build and publish (push) Successful in 57s
Webapp Test / Run webapp test suite (push) Successful in 3m37s
Deploy Test / Run deploy test suite (push) Successful in 6m24s
Smoke Test / Run basic test suite (push) Successful in 5m23s
Reviewed-on: #753
2024-02-21 17:33:04 +00:00
d4152b7ce3 Update doc for fixturenet-laconic-loaded stack
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 41s
Webapp Test / Run webapp test suite (pull_request) Successful in 3m47s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m52s
Smoke Test / Run basic test suite (pull_request) Successful in 5m51s
K8s Deploy Test / Run deploy test suite on kind/k8s (pull_request) Successful in 9m39s
2024-02-21 10:29:15 -07:00
db4986dcc6 snowballtool-base backend stack (#751)
All checks were successful
Lint Checks / Run linter (push) Successful in 49s
Publish / Build and publish (push) Successful in 1m29s
Deploy Test / Run deploy test suite (push) Successful in 5m12s
Webapp Test / Run webapp test suite (push) Successful in 4m54s
Smoke Test / Run basic test suite (push) Successful in 4m41s
This adds a stack for the backend from snowball/snowballtools-base.

Reviewed-on: #751
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-21 04:45:46 +00:00
65f05ea80c Run a manual build script, if present. (#750)
All checks were successful
Lint Checks / Run linter (push) Successful in 1m13s
Publish / Build and publish (push) Successful in 1m31s
Webapp Test / Run webapp test suite (push) Successful in 4m42s
Deploy Test / Run deploy test suite (push) Successful in 6m23s
Smoke Test / Run basic test suite (push) Successful in 5m12s
If the tree has a 'build-webapp.sh' script, use that.
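
A minimal Python sketch of that check, assuming the script sits at the root of the application tree; the npm fallback is purely for illustration:

```python
import subprocess
from pathlib import Path


def build_webapp(app_dir: str) -> int:
    manual_script = Path(app_dir) / "build-webapp.sh"
    if manual_script.exists():
        # Use the project's own build script when it provides one.
        return subprocess.run(["bash", str(manual_script)], cwd=app_dir).returncode
    # Otherwise fall back to a default build step.
    return subprocess.run(["npm", "run", "build"], cwd=app_dir).returncode
```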

Reviewed-on: #750
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-21 00:20:50 +00:00
01f9fe67ed add Mars v2 interface (#744)
All checks were successful
Lint Checks / Run linter (push) Successful in 39s
Publish / Build and publish (push) Successful in 1m13s
Webapp Test / Run webapp test suite (push) Successful in 4m14s
Deploy Test / Run deploy test suite (push) Successful in 5m0s
Smoke Test / Run basic test suite (push) Successful in 4m45s
Tested on DO with real funds on mainnet

Co-authored-by: zramsay <zach@bluecollarcoding.ca>
Reviewed-on: #744
2024-02-19 19:11:59 +00:00
049ffcff71 Fix test failure
All checks were successful
Lint Checks / Run linter (push) Successful in 43s
Publish / Build and publish (push) Successful in 1m21s
Deploy Test / Run deploy test suite (push) Successful in 5m13s
Webapp Test / Run webapp test suite (push) Successful in 5m0s
Smoke Test / Run basic test suite (push) Successful in 5m2s
Fixturenet-Laconicd-Test / Run a Laconicd fixturenet test (push) Successful in 9m41s
Fixturenet-Eth-Plugeth-Test / Run an Ethereum plugeth fixturenet test (push) Successful in 52m24s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m44s
Database Test / Run database hosting test on kind/k8s (push) Successful in 10m33s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Successful in 4m5s
2024-02-18 12:28:48 -07:00
f5314a979b Install ed to fix CI job
Some checks failed
Lint Checks / Run linter (push) Successful in 47s
Publish / Build and publish (push) Successful in 1m20s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Failing after 3m16s
Webapp Test / Run webapp test suite (push) Successful in 4m31s
Deploy Test / Run deploy test suite (push) Successful in 5m22s
Smoke Test / Run basic test suite (push) Successful in 4m38s
2024-02-18 12:20:01 -07:00
39f4fa4487 Container Registry Stack (#747)
Some checks failed
Lint Checks / Run linter (push) Successful in 42s
Publish / Build and publish (push) Successful in 1m23s
Container Registry Test / Run container registry hosting test on kind/k8s (push) Failing after 1m24s
Webapp Test / Run webapp test suite (push) Successful in 4m15s
Deploy Test / Run deploy test suite (push) Successful in 5m11s
Smoke Test / Run basic test suite (push) Successful in 4m48s
Co-authored-by: David Boreham <david@bozemanpas.com>
Reviewed-on: #747
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-02-18 18:55:55 +00:00
0b0394a940 Use absolute path for the data volume (#749)
All checks were successful
Lint Checks / Run linter (push) Successful in 40s
Publish / Build and publish (push) Successful in 1m26s
Webapp Test / Run webapp test suite (push) Successful in 5m36s
Smoke Test / Run basic test suite (push) Successful in 5m23s
Deploy Test / Run deploy test suite (push) Successful in 6m25s
Database Test / Run database hosting test on kind/k8s (push) Successful in 13m27s
Reviewed-on: #749
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-02-17 14:29:53 +00:00
37b9500483 Support non-tls ingress for kind (#748)
All checks were successful
Lint Checks / Run linter (push) Successful in 39s
Publish / Build and publish (push) Successful in 1m19s
Webapp Test / Run webapp test suite (push) Successful in 4m40s
Deploy Test / Run deploy test suite (push) Successful in 4m58s
Smoke Test / Run basic test suite (push) Successful in 4m44s
Reviewed-on: #748
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-02-17 01:54:30 +00:00
3c3e582939 Minor envsubst improvements. (#746)
All checks were successful
Lint Checks / Run linter (push) Successful in 40s
Publish / Build and publish (push) Successful in 1m16s
Webapp Test / Run webapp test suite (push) Successful in 4m13s
Deploy Test / Run deploy test suite (push) Successful in 5m26s
Smoke Test / Run basic test suite (push) Successful in 4m55s
Minor fixes to envsubst for webapps. `LACONIC_HOSTED_CONFIG_homepage` gets somewhat special treatment: it can be used to replace the homepage in package.json. With react, though, this gets an extra `/`, which we need to remove.
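
A minimal Python sketch of one way to apply that substitution, assuming the value arrives in the `LACONIC_HOSTED_CONFIG_homepage` environment variable and the extra trailing `/` just needs trimming; this is an interpretation, not the actual envsubst code:

```python
import json
import os


def apply_homepage(package_json_path: str) -> None:
    homepage = os.environ.get("LACONIC_HOSTED_CONFIG_homepage")
    if not homepage:
        return
    with open(package_json_path) as f:
        pkg = json.load(f)
    # React adds an extra trailing "/", so normalise it away before writing.
    pkg["homepage"] = homepage.rstrip("/")
    with open(package_json_path, "w") as f:
        json.dump(pkg, f, indent=2)
```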

Reviewed-on: #746
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-16 04:11:09 +00:00
26d265360d Rename workflow file
Some checks failed
Database Test / Run database hosting test on kind/k8s (push) Failing after 9m16s
Lint Checks / Run linter (push) Successful in 1m21s
Publish / Build and publish (push) Successful in 1m24s
Webapp Test / Run webapp test suite (push) Successful in 3m7s
Deploy Test / Run deploy test suite (push) Successful in 5m8s
Smoke Test / Run basic test suite (push) Successful in 3m11s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m51s
2024-02-15 07:39:04 -07:00
f81b78cfbc Update .gitea/workflows/triggers/test-database
All checks were successful
Lint Checks / Run linter (push) Successful in 42s
2024-02-15 14:35:56 +00:00
d9bb6b3588 Test Database Stack (#737)
All checks were successful
Lint Checks / Run linter (push) Successful in 33s
Publish / Build and publish (push) Successful in 1m2s
Webapp Test / Run webapp test suite (push) Successful in 3m5s
Deploy Test / Run deploy test suite (push) Successful in 4m20s
Smoke Test / Run basic test suite (push) Successful in 4m50s
Reviewed-on: #737
2024-02-15 05:26:29 +00:00
b59beb66eb Add simple quick deploy script (#743)
All checks were successful
Lint Checks / Run linter (push) Successful in 30s
Publish / Build and publish (push) Successful in 54s
Webapp Test / Run webapp test suite (push) Successful in 2m39s
Deploy Test / Run deploy test suite (push) Successful in 3m56s
Smoke Test / Run basic test suite (push) Successful in 5m16s
Reviewed-on: #743
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-02-15 05:00:51 +00:00
65d67dba10 Fix k8s and enable it by default on PRs (#742)
All checks were successful
Lint Checks / Run linter (push) Successful in 31s
Publish / Build and publish (push) Successful in 1m25s
Webapp Test / Run webapp test suite (push) Successful in 2m44s
Deploy Test / Run deploy test suite (push) Successful in 4m5s
Smoke Test / Run basic test suite (push) Successful in 5m34s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Successful in 8m29s
Reviewed-on: #742
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-14 23:50:09 +00:00
b22c72e715 For k8s, use provisioner-managed volumes when an absolute host path is not specified. (#741)
Some checks failed
Lint Checks / Run linter (push) Successful in 45s
Publish / Build and publish (push) Successful in 1m22s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Failing after 3m20s
Deploy Test / Run deploy test suite (push) Successful in 5m28s
Webapp Test / Run webapp test suite (push) Successful in 4m28s
Smoke Test / Run basic test suite (push) Successful in 4m58s
In kind, when we bind-mount a host directory it is first mounted into the kind container at /mnt, then into the pod at the desired location.

We accidentally picked this up for full-blown k8s, and were creating volumes at /mnt. This changes the behavior for both kind and regular k8s so that bind mounts are only allowed if a fully-qualified path is specified. If no path is specified at all, a default storageClass is assumed to be present, and the volume is managed by a provisioner.

E.g., for kind, the default provisioner is: https://github.com/rancher/local-path-provisioner

```
stack: test
deploy-to: k8s-kind
config:
  test-variable-1: test-value-1
network:
  ports:
    test:
     - '80'
volumes:
  # this will be bind-mounted to a host-path
  test-data-bind: /srv/data
  # this will be managed by the k8s node
  test-data-auto:
configmaps:
  test-config: ./configmap/test-config
```

Reviewed-on: #741
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-14 21:45:01 +00:00
c9444591f5 Fix default webapp port number. (#740)
All checks were successful
Lint Checks / Run linter (push) Successful in 43s
Publish / Build and publish (push) Successful in 56s
Webapp Test / Run webapp test suite (push) Successful in 3m26s
Smoke Test / Run basic test suite (push) Successful in 3m24s
Deploy Test / Run deploy test suite (push) Successful in 5m9s
Reviewed-on: #740
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-09 01:20:41 +00:00
903f3b10e2 Add support for annotations and labels in spec. (#739)
All checks were successful
Lint Checks / Run linter (push) Successful in 30s
Publish / Build and publish (push) Successful in 1m36s
Webapp Test / Run webapp test suite (push) Successful in 2m38s
Deploy Test / Run deploy test suite (push) Successful in 3m58s
Smoke Test / Run basic test suite (push) Successful in 4m50s
Lint Checks / Run linter (pull_request) Successful in 51s
Webapp Test / Run webapp test suite (pull_request) Successful in 3m12s
Smoke Test / Run basic test suite (pull_request) Successful in 3m36s
Deploy Test / Run deploy test suite (pull_request) Successful in 5m40s
```
stack: webapp-deployer-backend
deploy-to: k8s
annotations:
  foo.bar.annot/{name}: baz
labels:
  a.b.c/{name}.blah: "value"
```

Reviewed-on: #739
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-09 00:11:07 +00:00
72ed2eb91a Fix bad test in tag check. (#738)
All checks were successful
Lint Checks / Run linter (push) Successful in 42s
Publish / Build and publish (push) Successful in 1m11s
Deploy Test / Run deploy test suite (push) Successful in 4m40s
Webapp Test / Run webapp test suite (push) Successful in 3m31s
Smoke Test / Run basic test suite (push) Successful in 5m30s
Reviewed-on: #738
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-08 20:38:41 +00:00
2104eb5f30 Merge pull request 'Add Mars stack' (#725) from zach/mars into main
All checks were successful
Lint Checks / Run linter (push) Successful in 46s
Publish / Build and publish (push) Successful in 1m14s
Webapp Test / Run webapp test suite (push) Successful in 4m34s
Smoke Test / Run basic test suite (push) Successful in 4m18s
Deploy Test / Run deploy test suite (push) Successful in 5m54s
Reviewed-on: #725
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
2024-02-08 20:30:47 +00:00
afd6be3b13 Ping pub (#663)
Some checks failed
Lint Checks / Run linter (push) Successful in 36s
Publish / Build and publish (push) Successful in 1m35s
Webapp Test / Run webapp test suite (push) Successful in 5m6s
Smoke Test / Run basic test suite (push) Successful in 4m8s
Deploy Test / Run deploy test suite (push) Has been cancelled
for #170, revives #190
uses https://github.com/LaconicNetwork/explorer/pull/1

Co-authored-by: zramsay <zach@bluecollarcoding.ca>
Co-authored-by: David Boreham <david@bozemanpass.com>
Reviewed-on: #663
Co-authored-by: zramsay <zramsay@noreply.git.vdb.to>
Co-committed-by: zramsay <zramsay@noreply.git.vdb.to>
2024-02-08 20:13:12 +00:00
f914baa913 Merge branch 'main' into zach/mars
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 57s
Deploy Test / Run deploy test suite (pull_request) Successful in 7m3s
Webapp Test / Run webapp test suite (pull_request) Successful in 5m43s
Smoke Test / Run basic test suite (pull_request) Successful in 6m5s
2024-02-08 19:52:49 +00:00
8be1e684e8 Process environment variables defined in compose files (#736)
Some checks failed
Lint Checks / Run linter (push) Successful in 48s
Publish / Build and publish (push) Successful in 1m38s
K8s Deploy Test / Run deploy test suite on kind/k8s (push) Failing after 4m28s
Deploy Test / Run deploy test suite (push) Successful in 5m8s
Webapp Test / Run webapp test suite (push) Successful in 5m50s
Smoke Test / Run basic test suite (push) Successful in 6m22s
Reviewed-on: #736
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-02-08 19:41:57 +00:00
5d16251ce9 Merge pull request 'Add resource limit options to spec.' (#735) from telackey/limits into main
All checks were successful
Lint Checks / Run linter (push) Successful in 44s
Publish / Build and publish (push) Successful in 1m13s
Webapp Test / Run webapp test suite (push) Successful in 3m14s
Deploy Test / Run deploy test suite (push) Successful in 3m59s
Smoke Test / Run basic test suite (push) Successful in 4m55s
Reviewed-on: #735
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
2024-02-08 16:52:49 +00:00
3309782439 Refactor
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 58s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m7s
Smoke Test / Run basic test suite (pull_request) Successful in 4m11s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m41s
2024-02-08 00:47:46 -06:00
4b3b3478e7 Switch to Docker-style limits
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 51s
Deploy Test / Run deploy test suite (pull_request) Successful in 3m56s
Smoke Test / Run basic test suite (pull_request) Successful in 4m6s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m45s
2024-02-08 00:43:41 -06:00
2a9955055c debug
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 38s
Deploy Test / Run deploy test suite (pull_request) Successful in 2m47s
Webapp Test / Run webapp test suite (pull_request) Successful in 4m21s
Smoke Test / Run basic test suite (pull_request) Successful in 2m42s
2024-02-07 16:56:35 -06:00
8964e1c0fe Add resource limit options to spec. 2024-02-07 16:48:02 -06:00
d2ebb81d77 Tags for undeploy (#734)
All checks were successful
Lint Checks / Run linter (push) Successful in 27s
Publish / Build and publish (push) Successful in 44s
Deploy Test / Run deploy test suite (push) Successful in 2m43s
Webapp Test / Run webapp test suite (push) Successful in 2m35s
Smoke Test / Run basic test suite (push) Successful in 2m36s
```
  --include-tags TEXT             Only include requests with matching tags
                                  (comma-separated).
  --exclude-tags TEXT             Exclude requests with matching tags (comma-
                                  separated).
```

Reviewed-on: #734
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-07 21:45:16 +00:00
4a981d8d2e Fix repo URL (#733)
All checks were successful
Lint Checks / Run linter (push) Successful in 51s
Publish / Build and publish (push) Successful in 1m35s
Webapp Test / Run webapp test suite (push) Successful in 3m24s
Deploy Test / Run deploy test suite (push) Successful in 3m40s
Smoke Test / Run basic test suite (push) Successful in 5m4s
Needs a '/' (http), not ':' (ssh).

Reviewed-on: #733
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-07 18:52:07 +00:00
88a0236ca9 Add the ability to filter deployment requests by tag. (#730)
All checks were successful
Lint Checks / Run linter (push) Successful in 32s
Publish / Build and publish (push) Successful in 1m17s
Deploy Test / Run deploy test suite (push) Successful in 2m58s
Webapp Test / Run webapp test suite (push) Successful in 4m12s
Smoke Test / Run basic test suite (push) Successful in 2m40s
Reviewed-on: #730
2024-02-07 03:12:40 +00:00
937b983ec9 Update links from github.com to git.vdb.to (#732)
All checks were successful
Lint Checks / Run linter (push) Successful in 56s
Publish / Build and publish (push) Successful in 1m24s
Deploy Test / Run deploy test suite (push) Successful in 5m24s
Webapp Test / Run webapp test suite (push) Successful in 4m57s
Smoke Test / Run basic test suite (push) Successful in 4m48s
Update links and references from github.com to git.vdb.to.

Also enable the flake8 lint action in gitea.

Reviewed-on: #732
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-06 22:42:44 +00:00
bfbcfb7904
Volume processing fixes (#729) 2024-02-06 12:32:10 -07:00
3d5ececba5
Remove duplicate plugin paths and resulting extraneous error message (#728) 2024-02-06 11:59:37 -07:00
6848fc33cf
Implement dry run support for k8s deploy (#727) 2024-02-06 07:07:56 -07:00
36bb068983
Add ConfigMap test. (#726)
* Add ConfigMap test.

* eof

* Minor tweak

* Trigger test

---------

Co-authored-by: David Boreham <david@bozemanpass.com>
2024-02-05 14:15:11 -06:00
25a2b70f2c
Fix command in mainnet-eth docs 2024-02-03 18:25:02 -07:00
2fcd416e29
Basic webapp deployer stack. (#722) 2024-02-02 19:05:15 -07:00
6629017d6a
Support other webapp types (react, static). (#721)
* Support other webapp types (react, static).
2024-02-02 18:04:06 -06:00
1c30441000
Add schedule for k8s deploy test 2024-02-01 07:40:20 -07:00
b398050787
Don't include volumes in spec if we don't have any. (#720) 2024-01-31 15:11:32 -06:00
12ec1bec43
Add ConfigMap support for k8s. (#714)
* Minor fixes for deploying with k8s and podman.

* ConfigMap support
2024-01-30 23:09:48 -06:00
62af03077f
Add deployed/error status output to the state file. (#719)
* More status info
* Up default resource limits.
* Need ps
2024-01-30 22:13:45 -06:00
Zach
098567625a
Create README.md 2024-01-30 17:47:56 -05:00
428b05158e
Fix DnsRecord ownership check. (#718)
* Fix DnsRecord ownership check.

* Var names
2024-01-30 13:31:59 -06:00
a750b645b9
Merge Ci test branch fixes (#717) 2024-01-30 11:18:08 -07:00
zramsay
23ee3e19b7 mars: add env vars to docker-compose 2024-01-29 22:44:55 +00:00
zramsay
2d764fc7d0 basic mars stack 2024-01-29 16:00:58 +00:00
b7f215d9bf
k8s test fixes (#713)
* Add cgroup setup, increase test timeouts

* Trigger from test script or CI job changes too
2024-01-28 16:21:39 -07:00
eca52b10b7
Fix copy-paste error 2024-01-25 11:46:50 -07:00
b9128841e4
Switch test back to stock runner 2024-01-25 11:42:07 -07:00
0a302ea555
Add a schedule for fixturenet-eth-plugeth-test 2024-01-25 07:33:44 -07:00
aa0f60baa1
Trigger fixturenet-eth-plugeth-test run 2024-01-25 06:46:18 -07:00
cef73d8de2
Update fixturenet-laconicd-test schedule 2024-01-23 10:32:44 -07:00
7d0f2adb46
Enable scheduled test run (#711) 2024-01-23 09:42:24 -07:00
5fdee25dc1
update laconicd test (#710)
* Remove legacy docker config

* Trigger test run
2024-01-23 09:25:02 -07:00
554f05de87
Fix pip3 error (#709) 2024-01-22 21:06:15 -06:00
b4fbee9b13
Update fixturenet-laconicd-test 2024-01-22 07:43:38 -07:00
f826f50c4d Merge branch 'main' of github.com:cerc-io/stack-orchestrator 2024-01-17 08:27:38 -07:00
b83465767d Set k8s deploy test to manual trigger 2024-01-17 08:27:09 -07:00
50509203d1 Set k8s deploy test to manual trigger 2024-01-17 08:26:51 -07:00
prathamesh0
282e175566
Remove unnecessary hyperlinks and pin image versions (#706)
* Remove invalid dashboard and panel ids from alert rules

* Pin grafana and prometheus versions

* Configure custom grafana server URL
2024-01-17 14:02:10 +05:30
635aa7037b Build test container 2024-01-16 21:15:21 -07:00
9877cfaf85 Update for new runner 2024-01-16 20:08:32 -07:00
c642e5d490 Try different runner 2024-01-16 17:09:58 -07:00
02c49d66f5 Add debug output to check container 2024-01-16 17:07:03 -07:00
90cebdb7a6
Add CI script for k8s deployment test (#705) 2024-01-16 16:16:07 -07:00
1f9653e6f7
Fix kind mode and add k8s deployment test (#704)
* Fix kind mode and add k8s deployment test

* Fix lint errors
2024-01-16 15:55:58 -07:00
0587813dd0
Create next.config.js if it is missing. (#701)
* Create next.config.js if it is missing.

* Add comment.
2024-01-15 12:12:59 -06:00
4b3ea7c30f
Update independent act-runner stack to use custom act as well. (#702)
* Update independent act-runner stack to use custom act as well.

* Remove branches which are not needed or already merged.
2024-01-15 12:10:48 -06:00
db8aec52aa
Pin commit hash of asset list repo in osmosis frontend app (#703) 2024-01-15 16:56:06 +05:30
b83030f63b
Use custom act with gitea. (#700)
* Use custom act with gitea.

* Make sure wget is available

* Fix repo url
2024-01-09 22:53:43 -06:00
prathamesh0
a3eb3c0bb0
Setup basic alerting for watchers in monitoring stack (#698)
* Provision Grafana alert contactpoints and policies for Slack

* Add watcher alert rules

* Update watcher monitoring instructions

* Add listening port flag to node exporter command

* Add reference links
2024-01-08 17:25:30 +05:30
eae2af7ccc
Upgrade watcher versions (#699) 2024-01-08 11:51:54 +05:30
837e443800
Support application removal requests. (#697)
* Support application removal request.

* Git should never prompt when deploying a webapp
2023-12-21 18:05:40 -06:00
prathamesh0
a57b0cdd26
Add a stack for prom node exporter and its dashboard in monitoring stack (#696)
* Add a stack for Prometheus node exporter

* Add node exporter dashboard to monitoring stack
2023-12-21 15:15:03 +05:30
prathamesh0
38622fb33c
[WIP] Use templating for watcher dashboard and add Postgres exporter (#695)
* Add Postgres exporter and its dashboard

* Use templating for watcher dashboard

* Add subgraph related panels to watcher dashboard

* Remove individual watcher dashboards and update instructions
2023-12-21 13:41:36 +05:30
prathamesh0
4a1a46facc
Update monitoring stack with additional dashboards and watcher metrics (#693)
* Include retry jobs and update default refresh intervals

* Add Prometheus blackbox exporter and its dashboard

* Add NodeJS application dashboard

* Allow UI updates

* Update watcher dashboards for upstream and external chain heads

* Update watcher dashboards with watcher config metrics

* Upgrade sushiswap and azimuth watchers

* Removed fixed title size values

* Update instructions

* Update instructions for env config

* Update instructions with setup
2023-12-21 09:26:37 +05:30
Zach
42b92f7e23
use square logo for an urbit tile (#689)
* use square logo for an urbit tile

* bump version, improve info text
2023-12-19 09:50:14 -05:00
def192edab
Update new environment values for Osmosis frontend app (#694)
* Update new env values for Osmosis frontend app

* Use .env.production instead of local
2023-12-18 17:49:45 +05:30
d8357df345
Add image pull secret to pods (#692) 2023-12-15 14:27:45 -07:00
997496b8a5
Update script for new nextjs build output. (#691) 2023-12-14 19:47:30 -06:00
61f2884505
Reduce base image size (first round of improvements) (#690) 2023-12-14 17:46:03 -06:00
27a14737f8
Make the container tag based on the deployment path. (#688) 2023-12-14 09:49:21 -06:00
prathamesh0
b9b758bfdd
Add a stack for running Osmosis frontend app on Urbit (#683)
* Deploy osmosis on Urbit fake ship

* Remove Urbit configuration from existing osmosis stack

* Add a separate stack for Osmosis front end on Urbit

* Run script for renaming build files with bash

* Add environment variables required in urbit osmosis build

* Fix BASEPATH in compose file

* Remove ipfs-glob-host from network config in osmosis readme

* Use laconic branch for osmosis frontend

---------

Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
2023-12-14 17:28:10 +05:30
prathamesh0
9ba410b095
Add a stack for monitoring watchers with prometheus and grafana (#687)
* Setup Prometheus and Grafana for monitoring stack

* Add dashboard for azimuth watchers

* Add a dashboard for sushiswap watcher

* Persist prometheus server data

* Additional metrics in watcher dashboards

* Update dashboards and add for merkl sushiswap watcher

* Add dashboards for remaining azimuth watchers

* Separate out preconfigured watcher dashboards and add instructions

* Keep the empty dashboards dir
2023-12-14 16:59:00 +05:30
1f4eb57069
Add --dry-run option (#686) 2023-12-13 22:56:40 -06:00
88f66a3626
Add deployment update and deploy-webapp-from-registry commands. (#676) 2023-12-13 21:02:34 -06:00
prathamesh0
1ef0b316c6
Expose metrics endpoints for sushiswap and merkl sushiswap watchers (#685) 2023-12-13 14:58:26 +05:30
prathamesh0
232d5618cb
Update instructions in osmosis stack (#684) 2023-12-12 13:54:00 +05:30
fa6b570f4a
Add stack for running osmosis frontend app (#673)
* osmosis FE stack

* chmod

* dont use 3000

* fix for new stack format

* updates

* update osmosis readme

* Update stack.yml

* Update osmosis frontend stack to serve app

* Host osmosis app static build using python server

* Fix mapped ports in deployment for containers

* Update instructions

* Use nginx server to host files and handle page reloads

* Fix typo

---------

Co-authored-by: zramsay <zach@bluecollarcoding.ca>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2023-12-11 14:10:54 +05:30
prathamesh0
f9eb5a4ba8
Refactor to make Urbit setup generic (#682)
* Refactor to make Urbit app deployment script generic

* Rename urbit pod and update instructions

* Add a flag to allow skipping app installation on Urbit

* Make remote Urbit app deployment scripts generic

* Move remote deployment scripts to urbit fixturenet

* Update and use existing kubo pod for Urbit glob hosting
2023-12-08 09:35:00 +05:30
077ea80c70
Add deployment status command and fix k8s output for deployment ps (#679) 2023-12-06 09:27:47 -07:00
15faed00de
Generate a unique deployment id for each deployment (#680)
* Move cluster name generation into a function

* Generate a unique deployment id for each deployment
2023-12-05 22:56:58 -07:00
prathamesh0
6bef0c5b2f
Separate out GQL proxy server from uniswap-urbit stack (#681)
* Separate out uniswap gql proxy in a stack

* Use proxy server from watcher-ts

* Add a flag to enable/disable the proxy server

* Update env configuration for uniswap urbit app stack

* Update stack file for uniswap urbit app stack

* Fix env variables in instructions
2023-12-06 10:41:10 +05:30
prathamesh0
f27da19808
Use IPFS for hosting glob files for Urbit (#677)
* Use IPFS for hosting glob files for Urbit

* Add env configuration for IPFS endpoints to instructions

* Make ship pier dir configurable in remote deployment script

* Update remote deployment script to accept glob hash arg
2023-12-05 15:00:03 +05:30
2dd54892a1
Allow specifying the webapp tag explicitly (#675) 2023-12-04 21:39:16 -06:00
ab0e70ed83
Change path portion of unique cluster name to point to compose file, not argv[0]. (#678) 2023-12-04 13:39:14 -06:00
c319e90ddd
Add a stack for running uniswap frontend on urbit (#670)
* Create uniswap-frontend stack

* Add stack for building uniswap frontend app

* Add a container for Urbit fake ship

* Update with deployment command

* Add a service for uniswap app deployment to urbit

* Use a script to start urbit ship to handle restarts

* Rename stack name to uniswap-urbit-app

* Rename build.sh to build-app.sh and check if build already exists

* Rename stack directory name

* Update uniswap build restart on failure

* Perform uniswap app deployment in the urbit container

* Add steps to create glob for the app

* Tail /dev/null after deployment

* Add steps to install the app to desk

* Host glob files for uniswap

* Update repo branch

* Update readme with command to get urbit password

* Update readme

* Update readme to open urbit web UI

* Expose the port on glob hosting container

* Avoid exposing urbit http port

* Add scripts for installing uniswap on remote urbit instance

* Configure GQL proxy for uniswap app

* Use laconic branch for app repo

* Rename urbit pod for uniswap app deployment

---------

Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2023-12-04 18:39:19 +05:30
2173e7ce6a
If the next version is unsupported, print a big warning and try higher version. (#674) 2023-11-30 12:33:06 -06:00
c19559967d
Add doc for deploy-webapp (#672) 2023-11-29 20:55:14 -07:00
d7093277b4
Use constants (#671) 2023-11-29 20:50:53 -07:00
03a3645b3c
Add --port option to run-webapp. (#667)
* Add --port option to run-webapp

* Fixed merge

* lint
2023-11-29 11:32:28 -06:00
113c0bfbf1
Propagate env file for webapp deployment (#669) 2023-11-28 21:14:02 -07:00
1a069a6816
Use a temp file for the spec file name (#668) 2023-11-28 19:56:12 -07:00
a68cd5d65c
Webapp deploy (#662) 2023-11-27 22:02:16 -07:00
1b94db27c1
Upgrade azimuth watcher release version to 0.1.2 (#666)
* Upgrade azimuth watcher release version

* Fix version for azimuth watcher repo
2023-11-24 14:05:37 +05:30
prathamesh0
9499941891
Increase max connections for Azimuth watcher dbs (#665) 2023-11-23 19:11:02 +05:30
3fefc67e77
Run azimuth contract watcher in active mode (#661)
* Update stack to run azimuth job runner

* Run azimuth watcher in active mode

* Update stack to run job-runners for all watchers

* Update ports in job-runner health checks

* Map metrics ports to host

* Configure historical block processing batch size for Azimuth watcher

* Use deployment command for azimuth stack

---------

Co-authored-by: neeraj <neeraj.rtly@gmail.com>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2023-11-23 15:00:18 +05:30
1a37255c18
Tweak laconicd config to allow setting endpoint port and to make the fixturenet restartable. (#660)
* Endpoint includes port

* Make it restartable

* Don't try to remove the mounted directory

* Make copy of init.sh
2023-11-22 11:31:30 -06:00
87bedde5cb
Support for k8s ingress and tls (#659) 2023-11-21 16:04:36 -07:00
01029cf7aa
Fix for code path that doesn't create a DeploymentContext (#658) 2023-11-21 08:35:31 -07:00
0b87c12c13
Upgrade merkl and sushiswap watcher to v0.1.4 (#657)
* Upgrade merkl and sushi watcher versions

* Set gqlPath to base URL and remove filling start block

* Upgrade watcher versions to 0.1.4
2023-11-21 19:07:09 +05:30
f6624cb33a
Add image push command (#656) 2023-11-20 20:23:55 -07:00
c9c6a0eee3
Changes for remote k8s (#655) 2023-11-20 09:12:57 -07:00
5c80887215
Fix missing tty parameter. (#653) 2023-11-16 12:58:03 -06:00
Ian
80c4b9214b
Merge pull request #643 from cerc-io/iskay/update-optimism
Iskay/update optimism
2023-11-16 12:31:57 -05:00
70529c43e7
Upgrade merkl and sushiswap watcher versions (#654) 2023-11-16 16:27:41 +05:30
1e9d24a8ce
Update webapp.md 2023-11-15 12:52:34 -06:00
9900565714
Update webapp.md 2023-11-15 12:48:58 -06:00
a13f841f34
Update webapp.md 2023-11-15 12:37:30 -06:00
d37f80553d
Add webapp doc (#652) 2023-11-15 12:28:07 -06:00
2059d67dca
Add run-webapp command. (#651) 2023-11-15 10:54:27 -07:00
638fa01649
Support external stack file (#650) 2023-11-14 20:59:48 -07:00
4ae4d3b61d
Print docker container logs in webapp test. (#649) 2023-11-14 17:30:01 -06:00
9687d84468
646: Add error message for webapp startup hang (#647)
This fixes three issues:

1. #644 (build output)
2. #646 (error on startup)
3. automatic env quote handling (related to 2)


For the build output we now have:

```
#################################################################

Built host container for /home/telackey/tmp/iglootools-home with tag:

    cerc/iglootools-home:local

To test locally run:

    docker run -p 3000:3000 cerc/iglootools-home:local
```

For the startup error, the process was hung waiting for the "success" message from the `next generate` output (itself a workaround for a Next.js bug fixed by this PR we submitted: https://github.com/vercel/next.js/pull/58276).

I added a timeout which causes it to wait up to a maximum of _n_ seconds before issuing:

```
ERROR: 'npm run cerc_generate' exceeded CERC_MAX_GENERATE_TIME.
```

On the quoting itself, I plan on adding a new run-webapp command, but I realized I had a decent spot to effect the quote replacement on-the-fly after all, where I am already escaping the values for insertion/replacement into JS.

The "dequoting" can be disabled with `CERC_RETAIN_ENV_QUOTES=true`.
2023-11-14 16:07:26 -06:00
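As an illustration of the generate-timeout behaviour described in #647 above, here is a minimal sketch (not the actual stack-orchestrator code; the standalone-script form and the 60-second default are assumptions) of running `npm run cerc_generate` with an upper time bound taken from `CERC_MAX_GENERATE_TIME`:

```
# Minimal sketch (not the actual implementation): run "npm run cerc_generate"
# and give up after CERC_MAX_GENERATE_TIME seconds instead of hanging forever.
import os
import subprocess
import sys

# Default value is an assumption for illustration only.
max_generate_time = int(os.environ.get("CERC_MAX_GENERATE_TIME", "60"))

try:
    subprocess.run(["npm", "run", "cerc_generate"], timeout=max_generate_time)
except subprocess.TimeoutExpired:
    print("ERROR: 'npm run cerc_generate' exceeded CERC_MAX_GENERATE_TIME.", file=sys.stderr)
    sys.exit(1)
```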
iskay
f088cbb3b0 fix linter errors 2023-11-14 14:38:49 +00:00
f1f618c57a
Don't change the next.js version by default. (#640) 2023-11-13 11:56:04 -06:00
0aca087558
Upgrade release versions for merkl and sushiswap watchers (#642)
* Upgrade merkl-sushiswap-v3-watcher-ts release

* Increase blockDelayInMilliSecs for merkl-sushiswap-v3 watcher

* Upgrade sushiswap-v3-watcher-ts release

* Add sushiswap-v3 watcher to stack list

* Avoid mapping ports that are not required to be exposed
2023-11-13 17:36:37 +05:30
a04730e7ac
Add a merkl-sushiswap-v3 watcher stack (#641)
* Add a merkl-sushiswap-v3 watcher stack

* Remove unrequired image from list
2023-11-13 11:13:55 +05:30
prathamesh0
95e881ba19
Add a sushiswap-v3 watcher stack (#638)
* Add a sushiswap-v3 watcher stack

* Add services for watcher db and server

* Add service for watcher job-runner

* Use 0.0.0.0 for watcher server config

---------

Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
2023-11-13 10:58:55 +05:30
414b887036
Allow setting build tool (npm/yarn) and next.js version. (#639)
* Allow setting build tool (npm/yarn) and next.js version.
2023-11-10 17:44:25 -06:00
iskay
3db443b2bb fix commands import 2023-11-10 20:42:02 +00:00
iskay
1072fc98c3 update fixturenet-optimism 2023-11-10 20:05:22 +00:00
042b413598
Support the case where webpack config is already present next.config.js (#631)
* Support the case where webpack config is already present next.config.js

* Update scripts for experimental-compile/experimental-generate
2023-11-08 23:44:48 -06:00
8384e95049
Add debug commands to test job (#636) 2023-11-08 19:42:19 -07:00
a27cf86748
Add basic k8s test (#635)
* Add CI job

* Add basic k8s test
2023-11-08 19:12:48 -07:00
ce587457d7
Add env var support for k8s (#634) 2023-11-08 17:53:46 -07:00
5e91c2224e
kind test stack (#629) 2023-11-08 01:11:00 -07:00
36e13f7199
Remove test output from tree. (#628) 2023-11-07 23:10:32 -06:00
d9bcc088a8
Enable webapp test in GitHub CI. (#627) 2023-11-07 18:27:08 -06:00
660326f713
Add new build-webapp command and related scripts and containers. (#626)
* Add new build-webapp command and related scripts and containers.
2023-11-07 18:15:04 -06:00
4456e70c93
Rename app -> stack_orchestrator (#625) 2023-11-07 00:06:55 -07:00
e989368793
Add generated kind config (#623) 2023-11-05 23:21:53 -07:00
0f93d30d54
Basic volume support (#622) 2023-11-03 17:02:13 -06:00
fd5779f967
Fix KeyError accessing config. (#620) 2023-10-31 12:29:19 -05:00
Ian
d854dd5c81
Update fixturenet-laconicd.yml 2023-10-30 15:44:46 -04:00
Ian
948f9f4287
Merge pull request #611 from cerc-io/iskay/fixturenet-laconicd-test
add fixturenet-laconicd test
2023-10-30 11:50:33 -04:00
b92d9cd7dd
Update stack README.md to use config directive 2023-10-29 23:16:39 -06:00
86076c7ed8
Fix deployer.exec() (#619) 2023-10-29 22:26:15 -06:00
8cac598679
Split act-runner into its own pod and offer as a distinct stack. (#612)
* Split act-runner into its own pod and offer as a distinct stack.
2023-10-27 13:57:13 -05:00
6130eab5cb
k8s deploy (#614) 2023-10-27 10:19:44 -06:00
Ian
f198f43b3a
add newline 2023-10-27 08:48:44 -04:00
36d1e0eedd add fixturenet-laconicd test 2023-10-26 17:26:42 -04:00
0f5b1a097b
Add plugeth to chain-chunker stack (needed for new verify option). (#610) 2023-10-25 14:47:53 -05:00
20d633f81c
Plugeth-based full mainnet stack. (#592)
* Plugeth-based full mainnet stack.

---------

Co-authored-by: David Boreham <david@bozemanpass.com>
2023-10-25 14:42:52 -05:00
5e36e3e2ae
Turn off long run time test on push/pr unless explicitly triggered (#606) 2023-10-24 22:28:58 -06:00
f7eb8b9a38
Rearrange files (#605) 2023-10-24 22:13:09 -06:00
5b9b12a223
Rename functions to remove compose prefix (#604) 2023-10-24 16:23:28 -06:00
Ian
567dadef7d
update fixturenet-eth test (#600) 2023-10-24 16:21:30 -06:00
052f0df4b0
Fix execute call parameter (#603) 2023-10-24 16:16:57 -06:00
c51671f786
Trigger CI Job after k8s refactor updates 2023-10-24 14:51:04 -06:00
573a19a3b7
k8s refactor (#595) 2023-10-24 14:44:48 -06:00
Zach
fc051265d8
more demo records for the console (#597)
* increase gas

* create six demo records

* print record

* use moon

* typos

* use git.vdb.t

* use the right moon

* ok
2023-10-24 14:25:25 -04:00
Zach
ddebb9c690
bind only to 127.0.0.1 (#598) 2023-10-23 10:51:44 -04:00
Zach
fe6c3f92ed
typo (#596) 2023-10-21 13:37:21 -04:00
Zach
c06be6da81
stack for the laconic.com website (#590)
* website stack

* use a release, update ports

* use deployments feature for website
2023-10-21 13:31:47 -04:00
3291c16466
Update stacks for migrated laconic repos. (#594) 2023-10-20 11:59:06 -05:00
69b071ee7a
Add example GQL queries made in Ponder indexers (#593)
* Add example GQL queries made in Ponder indexers

* Use fixturenet chainId in GQL query
2023-10-19 14:55:07 +05:30
4a90cedeb2
Run multiple Ponder indexers in payment stack (#588)
* Separate ponder indexer and ponder watcher and add second ponder indexer

* Handle review changes

* Update config to point ponder watcher to indexer 2 to indexer 1

* Update Ponder demo

* Use deployed ERC20 contract in second Ponder indexer

* Add order by timestamp in Ponder watcher app entities query

* Upgrade go-nitro version to v0.1.2-ts-port-0.1.9

* Decrease Ponder start block to process contract transfer event at deployment

---------

Co-authored-by: Shreerang Kale <shreerangkale@gmail.com>
2023-10-19 12:26:10 +05:30
Zach
2402220273
updates to payments demo (#566) 2023-10-18 12:23:57 -04:00
Zach
771943864e
build fixturenet payments demo on arm (#564) 2023-10-18 12:05:09 -04:00
prathamesh0
dd4dd519dd
Add a container for ERC20 contract txs in the payments stack (#591)
* Add a container for ERC20 contract txs in the payments stack

* Use erc20-watcher-ts repo in erc20 stack
2023-10-18 17:40:55 +05:30
prathamesh0
3262ebe4ac
Setup ipld-eth-server communicating with a remote Nitro node (#587)
* Use durable store for in-process Nitro node

* Update setup for external go-nitro node

* Add a separate service for ipld-eth-server with remote Nitro node

* Update repo branches / versions

* Wait for external Nitro node endpoint and update instructions

* Update repo branches
2023-10-18 13:51:55 +05:30
8246e3551f
53: Make sure curl and jq are present in the container. (#585) 2023-10-17 13:36:43 -05:00
e26a05f2c7
Add Ponder indexer queries payment config (#586)
* Add support to pass ratesFile in config

* Change branch for ponder to laconic-esm

* Fix repo name for ponder

---------

Co-authored-by: Shreerang Kale <shreerangkale@gmail.com>
2023-10-17 16:21:40 +05:30
f9b102f5fa
Trigger many CI jobs 2023-10-16 09:56:12 -06:00
7ce40331d8
Remove reverse payment proxy from payments stack (#584)
* Remove reverse payment proxy service from payment stack

* Remove run-reverse-payment-proxy.sh

* Remove reverse payment proxy port from readme

---------

Co-authored-by: Shreerang Kale <shreerangkale@gmail.com>
2023-10-16 18:15:15 +05:30
479a7772f3
Trigger CI Job 2023-10-14 16:13:46 -06:00
prathamesh0
4030c0a0d2
Add a separate pod for ipld-eth-server with payments (#583)
* Add a separate pod for ipld-eth-server with payments

* Wait for nitro contracts to become available
2023-10-14 09:47:55 +05:30
ba09043227
Trigger run of fixturenet-eth-plugeth test 2023-10-13 13:56:40 -06:00
99f80ddc7c Trigger fixturenet-eth-plugeth-test CI job 2023-10-13 07:38:27 -06:00
prathamesh0
246d3d8732
Run go-nitro node in process in ipld-eth-server (#575)
* Setup ipld-eth-server to run in-process Nitro node

* Update watcher version in fixturenet-payments stack

* Update upstream nitro multiaddr in watcher setup

* Change RPC query endpoint to ipld-eth-server

* Update Ponder config to pay ipld-eth-server Nitro node

* Separate nitro-rpc-client service and update demo.md

* Remove unnecessary volumes

* Update ipld-eth-server branch

* Fix clean up steps

---------

Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Co-authored-by: Shreerang Kale <shreerangkale@gmail.com>
2023-10-13 15:27:17 +05:30
a68abe5f5d
Update fixturenet-eth-test
Trigger CI job run
2023-10-12 22:42:05 -06:00
c3d87692fa
python setup CI action for arm linux (#578)
* Workaround for missing Python binaries for ARM

* Trigger CI job

* Add Python install workaround to remaining jobs

* Typo
2023-10-12 21:53:47 -06:00
61579f0434
Implement scheme for triggering individual CI jobs (#577)
* Implement scheme for triggering individual CI jobs

* Add missing comment
2023-10-12 20:40:39 -06:00
0bec51e96a
Pay for queries from watcher to indexer mode Ponder apps in payments stack (#573)
* Use ponder in watcher mode and indexer mode separately in payments stack

* Refactor config file and configure env variables for watcher mode

* Update demo.md for payments stack

* Handle review changes

* Setup config to pay for watcher to indexer GQL queries

* Fix config in stack for making payments in watcher ponder app

* Update demo for payment from watcher to indexer mode Ponder apps

* Use laconic-esm branch for ponder

---------

Co-authored-by: Shreerang Kale <shreerangkale@gmail.com>
2023-10-12 14:16:44 +05:30
f4216419c4
Don't error when CERC_GO_AUTH_TOKEN isn't set (#574)
* Don't error when CERC_GO_AUTH_TOKEN isn't set

* conditionally add ags
2023-10-11 21:24:52 -05:00
420b1c292b
Force the stack to be specified (#571)
* Force the stack to be specified

* Fix up test

* Remove test for legacy non-stack deploy
2023-10-10 16:13:29 -06:00
1446e54f31
Tolerate missing plugin functions (#570) 2023-10-10 15:32:07 -06:00
2486003361
Gitea deployment (#568)
* First part of deployments for external repos

* Generate deployment dir

* Create empty config file

* Copy script files into deployment

* Run scripts in deployment

* Refactor

* Integrate external plugins

* Remove debug output
2023-10-09 14:54:55 -06:00
5ec98ee9a1
Add missing container image (#567) 2023-10-09 12:21:54 -06:00
prathamesh0
8c4ed24dfc
Update mobymask-v3 stack (#563) 2023-10-09 10:32:57 +05:30
prathamesh0
9e56f6357d
Update demo instructions in fixturenet-payments stack (#560)
* Update demo instructions

* Add expected payment proxy output logs

* Wait for chain endpoint to be up before starting go-nitro node
2023-10-06 14:36:10 +05:30
prathamesh0
8770b1df86
Upgrade mobymask-ui version in fixturenet-payments stack (#559) 2023-10-05 17:58:34 +05:30
889df76f4f
Use release tag for go-nitro container in payments stack (#558) 2023-10-05 16:43:48 +05:30
5d19c56b0c
Upgrade Nitro version in stack and add nitro-rpc-client CLI (#557)
* Changes required for ponder container and upgrade ts-nitro version

* Fix empty CERC_RELAY_MULTIADDR env variable

* Add curl output for ponder payment channel

* Add `nitro-rpc-client` container in payments stack (#1)

* Add container for nitro-rpc-client

* Add nitro-rpc-client service

* Update nitro-rpc-client container

* Update demo.md in payments stack

---------

Co-authored-by: Shreerang Kale <shreerangkale@gmail.com>

* Update env variables used for go-nitro container

* Pass Nitro chain URL in watcher config

* Update ponder config chainUrl

* Remove curl check in ponder start script

* Upgrade node version to 18 in watcher-ts Dockerfile

* Update ponder section in the demo instructions

---------

Co-authored-by: Shreerang Kale <shreerangkale@gmail.com>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2023-10-05 14:57:47 +05:30
prathamesh0
d57efe87b8
Add demo instructions to fixturenet-payments stack (#556)
* Update remaining references for core repos from github to gitea

* Add demo instructions

* Add demo clean up steps
2023-10-04 12:51:04 +05:30
80b0c07736
Open ports for 2nd geth instance and add missing lcli param. (#555) 2023-10-03 19:52:23 -05:00
6fa3ca2b6d
Update from github.com to git.vdb.to where applicable. (#553)
* Update from github.com to git.vdb.to for many repos.

* Use ipld-eth-server@v1.11.6-statediff-v5 for most stacks

* Specify go-ethereum branch/tag
2023-10-03 13:55:33 -05:00
3c5489681f
Implement deployment config (#554)
* Initial deployment config implementation

* Complete implementation, add test

* Fix funky indentation

* Revert test test
2023-10-03 12:49:15 -06:00
prathamesh0
cf039d9562
Add a fixturenet-payments stack (#540)
* Add a fixturenet-payments stack

* Export the WebSocket port in fixturenet-eth-geth service

* Add container to run a go-nitro node

* Add container to deploy Nitro contracts

* Read contract addresses from a volume when running the Nitro node

* Add a service for Nitro reverse payment proxy

* Expose payment proxy endpoint to be accessible from host

* Map nitro node messaging and payment proxy ports to host

* Use container to deploy Nitro contracts in mobymask-v3 stack

* Use a common contract deployment script from mobymask-v3 stack

* Add MobyMask contract deployment and watcher services

* Fixes for contract deployment and watcher scripts

* Add a container and service for mobymask-snap

* Add MobyMask app service

* Add container and service for a ponder app

* Fix ponder setup and update instructions

* Handle review comments

* Use enablepaidrpcmethods flag in reverse payment proxy server

* Update go-nitro branch

* Fixes for mobymask-v3 stack

---------

Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
2023-10-03 17:40:34 +05:30
Zach
b485a3b8d8
doc on how to add a stack (#552)
* doc on how to add a stack

* commands to run + nits
2023-10-01 14:33:01 -04:00
746 changed files with 87868 additions and 3099 deletions

View File

@@ -1,36 +0,0 @@
name: Fixturenet-Eth-Plugeth-Test
on:
push:
branches: 'ci-test'
# Needed until we can incorporate docker startup into the executor container
env:
DOCKER_HOST: unix:///var/run/dind.sock
jobs:
test:
name: "Run an Ethereum plugeth fixturenet test"
runs-on: ubuntu-latest
steps:
- name: "Clone project repository"
uses: actions/checkout@v3
- name: "Install Python"
uses: cerc-io/setup-python@v4
with:
python-version: '3.8'
- name: "Print Python version"
run: python3 --version
- name: "Install shiv"
run: pip install shiv
- name: "Generate build version file"
run: ./scripts/create_build_tag_file.sh
- name: "Build local shiv package"
run: ./scripts/build_shiv_package.sh
- name: Start dockerd # Also needed until we can incorporate into the executor
run: |
dockerd -H $DOCKER_HOST --userland-proxy=false &
sleep 5
- name: "Run fixturenet-eth tests"
run: ./tests/fixturenet-eth-plugeth/run-test.sh

View File

@@ -0,0 +1,66 @@
name: Fixturenet-Laconicd-Test
on:
push:
branches: '*'
paths:
- '!**'
- '.gitea/workflows/triggers/fixturenet-laconicd-test'
schedule:
- cron: '1 13 * * *'
jobs:
test:
name: "Run Laconicd fixturenet and Laconic CLI tests"
runs-on: ubuntu-latest
steps:
- name: 'Update'
run: apt-get update
- name: 'Setup jq'
run: apt-get install jq -y
- name: 'Check jq'
run: |
which jq
jq --version
- name: "Clone project repository"
uses: actions/checkout@v3
# At present the stock setup-python action fails on Linux/aarch64
# Conditional steps below workaroud this by using deadsnakes for that case only
- name: "Install Python for ARM on Linux"
if: ${{ runner.arch == 'arm64' && runner.os == 'Linux' }}
uses: deadsnakes/action@v3.0.1
with:
python-version: '3.8'
- name: "Install Python cases other than ARM on Linux"
if: ${{ ! (runner.arch == 'arm64' && runner.os == 'Linux') }}
uses: actions/setup-python@v4
with:
python-version: '3.8'
- name: "Print Python version"
run: python3 --version
- name: "Install shiv"
run: pip install shiv==1.0.6
- name: "Generate build version file"
run: ./scripts/create_build_tag_file.sh
- name: "Build local shiv package"
run: ./scripts/build_shiv_package.sh
- name: "Run fixturenet-laconicd tests"
run: ./tests/fixturenet-laconicd/run-test.sh
- name: "Run laconic CLI tests"
run: ./tests/fixturenet-laconicd/run-cli-test.sh
- name: Notify Vulcanize Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
- name: Notify DeepStack Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}

.gitea/workflows/lint.yml
View File

@@ -0,0 +1,37 @@
name: Lint Checks
on:
pull_request:
branches: '*'
push:
branches: '*'
jobs:
test:
name: "Run linter"
runs-on: ubuntu-latest
steps:
- name: "Clone project repository"
uses: actions/checkout@v3
- name: "Install Python"
uses: actions/setup-python@v4
with:
python-version: '3.8'
- name : "Run flake8"
uses: py-actions/flake8@v2
- name: Notify Vulcanize Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
- name: Notify DeepStack Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}

View File

@@ -5,6 +5,8 @@ on:
branches:
- main
- publish-test
paths-ignore:
- '.gitea/workflows/triggers/*'
jobs:
publish:
@@ -18,14 +20,22 @@ jobs:
run: |
build_tag=$(./scripts/create_build_tag_file.sh)
echo "build-tag=v${build_tag}" >> $GITHUB_OUTPUT
- name: "Install Python"
# At present the stock setup-python action fails on Linux/aarch64
# Conditional steps below workaroud this by using deadsnakes for that case only
- name: "Install Python for ARM on Linux"
if: ${{ runner.arch == 'arm64' && runner.os == 'Linux' }}
uses: deadsnakes/action@v3.0.1
with:
python-version: '3.8'
- name: "Install Python cases other than ARM on Linux"
if: ${{ ! (runner.arch == 'arm64' && runner.os == 'Linux') }}
uses: actions/setup-python@v4
with:
python-version: '3.8'
- name: "Print Python version"
run: python3 --version
- name: "Install shiv"
run: pip install shiv
run: pip install shiv==1.0.6
- name: "Build local shiv package"
id: build
run: |
@@ -44,3 +54,19 @@ jobs:
# Hack using endsWith to workaround Gitea sometimes sending "publish-test" vs "refs/heads/publish-test"
draft: ${{ endsWith('publish-test', github.ref ) }}
files: ./laconic-so
- name: Notify Vulcanize Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
- name: Notify DeepStack Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}

View File

@@ -0,0 +1,69 @@
name: Container Registry Test
on:
push:
branches: '*'
paths:
- '!**'
- '.gitea/workflows/triggers/test-container-registry'
- '.gitea/workflows/test-container-registry.yml'
- 'tests/container-registry/run-test.sh'
schedule: # Note: coordinate with other tests to not overload runners at the same time of day
- cron: '6 19 * * *'
jobs:
test:
name: "Run contaier registry hosting test on kind/k8s"
runs-on: ubuntu-22.04
steps:
- name: "Clone project repository"
uses: actions/checkout@v3
# At present the stock setup-python action fails on Linux/aarch64
# Conditional steps below workaroud this by using deadsnakes for that case only
- name: "Install Python for ARM on Linux"
if: ${{ runner.arch == 'arm64' && runner.os == 'Linux' }}
uses: deadsnakes/action@v3.0.1
with:
python-version: '3.8'
- name: "Install Python cases other than ARM on Linux"
if: ${{ ! (runner.arch == 'arm64' && runner.os == 'Linux') }}
uses: actions/setup-python@v4
with:
python-version: '3.8'
- name: "Print Python version"
run: python3 --version
- name: "Install shiv"
run: pip install shiv==1.0.6
- name: "Generate build version file"
run: ./scripts/create_build_tag_file.sh
- name: "Build local shiv package"
run: ./scripts/build_shiv_package.sh
- name: "Check cgroups version"
run: mount | grep cgroup
- name: "Install kind"
run: ./tests/scripts/install-kind.sh
- name: "Install Kubectl"
run: ./tests/scripts/install-kubectl.sh
- name: "Install ed" # Only needed until we remove the need to edit the spec file
run: apt update && apt install -y ed
- name: "Run container registry deployment test"
run: |
source /opt/bash-utils/cgroup-helper.sh
join_cgroup
./tests/container-registry/run-test.sh
- name: Notify Vulcanize Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
- name: Notify DeepStack Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}

View File

@@ -0,0 +1,67 @@
name: Database Test
on:
push:
branches: '*'
paths:
- '!**'
- '.gitea/workflows/triggers/test-database'
- '.gitea/workflows/test-database.yml'
- 'tests/database/run-test.sh'
schedule: # Note: coordinate with other tests to not overload runners at the same time of day
- cron: '5 18 * * *'
jobs:
test:
name: "Run database hosting test on kind/k8s"
runs-on: ubuntu-22.04
steps:
- name: "Clone project repository"
uses: actions/checkout@v3
# At present the stock setup-python action fails on Linux/aarch64
# Conditional steps below workaroud this by using deadsnakes for that case only
- name: "Install Python for ARM on Linux"
if: ${{ runner.arch == 'arm64' && runner.os == 'Linux' }}
uses: deadsnakes/action@v3.0.1
with:
python-version: '3.8'
- name: "Install Python cases other than ARM on Linux"
if: ${{ ! (runner.arch == 'arm64' && runner.os == 'Linux') }}
uses: actions/setup-python@v4
with:
python-version: '3.8'
- name: "Print Python version"
run: python3 --version
- name: "Install shiv"
run: pip install shiv==1.0.6
- name: "Generate build version file"
run: ./scripts/create_build_tag_file.sh
- name: "Build local shiv package"
run: ./scripts/build_shiv_package.sh
- name: "Check cgroups version"
run: mount | grep cgroup
- name: "Install kind"
run: ./tests/scripts/install-kind.sh
- name: "Install Kubectl"
run: ./tests/scripts/install-kubectl.sh
- name: "Run database deployment test"
run: |
source /opt/bash-utils/cgroup-helper.sh
join_cgroup
./tests/database/run-test.sh
- name: Notify Vulcanize Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
- name: Notify DeepStack Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}

View File

@@ -7,10 +7,9 @@ on:
branches:
- main
- ci-test
paths-ignore:
- '.gitea/workflows/triggers/*'
# Needed until we can incorporate docker startup into the executor container
env:
DOCKER_HOST: unix:///var/run/dind.sock
jobs:
test:
@@ -19,21 +18,41 @@ jobs:
steps:
- name: "Clone project repository"
uses: actions/checkout@v3
- name: "Install Python"
# At present the stock setup-python action fails on Linux/aarch64
# Conditional steps below workaroud this by using deadsnakes for that case only
- name: "Install Python for ARM on Linux"
if: ${{ runner.arch == 'arm64' && runner.os == 'Linux' }}
uses: deadsnakes/action@v3.0.1
with:
python-version: '3.8'
- name: "Install Python cases other than ARM on Linux"
if: ${{ ! (runner.arch == 'arm64' && runner.os == 'Linux') }}
uses: actions/setup-python@v4
with:
python-version: '3.8'
- name: "Print Python version"
run: python3 --version
- name: "Install shiv"
run: pip install shiv
run: pip install shiv==1.0.6
- name: "Generate build version file"
run: ./scripts/create_build_tag_file.sh
- name: "Build local shiv package"
run: ./scripts/build_shiv_package.sh
- name: Start dockerd # Also needed until we can incorporate into the executor
run: |
dockerd -H $DOCKER_HOST --userland-proxy=false &
sleep 5
- name: "Run deploy tests"
run: ./tests/deploy/run-deploy-test.sh
- name: Notify Vulcanize Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
- name: Notify DeepStack Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}

View File

@@ -0,0 +1,58 @@
name: External Stack Test
on:
push:
branches: '*'
paths:
- '!**'
- '.gitea/workflows/triggers/test-external-stack'
- '.gitea/workflows/test-external-stack.yml'
- 'tests/external-stack/run-test.sh'
schedule: # Note: coordinate with other tests to not overload runners at the same time of day
- cron: '8 19 * * *'
jobs:
test:
name: "Run external stack test suite"
runs-on: ubuntu-latest
steps:
- name: "Clone project repository"
uses: actions/checkout@v3
# At present the stock setup-python action fails on Linux/aarch64
# Conditional steps below workaroud this by using deadsnakes for that case only
- name: "Install Python for ARM on Linux"
if: ${{ runner.arch == 'arm64' && runner.os == 'Linux' }}
uses: deadsnakes/action@v3.0.1
with:
python-version: '3.8'
- name: "Install Python cases other than ARM on Linux"
if: ${{ ! (runner.arch == 'arm64' && runner.os == 'Linux') }}
uses: actions/setup-python@v4
with:
python-version: '3.8'
- name: "Print Python version"
run: python3 --version
- name: "Install shiv"
run: pip install shiv==1.0.6
- name: "Generate build version file"
run: ./scripts/create_build_tag_file.sh
- name: "Build local shiv package"
run: ./scripts/build_shiv_package.sh
- name: "Run external stack tests"
run: ./tests/external-stack/run-test.sh
- name: Notify Vulcanize Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
- name: Notify DeepStack Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}

View File

@@ -0,0 +1,69 @@
name: K8s Deploy Test
on:
pull_request:
branches: '*'
push:
branches: '*'
paths:
- '!**'
- '.gitea/workflows/triggers/test-k8s-deploy'
- '.gitea/workflows/test-k8s-deploy.yml'
- 'tests/k8s-deploy/run-deploy-test.sh'
schedule: # Note: coordinate with other tests to not overload runners at the same time of day
- cron: '3 15 * * *'
jobs:
test:
name: "Run deploy test suite on kind/k8s"
runs-on: ubuntu-22.04
steps:
- name: "Clone project repository"
uses: actions/checkout@v3
# At present the stock setup-python action fails on Linux/aarch64
# Conditional steps below workaroud this by using deadsnakes for that case only
- name: "Install Python for ARM on Linux"
if: ${{ runner.arch == 'arm64' && runner.os == 'Linux' }}
uses: deadsnakes/action@v3.0.1
with:
python-version: '3.8'
- name: "Install Python cases other than ARM on Linux"
if: ${{ ! (runner.arch == 'arm64' && runner.os == 'Linux') }}
uses: actions/setup-python@v4
with:
python-version: '3.8'
- name: "Print Python version"
run: python3 --version
- name: "Install shiv"
run: pip install shiv==1.0.6
- name: "Generate build version file"
run: ./scripts/create_build_tag_file.sh
- name: "Build local shiv package"
run: ./scripts/build_shiv_package.sh
- name: "Check cgroups version"
run: mount | grep cgroup
- name: "Install kind"
run: ./tests/scripts/install-kind.sh
- name: "Install Kubectl"
run: ./tests/scripts/install-kubectl.sh
- name: "Run k8s deployment test"
run: |
source /opt/bash-utils/cgroup-helper.sh
join_cgroup
./tests/k8s-deploy/run-deploy-test.sh
- name: Notify Vulcanize Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
- name: Notify DeepStack Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}

View File

@@ -0,0 +1,69 @@
name: K8s Deployment Control Test
on:
pull_request:
branches: '*'
push:
branches: '*'
paths:
- '!**'
- '.gitea/workflows/triggers/test-k8s-deployment-control'
- '.gitea/workflows/test-k8s-deployment-control.yml'
- 'tests/k8s-deployment-control/run-test.sh'
schedule: # Note: coordinate with other tests to not overload runners at the same time of day
- cron: '3 30 * * *'
jobs:
test:
name: "Run deployment control suite on kind/k8s"
runs-on: ubuntu-22.04
steps:
- name: "Clone project repository"
uses: actions/checkout@v3
# At present the stock setup-python action fails on Linux/aarch64
# Conditional steps below workaroud this by using deadsnakes for that case only
- name: "Install Python for ARM on Linux"
if: ${{ runner.arch == 'arm64' && runner.os == 'Linux' }}
uses: deadsnakes/action@v3.0.1
with:
python-version: '3.8'
- name: "Install Python cases other than ARM on Linux"
if: ${{ ! (runner.arch == 'arm64' && runner.os == 'Linux') }}
uses: actions/setup-python@v4
with:
python-version: '3.8'
- name: "Print Python version"
run: python3 --version
- name: "Install shiv"
run: pip install shiv==1.0.6
- name: "Generate build version file"
run: ./scripts/create_build_tag_file.sh
- name: "Build local shiv package"
run: ./scripts/build_shiv_package.sh
- name: "Check cgroups version"
run: mount | grep cgroup
- name: "Install kind"
run: ./tests/scripts/install-kind.sh
- name: "Install Kubectl"
run: ./tests/scripts/install-kubectl.sh
- name: "Run k8s deployment control test"
run: |
source /opt/bash-utils/cgroup-helper.sh
join_cgroup
./tests/k8s-deployment-control/run-test.sh
- name: Notify Vulcanize Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
- name: Notify DeepStack Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}

View File

@@ -0,0 +1,59 @@
name: Webapp Test
on:
pull_request:
branches: '*'
push:
branches:
- main
- ci-test
paths-ignore:
- '.gitea/workflows/triggers/*'
jobs:
test:
name: "Run webapp test suite"
runs-on: ubuntu-latest
steps:
- name: "Clone project repository"
uses: actions/checkout@v3
# At present the stock setup-python action fails on Linux/aarch64
# Conditional steps below workaroud this by using deadsnakes for that case only
- name: "Install Python for ARM on Linux"
if: ${{ runner.arch == 'arm64' && runner.os == 'Linux' }}
uses: deadsnakes/action@v3.0.1
with:
python-version: '3.8'
- name: "Install Python cases other than ARM on Linux"
if: ${{ ! (runner.arch == 'arm64' && runner.os == 'Linux') }}
uses: actions/setup-python@v4
with:
python-version: '3.8'
- name: "Print Python version"
run: python3 --version
- name: "Install shiv"
run: pip install shiv==1.0.6
- name: "Generate build version file"
run: ./scripts/create_build_tag_file.sh
- name: "Build local shiv package"
run: ./scripts/build_shiv_package.sh
- name: "Install wget" # 20240109 - Only needed until the executors are updated.
run: apt update && apt install -y wget
- name: "Run webapp tests"
run: ./tests/webapp-test/run-webapp-test.sh
- name: Notify Vulcanize Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
- name: Notify DeepStack Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}

View File

@@ -7,10 +7,9 @@ on:
branches:
- main
- ci-test
paths-ignore:
- '.gitea/workflows/triggers/*'
# Needed until we can incorporate docker startup into the executor container
env:
DOCKER_HOST: unix:///var/run/dind.sock
jobs:
test:
@@ -19,22 +18,41 @@ jobs:
steps:
- name: "Clone project repository"
uses: actions/checkout@v3
- name: "Install Python"
# At present the stock setup-python action fails on Linux/aarch64
# Conditional steps below workaroud this by using deadsnakes for that case only
- name: "Install Python for ARM on Linux"
if: ${{ runner.arch == 'arm64' && runner.os == 'Linux' }}
uses: deadsnakes/action@v3.0.1
with:
python-version: '3.8'
- name: "Install Python cases other than ARM on Linux"
if: ${{ ! (runner.arch == 'arm64' && runner.os == 'Linux') }}
uses: actions/setup-python@v4
with:
python-version: '3.8'
- name: "Print Python version"
run: python3 --version
- name: "Install shiv"
run: pip install shiv
run: pip install shiv==1.0.6
- name: "Generate build version file"
run: ./scripts/create_build_tag_file.sh
- name: "Build local shiv package"
run: ./scripts/build_shiv_package.sh
- name: Start dockerd # Also needed until we can incorporate into the executor
run: |
dockerd -H $DOCKER_HOST --userland-proxy=false &
sleep 5
- name: "Run smoke tests"
run: ./tests/smoke-test/run-smoke-test.sh
- name: Notify Vulcanize Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.VULCANIZE_SLACK_CI_ALERTS }}
- name: Notify DeepStack Slack on CI failure
if: ${{ always() && github.ref_name == 'main' }}
uses: ravsamhq/notify-slack-action@v2
with:
status: ${{ job.status }}
notify_when: 'failure'
env:
SLACK_WEBHOOK_URL: ${{ secrets.DEEPSTACK_SLACK_CI_ALERTS }}

View File

@@ -0,0 +1,10 @@
Change this file to trigger running the fixturenet-laconicd-test CI job
Trigger
Trigger
Trigger
Trigger
Trigger
Trigger
Trigger
Trigger
Trigger

View File

@@ -0,0 +1 @@
Change this file to trigger running the test-container-registry CI job

View File

@@ -0,0 +1,2 @@
Change this file to trigger running the test-database CI job
Trigger test run

View File

@@ -0,0 +1,2 @@
Change this file to trigger running the external-stack CI job
trigger

View File

@@ -0,0 +1,2 @@
Change this file to trigger running the test-k8s-deploy CI job
Trigger test on PR branch

View File

@@ -1,17 +1,15 @@
name: Fixturenet-Eth-Test
name: Fixturenet-Eth Test
on:
push:
branches: 'ci-test'
# Needed until we can incorporate docker startup into the executor container
env:
DOCKER_HOST: unix:///var/run/dind.sock
branches: '*'
paths:
- '!**'
- '.github/workflows/triggers/fixturenet-eth-test'
jobs:
test:
name: "Run an Ethereum fixturenet test"
name: "Run fixturenet-eth test suite"
runs-on: ubuntu-latest
steps:
- name: "Clone project repository"
@@ -28,10 +26,5 @@ jobs:
run: ./scripts/create_build_tag_file.sh
- name: "Build local shiv package"
run: ./scripts/build_shiv_package.sh
- name: Start dockerd # Also needed until we can incorporate into the executor
run: |
dockerd -H $DOCKER_HOST --userland-proxy=false &
sleep 5
- name: "Run fixturenet-eth tests"
run: ./tests/fixturenet-eth/run-test.sh

View File

@@ -0,0 +1,30 @@
name: Fixturenet-Laconicd Test
on:
push:
branches: '*'
paths:
- '!**'
- '.github/workflows/triggers/fixturenet-laconicd-test'
jobs:
test:
name: "Run fixturenet-laconicd test suite"
runs-on: ubuntu-latest
steps:
- name: "Clone project repository"
uses: actions/checkout@v3
- name: "Install Python"
uses: actions/setup-python@v4
with:
python-version: '3.8'
- name: "Print Python version"
run: python3 --version
- name: "Install shiv"
run: pip install shiv
- name: "Generate build version file"
run: ./scripts/create_build_tag_file.sh
- name: "Build local shiv package"
run: ./scripts/build_shiv_package.sh
- name: "Run fixturenet-laconicd tests"
run: ./tests/fixturenet-laconicd/run-test.sh

.github/workflows/test-webapp.yml
View File

@@ -0,0 +1,29 @@
name: Webapp Test
on:
pull_request:
branches: '*'
push:
branches: '*'
jobs:
test:
name: "Run webapp test suite"
runs-on: ubuntu-latest
steps:
- name: "Clone project repository"
uses: actions/checkout@v3
- name: "Install Python"
uses: actions/setup-python@v4
with:
python-version: '3.8'
- name: "Print Python version"
run: python3 --version
- name: "Install shiv"
run: pip install shiv
- name: "Generate build version file"
run: ./scripts/create_build_tag_file.sh
- name: "Build local shiv package"
run: ./scripts/build_shiv_package.sh
- name: "Run webapp tests"
run: ./tests/webapp-test/run-webapp-test.sh

View File

@@ -0,0 +1,2 @@
Change this file to trigger running the fixturenet-eth-test CI job

View File

@@ -0,0 +1,3 @@
Change this file to trigger running the fixturenet-laconicd-test CI job
trigger

.gitignore
View File

@@ -6,5 +6,5 @@ laconic_stack_orchestrator.egg-info
__pycache__
*~
package
app/data/build_tag.txt
build
stack_orchestrator/data/build_tag.txt
/build

View File

@@ -29,10 +29,10 @@ chmod +x ~/.docker/cli-plugins/docker-compose
Next decide on a directory where you would like to put the stack-orchestrator program. Typically this would be
a "user" binary directory such as `~/bin` or perhaps `/usr/local/laconic` or possibly just the current working directory.
Now, having selected that directory, download the latest release from [this page](https://github.com/cerc-io/stack-orchestrator/tags) into it (we're using `~/bin` below for concreteness but edit to suit if you selected a different directory). Also be sure that the destination directory exists and is writable:
Now, having selected that directory, download the latest release from [this page](https://git.vdb.to/cerc-io/stack-orchestrator/tags) into it (we're using `~/bin` below for concreteness but edit to suit if you selected a different directory). Also be sure that the destination directory exists and is writable:
```bash
curl -L -o ~/bin/laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
curl -L -o ~/bin/laconic-so https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
```
Give it execute permissions:
@@ -52,7 +52,7 @@ Version: 1.1.0-7a607c2-202304260513
Save the distribution url to `~/.laconic-so/config.yml`:
```bash
mkdir ~/.laconic-so
echo "distribution-url: https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so" > ~/.laconic-so/config.yml"
echo "distribution-url: https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so" > ~/.laconic-so/config.yml
```
### Update
@@ -64,12 +64,12 @@ laconic-so update
## Usage
The various [stacks](/app/data/stacks) each contain instructions for running different stacks based on your use case. For example:
The various [stacks](/stack_orchestrator/data/stacks) each contain instructions for running different stacks based on your use case. For example:
- [self-hosted Gitea](/app/data/stacks/build-support)
- [an Optimism Fixturenet](/app/data/stacks/fixturenet-optimism)
- [laconicd with console and CLI](app/data/stacks/fixturenet-laconic-loaded)
- [kubo (IPFS)](app/data/stacks/kubo)
- [self-hosted Gitea](/stack_orchestrator/data/stacks/build-support)
- [an Optimism Fixturenet](/stack_orchestrator/data/stacks/fixturenet-optimism)
- [laconicd with console and CLI](stack_orchestrator/data/stacks/fixturenet-laconic-loaded)
- [kubo (IPFS)](stack_orchestrator/data/stacks/kubo)
## Contributing

View File

@@ -1,143 +0,0 @@
# Copyright © 2022, 2023 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http:#www.gnu.org/licenses/>.
# Builds or pulls containers for the system components
# env vars:
# CERC_REPO_BASE_DIR defaults to ~/cerc
# TODO: display the available list of containers; allow re-build of either all or specific containers
import os
import sys
from decouple import config
import subprocess
import click
import importlib.resources
from pathlib import Path
from app.util import include_exclude_check, get_parsed_stack_config
from app.base import get_npm_registry_url
# TODO: find a place for this
# epilog="Config provided either in .env or settings.ini or env vars: CERC_REPO_BASE_DIR (defaults to ~/cerc)"
@click.command()
@click.option('--include', help="only build these containers")
@click.option('--exclude', help="don\'t build these containers")
@click.option("--force-rebuild", is_flag=True, default=False, help="Override dependency checking -- always rebuild")
@click.option("--extra-build-args", help="Supply extra arguments to build")
@click.pass_context
def command(ctx, include, exclude, force_rebuild, extra_build_args):
'''build the set of containers required for a complete stack'''
quiet = ctx.obj.quiet
verbose = ctx.obj.verbose
dry_run = ctx.obj.dry_run
debug = ctx.obj.debug
local_stack = ctx.obj.local_stack
stack = ctx.obj.stack
continue_on_error = ctx.obj.continue_on_error
# See: https://stackoverflow.com/questions/25389095/python-get-path-of-root-project-structure
container_build_dir = Path(__file__).absolute().parent.joinpath("data", "container-build")
if local_stack:
dev_root_path = os.getcwd()[0:os.getcwd().rindex("stack-orchestrator")]
print(f'Local stack dev_root_path (CERC_REPO_BASE_DIR) overridden to: {dev_root_path}')
else:
dev_root_path = os.path.expanduser(config("CERC_REPO_BASE_DIR", default="~/cerc"))
if not quiet:
print(f'Dev Root is: {dev_root_path}')
if not os.path.isdir(dev_root_path):
print('Dev root directory doesn\'t exist, creating')
# See: https://stackoverflow.com/a/20885799/1701505
from app import data
with importlib.resources.open_text(data, "container-image-list.txt") as container_list_file:
all_containers = container_list_file.read().splitlines()
containers_in_scope = []
if stack:
stack_config = get_parsed_stack_config(stack)
containers_in_scope = stack_config['containers']
else:
containers_in_scope = all_containers
if verbose:
print(f'Containers: {containers_in_scope}')
if stack:
print(f"Stack: {stack}")
# TODO: make this configurable
container_build_env = {
"CERC_NPM_REGISTRY_URL": get_npm_registry_url(),
"CERC_GO_AUTH_TOKEN": config("CERC_GO_AUTH_TOKEN", default=""),
"CERC_NPM_AUTH_TOKEN": config("CERC_NPM_AUTH_TOKEN", default=""),
"CERC_REPO_BASE_DIR": dev_root_path,
"CERC_CONTAINER_BASE_DIR": container_build_dir,
"CERC_HOST_UID": f"{os.getuid()}",
"CERC_HOST_GID": f"{os.getgid()}",
"DOCKER_BUILDKIT": config("DOCKER_BUILDKIT", default="0")
}
container_build_env.update({"CERC_SCRIPT_DEBUG": "true"} if debug else {})
container_build_env.update({"CERC_FORCE_REBUILD": "true"} if force_rebuild else {})
container_build_env.update({"CERC_CONTAINER_EXTRA_BUILD_ARGS": extra_build_args} if extra_build_args else {})
docker_host_env = os.getenv("DOCKER_HOST")
if docker_host_env:
container_build_env.update({"DOCKER_HOST": docker_host_env})
def process_container(container):
if not quiet:
print(f"Building: {container}")
build_dir = os.path.join(container_build_dir, container.replace("/", "-"))
build_script_filename = os.path.join(build_dir, "build.sh")
if verbose:
print(f"Build script filename: {build_script_filename}")
if os.path.exists(build_script_filename):
build_command = build_script_filename
else:
if verbose:
print(f"No script file found: {build_script_filename}, using default build script")
repo_dir = container.split('/')[1]
# TODO: make this less of a hack -- should be specified in some metadata somewhere
# Check if we have a repo for this container. If not, set the context dir to the container-build subdir
repo_full_path = os.path.join(dev_root_path, repo_dir)
repo_dir_or_build_dir = repo_full_path if os.path.exists(repo_full_path) else build_dir
build_command = os.path.join(container_build_dir, "default-build.sh") + f" {container}:local {repo_dir_or_build_dir}"
if not dry_run:
if verbose:
print(f"Executing: {build_command} with environment: {container_build_env}")
build_result = subprocess.run(build_command, shell=True, env=container_build_env)
if verbose:
print(f"Return code is: {build_result.returncode}")
if build_result.returncode != 0:
print(f"Error running build for {container}")
if not continue_on_error:
print("FATAL Error: container build failed and --continue-on-error not set, exiting")
sys.exit(1)
else:
print("****** Container Build Error, continuing because --continue-on-error is set")
else:
print("Skipped")
for container in containers_in_scope:
if include_exclude_check(container, include, exclude):
process_container(container)
else:
if verbose:
print(f"Excluding: {container}")

View File

@@ -1,13 +0,0 @@
version: "3.2"
# See: https://docs.ipfs.tech/install/run-ipfs-inside-docker/#set-up
services:
ipfs:
image: ipfs/kubo:master-2023-02-20-714a968
restart: always
volumes:
- ./ipfs/import:/import
- ./ipfs/data:/data/ipfs
ports:
- "0.0.0.0:8080:8080"
- "0.0.0.0:4001:4001"
- "0.0.0.0:5001:5001"

View File

@@ -1,304 +0,0 @@
version: '3.2'
services:
# Starts the PostgreSQL database for watchers
watcher-db:
restart: unless-stopped
image: postgres:14-alpine
environment:
- POSTGRES_USER=vdbm
- POSTGRES_MULTIPLE_DATABASES=azimuth-watcher,azimuth-watcher-job-queue,censures-watcher,censures-watcher-job-queue,claims-watcher,claims-watcher-job-queue,conditional-star-release-watcher,conditional-star-release-watcher-job-queue,delegated-sending-watcher,delegated-sending-watcher-job-queue,ecliptic-watcher,ecliptic-watcher-job-queue,linear-star-release-watcher,linear-star-release-watcher-job-queue,polls-watcher,polls-watcher-job-queue
- POSTGRES_EXTENSION=azimuth-watcher-job-queue:pgcrypto,censures-watcher-job-queue:pgcrypto,claims-watcher-job-queue:pgcrypto,conditional-star-release-watcher-job-queue:pgcrypto,delegated-sending-watcher-job-queue:pgcrypto,ecliptic-watcher-job-queue:pgcrypto,linear-star-release-watcher-job-queue:pgcrypto,polls-watcher-job-queue:pgcrypto,
- POSTGRES_PASSWORD=password
volumes:
- ../config/postgresql/multiple-postgressql-databases.sh:/docker-entrypoint-initdb.d/multiple-postgressql-databases.sh
- watcher_db_data:/var/lib/postgresql/data
ports:
- "0.0.0.0:15432:5432"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "5432"]
interval: 20s
timeout: 5s
retries: 15
start_period: 10s
# Starts the azimuth-watcher server
azimuth-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/azimuth-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/azimuth-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/azimuth-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/azimuth-watcher/start-server.sh
ports:
- "3001"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3001"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the censures-watcher server
censures-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/censures-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/censures-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/censures-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/censures-watcher/start-server.sh
ports:
- "3002"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3002"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the claims-watcher server
claims-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/claims-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/claims-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/claims-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/claims-watcher/start-server.sh
ports:
- "3003"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3003"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the conditional-star-release-watcher server
conditional-star-release-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/conditional-star-release-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/conditional-star-release-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/conditional-star-release-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/conditional-star-release-watcher/start-server.sh
ports:
- "3004"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3004"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the delegated-sending-watcher server
delegated-sending-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/delegated-sending-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/delegated-sending-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/delegated-sending-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/delegated-sending-watcher/start-server.sh
ports:
- "3005"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3005"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the ecliptic-watcher server
ecliptic-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/ecliptic-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/ecliptic-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/ecliptic-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/ecliptic-watcher/start-server.sh
ports:
- "3006"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3006"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the linear-star-release-watcher server
linear-star-release-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/linear-star-release-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/linear-star-release-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/linear-star-release-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/linear-star-release-watcher/start-server.sh
ports:
- "3007"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3007"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the polls-watcher server
polls-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/polls-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/polls-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/polls-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/polls-watcher/start-server.sh
ports:
- "3008"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3008"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the gateway-server for proxying queries
gateway-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
azimuth-watcher-server:
condition: service_healthy
censures-watcher-server:
condition: service_healthy
claims-watcher-server:
condition: service_healthy
conditional-star-release-watcher-server:
condition: service_healthy
delegated-sending-watcher-server:
condition: service_healthy
ecliptic-watcher-server:
condition: service_healthy
linear-star-release-watcher-server:
condition: service_healthy
polls-watcher-server:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
working_dir: /app/packages/gateway-server
command: "yarn server"
volumes:
- ../config/watcher-azimuth/gateway-watchers.json:/app/packages/gateway-server/dist/watchers.json
ports:
- "0.0.0.0:4000:4000"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "4000"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
watcher_db_data:


@ -1,118 +0,0 @@
#!/bin/bash
# TODO: this file is now an unmodified copy of cerc-io/laconicd/init.sh
# so we should have a mechanism to bundle it inside the container rather than link from here
# at deploy time.
KEY="mykey"
CHAINID="laconic_9000-1"
MONIKER="localtestnet"
KEYRING="test"
KEYALGO="eth_secp256k1"
LOGLEVEL="info"
# trace evm
TRACE="--trace"
# TRACE=""
# validate dependencies are installed
command -v jq > /dev/null 2>&1 || { echo >&2 "jq not installed. More info: https://stedolan.github.io/jq/download/"; exit 1; }
# remove existing daemon and client
rm -rf ~/.laconic*
make install
laconicd config keyring-backend $KEYRING
laconicd config chain-id $CHAINID
# if $KEY exists it should be deleted
laconicd keys add $KEY --keyring-backend $KEYRING --algo $KEYALGO
# Set moniker and chain-id for Ethermint (Moniker can be anything, chain-id must be an integer)
laconicd init $MONIKER --chain-id $CHAINID
# Change parameter token denominations to aphoton
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["staking"]["params"]["bond_denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["crisis"]["constant_fee"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["gov"]["deposit_params"]["min_deposit"][0]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["mint"]["params"]["mint_denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
# Custom modules
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["record_rent"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_rent"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_commit_fee"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_reveal_fee"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_minimum_bid"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
if [[ "$TEST_REGISTRY_EXPIRY" == "true" ]]; then
echo "Setting timers for expiry tests."
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["record_rent_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_grace_period"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_rent_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
fi
if [[ "$TEST_AUCTION_ENABLED" == "true" ]]; then
echo "Enabling auction and setting timers."
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_enabled"]=true' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_rent_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_grace_period"]="300s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_commits_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_reveals_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
fi
# increase block time (?)
cat $HOME/.laconicd/config/genesis.json | jq '.consensus_params["block"]["time_iota_ms"]="1000"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
# Set gas limit in genesis
cat $HOME/.laconicd/config/genesis.json | jq '.consensus_params["block"]["max_gas"]="10000000"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
# disable produce empty block
if [[ "$OSTYPE" == "darwin"* ]]; then
sed -i '' 's/create_empty_blocks = true/create_empty_blocks = false/g' $HOME/.laconicd/config/config.toml
else
sed -i 's/create_empty_blocks = true/create_empty_blocks = false/g' $HOME/.laconicd/config/config.toml
fi
if [[ $1 == "pending" ]]; then
if [[ "$OSTYPE" == "darwin"* ]]; then
sed -i '' 's/create_empty_blocks_interval = "0s"/create_empty_blocks_interval = "30s"/g' $HOME/.laconicd/config/config.toml
sed -i '' 's/timeout_propose = "3s"/timeout_propose = "30s"/g' $HOME/.laconicd/config/config.toml
sed -i '' 's/timeout_propose_delta = "500ms"/timeout_propose_delta = "5s"/g' $HOME/.laconicd/config/config.toml
sed -i '' 's/timeout_prevote = "1s"/timeout_prevote = "10s"/g' $HOME/.laconicd/config/config.toml
sed -i '' 's/timeout_prevote_delta = "500ms"/timeout_prevote_delta = "5s"/g' $HOME/.laconicd/config/config.toml
sed -i '' 's/timeout_precommit = "1s"/timeout_precommit = "10s"/g' $HOME/.laconicd/config/config.toml
sed -i '' 's/timeout_precommit_delta = "500ms"/timeout_precommit_delta = "5s"/g' $HOME/.laconicd/config/config.toml
sed -i '' 's/timeout_commit = "5s"/timeout_commit = "150s"/g' $HOME/.laconicd/config/config.toml
sed -i '' 's/timeout_broadcast_tx_commit = "10s"/timeout_broadcast_tx_commit = "150s"/g' $HOME/.laconicd/config/config.toml
else
sed -i 's/create_empty_blocks_interval = "0s"/create_empty_blocks_interval = "30s"/g' $HOME/.laconicd/config/config.toml
sed -i 's/timeout_propose = "3s"/timeout_propose = "30s"/g' $HOME/.laconicd/config/config.toml
sed -i 's/timeout_propose_delta = "500ms"/timeout_propose_delta = "5s"/g' $HOME/.laconicd/config/config.toml
sed -i 's/timeout_prevote = "1s"/timeout_prevote = "10s"/g' $HOME/.laconicd/config/config.toml
sed -i 's/timeout_prevote_delta = "500ms"/timeout_prevote_delta = "5s"/g' $HOME/.laconicd/config/config.toml
sed -i 's/timeout_precommit = "1s"/timeout_precommit = "10s"/g' $HOME/.laconicd/config/config.toml
sed -i 's/timeout_precommit_delta = "500ms"/timeout_precommit_delta = "5s"/g' $HOME/.laconicd/config/config.toml
sed -i 's/timeout_commit = "5s"/timeout_commit = "150s"/g' $HOME/.laconicd/config/config.toml
sed -i 's/timeout_broadcast_tx_commit = "10s"/timeout_broadcast_tx_commit = "150s"/g' $HOME/.laconicd/config/config.toml
fi
fi
# Allocate genesis accounts (cosmos formatted addresses)
laconicd add-genesis-account $KEY 100000000000000000000000000aphoton --keyring-backend $KEYRING
# Sign genesis transaction
laconicd gentx $KEY 1000000000000000000000aphoton --keyring-backend $KEYRING --chain-id $CHAINID
# Collect genesis tx
laconicd collect-gentxs
# Run this to ensure everything worked and that the genesis file is set up correctly
laconicd validate-genesis
if [[ $1 == "pending" ]]; then
echo "pending mode is on, please wait for the first block committed."
fi
# Start the node (remove the --pruning=nothing flag if historical queries are not needed)
laconicd start --pruning=nothing --evm.tracer=json $TRACE --log_level $LOGLEVEL --minimum-gas-prices=0.0001aphoton --json-rpc.api eth,txpool,personal,net,debug,web3,miner --api.enable --gql-server --gql-playground


@ -1,37 +0,0 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
# Check existing config if it exists
if [ -f /app/jwt.txt ] && [ -f /app/rollup.json ]; then
echo "Found existing L2 config, cross-checking with L1 deployment config"
SOURCE_L1_CONF=$(cat /contracts-bedrock/deploy-config/getting-started.json)
EXP_L1_BLOCKHASH=$(echo "$SOURCE_L1_CONF" | jq -r '.l1StartingBlockTag')
EXP_BATCHER=$(echo "$SOURCE_L1_CONF" | jq -r '.batchSenderAddress')
GEN_L2_CONF=$(cat /app/rollup.json)
GEN_L1_BLOCKHASH=$(echo "$GEN_L2_CONF" | jq -r '.genesis.l1.hash')
GEN_BATCHER=$(echo "$GEN_L2_CONF" | jq -r '.genesis.system_config.batcherAddr')
if [ "$EXP_L1_BLOCKHASH" = "$GEN_L1_BLOCKHASH" ] && [ "$EXP_BATCHER" = "$GEN_BATCHER" ]; then
echo "Config cross-checked, exiting"
exit 0
fi
echo "Existing L2 config doesn't match the L1 deployment config, please clear L2 config volume before starting"
exit 1
fi
op-node genesis l2 \
--deploy-config /contracts-bedrock/deploy-config/getting-started.json \
--deployment-dir /contracts-bedrock/deployments/getting-started/ \
--outfile.l2 /app/genesis.json \
--outfile.rollup /app/rollup.json \
--l1-rpc $CERC_L1_RPC
openssl rand -hex 32 > /app/jwt.txt


@ -1,131 +0,0 @@
#!/bin/bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L1_CHAIN_ID="${CERC_L1_CHAIN_ID:-${DEFAULT_CERC_L1_CHAIN_ID}}"
CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
CERC_L1_ACCOUNTS_CSV_URL="${CERC_L1_ACCOUNTS_CSV_URL:-${DEFAULT_CERC_L1_ACCOUNTS_CSV_URL}}"
echo "Using L1 RPC endpoint ${CERC_L1_RPC}"
IMPORT_1="import './verify-contract-deployment'"
IMPORT_2="import './rekey-json'"
IMPORT_3="import './send-balance'"
# Append mounted tasks to tasks/index.ts file if not present
if ! grep -Fxq "$IMPORT_1" tasks/index.ts; then
echo "$IMPORT_1" >> tasks/index.ts
echo "$IMPORT_2" >> tasks/index.ts
echo "$IMPORT_3" >> tasks/index.ts
fi
# Update the chainId in the hardhat config
sed -i "/getting-started/ {n; s/.*chainId.*/ chainId: $CERC_L1_CHAIN_ID,/}" hardhat.config.ts
# Exit if a deployment already exists (on restarts)
# Note: fixturenet-eth-geth currently starts fresh on a restart
if [ -d "deployments/getting-started" ]; then
echo "Deployment directory deployments/getting-started found, checking SystemDictator deployment"
# Read JSON file into variable
SYSTEM_DICTATOR_DETAILS=$(cat deployments/getting-started/SystemDictator.json)
# Parse JSON into variables
SYSTEM_DICTATOR_ADDRESS=$(echo "$SYSTEM_DICTATOR_DETAILS" | jq -r '.address')
SYSTEM_DICTATOR_TXHASH=$(echo "$SYSTEM_DICTATOR_DETAILS" | jq -r '.transactionHash')
if yarn hardhat verify-contract-deployment --contract "${SYSTEM_DICTATOR_ADDRESS}" --transaction-hash "${SYSTEM_DICTATOR_TXHASH}"; then
echo "Deployment verfication successful, exiting"
exit 0
else
echo "Deployment verfication failed, please clear L1 deployment volume before starting"
exit 1
fi
fi
# Generate the L2 account addresses
yarn hardhat rekey-json --output /l2-accounts/keys.json
# Read JSON file into variable
KEYS_JSON=$(cat /l2-accounts/keys.json)
# Parse JSON into variables
ADMIN_ADDRESS=$(echo "$KEYS_JSON" | jq -r '.Admin.address')
ADMIN_PRIV_KEY=$(echo "$KEYS_JSON" | jq -r '.Admin.privateKey')
PROPOSER_ADDRESS=$(echo "$KEYS_JSON" | jq -r '.Proposer.address')
BATCHER_ADDRESS=$(echo "$KEYS_JSON" | jq -r '.Batcher.address')
SEQUENCER_ADDRESS=$(echo "$KEYS_JSON" | jq -r '.Sequencer.address')
# Get the private keys of L1 accounts
if [ -n "$CERC_L1_ACCOUNTS_CSV_URL" ] && \
l1_accounts_response=$(curl -L --write-out '%{http_code}' --silent --output /dev/null "$CERC_L1_ACCOUNTS_CSV_URL") && \
[ "$l1_accounts_response" -eq 200 ];
then
echo "Fetching L1 account credentials using provided URL"
mkdir -p /geth-accounts
wget -O /geth-accounts/accounts.csv "$CERC_L1_ACCOUNTS_CSV_URL"
CERC_L1_ADDRESS=$(head -n 1 /geth-accounts/accounts.csv | cut -d ',' -f 2)
CERC_L1_PRIV_KEY=$(head -n 1 /geth-accounts/accounts.csv | cut -d ',' -f 3)
CERC_L1_ADDRESS_2=$(awk -F, 'NR==2{print $(NF-1)}' /geth-accounts/accounts.csv)
CERC_L1_PRIV_KEY_2=$(awk -F, 'NR==2{print $NF}' /geth-accounts/accounts.csv)
else
echo "Couldn't fetch L1 account credentials, using them from env"
fi
# Send balances to the above L2 addresses
yarn hardhat send-balance --to "${ADMIN_ADDRESS}" --amount 2 --private-key "${CERC_L1_PRIV_KEY}" --network getting-started
yarn hardhat send-balance --to "${PROPOSER_ADDRESS}" --amount 5 --private-key "${CERC_L1_PRIV_KEY}" --network getting-started
yarn hardhat send-balance --to "${BATCHER_ADDRESS}" --amount 1000 --private-key "${CERC_L1_PRIV_KEY}" --network getting-started
echo "Balances sent to L2 accounts"
# Select a finalized L1 block as the starting point for roll ups
until FINALIZED_BLOCK=$(cast block finalized --rpc-url "$CERC_L1_RPC"); do
echo "Waiting for a finalized L1 block to exist, retrying after 10s"
sleep 10
done
L1_BLOCKNUMBER=$(echo "$FINALIZED_BLOCK" | awk '/number/{print $2}')
L1_BLOCKHASH=$(echo "$FINALIZED_BLOCK" | awk '/hash/{print $2}')
L1_BLOCKTIMESTAMP=$(echo "$FINALIZED_BLOCK" | awk '/timestamp/{print $2}')
echo "Selected L1 block ${L1_BLOCKNUMBER} as the starting block for roll ups"
# Update the deployment config
sed -i 's/"l2OutputOracleStartingTimestamp": TIMESTAMP/"l2OutputOracleStartingTimestamp": '"$L1_BLOCKTIMESTAMP"'/g' deploy-config/getting-started.json
jq --arg chainid "$CERC_L1_CHAIN_ID" '.l1ChainID = ($chainid | tonumber)' deploy-config/getting-started.json > tmp.json && mv tmp.json deploy-config/getting-started.json
node update-config.js deploy-config/getting-started.json "$ADMIN_ADDRESS" "$PROPOSER_ADDRESS" "$BATCHER_ADDRESS" "$SEQUENCER_ADDRESS" "$L1_BLOCKHASH"
echo "Updated the deployment config"
# Create a .env file
echo "L1_RPC=$CERC_L1_RPC" > .env
echo "PRIVATE_KEY_DEPLOYER=$ADMIN_PRIV_KEY" >> .env
echo "Deploying the L1 smart contracts, this will take a while..."
# Deploy the L1 smart contracts
yarn hardhat deploy --network getting-started --tags l1
echo "Deployed the L1 smart contracts"
# Read Proxy contract's JSON and get the address
PROXY_JSON=$(cat deployments/getting-started/Proxy__OVM_L1StandardBridge.json)
PROXY_ADDRESS=$(echo "$PROXY_JSON" | jq -r '.address')
# Send balance to the above Proxy contract in L1 for reflecting balance in L2
# First account
yarn hardhat send-balance --to "${PROXY_ADDRESS}" --amount 1 --private-key "${CERC_L1_PRIV_KEY}" --network getting-started
# Second account
yarn hardhat send-balance --to "${PROXY_ADDRESS}" --amount 1 --private-key "${CERC_L1_PRIV_KEY_2}" --network getting-started
echo "Balance sent to Proxy L2 contract"
echo "Use following accounts for transactions in L2:"
echo "${CERC_L1_ADDRESS}"
echo "${CERC_L1_ADDRESS_2}"
echo "Done"


@ -1,36 +0,0 @@
const fs = require('fs')
// Get the command-line argument
const configFile = process.argv[2]
const adminAddress = process.argv[3]
const proposerAddress = process.argv[4]
const batcherAddress = process.argv[5]
const sequencerAddress = process.argv[6]
const blockHash = process.argv[7]
// Read the JSON file
const configData = fs.readFileSync(configFile)
const configObj = JSON.parse(configData)
// Update the finalSystemOwner property with the ADMIN_ADDRESS value
configObj.finalSystemOwner =
configObj.portalGuardian =
configObj.controller =
configObj.l2OutputOracleChallenger =
configObj.proxyAdminOwner =
configObj.baseFeeVaultRecipient =
configObj.l1FeeVaultRecipient =
configObj.sequencerFeeVaultRecipient =
configObj.governanceTokenOwner =
adminAddress
configObj.l2OutputOracleProposer = proposerAddress
configObj.batchSenderAddress = batcherAddress
configObj.p2pSequencerAddress = sequencerAddress
configObj.l1StartingBlockTag = blockHash
// Write the updated JSON object back to the file
fs.writeFileSync(configFile, JSON.stringify(configObj, null, 2))


@ -1,90 +0,0 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
# TODO: Add in container build or use other tool
echo "Installing jq"
apk update && apk add jq
# Get Sequencer key from keys.json
SEQUENCER_KEY=$(jq -r '.Sequencer.privateKey' /l2-accounts/keys.json | tr -d '"')
# Initialize op-geth if datadir/geth not found
if [ -f /op-node/jwt.txt ] && [ -d datadir/geth ]; then
echo "Found existing datadir, checking block signer key"
BLOCK_SIGNER_KEY=$(cat datadir/block-signer-key)
if [ "$SEQUENCER_KEY" = "$BLOCK_SIGNER_KEY" ]; then
echo "Sequencer and block signer keys match, skipping initialization"
else
echo "Sequencer and block signer keys don't match, please clear L2 geth data volume before starting"
exit 1
fi
else
echo "Initializing op-geth"
mkdir -p datadir
echo "pwd" > datadir/password
echo $SEQUENCER_KEY > datadir/block-signer-key
geth account import --datadir=datadir --password=datadir/password datadir/block-signer-key
while [ ! -f "/op-node/jwt.txt" ]
do
echo "Config files not created. Checking after 5 seconds."
sleep 5
done
echo "Config files created by op-node, proceeding with the initialization..."
geth init --datadir=datadir /op-node/genesis.json
echo "Node Initialized"
fi
SEQUENCER_ADDRESS=$(jq -r '.Sequencer.address' /l2-accounts/keys.json | tr -d '"')
echo "SEQUENCER_ADDRESS: ${SEQUENCER_ADDRESS}"
cleanup() {
echo "Signal received, cleaning up..."
kill ${geth_pid}
wait
echo "Done"
}
trap 'cleanup' INT TERM
# Run op-geth
geth \
--datadir ./datadir \
--http \
--http.corsdomain="*" \
--http.vhosts="*" \
--http.addr=0.0.0.0 \
--http.api=web3,debug,eth,txpool,net,engine \
--ws \
--ws.addr=0.0.0.0 \
--ws.port=8546 \
--ws.origins="*" \
--ws.api=debug,eth,txpool,net,engine \
--syncmode=full \
--gcmode=archive \
--nodiscover \
--maxpeers=0 \
--networkid=42069 \
--authrpc.vhosts="*" \
--authrpc.addr=0.0.0.0 \
--authrpc.port=8551 \
--authrpc.jwtsecret=/op-node/jwt.txt \
--rollup.disabletxpoolgossip=true \
--password=./datadir/password \
--allow-insecure-unlock \
--mine \
--miner.etherbase=$SEQUENCER_ADDRESS \
--unlock=$SEQUENCER_ADDRESS \
&
geth_pid=$!
wait $geth_pid


@ -1,26 +0,0 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
# Get Sequencer key from keys.json
SEQUENCER_KEY=$(jq -r '.Sequencer.privateKey' /l2-accounts/keys.json | tr -d '"')
# Run op-node
op-node \
--l2=http://op-geth:8551 \
--l2.jwt-secret=/op-node-data/jwt.txt \
--sequencer.enabled \
--sequencer.l1-confs=3 \
--verifier.l1-confs=3 \
--rollup.config=/op-node-data/rollup.json \
--rpc.addr=0.0.0.0 \
--rpc.port=8547 \
--p2p.disable \
--rpc.enable-admin \
--p2p.sequencer.key=$SEQUENCER_KEY \
--l1=$CERC_L1_RPC \
--l1.rpckind=any


@ -1,36 +0,0 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
# Read the L2OutputOracle contract address from the deployment
L2OO_DEPLOYMENT=$(cat /contracts-bedrock/deployments/getting-started/L2OutputOracle.json)
L2OO_ADDR=$(echo "$L2OO_DEPLOYMENT" | jq -r '.address')
# Get Proposer key from keys.json
PROPOSER_KEY=$(jq -r '.Proposer.privateKey' /l2-accounts/keys.json | tr -d '"')
cleanup() {
echo "Signal received, cleaning up..."
kill ${proposer_pid}
wait
echo "Done"
}
trap 'cleanup' INT TERM
# Run op-proposer
op-proposer \
--poll-interval 12s \
--rpc.port 8560 \
--rollup-rpc http://op-node:8547 \
--l2oo-address $L2OO_ADDR \
--private-key $PROPOSER_KEY \
--l1-eth-rpc $CERC_L1_RPC \
&
proposer_pid=$!
wait $proposer_pid


@ -1,27 +0,0 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_IPLD_ETH_RPC="${CERC_IPLD_ETH_RPC:-${DEFAULT_CERC_IPLD_ETH_RPC}}"
CERC_IPLD_ETH_GQL="${CERC_IPLD_ETH_GQL:-${DEFAULT_CERC_IPLD_ETH_GQL}}"
echo "Using IPLD ETH RPC endpoint ${CERC_IPLD_ETH_RPC}"
echo "Using IPLD GQL endpoint ${CERC_IPLD_ETH_GQL}"
# Replace env variables in template TOML file
# Read in the config template TOML file and modify it
WATCHER_CONFIG_TEMPLATE=$(cat environments/watcher-config-template.toml)
WATCHER_CONFIG=$(echo "$WATCHER_CONFIG_TEMPLATE" | \
sed -E "s|REPLACE_WITH_CERC_IPLD_ETH_RPC|${CERC_IPLD_ETH_RPC}|g; \
s|REPLACE_WITH_CERC_IPLD_ETH_GQL|${CERC_IPLD_ETH_GQL}| ")
# Write the modified content to a new file
echo "$WATCHER_CONFIG" > environments/watcher-config.toml
# Merge SO watcher config with existing config file
node merge-toml.js
echo 'yarn server'
yarn server


@ -1,14 +0,0 @@
[server]
host = "0.0.0.0"
maxSimultaneousRequests = -1
[database]
host = "watcher-db"
port = 5432
username = "vdbm"
password = "password"
[upstream]
[upstream.ethServer]
gqlApiEndpoint = "REPLACE_WITH_CERC_IPLD_ETH_GQL"
rpcProviderEndpoint = "REPLACE_WITH_CERC_IPLD_ETH_RPC"


@ -1,5 +0,0 @@
# Defaults
# ipld-eth-server endpoints
DEFAULT_CERC_IPLD_ETH_RPC=
DEFAULT_CERC_IPLD_ETH_GQL=


@ -1,58 +0,0 @@
#!/bin/bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_NA_ADDRESS="${CERC_NA_ADDRESS:-${DEFAULT_CERC_NA_ADDRESS}}"
CERC_VPA_ADDRESS="${CERC_VPA_ADDRESS:-${DEFAULT_CERC_VPA_ADDRESS}}"
CERC_CA_ADDRESS="${CERC_CA_ADDRESS:-${DEFAULT_CERC_CA_ADDRESS}}"
NITRO_ADDRESSES_FILE_PATH="/nitro/nitro-addresses.json"
# Check if CERC_NA_ADDRESS environment variable set to skip contract deployment
if [ -n "$CERC_NA_ADDRESS" ]; then
echo "CERC_NA_ADDRESS is set to '$CERC_NA_ADDRESS'"
echo "CERC_VPA_ADDRESS is set to '$CERC_VPA_ADDRESS'"
echo "CERC_CA_ADDRESS is set to '$CERC_CA_ADDRESS'"
echo "Using the above addresses and skipping Nitro contracts deployment"
# Create the required JSON and write it to a file
nitro_addresses_json=$(jq -n \
--arg na "$CERC_NA_ADDRESS" \
--arg vpa "$CERC_VPA_ADDRESS" \
--arg ca "$CERC_CA_ADDRESS" \
'.nitroAdjudicatorAddress = $na | .virtualPaymentAppAddress = $vpa | .consensusAppAddress = $ca')
echo "$nitro_addresses_json" > "${NITRO_ADDRESSES_FILE_PATH}"
exit
fi
# Check and exit if a deployment already exists (on restarts)
if [ -f ${NITRO_ADDRESSES_FILE_PATH} ]; then
echo "${NITRO_ADDRESSES_FILE_PATH} already exists, skipping Nitro contracts deployment"
exit
fi
echo "Using L2 RPC endpoint ${CERC_L2_GETH_RPC}"
if [ -n "$CERC_L1_ACCOUNTS_CSV_URL" ] && \
l1_accounts_response=$(curl -L --write-out '%{http_code}' --silent --output /dev/null "$CERC_L1_ACCOUNTS_CSV_URL") && \
[ "$l1_accounts_response" -eq 200 ];
then
echo "Fetching L1 account credentials using provided URL"
mkdir -p /geth-accounts
wget -O /geth-accounts/accounts.csv "$CERC_L1_ACCOUNTS_CSV_URL"
# Read the private key of an L1 account to deploy contract
CERC_PRIVATE_KEY_DEPLOYER=$(head -n 1 /geth-accounts/accounts.csv | cut -d ',' -f 3)
else
echo "Couldn't fetch L1 account credentials, using CERC_PRIVATE_KEY_DEPLOYER from env"
fi
echo "RPC_URL=${CERC_L2_GETH_RPC}" > .env
echo "NITRO_ADDRESSES_FILE_PATH=${NITRO_ADDRESSES_FILE_PATH}" >> .env
echo "PRIVATE_KEY=${CERC_PRIVATE_KEY_DEPLOYER}" >> .env
yarn ts-node --esm deploy-nitro-contracts.ts


@ -1,49 +0,0 @@
import 'dotenv/config';
import fs from 'fs';
import { providers, Wallet } from 'ethers';
import { deployContracts } from '@cerc-io/nitro-util';
async function main () {
const rpcURL = process.env.RPC_URL;
const addressesFilePath = process.env.NITRO_ADDRESSES_FILE_PATH;
const deployerKey = process.env.PRIVATE_KEY;
if (!rpcURL) {
console.log('RPC_URL not set, skipping deployment');
return;
}
if (!addressesFilePath) {
console.log('NITRO_ADDRESSES_FILE_PATH not set, skipping deployment');
return;
}
if (!deployerKey) {
console.log('PRIVATE_KEY not set, skipping deployment');
return;
}
const provider = new providers.JsonRpcProvider(process.env.RPC_URL);
const signer = new Wallet(deployerKey, provider);
const [
nitroAdjudicatorAddress,
virtualPaymentAppAddress,
consensusAppAddress
] = await deployContracts(signer as any);
const output = {
nitroAdjudicatorAddress,
virtualPaymentAppAddress,
consensusAppAddress
};
fs.writeFileSync(addressesFilePath, JSON.stringify(output, null, 2));
console.log('Nitro contracts deployed, addresses written to', addressesFilePath);
console.log('Result:', JSON.stringify(output, null, 2));
}
main()
.catch((err) => {
console.log(err);
});


@ -1,64 +0,0 @@
#!/bin/bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_CHAIN_ID="${CERC_CHAIN_ID:-${DEFAULT_CERC_CHAIN_ID}}"
CERC_DEPLOYED_CONTRACT="${CERC_DEPLOYED_CONTRACT:-${DEFAULT_CERC_DEPLOYED_CONTRACT}}"
CERC_RELAY_NODES="${CERC_RELAY_NODES:-${DEFAULT_CERC_RELAY_NODES}}"
CERC_DENY_MULTIADDRS="${CERC_DENY_MULTIADDRS:-${DEFAULT_CERC_DENY_MULTIADDRS}}"
CERC_PUBSUB="${CERC_PUBSUB:-${DEFAULT_CERC_PUBSUB}}"
CERC_GOSSIPSUB_DIRECT_PEERS="${CERC_GOSSIPSUB_DIRECT_PEERS:-${DEFAULT_CERC_GOSSIPSUB_DIRECT_PEERS}}"
echo "Using CERC_RELAY_NODES $CERC_RELAY_NODES"
if [ -z "$CERC_DEPLOYED_CONTRACT" ]; then
echo "CERC_DEPLOYED_CONTRACT not set"
exit 1
else
echo "Using CERC_DEPLOYED_CONTRACT ${CERC_DEPLOYED_CONTRACT} from env as the MobyMask contract address"
fi
# Check out the required release/branch
cd /app
git checkout $CERC_RELEASE
# Check if CERC_NA_ADDRESS is set
if [ -n "$CERC_NA_ADDRESS" ]; then
echo "CERC_NA_ADDRESS is set to '$CERC_NA_ADDRESS'"
echo "CERC_VPA_ADDRESS is set to '$CERC_VPA_ADDRESS'"
echo "CERC_CA_ADDRESS is set to '$CERC_CA_ADDRESS'"
echo "Using the above Nitro addresses"
# Create the required JSON and write it to a file
nitro_addresses_json=$(jq -n \
--arg na "$CERC_NA_ADDRESS" \
--arg vpa "$CERC_VPA_ADDRESS" \
--arg ca "$CERC_CA_ADDRESS" \
'.nitroAdjudicatorAddress = $na | .virtualPaymentAppAddress = $vpa | .consensusAppAddress = $ca')
echo "$nitro_addresses_json" > /app/src/utils/nitro-addresses.json
else
echo "Nitro addresses not provided"
exit 1
fi
# Export config values in a json file
jq --arg address "$CERC_DEPLOYED_CONTRACT" \
--argjson chainId "$CERC_CHAIN_ID" \
--argjson relayNodes "$CERC_RELAY_NODES" \
--argjson denyMultiaddrs "$CERC_DENY_MULTIADDRS" \
--arg pubsub "$CERC_PUBSUB" \
--argjson directPeers "$CERC_GOSSIPSUB_DIRECT_PEERS" \
'.address = $address | .chainId = $chainId | .relayNodes = $relayNodes | .peer.denyMultiaddrs = $denyMultiaddrs | .peer.pubsub = $pubsub | .peer.directPeers = $directPeers' \
/app/src/mobymask-app-config.json > /app/src/utils/config.json
yarn install
REACT_APP_WATCHER_URI="$CERC_APP_WATCHER_URL/graphql" \
REACT_APP_PAY_TO_NITRO_ADDRESS="$CERC_PAYMENT_NITRO_ADDRESS" \
REACT_APP_SNAP_ORIGIN="local:$CERC_SNAP_URL" \
yarn build
http-server -p 80 /app/build


@ -1,27 +0,0 @@
FROM skylenet/ethereum-genesis-generator@sha256:210353ce7c898686bc5092f16c61220a76d357f51eff9c451e9ad1b9ad03d4d3 AS ethgen
FROM golang:1.20-alpine as builder
RUN apk add --no-cache python3 py3-pip
COPY genesis /opt/genesis
# Install ethereum-genesis-generator tools
COPY --from=ethgen /usr/local/bin/eth2-testnet-genesis /usr/local/bin/
COPY --from=ethgen /usr/local/bin/eth2-val-tools /usr/local/bin/
COPY --from=ethgen /apps /apps
RUN cd /apps/el-gen && pip3 install -r requirements.txt
# web3==5.24.0 used by el-gen is broken on python 3.11
RUN pip3 install --upgrade "web3==6.5.0"
# Build genesis config
RUN apk add --no-cache make bash envsubst jq
RUN cd /opt/genesis && make genesis-el
# Snag the genesis block info.
RUN go install github.com/cerc-io/eth-dump-genblock@latest
RUN eth-dump-genblock /opt/genesis/build/el/geth.json > /opt/genesis/build/el/genesis_block.json
FROM alpine:latest
COPY --from=builder /opt/genesis /opt/genesis


@ -1,40 +0,0 @@
#!/usr/bin/env bash
set -e
# See: https://github.com/skylenet/ethereum-genesis-generator/blob/master/entrypoint.sh
rm -rf ../build/el
mkdir -p ../build/el
tmp_dir=$(mktemp -d -t ci-XXXXXXXXXX)
envsubst < el-config.yaml > $tmp_dir/genesis-config.yaml
ttd=`cat $tmp_dir/genesis-config.yaml | grep terminal_total_difficulty | awk '{ print $2 }'`
homestead_block=`cat $tmp_dir/genesis-config.yaml | grep homestead_block | awk '{ print $2 }'`
eip150_block=`cat $tmp_dir/genesis-config.yaml | grep eip150_block | awk '{ print $2 }'`
eip155_block=`cat $tmp_dir/genesis-config.yaml | grep eip155_block | awk '{ print $2 }'`
eip158_block=`cat $tmp_dir/genesis-config.yaml | grep eip158_block | awk '{ print $2 }'`
byzantium_block=`cat $tmp_dir/genesis-config.yaml | grep byzantium_block | awk '{ print $2 }'`
constantinople_block=`cat $tmp_dir/genesis-config.yaml | grep constantinople_block | awk '{ print $2 }'`
petersburg_block=`cat $tmp_dir/genesis-config.yaml | grep petersburg_block | awk '{ print $2 }'`
istanbul_block=`cat $tmp_dir/genesis-config.yaml | grep istanbul_block | awk '{ print $2 }'`
berlin_block=`cat $tmp_dir/genesis-config.yaml | grep berlin_block | awk '{ print $2 }'`
london_block=`cat $tmp_dir/genesis-config.yaml | grep london_block | awk '{ print $2 }'`
merge_fork_block=`cat $tmp_dir/genesis-config.yaml | grep merge_fork_block | awk '{ print $2 }'`
python3 /apps/el-gen/genesis_geth.py $tmp_dir/genesis-config.yaml | \
jq ".config.terminalTotalDifficulty=$ttd" | \
jq ".config.homesteadBlock=$homestead_block" | \
jq ".config.eip150Block=$eip150_block" | \
jq ".config.eip155Block=$eip155_block" | \
jq ".config.eip158Block=$eip158_block" | \
jq ".config.byzantiumBlock=$byzantium_block" | \
jq ".config.constantinopleBlock=$constantinople_block" | \
jq ".config.petersburgBlock=$petersburg_block" | \
jq ".config.istanbulBlock=$istanbul_block" | \
jq ".config.berlinBlock=$berlin_block" | \
jq ".config.londonBlock=$london_block" | \
jq ".config.mergeForkBlock=$merge_fork_block" | \
jq ".config.mergeNetsplitBlock=$merge_fork_block" \
> ../build/el/geth.json
python3 ../accounts/mnemonic_to_csv.py $tmp_dir/genesis-config.yaml > ../build/el/accounts.csv


@ -1,6 +0,0 @@
# Config for laconic-console running in a fixturenet with laconicd
services:
wns:
server: 'LACONIC_HOSTED_ENDPOINT:9473/api'
webui: 'LACONIC_HOSTED_ENDPOINT:9473/console'


@ -1,29 +0,0 @@
#!/usr/bin/env bash
# Create some demo/test records in the registry
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
registry_command="laconic cns"
record_1_filename=demo-record-1.yml
cat <<EOF > ${record_1_filename}
record:
type: WebsiteRegistrationRecord
url: 'https://cerc.io'
repo_registration_record_cid: QmSnuWmxptJZdLJpKRarxBMS2Ju2oANVrgbr2xWbie9b2D
build_artifact_cid: QmP8jTG1m9GSDJLCbeWhVSVgEzCPPwXRdCRuJtQ5Tz9Kc9
tls_cert_cid: QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR
version: 1.0.23
EOF
# Check we have funds
funds_response=$(${registry_command} account get --address $(cat my-address.txt))
funds_balance=$(echo ${funds_response} | jq -r .[0].balance[0].quantity)
echo "Balance is: ${funds_balance}"
# Create a bond
bond_create_result=$(${registry_command} bond create --type aphoton --quantity 1000000000)
bond_id=$(echo ${bond_create_result} | jq -r .bondId)
echo "Created bond with id: ${bond_id}"
# Publish a demo record
publish_response=$(${registry_command} record publish --filename ${record_1_filename} --bond-id ${bond_id})
published_record_id=$(echo ${publish_response} | jq -r .id)
echo "Published ${record_1_filename} with id: ${published_record_id}"


@ -1,9 +0,0 @@
ARG TAG_SUFFIX="-modern"
FROM sigp/lighthouse:v4.3.0${TAG_SUFFIX}
RUN apt-get update; apt-get install bash netcat curl less jq wget -y;
WORKDIR /root/
ADD start-lighthouse.sh .
ENTRYPOINT [ "./start-lighthouse.sh" ]


@ -1,10 +0,0 @@
#!/usr/bin/env bash
# Build cerc/plugeth-statediff
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
# This container build currently requires access to private dependencies in gitea
# so we check that the necessary access token has been supplied here, then pass it on to the build
if [[ -z "${CERC_GO_AUTH_TOKEN}" ]]; then
echo "ERROR: CERC_GO_AUTH_TOKEN is not set" >&2
exit 1
fi
docker build -t cerc/plugeth-statediff:local ${build_command_args} --build-arg GIT_VDBTO_TOKEN=${CERC_GO_AUTH_TOKEN} ${CERC_REPO_BASE_DIR}/plugeth-statediff


@ -1,10 +0,0 @@
#!/usr/bin/env bash
# Build cerc/plugeth
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
# This container build currently requires access to private dependencies in gitea
# so we check that the necessary access token has been supplied here, then pass it on to the build
if [[ -z "${CERC_GO_AUTH_TOKEN}" ]]; then
echo "ERROR: CERC_GO_AUTH_TOKEN is not set" >&2
exit 1
fi
docker build -t cerc/plugeth:local ${build_command_args} --build-arg GIT_VDBTO_TOKEN=${CERC_GO_AUTH_TOKEN} ${CERC_REPO_BASE_DIR}/plugeth


@ -1,19 +0,0 @@
#!/usr/bin/env bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
# Test if the container's filesystem is old (run previously) or new
EXISTSFILENAME=/data/exists
echo "Test container starting"
if [[ -f "$EXISTSFILENAME" ]];
then
TIMESTAMP=`cat $EXISTSFILENAME`
echo "Filesystem is old, created: $TIMESTAMP"
else
echo "Filesystem is fresh"
echo `date` > $EXISTSFILENAME
fi
# Run nginx which will block here forever
/usr/sbin/nginx -g "daemon off;"


@ -1,13 +0,0 @@
FROM node:18.16.0-alpine3.16
RUN apk --update --no-cache add git python3 alpine-sdk
WORKDIR /app
COPY . .
RUN echo "Building azimuth-watcher-ts" && \
yarn && yarn build
RUN echo "Install toml-js to update watcher config files" && \
yarn add --dev --ignore-workspace-root-check toml-js


@ -1,13 +0,0 @@
FROM node:16.17.1-alpine3.16
RUN apk --update --no-cache add git python3 alpine-sdk
WORKDIR /app
COPY . .
RUN echo "Building watcher-ts" && \
git checkout v0.2.19 && \
yarn && yarn build
WORKDIR /app/packages/erc20-watcher


@ -1 +0,0 @@
# Put config here.


@ -1,9 +0,0 @@
#!/usr/bin/env bash
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_WEBAPP_FILES_DIR="${CERC_WEBAPP_FILES_DIR:-/data}"
/scripts/apply-webapp-config.sh /config/config.yml ${CERC_WEBAPP_FILES_DIR}
http-server -p 80 ${CERC_WEBAPP_FILES_DIR}


@ -1,72 +0,0 @@
# Azimuth Watcher
Instructions to setup and deploy Azimuth Watcher stack
## Setup
Prerequisite: `ipld-eth-server` RPC and GQL endpoints
Clone required repositories:
```bash
laconic-so --stack azimuth setup-repositories
```
NOTE: If a repository already exists and is checked out to a different version, the `setup-repositories` command will throw an error.
To get around this, remove the `azimuth-watcher-ts` repository and re-run the command.
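A minimal recovery sequence, assuming the default `~/cerc` checkout location used below, is:
```bash
# Remove the conflicting checkout, then fetch the repositories again
rm -rf ~/cerc/azimuth-watcher-ts
laconic-so --stack azimuth setup-repositories
```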
Check out the required versions and branches in the repos:
```bash
# azimuth-watcher-ts
cd ~/cerc/azimuth-watcher-ts
git checkout v0.1.0
```
Build the container images:
```bash
laconic-so --stack azimuth build-containers
```
This should create the required docker images in the local image registry.
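A quick way to confirm the image is present (standard Docker CLI; the image name comes from this stack's compose files):
```bash
# List the locally built watcher image
docker images --filter "reference=cerc/watcher-azimuth"
```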
### Configuration
* Create and update an env file to be used in the next step:
```bash
# External ipld-eth-server endpoints
CERC_IPLD_ETH_RPC=
CERC_IPLD_ETH_GQL=
```
* NOTE: If `ipld-eth-server` is running on the host machine, use `host.docker.internal` as the hostname to access host ports
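For example, with `ipld-eth-server` running on the host machine (the port numbers below are illustrative; use whatever your instance exposes):
```bash
# Example env file contents
CERC_IPLD_ETH_RPC=http://host.docker.internal:8081
CERC_IPLD_ETH_GQL=http://host.docker.internal:8082/graphql
```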
### Deploy the stack
* Deploy the containers:
```bash
laconic-so --stack azimuth deploy-system --env-file <PATH_TO_ENV_FILE> up
```
* List and check the health status of all the containers using `docker ps` and wait for them to be `healthy`
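For example, to show only containers that have passed their healthchecks (plain Docker CLI, not specific to this stack):
```bash
docker ps --filter "health=healthy"
```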
## Clean up
Stop all the services running in background:
```bash
laconic-so --stack azimuth deploy-system down
```
Clear volumes created by this stack:
```bash
# List all relevant volumes
docker volume ls -q --filter "name=.*watcher_db_data"
# Remove all the listed volumes
docker volume rm $(docker volume ls -q --filter "name=.*watcher_db_data")
```


@ -1,13 +0,0 @@
version: "1.0"
name: chain-chunker
description: "Stack to build containers for chain-chunker"
repos:
- github.com/cerc-io/ipld-eth-state-snapshot@v5
- github.com/cerc-io/eth-statediff-service@v5
- github.com/cerc-io/ipld-eth-db@v5
- github.com/cerc-io/ipld-eth-server@v5
containers:
- cerc/ipld-eth-state-snapshot
- cerc/eth-statediff-service
- cerc/ipld-eth-db
- cerc/ipld-eth-server


@ -1,123 +0,0 @@
# fixturenet-optimism
Instructions to setup and deploy an end-to-end L1+L2 stack with [fixturenet-eth](../fixturenet-eth/) (L1) and [Optimism](https://stack.optimism.io) (L2)
We support running just the L2 part of stack, given an external L1 endpoint. Follow the [L2 only doc](./l2-only.md) for the same.
## Setup
Clone required repositories:
```bash
laconic-so --stack fixturenet-optimism setup-repositories
# If this throws an error as a result of being already checked out to a branch/tag in a repo, remove the repositories mentioned below and re-run the command
```
Build the container images:
```bash
laconic-so --stack fixturenet-optimism build-containers
# If redeploying with changes in the stack containers
laconic-so --stack fixturenet-optimism build-containers --force-rebuild
# If errors are thrown during the build, old images used by this stack may have to be deleted
```
Note: this will take >10 mins depending on the specs of your machine, and **requires** 16GB of memory or greater.
This should create the required docker images in the local image registry:
* `cerc/go-ethereum`
* `cerc/lighthouse`
* `cerc/fixturenet-eth-geth`
* `cerc/fixturenet-eth-lighthouse`
* `cerc/foundry`
* `cerc/optimism-contracts`
* `cerc/optimism-l2geth`
* `cerc/optimism-op-node`
* `cerc/optimism-op-batcher`
* `cerc/optimism-op-proposer`
## Deploy
Deploy the stack:
```bash
laconic-so --stack fixturenet-optimism deploy up
```
The `fixturenet-optimism-contracts` service takes a while to complete, as it:
1. waits for the 'Merge' to happen on L1
2. waits for a finalized block to exist on L1 (so that it can be taken as a starting block for roll ups)
3. deploys the L1 contracts
It may restart a few times after running into errors.
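One way to follow its progress is to tail that container's logs; the name filter below is illustrative and depends on your compose project prefix:
```bash
docker logs -f $(docker ps -q --filter "name=fixturenet-optimism-contracts")
```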
To list and monitor the running containers:
```bash
laconic-so --stack fixturenet-optimism deploy ps
# With status
docker ps
# Check logs for a container
docker logs -f <CONTAINER_ID>
```
## Clean up
Stop all services running in the background:
```bash
laconic-so --stack fixturenet-optimism deploy down 30
```
Clear volumes created by this stack:
```bash
# List all relevant volumes
docker volume ls -q --filter "name=.*l1_deployment|.*l2_accounts|.*l2_config|.*l2_geth_data"
# Remove all the listed volumes
docker volume rm $(docker volume ls -q --filter "name=.*l1_deployment|.*l2_accounts|.*l2_config|.*l2_geth_data")
```
## Troubleshooting
* If `op-geth` service aborts or is restarted, the following error might occur in the `op-node` service:
```bash
WARN [02-16|21:22:02.868] Derivation process temporary error attempts=14 err="stage 0 failed resetting: temp: failed to find the L2 Heads to start from: failed to fetch L2 block by hash 0x0000000000000000000000000000000000000000000000000000000000000000: failed to determine block-hash of hash 0x0000000000000000000000000000000000000000000000000000000000000000, could not get payload: not found"
```
* This means that the data directory that `op-geth` is using is corrupted and needs to be reinitialized; the containers `op-geth`, `op-node` and `op-batcher` need to be started afresh:
WARNING: This will reset the L2 chain; consequently, all the data on it will be lost
* Stop and remove the concerned containers:
```bash
# List the containers
docker ps -f "name=op-geth|op-node|op-batcher"
# Force stop and remove the listed containers
docker rm -f $(docker ps -qf "name=op-geth|op-node|op-batcher")
```
* Remove the concerned volume:
```bash
# List the volume
docker volume ls -q --filter name=l2_geth_data
# Remove the listed volume
docker volume rm $(docker volume ls -q --filter name=l2_geth_data)
```
* Re-run the deployment command used in [Deploy](#deploy) to restart the stopped containers
## Known Issues
* Resource requirements (memory + time) for building the `cerc/foundry` image are on the higher side
* `cerc/optimism-contracts` image is currently based on `cerc/foundry` (Optimism requires foundry installation)


@ -1,100 +0,0 @@
# fixturenet-optimism
Instructions to setup and deploy L2 fixturenet using [Optimism](https://stack.optimism.io)
## Setup
Prerequisite: An L1 Ethereum RPC endpoint
Clone required repositories:
```bash
laconic-so --stack fixturenet-optimism setup-repositories --exclude github.com/cerc-io/go-ethereum
# If this throws an error as a result of being already checked out to a branch/tag in a repo, remove the repositories mentioned below and re-run the command
```
Build the container images:
```bash
laconic-so --stack fixturenet-optimism build-containers --include cerc/foundry,cerc/optimism-contracts,cerc/optimism-op-node,cerc/optimism-l2geth,cerc/optimism-op-batcher,cerc/optimism-op-proposer
```
This should create the required docker images in the local image registry:
* `cerc/foundry`
* `cerc/optimism-contracts`
* `cerc/optimism-l2geth`
* `cerc/optimism-op-node`
* `cerc/optimism-op-batcher`
* `cerc/optimism-op-proposer`
## Deploy
Create and update an env file to be used in the next step ([defaults](../../config/fixturenet-optimism/l1-params.env)):
```bash
# External L1 endpoint
CERC_L1_CHAIN_ID=
CERC_L1_RPC=
CERC_L1_HOST=
CERC_L1_PORT=
# URL to get CSV with credentials for accounts on L1
# that are used to send balance to Optimism Proxy contract
# (enables them to do transactions on L2)
CERC_L1_ACCOUNTS_CSV_URL=
# OR
# Specify the required account credentials
CERC_L1_ADDRESS=
CERC_L1_PRIV_KEY=
CERC_L1_ADDRESS_2=
CERC_L1_PRIV_KEY_2=
```
* NOTE: If L1 is running on the host machine, use `host.docker.internal` as the hostname to access the host port
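For example, when L1 is a fixturenet-eth node running on the host (the chain ID and port below are illustrative):
```bash
CERC_L1_CHAIN_ID=1212
CERC_L1_RPC=http://host.docker.internal:8545
CERC_L1_HOST=host.docker.internal
CERC_L1_PORT=8545
```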
Deploy the stack:
```bash
laconic-so --stack fixturenet-optimism deploy --include fixturenet-optimism --env-file <PATH_TO_ENV_FILE> up
```
The `fixturenet-optimism-contracts` service may take a while (`~15 mins`) to complete, as it:
1. waits for the 'Merge' to happen on L1
2. waits for a finalized block to exist on L1 (so that it can be taken as a starting block for roll ups)
3. deploys the L1 contracts
To list and monitor the running containers:
```bash
laconic-so --stack fixturenet-optimism deploy --include fixturenet-optimism ps
# With status
docker ps
# Check logs for a container
docker logs -f <CONTAINER_ID>
```
## Clean up
Stop all services running in the background:
```bash
laconic-so --stack fixturenet-optimism deploy --include fixturenet-optimism down 30
```
Clear volumes created by this stack:
```bash
# List all relevant volumes
docker volume ls -q --filter "name=.*l1_deployment|.*l2_accounts|.*l2_config|.*l2_geth_data"
# Remove all the listed volumes
docker volume rm $(docker volume ls -q --filter "name=.*l1_deployment|.*l2_accounts|.*l2_config|.*l2_geth_data")
```
## Troubleshooting
See [Troubleshooting](./README.md#troubleshooting)


@ -1,29 +0,0 @@
version: "1.0"
name: mainnet-laconic
description: "Mainnet laconic node"
repos:
- cerc-io/laconicd
- lirewine/debug
- lirewine/crypto
- lirewine/gem
- lirewine/sdk
- cerc-io/laconic-sdk
- cerc-io/laconic-registry-cli
- cerc-io/laconic-console
npms:
- laconic-sdk
- laconic-registry-cli
- debug
- crypto
- sdk
- gem
- laconic-console
containers:
- cerc/laconicd
- cerc/laconic-registry-cli
- cerc/webapp-base
- cerc/laconic-console-host
pods:
- mainnet-laconicd
- fixturenet-laconic-console


@ -1,5 +0,0 @@
# Package Registry Stack
The Package Registry Stack supports a build environment that requires a package registry (initially for NPM packages only).
Setup instructions can be found [here](../build-support/README.md).


@ -1,315 +0,0 @@
# Copyright © 2022, 2023 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http:#www.gnu.org/licenses/>.
import click
from importlib import util
import os
from pathlib import Path
import random
from shutil import copyfile, copytree
import sys
from app.util import get_stack_file_path, get_parsed_deployment_spec, get_parsed_stack_config, global_options, get_yaml
from app.util import get_compose_file_dir
from app.deploy_types import DeploymentContext, LaconicStackSetupCommand
def _make_default_deployment_dir():
return "deployment-001"
def _get_ports(stack):
ports = {}
parsed_stack = get_parsed_stack_config(stack)
pods = parsed_stack["pods"]
yaml = get_yaml()
for pod in pods:
pod_file_path = os.path.join(get_compose_file_dir(), f"docker-compose-{pod}.yml")
parsed_pod_file = yaml.load(open(pod_file_path, "r"))
if "services" in parsed_pod_file:
for svc_name, svc in parsed_pod_file["services"].items():
if "ports" in svc:
# Ports can appear as strings or numbers. We normalize them as strings.
ports[svc_name] = [str(x) for x in svc["ports"]]
return ports
def _get_named_volumes(stack):
# Parse the compose files looking for named volumes
named_volumes = []
parsed_stack = get_parsed_stack_config(stack)
pods = parsed_stack["pods"]
yaml = get_yaml()
for pod in pods:
pod_file_path = os.path.join(get_compose_file_dir(), f"docker-compose-{pod}.yml")
parsed_pod_file = yaml.load(open(pod_file_path, "r"))
if "volumes" in parsed_pod_file:
volumes = parsed_pod_file["volumes"]
for volume in volumes.keys():
# Volume definition looks like:
# 'laconicd-data': None
named_volumes.append(volume)
return named_volumes
# If we're mounting a volume from a relative path, then we
# assume the directory doesn't exist yet and create it
# so the deployment will start
# Also warn if the path is absolute and doesn't exist
def _create_bind_dir_if_relative(volume, path_string, compose_dir):
path = Path(path_string)
if not path.is_absolute():
absolute_path = Path(compose_dir).parent.joinpath(path)
absolute_path.mkdir(parents=True, exist_ok=True)
else:
if not path.exists():
print(f"WARNING: mount path for volume {volume} does not exist: {path_string}")
# See: https://stackoverflow.com/questions/45699189/editing-docker-compose-yml-with-pyyaml
def _fixup_pod_file(pod, spec, compose_dir):
# Fix up volumes
if "volumes" in spec:
spec_volumes = spec["volumes"]
if "volumes" in pod:
pod_volumes = pod["volumes"]
for volume in pod_volumes.keys():
if volume in spec_volumes:
volume_spec = spec_volumes[volume]
volume_spec_fixedup = volume_spec if Path(volume_spec).is_absolute() else f".{volume_spec}"
_create_bind_dir_if_relative(volume, volume_spec, compose_dir)
new_volume_spec = {"driver": "local",
"driver_opts": {
"type": "none",
"device": volume_spec_fixedup,
"o": "bind"
}
}
pod["volumes"][volume] = new_volume_spec
# Fix up ports
if "ports" in spec:
spec_ports = spec["ports"]
for container_name, container_ports in spec_ports.items():
if container_name in pod["services"]:
pod["services"][container_name]["ports"] = container_ports
def call_stack_deploy_init(deploy_command_context):
# Link with the python file in the stack
# Call a function in it
# If no function found, return None
python_file_path = get_stack_file_path(deploy_command_context.stack).parent.joinpath("deploy", "commands.py")
if python_file_path.exists():
spec = util.spec_from_file_location("commands", python_file_path)
imported_stack = util.module_from_spec(spec)
spec.loader.exec_module(imported_stack)
return imported_stack.init(deploy_command_context)
else:
return None
# TODO: fold this with function above
def call_stack_deploy_setup(deploy_command_context, parameters: LaconicStackSetupCommand, extra_args):
# Link with the python file in the stack
# Call a function in it
# If no function found, return None
python_file_path = get_stack_file_path(deploy_command_context.stack).parent.joinpath("deploy", "commands.py")
if python_file_path.exists():
spec = util.spec_from_file_location("commands", python_file_path)
imported_stack = util.module_from_spec(spec)
spec.loader.exec_module(imported_stack)
return imported_stack.setup(deploy_command_context, parameters, extra_args)
else:
return None
# TODO: fold this with function above
def call_stack_deploy_create(deployment_context, extra_args):
# Link with the python file in the stack
# Call a function in it
# If no function found, return None
python_file_path = get_stack_file_path(deployment_context.command_context.stack).parent.joinpath("deploy", "commands.py")
if python_file_path.exists():
spec = util.spec_from_file_location("commands", python_file_path)
imported_stack = util.module_from_spec(spec)
spec.loader.exec_module(imported_stack)
return imported_stack.create(deployment_context, extra_args)
else:
return None
# Inspect the pod yaml to find config files referenced in subdirectories
# other than the one associated with the pod
def _find_extra_config_dirs(parsed_pod_file, pod):
config_dirs = set()
services = parsed_pod_file["services"]
for service in services:
service_info = services[service]
if "volumes" in service_info:
for volume in service_info["volumes"]:
if ":" in volume:
host_path = volume.split(":")[0]
if host_path.startswith("../config"):
config_dir = host_path.split("/")[2]
if config_dir != pod:
config_dirs.add(config_dir)
return config_dirs
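# Illustrative example (hypothetical pod/volume names): for pod "foo", a service
# volume entry like "../config/bar/settings.toml:/etc/settings.toml" has a host
# path under ../config/bar, so "bar" is added to the set of extra config dirs.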
def _get_mapped_ports(stack: str, map_recipe: str):
port_map_recipes = ["any-variable-random", "localhost-same", "any-same", "localhost-fixed-random", "any-fixed-random"]
ports = _get_ports(stack)
if ports:
# Implement any requested mapping recipe
if map_recipe:
if map_recipe in port_map_recipes:
for service in ports.keys():
ports_array = ports[service]
for x in range(0, len(ports_array)):
orig_port = ports_array[x]
# Strip /udp suffix if present
bare_orig_port = orig_port.replace("/udp", "")
random_port = random.randint(20000, 50000) # Beware: we're relying on luck to not collide
if map_recipe == "any-variable-random":
# This is the default so take no action
pass
elif map_recipe == "localhost-same":
# Replace instances of "- XX" with "- 127.0.0.1:XX"
ports_array[x] = f"127.0.0.1:{bare_orig_port}:{orig_port}"
elif map_recipe == "any-same":
# Replace instances of "- XX" with "- 0.0.0.0:XX"
ports_array[x] = f"0.0.0.0:{bare_orig_port}:{orig_port}"
elif map_recipe == "localhost-fixed-random":
# Replace instances of "- XX" with "- 127.0.0.1:<rnd>:XX"
ports_array[x] = f"127.0.0.1:{random_port}:{orig_port}"
elif map_recipe == "any-fixed-random":
# Replace instances of "- XX" with "- 0.0.0.0:<rnd>:XX"
ports_array[x] = f"0.0.0.0:{random_port}:{orig_port}"
else:
print("Error: bad map_recipe")
else:
print(f"Error: --map-ports-to-host must specify one of: {port_map_recipes}")
sys.exit(1)
return ports
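# Illustrative example (hypothetical port): with map_recipe "localhost-fixed-random",
# a compose port entry "3000" becomes something like "127.0.0.1:23456:3000",
# while "localhost-same" would yield "127.0.0.1:3000:3000".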
@click.command()
@click.option("--output", required=True, help="Write yaml spec file here")
@click.option("--map-ports-to-host", required=False,
help="Map ports to the host as one of: any-variable-random (default), "
"localhost-same, any-same, localhost-fixed-random, any-fixed-random")
@click.pass_context
def init(ctx, output, map_ports_to_host):
yaml = get_yaml()
stack = global_options(ctx).stack
verbose = global_options(ctx).verbose
default_spec_file_content = call_stack_deploy_init(ctx.obj)
spec_file_content = {"stack": stack}
if default_spec_file_content:
spec_file_content.update(default_spec_file_content)
if verbose:
print(f"Creating spec file for stack: {stack}")
ports = _get_mapped_ports(stack, map_ports_to_host)
spec_file_content["ports"] = ports
named_volumes = _get_named_volumes(stack)
if named_volumes:
volume_descriptors = {}
for named_volume in named_volumes:
volume_descriptors[named_volume] = f"./data/{named_volume}"
spec_file_content["volumes"] = volume_descriptors
with open(output, "w") as output_file:
yaml.dump(spec_file_content, output_file)
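# Illustrative example (hypothetical stack): for a stack "my-stack" with one service
# exposing port 3000 and a named volume "my-data", init writes a spec file like:
#   stack: my-stack
#   ports:
#     my-service:
#      - "3000"
#   volumes:
#     my-data: ./data/my-data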
@click.command()
@click.option("--spec-file", required=True, help="Spec file to use to create this deployment")
@click.option("--deployment-dir", help="Create deployment files in this directory")
# TODO: Hack
@click.option("--network-dir", help="Network configuration supplied in this directory")
@click.option("--initial-peers", help="Initial set of persistent peers")
@click.pass_context
def create(ctx, spec_file, deployment_dir, network_dir, initial_peers):
# This function fails with a useful error message if the file doesn't exist
parsed_spec = get_parsed_deployment_spec(spec_file)
stack_name = parsed_spec['stack']
stack_file = get_stack_file_path(stack_name)
parsed_stack = get_parsed_stack_config(stack_name)
if global_options(ctx).debug:
print(f"parsed spec: {parsed_spec}")
if deployment_dir is None:
deployment_dir = _make_default_deployment_dir()
if os.path.exists(deployment_dir):
print(f"Error: {deployment_dir} already exists")
sys.exit(1)
os.mkdir(deployment_dir)
# Copy spec file and the stack file into the deployment dir
copyfile(spec_file, os.path.join(deployment_dir, os.path.basename(spec_file)))
copyfile(stack_file, os.path.join(deployment_dir, os.path.basename(stack_file)))
# Copy the pod files into the deployment dir, fixing up content
pods = parsed_stack['pods']
destination_compose_dir = os.path.join(deployment_dir, "compose")
os.mkdir(destination_compose_dir)
data_dir = Path(__file__).absolute().parent.joinpath("data")
yaml = get_yaml()
for pod in pods:
pod_file_path = os.path.join(get_compose_file_dir(), f"docker-compose-{pod}.yml")
parsed_pod_file = yaml.load(open(pod_file_path, "r"))
extra_config_dirs = _find_extra_config_dirs(parsed_pod_file, pod)
if global_options(ctx).debug:
print(f"extra config dirs: {extra_config_dirs}")
_fixup_pod_file(parsed_pod_file, parsed_spec, destination_compose_dir)
with open(os.path.join(destination_compose_dir, os.path.basename(pod_file_path)), "w") as output_file:
yaml.dump(parsed_pod_file, output_file)
# Copy the config files for the pod, if any
config_dirs = {pod}
config_dirs = config_dirs.union(extra_config_dirs)
for config_dir in config_dirs:
source_config_dir = data_dir.joinpath("config", config_dir)
if os.path.exists(source_config_dir):
destination_config_dir = os.path.join(deployment_dir, "config", config_dir)
# If the same config dir appears in multiple pods, it may already have been copied
if not os.path.exists(destination_config_dir):
copytree(source_config_dir, destination_config_dir)
# Delegate to the stack's Python code
# The deploy create command doesn't require a --stack argument so we need to insert the
# stack member here.
deployment_command_context = ctx.obj
deployment_command_context.stack = stack_name
deployment_context = DeploymentContext(Path(deployment_dir), deployment_command_context)
call_stack_deploy_create(deployment_context, [network_dir, initial_peers])
# TODO: this code should be in the stack .py files but
# we haven't yet figured out how to integrate click across
# the plugin boundary
@click.command()
@click.option("--node-moniker", help="Moniker for this node")
@click.option("--chain-id", help="The new chain id")
@click.option("--key-name", help="Name for new node key")
@click.option("--gentx-files", help="List of comma-delimited gentx filenames from other nodes")
@click.option("--genesis-file", help="Genesis file for the network")
@click.option("--initialize-network", is_flag=True, default=False, help="Initialize phase")
@click.option("--join-network", is_flag=True, default=False, help="Join phase")
@click.option("--create-network", is_flag=True, default=False, help="Create phase")
@click.option("--network-dir", help="Directory for network files")
@click.argument('extra_args', nargs=-1)
@click.pass_context
def setup(ctx, node_moniker, chain_id, key_name, gentx_files, genesis_file, initialize_network, join_network, create_network,
network_dir, extra_args):
parameters = LaconicStackSetupCommand(chain_id, node_moniker, key_name, initialize_network, join_network, create_network,
gentx_files, genesis_file, network_dir)
call_stack_deploy_setup(ctx.obj, parameters, extra_args)

View File

@ -1,96 +0,0 @@
# Copyright © 2022, 2023 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http:#www.gnu.org/licenses/>.
import os.path
import sys
import ruamel.yaml
from pathlib import Path
def include_exclude_check(s, include, exclude):
if include is None and exclude is None:
return True
if include is not None:
include_list = include.split(",")
return s in include_list
if exclude is not None:
exclude_list = exclude.split(",")
return s not in exclude_list
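# Illustrative usage: include_exclude_check("foo", "foo,bar", None) returns True,
# include_exclude_check("foo", None, "foo,bar") returns False, and
# include_exclude_check("foo", None, None) returns True.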
def get_stack_file_path(stack):
# In order to be compatible with Python 3.8 we need to use this hack to get the path:
# See: https://stackoverflow.com/questions/25389095/python-get-path-of-root-project-structure
stack_file_path = Path(__file__).absolute().parent.joinpath("data", "stacks", stack, "stack.yml")
return stack_file_path
# Caller can pass either the name of a stack, or a path to a stack file
def get_parsed_stack_config(stack):
stack_file_path = stack if isinstance(stack, os.PathLike) else get_stack_file_path(stack)
try:
with stack_file_path:
stack_config = get_yaml().load(open(stack_file_path, "r"))
return stack_config
except FileNotFoundError as error:
# We try here to generate a useful diagnostic error
# First check if the stack directory is present
stack_directory = stack_file_path.parent
if os.path.exists(stack_directory):
print(f"Error: stack.yml file is missing from stack: {stack}")
else:
print(f"Error: stack: {stack} does not exist")
print(f"Exiting, error: {error}")
sys.exit(1)
def get_compose_file_dir():
# TODO: refactor to use common code with deploy command
# See: https://stackoverflow.com/questions/25389095/python-get-path-of-root-project-structure
data_dir = Path(__file__).absolute().parent.joinpath("data")
source_compose_dir = data_dir.joinpath("compose")
return source_compose_dir
def get_parsed_deployment_spec(spec_file):
spec_file_path = Path(spec_file)
try:
with spec_file_path:
deploy_spec = get_yaml().load(open(spec_file_path, "r"))
return deploy_spec
except FileNotFoundError as error:
# We try here to generate a useful diagnostic error
print(f"Error: spec file: {spec_file_path} does not exist")
print(f"Exiting, error: {error}")
sys.exit(1)
def get_yaml():
# See: https://stackoverflow.com/a/45701840/1701505
yaml = ruamel.yaml.YAML()
yaml.preserve_quotes = True
yaml.indent(sequence=3, offset=1)
return yaml
# TODO: this is fragile wrt to the subcommand depth
# See also: https://github.com/pallets/click/issues/108
def global_options(ctx):
return ctx.parent.parent.obj
# TODO: hack
def global_options2(ctx):
return ctx.parent.obj

52
cli.py
View File

@ -1,52 +0,0 @@
# Copyright © 2022, 2023 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http:#www.gnu.org/licenses/>.
import click
from app.command_types import CommandOptions
from app import setup_repositories
from app import build_containers
from app import build_npms
from app import deploy
from app import version
from app import deployment
from app import update
CONTEXT_SETTINGS = dict(help_option_names=['-h', '--help'])
@click.group(context_settings=CONTEXT_SETTINGS)
@click.option('--stack', help="specify a stack to build/deploy")
@click.option('--quiet', is_flag=True, default=False)
@click.option('--verbose', is_flag=True, default=False)
@click.option('--dry-run', is_flag=True, default=False)
@click.option('--local-stack', is_flag=True, default=False)
@click.option('--debug', is_flag=True, default=False)
@click.option('--continue-on-error', is_flag=True, default=False)
# See: https://click.palletsprojects.com/en/8.1.x/complex/#building-a-git-clone
@click.pass_context
def cli(ctx, stack, quiet, verbose, dry_run, local_stack, debug, continue_on_error):
"""Laconic Stack Orchestrator"""
ctx.obj = CommandOptions(stack, quiet, verbose, dry_run, local_stack, debug, continue_on_error)
cli.add_command(setup_repositories.command, "setup-repositories")
cli.add_command(build_containers.command, "build-containers")
cli.add_command(build_npms.command, "build-npms")
cli.add_command(deploy.command, "deploy") # deploy is an alias for deploy-system
cli.add_command(deploy.command, "deploy-system")
cli.add_command(deployment.command, "deployment")
cli.add_command(version.command, "version")
cli.add_command(update.command, "update")

View File

@ -26,7 +26,7 @@ In addition to the pre-requisites listed in the [README](/README.md), the follow
1. Clone this repository:
```
$ git clone https://github.com/cerc-io/stack-orchestrator.git
$ git clone https://git.vdb.to/cerc-io/stack-orchestrator.git
```
2. Enter the project directory:

View File

@ -0,0 +1,71 @@
# Adding a new stack
See [this PR](https://git.vdb.to/cerc-io/stack-orchestrator/pull/434) for an example of how to currently add a minimal stack to stack orchestrator. The [reth stack](https://git.vdb.to/cerc-io/stack-orchestrator/pull/435) is another good example.
For external developers, we recommend forking this repo and adding your stack directly to your fork. This initially requires running in "developer mode" as described [here](/docs/CONTRIBUTING.md). Check out the [Namada stack](https://github.com/vknowable/stack-orchestrator/blob/main/app/data/stacks/public-namada/digitalocean_quickstart.md) from Knowable to see how that is done.
A core part of making stack orchestrator feature complete is to [decouple the tool functionality from payload](https://git.vdb.to/cerc-io/stack-orchestrator/issues/315), which will remove the need to fork this repo in order to add a stack.
## Example
- in `stack_orchestrator/data/stacks/my-new-stack/stack.yml` add:
```yaml
version: "0.1"
name: my-new-stack
repos:
- github.com/my-org/my-new-stack
containers:
- cerc/my-new-stack
pods:
- my-new-stack
```
- in `stack_orchestrator/data/container-build/cerc-my-new-stack/build.sh` add:
```bash
#!/usr/bin/env bash
# Build the my-new-stack image
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
docker build -t cerc/my-new-stack:local -f ${CERC_REPO_BASE_DIR}/my-new-stack/Dockerfile ${build_command_args} ${CERC_REPO_BASE_DIR}/my-new-stack
```
- in `stack_orchestrator/data/compose/docker-compose-my-new-stack.yml` add:
```yaml
version: "3.2"
services:
my-new-stack:
image: cerc/my-new-stack:local
restart: always
ports:
- "0.0.0.0:3000:3000"
```
- in `stack_orchestrator/data/repository-list.txt` add:
```bash
github.com/my-org/my-new-stack
```
where that repository contains your source code and a `Dockerfile`, and matches the `repos:` field in the `stack.yml`.
- in `stack_orchestrator/data/container-image-list.txt` add:
```bash
cerc/my-new-stack
```
- in `stack_orchestrator/data/pod-list.txt` add:
```bash
my-new-stack
```
Now, the following commands will fetch, build, and deploy your app:
```bash
laconic-so --stack my-new-stack setup-repositories
laconic-so --stack my-new-stack build-containers
laconic-so --stack my-new-stack deploy-system up
```

View File

@ -51,7 +51,7 @@ $ laconic-so build-npms --include <package-name>
```
e.g.
```
$ laconic-so build-npms --include laconic-sdk
$ laconic-so build-npms --include registry-sdk
```
Build the packages for a stack:
```

View File

@ -0,0 +1,9 @@
# Fetching pre-built container images
When Stack Orchestrator deploys a stack containing a suite of one or more containers, it expects images for those containers to be on the local machine with a tag of the form `<image-name>:local`. Images for these containers can be built from source (and optionally from base container images in public registries) with the `build-containers` subcommand.
However, building a large number of containers from source can consume considerable time and machine resources. This is where the `fetch-containers` subcommand steps in: it works exactly like `build-containers`, but instead fetches pre-built images from an image registry and re-tags them for deployment. It can be used in place of `build-containers` for any stack, provided the necessary containers, built for the local machine architecture (e.g. arm64 or x86-64), have already been published in an image registry.
## Usage
To use `fetch-containers`, provide an image registry path, a username and token/password with read access to the registry, and optionally specify `--force-local-overwrite`. If this flag is not specified and a locally built or previously fetched image for a stack container already exists on the machine, that image will not be overwritten and a warning is issued.
```
$ laconic-so --stack mobymask-v3-demo fetch-containers --image-registry git.vdb.to/cerc-io --registry-username <registry-user> --registry-token <registry-token> --force-local-overwrite
```

View File

@ -56,7 +56,7 @@ laconic-so --stack fixturenet-laconicd build-npms
Navigate to the Gitea console and switch to the `cerc-io` user then find the `Packages` tab to confirm that these two npm packages have been published:
- `@cerc-io/laconic-registry-cli`
- `@cerc-io/laconic-sdk`
- `@cerc-io/registry-sdk`
### Build and deploy fixturenet containers
@ -74,7 +74,7 @@ laconic-so --stack fixturenet-laconicd deploy logs
### Test with the registry CLI
```bash
laconic-so --stack fixturenet-laconicd deploy exec cli "laconic cns status"
laconic-so --stack fixturenet-laconicd deploy exec cli "laconic registry status"
```
Try additional CLI commands, documented [here](https://github.com/cerc-io/laconic-registry-cli#operations).

View File

@ -0,0 +1,27 @@
# K8S Deployment Enhancements
## Controlling pod placement
The placement of pods created as part of a stack deployment can be controlled to either avoid certain nodes, or require certain nodes.
### Pod/Node Affinity
Node affinity rules applied to pods target node labels. The effect is that a pod can only be placed on a node having the specified label value. Note that other pods that do not have any node affinity rules can also be placed on those same nodes. Thus node affinity for a pod controls where that pod can be placed, but does not control where other pods are placed.
Node affinity for stack pods is specified in the deployment's `spec.yml` file as follows:
```
node-affinities:
- label: nodetype
value: typeb
```
This example denotes that the stack's pods should only be placed on nodes that have the label `nodetype` with value `typeb`.
### Node Taint Toleration
K8s nodes can be given one or more "taints". These are special fields (distinct from labels) with a name (key) and optional value.
When placing pods, the k8s scheduler will only assign a pod to a tainted node if the pod possesses a corresponding "toleration".
This is metadata associated with the pod that specifies that the pod "tolerates" a given taint.
Therefore taint toleration provides a mechanism by which only certain pods can be placed on specific nodes, and provides a complementary mechanism to node affinity.
Taint toleration for stack pods is specified in the deployment's `spec.yml` file as follows:
```
node-tolerations:
- key: nodetype
value: typeb
```
This example denotes that the stack's pods will tolerate a taint: `nodetype=typeb`.

View File

@ -1,146 +1,153 @@
# Running a laconicd fixturenet with console
The following tutorial explains the steps to run a laconicd fixturenet with CLI and web console that displays records in the registry. It is designed as an introduction to Stack Orchestrator and to showcase one component of the Laconic Stack. Prior to Stack Orchestrator, the following 4 repositories had to be cloned and setup manually:
The following tutorial explains the steps to run a laconicd fixturenet with CLI and web console that displays records in the registry. It is designed as an introduction to Stack Orchestrator and to showcase one component of the Laconic Stack. Prior to Stack Orchestrator, the following repositories had to be cloned and setup manually:
- https://github.com/cerc-io/laconicd
- https://github.com/cerc-io/laconic-sdk
- https://github.com/cerc-io/laconic-registry-cli
- https://github.com/cerc-io/laconic-console
- https://git.vdb.to/cerc-io/laconicd
- https://git.vdb.to/cerc-io/laconic-registry-cli
- https://git.vdb.to/cerc-io/laconic-console
Now, with Stack Orchestrator, it is a few quick commands. Additionally, the `docker` and `docker compose` integration on the back-end allows the stack to easily persist, facilitating workflows.
## Setup laconic-so
To avoid hiccups on Mac M1/M2 and any local machine nuances that may affect the user experience, this tutorial is focused on using a fresh Digital Ocean (DO) droplet with similar specs:
To avoid hiccups on Mac M1/M2 and any local machine nuances that may affect the user experience, this tutorial is focused on using a fresh Digital Ocean (DO) droplet with similar specs:
16 GB Memory / 8 Intel vCPUs / 160 GB Disk.
1. Login to the droplet as root (either by SSH key or password set in the DO console)
```
ssh root@IP
```
```
ssh root@IP
```
1. Get the install script, give it executable permissions, and run it:
2. Get the install script, give it executable permissions, and run it:
```
curl -o install.sh https://raw.githubusercontent.com/cerc-io/stack-orchestrator/main/scripts/quick-install-linux.sh
```
```
chmod +x install.sh
```
```
bash install.sh
```
```
curl -o install.sh https://raw.githubusercontent.com/cerc-io/stack-orchestrator/main/scripts/quick-install-linux.sh
```
```
chmod +x install.sh
```
```
bash install.sh
```
1. Confirm docker was installed and activate the changes in `~/.profile`:
3. Confirm docker was installed and activate the changes in `~/.profile`:
```
docker run hello-world
```
```
source ~/.profile
```
```
docker run hello-world
```
```
source ~/.profile
```
1. Verify installation:
4. Verify installation:
```
laconic-so version
```
```
laconic-so version
```
## Setup the laconic fixturenet stack
1. Get the repositories
```
laconic-so --stack fixturenet-laconic-loaded setup-repositories --include github.com/cerc-io/laconicd,github.com/cerc-io/laconic-sdk,github.com/cerc-io/laconic-registry-cli,github.com/cerc-io/laconic-console
```
```
laconic-so --stack fixturenet-laconic-loaded setup-repositories --include git.vdb.to/cerc-io/laconicd
```
2. Set this environment variable to the Laconic self-hosted Gitea instance:
1. Build the containers:
```
export CERC_NPM_REGISTRY_URL=https://git.vdb.to/api/packages/cerc-io/npm/
```
```
laconic-so --stack fixturenet-laconic-loaded build-containers
```
3. Build the containers:
It's possible to run into an `ESOCKETTIMEDOUT` error, e.g., `error An unexpected error occurred: "https://registry.yarnpkg.com/@material-ui/icons/-/icons-4.11.3.tgz: ESOCKETTIMEDOUT"`. This may happen even if you have a great internet connection. In that case, re-run the `build-containers` command.
```
laconic-so --stack fixturenet-laconic-loaded build-containers
```
It's possible to run into an `ESOCKETTIMEDOUT` error, e.g., `error An unexpected error occurred: "https://registry.yarnpkg.com/@material-ui/icons/-/icons-4.11.3.tgz: ESOCKETTIMEDOUT"`. This may happen even if you have a great internet connection. In that case, re-run the `build-containers` command.
1. Set this environment variable to your droplet's IP address or fully qualified DNS host name if it has one:
4. Set this environment variable to your droplet's IP address:
```
export BACKEND_ENDPOINT=http://<your-IP-or-hostname>:9473
```
e.g.
```
export BACKEND_ENDPOINT=http://my-test-server.example.com:9473
```
```
export LACONIC_HOSTED_ENDPOINT=http://<your-IP>
```
1. Create a deployment directory for the stack:
```
laconic-so --stack fixturenet-laconic-loaded deploy init --output laconic-loaded.spec --map-ports-to-host any-same --config LACONIC_HOSTED_ENDPOINT=$BACKEND_ENDPOINT
5. Deploy the stack:
# Update port mapping in the laconic-loaded.spec file to resolve port conflicts on host if any
```
```
laconic-so --stack fixturenet-laconic-loaded deploy create --deployment-dir laconic-loaded-deployment --spec-file laconic-loaded.spec
```
2. Start the stack:
```
laconic-so --stack fixturenet-laconic-loaded deploy up
```
```
laconic-so deployment --dir laconic-loaded-deployment start
```
6. Check the logs:
3. Check the logs:
```
laconic-so --stack fixturenet-laconic-loaded deploy logs
```
```
laconic-so deployment --dir laconic-loaded-deployment logs
```
You'll see output from `laconicd` and the block height should be >1 to confirm it is running:
You'll see output from `laconicd` and the block height should be >1 to confirm it is running:
```
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:29PM INF indexed block exents height=12 module=txindex server=node
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF Timed out dur=4976.960115 height=13 module=consensus round=0 server=node step=1
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF received proposal module=consensus proposal={"Type":32,"block_id":{"hash":"D26C088A711F912ADB97888C269F628DA33153795621967BE44DCB43C3D03CA4","parts":{"hash":"22411A20B7F14CDA33244420FBDDAF24450C0628C7A06034FF22DAC3699DDCC8","total":1}},"height":13,"pol_round":-1,"round":0,"signature":"DEuqnaQmvyYbUwckttJmgKdpRu6eVm9i+9rQ1pIrV2PidkMNdWRZBLdmNghkIrUzGbW8Xd7UVJxtLRmwRASgBg==","timestamp":"2023-04-18T21:30:01.49450663Z"} server=node
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF received complete proposal block hash=D26C088A711F912ADB97888C269F628DA33153795621967BE44DCB43C3D03CA4 height=13 module=consensus server=node
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF finalizing commit of block hash={} height=13 module=consensus num_txs=0 root=1A8CA1AF139CCC80EC007C6321D8A63A46A793386EE2EDF9A5CA0AB2C90728B7 server=node
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF minted coins from module account amount=2059730459416582643aphoton from=mint module=x/bank
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF executed block height=13 module=state num_invalid_txs=0 num_valid_txs=0 server=node
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF commit synced commit=436F6D6D697449447B5B363520313037203630203232372039352038352032303820313334203231392032303520313433203130372031343920313431203139203139322038362031323720362031383520323533203137362031333820313735203135392031383620323334203135382031323120313431203230342037335D3A447D
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF committed state app_hash=416B3CE35F55D086DBCD8F6B958D13C0567F06B9FDB08AAF9FBAEA9E798DCC49 height=13 module=state num_txs=0 server=node
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF indexed block exents height=13 module=txindex server=node
```
```
laconicd-1 | 6:12AM INF indexed block events height=16 module=txindex
laconicd-1 | 6:12AM INF Timed out dur=2993.893332 height=17 module=consensus round=0 step=RoundStepNewHeight
laconicd-1 | 6:12AM INF received proposal module=consensus proposal="Proposal{17/0 (E15D03C180CE607AE8340A1325A0C134DFB4E1ADD992E173C701EBD362523267:1:DF138772FEF0, -1) 6A6F3B0A42B3 @ 2024-07-25T06:12:31.952967053Z}" proposer=86970D950BC9C16F3991A52D9C6DC55BA478A7C6
laconicd-1 | 6:12AM INF received complete proposal block hash=E15D03C180CE607AE8340A1325A0C134DFB4E1ADD992E173C701EBD362523267 height=17 module=consensus
laconicd-1 | 6:12AM INF finalizing commit of block hash=E15D03C180CE607AE8340A1325A0C134DFB4E1ADD992E173C701EBD362523267 height=17 module=consensus num_txs=0 root=AF4941107DC718ED1425E77A3DC7F1154FB780B7A7DE20288DC43442203527E3
laconicd-1 | 6:12AM INF finalized block block_app_hash=26A665360BB1EE64E54F97F2A5AB7F621B33A86D9896574000C05DE63F43F788 height=17 module=state num_txs_res=0 num_val_updates=0
laconicd-1 | 6:12AM INF executed block app_hash=26A665360BB1EE64E54F97F2A5AB7F621B33A86D9896574000C05DE63F43F788 height=17 module=state
laconicd-1 | 6:12AM INF committed state block_app_hash=AF4941107DC718ED1425E77A3DC7F1154FB780B7A7DE20288DC43442203527E3 height=17 module=state
laconicd-1 | 6:12AM INF indexed block events height=17 module=txindex
```
7. Confirm operation of the registry CLI:
4. Confirm operation of the registry CLI:
```
laconic-so --stack fixturenet-laconic-loaded deploy exec cli "laconic cns status"
```
```
laconic-so deployment --dir laconic-loaded-deployment exec cli "laconic registry status"
```
```
{
"version": "0.3.0",
"node": {
"id": "4216af2ac9f68bda33a38803fc1b5c9559312c1d",
"network": "laconic_9000-1",
"moniker": "localtestnet"
},
"sync": {
"latest_block_hash": "1BDF4CB9AE2390DA65BCF997C83133C18014FCDDCAE03708488F0B56FCEEA429",
"latest_block_height": "5",
"latest_block_time": "2023-08-09 16:00:30.386903172 +0000 UTC",
"catching_up": false
},
"validator": {
"address": "651FBC700B747C76E90ACFC18CC9508C3D0905B9",
"voting_power": "1000000000000000"
},
"validators": [
{
"address": "651FBC700B747C76E90ACFC18CC9508C3D0905B9",
"voting_power": "1000000000000000",
"proposer_priority": "0"
}
],
"num_peers": "0",
"peers": [],
"disk_usage": "292.0K"
}
```
```
{
"version": "0.3.0",
"node": {
"id": "6e072894aa1f5d9535a1127a0d7a7f8e65100a2c",
"network": "laconic_9000-1",
"moniker": "localtestnet"
},
"sync": {
"latestBlockHash": "260102C283D0411CFBA0270F7DC182650FFCA737A2F6F652A985F6065696F590",
"latestBlockHeight": "49",
"latestBlockTime": "2024-07-25 06:14:05.626744215 +0000 UTC",
"catchingUp": false
},
"validator": {
"address": "86970D950BC9C16F3991A52D9C6DC55BA478A7C6",
"votingPower": "1000000000000000"
},
"validators": [
{
"address": "86970D950BC9C16F3991A52D9C6DC55BA478A7C6",
"votingPower": "1000000000000000",
"proposerPriority": "0"
}
],
"numPeers": "0",
"peers": [],
"diskUsage": "688K"
}
```
## Configure Digital Ocean firewall
(Note this step may not be necessary depending on the droplet image used)
Let's open some ports.
1. In the Digital Ocean web console, navigate to your droplet's main page. Select the "Networking" tab and scroll down to "Firewall".
@ -179,13 +186,13 @@ wns
1. The following command will create a bond and publish a record:
```
laconic-so --stack fixturenet-laconic-loaded deploy exec cli ./scripts/create-demo-records.sh
laconic-so deployment --dir laconic-loaded-deployment exec cli ./scripts/create-demo-records.sh
```
You'll get an output like:
```
Balance is: 99998999999999998999600000
Balance is: 9.9999e+25
Created bond with id: dd88e8d6f9567b32b28e70552aea4419c5dd3307ebae85a284d1fe38904e301a
Published demo-record-1.yml with id: bafyreierh3xnfivexlscdwubvczmddsnf46uytyfvrbdhkjzztvsz6ruly
```
@ -212,9 +219,9 @@ record:
3. Try out additional CLI commands
- these are documented [here](https://github.com/cerc-io/laconic-registry-cli#readme) and updates are forthcoming
- these are documented [here](https://git.vdb.to/cerc-io/laconic-registry-cli#readme) and updates are forthcoming
- e.g,:
```
laconic-so --stack fixturenet-laconic-loaded deploy exec cli "laconic cns record list"
laconic-so deployment --dir laconic-loaded-deployment exec cli "laconic registry record list"
```

View File

@ -1,6 +1,6 @@
# Specification
Note: this page is out of date (but still useful) - it will no longer be useful once stacks are [decoupled from the tool functionality](https://github.com/cerc-io/stack-orchestrator/issues/315).
Note: this page is out of date (but still useful) - it will no longer be useful once stacks are [decoupled from the tool functionality](https://git.vdb.to/cerc-io/stack-orchestrator/issues/315).
## Implementation

64
docs/webapp.md Normal file
View File

@ -0,0 +1,64 @@
### Building and Running Webapps
It is possible to build and run Next.js webapps using the `build-webapp` and `run-webapp` subcommands.
To make it easier to build once and deploy into different environments and with different configuration,
compilation and static page generation are separated in the `build-webapp` and `run-webapp` steps.
This offers much more flexibility than standard Next.js build methods, since any environment variables accessed
via `process.env`, whether for pages or for API, will have values drawn from their runtime deployment environment,
not their build environment.
## Building
Building usually requires no additional configuration. By default, the Next.js version specified in `package.json`
is used, and either `yarn` or `npm` will be used automatically depending on which lock files are present. These
can be overridden with the build arguments `CERC_NEXT_VERSION` and `CERC_BUILD_TOOL` respectively. For example: `--extra-build-args "--build-arg CERC_NEXT_VERSION=13.4.12"`
**Example**:
```
$ cd ~/cerc
$ git clone git@git.vdb.to:cerc-io/test-progressive-web-app.git
$ laconic-so build-webapp --source-repo ~/cerc/test-progressive-web-app
...
Built host container for ~/cerc/test-progressive-web-app with tag:
cerc/test-progressive-web-app:local
To test locally run:
laconic-so run-webapp --image cerc/test-progressive-web-app:local --env-file /path/to/environment.env
```
## Running
With `run-webapp` a new container will be launched on the local machine, with runtime configuration provided by `--env-file` (if specified) and published on an available port. Multiple instances can be launched with different configuration.
**Example**:
```
# Production env
$ laconic-so run-webapp --image cerc/test-progressive-web-app:local --env-file /path/to/environment/production.env
Image: cerc/test-progressive-web-app:local
ID: 4c6e893bf436b3e91a2b92ce37e30e499685131705700bd92a90d2eb14eefd05
URL: http://localhost:32768
# Dev env
$ laconic-so run-webapp --image cerc/test-progressive-web-app:local --env-file /path/to/environment/dev.env
Image: cerc/test-progressive-web-app:local
ID: 9ab96494f563aafb6c057d88df58f9eca81b90f8721a4e068493a289a976051c
URL: http://localhost:32769
```
## Deploying
Use the subcommand `deploy-webapp create` to make a deployment directory that can be subsequently deployed to a Kubernetes cluster.
Example commands are shown below, assuming that the webapp container image `cerc/test-progressive-web-app:local` has already been built:
```
$ laconic-so deploy-webapp create --kube-config ~/kubectl/k8s-kubeconfig.yaml --image-registry registry.digitalocean.com/laconic-registry --deployment-dir webapp-k8s-deployment --image cerc/test-progressive-web-app:local --url https://test-pwa-app.hosting.laconic.com/ --env-file test-webapp.env
$ laconic-so deployment --dir webapp-k8s-deployment push-images
$ laconic-so deployment --dir webapp-k8s-deployment start
```

View File

@ -1,4 +1,5 @@
python-decouple>=3.8
python-dotenv==1.0.0
GitPython>=3.1.32
tqdm>=4.65.0
python-on-whales>=0.64.0
@ -8,3 +9,7 @@ ruamel.yaml>=0.17.32
pydantic==1.10.9
tomli==2.0.1
validators==0.22.0
kubernetes>=28.1.0
humanfriendly>=10.0
python-gnupg>=0.5.2
requests>=2.3.2

View File

@ -41,4 +41,4 @@ runcmd:
- apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
- systemctl enable docker
- systemctl start docker
- git clone https://github.com/cerc-io/stack-orchestrator.git /home/ubuntu/stack-orchestrator
- git clone https://git.vdb.to/cerc-io/stack-orchestrator.git /home/ubuntu/stack-orchestrator

View File

@ -31,5 +31,5 @@ runcmd:
- apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
- systemctl enable docker
- systemctl start docker
- curl -L -o /usr/local/bin/laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
- curl -L -o /usr/local/bin/laconic-so https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
- chmod +x /usr/local/bin/laconic-so

View File

@ -1,6 +1,6 @@
build_tag_file_name=./app/data/build_tag.txt
build_tag_file_name=./stack_orchestrator/data/build_tag.txt
echo "# This file should be re-generated running: scripts/create_build_tag_file.sh script" > $build_tag_file_name
product_version_string=$( tail -1 ./app/data/version.txt )
product_version_string=$( tail -1 ./stack_orchestrator/data/version.txt )
commit_string=$( git rev-parse --short HEAD )
timestamp_string=$(date +'%Y%m%d%H%M')
build_tag_string=${product_version_string}-${commit_string}-${timestamp_string}

19
scripts/quick-deploy-test.sh Executable file
View File

@ -0,0 +1,19 @@
#!/usr/bin/env bash
# Beginnings of a script to quickly spin up and test a deployment
if [[ -n "$CERC_SCRIPT_DEBUG" ]]; then
set -x
fi
if [[ -n "$1" ]]; then
stack_name=$1
else
stack_name="test"
fi
spec_file_name="${stack_name}-spec.yml"
deployment_dir_name="${stack_name}-deployment"
rm -f ${spec_file_name}
rm -rf ${deployment_dir_name}
laconic-so --stack ${stack_name} deploy --deploy-to compose init --output ${spec_file_name}
laconic-so --stack ${stack_name} deploy --deploy-to compose create --deployment-dir ${deployment_dir_name} --spec-file ${spec_file_name}
#laconic-so deployment --dir ${deployment_dir_name} start
#laconic-so deployment --dir ${deployment_dir_name} ps
#laconic-so deployment --dir ${deployment_dir_name} stop

View File

@ -137,7 +137,7 @@ fi
echo "**************************************************************************************"
echo "Installing laconic-so"
# install latest `laconic-so`
distribution_url=https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
distribution_url=https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
install_filename=${install_dir}/laconic-so
mkdir -p ${install_dir}
curl -L -o ${install_filename} ${distribution_url}

View File

@ -4,17 +4,19 @@ with open("README.md", "r", encoding="utf-8") as fh:
long_description = fh.read()
with open("requirements.txt", "r", encoding="utf-8") as fh:
requirements = fh.read()
with open("stack_orchestrator/data/version.txt", "r", encoding="utf-8") as fh:
version = fh.readlines()[-1].strip(" \n")
setup(
name='laconic-stack-orchestrator',
version='1.0.12',
version=version,
author='Cerc',
author_email='info@cerc.io',
license='GNU Affero General Public License',
description='Orchestrates deployment of the Laconic stack',
long_description=long_description,
long_description_content_type="text/markdown",
url='https://github.com/cerc-io/stack-orchestrator',
py_modules=['cli', 'app'],
url='https://git.vdb.to/cerc-io/stack-orchestrator',
py_modules=['stack_orchestrator'],
packages=find_packages(),
install_requires=[requirements],
python_requires='>=3.7',
@ -25,6 +27,6 @@ setup(
"Operating System :: OS Independent",
],
entry_points={
'console_scripts': ['laconic-so=cli:cli'],
'console_scripts': ['laconic-so=stack_orchestrator.main:cli'],
}
)

View File

@ -15,7 +15,7 @@
import os
from abc import ABC, abstractmethod
from app.deploy import get_stack_status
from stack_orchestrator.deploy.deploy import get_stack_status
from decouple import config

View File

@ -0,0 +1,183 @@
# Copyright © 2022, 2023 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http:#www.gnu.org/licenses/>.
# Builds or pulls containers for the system components
# env vars:
# CERC_REPO_BASE_DIR defaults to ~/cerc
# TODO: display the available list of containers; allow re-build of either all or specific containers
import os
import sys
from decouple import config
import subprocess
import click
from pathlib import Path
from stack_orchestrator.opts import opts
from stack_orchestrator.util import include_exclude_check, stack_is_external, error_exit
from stack_orchestrator.base import get_npm_registry_url
from stack_orchestrator.build.build_types import BuildContext
from stack_orchestrator.build.publish import publish_image
from stack_orchestrator.build.build_util import get_containers_in_scope
# TODO: find a place for this
# epilog="Config provided either in .env or settings.ini or env vars: CERC_REPO_BASE_DIR (defaults to ~/cerc)"
def make_container_build_env(dev_root_path: str,
container_build_dir: str,
debug: bool,
force_rebuild: bool,
extra_build_args: str):
container_build_env = {
"CERC_NPM_REGISTRY_URL": get_npm_registry_url(),
"CERC_GO_AUTH_TOKEN": config("CERC_GO_AUTH_TOKEN", default=""),
"CERC_NPM_AUTH_TOKEN": config("CERC_NPM_AUTH_TOKEN", default=""),
"CERC_REPO_BASE_DIR": dev_root_path,
"CERC_CONTAINER_BASE_DIR": container_build_dir,
"CERC_HOST_UID": f"{os.getuid()}",
"CERC_HOST_GID": f"{os.getgid()}",
"DOCKER_BUILDKIT": config("DOCKER_BUILDKIT", default="0")
}
container_build_env.update({"CERC_SCRIPT_DEBUG": "true"} if debug else {})
container_build_env.update({"CERC_FORCE_REBUILD": "true"} if force_rebuild else {})
container_build_env.update({"CERC_CONTAINER_EXTRA_BUILD_ARGS": extra_build_args} if extra_build_args else {})
docker_host_env = os.getenv("DOCKER_HOST")
if docker_host_env:
container_build_env.update({"DOCKER_HOST": docker_host_env})
return container_build_env
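# Illustrative note: the environment assembled here is passed to each container's
# build.sh (or default-build.sh) via subprocess.run in process_container below.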
def process_container(build_context: BuildContext) -> bool:
if not opts.o.quiet:
print(f"Building: {build_context.container}")
default_container_tag = f"{build_context.container}:local"
build_context.container_build_env.update({"CERC_DEFAULT_CONTAINER_IMAGE_TAG": default_container_tag})
# Check if this is in an external stack
if stack_is_external(build_context.stack):
container_parent_dir = Path(build_context.stack).parent.parent.joinpath("container-build")
temp_build_dir = container_parent_dir.joinpath(build_context.container.replace("/", "-"))
temp_build_script_filename = temp_build_dir.joinpath("build.sh")
# Now check if the container exists in the external stack.
if not temp_build_script_filename.exists():
# If not, revert to building an internal container
container_parent_dir = build_context.container_build_dir
else:
container_parent_dir = build_context.container_build_dir
build_dir = container_parent_dir.joinpath(build_context.container.replace("/", "-"))
build_script_filename = build_dir.joinpath("build.sh")
if opts.o.verbose:
print(f"Build script filename: {build_script_filename}")
if os.path.exists(build_script_filename):
build_command = build_script_filename.as_posix()
else:
if opts.o.verbose:
print(f"No script file found: {build_script_filename}, using default build script")
repo_dir = build_context.container.split('/')[1]
# TODO: make this less of a hack -- should be specified in some metadata somewhere
# Check if we have a repo for this container. If not, set the context dir to the container-build subdir
repo_full_path = os.path.join(build_context.dev_root_path, repo_dir)
repo_dir_or_build_dir = repo_full_path if os.path.exists(repo_full_path) else build_dir
build_command = os.path.join(build_context.container_build_dir,
"default-build.sh") + f" {default_container_tag} {repo_dir_or_build_dir}"
if not opts.o.dry_run:
# No PATH at all causes failures with podman.
if "PATH" not in build_context.container_build_env:
build_context.container_build_env["PATH"] = os.environ["PATH"]
if opts.o.verbose:
print(f"Executing: {build_command} with environment: {build_context.container_build_env}")
build_result = subprocess.run(build_command, shell=True, env=build_context.container_build_env)
if opts.o.verbose:
print(f"Return code is: {build_result.returncode}")
if build_result.returncode != 0:
return False
else:
return True
else:
print("Skipped")
return True
@click.command()
@click.option('--include', help="only build these containers")
@click.option('--exclude', help="don\'t build these containers")
@click.option("--force-rebuild", is_flag=True, default=False, help="Override dependency checking -- always rebuild")
@click.option("--extra-build-args", help="Supply extra arguments to build")
@click.option("--publish-images", is_flag=True, default=False, help="Publish the built images in the specified image registry")
@click.option("--image-registry", help="Specify the image registry for --publish-images")
@click.pass_context
def command(ctx, include, exclude, force_rebuild, extra_build_args, publish_images, image_registry):
'''build the set of containers required for a complete stack'''
local_stack = ctx.obj.local_stack
stack = ctx.obj.stack
# See: https://stackoverflow.com/questions/25389095/python-get-path-of-root-project-structure
container_build_dir = Path(__file__).absolute().parent.parent.joinpath("data", "container-build")
if local_stack:
dev_root_path = os.getcwd()[0:os.getcwd().rindex("stack-orchestrator")]
print(f'Local stack dev_root_path (CERC_REPO_BASE_DIR) overridden to: {dev_root_path}')
else:
dev_root_path = os.path.expanduser(config("CERC_REPO_BASE_DIR", default="~/cerc"))
if not opts.o.quiet:
print(f'Dev Root is: {dev_root_path}')
if not os.path.isdir(dev_root_path):
print('Dev root directory doesn\'t exist, creating')
if publish_images:
if not image_registry:
error_exit("--image-registry must be supplied with --publish-images")
containers_in_scope = get_containers_in_scope(stack)
container_build_env = make_container_build_env(dev_root_path,
container_build_dir,
opts.o.debug,
force_rebuild,
extra_build_args)
for container in containers_in_scope:
if include_exclude_check(container, include, exclude):
build_context = BuildContext(
stack,
container,
container_build_dir,
container_build_env,
dev_root_path
)
result = process_container(build_context)
if result:
if publish_images:
publish_image(f"{container}:local", image_registry)
else:
print(f"Error running build for {build_context.container}")
if not opts.o.continue_on_error:
error_exit("container build failed and --continue-on-error not set, exiting")
sys.exit(1)
else:
print("****** Container Build Error, continuing because --continue-on-error is set")
else:
if opts.o.verbose:
print(f"Excluding: {container}")

View File

@ -25,8 +25,8 @@ from decouple import config
import click
import importlib.resources
from python_on_whales import docker, DockerException
from app.base import get_stack
from app.util import include_exclude_check, get_parsed_stack_config
from stack_orchestrator.base import get_stack
from stack_orchestrator.util import include_exclude_check, get_parsed_stack_config
builder_js_image_name = "cerc/builder-js:local"
@ -83,7 +83,7 @@ def command(ctx, include, exclude, force_rebuild, extra_build_args):
os.makedirs(build_root_path)
# See: https://stackoverflow.com/a/20885799/1701505
from app import data
from stack_orchestrator import data
with importlib.resources.open_text(data, "npm-package-list.txt") as package_list_file:
all_packages = package_list_file.read().splitlines()

View File

@ -0,0 +1,29 @@
# Copyright © 2024 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http:#www.gnu.org/licenses/>.
from dataclasses import dataclass
from pathlib import Path
from typing import Mapping
@dataclass
class BuildContext:
stack: str
container: str
container_build_dir: Path
container_build_env: Mapping[str,str]
dev_root_path: str

View File

@ -0,0 +1,41 @@
# Copyright © 2024 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http:#www.gnu.org/licenses/>.
import importlib.resources
from stack_orchestrator.opts import opts
from stack_orchestrator.util import get_parsed_stack_config, warn_exit
def get_containers_in_scope(stack: str):
containers_in_scope = []
if stack:
stack_config = get_parsed_stack_config(stack)
if "containers" not in stack_config or stack_config["containers"] is None:
warn_exit(f"stack {stack} does not define any containers")
containers_in_scope = stack_config['containers']
else:
# See: https://stackoverflow.com/a/20885799/1701505
from stack_orchestrator import data
with importlib.resources.open_text(data, "container-image-list.txt") as container_list_file:
containers_in_scope = container_list_file.read().splitlines()
if opts.o.verbose:
print(f'Containers: {containers_in_scope}')
if stack:
print(f"Stack: {stack}")
return containers_in_scope
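# Illustrative note: with no --stack supplied, every image listed in
# container-image-list.txt is in scope; with --stack my-stack, the "containers"
# list from that stack's stack.yml is used instead.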

View File

@ -0,0 +1,117 @@
# Copyright © 2022, 2023 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http:#www.gnu.org/licenses/>.
# Builds webapp containers
# env vars:
# CERC_REPO_BASE_DIR defaults to ~/cerc
# TODO: display the available list of containers; allow re-build of either all or specific containers
import os
import sys
from decouple import config
import click
from pathlib import Path
from stack_orchestrator.build import build_containers
from stack_orchestrator.deploy.webapp.util import determine_base_container, TimedLogger
from stack_orchestrator.build.build_types import BuildContext
@click.command()
@click.option('--base-container')
@click.option('--source-repo', help="directory containing the webapp to build", required=True)
@click.option("--force-rebuild", is_flag=True, default=False, help="Override dependency checking -- always rebuild")
@click.option("--extra-build-args", help="Supply extra arguments to build")
@click.option("--tag", help="Container tag (default: cerc/<app_name>:local)")
@click.pass_context
def command(ctx, base_container, source_repo, force_rebuild, extra_build_args, tag):
'''build the specified webapp container'''
logger = TimedLogger()
quiet = ctx.obj.quiet
debug = ctx.obj.debug
verbose = ctx.obj.verbose
local_stack = ctx.obj.local_stack
stack = ctx.obj.stack
# See: https://stackoverflow.com/questions/25389095/python-get-path-of-root-project-structure
container_build_dir = Path(__file__).absolute().parent.parent.joinpath("data", "container-build")
if local_stack:
dev_root_path = os.getcwd()[0:os.getcwd().rindex("stack-orchestrator")]
logger.log(f'Local stack dev_root_path (CERC_REPO_BASE_DIR) overridden to: {dev_root_path}')
else:
dev_root_path = os.path.expanduser(config("CERC_REPO_BASE_DIR", default="~/cerc"))
if verbose:
logger.log(f'Dev Root is: {dev_root_path}')
if not base_container:
base_container = determine_base_container(source_repo)
# First build the base container.
container_build_env = build_containers.make_container_build_env(dev_root_path, container_build_dir, debug,
force_rebuild, extra_build_args)
if verbose:
logger.log(f"Building base container: {base_container}")
build_context_1 = BuildContext(
stack,
base_container,
container_build_dir,
container_build_env,
dev_root_path,
)
ok = build_containers.process_container(build_context_1)
if not ok:
logger.log("ERROR: Build failed.")
sys.exit(1)
if verbose:
logger.log(f"Base container {base_container} build finished.")
# Now build the target webapp. We use the same build script, but with a different Dockerfile and work dir.
container_build_env["CERC_WEBAPP_BUILD_RUNNING"] = "true"
container_build_env["CERC_CONTAINER_BUILD_WORK_DIR"] = os.path.abspath(source_repo)
container_build_env["CERC_CONTAINER_BUILD_DOCKERFILE"] = os.path.join(container_build_dir,
base_container.replace("/", "-"),
"Dockerfile.webapp")
if not tag:
webapp_name = os.path.abspath(source_repo).split(os.path.sep)[-1]
tag = f"cerc/{webapp_name}:local"
container_build_env["CERC_CONTAINER_BUILD_TAG"] = tag
if verbose:
logger.log(f"Building app container: {tag}")
build_context_2 = BuildContext(
stack,
base_container,
container_build_dir,
container_build_env,
dev_root_path,
)
ok = build_containers.process_container(build_context_2)
if not ok:
logger.log("ERROR: Build failed.")
sys.exit(1)
if verbose:
logger.log(f"App container {base_container} build finished.")
logger.log("build-webapp complete", show_step_time=False, show_total_time=True)

View File

@ -0,0 +1,195 @@
# Copyright © 2024 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http:#www.gnu.org/licenses/>.
import click
from dataclasses import dataclass
import json
import platform
from python_on_whales import DockerClient
from python_on_whales.components.manifest.cli_wrapper import ManifestCLI, ManifestList
from python_on_whales.utils import run
import requests
from typing import List
from stack_orchestrator.opts import opts
from stack_orchestrator.util import include_exclude_check, error_exit
from stack_orchestrator.build.build_util import get_containers_in_scope
# Experimental fetch-container command
@dataclass
class RegistryInfo:
registry: str
registry_username: str
registry_token: str
# Extending this code to support the --verbose option, consider contributing upstream
# https://github.com/gabrieldemarmiesse/python-on-whales/blob/master/python_on_whales/components/manifest/cli_wrapper.py#L129
class ExtendedManifestCLI(ManifestCLI):
def inspect_verbose(self, x: str) -> ManifestList:
"""Returns a Docker manifest list object."""
json_str = run(self.docker_cmd + ["manifest", "inspect", "--verbose", x])
return json.loads(json_str)
def _local_tag_for(container: str):
return f"{container}:local"
# See: https://docker-docs.uclv.cu/registry/spec/api/
# Emulate this:
# $ curl -u "my-username:my-token" -X GET "https://<container-registry-hostname>/v2/cerc-io/cerc/test-container/tags/list"
# {"name":"cerc-io/cerc/test-container","tags":["202402232130","202402232208"]}
def _get_tags_for_container(container: str, registry_info: RegistryInfo) -> List[str]:
# registry looks like: git.vdb.to/cerc-io
registry_parts = registry_info.registry.split("/")
url = f"https://{registry_parts[0]}/v2/{registry_parts[1]}/{container}/tags/list"
if opts.o.debug:
print(f"Fetching tags from: {url}")
response = requests.get(url, auth=(registry_info.registry_username, registry_info.registry_token))
if response.status_code == 200:
tag_info = response.json()
if opts.o.debug:
print(f"container tags list: {tag_info}")
tags_array = tag_info["tags"]
return tags_array
else:
error_exit(f"failed to fetch tags from image registry, status code: {response.status_code}")
def _find_latest(candidate_tags: List[str]):
# Lex sort should give us the latest first
sorted_candidates = sorted(candidate_tags)
if opts.o.debug:
print(f"sorted candidates: {sorted_candidates}")
return sorted_candidates[-1]
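# Illustrative example: tags are timestamp-like strings, so a lexicographic sort
# orders them chronologically, e.g. _find_latest(["202402232130", "202402232208"])
# returns "202402232208".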
def _filter_for_platform(container: str,
                         registry_info: RegistryInfo,
                         tag_list: List[str]) -> List[str]:
    filtered_tags = []
    this_machine = platform.machine()
    # Translate between Python and docker platform names
    if this_machine == "x86_64":
        this_machine = "amd64"
    if this_machine == "aarch64":
        this_machine = "arm64"
    if opts.o.debug:
        print(f"Python says the architecture is: {this_machine}")
    docker = DockerClient()
    for tag in tag_list:
        remote_tag = f"{registry_info.registry}/{container}:{tag}"
        manifest_cmd = ExtendedManifestCLI(docker.client_config)
        manifest = manifest_cmd.inspect_verbose(remote_tag)
        if opts.o.debug:
            print(f"manifest: {manifest}")
        image_architecture = manifest["Descriptor"]["platform"]["architecture"]
        if opts.o.debug:
            print(f"image_architecture: {image_architecture}")
        if this_machine == image_architecture:
            filtered_tags.append(tag)
    if opts.o.debug:
        print(f"Tags filtered for platform: {filtered_tags}")
    return filtered_tags


def _get_latest_image(container: str, registry_info: RegistryInfo):
    all_tags = _get_tags_for_container(container, registry_info)
    tags_for_platform = _filter_for_platform(container, registry_info, all_tags)
    if len(tags_for_platform) > 0:
        latest_tag = _find_latest(tags_for_platform)
        return f"{container}:{latest_tag}"
    else:
        return None


def _fetch_image(tag: str, registry_info: RegistryInfo):
    docker = DockerClient()
    remote_tag = f"{registry_info.registry}/{tag}"
    if opts.o.debug:
        print(f"Attempting to pull this image: {remote_tag}")
    docker.image.pull(remote_tag)


def _exists_locally(container: str):
    docker = DockerClient()
    return docker.image.exists(_local_tag_for(container))


def _add_local_tag(remote_tag: str, registry: str, local_tag: str):
    docker = DockerClient()
    docker.image.tag(f"{registry}/{remote_tag}", local_tag)
@click.command()
@click.option('--include', help="only fetch these containers")
@click.option('--exclude', help="don't fetch these containers")
@click.option("--force-local-overwrite", is_flag=True, default=False, help="Overwrite a locally built image, if present")
@click.option("--image-registry", required=True, help="Specify the image registry to fetch from")
@click.option("--registry-username", required=True, help="Specify the image registry username")
@click.option("--registry-token", required=True, help="Specify the image registry access token")
@click.pass_context
def command(ctx, include, exclude, force_local_overwrite, image_registry, registry_username, registry_token):
    '''EXPERIMENTAL: fetch the images for a stack from a remote registry'''
    registry_info = RegistryInfo(image_registry, registry_username, registry_token)
    docker = DockerClient()
    if not opts.o.quiet:
        print("Logging into container registry:")
    docker.login(registry_info.registry, registry_info.registry_username, registry_info.registry_token)
    # Generate list of target containers
    stack = ctx.obj.stack
    containers_in_scope = get_containers_in_scope(stack)
    all_containers_found = True
    for container in containers_in_scope:
        local_tag = _local_tag_for(container)
        if include_exclude_check(container, include, exclude):
            if opts.o.debug:
                print(f"Processing: {container}")
            # For each container, attempt to find the latest of a set of
            # images with the correct name and platform in the specified registry
            image_to_fetch = _get_latest_image(container, registry_info)
            if not image_to_fetch:
                print(f"Warning: no image found to fetch for container: {container}")
                all_containers_found = False
                continue
            if opts.o.debug:
                print(f"Fetching: {image_to_fetch}")
            _fetch_image(image_to_fetch, registry_info)
            # Now check if the target container image already exists locally
            if _exists_locally(container):
                if not opts.o.quiet:
                    print(f"Container image {container} already exists locally")
                # If so, skip local tagging unless the user specified --force-local-overwrite
                if force_local_overwrite:
                    # In that case overwrite the existing :local tag
                    if not opts.o.quiet:
                        print(f"Warning: overwriting local tag from this image: {container} because "
                              "--force-local-overwrite was specified")
                else:
                    if not opts.o.quiet:
                        print(f"Skipping local tagging for this image: {container} because that would "
                              "overwrite an existing :local tagged image, use --force-local-overwrite to do so.")
                    continue
            # Tag the fetched image with the :local tag
            _add_local_tag(image_to_fetch, image_registry, local_tag)
        else:
            if opts.o.verbose:
                print(f"Excluding: {container}")
    if not all_containers_found:
        print("Warning: couldn't find usable images for one or more containers, this stack will not deploy")

View File

@ -0,0 +1,48 @@
# Copyright © 2024 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from datetime import datetime
from python_on_whales import DockerClient
from stack_orchestrator.opts import opts
from stack_orchestrator.util import error_exit
def _publish_tag_for_image(local_image_tag: str, remote_repo: str, version: str):
    # Turns image tags of the form: foo/bar:local into remote.repo/org/bar:deploy
    (image_name, image_version) = local_image_tag.split(":")
    if image_version == "local":
        return f"{remote_repo}/{image_name}:{version}"
    else:
        error_exit("Asked to publish a non-locally built image")


def publish_image(local_tag, registry):
    if opts.o.verbose:
        print(f"Publishing this image: {local_tag} to this registry: {registry}")
    docker = DockerClient()
    # Figure out the target image tag
    # Eventually this version will be generated from the source repo state
    # Using a timestamp is an intermediate step
    version = datetime.now().strftime("%Y%m%d%H%M")
    remote_tag = _publish_tag_for_image(local_tag, registry, version)
    # Tag the image accordingly
    if opts.o.debug:
        print(f"Tagging {local_tag} to {remote_tag}")
    docker.image.tag(local_tag, remote_tag)
    # Push it to the desired registry
    if opts.o.verbose:
        print(f"Pushing image {remote_tag}")
    docker.image.push(remote_tag)
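
For reference, the tag rewriting in _publish_tag_for_image combined with the timestamp version produces names like the following; the image name and repository below are placeholders, not values taken from this changeset:

from datetime import datetime

# Placeholder local image and target repository, for illustration only
local_image_tag = "cerc/test-container:local"
remote_repo = "git.vdb.to/cerc-io"

(image_name, image_version) = local_image_tag.split(":")
assert image_version == "local"
version = datetime.now().strftime("%Y%m%d%H%M")  # e.g. "202410291130"
remote_tag = f"{remote_repo}/{image_name}:{version}"
# e.g. "git.vdb.to/cerc-io/cerc/test-container:202410291130"
print(remote_tag)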

View File

@ -0,0 +1,41 @@
# Copyright © 2023 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
cluster_name_prefix = "laconic-"
stack_file_name = "stack.yml"
spec_file_name = "spec.yml"
config_file_name = "config.env"
deployment_file_name = "deployment.yml"
compose_dir_name = "compose"
compose_deploy_type = "compose"
k8s_kind_deploy_type = "k8s-kind"
k8s_deploy_type = "k8s"
cluster_id_key = "cluster-id"
kube_config_key = "kube-config"
deploy_to_key = "deploy-to"
network_key = "network"
http_proxy_key = "http-proxy"
image_registry_key = "image-registry"
configmaps_key = "configmaps"
resources_key = "resources"
volumes_key = "volumes"
security_key = "security"
annotations_key = "annotations"
labels_key = "labels"
replicas_key = "replicas"
node_affinities_key = "node-affinities"
node_tolerations_key = "node-tolerations"
kind_config_filename = "kind-config.yml"
kube_config_filename = "kubeconfig.yml"
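
These constants name the keys used in stack, spec and deployment files; a hedged sketch of how a caller might reference them, assuming the module is importable as stack_orchestrator.constants (the spec values shown are illustrative only):

from stack_orchestrator import constants  # assumed import path for this module

# Illustrative spec fragment keyed by the constants above
spec = {
    constants.deploy_to_key: constants.k8s_kind_deploy_type,
    constants.image_registry_key: "git.vdb.to/cerc-io",  # placeholder registry
    constants.volumes_key: {"registry-data": "./data/registry-data"},
}

if spec[constants.deploy_to_key] in (constants.k8s_deploy_type, constants.k8s_kind_deploy_type):
    print(f"Kubernetes deployment, images from: {spec[constants.image_registry_key]}")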

View File

@ -0,0 +1,15 @@
services:
  registry:
    image: registry:2.8
    restart: always
    environment:
      REGISTRY_LOG_LEVEL: ${REGISTRY_LOG_LEVEL}
    volumes:
      - config:/config:ro
      - registry-data:/var/lib/registry
    ports:
      - "5000"

volumes:
  config:
  registry-data:
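
Because the registry service publishes container port 5000 without pinning a host port, the host-side port is assigned at deploy time; a minimal sketch of tagging and pushing an image to such a registry once the mapped port is known (the port and image name are placeholders):

from python_on_whales import DockerClient

docker = DockerClient()
local_registry = "localhost:32768"  # placeholder: whatever host port was mapped to 5000
docker.image.tag("cerc/test-container:local", f"{local_registry}/cerc/test-container:local")
docker.image.push(f"{local_registry}/cerc/test-container:local")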

View File

@ -0,0 +1,80 @@
# From: https://raw.githubusercontent.com/blast-io/deployment/master/docker-compose.yml
services:
  # generate jwt.txt if it's absent
  generate-jwt:
    image: blastio/openssl
    volumes:
      - blast-data:/blast:rw
    command: >
      sh -c "[ ! -f /blast/jwt.txt ] && openssl rand -hex 32 | tr -d '\n' > /blast/jwt.txt || exit 0"
  # initialise geth db
  geth-init:
    image: blastio/blast-geth:${NETWORK:-testnet-sepolia}
    volumes:
      - blast-data:/blast:rw
      - ../config/fixturenet-blast/genesis.json:/blast/genesis.json
    entrypoint: /bin/sh
    command: >
      -c "[ ! -d /blast/${GETH_DATA_DIR:-blast-geth-data}/geth ] && /usr/local/bin/geth init --datadir=/blast/${GETH_DATA_DIR:-blast-geth-data} /blast/genesis.json || exit 0"
    depends_on:
      generate-jwt:
        condition: service_completed_successfully
    env_file:
      - ../config/fixturenet-blast/${NETWORK:-fixturenet}.config
  blast-geth:
    image: blastio/blast-geth:${NETWORK:-testnet-sepolia}
    volumes:
      - blast-data:/blast
    ports:
      - "9545"
      - "9546"
    command: >
      --datadir=/blast/${GETH_DATA_DIR:-blast-geth-data}
      --http
      --http.corsdomain="*"
      --http.vhosts="*"
      --http.addr=0.0.0.0
      --http.port=9545
      --http.api=web3,debug,eth,txpool,net,engine
      --ws
      --ws.addr=0.0.0.0
      --ws.port=9546
      --ws.origins="*"
      --ws.api=debug,eth,txpool,net,engine
      --authrpc.addr="0.0.0.0"
      --authrpc.port="8551"
      --authrpc.vhosts="*"
      --authrpc.jwtsecret=/blast/jwt.txt
      --syncmode=full
      --gcmode=archive
      --nodiscover
      --maxpeers=0
      --rollup.disabletxpoolgossip=true
    env_file:
      - ../config/fixturenet-blast/${NETWORK:-fixturenet}.config
    depends_on:
      geth-init:
        condition: service_completed_successfully
  op-node:
    image: blastio/blast-optimism:${NETWORK:-testnet-sepolia}
    volumes:
      - blast-data:/blast
      - ../config/fixturenet-blast/rollup.json:/blast/rollup.json
    ports:
      - "9003"
    command: >
      op-node
      --l1="${CERC_L1_RPC}"
      --l1.rpckind="any"
      --l1.trustrpc=true
      --l2="http://blast-geth:8551"
      --l2.jwt-secret=/blast/jwt.txt
      --rollup.config="/blast/rollup.json"
    depends_on:
      - blast-geth
    env_file:
      - ../config/fixturenet-blast/${NETWORK:-fixturenet}.config

volumes:
  blast-data:
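
The generate-jwt service above simply writes a 64-character random hex secret to /blast/jwt.txt when the file is absent, so that blast-geth and op-node share the same authrpc JWT; an equivalent sketch in Python (same path as in the compose file):

import secrets
from pathlib import Path

jwt_path = Path("/blast/jwt.txt")
if not jwt_path.exists():
    # 32 random bytes as 64 hex characters, no trailing newline, matching the openssl command
    jwt_path.write_text(secrets.token_hex(32))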

View File

@ -40,6 +40,7 @@ services:
      - fixturenet-eth-bootnode-geth
    ports:
      - "8545"
      - "8546"
      - "40000"
      - "6060"
@ -61,6 +62,9 @@ services:
      - fixturenet-eth-bootnode-geth
    volumes:
      - fixturenet_eth_geth_2_data:/root/ethdata
    ports:
      - "8545"
      - "8546"
  fixturenet-eth-bootnode-lighthouse:
    restart: always
Some files were not shown because too many files have changed in this diff.