Compare commits

...

142 Commits

Author SHA1 Message Date
01f9fe67ed add Mars v2 interface (#744)
Tested on DO with real funds on mainnet

Co-authored-by: zramsay <zach@bluecollarcoding.ca>
Reviewed-on: cerc-io/stack-orchestrator#744
2024-02-19 19:11:59 +00:00
049ffcff71 Fix test failure 2024-02-18 12:28:48 -07:00
f5314a979b Install ed to fix CI job 2024-02-18 12:20:01 -07:00
39f4fa4487 Container Registry Stack (#747)
Co-authored-by: David Boreham <david@bozemanpas.com>
Reviewed-on: cerc-io/stack-orchestrator#747
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-02-18 18:55:55 +00:00
0b0394a940 Use absolute path for the data volume (#749)
Reviewed-on: cerc-io/stack-orchestrator#749
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-02-17 14:29:53 +00:00
37b9500483 Support non-tls ingress for kind (#748)
Reviewed-on: cerc-io/stack-orchestrator#748
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-02-17 01:54:30 +00:00
3c3e582939 Minor envsubst improvements. (#746)
Minor fixes to envsubst for webapps. `LACONIC_HOSTED_CONFIG_homepage` gets special treatment: it can be used to replace the homepage in package.json. With React, the substituted value picks up an extra trailing `/`, which we need to remove.
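A minimal sketch of that substitution (hypothetical helper; only the `LACONIC_HOSTED_CONFIG_homepage` behavior is taken from this message):

```python
# Hypothetical sketch of the homepage substitution with trailing-slash handling.
import json

def apply_homepage(package_json_path: str, homepage: str) -> None:
    # React adds its own '/' when joining the homepage with asset paths,
    # so strip any trailing slash before writing it into package.json.
    homepage = homepage.rstrip("/")
    with open(package_json_path) as f:
        pkg = json.load(f)
    pkg["homepage"] = homepage
    with open(package_json_path, "w") as f:
        json.dump(pkg, f, indent=2)
```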

Reviewed-on: cerc-io/stack-orchestrator#746
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-16 04:11:09 +00:00
26d265360d Rename workflow file 2024-02-15 07:39:04 -07:00
f81b78cfbc Update .gitea/workflows/triggers/test-database 2024-02-15 14:35:56 +00:00
d9bb6b3588 Test Database Stack (#737)
Reviewed-on: cerc-io/stack-orchestrator#737
2024-02-15 05:26:29 +00:00
b59beb66eb Add simple quick deploy script (#743)
Reviewed-on: cerc-io/stack-orchestrator#743
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-02-15 05:00:51 +00:00
65d67dba10 Fix k8s and enable it by default on PRs (#742)
Reviewed-on: cerc-io/stack-orchestrator#742
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-14 23:50:09 +00:00
b22c72e715 For k8s, use provisioner-managed volumes when an absolute host path is not specified. (#741)
In kind, when we bind-mount a host directory it is first mounted into the kind container at /mnt, then into the pod at the desired location.

We accidentally picked this up for full-blown k8s, and were creating volumes at /mnt.  This changes the behavior for both kind and regular k8s so that bind mounts are only allowed if a fully-qualified path is specified.  If no path is specified at all, a default storageClass is assumed to be present, and the volume managed by a provisioner.

E.g., for kind, the default provisioner is: https://github.com/rancher/local-path-provisioner

```
stack: test
deploy-to: k8s-kind
config:
  test-variable-1: test-value-1
network:
  ports:
    test:
      - '80'
volumes:
  # this will be bind-mounted to a host-path
  test-data-bind: /srv/data
  # this will be managed by the k8s node
  test-data-auto:
configmaps:
  test-config: ./configmap/test-config
```
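A compact sketch of the selection rule the example illustrates (a paraphrased assumption, not the actual implementation):

```python
# Sketch: how a volume's host path determines its handling under k8s/kind.
import os

def volume_kind(host_path: str) -> str:
    if not host_path:
        # No path given: assume a default storageClass is present and
        # let a provisioner manage the volume.
        return "provisioner-managed"
    if os.path.isabs(host_path):
        # Fully-qualified path: bind-mount it into the pod.
        return "bind-mount"
    raise ValueError("bind mounts require a fully-qualified host path")
```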

Reviewed-on: cerc-io/stack-orchestrator#741
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-14 21:45:01 +00:00
c9444591f5 Fix default webapp port number. (#740)
Reviewed-on: cerc-io/stack-orchestrator#740
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-09 01:20:41 +00:00
903f3b10e2 Add support for annotations and labels in spec. (#739)
```
stack: webapp-deployer-backend
deploy-to: k8s
annotations:
  foo.bar.annot/{name}: baz
labels:
  a.b.c/{name}.blah: "value"
```
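The `{name}` placeholder is presumably expanded per deployment; a hypothetical illustration of that expansion:

```python
# Hypothetical expansion of {name} in annotation/label keys.
annotations = {"foo.bar.annot/{name}": "baz"}
expanded = {key.format(name="webapp-deployer-backend"): value
            for key, value in annotations.items()}
# expanded == {"foo.bar.annot/webapp-deployer-backend": "baz"}
```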

Reviewed-on: cerc-io/stack-orchestrator#739
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-09 00:11:07 +00:00
72ed2eb91a Fix bad test in tag check. (#738)
Reviewed-on: cerc-io/stack-orchestrator#738
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-08 20:38:41 +00:00
2104eb5f30 Merge pull request 'Add Mars stack' (#725) from zach/mars into main
Reviewed-on: cerc-io/stack-orchestrator#725
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
2024-02-08 20:30:47 +00:00
afd6be3b13 Ping pub (#663)
for #170, revives #190
uses https://github.com/LaconicNetwork/explorer/pull/1

Co-authored-by: zramsay <zach@bluecollarcoding.ca>
Co-authored-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#663
Co-authored-by: zramsay <zramsay@noreply.git.vdb.to>
Co-committed-by: zramsay <zramsay@noreply.git.vdb.to>
2024-02-08 20:13:12 +00:00
f914baa913 Merge branch 'main' into zach/mars 2024-02-08 19:52:49 +00:00
8be1e684e8 Process environment variables defined in compose files (#736)
Reviewed-on: cerc-io/stack-orchestrator#736
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
2024-02-08 19:41:57 +00:00
5d16251ce9 Merge pull request 'Add resource limit options to spec.' (#735) from telackey/limits into main
Reviewed-on: cerc-io/stack-orchestrator#735
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
2024-02-08 16:52:49 +00:00
3309782439 Refactor 2024-02-08 00:47:46 -06:00
4b3b3478e7 Switch to Docker-style limits 2024-02-08 00:43:41 -06:00
2a9955055c debug 2024-02-07 16:56:35 -06:00
8964e1c0fe Add resource limit options to spec. 2024-02-07 16:48:02 -06:00
d2ebb81d77 Tags for undeploy (#734)
```
  --include-tags TEXT             Only include requests with matching tags
                                  (comma-separated).
  --exclude-tags TEXT             Exclude requests with matching tags (comma-
                                  separated).
```
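The matching semantics implied by the help text (an assumption; not taken from the implementation) are roughly:

```python
# Assumed semantics of --include-tags / --exclude-tags.
def request_matches(request_tags, include_tags, exclude_tags):
    if include_tags and not set(request_tags) & set(include_tags):
        return False  # include filter given, but no tag matches
    return not (set(request_tags) & set(exclude_tags))  # drop on any excluded tag
```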

Reviewed-on: cerc-io/stack-orchestrator#734
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-07 21:45:16 +00:00
4a981d8d2e Fix repo URL (#733)
Needs a '/' (http) not ':' (ssh).
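For example, `https://git.vdb.to/cerc-io/stack-orchestrator` (http form, `/` after the host) versus `git@git.vdb.to:cerc-io/stack-orchestrator` (ssh form, `:` after the host).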

Reviewed-on: cerc-io/stack-orchestrator#733
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-07 18:52:07 +00:00
88a0236ca9 Add the ability to filter deployment requests by tag. (#730)
Reviewed-on: cerc-io/stack-orchestrator#730
2024-02-07 03:12:40 +00:00
937b983ec9 Update links from github.com to git.vdb.to (#732)
Update links and references from github.com to git.vdb.to.

Also enable the flake8 lint action in gitea.

Reviewed-on: cerc-io/stack-orchestrator#732
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-02-06 22:42:44 +00:00
bfbcfb7904
Volume processing fixes (#729) 2024-02-06 12:32:10 -07:00
3d5ececba5
Remove duplicate plugin paths and resulting extraneous error message (#728) 2024-02-06 11:59:37 -07:00
6848fc33cf
Implement dry run support for k8s deploy (#727) 2024-02-06 07:07:56 -07:00
36bb068983
Add ConfigMap test. (#726)
* Add ConfigMap test.

* eof

* Minor tweak

* Trigger test

---------

Co-authored-by: David Boreham <david@bozemanpass.com>
2024-02-05 14:15:11 -06:00
25a2b70f2c
Fix command in mainnet-eth docs 2024-02-03 18:25:02 -07:00
2fcd416e29
Basic webapp deployer stack. (#722) 2024-02-02 19:05:15 -07:00
6629017d6a
Support other webapp types (react, static). (#721)
* Support other webapp types (react, static).
2024-02-02 18:04:06 -06:00
1c30441000
Add schedule for k8s deploy test 2024-02-01 07:40:20 -07:00
b398050787
Don't include volumes in spec if we don't have any. (#720) 2024-01-31 15:11:32 -06:00
12ec1bec43
Add ConfigMap support for k8s. (#714)
* Minor fixes for deploying with k8s and podman.

* ConfigMap support
2024-01-30 23:09:48 -06:00
62af03077f
Add deployed/error status output to the state file. (#719)
* More status info
* Up default resource limits.
* Need ps
2024-01-30 22:13:45 -06:00
Zach
098567625a
Create README.md 2024-01-30 17:47:56 -05:00
428b05158e
Fix DnsRecord ownership check. (#718)
* Fix DnsRecord ownership check.

* Var names
2024-01-30 13:31:59 -06:00
a750b645b9
Merge Ci test branch fixes (#717) 2024-01-30 11:18:08 -07:00
zramsay
23ee3e19b7 mars: add env vars to docker-compose 2024-01-29 22:44:55 +00:00
zramsay
2d764fc7d0 basic mars stack 2024-01-29 16:00:58 +00:00
b7f215d9bf
k8s test fixes (#713)
* Add cgroup setup, increase test timeouts

* Trigger from test script or CI job changes too
2024-01-28 16:21:39 -07:00
eca52b10b7
Fix copy-paste error 2024-01-25 11:46:50 -07:00
b9128841e4
Switch test back to stock runner 2024-01-25 11:42:07 -07:00
0a302ea555
Add a schedule for fixturenet-eth-plugeth-test 2024-01-25 07:33:44 -07:00
aa0f60baa1
Trigger fixturenet-eth-plugeth-test run 2024-01-25 06:46:18 -07:00
cef73d8de2
Update fixturenet-laconicd-test schedule 2024-01-23 10:32:44 -07:00
7d0f2adb46
Enable scheduled test run (#711) 2024-01-23 09:42:24 -07:00
5fdee25dc1
update laconicd test (#710)
* Remove legacy docker config

* Trigger test run
2024-01-23 09:25:02 -07:00
554f05de87
Fix pip3 error (#709) 2024-01-22 21:06:15 -06:00
b4fbee9b13
Update fixturenet-laconicd-test 2024-01-22 07:43:38 -07:00
f826f50c4d Merge branch 'main' of github.com:cerc-io/stack-orchestrator 2024-01-17 08:27:38 -07:00
b83465767d Set k8s deploy test to manual trigger 2024-01-17 08:27:09 -07:00
50509203d1 Set k8s deploy test to manual trigger 2024-01-17 08:26:51 -07:00
prathamesh0
282e175566
Remove unnecessary hyperlinks and pin image versions (#706)
* Remove invalid dashboard and panel ids from alert rules

* Pin grafana and prometheus versions

* Configure custom grafana server URL
2024-01-17 14:02:10 +05:30
635aa7037b Build test container 2024-01-16 21:15:21 -07:00
9877cfaf85 Update for new runner 2024-01-16 20:08:32 -07:00
c642e5d490 Try different runner 2024-01-16 17:09:58 -07:00
02c49d66f5 Add debug output to check container 2024-01-16 17:07:03 -07:00
90cebdb7a6
Add CI script for k8s deployment test (#705) 2024-01-16 16:16:07 -07:00
1f9653e6f7
Fix kind mode and add k8s deployment test (#704)
* Fix kind mode and add k8s deployment test

* Fix lint errors
2024-01-16 15:55:58 -07:00
0587813dd0
Create next.config.js if it is missing. (#701)
* Create next.config.js if it is missing.

* Add comment.
2024-01-15 12:12:59 -06:00
4b3ea7c30f
Update independent act-runner stack to use custom act as well. (#702)
* Update independent act-runner stack to use custom act as well.

* Remove branches which are not needed or already merged.
2024-01-15 12:10:48 -06:00
db8aec52aa
Pin commit hash of asset list repo in osmosis frontend app (#703) 2024-01-15 16:56:06 +05:30
b83030f63b
Use custom act with gitea. (#700)
* Use custom act with gitea.

* Make sure wget is available

* Fix repo url
2024-01-09 22:53:43 -06:00
prathamesh0
a3eb3c0bb0
Setup basic alerting for watchers in monitoring stack (#698)
* Provision Grafana alert contactpoints and policies for Slack

* Add watcher alert rules

* Update watcher monitoring instructions

* Add listening port flag to node exporter command

* Add reference links
2024-01-08 17:25:30 +05:30
eae2af7ccc
Upgrade watcher versions (#699) 2024-01-08 11:51:54 +05:30
837e443800
Support application removal requests. (#697)
* Support application removal request.

* Git should never prompt when deploying a webapp
2023-12-21 18:05:40 -06:00
prathamesh0
a57b0cdd26
Add a stack for prom node exporter and its dashboard in monitoring stack (#696)
* Add a stack for Prometheus node exporter

* Add node exporter dashboard to monitoring stack
2023-12-21 15:15:03 +05:30
prathamesh0
38622fb33c
[WIP] Use templating for watcher dashboard and add Postgres exporter (#695)
* Add Postgres exporter and its dashboard

* Use templating for watcher dashboard

* Add subgraph related panels to watcher dashboard

* Remove individual watcher dashboards and update instructions
2023-12-21 13:41:36 +05:30
prathamesh0
4a1a46facc
Update monitoring stack with additional dashboards and watcher metrics (#693)
* Include retry jobs and update default refresh intervals

* Add prometheus blackbox exporter and its dashboard

* Add NodeJS application dashboard

* Allow UI updates

* Update watcher dashboards for upstream and external chain heads

* Update watcher dashboards with watcher config metrics

* Upgrade sushiswap and azimuth watchers

* Removed fixed title size values

* Update instructions

* Update instructions for env config

* Update instructions with setup
2023-12-21 09:26:37 +05:30
Zach
42b92f7e23
use square logo for an urbit tile (#689)
* use square logo for an urbit tile

* bump version, improve info text
2023-12-19 09:50:14 -05:00
def192edab
Update new environment values for Osmosis frontend app (#694)
* Update new env values for Osmosis frontend app

* Use .env.production instead of local
2023-12-18 17:49:45 +05:30
d8357df345
Add image pull secret to pods (#692) 2023-12-15 14:27:45 -07:00
997496b8a5
Update script for new nextjs build output. (#691) 2023-12-14 19:47:30 -06:00
61f2884505
Reduce base image size (first round of improvements) (#690) 2023-12-14 17:46:03 -06:00
27a14737f8
Make the container tag based on the deployment path. (#688) 2023-12-14 09:49:21 -06:00
prathamesh0
b9b758bfdd
Add a stack for running Osmosis frontend app on Urbit (#683)
* Deploy osmosis on Urbit fake ship

* Remove Urbit configuration from existing osmosis stack

* Add a separate stack for Osmosis front end on Urbit

* Run script for renaming build files with bash

* Add environment variables required in urbit osmosis build

* Fix BASEPATH in compose file

* Remove ipfs-glob-host from network config in osmosis readme

* Use laconic branch for osmosis frontend

---------

Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
2023-12-14 17:28:10 +05:30
prathamesh0
9ba410b095
Add a stack for monitoring watchers with prometheus and grafana (#687)
* Setup Prometheus and Grafana for monitoring stack

* Add dashboard for azimuth watchers

* Add a dashboard for sushiswap watcher

* Persist prometheus server data

* Additional metrics in watcher dashboards

* Update dashboards and add for merkl sushiswap watcher

* Add dashboards for remaining azimuth watchers

* Separate out preconfigured watcher dashboards and add instructions

* Keep the empty dashboards dir
2023-12-14 16:59:00 +05:30
1f4eb57069
Add --dry-run option (#686) 2023-12-13 22:56:40 -06:00
88f66a3626
Add deployment update and deploy-webapp-from-registry commands. (#676) 2023-12-13 21:02:34 -06:00
prathamesh0
1ef0b316c6
Expose metrics endpoints for sushiswap and merkl sushiswap watchers (#685) 2023-12-13 14:58:26 +05:30
prathamesh0
232d5618cb
Update instructions in osmosis stack (#684) 2023-12-12 13:54:00 +05:30
fa6b570f4a
Add stack for running osmosis frontend app (#673)
* osmosis FE stack

* chmod

* dont use 3000

* fix for new stack format

* updates

* update osmosis readme

* Update stack.yml

* Update osmosis frontend stack to serve app

* Host osmosis app static build using python server

* Fix mapped ports in deployment for containers

* Update instructions

* Use nginx server to host files and handle page reloads

* Fix typo

---------

Co-authored-by: zramsay <zach@bluecollarcoding.ca>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2023-12-11 14:10:54 +05:30
prathamesh0
f9eb5a4ba8
Refactor to make Urbit setup generic (#682)
* Refactor to make Urbit app deployment script generic

* Rename urbit pod and update instructions

* Add a flag to allow skipping app installation on Urbit

* Make remote Urbit app deployment scripts generic

* Move remote deployment scripts to urbit fixturenet

* Update and use existing kubo pod for Urbit glob hosting
2023-12-08 09:35:00 +05:30
077ea80c70
Add deployment status command and fix k8s output for deployment ps (#679) 2023-12-06 09:27:47 -07:00
15faed00de
Generate a unique deployment id for each deployment (#680)
* Move cluster name generation into a function

* Generate a unique deployment id for each deployment
2023-12-05 22:56:58 -07:00
prathamesh0
6bef0c5b2f
Separate out GQL proxy server from uniswap-urbit stack (#681)
* Separate out uniswap gql proxy in a stack

* Use proxy server from watcher-ts

* Add a flag to enable/disable the proxy server

* Update env configuration for uniswap urbit app stack

* Update stack file for uniswap urbit app stack

* Fix env variables in instructions
2023-12-06 10:41:10 +05:30
prathamesh0
f27da19808
Use IPFS for hosting glob files for Urbit (#677)
* Use IPFS for hosting glob files for Urbit

* Add env configuration for IPFS endpoints to instructions

* Make ship pier dir configurable in remote deployment script

* Update remote deployment script to accept glob hash arg
2023-12-05 15:00:03 +05:30
2dd54892a1
Allow specifying the webapp tag explicitly (#675) 2023-12-04 21:39:16 -06:00
ab0e70ed83
Change path portion of unique cluster name to point to compose file, not argv[0]. (#678) 2023-12-04 13:39:14 -06:00
c319e90ddd
Add a stack for running uniswap frontend on urbit (#670)
* Create uniswap-frontend stack

* Add stack for building uniswap frontend app

* Add a container for Urbit fake ship

* Update with deployment command

* Add a service for uniswap app deployment to urbit

* Use a script to start urbit ship to handle restarts

* Rename stack name to uniswap-urbit-app

* Rename build.sh to build-app.sh and check if build already exists

* Rename stack directory name

* Update uniswap build restart on failure

* Perform uniswap app deployment in the urbit container

* Add steps to create glob for the app

* Tail /dev/null after deployment

* Add steps to install the app to desk

* Host glob files for uniswap

* Update repo branch

* Update readme with command to get urbit password

* Update readme

* Update readme to open urbit web UI

* Expose the port on glob hosting container

* Avoid exposing urbit http port

* Add scripts for installing uniswap on remote urbit instance

* Configure GQL proxy for uniswap app

* Use laconic branch for app repo

* Rename urbit pod for uniswap app deployment

---------

Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2023-12-04 18:39:19 +05:30
2173e7ce6a
If the next version is unsupported, print a big warning and try higher version. (#674) 2023-11-30 12:33:06 -06:00
c19559967d
Add doc for deploy-webapp (#672) 2023-11-29 20:55:14 -07:00
d7093277b4
Use constants (#671) 2023-11-29 20:50:53 -07:00
03a3645b3c
Add --port option to run-webapp. (#667)
* Add --port option to run-webapp

* Fixed merge

* lint
2023-11-29 11:32:28 -06:00
113c0bfbf1
Propagate env file for webapp deployment (#669) 2023-11-28 21:14:02 -07:00
1a069a6816
Use a temp file for the spec file name (#668) 2023-11-28 19:56:12 -07:00
a68cd5d65c
Webapp deploy (#662) 2023-11-27 22:02:16 -07:00
1b94db27c1
Upgrade azimuth watcher release version to 0.1.2 (#666)
* Upgrade azimuth watcher release version

* Fix version for azimuth watcher repo
2023-11-24 14:05:37 +05:30
prathamesh0
9499941891
Increase max connections for Azimuth watcher dbs (#665) 2023-11-23 19:11:02 +05:30
3fefc67e77
Run azimuth contract watcher in active mode (#661)
* Update stack to run azimuth job runner

* Run azimuth watcher in active mode

* Update stack to run job-runners for all watchers

* Update ports in job-runner health checks

* Map metrics ports to host

* Configure historical block processing batch size for Azimuth watcher

* Use deployment command for azimuth stack

---------

Co-authored-by: neeraj <neeraj.rtly@gmail.com>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2023-11-23 15:00:18 +05:30
1a37255c18
Tweak laconicd config to allow setting endpoint port and to make the fixturenet restartable. (#660)
* Endpoint includes port

* Make it restartable

* Don't try to remove the mounted directory

* Make copy of init.sh
2023-11-22 11:31:30 -06:00
87bedde5cb
Support for k8s ingress and tls (#659) 2023-11-21 16:04:36 -07:00
01029cf7aa
Fix for code path that doesn't create a DeploymentContext (#658) 2023-11-21 08:35:31 -07:00
0b87c12c13
Upgrade merkl and sushiswap watcher to v0.1.4 (#657)
* Upgrade merkl and sushi watcher versions

* Set gqlPath to base URL and remove filling start block

* Upgrade watcher versions to 0.1.4
2023-11-21 19:07:09 +05:30
f6624cb33a
Add image push command (#656) 2023-11-20 20:23:55 -07:00
c9c6a0eee3
Changes for remote k8s (#655) 2023-11-20 09:12:57 -07:00
5c80887215
Fix missing tty parameter. (#653) 2023-11-16 12:58:03 -06:00
Ian
80c4b9214b
Merge pull request #643 from cerc-io/iskay/update-optimism
Iskay/update optimism
2023-11-16 12:31:57 -05:00
70529c43e7
Upgrade merkl and sushiswap watcher versions (#654) 2023-11-16 16:27:41 +05:30
1e9d24a8ce
Update webapp.md 2023-11-15 12:52:34 -06:00
9900565714
Update webapp.md 2023-11-15 12:48:58 -06:00
a13f841f34
Update webapp.md 2023-11-15 12:37:30 -06:00
d37f80553d
Add webapp doc (#652) 2023-11-15 12:28:07 -06:00
2059d67dca
Add run-webapp command. (#651) 2023-11-15 10:54:27 -07:00
638fa01649
Support external stack file (#650) 2023-11-14 20:59:48 -07:00
4ae4d3b61d
Print docker container logs in webapp test. (#649) 2023-11-14 17:30:01 -06:00
9687d84468
646: Add error message for webapp startup hang (#647)
This fixes three issues:

1. #644 (build output)
2. #646 (error on startup)
3. automatic env quote handling (related to 2)


For the build output we now have:

```
#################################################################

Built host container for /home/telackey/tmp/iglootools-home with tag:

    cerc/iglootools-home:local

To test locally run:

    docker run -p 3000:3000 cerc/iglootools-home:local
```

For the startup error, the app hung waiting for the "success" message from the next generate output (itself a workaround for a nextjs bug fixed by this PR we submitted: https://github.com/vercel/next.js/pull/58276).

I added a timeout which will cause it to wait up to a maximum of _n_ seconds before issuing:

```
ERROR: 'npm run cerc_generate' exceeded CERC_MAX_GENERATE_TIME.
```

On the quoting itself, I plan on adding a new run-webapp command, but I realized I had a decent spot to effect the quote replacement on-the-fly after all, since I am already escaping the values for insertion/replacement into JS.

The "dequoting" can be disabled with `CERC_RETAIN_ENV_QUOTES=true`.
2023-11-14 16:07:26 -06:00
iskay
f088cbb3b0 fix linter errors 2023-11-14 14:38:49 +00:00
f1f618c57a
Don't change the next.js version by default. (#640) 2023-11-13 11:56:04 -06:00
0aca087558
Upgrade release versions for merkl and sushiswap watchers (#642)
* Upgrade merkl-sushiswap-v3-watcher-ts release

* Increase blockDelayInMilliSecs for merkl-sushiswap-v3 watcher

* Upgrade sushiswap-v3-watcher-ts release

* Add sushiswap-v3 watcher to stack list

* Avoid mapping ports that are not required to be exposed
2023-11-13 17:36:37 +05:30
a04730e7ac
Add a merkl-sushiswap-v3 watcher stack (#641)
* Add a merkl-sushiswap-v3 watcher stack

* Remove unrequired image from list
2023-11-13 11:13:55 +05:30
prathamesh0
95e881ba19
Add a sushiswap-v3 watcher stack (#638)
* Add a sushiswap-v3 watcher stack

* Add services for watcher db and server

* Add service for watcher job-runner

* Use 0.0.0.0 for watcher server config

---------

Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
2023-11-13 10:58:55 +05:30
414b887036
Allow setting build tool (npm/yarn) and next.js version. (#639)
* Allow setting build tool (npm/yarn) and next.js version.
2023-11-10 17:44:25 -06:00
iskay
3db443b2bb fix commands import 2023-11-10 20:42:02 +00:00
iskay
1072fc98c3 update fixturenet-optimism 2023-11-10 20:05:22 +00:00
042b413598
Support the case where webpack config is already present next.config.js (#631)
* Support the case where webpack config is already present next.config.js

* Update scripts for experimental-compile/experimental-generate
2023-11-08 23:44:48 -06:00
8384e95049
Add debug commands to test job (#636) 2023-11-08 19:42:19 -07:00
a27cf86748
Add basic k8s test (#635)
* Add CI job

* Add basic k8s test
2023-11-08 19:12:48 -07:00
ce587457d7
Add env var support for k8s (#634) 2023-11-08 17:53:46 -07:00
5e91c2224e
kind test stack (#629) 2023-11-08 01:11:00 -07:00
36e13f7199
Remove test output from tree. (#628) 2023-11-07 23:10:32 -06:00
d9bcc088a8
Enable webapp test in GitHub CI. (#627) 2023-11-07 18:27:08 -06:00
660326f713
Add new build-webapp command and related scripts and containers. (#626)
* Add new build-webapp command and related scripts and containers.
2023-11-07 18:15:04 -06:00
4456e70c93
Rename app -> stack_orchestrator (#625) 2023-11-07 00:06:55 -07:00
e989368793
Add generated kind config (#623) 2023-11-05 23:21:53 -07:00
0f93d30d54
Basic volume support (#622) 2023-11-03 17:02:13 -06:00
660 changed files with 41874 additions and 1634 deletions

@@ -6,6 +6,8 @@ on:
     paths:
       - '!**'
       - '.gitea/workflows/triggers/fixturenet-eth-plugeth-test'
+  schedule: # Note: coordinate with other tests to not overload runners at the same time of day
+    - cron: '2 14 * * *'
 
 # Needed until we can incorporate docker startup into the executor container
 env:

@@ -6,11 +6,8 @@ on:
     paths:
       - '!**'
       - '.gitea/workflows/triggers/fixturenet-laconicd-test'
+  schedule:
+    - cron: '1 13 * * *'
 
-# Needed until we can incorporate docker startup into the executor container
-env:
-  DOCKER_HOST: unix:///var/run/dind.sock
-
 jobs:
   test:

@@ -47,9 +44,5 @@ jobs:
       run: ./scripts/create_build_tag_file.sh
     - name: "Build local shiv package"
       run: ./scripts/build_shiv_package.sh
-    - name: Start dockerd # Also needed until we can incorporate into the executor
-      run: |
-        dockerd -H $DOCKER_HOST --userland-proxy=false &
-        sleep 5
     - name: "Run fixturenet-laconicd tests"
       run: ./tests/fixturenet-laconicd/run-test.sh

.gitea/workflows/lint.yml (new file)

@@ -0,0 +1,21 @@
name: Lint Checks
on:
  pull_request:
    branches: '*'
  push:
    branches: '*'

jobs:
  test:
    name: "Run linter"
    runs-on: ubuntu-latest
    steps:
      - name: "Clone project repository"
        uses: actions/checkout@v3
      - name: "Install Python"
        uses: actions/setup-python@v4
        with:
          python-version: '3.8'
      - name: "Run flake8"
        uses: py-actions/flake8@v2

.gitea/workflows/test-container-registry.yml (new file)

@@ -0,0 +1,54 @@
name: Container Registry Test
on:
  push:
    branches: '*'
    paths:
      - '!**'
      - '.gitea/workflows/triggers/test-container-registry'
      - '.gitea/workflows/test-container-registry.yml'
      - 'tests/container-registry/run-test.sh'
  schedule: # Note: coordinate with other tests to not overload runners at the same time of day
    - cron: '6 19 * * *'

jobs:
  test:
    name: "Run container registry hosting test on kind/k8s"
    runs-on: ubuntu-22.04
    steps:
      - name: "Clone project repository"
        uses: actions/checkout@v3
      # At present the stock setup-python action fails on Linux/aarch64
      # Conditional steps below work around this by using deadsnakes for that case only
      - name: "Install Python for ARM on Linux"
        if: ${{ runner.arch == 'arm64' && runner.os == 'Linux' }}
        uses: deadsnakes/action@v3.0.1
        with:
          python-version: '3.8'
      - name: "Install Python cases other than ARM on Linux"
        if: ${{ ! (runner.arch == 'arm64' && runner.os == 'Linux') }}
        uses: actions/setup-python@v4
        with:
          python-version: '3.8'
      - name: "Print Python version"
        run: python3 --version
      - name: "Install shiv"
        run: pip install shiv
      - name: "Generate build version file"
        run: ./scripts/create_build_tag_file.sh
      - name: "Build local shiv package"
        run: ./scripts/build_shiv_package.sh
      - name: "Check cgroups version"
        run: mount | grep cgroup
      - name: "Install kind"
        run: ./tests/scripts/install-kind.sh
      - name: "Install Kubectl"
        run: ./tests/scripts/install-kubectl.sh
      - name: "Install ed" # Only needed until we remove the need to edit the spec file
        run: apt update && apt install -y ed
      - name: "Run container registry deployment test"
        run: |
          source /opt/bash-utils/cgroup-helper.sh
          join_cgroup
          ./tests/container-registry/run-test.sh

.gitea/workflows/test-database.yml (new file)

@@ -0,0 +1,52 @@
name: Database Test
on:
  push:
    branches: '*'
    paths:
      - '!**'
      - '.gitea/workflows/triggers/test-database'
      - '.gitea/workflows/test-database.yml'
      - 'tests/database/run-test.sh'
  schedule: # Note: coordinate with other tests to not overload runners at the same time of day
    - cron: '5 18 * * *'

jobs:
  test:
    name: "Run database hosting test on kind/k8s"
    runs-on: ubuntu-22.04
    steps:
      - name: "Clone project repository"
        uses: actions/checkout@v3
      # At present the stock setup-python action fails on Linux/aarch64
      # Conditional steps below work around this by using deadsnakes for that case only
      - name: "Install Python for ARM on Linux"
        if: ${{ runner.arch == 'arm64' && runner.os == 'Linux' }}
        uses: deadsnakes/action@v3.0.1
        with:
          python-version: '3.8'
      - name: "Install Python cases other than ARM on Linux"
        if: ${{ ! (runner.arch == 'arm64' && runner.os == 'Linux') }}
        uses: actions/setup-python@v4
        with:
          python-version: '3.8'
      - name: "Print Python version"
        run: python3 --version
      - name: "Install shiv"
        run: pip install shiv
      - name: "Generate build version file"
        run: ./scripts/create_build_tag_file.sh
      - name: "Build local shiv package"
        run: ./scripts/build_shiv_package.sh
      - name: "Check cgroups version"
        run: mount | grep cgroup
      - name: "Install kind"
        run: ./tests/scripts/install-kind.sh
      - name: "Install Kubectl"
        run: ./tests/scripts/install-kubectl.sh
      - name: "Run database deployment test"
        run: |
          source /opt/bash-utils/cgroup-helper.sh
          join_cgroup
          ./tests/database/run-test.sh

.gitea/workflows/test-k8s-deploy.yml (new file)

@@ -0,0 +1,54 @@
name: K8s Deploy Test
on:
  pull_request:
    branches: '*'
  push:
    branches: '*'
    paths:
      - '!**'
      - '.gitea/workflows/triggers/test-k8s-deploy'
      - '.gitea/workflows/test-k8s-deploy.yml'
      - 'tests/k8s-deploy/run-deploy-test.sh'
  schedule: # Note: coordinate with other tests to not overload runners at the same time of day
    - cron: '3 15 * * *'

jobs:
  test:
    name: "Run deploy test suite on kind/k8s"
    runs-on: ubuntu-22.04
    steps:
      - name: "Clone project repository"
        uses: actions/checkout@v3
      # At present the stock setup-python action fails on Linux/aarch64
      # Conditional steps below work around this by using deadsnakes for that case only
      - name: "Install Python for ARM on Linux"
        if: ${{ runner.arch == 'arm64' && runner.os == 'Linux' }}
        uses: deadsnakes/action@v3.0.1
        with:
          python-version: '3.8'
      - name: "Install Python cases other than ARM on Linux"
        if: ${{ ! (runner.arch == 'arm64' && runner.os == 'Linux') }}
        uses: actions/setup-python@v4
        with:
          python-version: '3.8'
      - name: "Print Python version"
        run: python3 --version
      - name: "Install shiv"
        run: pip install shiv
      - name: "Generate build version file"
        run: ./scripts/create_build_tag_file.sh
      - name: "Build local shiv package"
        run: ./scripts/build_shiv_package.sh
      - name: "Check cgroups version"
        run: mount | grep cgroup
      - name: "Install kind"
        run: ./tests/scripts/install-kind.sh
      - name: "Install Kubectl"
        run: ./tests/scripts/install-kubectl.sh
      - name: "Run k8s deployment test"
        run: |
          source /opt/bash-utils/cgroup-helper.sh
          join_cgroup
          ./tests/k8s-deploy/run-deploy-test.sh

@@ -0,0 +1,51 @@
name: Webapp Test
on:
  pull_request:
    branches: '*'
  push:
    branches:
      - main
      - ci-test
    paths-ignore:
      - '.gitea/workflows/triggers/*'

# Needed until we can incorporate docker startup into the executor container
env:
  DOCKER_HOST: unix:///var/run/dind.sock

jobs:
  test:
    name: "Run webapp test suite"
    runs-on: ubuntu-latest
    steps:
      - name: "Clone project repository"
        uses: actions/checkout@v3
      # At present the stock setup-python action fails on Linux/aarch64
      # Conditional steps below work around this by using deadsnakes for that case only
      - name: "Install Python for ARM on Linux"
        if: ${{ runner.arch == 'arm64' && runner.os == 'Linux' }}
        uses: deadsnakes/action@v3.0.1
        with:
          python-version: '3.8'
      - name: "Install Python cases other than ARM on Linux"
        if: ${{ ! (runner.arch == 'arm64' && runner.os == 'Linux') }}
        uses: actions/setup-python@v4
        with:
          python-version: '3.8'
      - name: "Print Python version"
        run: python3 --version
      - name: "Install shiv"
        run: pip install shiv
      - name: "Generate build version file"
        run: ./scripts/create_build_tag_file.sh
      - name: "Build local shiv package"
        run: ./scripts/build_shiv_package.sh
      - name: "Install wget" # 20240109 - Only needed until the executors are updated.
        run: apt update && apt install -y wget
      - name: Start dockerd # Also needed until we can incorporate into the executor
        run: |
          dockerd -H $DOCKER_HOST --userland-proxy=false &
          sleep 5
      - name: "Run webapp tests"
        run: ./tests/webapp-test/run-webapp-test.sh

@@ -1,2 +1,3 @@
 Change this file to trigger running the fixturenet-eth-plugeth-test CI job
 trigger
+trigger

@@ -1,2 +1,3 @@
 Change this file to trigger running the fixturenet-laconicd-test CI job
 Trigger
+Trigger

@@ -0,0 +1 @@
Change this file to trigger running the test-container-registry CI job

@@ -0,0 +1,2 @@
Change this file to trigger running the test-database CI job
Trigger test run

@@ -0,0 +1,2 @@
Change this file to trigger running the test-k8s-deploy CI job
Trigger test on PR branch

.github/workflows/test-webapp.yml (vendored, new file)

@@ -0,0 +1,29 @@
name: Webapp Test
on:
  pull_request:
    branches: '*'
  push:
    branches: '*'

jobs:
  test:
    name: "Run webapp test suite"
    runs-on: ubuntu-latest
    steps:
      - name: "Clone project repository"
        uses: actions/checkout@v3
      - name: "Install Python"
        uses: actions/setup-python@v4
        with:
          python-version: '3.8'
      - name: "Print Python version"
        run: python3 --version
      - name: "Install shiv"
        run: pip install shiv
      - name: "Generate build version file"
        run: ./scripts/create_build_tag_file.sh
      - name: "Build local shiv package"
        run: ./scripts/build_shiv_package.sh
      - name: "Run webapp tests"
        run: ./tests/webapp-test/run-webapp-test.sh

.gitignore (vendored)

@@ -6,5 +6,5 @@ laconic_stack_orchestrator.egg-info
 __pycache__
 *~
 package
-app/data/build_tag.txt
+stack_orchestrator/data/build_tag.txt
 /build

@@ -29,10 +29,10 @@ chmod +x ~/.docker/cli-plugins/docker-compose
 Next decide on a directory where you would like to put the stack-orchestrator program. Typically this would be
 a "user" binary directory such as `~/bin` or perhaps `/usr/local/laconic` or possibly just the current working directory.
-Now, having selected that directory, download the latest release from [this page](https://github.com/cerc-io/stack-orchestrator/tags) into it (we're using `~/bin` below for concreteness but edit to suit if you selected a different directory). Also be sure that the destination directory exists and is writable:
+Now, having selected that directory, download the latest release from [this page](https://git.vdb.to/cerc-io/stack-orchestrator/tags) into it (we're using `~/bin` below for concreteness but edit to suit if you selected a different directory). Also be sure that the destination directory exists and is writable:
 ```bash
-curl -L -o ~/bin/laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
+curl -L -o ~/bin/laconic-so https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
 ```
 Give it execute permissions:

@@ -52,7 +52,7 @@ Version: 1.1.0-7a607c2-202304260513
 Save the distribution url to `~/.laconic-so/config.yml`:
 ```bash
 mkdir ~/.laconic-so
-echo "distribution-url: https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so" > ~/.laconic-so/config.yml
+echo "distribution-url: https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so" > ~/.laconic-so/config.yml
 ```
 ### Update

@@ -64,12 +64,12 @@ laconic-so update
 ## Usage
-The various [stacks](/app/data/stacks) each contain instructions for running different stacks based on your use case. For example:
+The various [stacks](/stack_orchestrator/data/stacks) each contain instructions for running different stacks based on your use case. For example:
-- [self-hosted Gitea](/app/data/stacks/build-support)
+- [self-hosted Gitea](/stack_orchestrator/data/stacks/build-support)
-- [an Optimism Fixturenet](/app/data/stacks/fixturenet-optimism)
+- [an Optimism Fixturenet](/stack_orchestrator/data/stacks/fixturenet-optimism)
-- [laconicd with console and CLI](app/data/stacks/fixturenet-laconic-loaded)
+- [laconicd with console and CLI](stack_orchestrator/data/stacks/fixturenet-laconic-loaded)
-- [kubo (IPFS)](app/data/stacks/kubo)
+- [kubo (IPFS)](stack_orchestrator/data/stacks/kubo)
 ## Contributing

@@ -1,13 +0,0 @@ (deleted file)
version: "3.2"
# See: https://docs.ipfs.tech/install/run-ipfs-inside-docker/#set-up
services:
  ipfs:
    image: ipfs/kubo:master-2023-02-20-714a968
    restart: always
    volumes:
      - ./ipfs/import:/import
      - ./ipfs/data:/data/ipfs
    ports:
      - "0.0.0.0:8080:8080"
      - "0.0.0.0:4001:4001"
      - "0.0.0.0:5001:5001"

@@ -1,118 +0,0 @@ (deleted file)
#!/bin/bash
# TODO: this file is now an unmodified copy of cerc-io/laconicd/init.sh
# so we should have a mechanism to bundle it inside the container rather than link from here
# at deploy time.
KEY="mykey"
CHAINID="laconic_9000-1"
MONIKER="localtestnet"
KEYRING="test"
KEYALGO="eth_secp256k1"
LOGLEVEL="info"
# trace evm
TRACE="--trace"
# TRACE=""
# validate dependencies are installed
command -v jq > /dev/null 2>&1 || { echo >&2 "jq not installed. More info: https://stedolan.github.io/jq/download/"; exit 1; }
# remove existing daemon and client
rm -rf ~/.laconic*
make install
laconicd config keyring-backend $KEYRING
laconicd config chain-id $CHAINID
# if $KEY exists it should be deleted
laconicd keys add $KEY --keyring-backend $KEYRING --algo $KEYALGO
# Set moniker and chain-id for Ethermint (Moniker can be anything, chain-id must be an integer)
laconicd init $MONIKER --chain-id $CHAINID
# Change parameter token denominations to aphoton
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["staking"]["params"]["bond_denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["crisis"]["constant_fee"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["gov"]["deposit_params"]["min_deposit"][0]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["mint"]["params"]["mint_denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
# Custom modules
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["record_rent"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_rent"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_commit_fee"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_reveal_fee"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_minimum_bid"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
if [[ "$TEST_REGISTRY_EXPIRY" == "true" ]]; then
echo "Setting timers for expiry tests."
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["record_rent_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_grace_period"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_rent_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
fi
if [[ "$TEST_AUCTION_ENABLED" == "true" ]]; then
echo "Enabling auction and setting timers."
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_enabled"]=true' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_rent_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_grace_period"]="300s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_commits_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_reveals_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
fi
# increase block time (?)
cat $HOME/.laconicd/config/genesis.json | jq '.consensus_params["block"]["time_iota_ms"]="1000"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
# Set gas limit in genesis
cat $HOME/.laconicd/config/genesis.json | jq '.consensus_params["block"]["max_gas"]="10000000"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
# disable produce empty block
if [[ "$OSTYPE" == "darwin"* ]]; then
sed -i '' 's/create_empty_blocks = true/create_empty_blocks = false/g' $HOME/.laconicd/config/config.toml
else
sed -i 's/create_empty_blocks = true/create_empty_blocks = false/g' $HOME/.laconicd/config/config.toml
fi
if [[ $1 == "pending" ]]; then
if [[ "$OSTYPE" == "darwin"* ]]; then
sed -i '' 's/create_empty_blocks_interval = "0s"/create_empty_blocks_interval = "30s"/g' $HOME/.laconicd/config/config.toml
sed -i '' 's/timeout_propose = "3s"/timeout_propose = "30s"/g' $HOME/.laconicd/config/config.toml
sed -i '' 's/timeout_propose_delta = "500ms"/timeout_propose_delta = "5s"/g' $HOME/.laconicd/config/config.toml
sed -i '' 's/timeout_prevote = "1s"/timeout_prevote = "10s"/g' $HOME/.laconicd/config/config.toml
sed -i '' 's/timeout_prevote_delta = "500ms"/timeout_prevote_delta = "5s"/g' $HOME/.laconicd/config/config.toml
sed -i '' 's/timeout_precommit = "1s"/timeout_precommit = "10s"/g' $HOME/.laconicd/config/config.toml
sed -i '' 's/timeout_precommit_delta = "500ms"/timeout_precommit_delta = "5s"/g' $HOME/.laconicd/config/config.toml
sed -i '' 's/timeout_commit = "5s"/timeout_commit = "150s"/g' $HOME/.laconicd/config/config.toml
sed -i '' 's/timeout_broadcast_tx_commit = "10s"/timeout_broadcast_tx_commit = "150s"/g' $HOME/.laconicd/config/config.toml
else
sed -i 's/create_empty_blocks_interval = "0s"/create_empty_blocks_interval = "30s"/g' $HOME/.laconicd/config/config.toml
sed -i 's/timeout_propose = "3s"/timeout_propose = "30s"/g' $HOME/.laconicd/config/config.toml
sed -i 's/timeout_propose_delta = "500ms"/timeout_propose_delta = "5s"/g' $HOME/.laconicd/config/config.toml
sed -i 's/timeout_prevote = "1s"/timeout_prevote = "10s"/g' $HOME/.laconicd/config/config.toml
sed -i 's/timeout_prevote_delta = "500ms"/timeout_prevote_delta = "5s"/g' $HOME/.laconicd/config/config.toml
sed -i 's/timeout_precommit = "1s"/timeout_precommit = "10s"/g' $HOME/.laconicd/config/config.toml
sed -i 's/timeout_precommit_delta = "500ms"/timeout_precommit_delta = "5s"/g' $HOME/.laconicd/config/config.toml
sed -i 's/timeout_commit = "5s"/timeout_commit = "150s"/g' $HOME/.laconicd/config/config.toml
sed -i 's/timeout_broadcast_tx_commit = "10s"/timeout_broadcast_tx_commit = "150s"/g' $HOME/.laconicd/config/config.toml
fi
fi
# Allocate genesis accounts (cosmos formatted addresses)
laconicd add-genesis-account $KEY 100000000000000000000000000aphoton --keyring-backend $KEYRING
# Sign genesis transaction
laconicd gentx $KEY 1000000000000000000000aphoton --keyring-backend $KEYRING --chain-id $CHAINID
# Collect genesis tx
laconicd collect-gentxs
# Run this to ensure everything worked and that the genesis file is setup correctly
laconicd validate-genesis
if [[ $1 == "pending" ]]; then
echo "pending mode is on, please wait for the first block committed."
fi
# Start the node (remove the --pruning=nothing flag if historical queries are not needed)
laconicd start --pruning=nothing --evm.tracer=json $TRACE --log_level $LOGLEVEL --minimum-gas-prices=0.0001aphoton --json-rpc.api eth,txpool,personal,net,debug,web3,miner --api.enable --gql-server --gql-playground

@@ -1,37 +0,0 @@ (deleted file)
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
# Check existing config if it exists
if [ -f /app/jwt.txt ] && [ -f /app/rollup.json ]; then
    echo "Found existing L2 config, cross-checking with L1 deployment config"

    SOURCE_L1_CONF=$(cat /contracts-bedrock/deploy-config/getting-started.json)
    EXP_L1_BLOCKHASH=$(echo "$SOURCE_L1_CONF" | jq -r '.l1StartingBlockTag')
    EXP_BATCHER=$(echo "$SOURCE_L1_CONF" | jq -r '.batchSenderAddress')

    GEN_L2_CONF=$(cat /app/rollup.json)
    GEN_L1_BLOCKHASH=$(echo "$GEN_L2_CONF" | jq -r '.genesis.l1.hash')
    GEN_BATCHER=$(echo "$GEN_L2_CONF" | jq -r '.genesis.system_config.batcherAddr')

    if [ "$EXP_L1_BLOCKHASH" = "$GEN_L1_BLOCKHASH" ] && [ "$EXP_BATCHER" = "$GEN_BATCHER" ]; then
        echo "Config cross-checked, exiting"
        exit 0
    fi

    echo "Existing L2 config doesn't match the L1 deployment config, please clear L2 config volume before starting"
    exit 1
fi
op-node genesis l2 \
--deploy-config /contracts-bedrock/deploy-config/getting-started.json \
--deployment-dir /contracts-bedrock/deployments/getting-started/ \
--outfile.l2 /app/genesis.json \
--outfile.rollup /app/rollup.json \
--l1-rpc $CERC_L1_RPC
openssl rand -hex 32 > /app/jwt.txt

@@ -1,131 +0,0 @@ (deleted file)
#!/bin/bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L1_CHAIN_ID="${CERC_L1_CHAIN_ID:-${DEFAULT_CERC_L1_CHAIN_ID}}"
CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
CERC_L1_ACCOUNTS_CSV_URL="${CERC_L1_ACCOUNTS_CSV_URL:-${DEFAULT_CERC_L1_ACCOUNTS_CSV_URL}}"
echo "Using L1 RPC endpoint ${CERC_L1_RPC}"
IMPORT_1="import './verify-contract-deployment'"
IMPORT_2="import './rekey-json'"
IMPORT_3="import './send-balance'"
# Append mounted tasks to tasks/index.ts file if not present
if ! grep -Fxq "$IMPORT_1" tasks/index.ts; then
echo "$IMPORT_1" >> tasks/index.ts
echo "$IMPORT_2" >> tasks/index.ts
echo "$IMPORT_3" >> tasks/index.ts
fi
# Update the chainId in the hardhat config
sed -i "/getting-started/ {n; s/.*chainId.*/ chainId: $CERC_L1_CHAIN_ID,/}" hardhat.config.ts
# Exit if a deployment already exists (on restarts)
# Note: fixturenet-eth-geth currently starts fresh on a restart
if [ -d "deployments/getting-started" ]; then
echo "Deployment directory deployments/getting-started found, checking SystemDictator deployment"
# Read JSON file into variable
SYSTEM_DICTATOR_DETAILS=$(cat deployments/getting-started/SystemDictator.json)
# Parse JSON into variables
SYSTEM_DICTATOR_ADDRESS=$(echo "$SYSTEM_DICTATOR_DETAILS" | jq -r '.address')
SYSTEM_DICTATOR_TXHASH=$(echo "$SYSTEM_DICTATOR_DETAILS" | jq -r '.transactionHash')
if yarn hardhat verify-contract-deployment --contract "${SYSTEM_DICTATOR_ADDRESS}" --transaction-hash "${SYSTEM_DICTATOR_TXHASH}"; then
echo "Deployment verfication successful, exiting"
exit 0
else
echo "Deployment verfication failed, please clear L1 deployment volume before starting"
exit 1
fi
fi
# Generate the L2 account addresses
yarn hardhat rekey-json --output /l2-accounts/keys.json
# Read JSON file into variable
KEYS_JSON=$(cat /l2-accounts/keys.json)
# Parse JSON into variables
ADMIN_ADDRESS=$(echo "$KEYS_JSON" | jq -r '.Admin.address')
ADMIN_PRIV_KEY=$(echo "$KEYS_JSON" | jq -r '.Admin.privateKey')
PROPOSER_ADDRESS=$(echo "$KEYS_JSON" | jq -r '.Proposer.address')
BATCHER_ADDRESS=$(echo "$KEYS_JSON" | jq -r '.Batcher.address')
SEQUENCER_ADDRESS=$(echo "$KEYS_JSON" | jq -r '.Sequencer.address')
# Get the private keys of L1 accounts
if [ -n "$CERC_L1_ACCOUNTS_CSV_URL" ] && \
l1_accounts_response=$(curl -L --write-out '%{http_code}' --silent --output /dev/null "$CERC_L1_ACCOUNTS_CSV_URL") && \
[ "$l1_accounts_response" -eq 200 ];
then
echo "Fetching L1 account credentials using provided URL"
mkdir -p /geth-accounts
wget -O /geth-accounts/accounts.csv "$CERC_L1_ACCOUNTS_CSV_URL"
CERC_L1_ADDRESS=$(head -n 1 /geth-accounts/accounts.csv | cut -d ',' -f 2)
CERC_L1_PRIV_KEY=$(head -n 1 /geth-accounts/accounts.csv | cut -d ',' -f 3)
CERC_L1_ADDRESS_2=$(awk -F, 'NR==2{print $(NF-1)}' /geth-accounts/accounts.csv)
CERC_L1_PRIV_KEY_2=$(awk -F, 'NR==2{print $NF}' /geth-accounts/accounts.csv)
else
echo "Couldn't fetch L1 account credentials, using them from env"
fi
# Send balances to the above L2 addresses
yarn hardhat send-balance --to "${ADMIN_ADDRESS}" --amount 2 --private-key "${CERC_L1_PRIV_KEY}" --network getting-started
yarn hardhat send-balance --to "${PROPOSER_ADDRESS}" --amount 5 --private-key "${CERC_L1_PRIV_KEY}" --network getting-started
yarn hardhat send-balance --to "${BATCHER_ADDRESS}" --amount 1000 --private-key "${CERC_L1_PRIV_KEY}" --network getting-started
echo "Balances sent to L2 accounts"
# Select a finalized L1 block as the starting point for roll ups
until FINALIZED_BLOCK=$(cast block finalized --rpc-url "$CERC_L1_RPC"); do
    echo "Waiting for a finalized L1 block to exist, retrying after 10s"
    sleep 10
done
L1_BLOCKNUMBER=$(echo "$FINALIZED_BLOCK" | awk '/number/{print $2}')
L1_BLOCKHASH=$(echo "$FINALIZED_BLOCK" | awk '/hash/{print $2}')
L1_BLOCKTIMESTAMP=$(echo "$FINALIZED_BLOCK" | awk '/timestamp/{print $2}')
echo "Selected L1 block ${L1_BLOCKNUMBER} as the starting block for roll ups"
# Update the deployment config
sed -i 's/"l2OutputOracleStartingTimestamp": TIMESTAMP/"l2OutputOracleStartingTimestamp": '"$L1_BLOCKTIMESTAMP"'/g' deploy-config/getting-started.json
jq --arg chainid "$CERC_L1_CHAIN_ID" '.l1ChainID = ($chainid | tonumber)' deploy-config/getting-started.json > tmp.json && mv tmp.json deploy-config/getting-started.json
node update-config.js deploy-config/getting-started.json "$ADMIN_ADDRESS" "$PROPOSER_ADDRESS" "$BATCHER_ADDRESS" "$SEQUENCER_ADDRESS" "$L1_BLOCKHASH"
echo "Updated the deployment config"
# Create a .env file
echo "L1_RPC=$CERC_L1_RPC" > .env
echo "PRIVATE_KEY_DEPLOYER=$ADMIN_PRIV_KEY" >> .env
echo "Deploying the L1 smart contracts, this will take a while..."
# Deploy the L1 smart contracts
yarn hardhat deploy --network getting-started --tags l1
echo "Deployed the L1 smart contracts"
# Read Proxy contract's JSON and get the address
PROXY_JSON=$(cat deployments/getting-started/Proxy__OVM_L1StandardBridge.json)
PROXY_ADDRESS=$(echo "$PROXY_JSON" | jq -r '.address')
# Send balance to the above Proxy contract in L1 for reflecting balance in L2
# First account
yarn hardhat send-balance --to "${PROXY_ADDRESS}" --amount 1 --private-key "${CERC_L1_PRIV_KEY}" --network getting-started
# Second account
yarn hardhat send-balance --to "${PROXY_ADDRESS}" --amount 1 --private-key "${CERC_L1_PRIV_KEY_2}" --network getting-started
echo "Balance sent to Proxy L2 contract"
echo "Use following accounts for transactions in L2:"
echo "${CERC_L1_ADDRESS}"
echo "${CERC_L1_ADDRESS_2}"
echo "Done"

@@ -1,36 +0,0 @@ (deleted file)
const fs = require('fs')
// Get the command-line argument
const configFile = process.argv[2]
const adminAddress = process.argv[3]
const proposerAddress = process.argv[4]
const batcherAddress = process.argv[5]
const sequencerAddress = process.argv[6]
const blockHash = process.argv[7]
// Read the JSON file
const configData = fs.readFileSync(configFile)
const configObj = JSON.parse(configData)
// Update the finalSystemOwner property with the ADMIN_ADDRESS value
configObj.finalSystemOwner =
configObj.portalGuardian =
configObj.controller =
configObj.l2OutputOracleChallenger =
configObj.proxyAdminOwner =
configObj.baseFeeVaultRecipient =
configObj.l1FeeVaultRecipient =
configObj.sequencerFeeVaultRecipient =
configObj.governanceTokenOwner =
adminAddress
configObj.l2OutputOracleProposer = proposerAddress
configObj.batchSenderAddress = batcherAddress
configObj.p2pSequencerAddress = sequencerAddress
configObj.l1StartingBlockTag = blockHash
// Write the updated JSON object back to the file
fs.writeFileSync(configFile, JSON.stringify(configObj, null, 2))

@@ -1,90 +0,0 @@ (deleted file)
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
# TODO: Add in container build or use other tool
echo "Installing jq"
apk update && apk add jq
# Get Sequencer key from keys.json
SEQUENCER_KEY=$(jq -r '.Sequencer.privateKey' /l2-accounts/keys.json | tr -d '"')
# Initialize op-geth if datadir/geth not found
if [ -f /op-node/jwt.txt ] && [ -d datadir/geth ]; then
    echo "Found existing datadir, checking block signer key"
    BLOCK_SIGNER_KEY=$(cat datadir/block-signer-key)
    if [ "$SEQUENCER_KEY" = "$BLOCK_SIGNER_KEY" ]; then
        echo "Sequencer and block signer keys match, skipping initialization"
    else
        echo "Sequencer and block signer keys don't match, please clear L2 geth data volume before starting"
        exit 1
    fi
else
    echo "Initializing op-geth"
    mkdir -p datadir
    echo "pwd" > datadir/password
    echo $SEQUENCER_KEY > datadir/block-signer-key
    geth account import --datadir=datadir --password=datadir/password datadir/block-signer-key
    while [ ! -f "/op-node/jwt.txt" ]
    do
        echo "Config files not created. Checking after 5 seconds."
        sleep 5
    done
    echo "Config files created by op-node, proceeding with the initialization..."
    geth init --datadir=datadir /op-node/genesis.json
    echo "Node Initialized"
fi

SEQUENCER_ADDRESS=$(jq -r '.Sequencer.address' /l2-accounts/keys.json | tr -d '"')
echo "SEQUENCER_ADDRESS: ${SEQUENCER_ADDRESS}"

cleanup() {
    echo "Signal received, cleaning up..."
    kill ${geth_pid}
    wait
    echo "Done"
}
trap 'cleanup' INT TERM
# Run op-geth
geth \
--datadir ./datadir \
--http \
--http.corsdomain="*" \
--http.vhosts="*" \
--http.addr=0.0.0.0 \
--http.api=web3,debug,eth,txpool,net,engine \
--ws \
--ws.addr=0.0.0.0 \
--ws.port=8546 \
--ws.origins="*" \
--ws.api=debug,eth,txpool,net,engine \
--syncmode=full \
--gcmode=archive \
--nodiscover \
--maxpeers=0 \
--networkid=42069 \
--authrpc.vhosts="*" \
--authrpc.addr=0.0.0.0 \
--authrpc.port=8551 \
--authrpc.jwtsecret=/op-node/jwt.txt \
--rollup.disabletxpoolgossip=true \
--password=./datadir/password \
--allow-insecure-unlock \
--mine \
--miner.etherbase=$SEQUENCER_ADDRESS \
--unlock=$SEQUENCER_ADDRESS \
&
geth_pid=$!
wait $geth_pid

@@ -1,26 +0,0 @@ (deleted file)
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
# Get Sequencer key from keys.json
SEQUENCER_KEY=$(jq -r '.Sequencer.privateKey' /l2-accounts/keys.json | tr -d '"')
# Run op-node
op-node \
--l2=http://op-geth:8551 \
--l2.jwt-secret=/op-node-data/jwt.txt \
--sequencer.enabled \
--sequencer.l1-confs=3 \
--verifier.l1-confs=3 \
--rollup.config=/op-node-data/rollup.json \
--rpc.addr=0.0.0.0 \
--rpc.port=8547 \
--p2p.disable \
--rpc.enable-admin \
--p2p.sequencer.key=$SEQUENCER_KEY \
--l1=$CERC_L1_RPC \
--l1.rpckind=any

View File

@ -1,36 +0,0 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
# Read the L2OutputOracle contract address from the deployment
L2OO_DEPLOYMENT=$(cat /contracts-bedrock/deployments/getting-started/L2OutputOracle.json)
L2OO_ADDR=$(echo "$L2OO_DEPLOYMENT" | jq -r '.address')
# Get Proposer key from keys.json
PROPOSER_KEY=$(jq -r '.Proposer.privateKey' /l2-accounts/keys.json | tr -d '"')
cleanup() {
echo "Signal received, cleaning up..."
kill ${proposer_pid}
wait
echo "Done"
}
trap 'cleanup' INT TERM
# Run op-proposer
op-proposer \
--poll-interval 12s \
--rpc.port 8560 \
--rollup-rpc http://op-node:8547 \
--l2oo-address $L2OO_ADDR \
--private-key $PROPOSER_KEY \
--l1-eth-rpc $CERC_L1_RPC \
&
proposer_pid=$!
wait $proposer_pid

View File

@ -1,5 +0,0 @@
# Defaults
# ipld-eth-server endpoints
DEFAULT_CERC_IPLD_ETH_RPC=
DEFAULT_CERC_IPLD_ETH_GQL=

View File

@ -1,6 +0,0 @@
# Config for laconic-console running in a fixturenet with laconicd
services:
wns:
server: 'LACONIC_HOSTED_ENDPOINT:9473/api'
webui: 'LACONIC_HOSTED_ENDPOINT:9473/console'

View File

@ -1,21 +0,0 @@
#!/usr/bin/env bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
# Test if the container's filesystem is old (run previously) or new
EXISTSFILENAME=/data/exists
echo "Test container starting"
if [[ -f "$EXISTSFILENAME" ]];
then
TIMESTAMP=$(cat "$EXISTSFILENAME")
echo "Filesystem is old, created: $TIMESTAMP"
else
echo "Filesystem is fresh"
date > "$EXISTSFILENAME"
fi
if [ -n "$CERC_TEST_PARAM_1" ]; then
echo "Test-param-1: ${CERC_TEST_PARAM_1}"
fi
# Run nginx which will block here forever
/usr/sbin/nginx -g "daemon off;"

View File

@ -1,9 +0,0 @@
#!/usr/bin/env bash
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_WEBAPP_FILES_DIR="${CERC_WEBAPP_FILES_DIR:-/data}"
/scripts/apply-webapp-config.sh /config/config.yml ${CERC_WEBAPP_FILES_DIR}
http-server -p 80 ${CERC_WEBAPP_FILES_DIR}

View File

@ -1,72 +0,0 @@
# Azimuth Watcher
Instructions to set up and deploy the Azimuth Watcher stack
## Setup
Prerequisite: `ipld-eth-server` RPC and GQL endpoints
Clone required repositories:
```bash
laconic-so --stack azimuth setup-repositories
```
NOTE: If the repository already exists and is checked out to a different version, the `setup-repositories` command will throw an error.
To get around this, remove the `azimuth-watcher-ts` repository and re-run the command.
Check out the required versions and branches in the repos:
```bash
# azimuth-watcher-ts
cd ~/cerc/azimuth-watcher-ts
git checkout v0.1.0
```
Build the container images:
```bash
laconic-so --stack azimuth build-containers
```
This should create the required docker images in the local image registry.
### Configuration
* Create and update an env file to be used in the next step:
```bash
# External ipld-eth-server endpoints
CERC_IPLD_ETH_RPC=
CERC_IPLD_ETH_GQL=
```
* NOTE: If `ipld-eth-server` is running on the host machine, use `host.docker.internal` as the hostname to access host ports
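* For example, with `ipld-eth-server` running on the host machine (a sketch; the port values below are assumptions, substitute whatever ports your instance actually exposes):
```bash
CERC_IPLD_ETH_RPC=http://host.docker.internal:8081
CERC_IPLD_ETH_GQL=http://host.docker.internal:8082/graphql
```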
### Deploy the stack
* Deploy the containers:
```bash
laconic-so --stack azimuth deploy-system --env-file <PATH_TO_ENV_FILE> up
```
* List and check the health status of all the containers using `docker ps` and wait for them to be `healthy`
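* For example, to watch container health from the CLI (a minimal sketch using standard `docker ps` options):
```bash
# Show only containers that currently report a healthy status
docker ps --filter "health=healthy" --format "{{.Names}}: {{.Status}}"
```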
## Clean up
Stop all services running in the background:
```bash
laconic-so --stack azimuth deploy-system down
```
Clear volumes created by this stack:
```bash
# List all relevant volumes
docker volume ls -q --filter "name=.*watcher_db_data"
# Remove all the listed volumes
docker volume rm $(docker volume ls -q --filter "name=.*watcher_db_data")
```

View File

@ -1,123 +0,0 @@
# fixturenet-optimism
Instructions to set up and deploy an end-to-end L1+L2 stack with [fixturenet-eth](../fixturenet-eth/) (L1) and [Optimism](https://stack.optimism.io) (L2)
We support running just the L2 part of the stack, given an external L1 endpoint; follow the [L2 only doc](./l2-only.md) for that.
## Setup
Clone required repositories:
```bash
laconic-so --stack fixturenet-optimism setup-repositories
# If this throws an error as a result of being already checked out to a branch/tag in a repo, remove the repositories mentioned below and re-run the command
```
Build the container images:
```bash
laconic-so --stack fixturenet-optimism build-containers
# If redeploying with changes in the stack containers
laconic-so --stack fixturenet-optimism build-containers --force-rebuild
# If errors are thrown during the build, old images used by this stack may have to be deleted
```
Note: this will take >10 mins depending on the specs of your machine, and **requires** 16GB of memory or greater.
This should create the required docker images in the local image registry:
* `cerc/go-ethereum`
* `cerc/lighthouse`
* `cerc/fixturenet-eth-geth`
* `cerc/fixturenet-eth-lighthouse`
* `cerc/foundry`
* `cerc/optimism-contracts`
* `cerc/optimism-l2geth`
* `cerc/optimism-op-node`
* `cerc/optimism-op-batcher`
* `cerc/optimism-op-proposer`
## Deploy
Deploy the stack:
```bash
laconic-so --stack fixturenet-optimism deploy up
```
The `fixturenet-optimism-contracts` service takes a while to complete as it:
1. waits for the 'Merge' to happen on L1
2. waits for a finalized block to exist on L1 (so that it can be taken as a starting block for roll ups)
3. deploys the L1 contracts
It may restart a few times after running into errors.
To list and monitor the running containers:
```bash
laconic-so --stack fixturenet-optimism deploy ps
# With status
docker ps
# Check logs for a container
docker logs -f <CONTAINER_ID>
```
## Clean up
Stop all services running in the background:
```bash
laconic-so --stack fixturenet-optimism deploy down 30
```
Clear volumes created by this stack:
```bash
# List all relevant volumes
docker volume ls -q --filter "name=.*l1_deployment|.*l2_accounts|.*l2_config|.*l2_geth_data"
# Remove all the listed volumes
docker volume rm $(docker volume ls -q --filter "name=.*l1_deployment|.*l2_accounts|.*l2_config|.*l2_geth_data")
```
## Troubleshooting
* If the `op-geth` service aborts or is restarted, the following error might occur in the `op-node` service:
```bash
WARN [02-16|21:22:02.868] Derivation process temporary error attempts=14 err="stage 0 failed resetting: temp: failed to find the L2 Heads to start from: failed to fetch L2 block by hash 0x0000000000000000000000000000000000000000000000000000000000000000: failed to determine block-hash of hash 0x0000000000000000000000000000000000000000000000000000000000000000, could not get payload: not found"
```
* This means that the data directory that `op-geth` is using is corrupted and needs to be reinitialized; the containers `op-geth`, `op-node` and `op-batcher` need to be started afresh:
WARNING: This will reset the L2 chain; consequently, all the data on it will be lost
* Stop and remove the concerned containers:
```bash
# List the containers
docker ps -f "name=op-geth|op-node|op-batcher"
# Force stop and remove the listed containers
docker rm -f $(docker ps -qf "name=op-geth|op-node|op-batcher")
```
* Remove the concerned volume:
```bash
# List the volume
docker volume ls -q --filter name=l2_geth_data
# Remove the listed volume
docker volume rm $(docker volume ls -q --filter name=l2_geth_data)
```
* Re-run the deployment command used in [Deploy](#deploy) to restart the stopped containers
## Known Issues
* Resource requirements (memory + time) for building the `cerc/foundry` image are on the higher side
* `cerc/optimism-contracts` image is currently based on `cerc/foundry` (Optimism requires foundry installation)

View File

@ -1,100 +0,0 @@
# fixturenet-optimism
Instructions to set up and deploy an L2 fixturenet using [Optimism](https://stack.optimism.io)
## Setup
Prerequisite: An L1 Ethereum RPC endpoint
Clone required repositories:
```bash
laconic-so --stack fixturenet-optimism setup-repositories --exclude git.vdb.to/cerc-io/go-ethereum
# If this throws an error as a result of being already checked out to a branch/tag in a repo, remove the repositories mentioned below and re-run the command
```
Build the container images:
```bash
laconic-so --stack fixturenet-optimism build-containers --include cerc/foundry,cerc/optimism-contracts,cerc/optimism-op-node,cerc/optimism-l2geth,cerc/optimism-op-batcher,cerc/optimism-op-proposer
```
This should create the required docker images in the local image registry:
* `cerc/foundry`
* `cerc/optimism-contracts`
* `cerc/optimism-l2geth`
* `cerc/optimism-op-node`
* `cerc/optimism-op-batcher`
* `cerc/optimism-op-proposer`
## Deploy
Create and update an env file to be used in the next step ([defaults](../../config/fixturenet-optimism/l1-params.env)):
```bash
# External L1 endpoint
CERC_L1_CHAIN_ID=
CERC_L1_RPC=
CERC_L1_HOST=
CERC_L1_PORT=
# URL to get CSV with credentials for accounts on L1
# that are used to send balance to Optimism Proxy contract
# (enables them to do transactions on L2)
CERC_L1_ACCOUNTS_CSV_URL=
# OR
# Specify the required account credentials
CERC_L1_ADDRESS=
CERC_L1_PRIV_KEY=
CERC_L1_ADDRESS_2=
CERC_L1_PRIV_KEY_2=
```
* NOTE: If L1 is running on the host machine, use `host.docker.internal` as the hostname to access the host port
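* For example, with an L1 node running on the host machine (a sketch; the chain id and port below are assumptions for a local fixturenet, substitute your L1's actual values):
```bash
CERC_L1_CHAIN_ID=1212
CERC_L1_RPC=http://host.docker.internal:8545
CERC_L1_HOST=host.docker.internal
CERC_L1_PORT=8545
```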
Deploy the stack:
```bash
laconic-so --stack fixturenet-optimism deploy --include fixturenet-optimism --env-file <PATH_TO_ENV_FILE> up
```
The `fixturenet-optimism-contracts` service may take a while (`~15 mins`) to complete as it:
1. waits for the 'Merge' to happen on L1
2. waits for a finalized block to exist on L1 (so that it can be taken as a starting block for roll ups)
3. deploys the L1 contracts
To list and monitor the running containers:
```bash
laconic-so --stack fixturenet-optimism deploy --include fixturenet-optimism ps
# With status
docker ps
# Check logs for a container
docker logs -f <CONTAINER_ID>
```
## Clean up
Stop all services running in the background:
```bash
laconic-so --stack fixturenet-optimism deploy --include fixturenet-optimism down 30
```
Clear volumes created by this stack:
```bash
# List all relevant volumes
docker volume ls -q --filter "name=.*l1_deployment|.*l2_accounts|.*l2_config|.*l2_geth_data"
# Remove all the listed volumes
docker volume rm $(docker volume ls -q --filter "name=.*l1_deployment|.*l2_accounts|.*l2_config|.*l2_geth_data")
```
## Troubleshooting
See [Troubleshooting](./README.md#troubleshooting)

View File

@ -1,67 +0,0 @@
# Copyright © 2022, 2023 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from python_on_whales import DockerClient, DockerException
from app.deploy.deployer import Deployer, DeployerException


class DockerDeployer(Deployer):
    name: str = "compose"

    def __init__(self, compose_files, compose_project_name, compose_env_file) -> None:
        self.docker = DockerClient(compose_files=compose_files, compose_project_name=compose_project_name,
                                   compose_env_file=compose_env_file)

    def up(self, detach, services):
        try:
            return self.docker.compose.up(detach=detach, services=services)
        except DockerException as e:
            raise DeployerException(e)

    def down(self, timeout, volumes):
        try:
            return self.docker.compose.down(timeout=timeout, volumes=volumes)
        except DockerException as e:
            raise DeployerException(e)

    def ps(self):
        try:
            return self.docker.compose.ps()
        except DockerException as e:
            raise DeployerException(e)

    def port(self, service, private_port):
        try:
            return self.docker.compose.port(service=service, private_port=private_port)
        except DockerException as e:
            raise DeployerException(e)

    def execute(self, service, command, tty, envs):
        try:
            return self.docker.compose.execute(service=service, command=command, tty=tty, envs=envs)
        except DockerException as e:
            raise DeployerException(e)

    def logs(self, services, tail, follow, stream):
        try:
            return self.docker.compose.logs(services=services, tail=tail, follow=follow, stream=stream)
        except DockerException as e:
            raise DeployerException(e)

    def run(self, image, command, user, volumes, entrypoint=None):
        try:
            return self.docker.run(image=image, command=command, user=user, volumes=volumes, entrypoint=entrypoint)
        except DockerException as e:
            raise DeployerException(e)

View File

@ -1,26 +0,0 @@
# Copyright © 2023 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from app.deploy.k8s.deploy_k8s import K8sDeployer
from app.deploy.compose.deploy_docker import DockerDeployer


def getDeployer(type, compose_files, compose_project_name, compose_env_file):
    if type == "compose" or type is None:
        return DockerDeployer(compose_files, compose_project_name, compose_env_file)
    elif type == "k8s":
        return K8sDeployer(compose_files, compose_project_name, compose_env_file)
    else:
        print(f"ERROR: deploy-to {type} is not valid")

View File

@ -1,84 +0,0 @@
# Copyright © 2023 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from kubernetes import client
from typing import Any, List, Set

from app.opts import opts
from app.util import get_yaml


class ClusterInfo:
    parsed_pod_yaml_map: Any = {}
    image_set: Set[str] = set()
    app_name: str = "test-app"
    deployment_name: str = "test-deployment"

    def __init__(self) -> None:
        pass

    def int_from_pod_files(self, pod_files: List[str]):
        for pod_file in pod_files:
            with open(pod_file, "r") as pod_file_descriptor:
                parsed_pod_file = get_yaml().load(pod_file_descriptor)
                self.parsed_pod_yaml_map[pod_file] = parsed_pod_file
        if opts.o.debug:
            print(f"parsed_pod_yaml_map: {self.parsed_pod_yaml_map}")
        # Find the set of images in the pods
        for pod_name in self.parsed_pod_yaml_map:
            pod = self.parsed_pod_yaml_map[pod_name]
            services = pod["services"]
            for service_name in services:
                service_info = services[service_name]
                image = service_info["image"]
                self.image_set.add(image)
        if opts.o.debug:
            print(f"image_set: {self.image_set}")

    def get_deployment(self):
        containers = []
        for pod_name in self.parsed_pod_yaml_map:
            pod = self.parsed_pod_yaml_map[pod_name]
            services = pod["services"]
            for service_name in services:
                container_name = service_name
                service_info = services[service_name]
                image = service_info["image"]
                container = client.V1Container(
                    name=container_name,
                    image=image,
                    ports=[client.V1ContainerPort(container_port=80)],
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "100m", "memory": "200Mi"},
                        limits={"cpu": "500m", "memory": "500Mi"},
                    ),
                )
                containers.append(container)
        template = client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": self.app_name}),
            spec=client.V1PodSpec(containers=containers),
        )
        spec = client.V1DeploymentSpec(
            replicas=1, template=template, selector={
                "matchLabels":
                {"app": self.app_name}})
        deployment = client.V1Deployment(
            api_version="apps/v1",
            kind="Deployment",
            metadata=client.V1ObjectMeta(name=self.deployment_name),
            spec=spec,
        )
        return deployment

View File

@ -1,99 +0,0 @@
# Copyright © 2023 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from kubernetes import client, config

from app.deploy.deployer import Deployer
from app.deploy.k8s.helpers import create_cluster, destroy_cluster, load_images_into_kind
from app.deploy.k8s.helpers import pods_in_deployment, log_stream_from_string
from app.deploy.k8s.cluster_info import ClusterInfo
from app.opts import opts


class K8sDeployer(Deployer):
    name: str = "k8s"
    core_api: client.CoreV1Api
    apps_api: client.AppsV1Api
    kind_cluster_name: str
    cluster_info: ClusterInfo

    def __init__(self, compose_files, compose_project_name, compose_env_file) -> None:
        if (opts.o.debug):
            print(f"Compose files: {compose_files}")
            print(f"Project name: {compose_project_name}")
            print(f"Env file: {compose_env_file}")
        self.kind_cluster_name = compose_project_name
        self.cluster_info = ClusterInfo()
        self.cluster_info.int_from_pod_files(compose_files)

    def connect_api(self):
        config.load_kube_config(context=f"kind-{self.kind_cluster_name}")
        self.core_api = client.CoreV1Api()
        self.apps_api = client.AppsV1Api()

    def up(self, detach, services):
        # Create the kind cluster
        create_cluster(self.kind_cluster_name)
        self.connect_api()
        # Ensure the referenced containers are copied into kind
        load_images_into_kind(self.kind_cluster_name, self.cluster_info.image_set)
        # Process compose files into a Deployment
        deployment = self.cluster_info.get_deployment()
        # Create the k8s objects
        resp = self.apps_api.create_namespaced_deployment(
            body=deployment, namespace="default"
        )
        if opts.o.debug:
            print("Deployment created.\n")
            print(f"{resp.metadata.namespace} {resp.metadata.name} \
                  {resp.metadata.generation} {resp.spec.template.spec.containers[0].image}")

    def down(self, timeout, volumes):
        # Delete the k8s objects
        # Destroy the kind cluster
        destroy_cluster(self.kind_cluster_name)

    def ps(self):
        self.connect_api()
        # Call whatever API we need to get the running container list
        ret = self.core_api.list_pod_for_all_namespaces(watch=False)
        if ret.items:
            for i in ret.items:
                print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
        ret = self.core_api.list_node(pretty=True, watch=False)
        return []

    def port(self, service, private_port):
        # Since we handle the port mapping, need to figure out where this comes from
        # Also look into whether it makes sense to get ports for k8s
        pass

    def execute(self, service_name, command, tty, envs):
        # Call the API to execute a command in a running container
        pass

    def logs(self, services, tail, follow, stream):
        self.connect_api()
        pods = pods_in_deployment(self.core_api, "test-deployment")
        if len(pods) > 1:
            print("Warning: more than one pod in the deployment")
        k8s_pod_name = pods[0]
        log_data = self.core_api.read_namespaced_pod_log(k8s_pod_name, namespace="default", container="test")
        return log_stream_from_string(log_data)

    def run(self, image, command, user, volumes, entrypoint=None):
        # We need to figure out how to do this -- check why we're being called first
        pass

View File

@ -1,57 +0,0 @@
# Copyright © 2023 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from kubernetes import client
import subprocess
from typing import Set

from app.opts import opts


def _run_command(command: str):
    if opts.o.debug:
        print(f"Running: {command}")
    result = subprocess.run(command, shell=True)
    if opts.o.debug:
        print(f"Result: {result}")


def create_cluster(name: str):
    _run_command(f"kind create cluster --name {name}")


def destroy_cluster(name: str):
    _run_command(f"kind delete cluster --name {name}")


def load_images_into_kind(kind_cluster_name: str, image_set: Set[str]):
    for image in image_set:
        _run_command(f"kind load docker-image {image} --name {kind_cluster_name}")


def pods_in_deployment(core_api: client.CoreV1Api, deployment_name: str):
    pods = []
    pod_response = core_api.list_namespaced_pod(namespace="default", label_selector="app=test-app")
    if opts.o.debug:
        print(f"pod_response: {pod_response}")
    for pod_info in pod_response.items:
        pod_name = pod_info.metadata.name
        pods.append(pod_name)
    return pods


def log_stream_from_string(s: str):
    # Note response has to be UTF-8 encoded because the caller expects to decode it
    yield ("ignore", s.encode())

View File

@ -1,30 +0,0 @@
# Copyright © 2022, 2023 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from pathlib import Path
import typing
from app.util import get_yaml


class Spec:
    obj: typing.Any

    def __init__(self) -> None:
        pass

    def init_from_file(self, file_path: Path):
        # Use a context manager so the file handle is closed after loading
        with open(file_path, "r") as spec_file:
            self.obj = get_yaml().load(spec_file)

View File

@ -26,7 +26,7 @@ In addition to the pre-requisites listed in the [README](/README.md), the follow
 1. Clone this repository:
 ```
-$ git clone https://github.com/cerc-io/stack-orchestrator.git
+$ git clone https://git.vdb.to/cerc-io/stack-orchestrator.git
 ```
 2. Enter the project directory:

View File

@ -1,14 +1,14 @@
 # Adding a new stack
-See [this PR](https://github.com/cerc-io/stack-orchestrator/pull/434) for an example of how to currently add a minimal stack to stack orchestrator. The [reth stack](https://github.com/cerc-io/stack-orchestrator/pull/435) is another good example.
+See [this PR](https://git.vdb.to/cerc-io/stack-orchestrator/pull/434) for an example of how to currently add a minimal stack to stack orchestrator. The [reth stack](https://git.vdb.to/cerc-io/stack-orchestrator/pull/435) is another good example.
 For external developers, we recommend forking this repo and adding your stack directly to your fork. This initially requires running in "developer mode" as described [here](/docs/CONTRIBUTING.md). Check out the [Namada stack](https://github.com/vknowable/stack-orchestrator/blob/main/app/data/stacks/public-namada/digitalocean_quickstart.md) from Knowable to see how that is done.
-Core to the feature completeness of stack orchestrator is to [decouple the tool functionality from payload](https://github.com/cerc-io/stack-orchestrator/issues/315) which will no longer require forking to add a stack.
+Core to the feature completeness of stack orchestrator is to [decouple the tool functionality from payload](https://git.vdb.to/cerc-io/stack-orchestrator/issues/315) which will no longer require forking to add a stack.
 ## Example
-- in `app/data/stacks/my-new-stack/stack.yml` add:
+- in `stack_orchestrator/data/stacks/my-new-stack/stack.yml` add:
 ```yaml
 version: "0.1"
@ -21,7 +21,7 @@ pods:
 - my-new-stack
 ```
-- in `app/data/container-build/cerc-my-new-stack/build.sh` add:
+- in `stack_orchestrator/data/container-build/cerc-my-new-stack/build.sh` add:
 ```yaml
 #!/usr/bin/env bash
@ -30,7 +30,7 @@ source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
 docker build -t cerc/my-new-stack:local -f ${CERC_REPO_BASE_DIR}/my-new-stack/Dockerfile ${build_command_args} ${CERC_REPO_BASE_DIR}/my-new-stack
 ```
-- in `app/data/compose/docker-compose-my-new-stack.yml` add:
+- in `stack_orchestrator/data/compose/docker-compose-my-new-stack.yml` add:
 ```yaml
 version: "3.2"
@ -43,20 +43,20 @@ services:
- "0.0.0.0:3000:3000" - "0.0.0.0:3000:3000"
``` ```
- in `app/data/repository-list.txt` add: - in `stack_orchestrator/data/repository-list.txt` add:
```bash ```bash
github.com/my-org/my-new-stack github.com/my-org/my-new-stack
``` ```
whereby that repository contains your source code and a `Dockerfile`, and matches the `repos:` field in the `stack.yml`. whereby that repository contains your source code and a `Dockerfile`, and matches the `repos:` field in the `stack.yml`.
- in `app/data/container-image-list.txt` add: - in `stack_orchestrator/data/container-image-list.txt` add:
```bash ```bash
cerc/my-new-stack cerc/my-new-stack
``` ```
- in `app/data/pod-list.txt` add: - in `stack_orchestrator/data/pod-list.txt` add:
```bash ```bash
my-new-stack my-new-stack

View File

@ -1,6 +1,6 @@
 # Specification
-Note: this page is out of date (but still useful) - it will no longer be useful once stacks are [decoupled from the tool functionality](https://github.com/cerc-io/stack-orchestrator/issues/315).
+Note: this page is out of date (but still useful) - it will no longer be useful once stacks are [decoupled from the tool functionality](https://git.vdb.to/cerc-io/stack-orchestrator/issues/315).
 ## Implementation

docs/webapp.md Normal file
View File

@ -0,0 +1,64 @@
### Building and Running Webapps
It is possible to build and run Next.js webapps using the `build-webapp` and `run-webapp` subcommands.
To make it easier to build once and deploy into different environments and with different configuration,
compilation and static page generation are separated in the `build-webapp` and `run-webapp` steps.
This offers much more flexibility than standard Next.js build methods, since any environment variables accessed
via `process.env`, whether for pages or for the API, will have values drawn from their runtime deployment environment,
not their build environment.
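For example (a sketch: the `NEXT_PUBLIC_API_URL` variable and image name are illustrative assumptions, not output of the tool), the same image can be run against two different env files, and each instance sees its own value in `process.env` at runtime:
```
$ echo "NEXT_PUBLIC_API_URL=https://api.example.com" > production.env
$ echo "NEXT_PUBLIC_API_URL=http://localhost:4000" > dev.env
$ laconic-so run-webapp --image cerc/test-progressive-web-app:local --env-file production.env
$ laconic-so run-webapp --image cerc/test-progressive-web-app:local --env-file dev.env
```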
## Building
Building usually requires no additional configuration. By default, the Next.js version specified in `package.json`
is used, and either `yarn` or `npm` will be used automatically depending on which lock files are present. These
can be overridden with the build arguments `CERC_NEXT_VERSION` and `CERC_BUILD_TOOL` respectively. For example: `--extra-build-args "--build-arg CERC_NEXT_VERSION=13.4.12"`
**Example**:
```
$ cd ~/cerc
$ git clone git@git.vdb.to:cerc-io/test-progressive-web-app.git
$ laconic-so build-webapp --source-repo ~/cerc/test-progressive-web-app
...
Built host container for ~/cerc/test-progressive-web-app with tag:
cerc/test-progressive-web-app:local
To test locally run:
laconic-so run-webapp --image cerc/test-progressive-web-app:local --env-file /path/to/environment.env
```
## Running
With `run-webapp` a new container will be launched on the local machine, with runtime configuration provided by `--env-file` (if specified) and published on an available port. Multiple instances can be launched with different configuration.
**Example**:
```
# Production env
$ laconic-so run-webapp --image cerc/test-progressive-web-app:local --env-file /path/to/environment/production.env
Image: cerc/test-progressive-web-app:local
ID: 4c6e893bf436b3e91a2b92ce37e30e499685131705700bd92a90d2eb14eefd05
URL: http://localhost:32768
# Dev env
$ laconic-so run-webapp --image cerc/test-progressive-web-app:local --env-file /path/to/environment/dev.env
Image: cerc/test-progressive-web-app:local
ID: 9ab96494f563aafb6c057d88df58f9eca81b90f8721a4e068493a289a976051c
URL: http://localhost:32769
```
## Deploying
Use the subcommand `deploy-webapp create` to make a deployment directory that can be subsequently deployed to a Kubernetes cluster.
Example commands are shown below, assuming that the webapp container image `cerc/test-progressive-web-app:local` has already been built:
```
$ laconic-so deploy-webapp create --kube-config ~/kubectl/k8s-kubeconfig.yaml --image-registry registry.digitalocean.com/laconic-registry --deployment-dir webapp-k8s-deployment --image cerc/test-progressive-web-app:local --url https://test-pwa-app.hosting.laconic.com/ --env-file test-webapp.env
$ laconic-so deployment --dir webapp-k8s-deployment push-images
$ laconic-so deployment --dir webapp-k8s-deployment start
```

View File

@ -1,4 +1,5 @@
 python-decouple>=3.8
+python-dotenv==1.0.0
 GitPython>=3.1.32
 tqdm>=4.65.0
 python-on-whales>=0.64.0
@ -9,3 +10,4 @@ pydantic==1.10.9
 tomli==2.0.1
 validators==0.22.0
 kubernetes>=28.1.0
+humanfriendly>=10.0

View File

@ -41,4 +41,4 @@ runcmd:
 - apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
 - systemctl enable docker
 - systemctl start docker
-- git clone https://github.com/cerc-io/stack-orchestrator.git /home/ubuntu/stack-orchestrator
+- git clone https://git.vdb.to/cerc-io/stack-orchestrator.git /home/ubuntu/stack-orchestrator

View File

@ -31,5 +31,5 @@ runcmd:
 - apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
 - systemctl enable docker
 - systemctl start docker
-- curl -L -o /usr/local/bin/laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
+- curl -L -o /usr/local/bin/laconic-so https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
 - chmod +x /usr/local/bin/laconic-so

View File

@ -1,6 +1,6 @@
-build_tag_file_name=./app/data/build_tag.txt
+build_tag_file_name=./stack_orchestrator/data/build_tag.txt
 echo "# This file should be re-generated running: scripts/create_build_tag_file.sh script" > $build_tag_file_name
-product_version_string=$( tail -1 ./app/data/version.txt )
+product_version_string=$( tail -1 ./stack_orchestrator/data/version.txt )
 commit_string=$( git rev-parse --short HEAD )
 timestamp_string=$(date +'%Y%m%d%H%M')
 build_tag_string=${product_version_string}-${commit_string}-${timestamp_string}

scripts/quick-deploy-test.sh Executable file
View File

@ -0,0 +1,19 @@
#!/usr/bin/env bash
# Beginnings of a script to quickly spin up and test a deployment
if [[ -n "$CERC_SCRIPT_DEBUG" ]]; then
set -x
fi
if [[ -n "$1" ]]; then
stack_name=$1
else
stack_name="test"
fi
spec_file_name="${stack_name}-spec.yml"
deployment_dir_name="${stack_name}-deployment"
rm -f ${spec_file_name}
rm -rf ${deployment_dir_name}
laconic-so --stack ${stack_name} deploy --deploy-to compose init --output ${spec_file_name}
laconic-so --stack ${stack_name} deploy --deploy-to compose create --deployment-dir ${deployment_dir_name} --spec-file ${spec_file_name}
#laconic-so deployment --dir ${deployment_dir_name} start
#laconic-so deployment --dir ${deployment_dir_name} ps
#laconic-so deployment --dir ${deployment_dir_name} stop

View File

@ -137,7 +137,7 @@ fi
echo "**************************************************************************************" echo "**************************************************************************************"
echo "Installing laconic-so" echo "Installing laconic-so"
# install latest `laconic-so` # install latest `laconic-so`
distribution_url=https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so distribution_url=https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
install_filename=${install_dir}/laconic-so install_filename=${install_dir}/laconic-so
mkdir -p ${install_dir} mkdir -p ${install_dir}
curl -L -o ${install_filename} ${distribution_url} curl -L -o ${install_filename} ${distribution_url}

View File

@ -13,8 +13,8 @@ setup(
     description='Orchestrates deployment of the Laconic stack',
     long_description=long_description,
     long_description_content_type="text/markdown",
-    url='https://github.com/cerc-io/stack-orchestrator',
+    url='https://git.vdb.to/cerc-io/stack-orchestrator',
-    py_modules=['cli', 'app'],
+    py_modules=['stack_orchestrator'],
     packages=find_packages(),
     install_requires=[requirements],
     python_requires='>=3.7',
@ -25,6 +25,6 @@ setup(
"Operating System :: OS Independent", "Operating System :: OS Independent",
], ],
entry_points={ entry_points={
'console_scripts': ['laconic-so=cli:cli'], 'console_scripts': ['laconic-so=stack_orchestrator.main:cli'],
} }
) )

View File

@ -15,7 +15,7 @@
 import os
 from abc import ABC, abstractmethod
-from app.deploy.deploy import get_stack_status
+from stack_orchestrator.deploy.deploy import get_stack_status
 from decouple import config

View File

@ -27,13 +27,103 @@ import subprocess
 import click
 import importlib.resources
 from pathlib import Path
-from app.util import include_exclude_check, get_parsed_stack_config
+from stack_orchestrator.util import include_exclude_check, get_parsed_stack_config, stack_is_external, warn_exit
-from app.base import get_npm_registry_url
+from stack_orchestrator.base import get_npm_registry_url
 # TODO: find a place for this
 # epilog="Config provided either in .env or settings.ini or env vars: CERC_REPO_BASE_DIR (defaults to ~/cerc)"
+def make_container_build_env(dev_root_path: str,
+                             container_build_dir: str,
+                             debug: bool,
+                             force_rebuild: bool,
+                             extra_build_args: str):
+    container_build_env = {
+        "CERC_NPM_REGISTRY_URL": get_npm_registry_url(),
+        "CERC_GO_AUTH_TOKEN": config("CERC_GO_AUTH_TOKEN", default=""),
+        "CERC_NPM_AUTH_TOKEN": config("CERC_NPM_AUTH_TOKEN", default=""),
+        "CERC_REPO_BASE_DIR": dev_root_path,
+        "CERC_CONTAINER_BASE_DIR": container_build_dir,
+        "CERC_HOST_UID": f"{os.getuid()}",
+        "CERC_HOST_GID": f"{os.getgid()}",
+        "DOCKER_BUILDKIT": config("DOCKER_BUILDKIT", default="0")
+    }
+    container_build_env.update({"CERC_SCRIPT_DEBUG": "true"} if debug else {})
+    container_build_env.update({"CERC_FORCE_REBUILD": "true"} if force_rebuild else {})
+    container_build_env.update({"CERC_CONTAINER_EXTRA_BUILD_ARGS": extra_build_args} if extra_build_args else {})
+    docker_host_env = os.getenv("DOCKER_HOST")
+    if docker_host_env:
+        container_build_env.update({"DOCKER_HOST": docker_host_env})
+    return container_build_env
+def process_container(stack: str,
+                      container,
+                      container_build_dir: str,
+                      container_build_env: dict,
+                      dev_root_path: str,
+                      quiet: bool,
+                      verbose: bool,
+                      dry_run: bool,
+                      continue_on_error: bool,
+                      ):
+    if not quiet:
+        print(f"Building: {container}")
+    default_container_tag = f"{container}:local"
+    container_build_env.update({"CERC_DEFAULT_CONTAINER_IMAGE_TAG": default_container_tag})
+    # Check if this is in an external stack
+    if stack_is_external(stack):
+        container_parent_dir = Path(stack).joinpath("container-build")
+        temp_build_dir = container_parent_dir.joinpath(container.replace("/", "-"))
+        temp_build_script_filename = temp_build_dir.joinpath("build.sh")
+        # Now check if the container exists in the external stack.
+        if not temp_build_script_filename.exists():
+            # If not, revert to building an internal container
+            container_parent_dir = container_build_dir
+    else:
+        container_parent_dir = container_build_dir
+    build_dir = container_parent_dir.joinpath(container.replace("/", "-"))
+    build_script_filename = build_dir.joinpath("build.sh")
+    if verbose:
+        print(f"Build script filename: {build_script_filename}")
+    if os.path.exists(build_script_filename):
+        build_command = build_script_filename.as_posix()
+    else:
+        if verbose:
+            print(f"No script file found: {build_script_filename}, using default build script")
+        repo_dir = container.split('/')[1]
+        # TODO: make this less of a hack -- should be specified in some metadata somewhere
+        # Check if we have a repo for this container. If not, set the context dir to the container-build subdir
+        repo_full_path = os.path.join(dev_root_path, repo_dir)
+        repo_dir_or_build_dir = repo_full_path if os.path.exists(repo_full_path) else build_dir
+        build_command = os.path.join(container_build_dir,
+                                     "default-build.sh") + f" {default_container_tag} {repo_dir_or_build_dir}"
+    if not dry_run:
+        # No PATH at all causes failures with podman.
+        if "PATH" not in container_build_env:
+            container_build_env["PATH"] = os.environ["PATH"]
+        if verbose:
+            print(f"Executing: {build_command} with environment: {container_build_env}")
+        build_result = subprocess.run(build_command, shell=True, env=container_build_env)
+        if verbose:
+            print(f"Return code is: {build_result.returncode}")
+        if build_result.returncode != 0:
+            print(f"Error running build for {container}")
+            if not continue_on_error:
+                print("FATAL Error: container build failed and --continue-on-error not set, exiting")
+                sys.exit(1)
+            else:
+                print("****** Container Build Error, continuing because --continue-on-error is set")
+    else:
+        print("Skipped")
 @click.command()
 @click.option('--include', help="only build these containers")
 @click.option('--exclude', help="don\'t build these containers")
@ -67,13 +157,15 @@ def command(ctx, include, exclude, force_rebuild, extra_build_args):
         print('Dev root directory doesn\'t exist, creating')
     # See: https://stackoverflow.com/a/20885799/1701505
-    from app import data
+    from stack_orchestrator import data
     with importlib.resources.open_text(data, "container-image-list.txt") as container_list_file:
         all_containers = container_list_file.read().splitlines()
     containers_in_scope = []
     if stack:
         stack_config = get_parsed_stack_config(stack)
+        if "containers" not in stack_config or stack_config["containers"] is None:
+            warn_exit(f"stack {stack} does not define any containers")
         containers_in_scope = stack_config['containers']
     else:
         containers_in_scope = all_containers
@ -83,61 +175,16 @@ def command(ctx, include, exclude, force_rebuild, extra_build_args):
     if stack:
         print(f"Stack: {stack}")
-    # TODO: make this configurable
-    container_build_env = {
-        "CERC_NPM_REGISTRY_URL": get_npm_registry_url(),
-        "CERC_GO_AUTH_TOKEN": config("CERC_GO_AUTH_TOKEN", default=""),
-        "CERC_NPM_AUTH_TOKEN": config("CERC_NPM_AUTH_TOKEN", default=""),
-        "CERC_REPO_BASE_DIR": dev_root_path,
-        "CERC_CONTAINER_BASE_DIR": container_build_dir,
-        "CERC_HOST_UID": f"{os.getuid()}",
-        "CERC_HOST_GID": f"{os.getgid()}",
-        "DOCKER_BUILDKIT": config("DOCKER_BUILDKIT", default="0")
-    }
-    container_build_env.update({"CERC_SCRIPT_DEBUG": "true"} if debug else {})
-    container_build_env.update({"CERC_FORCE_REBUILD": "true"} if force_rebuild else {})
-    container_build_env.update({"CERC_CONTAINER_EXTRA_BUILD_ARGS": extra_build_args} if extra_build_args else {})
-    docker_host_env = os.getenv("DOCKER_HOST")
-    if docker_host_env:
-        container_build_env.update({"DOCKER_HOST": docker_host_env})
-    def process_container(container):
-        if not quiet:
-            print(f"Building: {container}")
-        build_dir = os.path.join(container_build_dir, container.replace("/", "-"))
-        build_script_filename = os.path.join(build_dir, "build.sh")
-        if verbose:
-            print(f"Build script filename: {build_script_filename}")
-        if os.path.exists(build_script_filename):
-            build_command = build_script_filename
-        else:
-            if verbose:
-                print(f"No script file found: {build_script_filename}, using default build script")
-            repo_dir = container.split('/')[1]
-            # TODO: make this less of a hack -- should be specified in some metadata somewhere
-            # Check if we have a repo for this container. If not, set the context dir to the container-build subdir
-            repo_full_path = os.path.join(dev_root_path, repo_dir)
-            repo_dir_or_build_dir = repo_full_path if os.path.exists(repo_full_path) else build_dir
-            build_command = os.path.join(container_build_dir, "default-build.sh") + f" {container}:local {repo_dir_or_build_dir}"
-        if not dry_run:
-            if verbose:
-                print(f"Executing: {build_command} with environment: {container_build_env}")
-            build_result = subprocess.run(build_command, shell=True, env=container_build_env)
-            if verbose:
-                print(f"Return code is: {build_result.returncode}")
-            if build_result.returncode != 0:
-                print(f"Error running build for {container}")
-                if not continue_on_error:
-                    print("FATAL Error: container build failed and --continue-on-error not set, exiting")
-                    sys.exit(1)
-                else:
-                    print("****** Container Build Error, continuing because --continue-on-error is set")
-        else:
-            print("Skipped")
+    container_build_env = make_container_build_env(dev_root_path,
+                                                   container_build_dir,
+                                                   debug,
+                                                   force_rebuild,
+                                                   extra_build_args)
     for container in containers_in_scope:
         if include_exclude_check(container, include, exclude):
-            process_container(container)
+            process_container(stack, container, container_build_dir, container_build_env,
+                              dev_root_path, quiet, verbose, dry_run, continue_on_error)
         else:
             if verbose:
                 print(f"Excluding: {container}")

View File

@ -25,8 +25,8 @@ from decouple import config
 import click
 import importlib.resources
 from python_on_whales import docker, DockerException
-from app.base import get_stack
+from stack_orchestrator.base import get_stack
-from app.util import include_exclude_check, get_parsed_stack_config
+from stack_orchestrator.util import include_exclude_check, get_parsed_stack_config
 builder_js_image_name = "cerc/builder-js:local"
@ -83,7 +83,7 @@ def command(ctx, include, exclude, force_rebuild, extra_build_args):
         os.makedirs(build_root_path)
     # See: https://stackoverflow.com/a/20885799/1701505
-    from app import data
+    from stack_orchestrator import data
     with importlib.resources.open_text(data, "npm-package-list.txt") as package_list_file:
         all_packages = package_list_file.read().splitlines()

View File

@ -0,0 +1,84 @@
# Copyright © 2022, 2023 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

# Builds webapp containers

# env vars:
# CERC_REPO_BASE_DIR defaults to ~/cerc

# TODO: display the available list of containers; allow re-build of either all or specific containers

import os
from decouple import config
import click
from pathlib import Path
from stack_orchestrator.build import build_containers
from stack_orchestrator.deploy.webapp.util import determine_base_container


@click.command()
@click.option('--base-container')
@click.option('--source-repo', help="directory containing the webapp to build", required=True)
@click.option("--force-rebuild", is_flag=True, default=False, help="Override dependency checking -- always rebuild")
@click.option("--extra-build-args", help="Supply extra arguments to build")
@click.option("--tag", help="Container tag (default: cerc/<app_name>:local)")
@click.pass_context
def command(ctx, base_container, source_repo, force_rebuild, extra_build_args, tag):
    '''build the specified webapp container'''
    quiet = ctx.obj.quiet
    verbose = ctx.obj.verbose
    dry_run = ctx.obj.dry_run
    debug = ctx.obj.debug
    local_stack = ctx.obj.local_stack
    stack = ctx.obj.stack
    continue_on_error = ctx.obj.continue_on_error

    # See: https://stackoverflow.com/questions/25389095/python-get-path-of-root-project-structure
    container_build_dir = Path(__file__).absolute().parent.parent.joinpath("data", "container-build")

    if local_stack:
        dev_root_path = os.getcwd()[0:os.getcwd().rindex("stack-orchestrator")]
        print(f'Local stack dev_root_path (CERC_REPO_BASE_DIR) overridden to: {dev_root_path}')
    else:
        dev_root_path = os.path.expanduser(config("CERC_REPO_BASE_DIR", default="~/cerc"))

    if not quiet:
        print(f'Dev Root is: {dev_root_path}')

    if not base_container:
        base_container = determine_base_container(source_repo)

    # First build the base container.
    container_build_env = build_containers.make_container_build_env(dev_root_path, container_build_dir, debug,
                                                                    force_rebuild, extra_build_args)
    build_containers.process_container(None, base_container, container_build_dir, container_build_env, dev_root_path,
                                       quiet, verbose, dry_run, continue_on_error)

    # Now build the target webapp.  We use the same build script, but with a different Dockerfile and work dir.
    container_build_env["CERC_WEBAPP_BUILD_RUNNING"] = "true"
    container_build_env["CERC_CONTAINER_BUILD_WORK_DIR"] = os.path.abspath(source_repo)
    container_build_env["CERC_CONTAINER_BUILD_DOCKERFILE"] = os.path.join(container_build_dir,
                                                                          base_container.replace("/", "-"),
                                                                          "Dockerfile.webapp")
    if not tag:
        webapp_name = os.path.abspath(source_repo).split(os.path.sep)[-1]
        container_build_env["CERC_CONTAINER_BUILD_TAG"] = f"cerc/{webapp_name}:local"
    else:
        container_build_env["CERC_CONTAINER_BUILD_TAG"] = tag

    build_containers.process_container(None, base_container, container_build_dir, container_build_env, dev_root_path,
                                       quiet, verbose, dry_run, continue_on_error)

View File

@ -0,0 +1,38 @@
# Copyright © 2023 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
cluster_name_prefix = "laconic-"
stack_file_name = "stack.yml"
spec_file_name = "spec.yml"
config_file_name = "config.env"
deployment_file_name = "deployment.yml"
compose_dir_name = "compose"
compose_deploy_type = "compose"
k8s_kind_deploy_type = "k8s-kind"
k8s_deploy_type = "k8s"
cluster_id_key = "cluster-id"
kube_config_key = "kube-config"
deploy_to_key = "deploy-to"
network_key = "network"
http_proxy_key = "http-proxy"
image_registry_key = "image-registry"
configmaps_key = "configmaps"
resources_key = "resources"
volumes_key = "volumes"
security_key = "security"
annotations_key = "annotations"
labels_key = "labels"
kind_config_filename = "kind-config.yml"
kube_config_filename = "kubeconfig.yml"

View File

@ -0,0 +1,13 @@
services:
registry:
image: registry:2.8
restart: always
environment:
REGISTRY_LOG_LEVEL: ${REGISTRY_LOG_LEVEL}
volumes:
- registry-data:/var/lib/registry
ports:
- "5000"
volumes:
registry-data:

View File

@ -4,6 +4,6 @@ services:
     image: cerc/laconic-console-host:local
     environment:
       - CERC_WEBAPP_FILES_DIR=${CERC_WEBAPP_FILES_DIR:-/usr/local/share/.config/yarn/global/node_modules/@cerc-io/console-app/dist/production}
-      - LACONIC_HOSTED_ENDPOINT=${LACONIC_HOSTED_ENDPOINT:-http://localhost}
+      - LACONIC_HOSTED_ENDPOINT=${LACONIC_HOSTED_ENDPOINT:-http://localhost:9473}
     ports:
       - "80"

View File

@ -5,7 +5,7 @@ services:
command: ["sh", "/docker-entrypoint-scripts.d/create-fixturenet.sh"] command: ["sh", "/docker-entrypoint-scripts.d/create-fixturenet.sh"]
volumes: volumes:
# The cosmos-sdk node's database directory: # The cosmos-sdk node's database directory:
- laconicd-data:/root/.laconicd/data - laconicd-data:/root/.laconicd
# TODO: look at folding these scripts into the container # TODO: look at folding these scripts into the container
- ../config/fixturenet-laconicd/create-fixturenet.sh:/docker-entrypoint-scripts.d/create-fixturenet.sh - ../config/fixturenet-laconicd/create-fixturenet.sh:/docker-entrypoint-scripts.d/create-fixturenet.sh
- ../config/fixturenet-laconicd/export-mykey.sh:/docker-entrypoint-scripts.d/export-mykey.sh - ../config/fixturenet-laconicd/export-mykey.sh:/docker-entrypoint-scripts.d/export-mykey.sh

View File

@ -6,8 +6,8 @@ services:
   # Deploys the L1 smart contracts (outputs to volume l1_deployment)
   fixturenet-optimism-contracts:
     restart: on-failure
-    hostname: fixturenet-optimism-contracts
     image: cerc/optimism-contracts:local
+    hostname: fixturenet-optimism-contracts
     env_file:
       - ../config/fixturenet-optimism/l1-params.env
     environment:
@ -17,27 +17,49 @@ services:
       CERC_L1_ACCOUNTS_CSV_URL: ${CERC_L1_ACCOUNTS_CSV_URL}
       CERC_L1_ADDRESS: ${CERC_L1_ADDRESS}
       CERC_L1_PRIV_KEY: ${CERC_L1_PRIV_KEY}
-      CERC_L1_ADDRESS_2: ${CERC_L1_ADDRESS_2}
-      CERC_L1_PRIV_KEY_2: ${CERC_L1_PRIV_KEY_2}
-    # Waits for L1 endpoint to be up before running the script
-    command: |
-      "./wait-for-it.sh -h ${CERC_L1_HOST:-$${DEFAULT_CERC_L1_HOST}} -p ${CERC_L1_PORT:-$${DEFAULT_CERC_L1_PORT}} -s -t 60 -- ./run.sh"
     volumes:
       - ../config/network/wait-for-it.sh:/app/packages/contracts-bedrock/wait-for-it.sh
-      - ../config/optimism-contracts/hardhat-tasks/verify-contract-deployment.ts:/app/packages/contracts-bedrock/tasks/verify-contract-deployment.ts
-      - ../config/optimism-contracts/hardhat-tasks/rekey-json.ts:/app/packages/contracts-bedrock/tasks/rekey-json.ts
-      - ../config/optimism-contracts/hardhat-tasks/send-balance.ts:/app/packages/contracts-bedrock/tasks/send-balance.ts
-      - ../config/fixturenet-optimism/optimism-contracts/update-config.js:/app/packages/contracts-bedrock/update-config.js
-      - ../config/fixturenet-optimism/optimism-contracts/run.sh:/app/packages/contracts-bedrock/run.sh
+      - ../config/fixturenet-optimism/optimism-contracts/deploy-contracts.sh:/app/packages/contracts-bedrock/deploy-contracts.sh
       - l2_accounts:/l2-accounts
-      - l1_deployment:/app/packages/contracts-bedrock
+      - l1_deployment:/l1-deployment
+      - l2_config:/l2-config
+    # Waits for L1 endpoint to be up before running the contract deploy script
+    command: |
+      "./wait-for-it.sh -h ${CERC_L1_HOST:-$${DEFAULT_CERC_L1_HOST}} -p ${CERC_L1_PORT:-$${DEFAULT_CERC_L1_PORT}} -s -t 60 -- ./deploy-contracts.sh"
+  # Initializes and runs the L2 execution client (outputs to volume l2_geth_data)
+  op-geth:
+    restart: always
+    image: cerc/optimism-l2geth:local
+    hostname: op-geth
+    depends_on:
+      op-node:
+        condition: service_started
+    volumes:
+      - ../config/fixturenet-optimism/run-op-geth.sh:/run-op-geth.sh
+      - l2_config:/l2-config:ro
+      - l2_accounts:/l2-accounts:ro
+      - l2_geth_data:/datadir
+    entrypoint: "sh"
+    command: "/run-op-geth.sh"
+    ports:
+      - "8545"
+      - "8546"
+    healthcheck:
+      test: ["CMD", "nc", "-vz", "localhost:8545"]
+      interval: 30s
+      timeout: 10s
+      retries: 100
+      start_period: 10s
     extra_hosts:
       - "host.docker.internal:host-gateway"
-  # Generates the config files required for L2 (outputs to volume l2_config)
-  op-node-l2-config-gen:
-    restart: on-failure
+  # Runs the L2 consensus client (Sequencer node)
+  # Generates the L2 config files if not already present (outputs to volume l2_config)
+  op-node:
+    restart: always
     image: cerc/optimism-op-node:local
+    hostname: op-node
     depends_on:
       fixturenet-optimism-contracts:
         condition: service_completed_successfully
@ -47,61 +69,19 @@ services:
       CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
       CERC_L1_RPC: ${CERC_L1_RPC}
     volumes:
-      - ../config/fixturenet-optimism/generate-l2-config.sh:/app/generate-l2-config.sh
-      - l1_deployment:/contracts-bedrock:ro
-      - l2_config:/app
-    command: ["sh", "/app/generate-l2-config.sh"]
-    extra_hosts:
-      - "host.docker.internal:host-gateway"
-  # Initializes and runs the L2 execution client (outputs to volume l2_geth_data)
-  op-geth:
-    restart: always
-    image: cerc/optimism-l2geth:local
-    depends_on:
-      op-node-l2-config-gen:
-        condition: service_started
-    volumes:
-      - ../config/fixturenet-optimism/run-op-geth.sh:/run-op-geth.sh
-      - l2_config:/op-node:ro
+      - ../config/fixturenet-optimism/run-op-node.sh:/run-op-node.sh
+      - l1_deployment:/l1-deployment:ro
+      - l2_config:/l2-config
       - l2_accounts:/l2-accounts:ro
-      - l2_geth_data:/datadir
     entrypoint: "sh"
-    command: "/run-op-geth.sh"
+    command: "/run-op-node.sh"
     ports:
-      - "0.0.0.0:8545:8545"
-      - "0.0.0.0:8546:8546"
-    healthcheck:
-      test: ["CMD", "nc", "-vz", "localhost:8545"]
-      interval: 30s
-      timeout: 10s
-      retries: 10
-      start_period: 10s
-  # Runs the L2 consensus client (Sequencer node)
-  op-node:
-    restart: always
-    image: cerc/optimism-op-node:local
-    depends_on:
-      op-geth:
-        condition: service_healthy
-    env_file:
-      - ../config/fixturenet-optimism/l1-params.env
-    environment:
-      CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
-      CERC_L1_RPC: ${CERC_L1_RPC}
-    volumes:
-      - ../config/fixturenet-optimism/run-op-node.sh:/app/run-op-node.sh
-      - l2_config:/op-node-data:ro
-      - l2_accounts:/l2-accounts:ro
-    command: ["sh", "/app/run-op-node.sh"]
-    ports:
-      - "0.0.0.0:8547:8547"
+      - "8547"
     healthcheck:
       test: ["CMD", "nc", "-vz", "localhost:8547"]
       interval: 30s
       timeout: 10s
-      retries: 10
+      retries: 100
       start_period: 10s
     extra_hosts:
       - "host.docker.internal:host-gateway"
@@ -110,6 +90,7 @@ services:
   op-batcher:
     restart: always
     image: cerc/optimism-op-batcher:local
+    hostname: op-batcher
     depends_on:
       op-node:
         condition: service_healthy
@@ -129,7 +110,7 @@ services:
     command: |
       "/wait-for-it.sh -h ${CERC_L1_HOST:-$${DEFAULT_CERC_L1_HOST}} -p ${CERC_L1_PORT:-$${DEFAULT_CERC_L1_PORT}} -s -t 60 -- /run-op-batcher.sh"
     ports:
-      - "127.0.0.1:8548:8548"
+      - "8548"
     extra_hosts:
       - "host.docker.internal:host-gateway"
@@ -137,25 +118,29 @@ services:
   op-proposer:
     restart: always
     image: cerc/optimism-op-proposer:local
+    hostname: op-proposer
     depends_on:
       op-node:
         condition: service_healthy
+      op-geth:
+        condition: service_healthy
     env_file:
       - ../config/fixturenet-optimism/l1-params.env
     environment:
       CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
       CERC_L1_RPC: ${CERC_L1_RPC}
+      CERC_L1_CHAIN_ID: ${CERC_L1_CHAIN_ID}
     volumes:
       - ../config/network/wait-for-it.sh:/wait-for-it.sh
       - ../config/fixturenet-optimism/run-op-proposer.sh:/run-op-proposer.sh
-      - l1_deployment:/contracts-bedrock:ro
+      - l1_deployment:/l1-deployment:ro
       - l2_accounts:/l2-accounts:ro
     entrypoint: ["sh", "-c"]
     # Waits for L1 endpoint to be up before running the proposer
     command: |
       "/wait-for-it.sh -h ${CERC_L1_HOST:-$${DEFAULT_CERC_L1_HOST}} -p ${CERC_L1_PORT:-$${DEFAULT_CERC_L1_PORT}} -s -t 60 -- /run-op-proposer.sh"
     ports:
-      - "127.0.0.1:8560:8560"
+      - "8560"
     extra_hosts:
       - "host.docker.internal:host-gateway"

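The port mappings also change shape in this diff: entries like `127.0.0.1:8548:8548` pin a service to a fixed host address, while a bare `"8548"` only declares the container port and lets Docker publish it on an ephemeral host port, avoiding collisions when several stacks share a machine. Both forms side by side, under an illustrative service name:

```
services:
  example:
    image: busybox  # placeholder image
    ports:
      - "127.0.0.1:8548:8548"  # fixed: host 127.0.0.1:8548 -> container port 8548
      - "8548"                 # ephemeral: Docker picks a free host port
```

`docker compose port <service> 8548` prints whichever host port was actually assigned.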

@@ -0,0 +1,36 @@
version: '3.7'
services:
  # Runs an Urbit fake ship and attempts an app installation using the given data
  # Uploads the app glob to the given IPFS endpoint
  # From the urbit_app_builds volume it:
  # - takes the app build from ${CERC_URBIT_APP}/build (waits for it to appear)
  # - takes additional mark files from ${CERC_URBIT_APP}/mar
  # - takes the docket file from ${CERC_URBIT_APP}/desk.docket-0
  urbit-fake-ship:
    restart: unless-stopped
    image: tloncorp/vere
    environment:
      CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
      CERC_URBIT_APP: ${CERC_URBIT_APP}
      CERC_ENABLE_APP_INSTALL: ${CERC_ENABLE_APP_INSTALL:-true}
      CERC_IPFS_GLOB_HOST_ENDPOINT: ${CERC_IPFS_GLOB_HOST_ENDPOINT:-http://ipfs:5001}
      CERC_IPFS_SERVER_ENDPOINT: ${CERC_IPFS_SERVER_ENDPOINT:-http://ipfs:8080}
    entrypoint: ["bash", "-c", "./run-urbit-ship.sh && ./deploy-app.sh && tail -f /dev/null"]
    volumes:
      - urbit_data:/urbit
      - urbit_app_builds:/app-builds
      - ../config/urbit/run-urbit-ship.sh:/urbit/run-urbit-ship.sh
      - ../config/urbit/deploy-app.sh:/urbit/deploy-app.sh
    ports:
      - "80"
    healthcheck:
      test: ["CMD", "nc", "-v", "localhost", "80"]
      interval: 20s
      timeout: 5s
      retries: 15
      start_period: 10s

volumes:
  urbit_data:
  urbit_app_builds:

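The entrypoint above chains the setup scripts with `tail -f /dev/null`, a common trick for keeping a container alive after its one-shot work finishes; without it the shell would exit when `deploy-app.sh` returns, and `restart: unless-stopped` would restart the ship in a loop. The bare pattern, under a placeholder image:

```
services:
  one-shot-then-idle:
    image: busybox  # assumption: stands in for tloncorp/vere
    # Do the one-time setup, then block forever so the container stays up
    # and its port mappings and healthcheck remain meaningful.
    entrypoint: ["sh", "-c", "echo setup done && tail -f /dev/null"]
```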

@@ -0,0 +1,23 @@
version: "3.7"
services:
grafana:
image: grafana/grafana:10.2.2
restart: always
environment:
GF_SERVER_ROOT_URL: ${GF_SERVER_ROOT_URL}
volumes:
- ../config/monitoring/grafana/provisioning:/etc/grafana/provisioning
- ../config/monitoring/grafana/dashboards:/etc/grafana/dashboards
- grafana_storage:/var/lib/grafana
ports:
- "3000"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3000"]
interval: 30s
timeout: 5s
retries: 10
start_period: 3s
volumes:
grafana_storage:

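The `provisioning` bind mount is what lets this Grafana come up pre-configured: on startup Grafana reads datasource and dashboard definitions from `/etc/grafana/provisioning`. A minimal datasource file in that layout, assuming a Prometheus service named `prometheus` on the same compose network (the path and service name are illustrative, not taken from this stack):

```
# e.g. ../config/monitoring/grafana/provisioning/datasources/prometheus.yml (hypothetical path)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090  # assumption: a prometheus service on the stack network
    isDefault: true
```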

@@ -0,0 +1,24 @@
version: "3.2"
# See: https://docs.ipfs.tech/install/run-ipfs-inside-docker/#set-up
services:
ipfs:
image: ipfs/kubo:v0.24.0
restart: always
volumes:
- ipfs-import:/import
- ipfs-data:/data/ipfs
ports:
- "4001"
- "8080"
- "0.0.0.0:5001:5001"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "5001"]
interval: 20s
timeout: 5s
retries: 15
start_period: 10s
volumes:
ipfs-import:
ipfs-data:

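Note the asymmetry in the `ports` list above: the libp2p swarm port (4001) and the HTTP gateway (8080) stay internal to the compose network, while the Kubo RPC API (5001) is published on `0.0.0.0`, i.e. every host interface. Since the RPC API can reconfigure the node, a loopback-only binding is the safer variant on a shared host; a sketch of the alternative `ports` block:

```
    ports:
      - "4001"                 # libp2p swarm, compose network only
      - "8080"                 # HTTP gateway, compose network only
      - "127.0.0.1:5001:5001"  # RPC API, reachable only from the host itself
```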

@@ -0,0 +1,12 @@
version: "3.2"
services:
mars:
image: cerc/mars-v2:local
restart: always
ports:
- "3000:3000"
environment:
- URL_OSMOSIS_REST=https://lcd-osmosis.blockapsis.com
- URL_OSMOSIS_RPC=https://rpc-osmosis.blockapsis.com
- WALLET_CONNECT_ID=0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x


@@ -0,0 +1,20 @@
version: "3.2"
services:
mars:
image: cerc/mars:local
restart: always
ports:
- "3000:3000"
environment:
- URL_OSMOSIS_GQL=https://osmosis-node.marsprotocol.io/GGSFGSFGFG34/osmosis-hive-front/graphql
- URL_OSMOSIS_REST=https://lcd-osmosis.blockapsis.com
- URL_OSMOSIS_RPC=https://rpc-osmosis.blockapsis.com
- URL_NEUTRON_GQL=https://neutron.rpc.p2p.world/qgrnU6PsQZA8F9S5Fb8Fn3tV3kXmMBl2M9bcc9jWLjQy8p/hive/graphql
- URL_NEUTRON_REST=https://rest-kralum.neutron-1.neutron.org
- URL_NEUTRON_RPC=https://rpc-kralum.neutron-1.neutron.org
- URL_NEUTRON_TEST_GQL=https://testnet-neutron-gql.marsprotocol.io/graphql
- URL_NEUTRON_TEST_REST=https://rest-palvus.pion-1.ntrn.tech
- URL_NEUTRON_TEST_RPC=https://rpc-palvus.pion-1.ntrn.tech
- WALLET_CONNECT_ID=0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x

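Both Mars compose files hard-code their endpoint URLs and a placeholder `WALLET_CONNECT_ID` directly in `environment`. For values that vary per deployment or should stay out of version control, Compose variable substitution is one alternative; a sketch, assuming `WALLET_CONNECT_ID` is exported in the deployment environment or a local `.env` file:

```
services:
  mars:
    image: cerc/mars:local
    restart: always
    ports:
      - "3000:3000"
    environment:
      # Substituted by Compose at parse time; :? aborts with an error if unset.
      - WALLET_CONNECT_ID=${WALLET_CONNECT_ID:?WALLET_CONNECT_ID must be set}
```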
Some files were not shown because too many files have changed in this diff.