Minor tweaks for running the container-registry in k8s. The main change is that --image-registry is no longer required.
Reviewed-on: cerc-io/stack-orchestrator#760
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Reviewed-on: cerc-io/stack-orchestrator#758
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
This adds a stack for the backend from snowball/snowballtools-base.
Reviewed-on: cerc-io/stack-orchestrator#751
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-authored-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#747
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
In kind, when we bind-mount a host directory it is first mounted into the kind container at /mnt, then into the pod at the desired location.
We accidentally picked this up for full-blown k8s and were creating volumes at /mnt. This changes the behavior for both kind and regular k8s so that bind mounts are only allowed when a fully-qualified host path is specified. If no path is specified at all, a default storageClass is assumed to be present and the volume is managed by a provisioner.
E.g., for kind, the default provisioner is https://github.com/rancher/local-path-provisioner
```
stack: test
deploy-to: k8s-kind
config:
  test-variable-1: test-value-1
network:
  ports:
    test:
      - '80'
volumes:
  # this will be bind-mounted to a host-path
  test-data-bind: /srv/data
  # this will be managed by the k8s node
  test-data-auto:
configmaps:
  test-config: ./configmap/test-config
```
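For orientation, a minimal sketch of the two Kubernetes shapes these map onto (illustrative names and sizes, not the exact objects the deployer generates): a bind mount becomes a hostPath volume at the fully-qualified path, while an unspecified path becomes a PVC left to the cluster's default storageClass and its provisioner.
```
# Bind mount: hostPath volume at the fully-qualified path from the spec
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - name: test
      image: cerc/test:local          # illustrative image name
      volumeMounts:
        - name: test-data-bind
          mountPath: /data
  volumes:
    - name: test-data-bind
      hostPath:
        path: /srv/data
        type: Directory
---
# No path given: a PVC with no storageClassName, so the cluster's default
# storageClass (e.g. local-path on kind) provisions and manages the volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-data-auto
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```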
Reviewed-on: cerc-io/stack-orchestrator#741
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#740
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#736
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
* Add Postgres exporter and its dashboard
* Use templating for watcher dashboard
* Add subgraph related panels to watcher dashboard
* Remove individual watcher dashboards and update instructions
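A minimal Prometheus scrape-config sketch for the added exporter, assuming it is reachable under the hypothetical service name postgres-exporter on the exporter's default port 9187:
```
# prometheus.yml fragment (target name is an assumption)
scrape_configs:
  - job_name: postgres
    static_configs:
      - targets: ['postgres-exporter:9187']
```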
* Deploy osmosis on Urbit fake ship
* Remove Urbit configuration from existing osmosis stack
* Add a separate stack for Osmosis front end on Urbit
* Run script for renaming build files with bash
* Add environment variables required in urbit osmosis build
* Fix BASEPATH in compose file
* Remove ipfs-glob-host from network config in osmosis readme
* Use laconic branch for osmosis frontend
---------
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
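Illustrating the BASEPATH fix above, a hedged compose fragment; the service name and value are assumptions, not the stack's actual settings:
```
services:
  osmosis-front-end-urbit:          # assumed service name
    environment:
      # BASEPATH must match the path the app is served from on the fake ship
      - BASEPATH=/apps/osmosis
```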
* Setup Prometheus and Grafana for monitoring stack
* Add dashboard for azimuth watchers
* Add a dashboard for sushiswap watcher
* Persist prometheus server data
* Additional metrics in watcher dashboards
* Update dashboards and add for merkl sushiswap watcher
* Add dashboards for remaining azimuth watchers
* Separate out preconfigured watcher dashboards and add instructions
* Keep the empty dashboards dir
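One way to load the separated-out, preconfigured watcher dashboards from a directory is Grafana's file-based dashboard provisioning; the paths below are illustrative, not necessarily those used in the stack:
```
# grafana provisioning/dashboards/dashboards.yml (illustrative paths)
apiVersion: 1
providers:
  - name: watcher-dashboards
    type: file
    options:
      path: /etc/grafana/dashboards
      foldersFromFilesStructure: true
```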
* Refactor to make Urbit app deployment script generic
* Rename urbit pod and update instructions
* Add a flag to allow skipping app installation on Urbit
* Make remote Urbit app deployment scripts generic
* Move remote deployment scripts to urbit fixturenet
* Update and use existing kubo pod for Urbit glob hosting
* Separate out uniswap gql proxy in a stack
* Use proxy server from watcher-ts
* Add a flag to enable/disable the proxy server
* Update env configuration for uniswap urbit app stack
* Update stack file for uniswap urbit app stack
* Fix env variables in instructions
* Use IPFS for hosting glob files for Urbit
* Add env configuration for IPFS endpoints to instructions
* Make ship pier dir configurable in remote deployment script
* Update remote deployment script to accept glob hash arg
* Create uniswap-frontend stack
* Add stack for building uniswap frontend app
* Add a container for Urbit fake ship
* Update with deployment command
* Add a service for uniswap app deployment to urbit
* Use a script to start urbit ship to handle restarts
* Rename stack name to uniswap-urbit-app
* Rename build.sh to build-app.sh and check if build already exists
* Rename stack directory name
* Update uniswap build restart on failure
* Perform uniswap app deployment in the urbit container
* Add steps to create glob for the app
* Tail /dev/null after deployment
* Add steps to install the app to desk
* Host glob files for uniswap
* Update repo branch
* Update readme with command to get urbit password
* Update readme
* Update readme to open urbit web UI
* Expose the port on glob hosting container
* Avoid exposing urbit http port
* Add scripts for installing uniswap on remote urbit instance
* Configure GQL proxy for uniswap app
* Use laconic branch for app repo
* Rename urbit pod for uniswap app deployment
---------
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
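Illustrating the "Expose the port on glob hosting container" and "Avoid exposing urbit http port" items above, a hedged compose sketch; service names, image tags, and ports are assumptions rather than the stack's actual compose file:
```
services:
  ipfs-glob-host:
    image: ipfs/kubo:master-latest     # assumed image tag
    ports:
      - "8080"        # gateway published on an ephemeral host port for glob fetches
  urbit-fake-ship:
    image: cerc/urbit-fake-ship:local  # assumed image name
    expose:
      - "80"          # reachable on the compose network only; not published to the host
```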
* Update stack to run azimuth job runner
* Run azimuth watcher in active mode
* Update stack to run job-runners for all watchers
* Update ports in job-runner health checks
* Map metrics ports to host
* Configure historical block processing batch size for Azimuth watcher
* Use deployment command for azimuth stack
---------
Co-authored-by: neeraj <neeraj.rtly@gmail.com>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
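Illustrating the "Map metrics ports to host" and health-check port items above, a minimal compose sketch; the service name, port, and check command are assumptions:
```
services:
  azimuth-watcher-job-runner:          # assumed service name
    ports:
      - "9000:9000"   # job-runner metrics endpoint published to the host
    healthcheck:
      # simple TCP check against the metrics port (assumes nc is in the image)
      test: ["CMD", "nc", "-z", "localhost", "9000"]
      interval: 20s
      retries: 15
```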