Implementation of a command to fetch pre-built images from a remote registry, complementing the --push-images option already present on build-containers.
Used together, the two subcommands allow a stack to be deployed without needing to build its images, provided they have already been built and pushed to the specified container image registry.
This implementation simply picks the newest image with the right name and platform (it matches against the platform Python is running on, so watch out for scenarios where Python is an x86 binary on M1 Macs).
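A minimal sketch of that selection rule, assuming a hypothetical list of image records with `name`, `platform`, and `pushed_at` fields (the real registry client differs):
```python
import platform

ARCH_MAP = {"x86_64": "amd64", "aarch64": "arm64", "arm64": "arm64"}

def local_platform() -> str:
    # Derived from the interpreter's platform -- hence the caveat about
    # an x86 Python binary running under Rosetta on M1 Macs.
    return f"linux/{ARCH_MAP.get(platform.machine(), platform.machine())}"

def newest_matching_image(images, name):
    candidates = [
        i for i in images
        if i["name"] == name and i["platform"] == local_platform()
    ]
    # "Newest" simply means the most recent push timestamp.
    return max(candidates, key=lambda i: i["pushed_at"], default=None)
```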
Reviewed-on: #768
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
This adds support for auto-detecting pnpm as a build tool for webapps.
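Detection of this kind usually keys off the lockfile; a sketch under that assumption (the actual check may differ):
```python
from pathlib import Path

def detect_build_tool(app_dir: Path) -> str:
    # pnpm writes pnpm-lock.yaml, yarn writes yarn.lock; default to npm.
    if (app_dir / "pnpm-lock.yaml").exists():
        return "pnpm"
    if (app_dir / "yarn.lock").exists():
        return "yarn"
    return "npm"
```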
Reviewed-on: #767
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Webapps are meant to be build-once/deploy-many, but we were rebuilding them for every request. This changes that, so that we rebuild only once per unique ApplicationRecord.
When we push the image, we now tag it according to its ApplicationRecord.
We don't want to use that tag directly in the compose file for the deployment, however, since the deployment needs to be able to pick up new builds without rewriting the file each time. Instead, we use a per-deployment unique tag (same as before) and simply update which image it references as needed.
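An illustrative sketch of the two-tag scheme, shelling out to the docker CLI; the tag formats here are hypothetical:
```python
import subprocess

def push_tags(image_id: str, app_record_id: str, deployment_id: str, registry: str):
    # Content-addressed tag: one build per unique ApplicationRecord.
    build_tag = f"{registry}/app:{app_record_id}"
    # Stable per-deployment tag: the compose file references this name,
    # and we re-point it at new builds instead of rewriting the file.
    deploy_tag = f"{registry}/app:deployment-{deployment_id}"
    for tag in (build_tag, deploy_tag):
        subprocess.run(["docker", "tag", image_id, tag], check=True)
        subprocess.run(["docker", "push", tag], check=True)
```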
Reviewed-on: #764
This creates a new environment variable, CERC_SINGLE_PAGE_APP, which controls whether a catchall redirection back to / is applied.
If the value is not explicitly set, we try to detect whether the app looks like a single-page app.
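One plausible heuristic for that detection, purely as a sketch (the real check may look at different signals):
```python
from pathlib import Path

def looks_like_spa(webroot: Path) -> bool:
    # A single-page app typically ships exactly one top-level
    # index.html and no other .html entry points.
    html_files = list(webroot.glob("*.html"))
    return len(html_files) == 1 and html_files[0].name == "index.html"
```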
Reviewed-on: #763
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Minor tweaks for running the container-registry in k8s. The big change is no longer requiring --image-registry.
Reviewed-on: #760
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Hopefully the last one for a bit.
This outputs the cmdline only if log_file is present (i.e., not when logging to stdout). It also fixes a bug where the log_file was not passed in one place.
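The corrected behavior amounts to something like this sketch (variable names hypothetical):
```python
def log_cmdline(cmdline: str, log_file=None):
    # Echo the command line into the log file, but never to stdout.
    if log_file is not None:
        print(f"Running: {cmdline}", file=log_file)
```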
Reviewed-on: #757
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
We were missing errors related to registration; this should fix that.
Reviewed-on: #754
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
This adds a stack for the backend from snowball/snowballtools-base.
Reviewed-on: #751
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
If the tree has a 'build-webapp.sh' script, use that.
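A sketch of that override, assuming the script sits at the tree root (exact lookup rules may differ):
```python
import subprocess
from pathlib import Path

def default_build(app_dir: Path) -> None:
    # Placeholder for the standard npm/yarn/pnpm build flow.
    subprocess.run(["npm", "run", "build"], cwd=app_dir, check=True)

def build(app_dir: Path) -> None:
    custom = app_dir / "build-webapp.sh"
    if custom.exists():
        # The tree supplies its own build script: use it.
        subprocess.run(["bash", str(custom)], cwd=app_dir, check=True)
    else:
        default_build(app_dir)
```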
Reviewed-on: #750
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-authored-by: David Boreham <david@bozemanpass.com>
Reviewed-on: #747
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Minor fixes to envsubst for webapps. `LACONIC_HOSTED_CONFIG_homepage` gets special treatment: it can be used to replace the homepage in package.json. With React, though, this value picks up an extra trailing `/`, which we need to remove.
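A sketch of the homepage fix, with illustrative names:
```python
import json
from pathlib import Path

def set_homepage(pkg_json: Path, homepage: str) -> None:
    # React appends an extra '/', so strip trailing slashes first
    # (keeping a bare '/' for the root case).
    data = json.loads(pkg_json.read_text())
    data["homepage"] = homepage.rstrip("/") or "/"
    pkg_json.write_text(json.dumps(data, indent=2))
```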
Reviewed-on: #746
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
In kind, when we bind-mount a host directory it is first mounted into the kind container at /mnt, then into the pod at the desired location.
We accidentally picked this behavior up for full-blown k8s and were creating volumes at /mnt. This changes the behavior for both kind and regular k8s so that bind mounts are only allowed if a fully-qualified path is specified. If no path is specified at all, a default storageClass is assumed to be present and the volume is managed by a provisioner.
E.g., for kind, the default provisioner is: https://github.com/rancher/local-path-provisioner
```
stack: test
deploy-to: k8s-kind
config:
  test-variable-1: test-value-1
network:
  ports:
    test:
      - '80'
volumes:
  # this will be bind-mounted to a host-path
  test-data-bind: /srv/data
  # this will be managed by the k8s node
  test-data-auto:
configmaps:
  test-config: ./configmap/test-config
```
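A sketch of the volume rule the spec above illustrates; helper and field names are assumptions:
```python
from typing import Optional

def classify_volume(name: str, path: Optional[str]):
    if not path:
        # No path: rely on the cluster's default storageClass and let a
        # provisioner (e.g. local-path-provisioner under kind) manage it.
        return ("provisioned", None)
    if not path.startswith("/"):
        raise ValueError(f"volume {name}: bind mounts require a fully-qualified path")
    return ("bind", path)
```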
Reviewed-on: #741
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
```
  --include-tags TEXT  Only include requests with matching tags
                       (comma-separated).
  --exclude-tags TEXT  Exclude requests with matching tags
                       (comma-separated).
```
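The two filters compose in the usual way; a sketch, where the request's tag list is an assumption:
```python
def matches(request_tags, include_tags, exclude_tags) -> bool:
    tags = set(request_tags)
    # Must share at least one tag with the include list (if given)...
    if include_tags and not tags & set(include_tags):
        return False
    # ...and share none with the exclude list (if given).
    if exclude_tags and tags & set(exclude_tags):
        return False
    return True

# e.g. matches(["dapp", "test"], ["dapp"], ["skip"]) -> True
```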
Reviewed-on: #734
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
The repo URL needs a '/' (http form), not a ':' (ssh form).
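I.e., convert the ssh form (git@host:org/repo) to the http form (https://host/org/repo); a sketch:
```python
def ssh_to_http(url: str) -> str:
    if url.startswith("git@"):
        host, _, path = url[len("git@"):].partition(":")
        return f"https://{host}/{path}"
    return url
```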
Reviewed-on: #733
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Update links and references from github.com to git.vdb.to.
Also enable the flake8 lint action in gitea.
Reviewed-on: #732
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
* Add Postgres exporter and its dashboard
* Use templating for watcher dashboard
* Add subgraph related panels to watcher dashboard
* Remove individual watcher dashboards and update instructions
* Deploy osmosis on Urbit fake ship
* Remove Urbit configuration from existing osmosis stack
* Add a separate stack for Osmosis front end on Urbit
* Run script for renaming build files with bash
* Add environment variables required in urbit osmosis build
* Fix BASEPATH in compose file
* Remove ipfs-glob-host from network config in osmosis readme
* Use laconic branch for osmosis frontend
---------
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
* Setup Prometheus and Grafana for monitoring stack
* Add dashboard for azimuth watchers
* Add a dashboard for sushiswap watcher
* Persist prometheus server data
* Additional metrics in watcher dashboards
* Update dashboards and add for merkl sushiswap watcher
* Add dashboards for remaining azimuth watchers
* Separate out preconfigured watcher dashboards and add instructions
* Keep the empty dashboards dir
* Refactor to make Urbit app deployment script generic
* Rename urbit pod and update instructions
* Add a flag to allow skipping app installation on Urbit
* Make remote Urbit app deployment scripts generic
* Move remote deployment scripts to urbit fixturenet
* Update and use existing kubo pod for Urbit glob hosting
* Separate out uniswap gql proxy in a stack
* Use proxy server from watcher-ts
* Add a flag to enable/disable the proxy server
* Update env configuration for uniswap urbit app stack
* Update stack file for uniswap urbit app stack
* Fix env variables in instructions
* Use IPFS for hosting glob files for Urbit
* Add env configuration for IPFS endpoints to instructions
* Make ship pier dir configurable in remote deployment script
* Update remote deployment script to accept glob hash arg
* Create uniswap-frontend stack
* Add stack for building uniswap frontend app
* Add a container for Urbit fake ship
* Update with deployment command
* Add a service for uniswap app deployment to urbit
* Use a script to start urbit ship to handle restarts
* Rename stack name to uniswap-urbit-app
* Rename build.sh to build-app.sh and check if build already exists
* Rename stack directory name
* Update uniswap build restart on failure
* Perform uniswap app deployment in the urbit container
* Add steps to create glob for the app
* Tail /dev/null after deployment
* Add steps to install the app to desk
* Host glob files for uniswap
* Update repo branch
* Update readme with command to get urbit password
* Update readme
* Update readme to open urbit web UI
* Expose the port on glob hosting container
* Avoid exposing urbit http port
* Add scripts for installing uniswap on remote urbit instance
* Configure GQL proxy for uniswap app
* Use laconic branch for app repo
* Rename urbit pod for uniswap app deployment
---------
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
* Update stack to run azimuth job runner
* Run azimuth watcher in active mode
* Update stack to run job-runners for all watchers
* Update ports in job-runner health checks
* Map metrics ports to host
* Configure historical block processing batch size for Azimuth watcher
* Use deployment command for azimuth stack
---------
Co-authored-by: neeraj <neeraj.rtly@gmail.com>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>