Part of [Service provider auctions for web deployments](https://www.notion.so/Service-provider-auctions-for-web-deployments-104a6b22d47280dbad51d28aa3a91d75) and cerc-io/stack-orchestrator#948
- Add a registry mutex decorator over tx methods in `LaconicRegistryClient` wrapper
- Required to allow multiple processes to run the webapp deployment tooling without running into account sequence errors when sending laconicd txs (a minimal sketch of the idea follows)
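For illustration, a minimal sketch of the idea assuming a file-lock approach (`REGISTRY_LOCK_FILE` and `send_tx` are hypothetical names, not the actual implementation):
```python
import fcntl
from functools import wraps

REGISTRY_LOCK_FILE = "/tmp/laconic-registry.lock"  # assumed location

def registry_mutex(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        with open(REGISTRY_LOCK_FILE, "w") as lockfile:
            # Block until no other process holds the lock, so sequence
            # numbers are consumed in order
            fcntl.flock(lockfile, fcntl.LOCK_EX)
            try:
                return fn(*args, **kwargs)
            finally:
                fcntl.flock(lockfile, fcntl.LOCK_UN)
    return wrapper

class LaconicRegistryClient:
    @registry_mutex
    def send_tx(self, tx):  # hypothetical tx method
        ...
```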
Reviewed-on: cerc-io/stack-orchestrator#957
Reviewed-by: ashwin <ashwin@noreply.git.vdb.to>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Part of [Service provider auctions for web deployments](https://www.notion.so/Service-provider-auctions-for-web-deployments-104a6b22d47280dbad51d28aa3a91d75) and cerc-io/stack-orchestrator#948
- Add a command `publish-deployment-auction` to create and publish an app deployment auction
- Add a command `handle-deployment-auction` to handle auctions on deployer side
- Update `request-webapp-deployment` command to allow using an auction id in deployment requests
- Update `deploy-webapp-from-registry` command to handle deployment requests that use an auction
- Add a command `request-webapp-undeployment` to request an application undeployment
Reviewed-on: cerc-io/stack-orchestrator#950
Reviewed-by: ashwin <ashwin@noreply.git.vdb.to>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Part of cerc-io/stack-orchestrator#955
- Using `shiv` version 1.0.6
Reviewed-on: cerc-io/stack-orchestrator#956
Reviewed-by: ashwin <ashwin@noreply.git.vdb.to>
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Co-committed-by: Nabarun <nabarun@deepstacksoft.com>
Otherwise we sometimes see errors like:
```
cerc-webapp-deployer: File "/root/.shiv/laconic-so_0f937aa98c2748ef9af8585d6f441dbc01546ace0d6660cbb159d1e5040aeddf/site-packages/stack_orchestrator/deploy/webapp/deploy_webapp_from_registry.py", line 671, in command
cerc-webapp-deployer: shutil.rmtree(tempdir)
cerc-webapp-deployer: File "/usr/lib/python3.10/shutil.py", line 725, in rmtree
cerc-webapp-deployer: _rmtree_safe_fd(fd, path, onerror)
cerc-webapp-deployer: File "/usr/lib/python3.10/shutil.py", line 681, in _rmtree_safe_fd
cerc-webapp-deployer: onerror(os.unlink, fullname, sys.exc_info())
cerc-webapp-deployer: File "/usr/lib/python3.10/shutil.py", line 679, in _rmtree_safe_fd
cerc-webapp-deployer: os.unlink(entry.name, dir_fd=topfd)
cerc-webapp-deployer: FileNotFoundError: [Errno 2] No such file or directory: 'S.gpg-agent.extra'
```
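A minimal sketch of the kind of fix involved, assuming the temp dir contents can simply be abandoned (`tempdir` is the variable from the traceback above):
```python
import shutil

# Tolerate files (e.g. transient gpg-agent sockets) that vanish
# between directory listing and unlink during cleanup
shutil.rmtree(tempdir, ignore_errors=True)
```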
Reviewed-on: cerc-io/stack-orchestrator#941
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Add a flag to re-use config.
Reviewed-on: cerc-io/stack-orchestrator#939
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
This adds two new commands: `publish-webapp-deployer` and `request-webapp-deployment`.
`publish-webapp-deployer` creates a `WebappDeployer` record, which provides information to requestors like the API URL, minimum required payment, payment address, and public key to use for encrypting config.
```
$ laconic-so publish-deployer-to-registry \
  --laconic-config ~/.laconic/laconic.yml \
  --api-url https://webapp-deployer-api.dev.vaasl.io \
  --public-key-file webapp-deployer-api.dev.vaasl.io.pgp.pub \
  --lrn lrn://laconic/deployers/webapp-deployer-api.dev.vaasl.io \
  --min-required-payment 100000
```
`request-webapp-deployment` simplifies publishing a `WebappDeploymentRequest` and can also handle automatic payment, as well as encryption and upload of configuration.
```
$ laconic-so request-webapp-deployment \
  --laconic-config ~/.laconic/laconic.yml \
  --deployer lrn://laconic/deployers/webapp-deployer-api.dev.vaasl.io \
  --app lrn://cerc-io/applications/webapp-hello-world@0.1.3 \
  --env-file ~/yaml/hello.env \
  --make-payment auto
```
Related changes are included for the deploy/undeploy commands: decrypting and using config, using the payment address from the WebappDeployer record, etc.
Reviewed-on: cerc-io/stack-orchestrator#938
Adds three new options for deployment/undeployment:
```
"--min-required-payment",
help="Requests must have a minimum payment to be processed",
"--payment-address",
help="The address to which payments should be made. Default is the current laconic account.",
"--all-requests",
help="Handle requests addressed to anyone (by default only requests to my payment address are examined).",
```
In this mode, requests should be designated for a particular address with the attribute `to` and include a `payment` attribute containing the tx hash of the payment.
The deployer will confirm the payment (to the right account, right amount, not used before, etc.) and then proceed with the deployment or undeployment, as sketched below.
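For illustration, a sketch of those checks (the `get_tx` helper, attribute shapes, and used-hash bookkeeping are assumed names, not the actual implementation):
```python
def verify_payment(laconic, request, payment_address, min_amount, used_tx_hashes):
    tx = laconic.get_tx(request.attributes.payment)  # look up the tx by hash
    if tx is None:
        return False
    if tx.recipient != payment_address:   # paid to the right account?
        return False
    if tx.amount < min_amount:            # paid enough?
        return False
    if tx.hash in used_tx_hashes:         # not already used for a prior request?
        return False
    used_tx_hashes.add(tx.hash)
    return True
```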
Reviewed-on: cerc-io/stack-orchestrator#928
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#930
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#917
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Related to cerc-io/webapp-deployment-status-api#10
There are two issues addressed here. One is that the output probably changed recently, whether in the client or the server, for the case where no matching record is found by ID. (Note this is specific to `laconic record get --id <v>` and does not seem to apply to the similar command that retrieves a record by name, `laconic name resolve <n>`.)
Rather than returning `[]` it now returns `[ null ]`. This caused us to think there *was* an application record found, and we attempted to treat the `null` entry like an Application object. That's fixed by filtering out null responses (see the sketch below), which is a good precaution for the deployer, though I think it makes sense to ask whether this new behavior by the client/server is correct. Seems suspicious.
The other issue is that all the defensive checks we had in place to deal with broken/bad AppDeploymentRequests were around the _build_. This error was coming much earlier, when merely parsing and examining the request to see whether it needed to be handled at all.
I have now added similar defensive error handling around that portion of the code.
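A minimal sketch of the null-filtering fix described above (helper names are illustrative):
```python
def get_application_record(laconic, record_id):
    # A "[ null ]" response now behaves the same as "[]"
    records = [r for r in laconic.get_records_by_id(record_id) if r is not None]
    return records[0] if records else None
```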
Reviewed-on: cerc-io/stack-orchestrator#922
Reviewed-by: zramsay <zramsay@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
The str type check doesn't work if the port is a `ruamel.yaml.scalarstring.SingleQuotedScalarString` or a `ruamel.yaml.scalarstring.DoubleQuotedScalarString` (illustrated below).
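For illustration (assuming the fix replaces an exact type comparison with `isinstance`; ruamel's quoted scalar string classes subclass `str`):
```python
from ruamel.yaml.scalarstring import SingleQuotedScalarString

port = SingleQuotedScalarString("8080")

print(type(port) == str)      # False -- exact-type check misses the subclass
print(isinstance(port, str))  # True  -- scalarstring types subclass str
```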
Reviewed-on: cerc-io/stack-orchestrator#919
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#918
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#915
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#916
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
This emulates the K8S ConfigMap behavior on Docker by using a regular volume.
If a directory exists under `config/` which matches a named volume, the contents will be copied to the volume on `create` (provided the destination volume is empty). That is, rather than a ConfigMap, it is essentially a "config volume".
For example, with a compose file like:
```
version: '3.7'
services:
  caddy:
    image: cerc/caddy-ethcache:local
    restart: always
    volumes:
      - caddyconfig:/etc/caddy:ro
volumes:
  caddyconfig:
```
And a directory:
```
❯ ls stack-orchestrator/config/caddyconfig/
Caddyfile
```
After `laconic-so deploy create --spec-file caddy.yml --deployment-dir /srv/caddy` there will be:
```
❯ ls /srv/caddy/data/caddyconfig
Caddyfile
```
Mounted at `/etc/caddy` in the container.
Reviewed-on: cerc-io/stack-orchestrator#914
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
NodePort example:
```
network:
  ports:
    caddy:
      - 1234
      - 32020:2020
```
Replicas example:
```
replicas: 2
```
This also adds an optimization for k8s where, if a directory matching the name of a configmap exists beneath `config/` in the stack, its contents will be copied into the corresponding configmap.
For example:
```
# Config files in the stack
❯ ls stack-orchestrator/config/caddyconfig
Caddyfile Caddyfile.one-req-per-upstream-example
# ConfigMap in the spec
❯ cat foo.yml | grep config
...
configmaps:
  caddyconfig: ./configmaps/caddyconfig
# Create the deployment
❯ laconic-so --stack ~/cerc/caddy-ethcache/stack-orchestrator/stacks/caddy-ethcache deploy create --spec-file foo.yml
# The files from beneath config/<config_map_name> have been copied to the ConfigMap directory from the spec.
❯ ls deployment-001/configmaps/caddyconfig
Caddyfile Caddyfile.one-req-per-upstream-example
```
Reviewed-on: cerc-io/stack-orchestrator#913
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#909
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Instead of attempting to rewrite the nextConfig file directly, inject a helper function that adds the config we need.
Reviewed-on: cerc-io/stack-orchestrator#901
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#896
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Fix for cerc-io/stack-orchestrator#880 to support the next compile/generate syntax.
Reviewed-on: cerc-io/stack-orchestrator#886
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#877
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#875
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#873
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
This is working off pull request "[Add support for pnpm as a webapp build tool. #767](https://git.vdb.to/cerc-io/stack-orchestrator/pulls/767/files)", which adds `pnpm` package manager support for `nextjs` & `webapps`.
`bun`'s default build output directory (defined as `CERC_BUILD_OUTPUT_DIR`) is `dist`, which should already be handled by the `pnpm` support in the previously mentioned [pull request](https://git.vdb.to/cerc-io/stack-orchestrator/pulls/767/files).
We install `bun` using `npm`, following our previous `pnpm` installation:
```zsh
npm install -g bun
```
We'll be using `bun` as a package manager that works with `Node.js` projects as defined in bun's [docs](https://bun.sh/docs/cli/install)
> The bun CLI contains a Node.js-compatible package manager designed to be a dramatically faster replacement for npm, yarn, and pnpm. It's a standalone tool that will work in pre-existing Node.js projects; if your project has a package.json, bun install can help you speed up your workflow.
To test `next.js` apps using `node.js` and compatibility with all four package managers -- `npm`, `yarn`, `pnpm`, and `bun` -- use the branches of snowball's [nextjs-package-manager-example-app](https://git.vdb.to/snowball/nextjs-package-manager-example-app) repo: `nextjs-package-manager/npm`, `nextjs-package-manager/yarn`, `nextjs-package-manager/pnpm`, `nextjs-package-manager/bun`.
Co-authored-by: Vivian Phung <dev+github@vivianphung.com>
Co-authored-by: David Boreham <dboreham@noreply.git.vdb.to>
Reviewed-on: https://git.vdb.to/cerc-io/stack-orchestrator/pulls/800
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: VPhung24 <vphung24@noreply.git.vdb.to>
Co-committed-by: VPhung24 <vphung24@noreply.git.vdb.to>
Fixes
- stack path resolution for `build`
- external stack path resolution for deployments
- "extra" config detection
- `deployment ports` command
- `version` command in dist or source install (without build_tag.txt)
- `setup-repos`, so it won't die when an existing repo is not at a branch or exact tag
Used in cerc-io/fixturenet-eth-stacks#14
Reviewed-on: cerc-io/stack-orchestrator#851
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Reviewed-on: cerc-io/stack-orchestrator#868
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#866
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#865
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Helps with diagnosing failures and odd behavior seen in the deployer in production.
Reviewed-on: cerc-io/stack-orchestrator#863
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#854
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
Co-authored-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-committed-by: David Boreham <dboreham@noreply.git.vdb.to>
Reviewed-on: cerc-io/stack-orchestrator#838
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Part of [Metrics and logging for GQL queries in watcher](https://www.notion.so/Metrics-and-logging-for-GQL-queries-in-watcher-928c692292b140a2a4f52cda9795df5e)
- Update watcher config templates after config refactoring
- Mount watcher GQL query log files on volumes
- Update watcher dashboard to
  - add a panel to show latest processed block number
  - use latest processed block from sync status for diff values
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Reviewed-on: cerc-io/stack-orchestrator#835
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Reviewed-on: cerc-io/stack-orchestrator#831
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#830
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#829
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Tweak the auto-detection logic slightly for single-page apps, and also print the results.
Reviewed-on: cerc-io/stack-orchestrator#828
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#815
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#810
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#806
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#805
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
This adds a new option `--fqdn-policy` to `deploy-webapp-from-registry`.
The default policy, `prohibit`, means that `ApplicationDeploymentRequests` which specify a FQDN will be rejected. The `allow` policy will cause them to be processed. The `preexisting` policy will only process them if a `DnsRecord` with the correct ownership already exists in the registry.
The latter would be useful in conjunction with a pre-checking scheme in the UI (e.g., that the DNS entry is properly configured, the domain is under the control of the requestor, etc.). Only after all the checks were successful would the `DnsRecord` be created, allowing `ApplicationDeploymentRequests` to use it. A sketch of the policy decision follows.
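A sketch of the policy decision (function and registry helper names are hypothetical, not the actual implementation):
```python
def fqdn_allowed(policy, fqdn, registry, requestor):
    if fqdn is None:
        return True                      # no FQDN requested: always fine
    if policy == "prohibit":
        return False                     # reject requests that specify a FQDN
    if policy == "allow":
        return True                      # process them unconditionally
    if policy == "preexisting":
        # only process if a DnsRecord with correct ownership already exists
        dns_record = registry.get_dns_record(fqdn)
        return dns_record is not None and dns_record.owner == requestor
    raise ValueError(f"unknown --fqdn-policy: {policy}")
```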
Reviewed-on: cerc-io/stack-orchestrator#802
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Updates fixturenet-plugeth stack for the Deneb fork based on Geth v1.13.x:
- updates the genesis generator tool and simplifies the config: the default from `ethereum-genesis-generator` can be used for a from-genesis Merged chain.
Reviewed-on: cerc-io/stack-orchestrator#789
Reviewed-by: jonathanface <jonathanface@noreply.git.vdb.to>
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
Part of https://www.notion.so/Alerting-for-failing-CI-jobs-d0183b65453947aeab11dbddf989d9c0
- Run CI alert steps only on main to avoid alerts for in-progress PRs
- The Slack alerts will be sent on a CI job failure if
  - A commit is pushed directly to main
  - A PR gets merged into main
  - A scheduled job runs on main
Reviewed-on: cerc-io/stack-orchestrator#797
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Revert "Blind commit to fix laconic CLI calls after rename." (#784)
`laconic cns` got renamed to `laconic registry` which breaks all the scripts and commands that use it.
Reviewed-on: cerc-io/stack-orchestrator#784
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#788
`laconic cns` got renamed to `laconic registry` which breaks all the scripts and commands that use it.
Reviewed-on: cerc-io/stack-orchestrator#784
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Rather than always requesting a new certificate, attempt to re-use an existing certificate if one already exists in the k8s cluster. This includes matching against a wildcard certificate (sketched below).
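For illustration, a sketch of the matching rule, including single-label wildcards (names are hypothetical; the real code inspects certificates already present in the cluster):
```python
def cert_matches(cert_hosts, fqdn):
    """Sketch: does an existing certificate cover fqdn, including wildcards?"""
    for host in cert_hosts:
        if host == fqdn:
            return True
        # "*.dev.example.com" covers "app.dev.example.com" but not
        # "a.b.dev.example.com" (single-label wildcard, as in TLS)
        if host.startswith("*.") and fqdn.split(".", 1)[1:] == [host[2:]]:
            return True
    return False
```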
Reviewed-on: cerc-io/stack-orchestrator#779
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Make sure to check the exit code of the docker build and bubble it back up to laconic-so.
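For illustration, a minimal sketch of the idea (`image_tag` and `build_context` are assumed names):
```python
import subprocess
import sys

image_tag = "cerc/example:local"     # assumed
build_context = "./container-build"  # assumed

result = subprocess.run(["docker", "build", "-t", image_tag, build_context])
if result.returncode != 0:
    sys.exit(result.returncode)  # bubble the failure up to laconic-so
```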
Reviewed-on: cerc-io/stack-orchestrator#778
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
This saves about 1GB of space in the image.
Reviewed-on: cerc-io/stack-orchestrator#773
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#769
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Implementation of a command to fetch pre-built images from a remote registry, complementing the `--push-images` option already present on `build-containers`.
Used together, the two subcommands allow a stack to be deployed without needing to build its images, provided they have already been built and pushed to the specified container image registry.
This implementation simply picks the newest image with the right name and platform (it matches against the platform Python is running on, so watch out for scenarios where Python is an x86 binary on M1 macs).
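A sketch of that selection rule (the candidate image objects and their fields are assumptions; registry architecture names may also need mapping, e.g. `x86_64` vs `amd64`):
```python
import platform

def pick_image(candidates, name):
    """Sketch: newest image matching name and the platform Python runs on."""
    machine = platform.machine()  # an x86 Python on an M1 mac reports x86 here
    matching = [i for i in candidates
                if i.name == name and i.architecture == machine]
    return max(matching, key=lambda i: i.created, default=None)
```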
Reviewed-on: cerc-io/stack-orchestrator#768
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
This adds support for auto-detecting pnpm as a build tool for webapps.
Reviewed-on: cerc-io/stack-orchestrator#767
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Webapps are meant to be build-once/deploy-many, but we were rebuilding them for every request. This changes that, so that we rebuild only once per unique ApplicationRecord.
When we push the image, we now tag it according to its ApplicationRecord.
We don't want to use that tag directly in the compose file for the deployment, however, as the deployment needs to be able to adjust to new builds without re-writing the file all the time. Instead, we use a per-deployment unique tag (same as before); we just update which image it references as needed (sketched below).
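For illustration, a sketch of the tagging scheme with made-up image names and tags:
```bash
# Build once per ApplicationRecord, then push under that record's tag
docker tag cerc-webapp-foo:local cerc-webapp-foo:<app-record-id>
docker push cerc-webapp-foo:<app-record-id>
# On (re)deploy, repoint the stable per-deployment tag at the new build,
# so the compose file never needs rewriting
docker tag cerc-webapp-foo:<app-record-id> cerc-webapp-foo:deployment-<uuid>
```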
Reviewed-on: cerc-io/stack-orchestrator#764
This creates a new environment variable, CERC_SINGLE_PAGE_APP, which controls whether a catchall redirection back to / is applied.
If the value is not explicitly set, we try to detect if the page looks like a single-page app.
Reviewed-on: cerc-io/stack-orchestrator#763
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#762
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Minor tweaks for running the container-registry in k8s. The big change is not requiring --image-registry.
Reviewed-on: cerc-io/stack-orchestrator#760
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Reviewed-on: cerc-io/stack-orchestrator#758
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Hopefully the last one for a bit.
This only outputs the cmdline if log_file is present (i.e., not to stdout). It also fixes a bug where the log_file was not passed in one place.
Reviewed-on: cerc-io/stack-orchestrator#757
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#756
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#755
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
We were missing errors related to registration; this should fix that.
Reviewed-on: cerc-io/stack-orchestrator#754
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
This adds a stack for the backend from snowball/snowballtools-base.
Reviewed-on: cerc-io/stack-orchestrator#751
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
If the tree has a 'build-webapp.sh' script, use that.
Reviewed-on: cerc-io/stack-orchestrator#750
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-authored-by: David Boreham <david@bozemanpas.com>
Reviewed-on: cerc-io/stack-orchestrator#747
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#749
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#748
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Minor fixes to envsubst for webapps. Somewhat specially treated is `LACONIC_HOSTED_CONFIG_homepage`, which can be used to replace the homepage in package.json. With React, this gets an extra `/`, which we need to remove.
Reviewed-on: cerc-io/stack-orchestrator#746
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#743
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#742
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
In kind, when we bind-mount a host directory it is first mounted into the kind container at /mnt, then into the pod at the desired location.
We accidentally picked this up for full-blown k8s, and were creating volumes at /mnt. This changes the behavior for both kind and regular k8s so that bind mounts are only allowed if a fully-qualified path is specified. If no path is specified at all, a default storageClass is assumed to be present and the volume is managed by a provisioner.
E.g., for kind, the default provisioner is: https://github.com/rancher/local-path-provisioner
```
stack: test
deploy-to: k8s-kind
config:
  test-variable-1: test-value-1
network:
  ports:
    test:
      - '80'
volumes:
  # this will be bind-mounted to a host-path
  test-data-bind: /srv/data
  # this will be managed by the k8s node
  test-data-auto:
configmaps:
  test-config: ./configmap/test-config
```
Reviewed-on: cerc-io/stack-orchestrator#741
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#740
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#738
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#736
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
```
--include-tags TEXT Only include requests with matching tags
(comma-separated).
--exclude-tags TEXT Exclude requests with matching tags (comma-
separated).
```
Reviewed-on: cerc-io/stack-orchestrator#734
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Needs a '/' (http) not ':' (ssh).
Reviewed-on: cerc-io/stack-orchestrator#733
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Update links and references from github.com to git.vdb.to.
Also enable the flake8 lint action in gitea.
Reviewed-on: cerc-io/stack-orchestrator#732
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
* Add Postgres exporter and its dashboard
* Use templating for watcher dashboard
* Add subgraph related panels to watcher dashboard
* Remove individual watcher dashboards and update instructions
* Deploy osmosis on Urbit fake ship
* Remove Urbit configuration from existing osmosis stack
* Add a separate stack for Osmosis front end on Urbit
* Run script for renaming build files with bash
* Add environment variables required in urbit osmosis build
* Fix BASEPATH in compose file
* Remove ipfs-glob-host from network config in osmosis readme
* Use laconic branch for osmosis frontend
---------
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
* Setup Prometheus and Grafana for monitoring stack
* Add dashboard for azimuth watchers
* Add a dashboard for sushiswap watcher
* Persist prometheus server data
* Additional metrics in watcher dashboards
* Update dashboards and add for merkl sushiswap watcher
* Add dashboards for remaining azimuth watchers
* Separate out preconfigured watcher dashboards and add instructions
* Keep the empty dashboards dir
* Refactor to make Urbit app deployment script generic
* Rename urbit pod and update instructions
* Add a flag to allow skipping app installation on Urbit
* Make remote Urbit app deployment scripts generic
* Move remote deployment scripts to urbit fixturenet
* Update and use existing kubo pod for Urbit glob hosting
* Separate out uniswap gql proxy in a stack
* Use proxy server from watcher-ts
* Add a flag to enable/disable the proxy server
* Update env configuration for uniswap urbit app stack
* Update stack file for uniswap urbit app stack
* Fix env variables in instructions
* Use IPFS for hosting glob files for Urbit
* Add env configuration for IPFS endpoints to instructions
* Make ship pier dir configurable in remote deployment script
* Update remote deployment script to accept glob hash arg
* Create uniswap-frontend stack
* Add stack for building uniswap frontend app
* Add a container for Urbit fake ship
* Update with deployment command
* Add a service for uniswap app deployment to urbit
* Use a script to start urbit ship to handle restarts
* Rename stack name to uniswap-urbit-app
* Rename build.sh to build-app.sh and check if build already exists
* Rename stack directory name
* Update uniswap build restart on failure
* Perform uniswap app deployment in the urbit container
* Add steps to create glob for the app
* Tail /dev/null after deployment
* Add steps to install the app to desk
* Host glob files for uniswap
* Update repo branch
* Update readme with command to get urbit password
* Update readme
* Update readme to open urbit web UI
* Expose the port on glob hosting container
* Avoid exposing urbit http port
* Add scripts for installing uniswap on remote urbit instance
* Configure GQL proxy for uniswap app
* Use laconic branch for app repo
* Rename urbit pod for uniswap app deployment
---------
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
* Update stack to run azimuth job runner
* Run azimuth watcher in active mode
* Update stack to run job-runners for all watchers
* Update ports in job-runner health checks
* Map metrics ports to host
* Configure historical block processing batch size for Azimuth watcher
* Use deployment command for azimuth stack
---------
Co-authored-by: neeraj <neeraj.rtly@gmail.com>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
This fixes three issues:
1. #644 (build output)
2. #646 (error on startup)
3. automatic env quote handling (related to 2)
For the build output we now have:
```
#################################################################
Built host container for /home/telackey/tmp/iglootools-home with tag:
cerc/iglootools-home:local
To test locally run:
docker run -p 3000:3000 cerc/iglootools-home:local
```
For the startup error, it was hanging while waiting for the "success" message from the next generate output (itself a workaround for a nextjs bug fixed by this PR we submitted: https://github.com/vercel/next.js/pull/58276).
I added a timeout which will cause it to wait up to a maximum of _n_ seconds before issuing:
```
ERROR: 'npm run cerc_generate' exceeded CERC_MAX_GENERATE_TIME.
```
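For illustration, one way to impose such a limit (a sketch; the real code watches the generate output rather than just the process, and `max_generate_time` is an assumed name for the `CERC_MAX_GENERATE_TIME` value):
```python
import subprocess

max_generate_time = 60  # seconds; assumed stand-in for CERC_MAX_GENERATE_TIME

try:
    subprocess.run(["npm", "run", "cerc_generate"], timeout=max_generate_time)
except subprocess.TimeoutExpired:
    print("ERROR: 'npm run cerc_generate' exceeded CERC_MAX_GENERATE_TIME.")
```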
On the quoting itself, I plan on adding a new run-webapp command, but I realized I had a decent spot to effect the quote replacement on-the-fly after all, when I am already escaping the values for insertion/replacement into JS.
The "dequoting" can be disabled with `CERC_RETAIN_ENV_QUOTES=true`.
* Upgrade merkl-sushiswap-v3-watcher-ts release
* Increase blockDelayInMilliSecs for merkl-sushiswap-v3 watcher
* Upgrade sushiswap-v3-watcher-ts release
* Add sushiswap-v3 watcher to stack list
* Avoid mapping ports that are not required to be exposed
* Add a sushiswap-v3 watcher stack
* Add services for watcher db and server
* Add service for watcher job-runner
* Use 0.0.0.0 for watcher server config
---------
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
* Separate ponder indexer and ponder watcher and add second ponder indexer
* Handle review changes
* Update config to point ponder watcher to indexer 2 to indexer 1
* Update Ponder demo
* Use deployed ERC20 contract in second Ponder indexer
* Add order by timestamp in Ponder watcher app entities query
* Upgrade go-nitro version to v0.1.2-ts-port-0.1.9
* Decrease Ponder start block to process contract transfer event at deployment
---------
Co-authored-by: Shreerang Kale <shreerangkale@gmail.com>
* Use durable store for in-process Nitro node
* Update setup for external go-nitro node
* Add a separate service for ipld-eth-server with remote Nitro node
* Update repo branches / versions
* Wait for external Nitro node endpoint and update instructions
* Update repo branches
* Add support to pass ratesFile in config
* Change branch for ponder to laconic-esm
* Fix repo name for ponder
---------
Co-authored-by: Shreerang Kale <shreerangkale@gmail.com>
* Remove reverse payment proxy service from payment stack
* Remove run-reverse-payment-proxy.sh
* Remove reverse payment proxy port from readme
---------
Co-authored-by: Shreerang Kale <shreerangkale@gmail.com>
* Use ponder in watcher mode and indexer mode separately in payments stack
* Refactor config file and configure env variables for watcher mode
* Update demo.md for payments stack
* Handle review changes
* Setup config to pay for watcher to indexer GQL queries
* Fix config in stack for making payments in watcher ponder app
* Update demo for payment from watcher to indexer mode Ponder apps
* Use laconic-esm branch for ponder
---------
Co-authored-by: Shreerang Kale <shreerangkale@gmail.com>
* First part of deployments for external repos
* Generate deployment dir
* Create empty config file
* Copy script files into deployment
* Run scripts in deployment
* Refactor
* Integrate external plugins
* Remove debug output
* Add a fixturenet-payments stack
* Export the WebSocket port in fixturenet-eth-geth service
* Add container to run a go-nitro node
* Add container to deploy Nitro contracts
* Read contract addresses from a volume when running the Nitro node
* Add a service for Nitro reverse payment proxy
* Expose payment proxy endpoint to be accessible from host
* Map nitro node messaging and payment proxy ports to host
* Use container to deploy Nitro contracts in mobymask-v3 stack
* Use a common contract deployment script from mobymask-v3 stack
* Add MobyMask contract deployment and watcher services
* Fixes for contract deployment and watcher scripts
* Add a container and service for mobymask-snap
* Add MobyMask app service
* Add container and service for a ponder app
* Fix ponder setup and update instructions
* Handle review comments
* Use enablepaidrpcmethods flag in reverse payment proxy server
* Update go-nitro branch
* Fixes for mobymask-v3 stack
---------
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
* Add a mobymask-v3 stack
* Fix Nitro deployment script and add watcher container
* Setup Nitro config
* Run build after setting Nitro addresses
* Setup consensus config
* Add a container for web-app
* Use node 18 for the web-app
* Persist Nitro node data to a volume
* Add clean up steps
* Update query rates
* Add steps to force rebuild and persist peers_ids volume
* Update mobymask-v2 stack with pubsub option
* Update watcher-ts version
---------
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
* Update mobymask v1 watcher with new contract
* Update mobymask v1 stack with deployment commands
* Use release tag for mobymask-watcher-ts repo
* Upgrade MobyMask version in v2 stack to use latest contract
* Update sushiswap-subgraph stack to point to filecoin endpoint
* Deploy blocks subgraph in sushiswap-subgraph stack
* Update subgraph config
* Remove duplicate nativePricePool
* Enable debug logs in graph-node and update instructions
* Two additional miner nodes in fixturenet-lotus
* Revert experimental change for additional miner nodes
* Set ETHEREUM_REORG_THRESHOLD env for graph-node to catch up to head
* Take ETH RPC endpoint for graph-node from env
* Set default values for RPC endpoint in graph-node
* Rename fixturenet-graph-node pod to graph-node
* Clean up sushiswap-subgraph stack
* Use deployment command in sushiswap-subgraph stack
* Add a separate fixturenet-sushiswap-subgraph stack
* Fix pods in fixturenet-sushiswap-subgraph
* Fix fixturenet subgraph deployment commands and instructions
---------
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
* Use ip utility to get the required miner node multiaddr
* Persist lotus node data to support restarts
* Add clean up steps to instructions
* Fix lotus-seed sector-dir arg
* Add a sushiswap stack with contract deployments
* Add watcher services
* Add a service for the info app
* Add instructions to run smoke tests
* Use sushi-info-watcher in demo mode
* Turn off block prefetching
* Fix sushiswap demo instructions
* Use release version and add healthcheck in Lotus stack
* Wait for Lotus node to start before sushiswap watchers
---------
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
* move health check inside startup script
* remove pre-built genesis
* Use hardcoded paths for Lotus node data directories
* Persist proof parameters
* Write out miner node's multiaddr with docker network IP
* Enable Lotus ETH RPC API and bind to all available interfaces
* Fund a known account
---------
Co-authored-by: iskay <ikay@lakeheadu.ca>
Co-authored-by: Ian Kay <ian@knowable.vc>
* forward more vars for debugging
forward CERC_GETH_VERBOSITY
forward CERC_STATEDIFF_DB_LOG_STATEMENTS
forward CERC_REMOTE_DEBUG
* fix env var
* remote flag can be set from env
* random nits
* geth - visibility of migration status
* forward CERC_RUN_STATEDIFF to geth container
* fix ipld-eth-server vars
* fix fixturenet-eth-loaded stack
* fixturenet geth genesis - include mergeNetsplitBlock
* forward CERC_STATEDIFF_DB_GOOSE_MIN_VER to env file
* add TAG_SUFFIX arg to lighthouse build
intended to avoid sporadic failures when running lcli on github CI runners, likely related to non-portable builds
* Add a stack for Gelato watcher
* Add option to create and use a state snapshot
* Add commands to create and import a state checkpoint
* Rename ipld-eth-server endpoint env variables
* Fix default env variable
## Install
**To get started quickly** on a fresh Ubuntu instance (e.g, Digital Ocean); [try this script](./scripts/quick-install-linux.sh). **WARNING:** always review scripts prior to running them so that you know what is happening on your machine.
For any other installation, follow along below and **adapt these instructions based on the specifics of your system.**
Ensure that the following are already installed:
- [Python3](https://wiki.python.org/moin/BeginnersGuide/Download): `python3 --version` >= `3.8.10` (the Python3 shipped in Ubuntu 20+ is good to go)
Note: if installing docker-compose via package manager on Linux (as opposed to Docker Desktop), you must [install the plugin](https://docs.docker.com/compose/install/linux/#install-the-plugin-manually), e.g.:
Next decide on a directory where you would like to put the stack-orchestrator program. Typically this would be
a "user" binary directory such as `~/bin` or perhaps `/usr/local/laconic` or possibly just the current working directory.
Now, having selected that directory, download the latest release from [this page](https://git.vdb.to/cerc-io/stack-orchestrator/tags) into it (we're using `~/bin` below for concreteness but edit to suit if you selected a different directory). Also be sure that the destination directory exists and is writable:
If Stack Orchestrator was installed using the process described above, it is able to subsequently self-update to the current latest version by running:
```bash
laconic-so update
```
## Usage
The various [stacks](/stack_orchestrator/data/stacks) each contain instructions for running different stacks based on your use case. For example:
Instructions to setup and deploy an end-to-end L1+L2 stack with [fixturenet-eth](../fixturenet-eth/) (L1) and [Optimism](https://stack.optimism.io) (L2)
We support running just the L2 part of the stack, given an external L1 endpoint. Follow the [L2 only doc](./l2-only.md) for the same.
```bash
# If this throws an error as a result of being already checked out to a branch/tag in a repo, remove the repositories mentioned below and re-run the command
```
Check out the required versions and branches in the repos:
Note: this will take >10 mins depending on the specs of your machine, and **requires** 16GB of memory or greater.
This should create the required docker images in the local image registry:
* `cerc/go-ethereum`
* `cerc/lighthouse`
* `cerc/fixturenet-eth-geth`
* `cerc/fixturenet-eth-lighthouse`
* `cerc/foundry`
* `cerc/optimism-contracts`
* `cerc/optimism-l2geth`
* `cerc/optimism-op-node`
* `cerc/optimism-op-batcher`
* `cerc/optimism-op-proposer`
## Deploy
Deploy the stack:
```bash
laconic-so --stack fixturenet-optimism deploy up
```
If you get the error `service "fixturenet-optimism-contracts" didn't complete successfully: exit 1` with ~25 lines of Traceback, wait 15-20 mins then re-run the command.
The `fixturenet-optimism-contracts` service takes a while to complete running as it:
1. waits for the 'Merge' to happen on L1
2. waits for a finalized block to exist on L1 (so that it can be taken as a starting block for roll ups)
3. deploys the L1 contracts
To list and monitor the running containers:
```bash
laconic-so --stack fixturenet-optimism deploy ps
# With status
docker ps
# Check logs for a container
docker logs -f <CONTAINER_ID>
```
## Clean up
Stop all services running in the background:
```bash
laconic-so --stack fixturenet-optimism deploy down 30
```
Clear volumes created by this stack:
```bash
# List all relevant volumes
docker volume ls -q --filter "name=.*l1_deployment|.*l2_accounts|.*l2_config|.*l2_geth_data"
# Remove all the listed volumes
docker volume rm $(docker volume ls -q --filter "name=.*l1_deployment|.*l2_accounts|.*l2_config|.*l2_geth_data")
```
## Troubleshooting
* If the `op-geth` service aborts or is restarted, the following error might occur in the `op-node` service:
```bash
WARN [02-16|21:22:02.868] Derivation process temporary error attempts=14 err="stage 0 failed resetting: temp: failed to find the L2 Heads to start from: failed to fetch L2 block by hash 0x0000000000000000000000000000000000000000000000000000000000000000: failed to determine block-hash of hash 0x0000000000000000000000000000000000000000000000000000000000000000, could not get payload: not found"
```
* This means that the data directory that `op-geth` is using is corrupted and needs to be reinitialized; the containers `op-geth`, `op-node` and `op-batcher` need to be started afresh:
WARNING: This will reset the L2 chain; consequently, all the data on it will be lost
```bash
# If this throws an error as a result of being already checked out to a branch/tag in a repo, remove the repositories mentioned below and re-run the command
```
Check out the required versions and branches in the repos:
The MobyMask watcher is a Laconic Network component that provides efficient access to MobyMask contract data from Ethereum, along with evidence allowing users to verify the correctness of that data. The watcher source code is available in [this repository](https://github.com/cerc-io/watcher-ts/tree/main/packages/mobymask-watcher) and a developer-oriented Docker Compose setup for the watcher can be found [here](https://github.com/cerc-io/mobymask-watcher). The watcher can be deployed automatically using the Laconic Stack Orchestrator tool as detailed below:
## Deploy the MobyMask Watcher
The instructions below show how to deploy a MobyMask watcher using laconic-stack-orchestrator (the installation of which is covered [here](https://github.com/cerc-io/stack-orchestrator#user-mode)).
This deployment expects that ipld-eth-server's endpoints are available on the local machine at http://ipld-eth-server.example.com:8083/graphql and http://ipld-eth-server.example.com:8082. More advanced configurations are supported by modifying the watcher's [config file](../../config/watcher-mobymask/mobymask-watcher.toml).
```bash
$ laconic-so deploy-system --include watcher-mobymask up
```
Correct operation should be verified by following the instructions [here](https://github.com/cerc-io/mobymask-watcher/tree/main/mainnet-watcher-only#run), checking GraphQL queries return valid results in the watcher's [playground](http://127.0.0.1:3001/graphql).
## Clean up
Stop all the services running in the background:
```bash
$ laconic-so deploy-system --include watcher-mobymask down
```
Here you will find information about the design of stack orchestrator, contributing to it, and deploying services/applications that combine two or more "stacks".
Most "stacks" contain their own README which has plenty of information on deploying, but stacks can be combined in a variety of ways which are document here, for example:
- [Gitea with Laconicd Fixturenet](./gitea-with-laconicd-fixturenet.md)
- [Laconicd Registry with Console](./laconicd-with-console.md)
See [this PR](https://git.vdb.to/cerc-io/stack-orchestrator/pull/434) for an example of how to currently add a minimal stack to stack orchestrator. The [reth stack](https://git.vdb.to/cerc-io/stack-orchestrator/pull/435) is another good example.
For external developers, we recommend forking this repo and adding your stack directly to your fork. This initially requires running in "developer mode" as described [here](/docs/CONTRIBUTING.md). Check out the [Namada stack](https://github.com/vknowable/stack-orchestrator/blob/main/app/data/stacks/public-namada/digitalocean_quickstart.md) from Knowable to see how that is done.
Core to the feature completeness of stack orchestrator is to [decouple the tool functionality from payload](https://git.vdb.to/cerc-io/stack-orchestrator/issues/315), which will no longer require forking to add a stack.
## Example
- in `stack_orchestrator/data/stacks/my-new-stack/stack.yml` add:
```yaml
version: "0.1"
name: my-new-stack
repos:
- github.com/my-org/my-new-stack
containers:
- cerc/my-new-stack
pods:
- my-new-stack
```
- in `stack_orchestrator/data/container-build/cerc-my-new-stack/build.sh` add:
When Stack Orchestrator deploys a stack containing a suite of one or more containers it expects images for those containers to be on the local machine with a tag of the form `<image-name>:local`. Images for these containers can be built from source (and optionally base container images from public registries) with the `build-containers` subcommand.
However, the task of building a large number of containers from source may consume considerable time and machine resources. This is where the `fetch-containers` subcommand steps in. It is designed to work exactly like `build-containers`, but instead the pre-built images are fetched from an image registry and then re-tagged for deployment. It can be used in place of `build-containers` for any stack, provided the necessary containers, built for the local machine architecture (e.g. arm64 or x86-64), have already been published in an image registry.
## Usage
To use `fetch-containers`, provide an image registry path, a username, and a token/password with read access to the registry, and optionally specify `--force-local-overwrite`. If this argument is not specified and there is already a locally built or previously fetched image for a stack container on the machine, it will not be overwritten and a warning will be issued.
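A hypothetical invocation sketch; apart from `--force-local-overwrite`, the flag names below are illustrative assumptions, not the actual CLI:
```bash
# Flag names other than --force-local-overwrite are illustrative only
laconic-so --stack my-stack fetch-containers \
  --image-registry registry.example.com/my-org \
  --registry-username ci-reader \
  --registry-token "$TOKEN" \
  --force-local-overwrite
```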
Deploy a local Gitea server, publish NPM packages to it, then use those packages to build a Laconicd fixturenet. Demonstrates several components of the Laconic stack.
These commands can take a while. Eventually, some instructions and a token will be output. Set `CERC_NPM_AUTH_TOKEN`:
```bash
export CERC_NPM_AUTH_TOKEN=<your-token>
```
### Configure the hostname gitea.local
How to do this depends on your operating system but usually involves editing a `hosts` file. For example, on Linux add this line to the file `/etc/hosts` (needs sudo):
```bash
127.0.0.1 gitea.local
```
Test with:
```bash
ping gitea.local
```
```bash
PING gitea.local (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.147 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.033 ms
```
Although not necessary in order to build and publish packages, you can now access the Gitea web interface at: [http://gitea.local:3000](http://gitea.local:3000) using these credentials: `gitea_admin/admin1234` (Note: please properly secure Gitea if public internet access is allowed).
Navigate to the Gitea console and switch to the `cerc-io` user then find the `Packages` tab to confirm that these two npm packages have been published:
The placement of pods created as part of a stack deployment can be controlled to either avoid certain nodes, or require certain nodes.
### Pod/Node Affinity
Node affinity rules applied to pods target node labels. The effect is that a pod can only be placed on a node having the specified label value. Note that other pods that do not have any node affinity rules can also be placed on those same nodes. Thus node affinity for a pod controls where that pod can be placed, but does not control where other pods are placed.
Node affinity for stack pods is specified in the deployment's `spec.yml` file as follows:
```
node-affinities:
- label: nodetype
value: typeb
```
This example denotes that the stack's pods should only be placed on nodes that have the label `nodetype` with value `typeb`.
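For reference, this corresponds roughly to the following standard k8s pod-spec fragment (illustrative; the deployment machinery generates the actual manifest):
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: nodetype
              operator: In
              values:
                - typeb
```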
### Node Taint Toleration
K8s nodes can be given one or more "taints". These are special fields (distinct from labels) with a name (key) and optional value.
When placing pods, the k8s scheduler will only assign a pod to a tainted node if the pod possesses a corresponding "toleration".
This is metadata associated with the pod that specifies that the pod "tolerates" a given taint.
Therefore taint toleration provides a mechanism by which only certain pods can be placed on specific nodes, and provides a complementary mechanism to node affinity.
Taint toleration for stack pods is specified in the deployment's `spec.yml` file as follows:
```
node-tolerations:
- key: nodetype
value: typeb
```
This example denotes that the stack's pods will tolerate the taint `nodetype=typeb`.
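For reference, this corresponds roughly to the following standard k8s pod-spec fragment (illustrative; omitting `effect` tolerates the taint regardless of its effect):
```yaml
tolerations:
  - key: nodetype
    operator: Equal
    value: typeb
```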
The following tutorial explains the steps to run a laconicd fixturenet with CLI and web console that displays records in the registry. It is designed as an introduction to Stack Orchestrator and to showcase one component of the Laconic Stack. Prior to Stack Orchestrator, the following 4 repositories had to be cloned and setup manually:
- https://github.com/cerc-io/laconicd
- https://github.com/cerc-io/laconic-sdk
- https://github.com/cerc-io/laconic-registry-cli
- https://github.com/cerc-io/laconic-console
Now, with Stack Orchestrator, it is a few quick commands. Additionally, the `docker` and `docker compose` integration on the back-end allows the stack to easily persist, facilitating workflows.
## Setup laconic-so
To avoid hiccups on Mac M1/M2 and any local machine nuances that may affect the user experience, this tutorial is focused on using a fresh Digital Ocean (DO) droplet with similar specs:
16 GB Memory / 8 Intel vCPUs / 160 GB Disk.
1. Login to the droplet as root (either by SSH key or password set in the DO console)
```
ssh root@IP
```
2. Get the install script, give it executable permissions, and run it:
It's possible to run into an `ESOCKETTIMEDOUT` error, e.g., `error An unexpected error occurred: "https://registry.yarnpkg.com/@material-ui/icons/-/icons-4.11.3.tgz: ESOCKETTIMEDOUT"`. This may happen even if you have a great internet connection. In that case, re-run the `build-containers` command.
4. Set this environment variable to your droplet's IP address:
```
export LACONIC_HOSTED_ENDPOINT=http://<your-IP>
```
5. Deploy the stack:
```
laconic-so --stack fixturenet-laconic-loaded deploy up
```
3. Go back to the Digital Ocean web console and set an Inbound Rule for Custom TCP of the above port:
- `32778` in this example, but yours will be different.
- do the same for port `9473`
Additional ports will need to be opened depending on your application. Ensure you add your droplet to this new Firewall and wait a minute or so for the update to propagate.
4. Navigate to http://IP:port and ensure laconic-console is functioning as expected:
- ensure you are connected to `laconicd`; no error message should pop up;
- the wifi symbol in the bottom right should have a green check mark beside it
- navigate to the status tab; it should display similar/identical information
- navigate to the config tab; you'll see something like (with your IP):
```
wns
webui http://68.183.195.210:9473/console
server http://68.183.195.210:9473/api
```
## Publish and query a sample record to the registry
1. The following command will create a bond and publish a record:
The following tutorial explains the steps to run a laconicd fixturenet with CLI and web console that displays records in the registry. It is designed as an introduction to Stack Orchestrator and to showcase one component of the Laconic Stack. Prior to Stack Orchestrator, the following repositories had to be cloned and setup manually:
- https://git.vdb.to/cerc-io/laconicd
- https://git.vdb.to/cerc-io/laconic-registry-cli
- https://git.vdb.to/cerc-io/laconic-console
Now, with Stack Orchestrator, it is a few quick commands. Additionally, the `docker` and `docker compose` integration on the back-end allows the stack to easily persist, facilitating workflows.
## Setup laconic-so
To avoid hiccups on Mac M1/M2 and any local machine nuances that may affect the user experience, this tutorial is focused on using a fresh Digital Ocean (DO) droplet with similar specs:
16 GB Memory / 8 Intel vCPUs / 160 GB Disk.
1. Login to the droplet as root (either by SSH key or password set in the DO console)
```
ssh root@IP
```
1. Get the install script, give it executable permissions, and run it:
It's possible to run into an `ESOCKETTIMEDOUT` error, e.g., `error An unexpected error occurred: "https://registry.yarnpkg.com/@material-ui/icons/-/icons-4.11.3.tgz: ESOCKETTIMEDOUT"`. This may happen even if you have a great internet connection. In that case, re-run the `build-containers` command.
1. Set this environment variable to your droplet's IP address or fully qualified DNS host name if it has one:
3. Go back to the Digital Ocean web console and set an Inbound Rule for Custom TCP of the above port:
- `32778` in this example, but yours will be different.
- do the same for port `9473`
Additional ports will need to be opened depending on your application. Ensure you add your droplet to this new Firewall and wait a minute or so for the update to propagate.
4. Navigate to http://IP:port and ensure laconic-console is functioning as expected:
- ensure you are connected to `laconicd`; no error message should pop up;
- the wifi symbol in the bottom right should have a green check mark beside it
- navigate to the status tab; it should display similar/identical information
- navigate to the config tab; you'll see something like (with your IP):
```
wns
webui http://68.183.195.210:9473/console
server http://68.183.195.210:9473/api
```
## Publish and query a sample record to the registry
1. The following command will create a bond and publish a record:
Note: this page is out of date (but still useful) - it will no longer be useful once stacks are [decoupled from the tool functionality](https://git.vdb.to/cerc-io/stack-orchestrator/issues/315).
It is possible to build and run Next.js webapps using the `build-webapp` and `run-webapp` subcommands.
To make it easier to build once and deploy into different environments and with different configuration,
compilation and static page generation are separated in the `build-webapp` and `run-webapp` steps.
This offers much more flexibility than standard Next.js build methods, since any environment variables accessed
via `process.env`, whether for pages or for API, will have values drawn from their runtime deployment environment,
not their build environment.
## Building
Building usually requires no additional configuration. By default, the Next.js version specified in `package.json`
is used, and either `yarn` or `npm` will be used automatically depending on which lock files are present. These
can be overridden with the build arguments `CERC_NEXT_VERSION` and `CERC_BUILD_TOOL` respectively. For example: `--extra-build-args "--build-arg CERC_NEXT_VERSION=13.4.12"`
With `run-webapp` a new container will be launched on the local machine, with runtime configuration provided by `--env-file` (if specified) and published on an available port. Multiple instances can be launched with different configuration.