In docker-compose, services can reference each other by name (e.g., 'db:5432').
In Kubernetes, when multiple containers are in the same pod (sidecars), they
share the same network namespace and must use 'localhost' instead.
This fix adds translate_sidecar_service_names() which replaces docker-compose
service name references with 'localhost' in environment variable values for
containers that share the same pod.
Fixes issue where multi-container pods fail because one container tries to
connect to a sibling using the compose service name instead of localhost.
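A minimal sketch of the translation, assuming the helper receives a container's environment map and the names of its sibling services (the function name comes from this commit; the signature and regex are illustrative):
```python
import re

def translate_sidecar_service_names(env: dict, sibling_services: set) -> dict:
    """Replace sibling compose service names (e.g. 'db' in 'db:5432') with
    'localhost', since sidecar containers share the pod's network namespace."""
    translated = {}
    for key, value in env.items():
        for service in sibling_services:
            # Only rewrite the name where it appears as a host, e.g. 'db:5432' or 'http://db/'
            value = re.sub(rf"\b{re.escape(service)}\b(?=[:/]|$)", "localhost", value)
        translated[key] = value
    return translated
```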
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Each deployment now gets its own Kubernetes namespace (laconic-{deployment_id}).
This provides:
- Resource isolation between deployments on the same cluster
- Simplified cleanup: deleting the namespace cascades to all namespaced resources
- No orphaned resources possible when deployment IDs change
Changes:
- Set k8s_namespace based on deployment name in __init__
- Add _ensure_namespace() to create namespace before deploying resources
- Add _delete_namespace() for cleanup
- Simplify down() to just delete PVs (cluster-scoped) and the namespace
- Fix hardcoded "default" namespace in logs function
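A hedged sketch of _ensure_namespace() using the kubernetes Python client (the method name is from this commit; the body is an assumption):
```python
from kubernetes import client
from kubernetes.client.rest import ApiException

def _ensure_namespace(core_api: client.CoreV1Api, namespace: str):
    """Create the per-deployment namespace if it does not already exist."""
    try:
        core_api.read_namespace(name=namespace)
    except ApiException as e:
        if e.status != 404:
            raise
        body = client.V1Namespace(metadata=client.V1ObjectMeta(name=namespace))
        core_api.create_namespace(body=body)
```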
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Previously, down() generated resource names from the deployment config
and deleted those specific names. This failed to clean up orphaned
resources when deployment IDs changed (e.g., after force_redeploy).
Changes:
- Add 'app' label to all resources: Ingress, Service, NodePort, ConfigMap, PV
- Refactor down() to query K8s by label selector instead of generating names
- This ensures all resources for a deployment are cleaned up, even if
the deployment config has changed or been deleted
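In sketch form, the label-driven cleanup looks roughly like this (core resources only; the Ingress goes through the networking API, and the exact label value is an assumption):
```python
from kubernetes import client

def down_by_label(core_api: client.CoreV1Api, namespace: str, deployment_name: str):
    """Delete everything carrying the deployment's 'app' label, regardless of
    what the current deployment config says the resource names should be."""
    selector = f"app={deployment_name}"
    for svc in core_api.list_namespaced_service(namespace, label_selector=selector).items:
        core_api.delete_namespaced_service(svc.metadata.name, namespace)
    for cm in core_api.list_namespaced_config_map(namespace, label_selector=selector).items:
        core_api.delete_namespaced_config_map(cm.metadata.name, namespace)
    for pv in core_api.list_persistent_volume(label_selector=selector).items:
        core_api.delete_persistent_volume(pv.metadata.name)
```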
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Mount /var/lib/etcd and /etc/kubernetes/pki to host filesystem
so cluster state is preserved for offline recovery. Each deployment
gets its own backup directory keyed by deployment ID.
Directory structure:
data/cluster-backups/{deployment_id}/etcd/
data/cluster-backups/{deployment_id}/pki/
This enables extracting secrets from etcd backups using etcdctl
with the preserved PKI certificates.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Adds support for configuring ACME email for Let's Encrypt certificates
in kind deployments. The email can be specified in the spec under
network.acme-email and will be used to configure the Caddy ingress
controller ConfigMap.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
For k8s-kind, relative paths (e.g., ./data/rpc-config) are resolved to
$DEPLOYMENT_DIR/path by _make_absolute_host_path() during kind config
generation. This provides persistence on the Docker host that survives cluster
restarts.
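The resolution itself is simple; a sketch under the assumption that the helper takes the deployment directory (the function name is from this commit):
```python
from pathlib import Path

def _make_absolute_host_path(path: str, deployment_dir: str) -> str:
    p = Path(path)
    if not p.is_absolute():
        # ./data/rpc-config -> $DEPLOYMENT_DIR/data/rpc-config
        p = Path(deployment_dir) / p
    return str(p.resolve())
```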
Previously, validation threw an exception before paths could be resolved,
making it impossible to use relative paths for persistent storage.
Changes:
- deployment_create.py: Skip relative path check for k8s-kind
- cluster_info.py: Allow relative paths to reach PV generation
- docs/deployment_patterns.md: Document volume persistence patterns
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Document that:
- Volumes persist across cluster deletion by design
- Only use --delete-volumes when explicitly requested
- Multiple deployments share one kind cluster
- Use --skip-cluster-management to stop single deployment
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Previously, install_ingress_for_kind() applied the YAML (which starts
the Caddy pod with email: ""), then patched the ConfigMap afterward.
The pod had already read the empty email and Caddy doesn't hot-reload.
Now template the email into the YAML before applying, so the pod starts
with the correct email from the beginning.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The existing 'image-registry' key is used for pushing images to a remote
registry (URL string). Rename the new auth config to 'registry-credentials'
to avoid collision.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add ability to configure private container registry credentials in spec.yml
for deployments using images from registries like GHCR.
- Add get_image_registry_config() to spec.py for parsing image-registry config
- Add create_registry_secret() to create K8s docker-registry secrets
- Update cluster_info.py to use dynamic {deployment}-registry secret names
- Update deploy_k8s.py to create registry secret before deployment
- Document feature in deployment_patterns.md
The token-env pattern keeps credentials out of git - the spec references an
environment variable name, and the actual token is passed at runtime.
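A sketch of create_registry_secret() (the function name is from this commit; the body assumes the standard dockerconfigjson secret type):
```python
import base64
import json
from kubernetes import client

def create_registry_secret(core_api: client.CoreV1Api, namespace: str, name: str,
                           registry: str, username: str, token: str):
    auth = base64.b64encode(f"{username}:{token}".encode()).decode()
    config = {"auths": {registry: {"auth": auth}}}
    secret = client.V1Secret(
        metadata=client.V1ObjectMeta(name=name),
        type="kubernetes.io/dockerconfigjson",
        string_data={".dockerconfigjson": json.dumps(config)},
    )
    core_api.create_namespaced_secret(namespace=namespace, body=secret)
```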
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Check stack.yml containers: field to determine which images are local builds
- Only load local images via kind load; let k8s pull registry images directly
- Add is_ingress_running() to skip ingress installation if already running
- Fixes deployment failures when public registry images aren't in local Docker
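The local-build check might look like this (a sketch; the real matching against the stack's containers: list may differ):
```python
def is_local_build(image: str, stack_containers: list) -> bool:
    # Images named in stack.yml's containers: are built locally and must be
    # side-loaded with 'kind load'; everything else is pulled by k8s directly.
    return image.split(":")[0] in stack_containers
```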
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When stack: field in spec.yml contains a path (e.g., stack_orchestrator/data/stacks/name),
extract just the final name component for K8s secret naming. K8s resource names must
be valid RFC 1123 subdomains and cannot contain slashes.
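The extraction is a one-liner (sketch):
```python
import os

def secret_name_from_stack(stack: str) -> str:
    # 'stack_orchestrator/data/stacks/name' -> 'name'
    return os.path.basename(stack.rstrip("/"))
```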
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add GENERATE_TOKEN_PATTERN to detect $generate:hex:N$ and $generate:base64:N$ tokens
- Add _generate_and_store_secrets() to create K8s Secrets from spec.yml config
- Modify _write_config_file() to separate secrets from regular config
- Add env_from with secretRef to container spec in cluster_info.py
- Secrets are injected directly into containers via K8s native mechanism
This enables declarative secret generation in spec.yml:
config:
  SESSION_SECRET: $generate:hex:32$
  DB_PASSWORD: $generate:hex:16$
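A plausible form of the pattern and generator (the constant name is from this commit; the regex and helper are assumptions):
```python
import base64
import re
import secrets
from typing import Optional

GENERATE_TOKEN_PATTERN = re.compile(r"^\$generate:(hex|base64):(\d+)\$$")

def generate_secret_value(value: str) -> Optional[str]:
    m = GENERATE_TOKEN_PATTERN.match(value)
    if not m:
        return None  # not a generate token; treat as regular config
    kind, n = m.group(1), int(m.group(2))
    raw = secrets.token_bytes(n)
    return raw.hex() if kind == "hex" else base64.b64encode(raw).decode()
```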
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When deploying a second stack to k8s-kind, automatically reuse an existing
kind cluster instead of trying to create a new one (which would fail due
to port 80/443 conflicts).
Changes:
- helpers.py: create_cluster() now checks for existing cluster first
- deploy_k8s.py: up() captures returned cluster name and updates self
This enables deploying multiple stacks (e.g., gorbagana-rpc + trashscan-explorer)
to the same kind cluster.
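Sketch of the reuse check in create_cluster(), shelling out to the kind CLI (the exact helper shape is an assumption):
```python
import subprocess

def create_cluster(cluster_name: str, config_file: str) -> str:
    existing = subprocess.run(["kind", "get", "clusters"], capture_output=True,
                              text=True, check=True).stdout.split()
    if existing:
        # Reuse the running cluster; a second one would collide on ports 80/443
        return existing[0]
    subprocess.run(["kind", "create", "cluster", "--name", cluster_name,
                    "--config", config_file], check=True)
    return cluster_name
```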
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add --spec-file option to specify spec location in repo
- Auto-detect deployment/spec.yml in repo as GitOps location
- Fall back to deployment dir if no repo spec found
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The etcd directory is root-owned, so shell test -f fails.
Use docker with volume mount to check file existence.
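For example (the busybox image is an assumption; any small image with test works):
```python
import subprocess

def root_file_exists(host_dir: str, rel_path: str) -> bool:
    # 'test -f' runs as root inside the container, so root-owned files are visible
    result = subprocess.run(["docker", "run", "--rm",
                             "-v", f"{host_dir}:/check:ro",
                             "busybox", "test", "-f", f"/check/{rel_path}"])
    return result.returncode == 0
```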
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Create member.backup-YYYYMMDD-HHMMSS before cleaning.
Each cluster recreation creates a new backup, preserving history.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Move original to .bak, move new into place, then delete bak.
If anything fails before the swap, original remains intact.
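The swap in sketch form (paths are placeholders):
```python
import os
import shutil

def safe_replace(original: str, new: str):
    bak = original + ".bak"
    os.rename(original, bak)       # 1. move the original aside
    try:
        os.rename(new, original)   # 2. move the new version into place
    except Exception:
        os.rename(bak, original)   # restore the original if the swap fails
        raise
    if os.path.isdir(bak):         # 3. delete the backup
        shutil.rmtree(bak)
    else:
        os.remove(bak)
```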
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Instead of trying to delete specific stale resources (blacklist),
keep only the valuable data (caddy TLS certs) and delete everything
else. This is more robust as we don't need to maintain a list of
all possible stale resources.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Use docker containers with volume mounts to handle all file
operations on root-owned etcd directories, avoiding the need
for sudo on the host.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When etcd is persisted (for certificate backup) and a cluster is
recreated, kind tries to install CNI (kindnet) fresh but the
persisted etcd already has those resources, causing 'AlreadyExists'
errors and cluster creation failure.
This fix:
- Detects etcd mount path from kind config
- Before cluster creation, clears stale CNI resources (kindnet, coredns)
- Preserves certificate and other important data
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add acme_email_key constant for spec.yml parsing
- Add get_acme_email() method to Spec class
- Modify install_ingress_for_kind() to patch ConfigMap with email
- Pass acme-email from spec to ingress installation
- Add 'delete' verb to leases RBAC for certificate lock cleanup
The acme-email field in spec.yml was previously ignored, causing
Let's Encrypt to fail with "unable to parse email address".
The missing delete permission on leases caused lock cleanup failures.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add `laconic-so deployment restart` command that:
- Pulls latest code from stack git repository
- Regenerates spec.yml from stack's commands.py
- Verifies DNS if hostname changed (with --force to skip)
- Syncs deployment directory preserving cluster ID and data
- Stops and restarts deployment with --skip-cluster-management
Also stores stack-source path in deployment.yml during create
for automatic stack location on restart.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Document that external stack pattern should be used when creating new
stacks for any reason, with directory structure and usage examples.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Previously, volumes defined in a stack's commands.py init() function
were being overwritten by volumes discovered from compose files.
This prevented stacks from adding infrastructure volumes like caddy-data
that aren't defined in the compose files.
Now volumes are merged, with init() volumes taking precedence over
compose-discovered defaults.
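The merge itself is simple (a sketch; the precedence is the point):
```python
def merge_volumes(compose_defaults: dict, init_volumes: dict) -> dict:
    merged = dict(compose_defaults)
    merged.update(init_volumes)  # init() entries win on conflict
    return merged
```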
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add support for Docker Compose host path mounts (like ../config/file:/path)
in k8s deployments. Previously these were silently skipped, causing k8s
deployments to fail when compose files used host path mounts.
Changes:
- Add helper functions for host path detection and name sanitization
- Generate kind extraMounts for host path mounts
- Create hostPath volumes in pod specs for host path mounts
- Create volumeMounts with sanitized names for host path mounts
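The sanitization might look like this (a sketch; the exact rules are an assumption beyond "valid k8s name"):
```python
import re

def sanitize_mount_name(host_path: str) -> str:
    # '../config/file' -> 'config-file' (RFC 1123: lowercase alphanumerics and '-')
    name = re.sub(r"[^a-z0-9-]+", "-", host_path.lower()).strip("-")
    return name[:63]  # k8s resource name length limit
```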
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
To allow updating an existing deployment
- Check the deployment dir exists when updating
- Write to temp dir, then safely copy tree
- Don't overwrite data dir or config.env
Resolve conflicts:
- deployment_context.py: Keep single modify_yaml method from main
- fixturenet-optimism/commands.py: Use modify_yaml helper from main
- deployment_create.py: Keep helm-chart, network-dir, initial-peers options
- deploy_webapp.py: Update create_operation call signature
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
get_service() returns None when there are no http-proxy routes,
so we must check before calling create_namespaced_service().
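In sketch form (surrounding names like core_api and k8s_namespace are assumed to be in scope):
```python
service = get_service()  # returns None when there are no http-proxy routes
if service is not None:
    core_api.create_namespaced_service(namespace=k8s_namespace, body=service)
```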
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Change 'docker remove -f' to 'docker rm -f' - the 'remove' subcommand
doesn't exist in docker CLI.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Previously these workflows ran on PRs to any branch. Now:
- PRs to main: run all tests (full CI gate)
- Pushes to other branches: use existing path filtering
This reduces CI load on feature branch PRs while maintaining
full test coverage for PRs targeting main.
Affected workflows:
- test-k8s-deploy.yml
- test-k8s-deployment-control.yml
- test-webapp.yml
- test-deploy.yml
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Previously get_service() only exposed the first port from pod definition.
Now it collects all unique ports from http-proxy routes and exposes them
all in the Service spec.
This is needed for WebSocket support where RPC runs on one port (8899)
and WebSocket pubsub on another (8900) - both need to be accessible
through the ingress.
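A sketch of the port collection, assuming each route carries a proxy-to value of the form name:port:
```python
from kubernetes import client

def collect_service_ports(http_proxy_routes: list) -> list:
    ports = []
    for route in http_proxy_routes:
        port = int(route["proxy-to"].split(":")[-1])  # 'server:8899' -> 8899
        if port not in ports:
            ports.append(port)
    return [client.V1ServicePort(name=f"port-{p}", port=p) for p in ports]
```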
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Update stack command for continuous deployment workflow
- Separate deployer from CLI
- Separate stacks from orchestrator repo
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The ingress annotation was still set to nginx class even though we're now
using Caddy as the ingress controller. Caddy won't pick up ingresses
annotated with the nginx class.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Caddy provides automatic HTTPS with Let's Encrypt, but needs port 443
mapped from the kind container to the host. Previously only port 80 was
mapped.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The helm-charts-with-caddy branch had the Caddy manifest file but was still
using nginx in the code. This change:
- Switch install_ingress_for_kind() to use ingress-caddy-kind-deploy.yaml
- Update wait_for_ingress_in_kind() to watch caddy-system namespace
- Use correct label selector for Caddy ingress controller pods
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The base_runtime_spec for containerd requires a complete OCI spec,
not just the rlimits section. The minimal spec was causing runc to
fail with "open /proc/self/fd: no such file or directory" because
essential mounts and namespaces were missing.
This commit uses kind's default cri-base.json as the base and adds
the rlimits configuration on top. The spec includes all necessary
mounts, namespaces, capabilities, and kind-specific hooks.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The previous approach of mounting cri-base.json into kind nodes failed
because we didn't tell containerd to use it via containerdConfigPatches.
RuntimeClass allows different stacks to have different rlimit profiles,
which is essential since kind only supports one cluster per host and
multiple stacks share the same cluster.
Changes:
- Add containerdConfigPatches to kind-config.yml to define runtime handlers
- Create RuntimeClass resources after cluster creation
- Add runtimeClassName to pod specs based on stack's security settings
- Rename cri-base.json to high-memlock-spec.json for clarity
- Add get_runtime_class() method to Spec that auto-derives from
unlimited-memlock setting
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add pyrightconfig.json for pyright 1.1.408 TOML parsing workaround
- Add NoReturn annotations to fatal() functions for proper type narrowing
- Add None checks and assertions after require=True get_record() calls
- Fix AttrDict class with __getattr__ for dynamic attribute access
- Add type annotations and casts for Kubernetes client objects
- Store compose config as DockerDeployer instance attributes
- Filter None values from dotenv and environment mappings
- Use hasattr/getattr patterns for optional container attributes
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Format code with black (line length 88)
- Fix E501 line length errors by breaking long strings and comments
- Fix F841 unused variable (removed unused 'quiet' variable)
- Configure pyright to disable common type issues in existing codebase
(reportGeneralTypeIssues, reportOptionalMemberAccess, etc.)
- All pre-commit hooks now pass
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add spec.yml option `security.unlimited-memlock` that configures
RLIMIT_MEMLOCK to unlimited for Kind cluster pods. This is needed
for workloads like Solana validators that require large amounts of
locked memory for memory-mapped files during snapshot decompression.
When enabled, generates a cri-base.json file with rlimits and mounts
it into the Kind node to override the default containerd runtime spec.
Also includes flake8 line-length fixes for affected files.
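The rlimits fragment layered onto cri-base.json plausibly looks like this (values assumed; 18446744073709551615 is RLIM_INFINITY as an unsigned 64-bit integer):
```python
# Assumed shape of the rlimits section merged into the OCI runtime spec
rlimits_fragment = {
    "process": {
        "rlimits": [
            {
                "type": "RLIMIT_MEMLOCK",
                "hard": 18446744073709551615,
                "soft": 18446744073709551615,
            }
        ]
    }
}
```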
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add black formatter (rev 23.12.1)
- Add pyright type checker (rev v1.1.345)
- Add yamllint with relaxed mode (rev v1.35.1)
- Update flake8 args: max-line-length=88, extend-ignore=E203,W503,E402
- Remove ansible-lint from dev dependencies (no ansible files in repo)
- Sync pyproject.toml flake8 config with pre-commit
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add Caddy ingress controller manifest for kind deployments
- Add k8s cluster list command for kind cluster management
- Add k8s_command import and registration in deploy.py
- Fix network section merge to preserve http-proxy settings
- Increase default container resources (4 CPUs, 8GB memory)
- Add UDP protocol support for K8s port definitions
- Add command/entrypoint support for K8s deployments
- Implement docker-compose variable expansion for K8s
- Set ConfigMap defaultMode to 0755 for executable scripts
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Create comprehensive documentation for deploying stacks using Docker
Compose, which is the default and recommended deployment mode.
The guide covers:
- Complete deployment workflows (deployment directory and quick deploy)
- Real-world examples (test stack and fixturenet-eth)
- Configuration options (ports, volumes, environment variables)
- Common operations and troubleshooting
- CLI commands reference
Mentions that Kubernetes deployment options exist but are out of scope
for this guide (covered in separate K8s documentation).
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
- AI-FRIENDLY-PLAN.md: Plan for making repo AI-friendly
- STACK-CREATION-GUIDE.md: Implementation details for create-stack command
- laconic-network-deployment.md: Laconic network deployment overview
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Part of https://plan.wireit.in/deepstack/browse/VUL-265/
- Added a flag `--helm-chart` to `deploy create` command
- Uses Kompose CLI wrapper to generate a helm chart from compose files in a stack
- To be handled in follow-on PR(s):
- Templatize generated charts and generate a `values.yml` file with defaults
Reviewed-on: cerc-io/stack-orchestrator#974
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
This is needed to allow custom deploy commands to handle arbitrary args.
* Adds a `DeploymentContext.modify_yaml` helper
* Removes `laconicd` from test stack to simplify it
Reviewed-on: cerc-io/stack-orchestrator#972
Reviewed-by: ashwin <ashwin@noreply.git.vdb.to>
Part of [Service provider auctions for web deployments](https://www.notion.so/Service-provider-auctions-for-web-deployments-104a6b22d47280dbad51d28aa3a91d75) and cerc-io/stack-orchestrator#948
- Add a registry mutex decorator over tx methods in `LaconicRegistryClient` wrapper
- Required to allow multiple processes to run webapp deployment tooling without running into account sequence errors when sending laconicd txs (see the sketch below)
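A minimal sketch of such a decorator, using a file lock so that separate processes are serialized (the lock path and decorator name are assumptions):
```python
import fcntl
from functools import wraps

def registry_mutex(lock_path: str = "/tmp/laconic-registry.lock"):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            with open(lock_path, "w") as lock_file:
                fcntl.flock(lock_file, fcntl.LOCK_EX)  # blocks until the lock is free
                try:
                    return func(*args, **kwargs)
                finally:
                    fcntl.flock(lock_file, fcntl.LOCK_UN)
        return wrapper
    return decorator
```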
Reviewed-on: cerc-io/stack-orchestrator#957
Reviewed-by: ashwin <ashwin@noreply.git.vdb.to>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Part of [Service provider auctions for web deployments](https://www.notion.so/Service-provider-auctions-for-web-deployments-104a6b22d47280dbad51d28aa3a91d75) and cerc-io/stack-orchestrator#948
- Add a command `publish-deployment-auction` to create and publish an app deployment auction
- Add a command `handle-deployment-auction` to handle auctions on deployer side
- Update `request-webapp-deployment` command to allow using an auction id in deployment requests
- Update `deploy-webapp-from-registry` command to handle deployment requests with auction
- Add a command `request-webapp-undeployment` to request an application undeployment
Reviewed-on: cerc-io/stack-orchestrator#950
Reviewed-by: ashwin <ashwin@noreply.git.vdb.to>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Part of cerc-io/stack-orchestrator#955
- Using `shiv` version 1.0.6
Reviewed-on: cerc-io/stack-orchestrator#956
Reviewed-by: ashwin <ashwin@noreply.git.vdb.to>
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Co-committed-by: Nabarun <nabarun@deepstacksoft.com>
Otherwise we sometimes see errors like:
```
cerc-webapp-deployer: File "/root/.shiv/laconic-so_0f937aa98c2748ef9af8585d6f441dbc01546ace0d6660cbb159d1e5040aeddf/site-packages/stack_orchestrator/deploy/webapp/deploy_webapp_from_registry.py", line 671, in command
cerc-webapp-deployer: shutil.rmtree(tempdir)
cerc-webapp-deployer: File "/usr/lib/python3.10/shutil.py", line 725, in rmtree
cerc-webapp-deployer: _rmtree_safe_fd(fd, path, onerror)
cerc-webapp-deployer: File "/usr/lib/python3.10/shutil.py", line 681, in _rmtree_safe_fd
cerc-webapp-deployer: onerror(os.unlink, fullname, sys.exc_info())
cerc-webapp-deployer: File "/usr/lib/python3.10/shutil.py", line 679, in _rmtree_safe_fd
cerc-webapp-deployer: os.unlink(entry.name, dir_fd=topfd)
cerc-webapp-deployer: FileNotFoundError: [Errno 2] No such file or directory: 'S.gpg-agent.extra'
```
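One way to make the cleanup tolerant of files vanishing mid-delete (whether this matches the actual fix is an assumption):
```python
import shutil

# Tolerate entries (e.g. gpg-agent sockets) disappearing while the tree is deleted
shutil.rmtree(tempdir, ignore_errors=True)
```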
Reviewed-on: cerc-io/stack-orchestrator#941
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Add a flag to re-use config.
Reviewed-on: cerc-io/stack-orchestrator#939
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
This adds two new commands: `publish-webapp-deployer` and `request-webapp-deployment`.
`publish-webapp-deployer` creates a `WebappDeployer` record, which provides information to requestors like the API URL, minimum required payment, payment address, and public key to use for encrypting config.
```
$ laconic-so publish-deployer-to-registry \
--laconic-config ~/.laconic/laconic.yml \
--api-url https://webapp-deployer-api.dev.vaasl.io \
--public-key-file webapp-deployer-api.dev.vaasl.io.pgp.pub \
--lrn lrn://laconic/deployers/webapp-deployer-api.dev.vaasl.io \
--min-required-payment 100000
```
`request-webapp-deployment` simplifies publishing a `WebappDeploymentRequest` and can also handle automatic payment, and encryption and upload of configuration.
```
$ laconic-so request-webapp-deployment \
--laconic-config ~/.laconic/laconic.yml \
--deployer lrn://laconic/deployers/webapp-deployer-api.dev.vaasl.io \
--app lrn://cerc-io/applications/webapp-hello-world@0.1.3 \
--env-file ~/yaml/hello.env \
--make-payment auto
```
Related changes are included for the deploy/undeploy commands for decrypting and using config, using the payment address from the WebappDeployer record, etc.
Reviewed-on: cerc-io/stack-orchestrator#938
Adds three new options for deployment/undeployment:
```
"--min-required-payment",
help="Requests must have a minimum payment to be processed",
"--payment-address",
help="The address to which payments should be made. Default is the current laconic account.",
"--all-requests",
help="Handle requests addressed to anyone (by default only requests to my payment address are examined).",
```
In this mode, requests should be designated for a particular address with the attribute `to` and include a `payment` attribute which is the tx hash for the payment.
The deployer will confirm the payment (to the right account, right amount, not used before, etc.) and then proceed with the deployment or undeployment.
Reviewed-on: cerc-io/stack-orchestrator#928
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#930
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#917
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Related to cerc-io/webapp-deployment-status-api#10
There are two issues here. One is that the output apparently changed recently, whether in the client or the server, when no matching record is found by ID (note this is specific to `laconic record get --id <v>` and does not seem to apply to the similar command that retrieves a record by name, `laconic name resolve <n>`).
Rather than returning `[]` it now returns `[ null ]`. This caused us to think there *was* an application record found, and we attempted to treat the `null` entry like an Application object. That's fixed by filtering out null responses, which is a good precaution for the deployer, though I think it makes sense to ask whether this new behavior by the client/server is correct. Seems suspicious.
The other issue is that all the defensive checks we had in place to deal with broken/bad AppDeploymentRequests were around the _build_. This error was coming much earlier, merely when parsing and examining the request to see if it needed to be handled at all.
I have now added similar defensive error handling around that portion of the code.
Reviewed-on: cerc-io/stack-orchestrator#922
Reviewed-by: zramsay <zramsay@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
The str type check doesn't work if the port is a ruamel.yaml.scalarstring.SingleQuotedScalarString or ruamel.yaml.scalarstring.DoubleQuotedScalarString
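To illustrate the pitfall (whether the original check was an exact type comparison is an assumption; normalizing with str() sidesteps it either way):
```python
from ruamel.yaml.scalarstring import SingleQuotedScalarString

port = SingleQuotedScalarString("8080")
print(type(port) is str)      # False: an exact type comparison misses the subclass
print(isinstance(port, str))  # True: scalarstring types subclass str
print(str(port))              # normalizing with str() is the simplest fix
```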
Reviewed-on: cerc-io/stack-orchestrator#919
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#918
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#915
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#916
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
This emulates the K8S ConfigMap behavior on Docker by using a regular volume.
If a directory exists under `config/` which matches a named volume, the contents will be copied to the volume on `create` (provided the destination volume is empty). That is, rather than a ConfigMap, it is essentially a "config volume".
For example, with a compose file like:
```
version: '3.7'
services:
caddy:
image: cerc/caddy-ethcache:local
restart: always
volumes:
- caddyconfig:/etc/caddy:ro
volumes:
caddyconfig:
```
And a directory:
```
❯ ls stack-orchestrator/config/caddyconfig/
Caddyfile
```
After `laconic-so deploy create --spec-file caddy.yml --deployment-dir /srv/caddy` there will be:
```
❯ ls /srv/caddy/data/caddyconfig
Caddyfile
```
Mounted at `/etc/caddy`
Reviewed-on: cerc-io/stack-orchestrator#914
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
NodePort example:
```
network:
  ports:
    caddy:
      - 1234
      - 32020:2020
```
Replicas example:
```
replicas: 2
```
This also adds an optimization for k8s where if a directory matching the name of a configmap exists beneath config/ in the stack, its contents will be copied into the corresponding configmap.
For example:
```
# Config files in the stack
❯ ls stack-orchestrator/config/caddyconfig
Caddyfile Caddyfile.one-req-per-upstream-example
# ConfigMap in the spec
❯ cat foo.yml | grep config
...
configmaps:
  caddyconfig: ./configmaps/caddyconfig
# Create the deployment
❯ laconic-so --stack ~/cerc/caddy-ethcache/stack-orchestrator/stacks/caddy-ethcache deploy create --spec-file foo.yml
# The files from beneath config/<config_map_name> have been copied to the ConfigMap directory from the spec.
❯ ls deployment-001/configmaps/caddyconfig
Caddyfile Caddyfile.one-req-per-upstream-example
```
Reviewed-on: cerc-io/stack-orchestrator#913
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#909
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Instead of attempting to rewrite the nextConfig file directly, inject a helper function to add the config we need.
Reviewed-on: cerc-io/stack-orchestrator#901
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#896
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Fix for cerc-io/stack-orchestrator#880 to support the next compile/generate syntax.
Reviewed-on: cerc-io/stack-orchestrator#886
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#877
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#875
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#873
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
This is working off pull request "[Add support for pnpm as a webapp build tool. #767](https://git.vdb.to/cerc-io/stack-orchestrator/pulls/767/files)" that adds `pnpm` package manager support for `nextjs` & `webapps`.
`bun` default build output directory (defined as `CERC_BUILD_OUTPUT_DIR`) is `dist` which should already be handled with `pnpm` support in the previously mentioned [pull request](https://git.vdb.to/cerc-io/stack-orchestrator/pulls/767/files)
Installing `bun` using `npm` following our previous `pnpm` installation
```zsh
npm install -g bun
```
We'll be using `bun` as a package manager that works with `Node.js` projects as defined in bun's [docs](https://bun.sh/docs/cli/install)
> The bun CLI contains a Node.js-compatible package manager designed to be a dramatically faster replacement for npm, yarn, and pnpm. It's a standalone tool that will work in pre-existing Node.js projects; if your project has a package.json, bun install can help you speed up your workflow.
To test `next.js` apps using `node.js` and compatibility with all four package managers -- `npm`, `yarn`, `pnpm`, and `bun` -- use the branches of snowball's [nextjs-package-manager-example-app](https://git.vdb.to/snowball/nextjs-package-manager-example-app) repo: `nextjs-package-manager/npm`, `nextjs-package-manager/yarn`, `nextjs-package-manager/pnpm`, `nextjs-package-manager/bun`.
Co-authored-by: Vivian Phung <dev+github@vivianphung.com>
Co-authored-by: David Boreham <dboreham@noreply.git.vdb.to>
Reviewed-on: https://git.vdb.to/cerc-io/stack-orchestrator/pulls/800
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: VPhung24 <vphung24@noreply.git.vdb.to>
Co-committed-by: VPhung24 <vphung24@noreply.git.vdb.to>
Fixes
- stack path resolution for `build`
- external stack path resolution for deployments
- "extra" config detection
- `deployment ports` command
- `version` command in dist or source install (without build_tag.txt)
- `setup-repos`, so it won't die when an existing repo is not at a branch or exact tag
Used in cerc-io/fixturenet-eth-stacks#14
Reviewed-on: cerc-io/stack-orchestrator#851
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Reviewed-on: cerc-io/stack-orchestrator#868
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#866
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#865
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Helps with diagnosing failures and odd behavior seen in the deployer in production.
Reviewed-on: cerc-io/stack-orchestrator#863
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#854
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
Co-authored-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-committed-by: David Boreham <dboreham@noreply.git.vdb.to>
Reviewed-on: cerc-io/stack-orchestrator#838
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Part of [Metrics and logging for GQL queries in watcher](https://www.notion.so/Metrics-and-logging-for-GQL-queries-in-watcher-928c692292b140a2a4f52cda9795df5e)
- Update watcher config templates after config refactoring
- Mount watcher GQL query log files on volumes
- Update watcher dashboard to
- add a panel to show latest processed block number
- use latest processed block from sync status for diff values
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Reviewed-on: cerc-io/stack-orchestrator#835
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Reviewed-on: cerc-io/stack-orchestrator#831
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#830
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#829
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Tweak the auto-detection logic slightly for single-page apps, and also print the results.
Reviewed-on: cerc-io/stack-orchestrator#828
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#815
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#810
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#806
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#805
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
This adds a new option `--fqdn-policy` to `deploy-webapp-from-registry`.
The default policy, `prohibit`, means that `ApplicationDeploymentRequests` which specify a FQDN will be rejected. The `allow` policy will cause them to be processed. The `preexisting` policy will only process them if an existing `DnsRecord` exists in the registry with the correct ownership.
The latter would be useful in conjunction with a pre-checking scheme in the UI (eg, that the DNS entry is properly configured, the domain is under the control of the requestor, etc.) Only after all the checks were successful would the `DnsRecord` be created, allowing for `ApplicationDeploymentRequests` to use it.
Reviewed-on: cerc-io/stack-orchestrator#802
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Updates fixturenet-plugeth stack for the Deneb fork based on Geth v1.13.x:
- updates genesis generator tool, and simplifies the config: the default from `ethereum-genesis-generator` can be used for a from-genesis Merged chain.
Reviewed-on: cerc-io/stack-orchestrator#789
Reviewed-by: jonathanface <jonathanface@noreply.git.vdb.to>
Reviewed-by: Thomas E Lackey <telackey@noreply.git.vdb.to>
Part of https://www.notion.so/Alerting-for-failing-CI-jobs-d0183b65453947aeab11dbddf989d9c0
- Run CI alert steps only on main to avoid alerts for in-progress PRs
- The Slack alerts will be sent on a CI job failure if
- A commit is pushed directly to main
- A PR gets merged into main
- A scheduled job runs on main
Reviewed-on: cerc-io/stack-orchestrator#797
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
revert Blind commit to fix laconic CLI calls after rename. (#784)
`laconic cns` got renamed to `laconic registry` which breaks all the scripts and commands that use it.
Reviewed-on: cerc-io/stack-orchestrator#784
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#788
`laconic cns` got renamed to `laconic registry` which breaks all the scripts and commands that use it.
Reviewed-on: cerc-io/stack-orchestrator#784
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Rather than always requesting a certificate, attempt to re-use an existing certificate if it already exists in the k8s cluster. This includes matching to a wildcard certificate.
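The matching logic in sketch form (cert objects shown as dicts; the real code reads cert-manager Certificate resources):
```python
def find_certificate(fqdn: str, certificates: list):
    parent = fqdn.split(".", 1)[1] if "." in fqdn else None
    wildcard = f"*.{parent}" if parent else None
    for cert in certificates:
        hosts = cert.get("spec", {}).get("dnsNames", [])
        if fqdn in hosts or (wildcard and wildcard in hosts):
            return cert  # re-use this certificate instead of requesting a new one
    return None
```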
Reviewed-on: cerc-io/stack-orchestrator#779
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Make sure to check the exit code of the docker build and bubble it back up to laconic-so.
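In sketch form (function name assumed):
```python
import subprocess
import sys

def build_container(tag: str, context_dir: str):
    result = subprocess.run(["docker", "build", "-t", tag, context_dir])
    if result.returncode != 0:
        sys.exit(result.returncode)  # bubble the build failure up to laconic-so
```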
Reviewed-on: cerc-io/stack-orchestrator#778
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
This saves about 1GB of space in the image.
Reviewed-on: cerc-io/stack-orchestrator#773
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#769
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Implementation of a command to fetch pre-built images from a remote registry, complementing the --push-images option already present on build-containers.
The two subcommands used together allow a stack to be deployed without needing to build its images, provided they have already been built and pushed to the specified container image registry.
This implementation simply picks the newest image with the right name and platform (matches against the platform Python is running on, so watch out for scenarios where Python is an x86 binary on M1 macs).
Reviewed-on: cerc-io/stack-orchestrator#768
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
This adds support for auto-detecting pnpm as a build tool for webapps.
Reviewed-on: cerc-io/stack-orchestrator#767
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
webapps are meant to be build-once/deploy-many, but we were rebuilding them for every request. This changes that, so that we rebuild only for every unique ApplicationRecord.
When we push the image, we now tag it according to its ApplicationRecord.
We don't want to use that tag directly in the compose file for the deployment, however, as the deployment needs to be able to adjust to new builds w/o re-writing the file all the time. Instead, we use a per-deployment unique tag (same as before), we just update what image it references as needed.
Reviewed-on: cerc-io/stack-orchestrator#764
This creates a new environment variable, CERC_SINGLE_PAGE_APP, which controls whether a catchall redirection back to / is applied.
If the value is not explicitly set, we try to detect if the page looks like a single-page app.
Reviewed-on: cerc-io/stack-orchestrator#763
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#762
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Minor tweaks for running the container-registry in k8s. The big change is not requiring --image-registry.
Reviewed-on: cerc-io/stack-orchestrator#760
Reviewed-by: David Boreham <dboreham@noreply.git.vdb.to>
Reviewed-on: cerc-io/stack-orchestrator#758
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Hopefully the last one for a bit.
This only outputs the cmdline if log_file is present (i.e., not to stdout). It also fixes a bug where the log_file was not passed in one place.
Reviewed-on: cerc-io/stack-orchestrator#757
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#756
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#755
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
We were missing errors related to registration; this should fix that.
Reviewed-on: cerc-io/stack-orchestrator#754
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
This adds a stack for the backend from snowball/snowballtools-base.
Reviewed-on: cerc-io/stack-orchestrator#751
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
If the tree has a 'build-webapp.sh' script, use that.
Reviewed-on: cerc-io/stack-orchestrator#750
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-authored-by: David Boreham <david@bozemanpas.com>
Reviewed-on: cerc-io/stack-orchestrator#747
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#749
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#748
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Minor fixes to envsubst for webapps. Somewhat specially treated is `LACONIC_HOSTED_CONFIG_homepage` which can be used to replace the homepage in package.json. With react, this gets an extra `/` though, which we need to remove.
Reviewed-on: cerc-io/stack-orchestrator#746
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#743
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#742
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
In kind, when we bind-mount a host directory it is first mounted into the kind container at /mnt, then into the pod at the desired location.
We accidentally picked this up for full-blown k8s, and were creating volumes at /mnt. This changes the behavior for both kind and regular k8s so that bind mounts are only allowed if a fully-qualified path is specified. If no path is specified at all, a default storageClass is assumed to be present, and the volume managed by a provisioner.
Eg, for kind, the default provisioner is: https://github.com/rancher/local-path-provisioner
```
stack: test
deploy-to: k8s-kind
config:
  test-variable-1: test-value-1
network:
  ports:
    test:
      - '80'
volumes:
  # this will be bind-mounted to a host-path
  test-data-bind: /srv/data
  # this will be managed by the k8s node
  test-data-auto:
configmaps:
  test-config: ./configmap/test-config
```
Reviewed-on: cerc-io/stack-orchestrator#741
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#740
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#738
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Reviewed-on: cerc-io/stack-orchestrator#736
Co-authored-by: David Boreham <david@bozemanpass.com>
Co-committed-by: David Boreham <david@bozemanpass.com>
```
--include-tags TEXT  Only include requests with matching tags
                     (comma-separated).
--exclude-tags TEXT  Exclude requests with matching tags
                     (comma-separated).
```
Reviewed-on: cerc-io/stack-orchestrator#734
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Needs a '/' (http) not ':' (ssh).
Reviewed-on: cerc-io/stack-orchestrator#733
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
Update links and references to github.com to git.vdb.to.
Also enable the flake8 lint action in gitea.
Reviewed-on: cerc-io/stack-orchestrator#732
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
* Add Postgres exporter and its dashboard
* Use templating for watcher dashboard
* Add subgraph related panels to watcher dashboard
* Remove individual watcher dashboards and update instructions
* Deploy osmosis on Urbit fake ship
* Remove Urbit configuration from existing osmosis stack
* Add a separate stack for Osmosis front end on Urbit
* Run script for renaming build files with bash
* Add environment variables required in urbit osmosis build
* Fix BASEPATH in compose file
* Remove ipfs-glob-host from network config in osmosis readme
* Use laconic branch for osmosis frontend
---------
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
* Setup Prometheus and Grafana for monitoring stack
* Add dashboard for azimuth watchers
* Add a dashboard for sushiswap watcher
* Persist prometheus server data
* Additional metrics in watcher dashboards
* Update dashboards and add for merkl sushiswap watcher
* Add dashboards for remaining azimuth watchers
* Separate out preconfigured watcher dashboards and add instructions
* Keep the empty dashboards dir
* Refactor to make Urbit app deployment script generic
* Rename urbit pod and update instructions
* Add a flag to allow skipping app installation on Urbit
* Make remote Urbit app deployment scripts generic
* Move remote deployment scripts to urbit fixturenet
* Update and use existing kubo pod for Urbit glob hosting
* Separate out uniswap gql proxy in a stack
* Use proxy server from watcher-ts
* Add a flag to enable/disable the proxy server
* Update env configuration for uniswap urbit app stack
* Update stack file for uniswap urbit app stack
* Fix env variables in instructions
* Use IPFS for hosting glob files for Urbit
* Add env configuration for IPFS endpoints to instructions
* Make ship pier dir configurable in remote deployment script
* Update remote deployment script to accept glob hash arg
* Create uniswap-frontend stack
* Add stack for building uniswap frontend app
* Add a container for Urbit fake ship
* Update with deployment command
* Add a service for uniswap app deployment to urbit
* Use a script to start urbit ship to handle restarts
* Rename stack name to uniswap-urbit-app
* Rename build.sh to build-app.sh and check if build already exists
* Rename stack directory name
* Update uniswap build restart on failure
* Perform uniswap app deployment in the urbit container
* Add steps to create glob for the app
* Tail /dev/null after deployment
* Add steps to install the app to desk
* Host glob files for uniswap
* Update repo branch
* Update readme with command to get urbit password
* Update readme
* Update readme to open urbit web UI
* Expose the port on glob hosting container
* Avoid exposing urbit http port
* Add scripts for installing uniswap on remote urbit instance
* Configure GQL proxy for uniswap app
* Use laconic branch for app repo
* Rename urbit pod for uniswap app deployment
---------
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
* Update stack to run azimuth job runner
* Run azimuth watcher in active mode
* Update stack to run job-runners for all watchers
* Update ports in job-runner health checks
* Map metrics ports to host
* Configure historical block processing batch size for Azimuth watcher
* Use deployment command for azimuth stack
---------
Co-authored-by: neeraj <neeraj.rtly@gmail.com>
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
This fixes three issues:
1. #644 (build output)
2. #646 (error on startup)
3. automatic env quote handling (related to 2)
For the build output we now have:
```
#################################################################
Built host container for /home/telackey/tmp/iglootools-home with tag:
cerc/iglootools-home:local
To test locally run:
docker run -p 3000:3000 cerc/iglootools-home:local
```
For the startup error, it was hung waiting for the "success" message from the next generate output (itself a workaround for a nextjs bug fixed by this PR we submitted: https://github.com/vercel/next.js/pull/58276).
I added a timeout which will cause it to wait up to a maximum _n_ seconds before issuing:
```
ERROR: 'npm run cerc_generate' exceeded CERC_MAX_GENERATE_TIME.
```
On the quoting itself, I plan on adding a new run-webapp command, but I realized I had a decent spot to effect the quote replacement on-the-fly after all, when I am already escaping the values for insertion/replacement into JS.
The "dequoting" can be disabled with `CERC_RETAIN_ENV_QUOTES=true`.
* Upgrade merkl-sushiswap-v3-watcher-ts release
* Increase blockDelayInMilliSecs for merkl-sushiswap-v3 watcher
* Upgrade sushiswap-v3-watcher-ts release
* Add sushiswap-v3 watcher to stack list
* Avoid mapping ports that are not required to be exposed
* Add a sushiswap-v3 watcher stack
* Add services for watcher db and server
* Add service for watcher job-runner
* Use 0.0.0.0 for watcher server config
---------
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
* Separate ponder indexer and ponder watcher and add second ponder indexer
* Handle review changes
* Update config to point ponder watcher to indexer 2 instead of indexer 1
* Update Ponder demo
* Use deployed ERC20 contract in second Ponder indexer
* Add order by timestamp in Ponder watcher app entities query
* Upgrade go-nitro version to v0.1.2-ts-port-0.1.9
* Decrease Ponder start block to process contract transfer event at deployment
---------
Co-authored-by: Shreerang Kale <shreerangkale@gmail.com>
* Use durable store for in-process Nitro node
* Update setup for external go-nitro node
* Add a separate service for ipld-eth-server with remote Nitro node
* Update repo branches / versions
* Wait for external Nitro node endpoint and update instructions
* Update repo branches
* Add support to pass ratesFile in config
* Change branch for ponder to laconic-esm
* Fix repo name for ponder
---------
Co-authored-by: Shreerang Kale <shreerangkale@gmail.com>
* Remove reverse payment proxy service from payment stack
* Remove run-reverse-payment-proxy.sh
* Remove reverse payment proxy port from readme
---------
Co-authored-by: Shreerang Kale <shreerangkale@gmail.com>
* Use ponder in watcher mode and indexer mode separately in payments stack
* Refactor config file and configure env variables for watcher mode
* Update demo.md for payments stack
* Handle review changes
* Setup config to pay for watcher to indexer GQL queries
* Fix config in stack for making payments in watcher ponder app
* Update demo for payment from watcher to indexer mode Ponder apps
* Use laconic-esm branch for ponder
---------
Co-authored-by: Shreerang Kale <shreerangkale@gmail.com>
* First part of deployments for external repos
* Generate deployment dir
* Create empty config file
* Copy script files into deployment
* Run scripts in deployment
* Refactor
* Integrate external plugins
* Remove debug output
* Add a fixturenet-payments stack
* Export the WebSocket port in fixturenet-eth-geth service
* Add container to run a go-nitro node
* Add container to deploy Nitro contracts
* Read contract addresses from a volume when running the Nitro node
* Add a service for Nitro reverse payment proxy
* Expose payment proxy endpoint to be accessible from host
* Map nitro node messaging and payment proxy ports to host
* Use container to deploy Nitro contracts in mobymask-v3 stack
* Use a common contract deployment script from mobymask-v3 stack
* Add MobyMask contract deployment and watcher services
* Fixes for contract deployment and watcher scripts
* Add a container and service for mobymask-snap
* Add MobyMask app service
* Add container and service for a ponder app
* Fix ponder setup and update instructions
* Handle review comments
* Use enablepaidrpcmethods flag in reverse payment proxy server
* Update go-nitro branch
* Fixes for mobymask-v3 stack
---------
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
* Add a mobymask-v3 stack
* Fix Nitro deployment script and add watcher container
* Setup Nitro config
* Run build after setting Nitro addresses
* Setup consensus config
* Add a container for web-app
* Use node 18 for the web-app
* Persist Nitro node data to a volume
* Add clean up steps
* Update query rates
* Add steps to force rebuild and persist peers_ids volume
* Update mobymask-v2 stack with pubsub option
* Update watcher-ts version
---------
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
* Update mobymask v1 watcher with new contract
* Update mobymask v1 stack with deployment commands
* Use release tag for mobymask-watcher-ts repo
* Upgrade MobyMask version in v2 stack to use latest contract
* Update sushiswap-subgraph stack to point to filecoin endpoint
* Deploy blocks subgraph in sushiswap-subgraph stack
* Update subgraph config
* Remove duplicate nativePricePool
* Enable debug logs in graph-node and update instructions
* Two additional miner nodes in fixturenet-lotus
* Revert experimental change for additional miner nodes
* Set ETHEREUM_REORG_THRESHOLD env for graph-node to catch up to head
* Take ETH RPC endpoint for graph-node from env
* Set default values for RPC endpoint in graph-node
* Rename fixturenet-graph-node pod to graph-node
* Clean up sushiswap-subgraph stack
* Use deployment command in sushiswap-subgraph stack
* Add a separate fixturenet-sushiswap-subgraph stack
* Fix pods in fixturenet-sushiswap-subgraph
* Fix fixturenet subgraph deployment commands and instructions
---------
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
* Use ip utility to get the required miner node multiaddr
* Persist lotus node data to support restarts
* Add clean up steps to instructions
* Fix lotus-seed sector-dir arg
* Add a sushiswap stack with contract deployments
* Add watcher services
* Add a service for the info app
* Add instructions to run smoke tests
* Use sushi-info-watcher in demo mode
* Turn off block prefetching
* Fix sushiswap demo instructions
* Use release version and add healthcheck in Lotus stack
* Wait for Lotus node to start before sushiswap watchers
---------
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
* move health check inside startup script
* remove pre-built genesis
* move health check inside startup script
* remove pre-built genesis
* Use hardcoded paths for Lotus node data directories
* Persist proof parameters
* Write out miner node's multiaddr with docker network IP
* Enable Lotus ETH RPC API and bind to all available interfaces
* Fund a known account
---------
Co-authored-by: iskay <ikay@lakeheadu.ca>
Co-authored-by: Ian Kay <ian@knowable.vc>
* forward more vars for debugging
forward CERC_GETH_VERBOSITY
forward CERC_STATEDIFF_DB_LOG_STATEMENTS
forward CERC_REMOTE_DEBUG
* fix env var
* remote flag can be set from env
* random nits
* geth - visibility of migration status
* forward CERC_RUN_STATEDIFF to geth container
* fix ipld-eth-server vars
* fix fixturenet-eth-loaded stack
* fixturenet geth genesis - include mergeNetsplitBlock
* forward CERC_STATEDIFF_DB_GOOSE_MIN_VER to env file
* add TAG_SUFFIX arg to lighthouse build
intended to avoid sporadic failures when running lcli on github CI runners, likely related to non-portable builds
* Add a stack for Gelato watcher
* Add option to create and use a state snapshot
* Add commands to create and import a state checkpoint
* Rename ipld-eth-server endpoint env variables
* Fix default env variable
Former-commit-id: 8b4b5deba8
* Setup gateway-server with watchers
* Add js script to merge toml config files
* Remove individual watcher configs
* Add all azimuth watchers in stack
* Fix toml-js install
* Use env variables for ipld-eth-server endpoints
* Checkout to version tag in azimuth-watcher-ts repo
Former-commit-id: 5a94aed7f7
* add fixturenet-gaia stack
* add fixturenet-pocket
* integrate with eth fixturenet
* separate out fixturenet-gaia
* use pocket-deployments Dockerfile
---------
Co-authored-by: iskay <ian@knowable.vc>
Co-authored-by: Ian <ikay@lakeheadu.ca>
Former-commit-id: b23b5ae3bf
* Remove unnecessary check on watcher endpoint
* Add instructions to run MobyMask app with a watcher on network
* Move watcher on network docs to a separate folder
* Add nginx config for watcher endpoint
* Add expected output logs
* Add sample nginx config for hosting the app
* Update instructions
Former-commit-id: 018950858b
* Use the latest stable optimism release
* Remove unnecessary repos from repo-list
* Add op-proposer service to fixturenet-optimism stack
* Add jq and bash to op-proposer image
* Update instructions
* Update op-batcher and op-geth commands
Former-commit-id: 988be0ef9a
* Refactor L2 endpoint check to contract deployment script
* Add instructions to join to an existing watcher network
* Include mobymask-v2-watcher-ts in repositories setup
* Add a clean up section and expected outputs
* Add a troubleshooting section
* Use lxdao frontend
* Update instructions for updated UI
Former-commit-id: f78176a27f
* Add container to stack for lxdao mobymask-app
* Remove shm_size
* Use cerc-io scoped alias for lxdao app package
* Change alias to @cerc-io/mobymask-ui-lxdao
Former-commit-id: 46b36c3cb6
* Use standalone mobymask-v2-watcher-ts to run peer test
* Add watcher-ts image for running peer tests
* Run separate containers for peer ids generation and tests
* Wait for watcher to be up before starting peer-test-app
* Resolve peer-test-app compose file and remove setup-repositories for web-apps
Former-commit-id: c4002dcc5c
* Use mounted volumes for data in geth nodes
* Use mounted volumes for data in lighthouse nodes
* Avoid resetting genesis time in a lighthouse node on restart
* Mount parent datadir for lighthouse nodes
* Trap signals on shutdown and clean up in lighthouse nodes
* Allow stalled sync in lighthouse beacon nodes
* Gracefully shutdown geth nodes
* Add clean up instructions
* Gracefully shutdown lighthouse boot node
Former-commit-id: 3130af1615
* Build MobyMask web-app at container build step
* Fix web-app start script to use env variables in config
* Replace variables in built web-app files
* Use published mobymask-ui package from gitea
* Use published react-peer/test-app from gitea
* Remove local gitea publish TODO
Former-commit-id: cf79f0de0a
* Fix contract deployment script in fixturenet-optimism stack
* Configure relay node's announce domain from env
* Configure relay peers list for the relay node from env
* Create and use peer ids from a mounted volume
* Fix command to create watcher config
* Fix mobymask-app deployment script
Former-commit-id: 882f0b16aa
* Add an option to pass env file to deploy command
* Use env variable mapping in fixturenet-optimism stack
* Use default values from checked in env files
* Use env variable mapping in mobymask-v2 stack
* Update instructions
* Add extra hosts in app compose files and update instructions
* Add CERC prefix to env variables in fixturenet-optimism stack
* Add CERC prefix to env variables in mobymask-v2 stack
Former-commit-id: 6b62247ef7
Make the stack-orchestrator repository easier for AI tools (Claude Code, Cursor, Copilot) to understand and use for generating stacks, including adding a `create-stack` command.
---
## Part 1: Documentation & Context Files
### 1.1 Add CLAUDE.md
Create a root-level context file for AI assistants.
**File:** `CLAUDE.md`
Contents:
- Project overview (what stack-orchestrator does)
- Stack creation workflow (step-by-step)
- File naming conventions
- Required vs optional fields in stack.yml
- Common patterns and anti-patterns
- Links to example stacks (simple, medium, complex)
This file provides guidance to Claude Code when working with the stack-orchestrator project.
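To make the stack.yml conventions above concrete, here is a minimal sketch. The field names follow the stacks already in the repository; the repo, container, and pod names are placeholders, not a real stack:

```yaml
# Minimal illustrative stack.yml (all names below are placeholders)
version: "1.0"                    # stack definition format version
name: example-stack               # identifier passed to --stack
description: "An example stack"   # optional human-readable summary
repos:                            # cloned by setup-repositories
  - github.com/cerc-io/example-repo
containers:                       # images built by build-containers
  - cerc/example-container
pods:                             # compose pods started by deploy
  - example-pod
```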
## Some rules to follow
NEVER speculate about the cause of something
NEVER assume your hypotheses are true without evidence
ALWAYS clearly state when something is a hypothesis
ALWAYS use evidence from the systems you're interacting with to support your claims and hypotheses
ALWAYS run `pre-commit run --all-files` before committing changes
## Key Principles
### Development Guidelines
- **Single responsibility** - Each component has one clear purpose
- **Fail fast** - Let errors propagate, don't hide failures
- **DRY/KISS** - Minimize duplication and complexity
## Development Philosophy: Conversational Literate Programming
### Approach
This project follows principles inspired by literate programming, where development happens through explanatory conversation rather than code-first implementation.
### Core Principles
- **Documentation-First**: All changes begin with discussion of intent and reasoning
- **Narrative-Driven**: Complex systems are explained through conversational exploration
- **Justification Required**: Every coding task must have a corresponding TODO.md item explaining the "why"
- **Iterative Understanding**: Architecture and implementation evolve through dialogue
### Working Method
1. **Explore and Understand**: Read existing code to understand current state
2. **Discuss Architecture**: Workshop complex design decisions through conversation
3. **Document Intent**: Update TODO.md with clear justification before coding
4. **Explain Changes**: Each modification includes reasoning and context
5. **Maintain Narrative**: Conversations serve as living documentation of design evolution
### Implementation Guidelines
- Treat conversations as primary documentation
- Explain architectural decisions before implementing
- Use TODO.md as the "literate document" that justifies all work
- Maintain clear narrative threads across sessions
- Workshop complex ideas before coding
This approach treats the human-AI collaboration as a form of **conversational literate programming** where understanding emerges through dialogue before code implementation.
## External Stacks Preferred
When creating new stacks for any reason, **use the external stack pattern** rather than adding stacks directly to this repository.
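A sketch of what that looks like in practice, assuming a `fetch-stack`-capable version of laconic-so (the repository URL and paths below are placeholders):

```bash
# Clone an external stack repo (fetch-stack clones under the dev root, ~/cerc by default)
laconic-so fetch-stack git.vdb.to/cerc-io/example-stack-repo

# Reference the stack by filesystem path instead of a built-in stack name
laconic-so --stack ~/cerc/example-stack-repo/stack-orchestrator/stacks/example-stack deploy init --output example-spec.yml
```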
- **When something times out, that doesn't mean it needs a longer timeout; it means something that was expected never happened, not that we need to wait longer for it.**
- **NEVER change a timeout because you believe something was truncated; you don't understand timeouts, so don't edit them unless told to explicitly by the user.**
## Install
**To get started quickly** on a fresh Ubuntu instance (e.g., Digital Ocean), [try this script](./scripts/quick-install-linux.sh). **WARNING:** always review scripts prior to running them so that you know what is happening on your machine.
For any other installation, follow along below and **adapt these instructions based on the specifics of your system.**
Ensure that the following are already installed:
- [Python3](https://wiki.python.org/moin/BeginnersGuide/Download): `python3 --version` >= `3.8.10` (the Python3 shipped in Ubuntu 20+ is good to go)
Note: if installing docker-compose via package manager on Linux (as opposed to Docker Desktop), you must [install the plugin](https://docs.docker.com/compose/install/linux/#install-the-plugin-manually), e.g.:
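A sketch of the manual plugin install from the linked Docker docs; the pinned version is an example only, check the compose releases page for the current one:

```bash
# Install the compose CLI plugin for the current user (version below is an example)
DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
mkdir -p $DOCKER_CONFIG/cli-plugins
curl -SL https://github.com/docker/compose/releases/download/v2.24.6/docker-compose-linux-x86_64 \
  -o $DOCKER_CONFIG/cli-plugins/docker-compose
chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose
docker compose version  # verify the plugin is found
```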
Next decide on a directory where you would like to put the stack-orchestrator program. Typically this would be
a "user" binary directory such as `~/bin` or perhaps `/usr/local/laconic` or possibly just the current working directory.
Now, having selected that directory, download the latest release from [this page](https://git.vdb.to/cerc-io/stack-orchestrator/tags) into it (we're using `~/bin` below for concreteness but edit to suit if you selected a different directory). Also be sure that the destination directory exists and is writable:
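A sketch of that download step, assuming the `latest` release asset path on the releases server:

```bash
# Download the laconic-so binary into ~/bin and make it executable
mkdir -p ~/bin
curl -L -o ~/bin/laconic-so https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
chmod +x ~/bin/laconic-so
laconic-so version  # confirm the install worked
```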
If Stack Orchestrator was installed using the process described above, it is able to subsequently self-update to the current latest version by running:
```bash
laconic-so update
```
## Usage
The various [stacks](/stack_orchestrator/data/stacks) each contain instructions for running different stacks based on your use case. For example:
We need an "update stack" command in stack orchestrator and cleaner documentation regarding how to do continuous deployment with and without payments.
**Context**: Currently, `deploy init` generates a spec file and `deploy create` creates a deployment directory. The `deployment update` command (added by Thomas Lackey) only syncs env vars and restarts - it doesn't regenerate configurations. There's a gap in the workflow for updating stack configurations after initial deployment.
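To make the gap concrete, the current workflow looks roughly like this (stack name and paths are placeholders):

```bash
# init generates a spec file; create materializes a deployment directory from it
laconic-so --stack example-stack deploy init --output example-spec.yml
laconic-so --stack example-stack deploy create --spec-file example-spec.yml --deployment-dir example-deployment
laconic-so deployment --dir example-deployment start

# The existing update only syncs env vars and restarts; editing example-spec.yml
# afterwards has no effect, since nothing regenerates the deployment from the spec
laconic-so deployment --dir example-deployment update
```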
## Architecture Refactoring
### Separate Deployer from Stack Orchestrator CLI
The deployer logic should be decoupled from the CLI tool to allow independent development and reuse.
### Separate Stacks from Stack Orchestrator Repo
Stacks should live in their own repositories, not bundled with the orchestrator tool. This allows stacks to evolve independently and be maintained by different teams.
Instructions for deploying a local Laconic blockchain "fixturenet" for development and testing purposes using laconic-stack-orchestrator.
## 1. Install Laconic Stack Orchestrator
Installation is covered in detail [here](https://github.com/cerc-io/stack-orchestrator#user-mode), but if you're on Linux and already have docker installed it should be as simple as:
Instructions to set up and deploy an end-to-end L1+L2 stack with [fixturenet-eth](../fixturenet-eth/) (L1) and [Optimism](https://stack.optimism.io) (L2)
We support running just the L2 part of the stack, given an external L1 endpoint; see [l2-only](./l2-only.md) for instructions.
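The setup and build steps themselves are elided in this excerpt; the usual sequence for this stack would be:

```bash
# Clone the stack's source repositories, then build its container images
laconic-so --stack fixturenet-optimism setup-repositories
laconic-so --stack fixturenet-optimism build-containers
```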
This should create the required docker images in the local image registry:
* `cerc/go-ethereum`
* `cerc/lighthouse`
* `cerc/fixturenet-eth-geth`
* `cerc/fixturenet-eth-lighthouse`
* `cerc/foundry`
* `cerc/optimism-contracts`
* `cerc/optimism-l2geth`
* `cerc/optimism-op-batcher`
* `cerc/optimism-op-node`
## Deploy
Deploy the stack:
```bash
laconic-so --stack fixturenet-optimism deploy up
```
The `fixturenet-optimism-contracts` service may take a while (`~15 mins`) to finish, as it:
1. waits for the 'Merge' to happen on L1
2. waits for a finalized block to exist on L1 (so that it can be taken as a starting block for rollups)
3. deploys the L1 contracts
To list and monitor the running containers:
```bash
laconic-so --stack fixturenet-optimism deploy ps
# With status
docker ps
# Check logs for a container
docker logs -f <CONTAINER_ID>
```
## Clean up
Stop all services running in the background:
```bash
laconic-so --stack fixturenet-optimism deploy down
```
Clear volumes created by this stack:
```bash
# List all relevant volumes
docker volume ls -q --filter "name=.*fixturenet_geth_accounts|.*l1_deployment|.*l2_accounts|.*l2_config|.*l2_geth_data"
# Remove all the listed volumes
docker volume rm $(docker volume ls -q --filter "name=.*fixturenet_geth_accounts|.*l1_deployment|.*l2_accounts|.*l2_config|.*l2_geth_data")
```
## Troubleshooting
* If the `op-geth` service aborts or is restarted, the following error might occur in the `op-node` service:
```bash
WARN [02-16|21:22:02.868] Derivation process temporary error attempts=14 err="stage 0 failed resetting: temp: failed to find the L2 Heads to start from: failed to fetch L2 block by hash 0x0000000000000000000000000000000000000000000000000000000000000000: failed to determine block-hash of hash 0x0000000000000000000000000000000000000000000000000000000000000000, could not get payload: not found"
```
* This means that the data directory that `op-geth` is using is corrupted and needs to be reinitialized; the containers `op-geth`, `op-node` and `op-batcher` need to be started afresh:
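A sketch of that recovery, assuming the volume naming shown in the clean-up section above:

```bash
# Stop the stack, clear the corrupted op-geth data volume, then start afresh
laconic-so --stack fixturenet-optimism deploy down
docker volume rm $(docker volume ls -q --filter "name=.*l2_geth_data")
laconic-so --stack fixturenet-optimism deploy up
```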
This should create the required docker images in the local image registry
## Deploy
### Configuration
* In the [mobymask-params.env](../../config/watcher-mobymask-v2/mobymask-params.env) file, set `DEPLOYED_CONTRACT` to the address of an existing deployed MobyMask contract (an illustrative excerpt of these files appears after the notes below)
* Setting `DEPLOYED_CONTRACT` will skip contract deployment when running the stack
* `ENABLE_PEER_L2_TXS` is used to enable/disable sending txs to the L2 chain from the watcher peer
* Update the [optimism-params.env](../../config/watcher-mobymask-v2/optimism-params.env) file with Optimism endpoints and other params for the Optimism instance running separately
* `PRIVATE_KEY_PEER` is used by the watcher peer to send txs to the L2 chain
* NOTE:
* Stack Orchestrator needs to be run in [`dev`](/docs/CONTRIBUTING.md#install-developer-mode) mode to be able to edit the env file
* If Optimism is running on the host machine, use `host.docker.internal` as the hostname to access the host port
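An illustrative excerpt of these env files (addresses and keys are zero-value placeholders, not real values):

```bash
# mobymask-params.env (illustrative)
DEPLOYED_CONTRACT=0x0000000000000000000000000000000000000000  # set to skip contract deployment
ENABLE_PEER_L2_TXS=true

# optimism-params.env (illustrative, for an externally running Optimism)
PRIVATE_KEY_PEER=0x0000000000000000000000000000000000000000000000000000000000000000
```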
### Deploy the stack
```bash
laconic-so --stack mobymask-v2 deploy --include watcher-mobymask-v2 up
```
This should create the required docker images in the local image registry
## Deploy
### Configuration
* Update the [mobymask-params.env](../../config/watcher-mobymask-v2/mobymask-params.env) file with watcher endpoints and other params required by the web-apps (see the example values after the notes below)
* `WATCHER_HOST` and `WATCHER_PORT` are used to check that the watcher is up before building and deploying mobymask-app
* `APP_WATCHER_URL` is used by mobymask-app to make GQL queries
* `DEPLOYED_CONTRACT` and `CHAIN_ID` are used by mobymask-app in its app config when creating messages for L2 txs
* `RELAY_NODES` is used by the web-apps to connect to the relay nodes (run in the watcher)
* NOTE:
* Stack Orchestrator needs to be run in [`dev`](/docs/CONTRIBUTING.md#install-developer-mode) mode to be able to edit the env file
* If watcher is running on the host machine, use `host.docker.internal` as the hostname to access the host port
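An illustrative excerpt showing how these fit together (hostnames and values are examples only; the `RELAY_NODES` multiaddr is truncated and hypothetical):

```bash
# mobymask-params.env (illustrative web-app values)
WATCHER_HOST=host.docker.internal  # used to check the watcher is up
WATCHER_PORT=3001
APP_WATCHER_URL=http://localhost:3001
DEPLOYED_CONTRACT=0x0000000000000000000000000000000000000000
CHAIN_ID=42069
RELAY_NODES=["/dns4/example-relay/tcp/443/wss/p2p/12D3KooWExample"]
```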
### Deploy the stack
For running mobymask-app
```bash
laconic-so --stack mobymask-v2 deploy --include mobymask-app up
```
For running peer-test-app
```bash
laconic-so --stack mobymask-v2 deploy --include peer-test-app up
```
To list and monitor the running containers:
```bash
docker ps
# Check logs for a container
docker logs -f <CONTAINER_ID>
```
## Clean up
Stop all services running in the background:
For mobymask-app
```bash
laconic-so --stack mobymask-v2 deploy --include mobymask-app down
```
For peer-test-app
```bash
laconic-so --stack mobymask-v2 deploy --include peer-test-app down
```
Clear volumes created by this stack:
```bash
# List all relevant volumes
docker volume ls -q --filter "name=.*mobymask_deployment"
# Remove all the listed volumes
docker volume rm $(docker volume ls -q --filter "name=.*mobymask_deployment")
```
The MobyMask watcher is a Laconic Network component that provides efficient access to MobyMask contract data from Ethereum, along with evidence allowing users to verify the correctness of that data. The watcher source code is available in [this repository](https://github.com/cerc-io/watcher-ts/tree/main/packages/mobymask-watcher) and a developer-oriented Docker Compose setup for the watcher can be found [here](https://github.com/cerc-io/mobymask-watcher). The watcher can be deployed automatically using the Laconic Stack Orchestrator tool as detailed below:
## Deploy the MobyMask Watcher
The instructions below show how to deploy a MobyMask watcher using laconic-stack-orchestrator (the installation of which is covered [here](https://github.com/cerc-io/stack-orchestrator#user-mode)).
This deployment expects that ipld-eth-server's endpoints are available on the local machine at http://ipld-eth-server.example.com:8083/graphql and http://ipld-eth-server.example.com:8082. More advanced configurations are supported by modifying the watcher's [config file](../../config/watcher-mobymask/mobymask-watcher.toml).
```bash
$ laconic-so deploy-system --include watcher-mobymask up
```
Correct operation should be verified by following the instructions [here](https://github.com/cerc-io/mobymask-watcher/tree/main/mainnet-watcher-only#run), checking GraphQL queries return valid results in the watcher's [playground](http://127.0.0.1:3001/graphql).
## Clean up
Stop all the services running in the background:
```bash
$ laconic-so deploy-system --include watcher-mobymask down
```