Adds token-file key to image-pull-secret spec config. Reads the
registry token from a file on disk instead of requiring an environment
variable. The file path supports ~ expansion. Falls back to token-env
if token-file is not set or the file doesn't exist.
This lets operators store the GHCR token in ~/.credentials/ alongside
other secrets, removing the need for Ansible to pass REGISTRY_TOKEN
as an env var.
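A minimal sketch of the lookup order (helper name and config access
are approximations, not the exact implementation):

    import os
    from pathlib import Path
    from typing import Optional

    def read_registry_token(pull_secret_config: dict) -> Optional[str]:
        # Prefer token-file; expand ~ and fall back if it's absent
        token_file = pull_secret_config.get("token-file")
        if token_file:
            path = Path(token_file).expanduser()
            if path.exists():
                return path.read_text().strip()
        # Fall back to the env var named by token-env
        token_env = pull_secret_config.get("token-env")
        return os.environ.get(token_env) if token_env else None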
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The stack path in spec.yml is relative — both create_operation and
up_operation need cwd at the repo root for stack_is_external() to
resolve it. Move os.chdir(prev_cwd) to after up_operation completes
instead of between the two operations.
Reverts the SystemExit catch in call_stack_deploy_start — the root
cause was cwd, not the hook.
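Roughly (a sketch; argument lists omitted):

    import os

    prev_cwd = os.getcwd()
    os.chdir(repo_root)  # repo_root assumed already known here
    try:
        create_operation()  # resolves the relative stack path from repo root
        up_operation()      # also needs repo root as cwd
    finally:
        # restore only after both operations complete; the bug was
        # restoring cwd between the two
        os.chdir(prev_cwd)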
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two fixes for multi-deployment:
1. _pod_has_pvcs now excludes ConfigMap volumes from PVC detection.
   Pods with only ConfigMap volumes (like maintenance) correctly get
   RollingUpdate strategy instead of Recreate (see the sketch below).
2. call_stack_deploy_start catches SystemExit when the stack path doesn't
   resolve from cwd (common during restart). Most stacks don't have
   deploy hooks, so this is non-fatal.
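Sketch of the fixed check (the inputs are hypothetical stand-ins for
what the real function reads from the compose files and spec):

    def _pod_has_pvcs(volume_names, configmap_names):
        # A volume counts as a PVC only if it isn't declared as a ConfigMap
        return any(v not in set(configmap_names) for v in volume_names)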
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Split up() into _setup_cluster(), _create_ingress(), _create_nodeports().
Reduces cyclomatic complexity below the flake8 threshold.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The spec's "stack:" value is a relative path that must resolve from
the repo root. stack_is_external() checks Path(stack).exists() from
cwd, which fails when cwd isn't the repo root.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The repo_root calculation assumed stack paths are always 4 levels deep
(stack_orchestrator/data/stacks/name). External stacks with different
nesting (e.g. stack-orchestrator/stacks/name = 3 levels) got the wrong
root, causing --spec-file resolution to fail.
Use git rev-parse --show-toplevel instead.
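One way to express that (a sketch; the real code may differ):

    import subprocess
    from pathlib import Path

    def find_repo_root(start_dir) -> Path:
        # Ask git for the toplevel instead of counting path components
        result = subprocess.run(
            ["git", "rev-parse", "--show-toplevel"],
            cwd=start_dir, capture_output=True, text=True, check=True)
        return Path(result.stdout.strip())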
Fixes: so-k1k
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Each pod entry in stack.yml now creates its own k8s Deployment with
independent lifecycle and update strategy. Pods with PVCs get Recreate,
pods without get RollingUpdate. This enables maintenance services that
survive main pod restarts.
- cluster_info: get_deployments() builds per-pod Deployments, Services
- cluster_info: Ingress routes to correct per-pod Service
- deploy_k8s: _create_deployment() iterates all Deployments/Services
- deployment: restart swaps Ingress to maintenance service during Recreate
- spec: add maintenance-service key
Single-pod stacks are backward compatible (same resource names).
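The strategy choice, sketched with the kubernetes Python client
(assumed to be the client in use):

    from kubernetes import client

    def deployment_strategy(pod_has_pvcs: bool) -> client.V1DeploymentStrategy:
        if pod_has_pvcs:
            # an RWO volume can't attach to old and new pods at once
            return client.V1DeploymentStrategy(type="Recreate")
        return client.V1DeploymentStrategy(type="RollingUpdate")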
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Strategic merge patch preserves fields not present in the patch body.
This means removed volumes, ports, and env vars persist in the running
Deployment after a restart. Replace sends the complete spec built from
the current compose files — removed fields are actually deleted.
Affects Deployment, Service, Ingress, and NodePort updates. Service
replace preserves clusterIP (immutable field) by reading it from the
existing resource before replacing.
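Sketch of the Service replace with clusterIP carried over (kubernetes
Python client assumed; error handling omitted):

    from kubernetes import client

    def replace_service(core_api: client.CoreV1Api, namespace: str,
                        new_svc: client.V1Service):
        existing = core_api.read_namespaced_service(
            new_svc.metadata.name, namespace)
        # clusterIP is immutable; replace fails unless it's carried over
        new_svc.spec.cluster_ip = existing.spec.cluster_ip
        # replace also requires the current resourceVersion
        new_svc.metadata.resource_version = existing.metadata.resource_version
        core_api.replace_namespaced_service(
            new_svc.metadata.name, namespace, new_svc)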
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Allows callers to override container images during restart, e.g.:
    laconic-so deployment restart --image backend=ghcr.io/org/app:sha123
The override is applied to the k8s Deployment spec before
create-or-patch. Docker/compose deployers accept the parameter
but ignore it.
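Roughly how the override is parsed and applied (names hypothetical):

    def apply_image_overrides(deployment, overrides):
        # overrides like ["backend=ghcr.io/org/app:sha123"]
        by_name = dict(o.split("=", 1) for o in overrides)
        for container in deployment.spec.template.spec.containers:
            if container.name in by_name:
                container.image = by_name[container.name]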
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
_write_config_file() now reads each file listed under the credentials-files
top-level spec key and appends its contents to config.env after config vars.
Paths support ~ expansion. Missing files fail hard with sys.exit(1).
Also adds get_credentials_files() to Spec class following the same pattern
as get_image_registry_config().
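Sketch of the append step (file layout assumed):

    import sys
    from pathlib import Path

    def append_credentials_files(config_env: Path, credentials_files):
        with config_env.open("a") as out:
            for name in credentials_files:
                path = Path(name).expanduser()
                if not path.exists():
                    print(f"ERROR: credentials file not found: {path}")
                    sys.exit(1)  # missing files fail hard
                out.write(path.read_text())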
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The spec key `registry-credentials` was ambiguous — could mean container
registry auth or Laconic registry config. Rename to `image-pull-secret`
which matches the k8s secret name it creates.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Containers referenced in spec.yml http-proxy routes now get TCP
readiness probes on the proxied port. This tells k8s when a container
is actually ready to serve traffic.
Without readiness probes, k8s considers pods ready immediately after
start, which means:
- Rolling updates cut over before the app is listening
- Broken containers look "ready" and receive traffic (502s)
- kubectl rollout undo has nothing to roll back to
The probes use TCP socket checks (not HTTP) to work with any protocol.
Initial delay 5s, check every 10s, fail after 3 consecutive failures.
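As it might look with the kubernetes Python client (port value is an
example):

    from kubernetes import client

    proxied_port = 8080  # taken from the http-proxy route in spec.yml
    readiness_probe = client.V1Probe(
        tcp_socket=client.V1TCPSocketAction(port=proxied_port),
        initial_delay_seconds=5,   # initial delay 5s
        period_seconds=10,         # check every 10s
        failure_threshold=3)       # fail after 3 consecutive failures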
Closes so-l2l part C.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace the destroy-and-recreate deployment model with in-place updates.
deploy_k8s.py: All resource creation (Deployment, Service, Ingress,
NodePort, ConfigMap) now uses create-or-update semantics. If a resource
already exists (409 Conflict), it patches instead of failing. For
Deployments, this triggers a k8s rolling update — old pods serve traffic
until new pods pass readiness checks.
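The create-or-patch pattern, sketched for Deployments:

    from kubernetes import client
    from kubernetes.client.rest import ApiException

    def create_or_patch_deployment(apps_api: client.AppsV1Api,
                                   namespace: str, body):
        try:
            apps_api.create_namespaced_deployment(namespace, body)
        except ApiException as e:
            if e.status != 409:  # anything but "already exists" is fatal
                raise
            # patching an existing Deployment triggers a rolling update
            apps_api.patch_namespaced_deployment(
                body.metadata.name, namespace, body)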
deployment.py: restart() no longer calls down(). It just calls up()
which patches existing resources. No namespace deletion, no downtime
gap, no race conditions. k8s handles the rollout.
This gives:
- Zero-downtime deploys (old pods serve during rollout)
- Automatic rollback (if new pods fail readiness, rollout stalls)
- Manual rollback via kubectl rollout undo
Closes so-l2l (parts A and B).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
create_registry_secret() hardcoded namespace="default" but deployments
now run in dedicated laconic-* namespaces. The secret was invisible
to pods in the deployment namespace, causing 401 on GHCR pulls.
Accept namespace as parameter, passed from deploy_k8s.py which knows
the correct namespace.
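Sketch of the parameterized helper (field names per the
kubernetes.io/dockerconfigjson secret type):

    import base64
    import json
    from kubernetes import client

    def create_registry_secret(core_api, namespace, name,
                               registry, username, token):
        docker_config = {"auths": {registry: {
            "username": username, "password": token}}}
        secret = client.V1Secret(
            metadata=client.V1ObjectMeta(name=name, namespace=namespace),
            type="kubernetes.io/dockerconfigjson",
            data={".dockerconfigjson": base64.b64encode(
                json.dumps(docker_config).encode()).decode()})
        core_api.create_namespaced_secret(namespace, secret)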
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reverts the label-based deletion approach — resources created by older
laconic-so lack labels, so label queries return empty results. Namespace
deletion is the only reliable cleanup.
Adds _wait_for_namespace_gone() so down() blocks until the namespace
is fully terminated. This prevents the race condition where up() tries
to create resources in a still-terminating namespace (403 Forbidden).
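The wait loop, roughly (timeout value is an assumption):

    import time
    from kubernetes.client.rest import ApiException

    def _wait_for_namespace_gone(core_api, name, timeout=120):
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                core_api.read_namespace(name)
            except ApiException as e:
                if e.status == 404:  # fully terminated
                    return
                raise
            time.sleep(2)
        raise TimeoutError(f"namespace {name} still terminating")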
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
down() deleted the entire namespace when it wasn't explicitly set in
the spec. This causes a race condition on restart: up() tries to create
resources in a namespace that's still terminating, getting 403 Forbidden.
Always use _delete_resources_by_label() instead. The namespace is cheap
to keep and required for immediate up() after down(). This also matches
the shared-namespace behavior, making down() consistent regardless of
namespace configuration.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
config.env is regenerated from spec.yml on every deploy create and
restart, silently overwriting manual edits. Add a header comment
explaining this so operators know to edit spec.yml instead.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The secret name `{app}-registry` is ambiguous — it could be a container
registry credential or a Laconic registry config. Rename to
`{app}-image-pull-secret` which clearly describes its purpose as a
Kubernetes imagePullSecret for private container registries.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Kind deployments used imagePullPolicy=None (defaults to IfNotPresent),
which means the kind node caches images by tag and never re-pulls from
the local registry. After a container rebuild + registry push, the pod
keeps using the stale cached image.
Set Always for all deployment types so k8s re-pulls on every pod
restart. With a local registry this adds negligible overhead.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The --update path excluded config.env from the safe_copy_tree, which
meant new config vars added to spec.yml were never written to
config.env. The XXX comment already flagged this as broken.
Remove config.env from exclude_patterns so --update regenerates it
from spec.yml like the non-update path does.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Previously get_ingress() only used the first http-proxy entry,
silently ignoring additional hostnames. Now iterates over all
entries, creating an Ingress rule and TLS config per hostname.
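Sketch of the per-hostname iteration (spec key names approximate;
paths_for() is a hypothetical helper):

    from kubernetes import client

    rules, tls = [], []
    for proxy in http_proxy_entries:  # all entries, not just the first
        host = proxy["host-name"]
        rules.append(client.V1IngressRule(
            host=host,
            http=client.V1HTTPIngressRuleValue(paths=paths_for(proxy))))
        tls.append(client.V1IngressTLS(hosts=[host],
                                       secret_name=f"{host}-tls"))
    ingress_spec = client.V1IngressSpec(rules=rules, tls=tls)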
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
In docker-compose, services can reference each other by name (e.g., 'db:5432').
In Kubernetes, when multiple containers are in the same pod (sidecars), they
share the same network namespace and must use 'localhost' instead.
This fix adds translate_sidecar_service_names() which replaces docker-compose
service name references with 'localhost' in environment variable values for
containers that share the same pod.
Fixes issue where multi-container pods fail because one container tries to
connect to a sibling using the compose service name instead of localhost.
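A sketch of the substitution (regex and inputs approximate):

    import re

    def translate_sidecar_service_names(env: dict, sibling_services) -> dict:
        out = {}
        for key, value in env.items():
            for svc in sibling_services:
                # 'db:5432' -> 'localhost:5432', but only for services
                # that share the pod
                value = re.sub(rf"\b{re.escape(svc)}(?=:\d+)",
                               "localhost", str(value))
            out[key] = value
        return out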
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Each deployment now gets its own Kubernetes namespace (laconic-{deployment_id}).
This provides:
- Resource isolation between deployments on the same cluster
- Simplified cleanup: deleting the namespace cascades to all namespaced resources
- No orphaned resources possible when deployment IDs change
Changes:
- Set k8s_namespace based on deployment name in __init__
- Add _ensure_namespace() to create namespace before deploying resources (see sketch below)
- Add _delete_namespace() for cleanup
- Simplify down() to just delete PVs (cluster-scoped) and the namespace
- Fix hardcoded "default" namespace in logs function
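Sketch of the ensure step:

    from kubernetes import client
    from kubernetes.client.rest import ApiException

    def _ensure_namespace(core_api, name):
        try:
            core_api.create_namespace(client.V1Namespace(
                metadata=client.V1ObjectMeta(name=name)))
        except ApiException as e:
            if e.status != 409:  # already exists is fine
                raise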
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Previously, down() generated resource names from the deployment config
and deleted those specific names. This failed to clean up orphaned
resources when deployment IDs changed (e.g., after force_redeploy).
Changes:
- Add 'app' label to all resources: Ingress, Service, NodePort, ConfigMap, PV
- Refactor down() to query K8s by label selector instead of generating names (see sketch below)
- This ensures all resources for a deployment are cleaned up, even if
the deployment config has changed or been deleted
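Sketch of the label-based cleanup for one resource type (the same
pattern repeats for Services, ConfigMaps, Ingresses, and PVs):

    def _delete_resources_by_label(apps_api, namespace, deployment_name):
        apps_api.delete_collection_namespaced_deployment(
            namespace, label_selector=f"app={deployment_name}")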
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Mount /var/lib/etcd and /etc/kubernetes/pki to host filesystem
so cluster state is preserved for offline recovery. Each deployment
gets its own backup directory keyed by deployment ID.
Directory structure:

    data/cluster-backups/{deployment_id}/etcd/
    data/cluster-backups/{deployment_id}/pki/
This enables extracting secrets from etcd backups using etcdctl
with the preserved PKI certificates.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Adds support for configuring ACME email for Let's Encrypt certificates
in kind deployments. The email can be specified in the spec under
network.acme-email and will be used to configure the Caddy ingress
controller ConfigMap.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
For k8s-kind, relative paths (e.g., ./data/rpc-config) are resolved to
$DEPLOYMENT_DIR/path by _make_absolute_host_path() during kind config
generation. This provides persistence on the Docker host that survives
cluster restarts.
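Roughly what the resolution does (a sketch):

    from pathlib import Path

    def _make_absolute_host_path(path: str, deployment_dir: Path) -> Path:
        p = Path(path)
        if p.is_absolute():
            return p
        # ./data/rpc-config -> $DEPLOYMENT_DIR/data/rpc-config
        return (deployment_dir / p).resolve()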
Previously, validation threw an exception before paths could be resolved,
making it impossible to use relative paths for persistent storage.
Changes:
- deployment_create.py: Skip relative path check for k8s-kind
- cluster_info.py: Allow relative paths to reach PV generation
- docs/deployment_patterns.md: Document volume persistence patterns
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Previously, install_ingress_for_kind() applied the YAML (which starts
the Caddy pod with email: ""), then patched the ConfigMap afterward.
The pod had already read the empty email and Caddy doesn't hot-reload.
Now template the email into the YAML before applying, so the pod starts
with the correct email from the beginning.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The existing 'image-registry' key is used for pushing images to a remote
registry (URL string). Rename the new auth config to 'registry-credentials'
to avoid collision.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add ability to configure private container registry credentials in spec.yml
for deployments using images from registries like GHCR.
- Add get_image_registry_config() to spec.py for parsing image-registry config
- Add create_registry_secret() to create K8s docker-registry secrets
- Update cluster_info.py to use dynamic {deployment}-registry secret names
- Update deploy_k8s.py to create registry secret before deployment
- Document feature in deployment_patterns.md
The token-env pattern keeps credentials out of git - the spec references an
environment variable name, and the actual token is passed at runtime.
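The runtime lookup, sketched (error handling assumed):

    import os
    import sys

    def resolve_registry_token(token_env_name: str) -> str:
        # the spec stores only the env var *name*; the token itself
        # never lands in git
        token = os.environ.get(token_env_name)
        if not token:
            print(f"ERROR: {token_env_name} is not set")
            sys.exit(1)
        return token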
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Check stack.yml containers: field to determine which images are local builds
- Only load local images via kind load; let k8s pull registry images directly
- Add is_ingress_running() to skip ingress installation if already running
- Fixes deployment failures when public registry images aren't in local Docker
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When stack: field in spec.yml contains a path (e.g., stack_orchestrator/data/stacks/name),
extract just the final name component for K8s secret naming. K8s resource names must
be valid RFC 1123 subdomains and cannot contain slashes.
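e.g.:

    from pathlib import Path

    Path("stack_orchestrator/data/stacks/name").name  # -> "name"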
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add GENERATE_TOKEN_PATTERN to detect $generate:hex:N$ and $generate:base64:N$ tokens
- Add _generate_and_store_secrets() to create K8s Secrets from spec.yml config
- Modify _write_config_file() to separate secrets from regular config
- Add env_from with secretRef to container spec in cluster_info.py
- Secrets are injected directly into containers via K8s native mechanism
This enables declarative secret generation in spec.yml:

    config:
      SESSION_SECRET: $generate:hex:32$
      DB_PASSWORD: $generate:hex:16$
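Sketch of the pattern and generation (regex approximate; whether N
counts bytes or output characters is an assumption here):

    import base64
    import re
    import secrets

    GENERATE_TOKEN_PATTERN = re.compile(r"^\$generate:(hex|base64):(\d+)\$$")

    def generate_secret_value(spec_value: str):
        m = GENERATE_TOKEN_PATTERN.match(spec_value)
        if not m:
            return None  # plain config var, not a generated secret
        kind, n = m.group(1), int(m.group(2))
        if kind == "hex":
            return secrets.token_hex(n)
        return base64.b64encode(secrets.token_bytes(n)).decode()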
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When deploying a second stack to k8s-kind, automatically reuse an existing
kind cluster instead of trying to create a new one (which would fail due
to port 80/443 conflicts).
Changes:
- helpers.py: create_cluster() now checks for existing cluster first
- deploy_k8s.py: up() captures returned cluster name and updates self
This enables deploying multiple stacks (e.g., gorbagana-rpc + trashscan-explorer)
to the same kind cluster.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add --spec-file option to specify spec location in repo
- Auto-detect deployment/spec.yml in repo as GitOps location
- Fall back to deployment dir if no repo spec found
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The etcd directory is root-owned, so shell test -f fails.
Use docker with volume mount to check file existence.
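The check, roughly:

    import subprocess

    def file_exists_in_root_owned_dir(host_dir: str, relpath: str) -> bool:
        # run test -f inside a container with the directory mounted,
        # since the etcd dir is root-owned on the host
        result = subprocess.run(
            ["docker", "run", "--rm", "-v", f"{host_dir}:/check",
             "alpine", "test", "-f", f"/check/{relpath}"])
        return result.returncode == 0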
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Create member.backup-YYYYMMDD-HHMMSS before cleaning.
Each cluster recreation creates a new backup, preserving history.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Move original to .bak, move new into place, then delete bak.
If anything fails before the swap, original remains intact.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Instead of trying to delete specific stale resources (blacklist),
keep only the valuable data (Caddy TLS certs) and delete everything
else. This is more robust as we don't need to maintain a list of
all possible stale resources.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Use docker containers with volume mounts to handle all file
operations on root-owned etcd directories, avoiding the need
for sudo on the host.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When etcd is persisted (for certificate backup) and a cluster is
recreated, kind tries to install CNI (kindnet) fresh but the
persisted etcd already has those resources, causing 'AlreadyExists'
errors and cluster creation failure.
This fix:
- Detects etcd mount path from kind config
- Before cluster creation, clears stale CNI resources (kindnet, coredns)
- Preserves certificate and other important data
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add acme_email_key constant for spec.yml parsing
- Add get_acme_email() method to Spec class
- Modify install_ingress_for_kind() to patch ConfigMap with email
- Pass acme-email from spec to ingress installation
- Add 'delete' verb to leases RBAC for certificate lock cleanup
The acme-email field in spec.yml was previously ignored, causing
Let's Encrypt to fail with "unable to parse email address".
The missing delete permission on leases caused lock cleanup failures.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add `laconic-so deployment restart` command that:
- Pulls latest code from stack git repository
- Regenerates spec.yml from stack's commands.py
- Verifies DNS if hostname changed (with --force to skip)
- Syncs deployment directory preserving cluster ID and data
- Stops and restarts deployment with --skip-cluster-management
Also stores stack-source path in deployment.yml during create
for automatic stack location on restart.
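Typical invocations (flags as listed above; exact syntax may differ):

    laconic-so deployment restart
    laconic-so deployment restart --force   # skip DNS verification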
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>