Compare commits


No commits in common. "main" and "helm-charts-with-caddy" have entirely different histories.

34 changed files with 319 additions and 3218 deletions

View File

@ -2,8 +2,7 @@ name: Deploy Test
on:
pull_request:
branches:
- main
branches: '*'
push:
branches:
- main

View File

@ -2,8 +2,7 @@ name: K8s Deploy Test
on:
pull_request:
branches:
- main
branches: '*'
push:
branches: '*'
paths:

View File

@ -2,8 +2,7 @@ name: K8s Deployment Control Test
on:
pull_request:
branches:
- main
branches: '*'
push:
branches: '*'
paths:

View File

@ -2,8 +2,7 @@ name: Webapp Test
on:
pull_request:
branches:
- main
branches: '*'
push:
branches:
- main

View File

@ -1,151 +0,0 @@
# Plan: Make Stack-Orchestrator AI-Friendly
## Goal
Make the stack-orchestrator repository easier for AI tools (Claude Code, Cursor, Copilot) to understand and use for generating stacks, including adding a `create-stack` command.
---
## Part 1: Documentation & Context Files
### 1.1 Add CLAUDE.md
Create a root-level context file for AI assistants.
**File:** `CLAUDE.md`
Contents:
- Project overview (what stack-orchestrator does)
- Stack creation workflow (step-by-step)
- File naming conventions
- Required vs optional fields in stack.yml
- Common patterns and anti-patterns
- Links to example stacks (simple, medium, complex)
### 1.2 Add JSON Schema for stack.yml
Create formal validation schema.
**File:** `schemas/stack-schema.json`
Benefits:
- AI tools can validate generated stacks
- IDEs provide autocomplete
- CI can catch errors early
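A minimal sketch of how CI could consume such a schema, assuming the `jsonschema` package and the proposed `schemas/stack-schema.json` file (both are part of this plan, not existing code):
```python
# Hypothetical CI check: validate every stack.yml against the proposed schema.
# Assumes the jsonschema package is installed and schemas/stack-schema.json exists.
import json
import sys
from pathlib import Path
from jsonschema import ValidationError, validate
from ruamel.yaml import YAML

def validate_stack(stack_yml: Path, schema_path=Path("schemas/stack-schema.json")) -> bool:
    schema = json.loads(schema_path.read_text())
    stack = YAML(typ="safe").load(stack_yml.read_text())
    try:
        validate(instance=stack, schema=schema)
        return True
    except ValidationError as e:
        print(f"{stack_yml}: {e.message}", file=sys.stderr)
        return False

if __name__ == "__main__":
    results = [validate_stack(p) for p in Path("stack_orchestrator/data/stacks").glob("*/stack.yml")]
    sys.exit(0 if all(results) else 1)
```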
### 1.3 Add Template Stack with Comments
Create an annotated template for reference.
**File:** `stack_orchestrator/data/stacks/_template/stack.yml`
```yaml
# Stack definition template - copy this directory to create a new stack
version: "1.2" # Required: 1.0, 1.1, or 1.2
name: my-stack # Required: lowercase, hyphens only
description: "Human-readable description" # Optional
repos: # Git repositories to clone
- github.com/org/repo
containers: # Container images to build (must have matching container-build/)
- cerc/my-container
pods: # Deployment units (must have matching docker-compose-{pod}.yml)
- my-pod
```
### 1.4 Document Validation Rules
Create explicit documentation of constraints currently scattered in code.
**File:** `docs/stack-format.md`
Contents:
- Container names must start with `cerc/`
- Pod names must match compose file: `docker-compose-{pod}.yml`
- Repository format: `host/org/repo[@ref]`
- Stack directory name should match `name` field
- Version field options and differences
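A hedged sketch of how these constraints could be checked in one place (the names and regexes here are illustrative, not the project's existing validation code, and the simple list form of `pods:` is assumed):
```python
# Illustrative checker for the constraints listed above; not existing project code.
import re
from pathlib import Path
from ruamel.yaml import YAML

REPO_RE = re.compile(r"^[\w.-]+/[\w.-]+/[\w.-]+(@[\w./-]+)?$")  # host/org/repo[@ref]

def check_stack(stack_dir: Path):
    stack = YAML(typ="safe").load((stack_dir / "stack.yml").read_text())
    compose_dir = stack_dir.parent.parent / "compose"  # .../data/compose
    errors = []
    if stack.get("name") != stack_dir.name:
        errors.append(f"directory '{stack_dir.name}' != name field '{stack.get('name')}'")
    for container in stack.get("containers") or []:
        if not container.startswith("cerc/"):
            errors.append(f"container '{container}' must start with 'cerc/'")
    for pod in stack.get("pods") or []:  # assumes the simple list-of-names form
        if not (compose_dir / f"docker-compose-{pod}.yml").exists():
            errors.append(f"pod '{pod}' has no compose/docker-compose-{pod}.yml")
    for repo in stack.get("repos") or []:
        if not REPO_RE.match(repo):
            errors.append(f"repo '{repo}' is not host/org/repo[@ref]")
    return errors
```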
---
## Part 2: Add `create-stack` Command
### 2.1 Command Overview
```bash
laconic-so create-stack --repo github.com/org/my-app [--name my-app] [--type webapp]
```
**Behavior:**
1. Parse the repo URL to extract the app name if `--name` is not provided (see the sketch after this list)
2. Create `stacks/{name}/stack.yml`
3. Create `container-build/cerc-{name}/Dockerfile` and `build.sh`
4. Create `compose/docker-compose-{name}.yml`
5. Update list files (repository-list.txt, container-image-list.txt, pod-list.txt)
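Step 1 can be as simple as taking the last path segment of the repo reference; an illustrative helper (not part of the existing codebase):
```python
# Illustrative name derivation for step 1: last path component of a repo
# reference such as "github.com/org/my-app" or "github.com/org/my-app@v1".
def stack_name_from_repo(repo: str) -> str:
    name = repo.rstrip("/").split("/")[-1]  # "my-app" or "my-app@v1"
    name = name.split("@")[0]               # drop an optional @ref
    if name.endswith(".git"):
        name = name[:-len(".git")]
    return name.lower()

assert stack_name_from_repo("github.com/org/my-app") == "my-app"
assert stack_name_from_repo("github.com/org/My-App.git@v1.2") == "my-app"
```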
### 2.2 Files to Create
| File | Purpose |
|------|---------|
| `stack_orchestrator/create/__init__.py` | Package init |
| `stack_orchestrator/create/create_stack.py` | Command implementation |
### 2.3 Files to Modify
| File | Change |
|------|--------|
| `stack_orchestrator/main.py` | Add import and `cli.add_command()` |
### 2.4 Command Options
| Option | Required | Description |
|--------|----------|-------------|
| `--repo` | Yes | Git repository URL (e.g., github.com/org/repo) |
| `--name` | No | Stack name (defaults to repo name) |
| `--type` | No | Template type: webapp, service, empty (default: webapp) |
| `--force` | No | Overwrite existing files |
### 2.5 Template Types
| Type | Base Image | Port | Use Case |
|------|------------|------|----------|
| webapp | node:20-bullseye-slim | 3000 | React/Vue/Next.js apps |
| service | python:3.11-slim | 8080 | Python backend services |
| empty | none | none | Custom from scratch |
---
## Part 3: Implementation Summary
### New Files (6)
1. `CLAUDE.md` - AI assistant context
2. `schemas/stack-schema.json` - Validation schema
3. `stack_orchestrator/data/stacks/_template/stack.yml` - Annotated template
4. `docs/stack-format.md` - Stack format documentation
5. `stack_orchestrator/create/__init__.py` - Package init
6. `stack_orchestrator/create/create_stack.py` - Command implementation
### Modified Files (1)
1. `stack_orchestrator/main.py` - Register create-stack command
---
## Verification
```bash
# 1. Command appears in help
laconic-so --help | grep create-stack
# 2. Dry run works
laconic-so --dry-run create-stack --repo github.com/org/test-app
# 3. Creates all expected files
laconic-so create-stack --repo github.com/org/test-app
ls stack_orchestrator/data/stacks/test-app/
ls stack_orchestrator/data/container-build/cerc-test-app/
ls stack_orchestrator/data/compose/docker-compose-test-app.yml
# 4. Build works with generated stack
laconic-so --stack test-app build-containers
```

View File

@ -8,7 +8,6 @@ NEVER assume your hypotheses are true without evidence
ALWAYS clearly state when something is a hypothesis
ALWAYS use evidence from the systems you're interacting with to support your claims and hypotheses
ALWAYS run `pre-commit run --all-files` before committing changes
## Key Principles
@ -44,76 +43,6 @@ This project follows principles inspired by literate programming, where developm
This approach treats the human-AI collaboration as a form of **conversational literate programming** where understanding emerges through dialogue before code implementation.
## External Stacks Preferred
When creating new stacks for any reason, **use the external stack pattern** rather than adding stacks directly to this repository.
External stacks follow this structure:
```
my-stack/
└── stack-orchestrator/
├── stacks/
│ └── my-stack/
│ ├── stack.yml
│ └── README.md
├── compose/
│ └── docker-compose-my-stack.yml
└── config/
└── my-stack/
└── (config files)
```
### Usage
```bash
# Fetch external stack
laconic-so fetch-stack github.com/org/my-stack
# Use external stack
STACK_PATH=~/cerc/my-stack/stack-orchestrator/stacks/my-stack
laconic-so --stack $STACK_PATH deploy init --output spec.yml
laconic-so --stack $STACK_PATH deploy create --spec-file spec.yml --deployment-dir deployment
laconic-so deployment --dir deployment start
```
### Examples
- `zenith-karma-stack` - Karma watcher deployment
- `urbit-stack` - Fake Urbit ship for testing
- `zenith-desk-stack` - Desk deployment stack
## Architecture: k8s-kind Deployments
### One Cluster Per Host
One Kind cluster per host by design. Never request or expect separate clusters.
- `create_cluster()` in `helpers.py` reuses any existing cluster
- `cluster-id` in deployment.yml is an identifier, not a cluster request
- All deployments share: ingress controller, etcd, certificates
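The reuse behaviour amounts to check-before-create; a rough sketch of the pattern using the `kind` CLI (illustrative only, the real logic is `create_cluster()` in `helpers.py`):
```python
# Illustration of the reuse pattern only; the real logic is create_cluster() in helpers.py.
import subprocess

def ensure_single_kind_cluster(cluster_name: str) -> str:
    result = subprocess.run(["kind", "get", "clusters"],
                            capture_output=True, text=True, check=True)
    existing = result.stdout.split()
    if existing:
        # A cluster already exists on this host: reuse it, never create a second one.
        return existing[0]
    subprocess.run(["kind", "create", "cluster", "--name", cluster_name], check=True)
    return cluster_name
```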
### Stack Resolution
- External stacks detected via `Path(stack).exists()` in `util.py`
- Config/compose resolution: external path first, then internal fallback
- External path structure: `stack_orchestrator/data/stacks/<name>/stack.yml`
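Put differently, the stack argument is treated as a filesystem path if one exists, otherwise as the name of a built-in stack; a simplified sketch (the authoritative code is in `util.py`):
```python
# Simplified sketch of the resolution order described above; see util.py for the real code.
from pathlib import Path

def resolve_stack_dir(stack: str) -> Path:
    external = Path(stack)
    if external.exists():
        # External stack: argument is a path such as .../stack_orchestrator/data/stacks/<name>
        return external
    # Internal fallback: built-in stack shipped with stack-orchestrator
    return Path(__file__).parent / "data" / "stacks" / stack
```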
### Secret Generation Implementation
- `GENERATE_TOKEN_PATTERN` in `deployment_create.py` matches `$generate:type:length$`
- `_generate_and_store_secrets()` creates K8s Secret
- `cluster_info.py` adds `envFrom` with `secretRef` to containers
- Non-secret config written to `config.env`
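For example, a config value of `FOO: $generate:hex:32$` is matched and replaced with a freshly generated token, roughly as follows (simplified from `deployment_create.py`):
```python
# Simplified from deployment_create.py: match $generate:type:length$ and mint a token.
import re
from secrets import token_hex

GENERATE_TOKEN_PATTERN = re.compile(r"\$generate:(\w+):(\d+)\$")

def generate_secret(value: str):
    match = GENERATE_TOKEN_PATTERN.search(value)
    if not match:
        return None  # plain config value: written to config.env instead
    secret_type, length = match.group(1), int(match.group(2))
    return token_hex(length)  # "hex" shown here; "base64" is also supported

print(generate_secret("$generate:hex:32$"))  # 64 hex characters
```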
### Repository Cloning
`setup-repositories --git-ssh` clones repos defined in stack.yml's `repos:` field. Requires SSH agent.
### Key Files (for codebase navigation)
- `repos/setup_repositories.py`: `setup-repositories` command (git clone)
- `deployment_create.py`: `deploy create` command, secret generation
- `deployment.py`: `deployment start/stop/restart` commands
- `deploy_k8s.py`: K8s deployer, cluster management calls
- `helpers.py`: `create_cluster()`, etcd cleanup, kind operations
- `cluster_info.py`: K8s resource generation (Deployment, Service, Ingress)
## Insights and Observations
### Design Principles

View File

@ -71,59 +71,6 @@ The various [stacks](/stack_orchestrator/data/stacks) each contain instructions
- [laconicd with console and CLI](stack_orchestrator/data/stacks/fixturenet-laconic-loaded)
- [kubo (IPFS)](stack_orchestrator/data/stacks/kubo)
## Deployment Types
- **compose**: Docker Compose on local machine
- **k8s**: External Kubernetes cluster (requires kubeconfig)
- **k8s-kind**: Local Kubernetes via Kind - one cluster per host, shared by all deployments
## External Stacks
Stacks can live in external git repositories. Required structure:
```
<repo>/
stack_orchestrator/data/
stacks/<stack-name>/stack.yml
compose/docker-compose-<pod-name>.yml
deployment/spec.yml
```
## Deployment Commands
```bash
# Create deployment from spec
laconic-so --stack <path> deploy create --spec-file <spec.yml> --deployment-dir <dir>
# Start (creates cluster on first run)
laconic-so deployment --dir <dir> start
# GitOps restart (git pull + redeploy, preserves data)
laconic-so deployment --dir <dir> restart
# Stop
laconic-so deployment --dir <dir> stop
```
## spec.yml Reference
```yaml
stack: stack-name-or-path
deploy-to: k8s-kind
network:
http-proxy:
- host-name: app.example.com
routes:
- path: /
proxy-to: service-name:port
acme-email: admin@example.com
config:
ENV_VAR: value
SECRET_VAR: $generate:hex:32$ # Auto-generated, stored in K8s Secret
volumes:
volume-name:
```
## Contributing
See the [CONTRIBUTING.md](/docs/CONTRIBUTING.md) for developer mode install.

View File

@ -1,413 +0,0 @@
# Implementing `laconic-so create-stack` Command
A plan for adding a new CLI command to scaffold stack files automatically.
---
## Overview
Add a `create-stack` command that generates all required files for a new stack:
```bash
laconic-so create-stack --name my-stack --type webapp
```
**Output:**
```
stack_orchestrator/data/
├── stacks/my-stack/stack.yml
├── container-build/cerc-my-stack/
│ ├── Dockerfile
│ └── build.sh
└── compose/docker-compose-my-stack.yml
Updated: repository-list.txt, container-image-list.txt, pod-list.txt
```
---
## CLI Architecture Summary
### Command Registration Pattern
Commands are Click functions registered in `main.py`:
```python
# main.py (line ~70)
from stack_orchestrator.create import create_stack
cli.add_command(create_stack.command, "create-stack")
```
### Global Options Access
```python
from stack_orchestrator.opts import opts
if not opts.o.quiet:
print("message")
if opts.o.dry_run:
print("(would create files)")
```
### Key Utilities
| Function | Location | Purpose |
|----------|----------|---------|
| `get_yaml()` | `util.py` | YAML parser (ruamel.yaml) |
| `get_stack_path(stack)` | `util.py` | Resolve stack directory path |
| `error_exit(msg)` | `util.py` | Print error and exit(1) |
---
## Files to Create
### 1. Command Module
**`stack_orchestrator/create/__init__.py`**
```python
# Empty file to make this a package
```
**`stack_orchestrator/create/create_stack.py`**
```python
import click
import os
from pathlib import Path
from shutil import copy
from stack_orchestrator.opts import opts
from stack_orchestrator.util import error_exit, get_yaml
# Template types
STACK_TEMPLATES = {
"webapp": {
"description": "Web application with Node.js",
"base_image": "node:20-bullseye-slim",
"port": 3000,
},
"service": {
"description": "Backend service",
"base_image": "python:3.11-slim",
"port": 8080,
},
"empty": {
"description": "Minimal stack with no defaults",
"base_image": None,
"port": None,
},
}
def get_data_dir() -> Path:
"""Get path to stack_orchestrator/data directory"""
return Path(__file__).absolute().parent.parent.joinpath("data")
def validate_stack_name(name: str) -> None:
"""Validate stack name follows conventions"""
import re
if not re.match(r'^[a-z0-9]([a-z0-9-]*[a-z0-9])?$', name):
error_exit(f"Invalid stack name '{name}'. Use lowercase alphanumeric with hyphens.")
if name.startswith("cerc-"):
error_exit("Stack name should not start with 'cerc-' (container names will add this prefix)")
def create_stack_yml(stack_dir: Path, name: str, template: dict, repo_url: str) -> None:
"""Create stack.yml file"""
config = {
"version": "1.2",
"name": name,
"description": template.get("description", f"Stack: {name}"),
"repos": [repo_url] if repo_url else [],
"containers": [f"cerc/{name}"],
"pods": [name],
}
stack_dir.mkdir(parents=True, exist_ok=True)
with open(stack_dir / "stack.yml", "w") as f:
get_yaml().dump(config, f)
def create_dockerfile(container_dir: Path, name: str, template: dict) -> None:
"""Create Dockerfile"""
base_image = template.get("base_image", "node:20-bullseye-slim")
port = template.get("port", 3000)
dockerfile_content = f'''# Build stage
FROM {base_image} AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM {base_image}
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY --from=builder /app/dist ./dist
EXPOSE {port}
CMD ["npm", "run", "start"]
'''
container_dir.mkdir(parents=True, exist_ok=True)
with open(container_dir / "Dockerfile", "w") as f:
f.write(dockerfile_content)
def create_build_script(container_dir: Path, name: str) -> None:
"""Create build.sh script"""
build_script = f'''#!/usr/bin/env bash
# Build cerc/{name}
source ${{CERC_CONTAINER_BASE_DIR}}/build-base.sh
SCRIPT_DIR=$( cd -- "$( dirname -- "${{BASH_SOURCE[0]}}" )" &> /dev/null && pwd )
docker build -t cerc/{name}:local \\
-f ${{SCRIPT_DIR}}/Dockerfile \\
${{build_command_args}} \\
${{CERC_REPO_BASE_DIR}}/{name}
'''
build_path = container_dir / "build.sh"
with open(build_path, "w") as f:
f.write(build_script)
# Make executable
os.chmod(build_path, 0o755)
def create_compose_file(compose_dir: Path, name: str, template: dict) -> None:
"""Create docker-compose file"""
port = template.get("port", 3000)
compose_content = {
"version": "3.8",
"services": {
name: {
"image": f"cerc/{name}:local",
"restart": "unless-stopped",
"ports": [f"${{HOST_PORT:-{port}}}:{port}"],
"environment": {
"NODE_ENV": "${NODE_ENV:-production}",
},
}
}
}
with open(compose_dir / f"docker-compose-{name}.yml", "w") as f:
get_yaml().dump(compose_content, f)
def update_list_file(data_dir: Path, filename: str, entry: str) -> None:
"""Add entry to a list file if not already present"""
list_path = data_dir / filename
# Read existing entries
existing = set()
if list_path.exists():
with open(list_path, "r") as f:
existing = set(line.strip() for line in f if line.strip())
# Add new entry
if entry not in existing:
with open(list_path, "a") as f:
f.write(f"{entry}\n")
@click.command()
@click.option("--name", required=True, help="Name of the new stack (lowercase, hyphens)")
@click.option("--type", "stack_type", default="webapp",
type=click.Choice(list(STACK_TEMPLATES.keys())),
help="Stack template type")
@click.option("--repo", help="Git repository URL (e.g., github.com/org/repo)")
@click.option("--force", is_flag=True, help="Overwrite existing files")
@click.pass_context
def command(ctx, name: str, stack_type: str, repo: str, force: bool):
"""Create a new stack with all required files.
Examples:
laconic-so create-stack --name my-app --type webapp
laconic-so create-stack --name my-service --type service --repo github.com/org/repo
"""
# Validate
validate_stack_name(name)
template = STACK_TEMPLATES[stack_type]
data_dir = get_data_dir()
# Define paths
stack_dir = data_dir / "stacks" / name
container_dir = data_dir / "container-build" / f"cerc-{name}"
compose_dir = data_dir / "compose"
# Check for existing files
if not force:
if stack_dir.exists():
error_exit(f"Stack already exists: {stack_dir}\nUse --force to overwrite")
if container_dir.exists():
error_exit(f"Container build dir exists: {container_dir}\nUse --force to overwrite")
# Dry run check
if opts.o.dry_run:
print(f"Would create stack '{name}' with template '{stack_type}':")
print(f" - {stack_dir}/stack.yml")
print(f" - {container_dir}/Dockerfile")
print(f" - {container_dir}/build.sh")
print(f" - {compose_dir}/docker-compose-{name}.yml")
print(f" - Update repository-list.txt")
print(f" - Update container-image-list.txt")
print(f" - Update pod-list.txt")
return
# Create files
if not opts.o.quiet:
print(f"Creating stack '{name}' with template '{stack_type}'...")
create_stack_yml(stack_dir, name, template, repo)
if opts.o.verbose:
print(f" Created {stack_dir}/stack.yml")
create_dockerfile(container_dir, name, template)
if opts.o.verbose:
print(f" Created {container_dir}/Dockerfile")
create_build_script(container_dir, name)
if opts.o.verbose:
print(f" Created {container_dir}/build.sh")
create_compose_file(compose_dir, name, template)
if opts.o.verbose:
print(f" Created {compose_dir}/docker-compose-{name}.yml")
# Update list files
if repo:
update_list_file(data_dir, "repository-list.txt", repo)
if opts.o.verbose:
print(f" Added {repo} to repository-list.txt")
update_list_file(data_dir, "container-image-list.txt", f"cerc/{name}")
if opts.o.verbose:
print(f" Added cerc/{name} to container-image-list.txt")
update_list_file(data_dir, "pod-list.txt", name)
if opts.o.verbose:
print(f" Added {name} to pod-list.txt")
# Summary
if not opts.o.quiet:
print(f"\nStack '{name}' created successfully!")
print(f"\nNext steps:")
print(f" 1. Edit {stack_dir}/stack.yml")
print(f" 2. Customize {container_dir}/Dockerfile")
print(f" 3. Run: laconic-so --stack {name} build-containers")
print(f" 4. Run: laconic-so --stack {name} deploy-system up")
```
### 2. Register Command in main.py
**Edit `stack_orchestrator/main.py`**
Add import:
```python
from stack_orchestrator.create import create_stack
```
Add command registration (after line ~78):
```python
cli.add_command(create_stack.command, "create-stack")
```
---
## Implementation Steps
### Step 1: Create module structure
```bash
mkdir -p stack_orchestrator/create
touch stack_orchestrator/create/__init__.py
```
### Step 2: Create the command file
Create `stack_orchestrator/create/create_stack.py` with the code above.
### Step 3: Register in main.py
Add the import and `cli.add_command()` line.
### Step 4: Test the command
```bash
# Show help
laconic-so create-stack --help
# Dry run
laconic-so --dry-run create-stack --name test-app --type webapp
# Create a stack
laconic-so create-stack --name test-app --type webapp --repo github.com/org/test-app
# Verify
ls -la stack_orchestrator/data/stacks/test-app/
cat stack_orchestrator/data/stacks/test-app/stack.yml
```
---
## Template Types
| Type | Base Image | Port | Use Case |
|------|------------|------|----------|
| `webapp` | node:20-bullseye-slim | 3000 | React/Vue/Next.js apps |
| `service` | python:3.11-slim | 8080 | Python backend services |
| `empty` | none | none | Custom from scratch |
---
## Future Enhancements
1. **Interactive mode** - Prompt for values if not provided
2. **More templates** - Go, Rust, database stacks
3. **Template from existing** - `--from-stack existing-stack`
4. **External stack support** - Create in custom directory
5. **Validation command** - `laconic-so validate-stack --name my-stack`
---
## Files Modified
| File | Change |
|------|--------|
| `stack_orchestrator/create/__init__.py` | New (empty) |
| `stack_orchestrator/create/create_stack.py` | New (command implementation) |
| `stack_orchestrator/main.py` | Add import and `cli.add_command()` |
---
## Verification
```bash
# 1. Command appears in help
laconic-so --help | grep create-stack
# 2. Dry run works
laconic-so --dry-run create-stack --name verify-test --type webapp
# 3. Full creation works
laconic-so create-stack --name verify-test --type webapp
ls stack_orchestrator/data/stacks/verify-test/
ls stack_orchestrator/data/container-build/cerc-verify-test/
ls stack_orchestrator/data/compose/docker-compose-verify-test.yml
# 4. Build works
laconic-so --stack verify-test build-containers
# 5. Cleanup
rm -rf stack_orchestrator/data/stacks/verify-test
rm -rf stack_orchestrator/data/container-build/cerc-verify-test
rm stack_orchestrator/data/compose/docker-compose-verify-test.yml
```

View File

@ -65,71 +65,3 @@ Force full rebuild of packages:
```
$ laconic-so build-npms --include <package-name> --force-rebuild
```
## deploy
The `deploy` command group manages persistent deployments. The general workflow is `deploy init` to generate a spec file, then `deploy create` to create a deployment directory from the spec, followed by runtime commands such as `deploy up` and `deploy down`.
### deploy init
Generate a deployment spec file from a stack definition:
```
$ laconic-so --stack <stack-name> deploy init --output <spec-file>
```
Options:
- `--output` (required): write spec file here
- `--config`: provide config variables for the deployment
- `--config-file`: provide config variables in a file
- `--kube-config`: provide a config file for a k8s deployment
- `--image-registry`: provide a container image registry url for this k8s cluster
- `--map-ports-to-host`: map ports to the host (`any-variable-random`, `localhost-same`, `any-same`, `localhost-fixed-random`, `any-fixed-random`)
### deploy create
Create a deployment directory from a spec file:
```
$ laconic-so --stack <stack-name> deploy create --spec-file <spec-file> --deployment-dir <dir>
```
Update an existing deployment in-place (preserving data volumes and env file):
```
$ laconic-so --stack <stack-name> deploy create --spec-file <spec-file> --deployment-dir <dir> --update
```
Options:
- `--spec-file` (required): spec file to use
- `--deployment-dir`: target directory for deployment files
- `--update`: update an existing deployment directory, preserving data volumes and env file. Changed files are backed up with a `.bak` suffix. The deployment's `config.env` and `deployment.yml` are also preserved. (A sketch of the backup behaviour follows this options list.)
- `--network-dir`: network configuration supplied in this directory
- `--initial-peers`: initial set of persistent peers
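A hedged sketch of the `.bak` backup behaviour described for `--update` (illustrative, not the exact implementation):
```python
# Illustrative sketch of the --update backup behaviour; not the exact implementation.
import filecmp
from pathlib import Path
from shutil import copyfile

PRESERVED = {"config.env", "deployment.yml"}

def update_file(src: Path, dest: Path):
    if dest.name in PRESERVED:
        return  # never overwrite the deployment's own config.env / deployment.yml
    if dest.exists():
        if filecmp.cmp(src, dest, shallow=False):
            return  # unchanged, nothing to do
        copyfile(dest, dest.with_suffix(dest.suffix + ".bak"))  # back up the old copy
    copyfile(src, dest)
```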
### deploy up
Start a deployment:
```
$ laconic-so deployment --dir <deployment-dir> up
```
### deploy down
Stop a deployment:
```
$ laconic-so deployment --dir <deployment-dir> down
```
Use `--delete-volumes` to also remove data volumes.
### deploy ps
Show running services:
```
$ laconic-so deployment --dir <deployment-dir> ps
```
### deploy logs
View service logs:
```
$ laconic-so deployment --dir <deployment-dir> logs
```
Use `-f` to follow and `-n <count>` to tail.

View File

@ -1,202 +0,0 @@
# Deployment Patterns
## GitOps Pattern
For production deployments, we recommend a GitOps approach where your deployment configuration is tracked in version control.
### Overview
- **spec.yml is your source of truth**: Maintain it in your operator repository
- **Don't regenerate on every restart**: Run `deploy init` once, then customize and commit
- **Use restart for updates**: The restart command respects your git-tracked spec.yml
### Workflow
1. **Initial setup**: Run `deploy init` once to generate a spec.yml template
2. **Customize and commit**: Edit spec.yml with your configuration (hostnames, resources, etc.) and commit to your operator repo
3. **Deploy from git**: Use the committed spec.yml for deployments
4. **Update via git**: Make changes in git, then restart to apply
```bash
# Initial setup (run once)
laconic-so --stack my-stack deploy init --output spec.yml
# Customize for your environment
vim spec.yml # Set hostname, resources, etc.
# Commit to your operator repository
git add spec.yml
git commit -m "Add my-stack deployment configuration"
git push
# On deployment server: deploy from git-tracked spec
laconic-so deploy create \
--spec-file /path/to/operator-repo/spec.yml \
--deployment-dir my-deployment
laconic-so deployment --dir my-deployment start
```
### Updating Deployments
When you need to update a deployment:
```bash
# 1. Make changes in your operator repo
vim /path/to/operator-repo/spec.yml
git commit -am "Update configuration"
git push
# 2. On deployment server: pull and restart
cd /path/to/operator-repo && git pull
laconic-so deployment --dir my-deployment restart
```
The `restart` command:
- Pulls latest code from the stack repository
- Uses your git-tracked spec.yml (does NOT regenerate from defaults)
- Syncs the deployment directory
- Restarts services
### Anti-patterns
**Don't do this:**
```bash
# BAD: Regenerating spec on every deployment
laconic-so --stack my-stack deploy init --output spec.yml
laconic-so deploy create --spec-file spec.yml ...
```
This overwrites your customizations with defaults from the stack's `commands.py`.
**Do this instead:**
```bash
# GOOD: Use your git-tracked spec
git pull # Get latest spec.yml from your operator repo
laconic-so deployment --dir my-deployment restart
```
## Private Registry Authentication
For deployments using images from private container registries (e.g., GitHub Container Registry), configure authentication in your spec.yml:
### Configuration
Add a `registry-credentials` section to your spec.yml:
```yaml
registry-credentials:
server: ghcr.io
username: your-org-or-username
token-env: REGISTRY_TOKEN
```
**Fields:**
- `server`: The registry hostname (e.g., `ghcr.io`, `docker.io`, `gcr.io`)
- `username`: Registry username (for GHCR, use your GitHub username or org name)
- `token-env`: Name of the environment variable containing your API token/PAT
### Token Environment Variable
The `token-env` pattern keeps credentials out of version control. Set the environment variable when running `deployment start`:
```bash
export REGISTRY_TOKEN="your-personal-access-token"
laconic-so deployment --dir my-deployment start
```
For GHCR, create a Personal Access Token (PAT) with `read:packages` scope.
### Ansible Integration
When using Ansible for deployments, pass the token from a credentials file:
```yaml
- name: Start deployment
ansible.builtin.command:
cmd: laconic-so deployment --dir {{ deployment_dir }} start
environment:
REGISTRY_TOKEN: "{{ lookup('file', '~/.credentials/ghcr_token') }}"
```
### How It Works
1. laconic-so reads the `registry-credentials` config from spec.yml
2. Creates a Kubernetes `docker-registry` secret named `{deployment}-registry`
3. The deployment's pods reference this secret for image pulls
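The secret itself is an ordinary `kubernetes.io/dockerconfigjson` secret; a simplified sketch of its creation (illustrative, the function and variable names here are not the project's):
```python
# Simplified sketch of building a docker-registry pull secret; illustrative only.
import base64
import json
import os
from kubernetes import client, config as k8s_config

def create_pull_secret(deployment_name: str, server: str, username: str, token_env: str):
    token = os.environ[token_env]  # e.g. REGISTRY_TOKEN, kept out of version control
    auth = base64.b64encode(f"{username}:{token}".encode()).decode()
    dockerconfig = json.dumps({"auths": {server: {"username": username, "auth": auth}}})
    secret = client.V1Secret(
        metadata=client.V1ObjectMeta(name=f"{deployment_name}-registry"),
        type="kubernetes.io/dockerconfigjson",
        data={".dockerconfigjson": base64.b64encode(dockerconfig.encode()).decode()},
    )
    k8s_config.load_kube_config()
    client.CoreV1Api().create_namespaced_secret("default", secret)
```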
## Cluster and Volume Management
### Stopping Deployments
The `deployment stop` command has two important flags:
```bash
# Default: stops deployment, deletes cluster, PRESERVES volumes
laconic-so deployment --dir my-deployment stop
# Explicitly delete volumes (USE WITH CAUTION)
laconic-so deployment --dir my-deployment stop --delete-volumes
```
### Volume Persistence
Volumes persist across cluster deletion by design. This is important because:
- **Data survives cluster recreation**: Ledger data, databases, and other state are preserved
- **Faster recovery**: No need to re-sync or rebuild data after cluster issues
- **Safe cluster upgrades**: Delete and recreate cluster without data loss
**Only use `--delete-volumes` when:**
- You explicitly want to start fresh with no data
- The user specifically requests volume deletion
- You're cleaning up a test/dev environment completely
### Shared Cluster Architecture
In kind deployments, multiple stacks share a single cluster:
- First `deployment start` creates the cluster
- Subsequent deployments reuse the existing cluster
- `deployment stop` on ANY deployment deletes the shared cluster
- Other deployments will fail until cluster is recreated
To stop a single deployment without affecting the cluster:
```bash
laconic-so deployment --dir my-deployment stop --skip-cluster-management
```
## Volume Persistence in k8s-kind
k8s-kind has 3 storage layers:
- **Docker Host**: The physical server running Docker
- **Kind Node**: A Docker container simulating a k8s node
- **Pod Container**: Your workload
For k8s-kind, volumes with paths are mounted from Docker Host → Kind Node → Pod via extraMounts.
| spec.yml volume | Storage Location | Survives Pod Restart | Survives Cluster Restart |
|-----------------|------------------|---------------------|-------------------------|
| `vol:` (empty) | Kind Node PVC | ✅ | ❌ |
| `vol: ./data/x` | Docker Host | ✅ | ✅ |
| `vol: /abs/path`| Docker Host | ✅ | ✅ |
**Recommendation**: Always use paths for data you want to keep. Relative paths
(e.g., `./data/rpc-config`) resolve to `$DEPLOYMENT_DIR/data/rpc-config` on the
Docker Host.
### Example
```yaml
# In spec.yml
volumes:
rpc-config: ./data/rpc-config # Persists to $DEPLOYMENT_DIR/data/rpc-config
chain-data: ./data/chain # Persists to $DEPLOYMENT_DIR/data/chain
temp-cache: # Empty = Kind Node PVC (lost on cluster delete)
```
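Conceptually, a path-backed volume becomes an `extraMounts` entry in the kind node configuration, so the Docker Host directory is visible inside the Kind Node and can then be mounted into pods. A sketch of that mapping (illustrative; the actual kind config is generated by stack-orchestrator, and the container path shown is an assumption):
```python
# Illustrative only: how a path-backed volume conceptually maps to a kind extraMount.
import sys
from pathlib import Path
from ruamel.yaml import YAML

deployment_dir = Path("/srv/my-deployment")             # hypothetical deployment dir
host_path = str(deployment_dir / "./data/rpc-config")   # resolved on the Docker Host

kind_config = {
    "kind": "Cluster",
    "apiVersion": "kind.x-k8s.io/v1alpha4",
    "nodes": [{
        "role": "control-plane",
        # hostPath lives on the Docker Host; containerPath is inside the Kind Node
        "extraMounts": [{"hostPath": host_path, "containerPath": host_path}],
    }],
}
YAML().dump(kind_config, sys.stdout)
```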
### The Antipattern
Empty-path volumes appear persistent because they survive pod restarts (data lives
in Kind Node container). However, this data is lost when the kind cluster is
recreated. This "false persistence" has caused data loss when operators assumed
their data was safe.

View File

@ -1,550 +0,0 @@
# Docker Compose Deployment Guide
## Introduction
### What is a Deployer?
In stack-orchestrator, a **deployer** provides a uniform interface for orchestrating containerized applications. This guide focuses on Docker Compose deployments, which is the default and recommended deployment mode.
While stack-orchestrator also supports Kubernetes (`k8s`) and Kind (`k8s-kind`) deployments, those are out of scope for this guide. See the [Kubernetes Enhancements](./k8s-deployment-enhancements.md) documentation for advanced deployment options.
## Prerequisites
To deploy stacks using Docker Compose, you need:
- Docker Engine (20.10+)
- Docker Compose plugin (v2.0+)
- Python 3.8+
- stack-orchestrator installed (`laconic-so`)
**That's it!** No additional infrastructure is required. If you have Docker installed, you're ready to deploy.
## Deployment Workflow
The typical deployment workflow consists of four main steps:
1. **Setup repositories and build containers** (first time only)
2. **Initialize deployment specification**
3. **Create deployment directory**
4. **Start and manage services**
## Quick Start Example
Here's a complete example using the built-in `test` stack:
```bash
# Step 1: Setup (first time only)
laconic-so --stack test setup-repositories
laconic-so --stack test build-containers
# Step 2: Initialize deployment spec
laconic-so --stack test deploy init --output test-spec.yml
# Step 3: Create deployment directory
laconic-so --stack test deploy create \
--spec-file test-spec.yml \
--deployment-dir test-deployment
# Step 4: Start services
laconic-so deployment --dir test-deployment start
# View running services
laconic-so deployment --dir test-deployment ps
# View logs
laconic-so deployment --dir test-deployment logs
# Stop services (preserves data)
laconic-so deployment --dir test-deployment stop
```
## Deployment Workflows
Stack-orchestrator supports two deployment workflows:
### 1. Deployment Directory Workflow (Recommended)
This workflow creates a persistent deployment directory that contains all configuration and data.
**When to use:**
- Production deployments
- When you need to preserve configuration
- When you want to manage multiple deployments
- When you need persistent volume data
**Example:**
```bash
# Initialize deployment spec
laconic-so --stack fixturenet-eth deploy init --output eth-spec.yml
# Optionally edit eth-spec.yml to customize configuration
# Create deployment directory
laconic-so --stack fixturenet-eth deploy create \
--spec-file eth-spec.yml \
--deployment-dir my-eth-deployment
# Start the deployment
laconic-so deployment --dir my-eth-deployment start
# Manage the deployment
laconic-so deployment --dir my-eth-deployment ps
laconic-so deployment --dir my-eth-deployment logs
laconic-so deployment --dir my-eth-deployment stop
```
### 2. Quick Deploy Workflow
This workflow deploys directly without creating a persistent deployment directory.
**When to use:**
- Quick testing
- Temporary deployments
- Simple stacks that don't require customization
**Example:**
```bash
# Start the stack directly
laconic-so --stack test deploy up
# Show the mapped host port for container port 80 on the 'test' service
laconic-so --stack test deploy port test 80
# View logs
laconic-so --stack test deploy logs
# Stop (preserves volumes)
laconic-so --stack test deploy down
# Stop and remove volumes
laconic-so --stack test deploy down --delete-volumes
```
## Real-World Example: Ethereum Fixturenet
Deploy a local Ethereum testnet with Geth and Lighthouse:
```bash
# Setup (first time only)
laconic-so --stack fixturenet-eth setup-repositories
laconic-so --stack fixturenet-eth build-containers
# Initialize with default configuration
laconic-so --stack fixturenet-eth deploy init --output eth-spec.yml
# Create deployment
laconic-so --stack fixturenet-eth deploy create \
--spec-file eth-spec.yml \
--deployment-dir fixturenet-eth-deployment
# Start the network
laconic-so deployment --dir fixturenet-eth-deployment start
# Check status
laconic-so deployment --dir fixturenet-eth-deployment ps
# Access logs from specific service
laconic-so deployment --dir fixturenet-eth-deployment logs fixturenet-eth-geth-1
# Stop the network (preserves blockchain data)
laconic-so deployment --dir fixturenet-eth-deployment stop
# Start again - blockchain data is preserved
laconic-so deployment --dir fixturenet-eth-deployment start
# Clean up everything including data
laconic-so deployment --dir fixturenet-eth-deployment stop --delete-volumes
```
## Configuration
### Passing Configuration Parameters
Configuration can be passed in three ways:
**1. At init time via `--config` flag:**
```bash
laconic-so --stack test deploy init --output spec.yml \
--config PARAM1=value1,PARAM2=value2
```
**2. Edit the spec file after init:**
```bash
# Initialize
laconic-so --stack test deploy init --output spec.yml
# Edit spec.yml
vim spec.yml
```
Example spec.yml:
```yaml
stack: test
config:
PARAM1: value1
PARAM2: value2
```
**3. Docker Compose defaults:**
Environment variables defined in the stack's `docker-compose-*.yml` files are used as defaults. Configuration from the spec file overrides these defaults.
### Port Mapping
By default, services are accessible on randomly assigned host ports. To find the mapped port:
```bash
# Find the host port for container port 80 on service 'webapp'
laconic-so deployment --dir my-deployment port webapp 80
# Output example: 0.0.0.0:32768
```
To configure fixed ports, edit the spec file before creating the deployment:
```yaml
network:
ports:
webapp:
- '8080:80' # Maps host port 8080 to container port 80
api:
- '3000:3000'
```
Then create the deployment:
```bash
laconic-so --stack my-stack deploy create \
--spec-file spec.yml \
--deployment-dir my-deployment
```
### Volume Persistence
Volumes are preserved between stop/start cycles by default:
```bash
# Stop but keep data
laconic-so deployment --dir my-deployment stop
# Start again - data is still there
laconic-so deployment --dir my-deployment start
```
To completely remove all data:
```bash
# Stop and delete all volumes
laconic-so deployment --dir my-deployment stop --delete-volumes
```
Volume data is stored in `<deployment-dir>/data/`.
## Common Operations
### Viewing Logs
```bash
# All services, continuous follow
laconic-so deployment --dir my-deployment logs --follow
# Last 100 lines from all services
laconic-so deployment --dir my-deployment logs --tail 100
# Specific service only
laconic-so deployment --dir my-deployment logs webapp
# Combine options
laconic-so deployment --dir my-deployment logs --tail 50 --follow webapp
```
### Executing Commands in Containers
```bash
# Execute a command in a running service
laconic-so deployment --dir my-deployment exec webapp ls -la
# Interactive shell
laconic-so deployment --dir my-deployment exec webapp /bin/bash
# Run command with specific environment variables
laconic-so deployment --dir my-deployment exec webapp env VAR=value command
```
### Checking Service Status
```bash
# List all running services
laconic-so deployment --dir my-deployment ps
# Check using Docker directly
docker ps
```
### Updating a Running Deployment
If you need to change configuration after deployment:
```bash
# 1. Edit the spec file
vim my-deployment/spec.yml
# 2. Regenerate configuration
laconic-so deployment --dir my-deployment update
# 3. Restart services to apply changes
laconic-so deployment --dir my-deployment stop
laconic-so deployment --dir my-deployment start
```
## Multi-Service Deployments
Many stacks deploy multiple services that work together:
```bash
# Deploy a stack with multiple services
laconic-so --stack laconicd-with-console deploy init --output spec.yml
laconic-so --stack laconicd-with-console deploy create \
--spec-file spec.yml \
--deployment-dir laconicd-deployment
laconic-so deployment --dir laconicd-deployment start
# View all services
laconic-so deployment --dir laconicd-deployment ps
# View logs from specific services
laconic-so deployment --dir laconicd-deployment logs laconicd
laconic-so deployment --dir laconicd-deployment logs console
```
## ConfigMaps
ConfigMaps allow you to mount configuration files into containers:
```bash
# 1. Create the config directory in your deployment
mkdir -p my-deployment/data/my-config
echo "database_url=postgres://localhost" > my-deployment/data/my-config/app.conf
# 2. Reference in spec file
vim my-deployment/spec.yml
```
Add to spec.yml:
```yaml
configmaps:
my-config: ./data/my-config
```
```bash
# 3. Restart to apply
laconic-so deployment --dir my-deployment stop
laconic-so deployment --dir my-deployment start
```
The files will be mounted in the container at `/config/` (or as specified by the stack).
## Deployment Directory Structure
A typical deployment directory contains:
```
my-deployment/
├── compose/
│ └── docker-compose-*.yml # Generated compose files
├── config.env # Environment variables
├── deployment.yml # Deployment metadata
├── spec.yml # Deployment specification
└── data/ # Volume mounts and configs
├── service-data/ # Persistent service data
└── config-maps/ # ConfigMap files
```
## Troubleshooting
### Common Issues
**Problem: "Cannot connect to Docker daemon"**
```bash
# Ensure Docker is running
docker ps
# Start Docker if needed (macOS)
open -a Docker
# Start Docker (Linux)
sudo systemctl start docker
```
**Problem: "Port already in use"**
```bash
# Either stop the conflicting service or use different ports
# Edit spec.yml before creating deployment:
network:
ports:
webapp:
- '8081:80' # Use 8081 instead of 8080
```
**Problem: "Image not found"**
```bash
# Build containers first
laconic-so --stack your-stack build-containers
```
**Problem: Volumes not persisting**
```bash
# Check if you used --delete-volumes when stopping
# Volume data is in: <deployment-dir>/data/
# Don't use --delete-volumes if you want to keep data:
laconic-so deployment --dir my-deployment stop
# Only use --delete-volumes when you want to reset completely:
laconic-so deployment --dir my-deployment stop --delete-volumes
```
**Problem: Services not starting**
```bash
# Check logs for errors
laconic-so deployment --dir my-deployment logs
# Check Docker container status
docker ps -a
# Try stopping and starting again
laconic-so deployment --dir my-deployment stop
laconic-so deployment --dir my-deployment start
```
### Inspecting Deployment State
```bash
# Check deployment directory structure
ls -la my-deployment/
# Check running containers
docker ps
# Check container details
docker inspect <container-name>
# Check networks
docker network ls
# Check volumes
docker volume ls
```
## CLI Commands Reference
### Stack Operations
```bash
# Clone required repositories
laconic-so --stack <name> setup-repositories
# Build container images
laconic-so --stack <name> build-containers
```
### Deployment Initialization
```bash
# Initialize deployment spec with defaults
laconic-so --stack <name> deploy init --output <spec-file>
# Initialize with configuration
laconic-so --stack <name> deploy init --output <spec-file> \
--config PARAM1=value1,PARAM2=value2
```
### Deployment Creation
```bash
# Create deployment directory from spec
laconic-so --stack <name> deploy create \
--spec-file <spec-file> \
--deployment-dir <dir>
```
### Deployment Management
```bash
# Start all services
laconic-so deployment --dir <dir> start
# Stop services (preserves volumes)
laconic-so deployment --dir <dir> stop
# Stop and remove volumes
laconic-so deployment --dir <dir> stop --delete-volumes
# List running services
laconic-so deployment --dir <dir> ps
# View logs
laconic-so deployment --dir <dir> logs [--tail N] [--follow] [service]
# Show mapped port
laconic-so deployment --dir <dir> port <service> <private-port>
# Execute command in service
laconic-so deployment --dir <dir> exec <service> <command>
# Update configuration
laconic-so deployment --dir <dir> update
```
### Quick Deploy Commands
```bash
# Start stack directly
laconic-so --stack <name> deploy up
# Stop stack
laconic-so --stack <name> deploy down [--delete-volumes]
# View logs
laconic-so --stack <name> deploy logs
# Show port mapping
laconic-so --stack <name> deploy port <service> <port>
```
## Related Documentation
- [CLI Reference](./cli.md) - Complete CLI command documentation
- [Adding a New Stack](./adding-a-new-stack.md) - Creating custom stacks
- [Specification](./spec.md) - Internal structure and design
- [Kubernetes Enhancements](./k8s-deployment-enhancements.md) - Advanced K8s deployment options
- [Web App Deployment](./webapp.md) - Deploying web applications
## Examples
For more examples, see the test scripts:
- `scripts/quick-deploy-test.sh` - Quick deployment example
- `tests/deploy/run-deploy-test.sh` - Comprehensive test showing all features
## Summary
- Docker Compose is the default and recommended deployment mode
- Two workflows: deployment directory (recommended) or quick deploy
- The standard workflow is: setup → build → init → create → start
- Configuration is flexible with multiple override layers
- Volume persistence is automatic unless explicitly deleted
- All deployment state is contained in the deployment directory
- For Kubernetes deployments, see separate K8s documentation
You're now ready to deploy stacks using stack-orchestrator with Docker Compose!

View File

@ -1,128 +0,0 @@
# Deploying to the Laconic Network
## Overview
The Laconic network uses a **registry-based deployment model** where everything is published as blockchain records.
## Key Documentation in stack-orchestrator
- `docs/laconicd-with-console.md` - Setting up a laconicd network
- `docs/webapp.md` - Webapp building/running
- `stack_orchestrator/deploy/webapp/` - Implementation (14 modules)
## Core Concepts
### LRN (Laconic Resource Name)
Format: `lrn://laconic/[namespace]/[name]`
Examples:
- `lrn://laconic/deployers/my-deployer-name`
- `lrn://laconic/dns/example.com`
- `lrn://laconic/deployments/example.com`
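A hedged sketch of parsing that format (illustrative; `util.py`'s LaconicRegistryClient is the authoritative code for registry interaction):
```python
# Illustrative parser for the lrn://laconic/[namespace]/[name] format shown above.
import re
from typing import NamedTuple

class Lrn(NamedTuple):
    authority: str
    namespace: str
    name: str

LRN_RE = re.compile(r"^lrn://(?P<authority>[^/]+)/(?P<namespace>[^/]+)/(?P<name>[^/]+)$")

def parse_lrn(lrn: str) -> Lrn:
    m = LRN_RE.match(lrn)
    if not m:
        raise ValueError(f"not a valid LRN: {lrn}")
    return Lrn(m["authority"], m["namespace"], m["name"])

assert parse_lrn("lrn://laconic/deployers/my-deployer-name").namespace == "deployers"
```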
### Registry Record Types
| Record Type | Purpose |
|-------------|---------|
| `ApplicationRecord` | Published app metadata |
| `WebappDeployer` | Deployment service offering |
| `ApplicationDeploymentRequest` | User's request to deploy |
| `ApplicationDeploymentAuction` | Optional bidding for deployers |
| `ApplicationDeploymentRecord` | Completed deployment result |
## Deployment Workflows
### 1. Direct Deployment
```
User publishes ApplicationDeploymentRequest
→ targets specific WebappDeployer (by LRN)
→ includes payment TX hash
→ Deployer picks up request, builds, deploys, publishes result
```
### 2. Auction-Based Deployment
```
User publishes ApplicationDeploymentAuction
→ Deployers bid (commit/reveal phases)
→ Winner selected
→ User publishes request targeting winner
```
## Key CLI Commands
### Publish a Deployer Service
```bash
laconic-so publish-webapp-deployer --laconic-config config.yml \
--api-url https://deployer-api.example.com \
--name my-deployer \
--payment-address laconic1... \
--minimum-payment 1000alnt
```
### Request Deployment (User Side)
```bash
laconic-so request-webapp-deployment --laconic-config config.yml \
--app lrn://laconic/apps/my-app \
--deployer lrn://laconic/deployers/xyz \
--make-payment auto
```
### Run Deployer Service (Deployer Side)
```bash
laconic-so deploy-webapp-from-registry --laconic-config config.yml --discover
```
## Laconic Config File
All tools require a laconic config file (`laconic.toml`):
```toml
[cosmos]
address_prefix = "laconic"
chain_id = "laconic_9000-1"
endpoint = "http://localhost:26657"
key = "<account-name>"
password = "<account-password>"
```
## Setting Up a Local Laconicd Network
```bash
# Clone and build
laconic-so --stack fixturenet-laconic-loaded setup-repositories
laconic-so --stack fixturenet-laconic-loaded build-containers
laconic-so --stack fixturenet-laconic-loaded deploy create
laconic-so deployment --dir laconic-loaded-deployment start
# Check status
laconic-so deployment --dir laconic-loaded-deployment exec cli "laconic registry status"
```
## Key Implementation Files
| File | Purpose |
|------|---------|
| `publish_webapp_deployer.py` | Register deployment service on network |
| `publish_deployment_auction.py` | Create auction for deployers to bid on |
| `handle_deployment_auction.py` | Monitor and bid on auctions (deployer-side) |
| `request_webapp_deployment.py` | Create deployment request (user-side) |
| `deploy_webapp_from_registry.py` | Process requests and deploy (deployer-side) |
| `request_webapp_undeployment.py` | Request app removal |
| `undeploy_webapp_from_registry.py` | Process removal requests |
| `util.py` | LaconicRegistryClient - all registry interactions |
## Payment System
- **Token Denom**: `alnt` (Laconic network tokens)
- **Payment Options**:
- `--make-payment`: Create new payment with amount (or "auto" for deployer's minimum)
- `--use-payment`: Reference existing payment TX
## What's NOT Well-Documented
1. No end-to-end tutorial for full deployment workflow
2. Stack publishing (vs webapp) process unclear
3. LRN naming conventions not formally specified
4. Payment economics and token mechanics are not explained

View File

@ -44,4 +44,3 @@ unlimited_memlock_key = "unlimited-memlock"
runtime_class_key = "runtime-class"
high_memlock_runtime = "high-memlock"
high_memlock_spec_filename = "high-memlock-spec.json"
acme_email_key = "acme-email"

View File

@ -8,8 +8,6 @@ services:
CERC_TEST_PARAM_2: "CERC_TEST_PARAM_2_VALUE"
CERC_TEST_PARAM_3: ${CERC_TEST_PARAM_3:-FAILED}
volumes:
- ../config/test/script.sh:/opt/run.sh
- ../config/test/settings.env:/opt/settings.env
- test-data-bind:/data
- test-data-auto:/data2
- test-config:/config:ro

View File

@ -1,3 +0,0 @@
#!/bin/sh
echo "Hello"

View File

@ -1 +0,0 @@
ANSWER=42

View File

@ -1,6 +1,9 @@
FROM alpine:latest
FROM ubuntu:latest
RUN apk add --no-cache nginx
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive && export DEBCONF_NOWARNINGS="yes" && \
apt-get install -y software-properties-common && \
apt-get install -y nginx && \
apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
EXPOSE 80

View File

@ -1,4 +1,4 @@
#!/usr/bin/env sh
#!/usr/bin/env bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
@ -8,14 +8,14 @@ fi
echo "Test container starting"
DATA_DEVICE=$(df | grep "/data$" | awk '{ print $1 }')
if [ -n "$DATA_DEVICE" ]; then
if [[ -n "$DATA_DEVICE" ]]; then
echo "/data: MOUNTED dev=${DATA_DEVICE}"
else
echo "/data: not mounted"
fi
DATA2_DEVICE=$(df | grep "/data2$" | awk '{ print $1 }')
if [ -n "$DATA_DEVICE" ]; then
if [[ -n "$DATA_DEVICE" ]]; then
echo "/data2: MOUNTED dev=${DATA2_DEVICE}"
else
echo "/data2: not mounted"
@ -23,7 +23,7 @@ fi
# Test if the container's filesystem is old (run previously) or new
for d in /data /data2; do
if [ -f "$d/exists" ];
if [[ -f "$d/exists" ]];
then
TIMESTAMP=`cat $d/exists`
echo "$d filesystem is old, created: $TIMESTAMP"
@ -52,7 +52,7 @@ fi
if [ -d "/config" ]; then
echo "/config: EXISTS"
for f in /config/*; do
if [ -f "$f" ] || [ -L "$f" ]; then
if [[ -f "$f" ]] || [[ -L "$f" ]]; then
echo "$f:"
cat "$f"
echo ""
@ -64,4 +64,4 @@ else
fi
# Run nginx which will block here forever
nginx -g "daemon off;"
/usr/sbin/nginx -g "daemon off;"

View File

@ -93,7 +93,6 @@ rules:
- get
- create
- update
- delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding

View File

@ -14,6 +14,7 @@
# along with this program. If not, see <http:#www.gnu.org/licenses/>.
from stack_orchestrator.deploy.deployment_context import DeploymentContext
from ruamel.yaml import YAML
def create(context: DeploymentContext, extra_args):
@ -27,12 +28,17 @@ def create(context: DeploymentContext, extra_args):
"compose", "docker-compose-fixturenet-eth.yml"
)
with open(fixturenet_eth_compose_file, "r") as yaml_file:
yaml = YAML()
yaml_data = yaml.load(yaml_file)
new_script = "../config/fixturenet-optimism/run-geth.sh:/opt/testnet/run.sh"
def add_geth_volume(yaml_data):
if new_script not in yaml_data["services"]["fixturenet-eth-geth-1"]["volumes"]:
yaml_data["services"]["fixturenet-eth-geth-1"]["volumes"].append(new_script)
if new_script not in yaml_data["services"]["fixturenet-eth-geth-1"]["volumes"]:
yaml_data["services"]["fixturenet-eth-geth-1"]["volumes"].append(new_script)
context.modify_yaml(fixturenet_eth_compose_file, add_geth_volume)
with open(fixturenet_eth_compose_file, "w") as yaml_file:
yaml = YAML()
yaml.dump(yaml_data, yaml_file)
return None

View File

@ -2,6 +2,7 @@ version: "1.0"
name: test
description: "A test stack"
repos:
- git.vdb.to/cerc-io/laconicd
- git.vdb.to/cerc-io/test-project@test-branch
containers:
- cerc/test-container

View File

@ -15,9 +15,7 @@
import click
from pathlib import Path
import subprocess
import sys
import time
from stack_orchestrator import constants
from stack_orchestrator.deploy.images import push_images_operation
from stack_orchestrator.deploy.deploy import (
@ -230,176 +228,3 @@ def run_job(ctx, job_name, helm_release):
ctx.obj = make_deploy_context(ctx)
run_job_operation(ctx, job_name, helm_release)
@command.command()
@click.option("--stack-path", help="Path to stack git repo (overrides stored path)")
@click.option(
"--spec-file", help="Path to GitOps spec.yml in repo (e.g., deployment/spec.yml)"
)
@click.option("--config-file", help="Config file to pass to deploy init")
@click.option(
"--force",
is_flag=True,
default=False,
help="Skip DNS verification",
)
@click.option(
"--expected-ip",
help="Expected IP for DNS verification (if different from egress)",
)
@click.pass_context
def restart(ctx, stack_path, spec_file, config_file, force, expected_ip):
"""Pull latest code and restart deployment using git-tracked spec.
GitOps workflow:
1. Operator maintains spec.yml in their git repository
2. This command pulls latest code (including updated spec.yml)
3. If hostname changed, verifies DNS routes to this server
4. Syncs deployment directory with the git-tracked spec
5. Stops and restarts the deployment
Data volumes are always preserved. The cluster is never destroyed.
Stack source resolution (in order):
1. --stack-path argument (if provided)
2. stack-source field in deployment.yml (if stored)
3. Error if neither available
Note: spec.yml should be maintained in git, not regenerated from
commands.py on each restart. Use 'deploy init' only for initial
spec generation, then customize and commit to your operator repo.
"""
from stack_orchestrator.util import get_yaml, get_parsed_deployment_spec
from stack_orchestrator.deploy.deployment_create import create_operation
from stack_orchestrator.deploy.dns_probe import verify_dns_via_probe
deployment_context: DeploymentContext = ctx.obj
# Get current spec info (before git pull)
current_spec = deployment_context.spec
current_http_proxy = current_spec.get_http_proxy()
current_hostname = (
current_http_proxy[0]["host-name"] if current_http_proxy else None
)
# Resolve stack source path
if stack_path:
stack_source = Path(stack_path).resolve()
else:
# Try to get from deployment.yml
deployment_file = (
deployment_context.deployment_dir / constants.deployment_file_name
)
deployment_data = get_yaml().load(open(deployment_file))
stack_source_str = deployment_data.get("stack-source")
if not stack_source_str:
print(
"Error: No stack-source in deployment.yml and --stack-path not provided"
)
print("Use --stack-path to specify the stack git repository location")
sys.exit(1)
stack_source = Path(stack_source_str)
if not stack_source.exists():
print(f"Error: Stack source path does not exist: {stack_source}")
sys.exit(1)
print("=== Deployment Restart ===")
print(f"Deployment dir: {deployment_context.deployment_dir}")
print(f"Stack source: {stack_source}")
print(f"Current hostname: {current_hostname}")
# Step 1: Git pull (brings in updated spec.yml from operator's repo)
print("\n[1/4] Pulling latest code from stack repository...")
git_result = subprocess.run(
["git", "pull"], cwd=stack_source, capture_output=True, text=True
)
if git_result.returncode != 0:
print(f"Git pull failed: {git_result.stderr}")
sys.exit(1)
print(f"Git pull: {git_result.stdout.strip()}")
# Determine spec file location
# Priority: --spec-file argument > repo's deployment/spec.yml > deployment dir
# Stack path is like: repo/stack_orchestrator/data/stacks/stack-name
# So repo root is 4 parents up
repo_root = stack_source.parent.parent.parent.parent
if spec_file:
# Spec file relative to repo root
spec_file_path = repo_root / spec_file
else:
# Try standard GitOps location in repo
gitops_spec = repo_root / "deployment" / "spec.yml"
if gitops_spec.exists():
spec_file_path = gitops_spec
else:
# Fall back to deployment directory
spec_file_path = deployment_context.deployment_dir / "spec.yml"
if not spec_file_path.exists():
print(f"Error: spec.yml not found at {spec_file_path}")
print("For GitOps, add spec.yml to your repo at deployment/spec.yml")
print("Or specify --spec-file with path relative to repo root")
sys.exit(1)
print(f"Using spec: {spec_file_path}")
# Parse spec to check for hostname changes
new_spec_obj = get_parsed_deployment_spec(str(spec_file_path))
new_http_proxy = new_spec_obj.get("network", {}).get("http-proxy", [])
new_hostname = new_http_proxy[0]["host-name"] if new_http_proxy else None
print(f"Spec hostname: {new_hostname}")
# Step 2: DNS verification (only if hostname changed)
if new_hostname and new_hostname != current_hostname:
print(f"\n[2/4] Hostname changed: {current_hostname} -> {new_hostname}")
if force:
print("DNS verification skipped (--force)")
else:
print("Verifying DNS via probe...")
if not verify_dns_via_probe(new_hostname):
print(f"\nDNS verification failed for {new_hostname}")
print("Ensure DNS is configured before restarting.")
print("Use --force to skip this check.")
sys.exit(1)
else:
print("\n[2/4] Hostname unchanged, skipping DNS verification")
# Step 3: Sync deployment directory with spec
print("\n[3/4] Syncing deployment directory...")
deploy_ctx = make_deploy_context(ctx)
create_operation(
deployment_command_context=deploy_ctx,
spec_file=str(spec_file_path),
deployment_dir=str(deployment_context.deployment_dir),
update=True,
network_dir=None,
initial_peers=None,
)
# Reload deployment context with updated spec
deployment_context.init(deployment_context.deployment_dir)
ctx.obj = deployment_context
# Stop deployment
print("\n[4/4] Restarting deployment...")
ctx.obj = make_deploy_context(ctx)
down_operation(
ctx, delete_volumes=False, extra_args_list=[], skip_cluster_management=True
)
# Brief pause to ensure clean shutdown
time.sleep(5)
# Start deployment
up_operation(
ctx, services_list=None, stay_attached=False, skip_cluster_management=True
)
print("\n=== Restart Complete ===")
print("Deployment restarted with git-tracked configuration.")
if new_hostname and new_hostname != current_hostname:
print(f"\nNew hostname: {new_hostname}")
print("Caddy will automatically provision TLS certificate.")

View File

@ -47,6 +47,20 @@ class DeploymentContext:
def get_compose_file(self, name: str):
return self.get_compose_dir() / f"docker-compose-{name}.yml"
def modify_yaml(self, file_path: Path, modifier_func):
"""Load a YAML from the deployment, apply a modifier, and write back."""
if not file_path.absolute().is_relative_to(self.deployment_dir):
raise ValueError(f"File is not inside deployment directory: {file_path}")
yaml = get_yaml()
with open(file_path, "r") as f:
yaml_data = yaml.load(f)
modifier_func(yaml_data)
with open(file_path, "w") as f:
yaml.dump(yaml_data, f)
def get_cluster_id(self):
return self.id
@ -68,17 +82,3 @@ class DeploymentContext:
unique_cluster_descriptor = f"{path},{self.get_stack_file()},None,None"
hash = hashlib.md5(unique_cluster_descriptor.encode()).hexdigest()[:16]
self.id = f"{constants.cluster_name_prefix}{hash}"
def modify_yaml(self, file_path: Path, modifier_func):
"""Load a YAML, apply a modification function, and write it back."""
if not file_path.absolute().is_relative_to(self.deployment_dir):
raise ValueError(f"File is not inside deployment directory: {file_path}")
yaml = get_yaml()
with open(file_path, "r") as f:
yaml_data = yaml.load(f)
modifier_func(yaml_data)
with open(file_path, "w") as f:
yaml.dump(yaml_data, f)

View File

@ -15,19 +15,13 @@
import click
from importlib import util
import json
import os
import re
import base64
from pathlib import Path
from typing import List, Optional
from typing import List
import random
from shutil import copy, copyfile, copytree, rmtree
from shutil import copy, copyfile, copytree
from secrets import token_hex
import sys
import filecmp
import tempfile
from stack_orchestrator import constants
from stack_orchestrator.opts import opts
from stack_orchestrator.util import (
@ -471,10 +465,7 @@ def init_operation(
else:
volume_descriptors[named_volume] = f"./data/{named_volume}"
if volume_descriptors:
# Merge with existing volumes from stack init()
# init() volumes take precedence over compose defaults
orig_volumes = spec_file_content.get("volumes", {})
spec_file_content["volumes"] = {**volume_descriptors, **orig_volumes}
spec_file_content["volumes"] = volume_descriptors
if configmap_descriptors:
spec_file_content["configmaps"] = configmap_descriptors
@ -487,180 +478,15 @@ def init_operation(
get_yaml().dump(spec_file_content, output_file)
# Token pattern: $generate:hex:32$ or $generate:base64:16$
GENERATE_TOKEN_PATTERN = re.compile(r"\$generate:(\w+):(\d+)\$")
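# Sketch of how the pattern is expected to behave (example values are illustrative):
#   "$generate:hex:32$"    -> groups ("hex", "32")    -> token_hex(32)
#   "$generate:base64:16$" -> groups ("base64", "16") -> base64 of 16 random bytes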
def _generate_and_store_secrets(config_vars: dict, deployment_name: str):
"""Generate secrets for $generate:...$ tokens and store in K8s Secret.
Called by `deploy create` - generates fresh secrets and stores them.
Returns the generated secrets dict for reference.
"""
from kubernetes import client, config as k8s_config
secrets = {}
for name, value in config_vars.items():
if not isinstance(value, str):
continue
match = GENERATE_TOKEN_PATTERN.search(value)
if not match:
continue
secret_type, length = match.group(1), int(match.group(2))
if secret_type == "hex":
secrets[name] = token_hex(length)
elif secret_type == "base64":
secrets[name] = base64.b64encode(os.urandom(length)).decode()
else:
secrets[name] = token_hex(length)
if not secrets:
return secrets
# Store in K8s Secret
try:
k8s_config.load_kube_config()
except Exception:
# Fall back to in-cluster config if available
try:
k8s_config.load_incluster_config()
except Exception:
print(
"Warning: Could not load kube config, secrets will not be stored in K8s"
)
return secrets
v1 = client.CoreV1Api()
secret_name = f"{deployment_name}-generated-secrets"
namespace = "default"
secret_data = {k: base64.b64encode(v.encode()).decode() for k, v in secrets.items()}
k8s_secret = client.V1Secret(
metadata=client.V1ObjectMeta(name=secret_name), data=secret_data, type="Opaque"
)
try:
v1.create_namespaced_secret(namespace, k8s_secret)
num_secrets = len(secrets)
print(f"Created K8s Secret '{secret_name}' with {num_secrets} secret(s)")
except client.exceptions.ApiException as e:
if e.status == 409: # Already exists
v1.replace_namespaced_secret(secret_name, namespace, k8s_secret)
num_secrets = len(secrets)
print(f"Updated K8s Secret '{secret_name}' with {num_secrets} secret(s)")
else:
raise
return secrets
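# Example of the kind of config this consumes (spec values are illustrative):
#   config:
#     SESSION_SECRET: $generate:hex:32$   # stored in the K8s Secret, kept out of config.env
#     LOG_LEVEL: info                     # plain value, written to config.env as-is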
def create_registry_secret(spec: Spec, deployment_name: str) -> Optional[str]:
"""Create K8s docker-registry secret from spec + environment.
Reads registry configuration from spec.yml and creates a Kubernetes
secret of type kubernetes.io/dockerconfigjson for image pulls.
Args:
spec: The deployment spec containing image-registry config
deployment_name: Name of the deployment (used for secret naming)
Returns:
The secret name if created, None if no registry config
"""
from kubernetes import client, config as k8s_config
registry_config = spec.get_image_registry_config()
if not registry_config:
return None
server = registry_config.get("server")
username = registry_config.get("username")
token_env = registry_config.get("token-env")
if not all([server, username, token_env]):
return None
# Type narrowing for pyright - we've validated these aren't None above
assert token_env is not None
token = os.environ.get(token_env)
if not token:
print(
f"Warning: Registry token env var '{token_env}' not set, "
"skipping registry secret"
)
return None
# Create dockerconfigjson format (Docker API uses "password" field for tokens)
auth = base64.b64encode(f"{username}:{token}".encode()).decode()
docker_config = {
"auths": {server: {"username": username, "password": token, "auth": auth}}
}
# Secret name derived from deployment name
secret_name = f"{deployment_name}-registry"
# Load kube config
try:
k8s_config.load_kube_config()
except Exception:
try:
k8s_config.load_incluster_config()
except Exception:
print("Warning: Could not load kube config, registry secret not created")
return None
v1 = client.CoreV1Api()
namespace = "default"
k8s_secret = client.V1Secret(
metadata=client.V1ObjectMeta(name=secret_name),
data={
".dockerconfigjson": base64.b64encode(
json.dumps(docker_config).encode()
).decode()
},
type="kubernetes.io/dockerconfigjson",
)
try:
v1.create_namespaced_secret(namespace, k8s_secret)
print(f"Created registry secret '{secret_name}' for {server}")
except client.exceptions.ApiException as e:
if e.status == 409: # Already exists
v1.replace_namespaced_secret(secret_name, namespace, k8s_secret)
print(f"Updated registry secret '{secret_name}' for {server}")
else:
raise
return secret_name
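# The registry block read above is assumed to look roughly like this in the spec
# (key names follow get_image_registry_config(); values are illustrative):
#   registry-credentials:
#     server: ghcr.io
#     username: my-user
#     token-env: GHCR_TOKEN   # env var holding the registry token/PAT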
def _write_config_file(
spec_file: Path, config_env_file: Path, deployment_name: Optional[str] = None
):
def _write_config_file(spec_file: Path, config_env_file: Path):
spec_content = get_parsed_deployment_spec(spec_file)
config_vars = spec_content.get("config", {}) or {}
# Generate and store secrets in K8s if deployment_name provided and tokens exist
if deployment_name and config_vars:
has_generate_tokens = any(
isinstance(v, str) and GENERATE_TOKEN_PATTERN.search(v)
for v in config_vars.values()
)
if has_generate_tokens:
_generate_and_store_secrets(config_vars, deployment_name)
# Write non-secret config to config.env (exclude $generate:...$ tokens)
# Note: we want to write an empty file even if we have no config variables
with open(config_env_file, "w") as output_file:
if config_vars:
for variable_name, variable_value in config_vars.items():
# Skip variables with generate tokens - they go to K8s Secret
if isinstance(variable_value, str) and GENERATE_TOKEN_PATTERN.search(
variable_value
):
continue
output_file.write(f"{variable_name}={variable_value}\n")
if "config" in spec_content and spec_content["config"]:
config_vars = spec_content["config"]
if config_vars:
for variable_name, variable_value in config_vars.items():
output_file.write(f"{variable_name}={variable_value}\n")
def _write_kube_config_file(external_path: Path, internal_path: Path):
@ -675,14 +501,11 @@ def _copy_files_to_directory(file_paths: List[Path], directory: Path):
copy(path, os.path.join(directory, os.path.basename(path)))
def _create_deployment_file(deployment_dir: Path, stack_source: Optional[Path] = None):
def _create_deployment_file(deployment_dir: Path):
deployment_file_path = deployment_dir.joinpath(constants.deployment_file_name)
cluster = f"{constants.cluster_name_prefix}{token_hex(8)}"
deployment_content = {constants.cluster_id_key: cluster}
if stack_source:
deployment_content["stack-source"] = str(stack_source)
with open(deployment_file_path, "w") as output_file:
get_yaml().dump(deployment_content, output_file)
output_file.write(f"{constants.cluster_id_key}: {cluster}\n")
def _check_volume_definitions(spec):
@ -690,14 +513,10 @@ def _check_volume_definitions(spec):
for volume_name, volume_path in spec.get_volumes().items():
if volume_path:
if not os.path.isabs(volume_path):
# For k8s-kind: allow relative paths, they'll be resolved
# by _make_absolute_host_path() during kind config generation
if not spec.is_kind_deployment():
deploy_type = spec.get_deployment_type()
raise Exception(
f"Relative path {volume_path} for volume "
f"{volume_name} not supported for {deploy_type}"
)
raise Exception(
f"Relative path {volume_path} for volume {volume_name} not "
f"supported for deployment type {spec.get_deployment_type()}"
)
@click.command()
@ -705,12 +524,6 @@ def _check_volume_definitions(spec):
"--spec-file", required=True, help="Spec file to use to create this deployment"
)
@click.option("--deployment-dir", help="Create deployment files in this directory")
@click.option(
"--update",
is_flag=True,
default=False,
help="Update existing deployment directory, preserving data volumes and env file",
)
@click.option(
"--helm-chart",
is_flag=True,
@ -723,21 +536,13 @@ def _check_volume_definitions(spec):
@click.argument("extra_args", nargs=-1, type=click.UNPROCESSED)
@click.pass_context
def create(
ctx,
spec_file,
deployment_dir,
update,
helm_chart,
network_dir,
initial_peers,
extra_args,
ctx, spec_file, deployment_dir, helm_chart, network_dir, initial_peers, extra_args
):
deployment_command_context = ctx.obj
return create_operation(
deployment_command_context,
spec_file,
deployment_dir,
update,
helm_chart,
network_dir,
initial_peers,
@ -751,10 +556,9 @@ def create_operation(
deployment_command_context,
spec_file,
deployment_dir,
update=False,
helm_chart=False,
network_dir=None,
initial_peers=None,
helm_chart,
network_dir,
initial_peers,
extra_args=(),
):
parsed_spec = Spec(
@ -764,23 +568,23 @@ def create_operation(
stack_name = parsed_spec["stack"]
deployment_type = parsed_spec[constants.deploy_to_key]
stack_file = get_stack_path(stack_name).joinpath(constants.stack_file_name)
parsed_stack = get_parsed_stack_config(stack_name)
if opts.o.debug:
print(f"parsed spec: {parsed_spec}")
if deployment_dir is None:
deployment_dir_path = _make_default_deployment_dir()
else:
deployment_dir_path = Path(deployment_dir)
if deployment_dir_path.exists():
if not update:
error_exit(f"{deployment_dir_path} already exists")
if opts.o.debug:
print(f"Updating existing deployment at {deployment_dir_path}")
else:
if update:
error_exit(f"--update requires that {deployment_dir_path} already exists")
os.mkdir(deployment_dir_path)
error_exit(f"{deployment_dir_path} already exists")
os.mkdir(deployment_dir_path)
# Copy spec file and the stack file into the deployment dir
copyfile(spec_file, deployment_dir_path.joinpath(constants.spec_file_name))
copyfile(stack_file, deployment_dir_path.joinpath(constants.stack_file_name))
# Create deployment.yml with cluster-id
_create_deployment_file(deployment_dir_path)
# Branch to Helm chart generation flow if --helm-chart flag is set
if deployment_type == "k8s" and helm_chart:
@ -791,48 +595,104 @@ def create_operation(
generate_helm_chart(stack_name, spec_file, deployment_dir_path)
return # Exit early for helm chart generation
# Resolve stack source path for restart capability
stack_source = get_stack_path(stack_name)
if update:
# Sync mode: write to temp dir, then copy to deployment dir with backups
temp_dir = Path(tempfile.mkdtemp(prefix="deployment-sync-"))
try:
# Write deployment files to temp dir
# (skip deployment.yml to preserve cluster ID)
_write_deployment_files(
temp_dir,
Path(spec_file),
parsed_spec,
stack_name,
deployment_type,
include_deployment_file=False,
stack_source=stack_source,
)
# Copy from temp to deployment dir, excluding data volumes
# and backing up changed files.
# Exclude data/* to avoid touching user data volumes.
# Exclude config file to preserve deployment settings
# (XXX breaks passing config vars from spec)
exclude_patterns = ["data", "data/*", constants.config_file_name]
_safe_copy_tree(
temp_dir, deployment_dir_path, exclude_patterns=exclude_patterns
)
finally:
# Clean up temp dir
rmtree(temp_dir)
else:
# Normal mode: write directly to deployment dir
_write_deployment_files(
deployment_dir_path,
Path(spec_file),
parsed_spec,
stack_name,
deployment_type,
include_deployment_file=True,
stack_source=stack_source,
# Existing deployment flow continues unchanged
# Copy any config variables from the spec file into an env file suitable for compose
_write_config_file(
spec_file, deployment_dir_path.joinpath(constants.config_file_name)
)
# Copy any k8s config file into the deployment dir
if deployment_type == "k8s":
_write_kube_config_file(
Path(parsed_spec[constants.kube_config_key]),
deployment_dir_path.joinpath(constants.kube_config_filename),
)
# Copy the pod files into the deployment dir, fixing up content
pods = get_pod_list(parsed_stack)
destination_compose_dir = deployment_dir_path.joinpath("compose")
os.mkdir(destination_compose_dir)
destination_pods_dir = deployment_dir_path.joinpath("pods")
os.mkdir(destination_pods_dir)
yaml = get_yaml()
for pod in pods:
pod_file_path = get_pod_file_path(stack_name, parsed_stack, pod)
if pod_file_path is None:
continue
parsed_pod_file = yaml.load(open(pod_file_path, "r"))
extra_config_dirs = _find_extra_config_dirs(parsed_pod_file, pod)
destination_pod_dir = destination_pods_dir.joinpath(pod)
os.mkdir(destination_pod_dir)
if opts.o.debug:
print(f"extra config dirs: {extra_config_dirs}")
_fixup_pod_file(parsed_pod_file, parsed_spec, destination_compose_dir)
with open(
destination_compose_dir.joinpath("docker-compose-%s.yml" % pod), "w"
) as output_file:
yaml.dump(parsed_pod_file, output_file)
# Copy the config files for the pod, if any
config_dirs = {pod}
config_dirs = config_dirs.union(extra_config_dirs)
for config_dir in config_dirs:
source_config_dir = resolve_config_dir(stack_name, config_dir)
if os.path.exists(source_config_dir):
destination_config_dir = deployment_dir_path.joinpath(
"config", config_dir
)
# If the same config dir appears in multiple pods, it may already have
# been copied
if not os.path.exists(destination_config_dir):
copytree(source_config_dir, destination_config_dir)
# Copy the script files for the pod, if any
if pod_has_scripts(parsed_stack, pod):
destination_script_dir = destination_pod_dir.joinpath("scripts")
os.mkdir(destination_script_dir)
script_paths = get_pod_script_paths(parsed_stack, pod)
_copy_files_to_directory(script_paths, destination_script_dir)
if parsed_spec.is_kubernetes_deployment():
for configmap in parsed_spec.get_configmaps():
source_config_dir = resolve_config_dir(stack_name, configmap)
if os.path.exists(source_config_dir):
destination_config_dir = deployment_dir_path.joinpath(
"configmaps", configmap
)
copytree(
source_config_dir, destination_config_dir, dirs_exist_ok=True
)
else:
# TODO: We should probably only do this if the volume is marked :ro.
for volume_name, volume_path in parsed_spec.get_volumes().items():
source_config_dir = resolve_config_dir(stack_name, volume_name)
# Only copy if the source exists and is _not_ empty.
if os.path.exists(source_config_dir) and os.listdir(source_config_dir):
destination_config_dir = deployment_dir_path.joinpath(volume_path)
# Only copy if the destination exists and _is_ empty.
if os.path.exists(destination_config_dir) and not os.listdir(
destination_config_dir
):
copytree(
source_config_dir,
destination_config_dir,
dirs_exist_ok=True,
)
# Copy the job files into the deployment dir (for Docker deployments)
jobs = get_job_list(parsed_stack)
if jobs and not parsed_spec.is_kubernetes_deployment():
destination_compose_jobs_dir = deployment_dir_path.joinpath("compose-jobs")
os.mkdir(destination_compose_jobs_dir)
for job in jobs:
job_file_path = get_job_file_path(stack_name, parsed_stack, job)
if job_file_path and job_file_path.exists():
parsed_job_file = yaml.load(open(job_file_path, "r"))
_fixup_pod_file(parsed_job_file, parsed_spec, destination_compose_dir)
with open(
destination_compose_jobs_dir.joinpath(
"docker-compose-%s.yml" % job
),
"w",
) as output_file:
yaml.dump(parsed_job_file, output_file)
if opts.o.debug:
print(f"Copied job compose file: {job}")
# Delegate to the stack's Python code
# The deploy create command doesn't require a --stack argument so we need
@ -852,189 +712,6 @@ def create_operation(
)
def _safe_copy_tree(src: Path, dst: Path, exclude_patterns: Optional[List[str]] = None):
"""
Recursively copy a directory tree, backing up changed files with .bak suffix.
:param src: Source directory
:param dst: Destination directory
:param exclude_patterns: List of path patterns to exclude (relative to src)
"""
if exclude_patterns is None:
exclude_patterns = []
def should_exclude(path: Path) -> bool:
"""Check if path matches any exclude pattern."""
rel_path = path.relative_to(src)
for pattern in exclude_patterns:
if rel_path.match(pattern):
return True
return False
def safe_copy_file(src_file: Path, dst_file: Path):
"""Copy file, backing up destination if it differs."""
if (
dst_file.exists()
and not dst_file.is_dir()
and not filecmp.cmp(src_file, dst_file)
):
os.rename(dst_file, f"{dst_file}.bak")
copy(src_file, dst_file)
# Walk the source tree
for src_path in src.rglob("*"):
if should_exclude(src_path):
continue
rel_path = src_path.relative_to(src)
dst_path = dst / rel_path
if src_path.is_dir():
dst_path.mkdir(parents=True, exist_ok=True)
else:
dst_path.parent.mkdir(parents=True, exist_ok=True)
safe_copy_file(src_path, dst_path)
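# Illustrative call, mirroring how the update path above uses it (paths hypothetical):
# _safe_copy_tree(
#     Path("/tmp/deployment-sync-abc123"),   # temp dir produced by mkdtemp
#     Path("my-deployment"),                 # existing deployment dir
#     exclude_patterns=["data", "data/*", constants.config_file_name],
# )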
def _write_deployment_files(
target_dir: Path,
spec_file: Path,
parsed_spec: Spec,
stack_name: str,
deployment_type: str,
include_deployment_file: bool = True,
stack_source: Optional[Path] = None,
):
"""
Write deployment files to target directory.
:param target_dir: Directory to write files to
:param spec_file: Path to spec file
:param parsed_spec: Parsed spec object
:param stack_name: Name of stack
:param deployment_type: Type of deployment
:param include_deployment_file: Whether to create deployment.yml (skip for update)
:param stack_source: Path to stack source (git repo) for restart capability
"""
stack_file = get_stack_path(stack_name).joinpath(constants.stack_file_name)
parsed_stack = get_parsed_stack_config(stack_name)
# Copy spec file and the stack file into the target dir
copyfile(spec_file, target_dir.joinpath(constants.spec_file_name))
copyfile(stack_file, target_dir.joinpath(constants.stack_file_name))
# Create deployment file if requested
if include_deployment_file:
_create_deployment_file(target_dir, stack_source=stack_source)
# Copy any config variables from the spec file into an env file suitable for compose
# Use stack_name as deployment_name for K8s secret naming
# Extract just the name part if stack_name is a path ("path/to/stack" -> "stack")
deployment_name = Path(stack_name).name.replace("_", "-")
_write_config_file(
spec_file, target_dir.joinpath(constants.config_file_name), deployment_name
)
# Copy any k8s config file into the target dir
if deployment_type == "k8s":
_write_kube_config_file(
Path(parsed_spec[constants.kube_config_key]),
target_dir.joinpath(constants.kube_config_filename),
)
# Copy the pod files into the target dir, fixing up content
pods = get_pod_list(parsed_stack)
destination_compose_dir = target_dir.joinpath("compose")
os.makedirs(destination_compose_dir, exist_ok=True)
destination_pods_dir = target_dir.joinpath("pods")
os.makedirs(destination_pods_dir, exist_ok=True)
yaml = get_yaml()
for pod in pods:
pod_file_path = get_pod_file_path(stack_name, parsed_stack, pod)
if pod_file_path is None:
continue
parsed_pod_file = yaml.load(open(pod_file_path, "r"))
extra_config_dirs = _find_extra_config_dirs(parsed_pod_file, pod)
destination_pod_dir = destination_pods_dir.joinpath(pod)
os.makedirs(destination_pod_dir, exist_ok=True)
if opts.o.debug:
print(f"extra config dirs: {extra_config_dirs}")
_fixup_pod_file(parsed_pod_file, parsed_spec, destination_compose_dir)
with open(
destination_compose_dir.joinpath("docker-compose-%s.yml" % pod), "w"
) as output_file:
yaml.dump(parsed_pod_file, output_file)
# Copy the config files for the pod, if any
config_dirs = {pod}
config_dirs = config_dirs.union(extra_config_dirs)
for config_dir in config_dirs:
source_config_dir = resolve_config_dir(stack_name, config_dir)
if os.path.exists(source_config_dir):
destination_config_dir = target_dir.joinpath("config", config_dir)
copytree(source_config_dir, destination_config_dir, dirs_exist_ok=True)
# Copy the script files for the pod, if any
if pod_has_scripts(parsed_stack, pod):
destination_script_dir = destination_pod_dir.joinpath("scripts")
os.makedirs(destination_script_dir, exist_ok=True)
script_paths = get_pod_script_paths(parsed_stack, pod)
_copy_files_to_directory(script_paths, destination_script_dir)
if parsed_spec.is_kubernetes_deployment():
for configmap in parsed_spec.get_configmaps():
source_config_dir = resolve_config_dir(stack_name, configmap)
if os.path.exists(source_config_dir):
destination_config_dir = target_dir.joinpath(
"configmaps", configmap
)
copytree(
source_config_dir, destination_config_dir, dirs_exist_ok=True
)
else:
# TODO:
# This is odd - looks up config dir that matches a volume name,
# then copies as a mount dir?
# AFAICT not used by or relevant to any existing stack - roy
# TODO: We should probably only do this if the volume is marked :ro.
for volume_name, volume_path in parsed_spec.get_volumes().items():
source_config_dir = resolve_config_dir(stack_name, volume_name)
# Only copy if the source exists and is _not_ empty.
if os.path.exists(source_config_dir) and os.listdir(source_config_dir):
destination_config_dir = target_dir.joinpath(volume_path)
# Only copy if the destination exists and _is_ empty.
if os.path.exists(destination_config_dir) and not os.listdir(
destination_config_dir
):
copytree(
source_config_dir,
destination_config_dir,
dirs_exist_ok=True,
)
# Copy the job files into the target dir (for Docker deployments)
jobs = get_job_list(parsed_stack)
if jobs and not parsed_spec.is_kubernetes_deployment():
destination_compose_jobs_dir = target_dir.joinpath("compose-jobs")
os.makedirs(destination_compose_jobs_dir, exist_ok=True)
for job in jobs:
job_file_path = get_job_file_path(stack_name, parsed_stack, job)
if job_file_path and job_file_path.exists():
parsed_job_file = yaml.load(open(job_file_path, "r"))
_fixup_pod_file(parsed_job_file, parsed_spec, destination_compose_dir)
with open(
destination_compose_jobs_dir.joinpath(
"docker-compose-%s.yml" % job
),
"w",
) as output_file:
yaml.dump(parsed_job_file, output_file)
if opts.o.debug:
print(f"Copied job compose file: {job}")
# TODO: this code should be in the stack .py files but
# we haven't yet figured out how to integrate click across
# the plugin boundary

View File

@ -1,159 +0,0 @@
# Copyright © 2024 Vulcanize
# SPDX-License-Identifier: AGPL-3.0
"""DNS verification via temporary ingress probe."""
import secrets
import socket
import time
from typing import Optional
import requests
from kubernetes import client
def get_server_egress_ip() -> str:
"""Get this server's public egress IP via ipify."""
response = requests.get("https://api.ipify.org", timeout=10)
response.raise_for_status()
return response.text.strip()
def resolve_hostname(hostname: str) -> list[str]:
"""Resolve hostname to list of IP addresses."""
try:
_, _, ips = socket.gethostbyname_ex(hostname)
return ips
except socket.gaierror:
return []
def verify_dns_simple(hostname: str, expected_ip: Optional[str] = None) -> bool:
"""Simple DNS verification - check hostname resolves to expected IP.
If expected_ip not provided, uses server's egress IP.
Returns True if hostname resolves to expected IP.
"""
resolved_ips = resolve_hostname(hostname)
if not resolved_ips:
print(f"DNS FAIL: {hostname} does not resolve")
return False
if expected_ip is None:
expected_ip = get_server_egress_ip()
if expected_ip in resolved_ips:
print(f"DNS OK: {hostname} -> {resolved_ips} (includes {expected_ip})")
return True
else:
print(f"DNS WARN: {hostname} -> {resolved_ips} (expected {expected_ip})")
return False
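# Minimal usage sketch (hostname and IP are illustrative):
# verify_dns_simple("app.example.com")                  # compares against this server's egress IP
# verify_dns_simple("app.example.com", "203.0.113.7")   # compares against an explicit IP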
def create_probe_ingress(hostname: str, namespace: str = "default") -> str:
"""Create a temporary ingress for DNS probing.
Returns the probe token that the ingress will respond with.
"""
token = secrets.token_hex(16)
networking_api = client.NetworkingV1Api()
# Create a simple ingress that Caddy will pick up
ingress = client.V1Ingress(
metadata=client.V1ObjectMeta(
name="laconic-dns-probe",
annotations={
"kubernetes.io/ingress.class": "caddy",
"laconic.com/probe-token": token,
},
),
spec=client.V1IngressSpec(
rules=[
client.V1IngressRule(
host=hostname,
http=client.V1HTTPIngressRuleValue(
paths=[
client.V1HTTPIngressPath(
path="/.well-known/laconic-probe",
path_type="Exact",
backend=client.V1IngressBackend(
service=client.V1IngressServiceBackend(
name="caddy-ingress-controller",
port=client.V1ServiceBackendPort(number=80),
)
),
)
]
),
)
]
),
)
networking_api.create_namespaced_ingress(namespace=namespace, body=ingress)
return token
def delete_probe_ingress(namespace: str = "default"):
"""Delete the temporary probe ingress."""
networking_api = client.NetworkingV1Api()
try:
networking_api.delete_namespaced_ingress(
name="laconic-dns-probe", namespace=namespace
)
except client.exceptions.ApiException:
pass # Ignore if already deleted
def verify_dns_via_probe(
hostname: str, namespace: str = "default", timeout: int = 30, poll_interval: int = 2
) -> bool:
"""Verify DNS by creating temp ingress and probing it.
This definitively proves that traffic to the hostname reaches this cluster.
Args:
hostname: The hostname to verify
namespace: Kubernetes namespace for probe ingress
timeout: Total seconds to wait for probe to succeed
poll_interval: Seconds between probe attempts
Returns:
True if probe succeeds, False otherwise
"""
# First check DNS resolves at all
if not resolve_hostname(hostname):
print(f"DNS FAIL: {hostname} does not resolve")
return False
print(f"Creating probe ingress for {hostname}...")
create_probe_ingress(hostname, namespace)
try:
# Wait for Caddy to pick up the ingress
time.sleep(3)
# Poll until success or timeout
probe_url = f"http://{hostname}/.well-known/laconic-probe"
start_time = time.time()
last_error = None
while time.time() - start_time < timeout:
try:
response = requests.get(probe_url, timeout=5)
# For now, just verify we get a response from this cluster
# A more robust check would verify a unique token
if response.status_code < 500:
print(f"DNS PROBE OK: {hostname} routes to this cluster")
return True
except requests.RequestException as e:
last_error = e
time.sleep(poll_interval)
print(f"DNS PROBE FAIL: {hostname} - {last_error}")
return False
finally:
print("Cleaning up probe ingress...")
delete_probe_ingress(namespace)
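# Minimal usage sketch (hostname is illustrative):
# if verify_dns_via_probe("app.example.com", namespace="default", timeout=30):
#     print("Traffic for app.example.com reaches this cluster")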

View File

@ -31,7 +31,6 @@ from stack_orchestrator.deploy.k8s.helpers import (
envs_from_environment_variables_map,
envs_from_compose_file,
merge_envs,
translate_sidecar_service_names,
)
from stack_orchestrator.deploy.deploy_util import (
parsed_pod_files_map_from_file_names,
@ -126,8 +125,7 @@ class ClusterInfo:
name=(
f"{self.app_name}-nodeport-"
f"{pod_port}-{protocol.lower()}"
),
labels={"app": self.app_name},
)
),
spec=client.V1ServiceSpec(
type="NodePort",
@ -210,9 +208,7 @@ class ClusterInfo:
ingress = client.V1Ingress(
metadata=client.V1ObjectMeta(
name=f"{self.app_name}-ingress",
labels={"app": self.app_name},
annotations=ingress_annotations,
name=f"{self.app_name}-ingress", annotations=ingress_annotations
),
spec=spec,
)
@ -220,35 +216,23 @@ class ClusterInfo:
# TODO: support multiple services
def get_service(self):
# Collect all ports from http-proxy routes
ports_set = set()
http_proxy_list = self.spec.get_http_proxy()
if http_proxy_list:
for http_proxy in http_proxy_list:
for route in http_proxy.get("routes", []):
proxy_to = route.get("proxy-to", "")
if ":" in proxy_to:
port = int(proxy_to.split(":")[1])
ports_set.add(port)
if opts.o.debug:
print(f"http-proxy route port: {port}")
if not ports_set:
port = None
for pod_name in self.parsed_pod_yaml_map:
pod = self.parsed_pod_yaml_map[pod_name]
services = pod["services"]
for service_name in services:
service_info = services[service_name]
if "ports" in service_info:
port = int(service_info["ports"][0])
if opts.o.debug:
print(f"service port: {port}")
if port is None:
return None
service_ports = [
client.V1ServicePort(port=p, target_port=p, name=f"port-{p}")
for p in sorted(ports_set)
]
service = client.V1Service(
metadata=client.V1ObjectMeta(
name=f"{self.app_name}-service",
labels={"app": self.app_name},
),
metadata=client.V1ObjectMeta(name=f"{self.app_name}-service"),
spec=client.V1ServiceSpec(
type="ClusterIP",
ports=service_ports,
ports=[client.V1ServicePort(port=port, target_port=port)],
selector={"app": self.app_name},
),
)
@ -327,7 +311,7 @@ class ClusterInfo:
spec = client.V1ConfigMap(
metadata=client.V1ObjectMeta(
name=f"{self.app_name}-{cfg_map_name}",
labels={"app": self.app_name, "configmap-label": cfg_map_name},
labels={"configmap-label": cfg_map_name},
),
binary_data=data,
)
@ -359,15 +343,11 @@ class ClusterInfo:
continue
if not os.path.isabs(volume_path):
# For k8s-kind, allow relative paths:
# - PV uses /mnt/{volume_name} (path inside kind node)
# - extraMounts resolve the relative path to Docker Host
if not self.spec.is_kind_deployment():
print(
f"WARNING: {volume_name}:{volume_path} is not absolute, "
"cannot bind volume."
)
continue
print(
f"WARNING: {volume_name}:{volume_path} is not absolute, "
"cannot bind volume."
)
continue
if self.spec.is_kind_deployment():
host_path = client.V1HostPathVolumeSource(
@ -384,10 +364,7 @@ class ClusterInfo:
pv = client.V1PersistentVolume(
metadata=client.V1ObjectMeta(
name=f"{self.app_name}-{volume_name}",
labels={
"app": self.app_name,
"volume-label": f"{self.app_name}-{volume_name}",
},
labels={"volume-label": f"{self.app_name}-{volume_name}"},
),
spec=spec,
)
@ -440,12 +417,6 @@ class ClusterInfo:
if "environment" in service_info
else self.environment_variables.map
)
# Translate docker-compose service names to localhost for sidecars
# All services in the same pod share the network namespace
sibling_services = [s for s in services.keys() if s != service_name]
merged_envs = translate_sidecar_service_names(
merged_envs, sibling_services
)
envs = envs_from_environment_variables_map(merged_envs)
if opts.o.debug:
print(f"Merged envs: {envs}")
@ -473,16 +444,6 @@ class ClusterInfo:
if "command" in service_info:
cmd = service_info["command"]
container_args = cmd if isinstance(cmd, list) else cmd.split()
# Add env_from to pull secrets from K8s Secret
secret_name = f"{self.app_name}-generated-secrets"
env_from = [
client.V1EnvFromSource(
secret_ref=client.V1SecretEnvSource(
name=secret_name,
optional=True, # Don't fail if no secrets
)
)
]
container = client.V1Container(
name=container_name,
image=image_to_use,
@ -490,7 +451,6 @@ class ClusterInfo:
command=container_command,
args=container_args,
env=envs,
env_from=env_from,
ports=container_ports if container_ports else None,
volume_mounts=volume_mounts,
security_context=client.V1SecurityContext(
@ -507,12 +467,7 @@ class ClusterInfo:
volumes = volumes_for_pod_files(
self.parsed_pod_yaml_map, self.spec, self.app_name
)
registry_config = self.spec.get_image_registry_config()
if registry_config:
secret_name = f"{self.app_name}-registry"
image_pull_secrets = [client.V1LocalObjectReference(name=secret_name)]
else:
image_pull_secrets = []
image_pull_secrets = [client.V1LocalObjectReference(name="laconic-registry")]
annotations = None
labels = {"app": self.app_name}

View File

@ -29,7 +29,6 @@ from stack_orchestrator.deploy.k8s.helpers import (
from stack_orchestrator.deploy.k8s.helpers import (
install_ingress_for_kind,
wait_for_ingress_in_kind,
is_ingress_running,
)
from stack_orchestrator.deploy.k8s.helpers import (
pods_in_deployment,
@ -96,7 +95,7 @@ class K8sDeployer(Deployer):
core_api: client.CoreV1Api
apps_api: client.AppsV1Api
networking_api: client.NetworkingV1Api
k8s_namespace: str
k8s_namespace: str = "default"
kind_cluster_name: str
skip_cluster_management: bool
cluster_info: ClusterInfo
@ -113,7 +112,6 @@ class K8sDeployer(Deployer):
) -> None:
self.type = type
self.skip_cluster_management = False
self.k8s_namespace = "default" # Will be overridden below if context exists
# TODO: workaround pending refactoring above to cope with being
# created with a null deployment_context
if deployment_context is None:
@ -121,8 +119,6 @@ class K8sDeployer(Deployer):
self.deployment_dir = deployment_context.deployment_dir
self.deployment_context = deployment_context
self.kind_cluster_name = compose_project_name
# Use deployment-specific namespace for resource isolation and easy cleanup
self.k8s_namespace = f"laconic-{compose_project_name}"
self.cluster_info = ClusterInfo()
self.cluster_info.int(
compose_files,
@ -152,46 +148,6 @@ class K8sDeployer(Deployer):
self.apps_api = client.AppsV1Api()
self.custom_obj_api = client.CustomObjectsApi()
def _ensure_namespace(self):
"""Create the deployment namespace if it doesn't exist."""
if opts.o.dry_run:
print(f"Dry run: would create namespace {self.k8s_namespace}")
return
try:
self.core_api.read_namespace(name=self.k8s_namespace)
if opts.o.debug:
print(f"Namespace {self.k8s_namespace} already exists")
except ApiException as e:
if e.status == 404:
# Create the namespace
ns = client.V1Namespace(
metadata=client.V1ObjectMeta(
name=self.k8s_namespace,
labels={"app": self.cluster_info.app_name},
)
)
self.core_api.create_namespace(body=ns)
if opts.o.debug:
print(f"Created namespace {self.k8s_namespace}")
else:
raise
def _delete_namespace(self):
"""Delete the deployment namespace and all resources within it."""
if opts.o.dry_run:
print(f"Dry run: would delete namespace {self.k8s_namespace}")
return
try:
self.core_api.delete_namespace(name=self.k8s_namespace)
if opts.o.debug:
print(f"Deleted namespace {self.k8s_namespace}")
except ApiException as e:
if e.status == 404:
if opts.o.debug:
print(f"Namespace {self.k8s_namespace} not found")
else:
raise
def _create_volume_data(self):
# Create the host-path-mounted PVs for this deployment
pvs = self.cluster_info.get_pvs()
@ -285,7 +241,7 @@ class K8sDeployer(Deployer):
service = self.cluster_info.get_service()
if opts.o.debug:
print(f"Sending this service: {service}")
if service and not opts.o.dry_run:
if not opts.o.dry_run:
service_resp = self.core_api.create_namespaced_service(
namespace=self.k8s_namespace, body=service
)
@ -333,40 +289,22 @@ class K8sDeployer(Deployer):
self.skip_cluster_management = skip_cluster_management
if not opts.o.dry_run:
if self.is_kind() and not self.skip_cluster_management:
# Create the kind cluster (or reuse existing one)
kind_config = str(
self.deployment_dir.joinpath(constants.kind_config_filename)
# Create the kind cluster
create_cluster(
self.kind_cluster_name,
str(self.deployment_dir.joinpath(constants.kind_config_filename)),
)
actual_cluster = create_cluster(self.kind_cluster_name, kind_config)
if actual_cluster != self.kind_cluster_name:
# An existing cluster was found, use it instead
self.kind_cluster_name = actual_cluster
# Only load locally-built images into kind
# Registry images (docker.io, ghcr.io, etc.) will be pulled by k8s
local_containers = self.deployment_context.stack.obj.get(
"containers", []
# Ensure the referenced containers are copied into kind
load_images_into_kind(
self.kind_cluster_name, self.cluster_info.image_set
)
if local_containers:
# Filter image_set to only images matching local containers
local_images = {
img
for img in self.cluster_info.image_set
if any(c in img for c in local_containers)
}
if local_images:
load_images_into_kind(self.kind_cluster_name, local_images)
# Note: if no local containers defined, all images come from registries
self.connect_api()
# Create deployment-specific namespace for resource isolation
self._ensure_namespace()
if self.is_kind() and not self.skip_cluster_management:
# Configure ingress controller (not installed by default in kind)
# Skip if already running (idempotent for shared cluster)
if not is_ingress_running():
install_ingress_for_kind(self.cluster_info.spec.get_acme_email())
# Wait for ingress to start
# (deployment provisioning will fail unless this is done)
wait_for_ingress_in_kind()
install_ingress_for_kind()
# Wait for ingress to start
# (deployment provisioning will fail unless this is done)
wait_for_ingress_in_kind()
# Create RuntimeClass if unlimited_memlock is enabled
if self.cluster_info.spec.get_unlimited_memlock():
_create_runtime_class(
@ -377,11 +315,6 @@ class K8sDeployer(Deployer):
else:
print("Dry run mode enabled, skipping k8s API connect")
# Create registry secret if configured
from stack_orchestrator.deploy.deployment_create import create_registry_secret
create_registry_secret(self.cluster_info.spec, self.cluster_info.app_name)
self._create_volume_data()
self._create_deployment()
@ -426,30 +359,107 @@ class K8sDeployer(Deployer):
print("NodePort created:")
print(f"{nodeport_resp}")
def down(self, timeout, volumes, skip_cluster_management):
def down(self, timeout, volumes, skip_cluster_management): # noqa: C901
self.skip_cluster_management = skip_cluster_management
self.connect_api()
# Delete the k8s objects
# PersistentVolumes are cluster-scoped (not namespaced), so delete by label
if volumes:
try:
pvs = self.core_api.list_persistent_volume(
label_selector=f"app={self.cluster_info.app_name}"
)
for pv in pvs.items:
if opts.o.debug:
print(f"Deleting PV: {pv.metadata.name}")
try:
self.core_api.delete_persistent_volume(name=pv.metadata.name)
except ApiException as e:
_check_delete_exception(e)
except ApiException as e:
# Create the host-path-mounted PVs for this deployment
pvs = self.cluster_info.get_pvs()
for pv in pvs:
if opts.o.debug:
print(f"Error listing PVs: {e}")
print(f"Deleting this pv: {pv}")
try:
pv_resp = self.core_api.delete_persistent_volume(
name=pv.metadata.name
)
if opts.o.debug:
print("PV deleted:")
print(f"{pv_resp}")
except ApiException as e:
_check_delete_exception(e)
# Delete the deployment namespace - this cascades to all namespaced resources
# (PVCs, ConfigMaps, Deployments, Services, Ingresses, etc.)
self._delete_namespace()
# Figure out the PVCs for this deployment
pvcs = self.cluster_info.get_pvcs()
for pvc in pvcs:
if opts.o.debug:
print(f"Deleting this pvc: {pvc}")
try:
pvc_resp = self.core_api.delete_namespaced_persistent_volume_claim(
name=pvc.metadata.name, namespace=self.k8s_namespace
)
if opts.o.debug:
print("PVCs deleted:")
print(f"{pvc_resp}")
except ApiException as e:
_check_delete_exception(e)
# Figure out the ConfigMaps for this deployment
cfg_maps = self.cluster_info.get_configmaps()
for cfg_map in cfg_maps:
if opts.o.debug:
print(f"Deleting this ConfigMap: {cfg_map}")
try:
cfg_map_resp = self.core_api.delete_namespaced_config_map(
name=cfg_map.metadata.name, namespace=self.k8s_namespace
)
if opts.o.debug:
print("ConfigMap deleted:")
print(f"{cfg_map_resp}")
except ApiException as e:
_check_delete_exception(e)
deployment = self.cluster_info.get_deployment()
if opts.o.debug:
print(f"Deleting this deployment: {deployment}")
if deployment and deployment.metadata and deployment.metadata.name:
try:
self.apps_api.delete_namespaced_deployment(
name=deployment.metadata.name, namespace=self.k8s_namespace
)
except ApiException as e:
_check_delete_exception(e)
service = self.cluster_info.get_service()
if opts.o.debug:
print(f"Deleting service: {service}")
if service and service.metadata and service.metadata.name:
try:
self.core_api.delete_namespaced_service(
namespace=self.k8s_namespace, name=service.metadata.name
)
except ApiException as e:
_check_delete_exception(e)
ingress = self.cluster_info.get_ingress(use_tls=not self.is_kind())
if ingress and ingress.metadata and ingress.metadata.name:
if opts.o.debug:
print(f"Deleting this ingress: {ingress}")
try:
self.networking_api.delete_namespaced_ingress(
name=ingress.metadata.name, namespace=self.k8s_namespace
)
except ApiException as e:
_check_delete_exception(e)
else:
if opts.o.debug:
print("No ingress to delete")
nodeports: List[client.V1Service] = self.cluster_info.get_nodeports()
for nodeport in nodeports:
if opts.o.debug:
print(f"Deleting this nodeport: {nodeport}")
if nodeport.metadata and nodeport.metadata.name:
try:
self.core_api.delete_namespaced_service(
namespace=self.k8s_namespace, name=nodeport.metadata.name
)
except ApiException as e:
_check_delete_exception(e)
else:
if opts.o.debug:
print("No nodeport to delete")
if self.is_kind() and not self.skip_cluster_management:
# Destroy the kind cluster
@ -587,7 +597,7 @@ class K8sDeployer(Deployer):
log_data = ""
for container in containers:
container_log = self.core_api.read_namespaced_pod_log(
k8s_pod_name, namespace=self.k8s_namespace, container=container
k8s_pod_name, namespace="default", container=container
)
container_log_lines = container_log.splitlines()
for line in container_log_lines:

View File

@ -14,13 +14,11 @@
# along with this program. If not, see <http:#www.gnu.org/licenses/>.
from kubernetes import client, utils, watch
from kubernetes.client.exceptions import ApiException
import os
from pathlib import Path
import subprocess
import re
from typing import Set, Mapping, List, Optional, cast
import yaml
from stack_orchestrator.util import get_k8s_dir, error_exit
from stack_orchestrator.opts import opts
@ -29,48 +27,6 @@ from stack_orchestrator.deploy.deployer import DeployerException
from stack_orchestrator import constants
def is_host_path_mount(volume_name: str) -> bool:
"""Check if a volume name is a host path mount (starts with /, ., or ~)."""
return volume_name.startswith(("/", ".", "~"))
def sanitize_host_path_to_volume_name(host_path: str) -> str:
"""Convert a host path to a valid k8s volume name.
K8s volume names must be lowercase, alphanumeric, with - allowed.
E.g., '../config/test/script.sh' -> 'host-path-config-test-script-sh'
"""
# Remove leading ./ or ../
clean_path = re.sub(r"^\.+/", "", host_path)
# Replace path separators and dots with hyphens
name = re.sub(r"[/.]", "-", clean_path)
# Remove any non-alphanumeric characters except hyphens
name = re.sub(r"[^a-zA-Z0-9-]", "", name)
# Convert to lowercase
name = name.lower()
# Remove leading/trailing hyphens and collapse multiple hyphens
name = re.sub(r"-+", "-", name).strip("-")
# Prefix with 'host-path-' to distinguish from named volumes
return f"host-path-{name}"
def resolve_host_path_for_kind(host_path: str, deployment_dir: Path) -> Path:
"""Resolve a host path mount (relative to compose file) to absolute path.
Compose files are in deployment_dir/compose/, so '../config/foo'
resolves to deployment_dir/config/foo.
"""
# The path is relative to the compose directory
compose_dir = deployment_dir.joinpath("compose")
resolved = compose_dir.joinpath(host_path).resolve()
return resolved
def get_kind_host_path_mount_path(sanitized_name: str) -> str:
"""Get the path inside the kind node where a host path mount will be available."""
return f"/mnt/{sanitized_name}"
def get_kind_cluster():
"""Get an existing kind cluster, if any.
@ -98,227 +54,16 @@ def _run_command(command: str):
return result
def _get_etcd_host_path_from_kind_config(config_file: str) -> Optional[str]:
"""Extract etcd host path from kind config extraMounts."""
import yaml
try:
with open(config_file, "r") as f:
config = yaml.safe_load(f)
except Exception:
return None
nodes = config.get("nodes", [])
for node in nodes:
extra_mounts = node.get("extraMounts", [])
for mount in extra_mounts:
if mount.get("containerPath") == "/var/lib/etcd":
return mount.get("hostPath")
return None
def _clean_etcd_keeping_certs(etcd_path: str) -> bool:
"""Clean persisted etcd, keeping only TLS certificates.
When etcd is persisted and a cluster is recreated, kind tries to install
resources fresh but they already exist. Instead of trying to delete
specific stale resources (blacklist), we keep only the valuable data
(caddy TLS certs) and delete everything else (whitelist approach).
The etcd image is distroless (no shell), so we extract the statically-linked
etcdctl binary and run it from alpine which has shell support.
Returns True if cleanup succeeded, False if no action needed or failed.
"""
db_path = Path(etcd_path) / "member" / "snap" / "db"
# Check existence using docker since etcd dir is root-owned
check_cmd = (
f"docker run --rm -v {etcd_path}:/etcd:ro alpine:3.19 "
"test -f /etcd/member/snap/db"
)
check_result = subprocess.run(check_cmd, shell=True, capture_output=True)
if check_result.returncode != 0:
if opts.o.debug:
print(f"No etcd snapshot at {db_path}, skipping cleanup")
return False
if opts.o.debug:
print(f"Cleaning persisted etcd at {etcd_path}, keeping only TLS certs")
etcd_image = "gcr.io/etcd-development/etcd:v3.5.9"
temp_dir = "/tmp/laconic-etcd-cleanup"
# Whitelist: prefixes to KEEP - everything else gets deleted
keep_prefixes = "/registry/secrets/caddy-system"
# The etcd image is distroless (no shell). We extract the statically-linked
# etcdctl binary and run it from alpine which has shell + jq support.
cleanup_script = f"""
set -e
ALPINE_IMAGE="alpine:3.19"
# Cleanup previous runs
docker rm -f laconic-etcd-cleanup 2>/dev/null || true
docker rm -f etcd-extract 2>/dev/null || true
docker run --rm -v /tmp:/tmp $ALPINE_IMAGE rm -rf {temp_dir}
# Create temp dir
docker run --rm -v /tmp:/tmp $ALPINE_IMAGE mkdir -p {temp_dir}
# Extract etcdctl binary (it's statically linked)
docker create --name etcd-extract {etcd_image}
docker cp etcd-extract:/usr/local/bin/etcdctl /tmp/etcdctl-bin
docker rm etcd-extract
docker run --rm -v /tmp/etcdctl-bin:/src:ro -v {temp_dir}:/dst $ALPINE_IMAGE \
sh -c "cp /src /dst/etcdctl && chmod +x /dst/etcdctl"
# Copy db to temp location
docker run --rm \
-v {etcd_path}:/etcd:ro \
-v {temp_dir}:/tmp-work \
$ALPINE_IMAGE cp /etcd/member/snap/db /tmp-work/etcd-snapshot.db
# Restore snapshot
docker run --rm -v {temp_dir}:/work {etcd_image} \
etcdutl snapshot restore /work/etcd-snapshot.db \
--data-dir=/work/etcd-data --skip-hash-check 2>/dev/null
# Start temp etcd (runs the etcd binary, no shell needed)
docker run -d --name laconic-etcd-cleanup \
-v {temp_dir}/etcd-data:/etcd-data \
-v {temp_dir}:/backup \
{etcd_image} etcd \
--data-dir=/etcd-data \
--listen-client-urls=http://0.0.0.0:2379 \
--advertise-client-urls=http://localhost:2379
sleep 3
# Use alpine with extracted etcdctl to run commands (alpine has shell + jq)
# Export caddy secrets
docker run --rm \
-v {temp_dir}:/backup \
--network container:laconic-etcd-cleanup \
$ALPINE_IMAGE sh -c \
'/backup/etcdctl get --prefix "{keep_prefixes}" -w json \
> /backup/kept.json 2>/dev/null || echo "{{}}" > /backup/kept.json'
# Delete ALL registry keys
docker run --rm \
-v {temp_dir}:/backup \
--network container:laconic-etcd-cleanup \
$ALPINE_IMAGE /backup/etcdctl del --prefix /registry
# Restore kept keys using jq
docker run --rm \
-v {temp_dir}:/backup \
--network container:laconic-etcd-cleanup \
$ALPINE_IMAGE sh -c '
apk add --no-cache jq >/dev/null 2>&1
jq -r ".kvs[] | @base64" /backup/kept.json 2>/dev/null | \
while read encoded; do
key=$(echo $encoded | base64 -d | jq -r ".key" | base64 -d)
val=$(echo $encoded | base64 -d | jq -r ".value" | base64 -d)
echo "$val" | /backup/etcdctl put "$key"
done
' || true
# Save cleaned snapshot
docker exec laconic-etcd-cleanup \
etcdctl snapshot save /etcd-data/cleaned-snapshot.db
docker stop laconic-etcd-cleanup
docker rm laconic-etcd-cleanup
# Restore to temp location first to verify it works
docker run --rm \
-v {temp_dir}/etcd-data/cleaned-snapshot.db:/data/db:ro \
-v {temp_dir}:/restore \
{etcd_image} \
etcdutl snapshot restore /data/db --data-dir=/restore/new-etcd \
--skip-hash-check 2>/dev/null
# Create timestamped backup of original (kept forever)
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
docker run --rm -v {etcd_path}:/etcd $ALPINE_IMAGE \
cp -a /etcd/member /etcd/member.backup-$TIMESTAMP
# Replace original with cleaned version
docker run --rm -v {etcd_path}:/etcd -v {temp_dir}:/tmp-work $ALPINE_IMAGE \
sh -c "rm -rf /etcd/member && mv /tmp-work/new-etcd/member /etcd/member"
# Cleanup temp files (but NOT the timestamped backup in etcd_path)
docker run --rm -v /tmp:/tmp $ALPINE_IMAGE rm -rf {temp_dir}
rm -f /tmp/etcdctl-bin
"""
result = subprocess.run(cleanup_script, shell=True, capture_output=True, text=True)
if result.returncode != 0:
if opts.o.debug:
print(f"Warning: etcd cleanup failed: {result.stderr}")
return False
if opts.o.debug:
print("Cleaned etcd, kept only TLS certificates")
return True
def create_cluster(name: str, config_file: str):
"""Create or reuse the single kind cluster for this host.
There is only one kind cluster per host by design. Multiple deployments
share this cluster. If a cluster already exists, it is reused.
Args:
name: Cluster name (used only when creating the first cluster)
config_file: Path to kind config file (used only when creating)
Returns:
The name of the cluster being used
"""
existing = get_kind_cluster()
if existing:
print(f"Using existing cluster: {existing}")
return existing
# Clean persisted etcd, keeping only TLS certificates
etcd_path = _get_etcd_host_path_from_kind_config(config_file)
if etcd_path:
_clean_etcd_keeping_certs(etcd_path)
print(f"Creating new cluster: {name}")
result = _run_command(f"kind create cluster --name {name} --config {config_file}")
if result.returncode != 0:
raise DeployerException(f"kind create cluster failed: {result}")
return name
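# Sketch: callers treat the returned name as authoritative, since an existing
# cluster may be reused under a different name than the one requested:
# actual_name = create_cluster("laconic-abc123", "/path/to/kind-config.yml")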
def destroy_cluster(name: str):
_run_command(f"kind delete cluster --name {name}")
def is_ingress_running() -> bool:
"""Check if the Caddy ingress controller is already running in the cluster."""
try:
core_v1 = client.CoreV1Api()
pods = core_v1.list_namespaced_pod(
namespace="caddy-system",
label_selector=(
"app.kubernetes.io/name=caddy-ingress-controller,"
"app.kubernetes.io/component=controller"
),
)
for pod in pods.items:
if pod.status and pod.status.container_statuses:
if pod.status.container_statuses[0].ready is True:
if opts.o.debug:
print("Caddy ingress controller already running")
return True
return False
except ApiException:
return False
def wait_for_ingress_in_kind():
core_v1 = client.CoreV1Api()
for i in range(20):
@ -345,7 +90,7 @@ def wait_for_ingress_in_kind():
error_exit("ERROR: Timed out waiting for Caddy ingress to become ready")
def install_ingress_for_kind(acme_email: str = ""):
def install_ingress_for_kind():
api_client = client.ApiClient()
ingress_install = os.path.abspath(
get_k8s_dir().joinpath(
@ -354,34 +99,7 @@ def install_ingress_for_kind(acme_email: str = ""):
)
if opts.o.debug:
print("Installing Caddy ingress controller in kind cluster")
# Template the YAML with email before applying
with open(ingress_install) as f:
yaml_content = f.read()
if acme_email:
yaml_content = yaml_content.replace('email: ""', f'email: "{acme_email}"')
if opts.o.debug:
print(f"Configured Caddy with ACME email: {acme_email}")
# Apply templated YAML
yaml_objects = list(yaml.safe_load_all(yaml_content))
utils.create_from_yaml(api_client, yaml_objects=yaml_objects)
# Patch ConfigMap with ACME email if provided
if acme_email:
if opts.o.debug:
print(f"Configuring ACME email: {acme_email}")
core_api = client.CoreV1Api()
configmap = core_api.read_namespaced_config_map(
name="caddy-ingress-controller-configmap", namespace="caddy-system"
)
configmap.data["email"] = acme_email
core_api.patch_namespaced_config_map(
name="caddy-ingress-controller-configmap",
namespace="caddy-system",
body=configmap,
)
utils.create_from_yaml(api_client, yaml_file=ingress_install)
def load_images_into_kind(kind_cluster_name: str, image_set: Set[str]):
@ -459,7 +177,6 @@ def volume_mounts_for_service(parsed_pod_files, service):
for mount_string in volumes:
# Looks like: test-data:/data
# or test-data:/data:ro or test-data:/data:rw
# or ../config/file.sh:/opt/file.sh (host path mount)
if opts.o.debug:
print(f"mount_string: {mount_string}")
mount_split = mount_string.split(":")
@ -468,21 +185,13 @@ def volume_mounts_for_service(parsed_pod_files, service):
mount_options = (
mount_split[2] if len(mount_split) == 3 else None
)
# For host path mounts, use sanitized name
if is_host_path_mount(volume_name):
k8s_volume_name = sanitize_host_path_to_volume_name(
volume_name
)
else:
k8s_volume_name = volume_name
if opts.o.debug:
print(f"volume_name: {volume_name}")
print(f"k8s_volume_name: {k8s_volume_name}")
print(f"mount path: {mount_path}")
print(f"mount options: {mount_options}")
volume_device = client.V1VolumeMount(
mount_path=mount_path,
name=k8s_volume_name,
name=volume_name,
read_only="ro" == mount_options,
)
result.append(volume_device)
@ -491,12 +200,8 @@ def volume_mounts_for_service(parsed_pod_files, service):
def volumes_for_pod_files(parsed_pod_files, spec, app_name):
result = []
seen_host_path_volumes = set() # Track host path volumes to avoid duplicates
for pod in parsed_pod_files:
parsed_pod_file = parsed_pod_files[pod]
# Handle named volumes from top-level volumes section
if "volumes" in parsed_pod_file:
volumes = parsed_pod_file["volumes"]
for volume_name in volumes.keys():
@ -515,35 +220,6 @@ def volumes_for_pod_files(parsed_pod_files, spec, app_name):
name=volume_name, persistent_volume_claim=claim
)
result.append(volume)
# Handle host path mounts from service volumes
if "services" in parsed_pod_file:
services = parsed_pod_file["services"]
for service_name in services:
service_obj = services[service_name]
if "volumes" in service_obj:
for mount_string in service_obj["volumes"]:
mount_split = mount_string.split(":")
volume_source = mount_split[0]
if is_host_path_mount(volume_source):
sanitized_name = sanitize_host_path_to_volume_name(
volume_source
)
if sanitized_name not in seen_host_path_volumes:
seen_host_path_volumes.add(sanitized_name)
# Create hostPath volume for mount inside kind node
kind_mount_path = get_kind_host_path_mount_path(
sanitized_name
)
host_path_source = client.V1HostPathVolumeSource(
path=kind_mount_path, type="FileOrCreate"
)
volume = client.V1Volume(
name=sanitized_name, host_path=host_path_source
)
result.append(volume)
if opts.o.debug:
print(f"Created hostPath volume: {sanitized_name}")
return result
@ -562,27 +238,6 @@ def _make_absolute_host_path(data_mount_path: Path, deployment_dir: Path) -> Pat
def _generate_kind_mounts(parsed_pod_files, deployment_dir, deployment_context):
volume_definitions = []
volume_host_path_map = _get_host_paths_for_volumes(deployment_context)
seen_host_path_mounts = set() # Track to avoid duplicate mounts
# Cluster state backup for offline data recovery (unique per deployment)
# etcd contains all k8s state; PKI certs needed to decrypt etcd offline
deployment_id = deployment_context.id
backup_subdir = f"cluster-backups/{deployment_id}"
etcd_host_path = _make_absolute_host_path(
Path(f"./data/{backup_subdir}/etcd"), deployment_dir
)
volume_definitions.append(
f" - hostPath: {etcd_host_path}\n" f" containerPath: /var/lib/etcd\n"
)
pki_host_path = _make_absolute_host_path(
Path(f"./data/{backup_subdir}/pki"), deployment_dir
)
volume_definitions.append(
f" - hostPath: {pki_host_path}\n" f" containerPath: /etc/kubernetes/pki\n"
)
# Note these paths are relative to the location of the pod files (at present),
# so we need to fix them up to be absolute, because kind resolves
# relative paths against the cwd.
@ -597,58 +252,28 @@ def _generate_kind_mounts(parsed_pod_files, deployment_dir, deployment_context):
for mount_string in volumes:
# Looks like: test-data:/data
# or test-data:/data:ro or test-data:/data:rw
# or ../config/file.sh:/opt/file.sh (host path mount)
if opts.o.debug:
print(f"mount_string: {mount_string}")
mount_split = mount_string.split(":")
volume_name = mount_split[0]
mount_path = mount_split[1]
if is_host_path_mount(volume_name):
# Host path mount - add extraMount for kind
sanitized_name = sanitize_host_path_to_volume_name(
volume_name
)
if sanitized_name not in seen_host_path_mounts:
seen_host_path_mounts.add(sanitized_name)
# Resolve path relative to compose directory
host_path = resolve_host_path_for_kind(
volume_name, deployment_dir
if opts.o.debug:
print(f"volume_name: {volume_name}")
print(f"map: {volume_host_path_map}")
print(f"mount path: {mount_path}")
if volume_name not in deployment_context.spec.get_configmaps():
if volume_host_path_map[volume_name]:
host_path = _make_absolute_host_path(
volume_host_path_map[volume_name],
deployment_dir,
)
container_path = get_kind_host_path_mount_path(
sanitized_name
container_path = get_kind_pv_bind_mount_path(
volume_name
)
volume_definitions.append(
f" - hostPath: {host_path}\n"
f" containerPath: {container_path}\n"
)
if opts.o.debug:
print(f"Added host path mount: {host_path}")
else:
# Named volume
if opts.o.debug:
print(f"volume_name: {volume_name}")
print(f"map: {volume_host_path_map}")
print(f"mount path: {mount_path}")
if (
volume_name
not in deployment_context.spec.get_configmaps()
):
if (
volume_name in volume_host_path_map
and volume_host_path_map[volume_name]
):
host_path = _make_absolute_host_path(
volume_host_path_map[volume_name],
deployment_dir,
)
container_path = get_kind_pv_bind_mount_path(
volume_name
)
volume_definitions.append(
f" - hostPath: {host_path}\n"
f" containerPath: {container_path}\n"
)
return (
""
if len(volume_definitions) == 0
@ -942,41 +567,6 @@ def envs_from_compose_file(
return result
def translate_sidecar_service_names(
envs: Mapping[str, str], sibling_service_names: List[str]
) -> Mapping[str, str]:
"""Translate docker-compose service names to localhost for sidecar containers.
In docker-compose, services can reference each other by name (e.g., 'db:5432').
In Kubernetes, when multiple containers are in the same pod (sidecars), they
share the same network namespace and must use 'localhost' instead.
This function replaces service name references with 'localhost' in env values.
"""
import re
if not sibling_service_names:
return envs
result = {}
for env_var, env_val in envs.items():
if env_val is None:
result[env_var] = env_val
continue
new_val = str(env_val)
for service_name in sibling_service_names:
# Match service name followed by optional port (e.g., 'db:5432', 'db')
# Handle URLs like: postgres://user:pass@db:5432/dbname
# and simple refs like: db:5432 or just db
pattern = rf"\b{re.escape(service_name)}(:\d+)?\b"
new_val = re.sub(pattern, lambda m: f'localhost{m.group(1) or ""}', new_val)
result[env_var] = new_val
return result
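A small usage sketch of the translation described in the docstring, reproducing the same regex on hypothetical env values with a sibling service named `db`:

```python
import re

# Reproduces the substitution performed above for a single service name.
def _translate(value: str, service_name: str) -> str:
    pattern = rf"\b{re.escape(service_name)}(:\d+)?\b"
    return re.sub(pattern, lambda m: f'localhost{m.group(1) or ""}', value)

print(_translate("postgres://user:pass@db:5432/dbname", "db"))
# -> postgres://user:pass@localhost:5432/dbname
print(_translate("db", "db"))
# -> localhost
print(_translate("dbname", "db"))  # unrelated token left alone
# -> dbname
```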
def envs_from_environment_variables_map(
map: Mapping[str, str]
) -> List[client.V1EnvVar]:

View File

@ -98,17 +98,6 @@ class Spec:
def get_image_registry(self):
return self.obj.get(constants.image_registry_key)
def get_image_registry_config(self) -> typing.Optional[typing.Dict]:
"""Returns registry auth config: {server, username, token-env}.
Used for private container registries like GHCR. The token-env field
specifies an environment variable containing the API token/PAT.
Note: uses the 'registry-credentials' key to avoid a collision with the
'image-registry' key, which is used for pushing images.
"""
return self.obj.get("registry-credentials")
def get_volumes(self):
return self.obj.get(constants.volumes_key, {})
@ -190,9 +179,6 @@ class Spec:
def get_deployment_type(self):
return self.obj.get(constants.deploy_to_key)
def get_acme_email(self):
return self.obj.get(constants.network_key, {}).get(constants.acme_email_key, "")
def is_kubernetes_deployment(self):
return self.get_deployment_type() in [
constants.k8s_kind_deploy_type,

View File

@ -94,7 +94,7 @@ def create_deployment(
# Add the TLS and DNS spec
_fixup_url_spec(spec_file_name, url)
create_operation(
deploy_command_context, spec_file_name, deployment_dir, False, False, None, None
deploy_command_context, spec_file_name, deployment_dir, False, None, None
)
# Fix up the container tag inside the deployment compose file
_fixup_container_tag(deployment_dir, image)

View File

@ -86,7 +86,7 @@ fi
echo "deploy init test: passed"
# Switch to a full path for the data dir so it gets provisioned as a host bind-mounted volume and preserved beyond the cluster lifetime
sed -i.bak "s|^\(\s*db-data:$\)$|\1 ${test_deployment_dir}/data/db-data|" $test_deployment_spec
sed -i "s|^\(\s*db-data:$\)$|\1 ${test_deployment_dir}/data/db-data|" $test_deployment_spec
$TEST_TARGET_SO --stack ${stack} deploy create --spec-file $test_deployment_spec --deployment-dir $test_deployment_dir
# Check the deployment dir exists

View File

@ -14,13 +14,8 @@ delete_cluster_exit () {
# Test basic stack-orchestrator deploy
echo "Running stack-orchestrator deploy test"
if [ "$1" == "from-path" ]; then
TEST_TARGET_SO="laconic-so"
else
TEST_TARGET_SO=$( ls -t1 ./package/laconic-so* | head -1 )
fi
# Bit of a hack, test the most recent package
TEST_TARGET_SO=$( ls -t1 ./package/laconic-so* | head -1 )
# Set a non-default repo dir
export CERC_REPO_BASE_DIR=~/stack-orchestrator-test/repo-base-dir
echo "Testing this package: $TEST_TARGET_SO"
@ -34,7 +29,6 @@ mkdir -p $CERC_REPO_BASE_DIR
# with and without volume removal
$TEST_TARGET_SO --stack test setup-repositories
$TEST_TARGET_SO --stack test build-containers
# Test deploy command execution
$TEST_TARGET_SO --stack test deploy setup $CERC_REPO_BASE_DIR
# Check that we now have the expected output directory
@ -86,7 +80,6 @@ else
exit 1
fi
$TEST_TARGET_SO --stack test deploy down --delete-volumes
# Basic test of creating a deployment
test_deployment_dir=$CERC_REPO_BASE_DIR/test-deployment-dir
test_deployment_spec=$CERC_REPO_BASE_DIR/test-deployment-spec.yml
@ -124,101 +117,6 @@ fi
echo "dbfc7a4d-44a7-416d-b5f3-29842cc47650" > $test_deployment_dir/data/test-config/test_config
echo "deploy create output file test: passed"
# Test sync functionality: update deployment without destroying data
# First, create a marker file in the data directory to verify it's preserved
test_data_marker="$test_deployment_dir/data/test-data-bind/sync-test-marker.txt"
echo "original-data-$(date +%s)" > "$test_data_marker"
original_marker_content=$(<$test_data_marker)
# Modify a config file in the deployment to differ from source (to test backup)
test_config_file="$test_deployment_dir/config/test/settings.env"
test_config_file_original_content=$(<$test_config_file)
test_config_file_changed_content="ANSWER=69"
echo "$test_config_file_changed_content" > "$test_config_file"
# Check a config file that matches the source (to test no backup for unchanged files)
test_unchanged_config="$test_deployment_dir/config/test/script.sh"
# Modify spec file to simulate an update
sed -i.bak 's/CERC_TEST_PARAM_3:/CERC_TEST_PARAM_3: FASTER/' $test_deployment_spec
# Create/modify config.env to test it isn't overwritten during sync
config_env_file="$test_deployment_dir/config.env"
config_env_persistent_content="PERSISTENT_VALUE=should-not-be-overwritten-$(date +%s)"
echo "$config_env_persistent_content" >> "$config_env_file"
original_config_env_content=$(<$config_env_file)
# Run sync to update deployment files without destroying data
$TEST_TARGET_SO --stack test deploy create --spec-file $test_deployment_spec --deployment-dir $test_deployment_dir --update
# Verify config.env was not overwritten
synced_config_env_content=$(<$config_env_file)
if [ "$synced_config_env_content" == "$original_config_env_content" ]; then
echo "deployment update test: config.env preserved - passed"
else
echo "deployment update test: config.env was overwritten - FAILED"
echo "Expected: $original_config_env_content"
echo "Got: $synced_config_env_content"
exit 1
fi
# Verify the spec file was updated in deployment dir
updated_deployed_spec=$(<$test_deployment_dir/spec.yml)
if [[ "$updated_deployed_spec" == *"FASTER"* ]]; then
echo "deployment update test: spec file updated"
else
echo "deployment update test: spec file not updated - FAILED"
exit 1
fi
# Verify changed config file was backed up
test_config_backup="${test_config_file}.bak"
if [ -f "$test_config_backup" ]; then
backup_content=$(<$test_config_backup)
if [ "$backup_content" == "$test_config_file_changed_content" ]; then
echo "deployment update test: changed config file backed up - passed"
else
echo "deployment update test: backup content incorrect - FAILED"
exit 1
fi
else
echo "deployment update test: backup file not created for changed file - FAILED"
exit 1
fi
# Verify unchanged config file was NOT backed up
test_unchanged_backup="$test_unchanged_config.bak"
if [ -f "$test_unchanged_backup" ]; then
echo "deployment update test: backup created for unchanged file - FAILED"
exit 1
else
echo "deployment update test: no backup for unchanged file - passed"
fi
# Verify the config file was updated from source
updated_config_content=$(<$test_config_file)
if [ "$updated_config_content" == "$test_config_file_original_content" ]; then
echo "deployment update test: config file updated from source - passed"
else
echo "deployment update test: config file not updated correctly - FAILED"
exit 1
fi
# Verify the data marker file still exists with original content
if [ ! -f "$test_data_marker" ]; then
echo "deployment update test: data file deleted - FAILED"
exit 1
fi
synced_marker_content=$(<$test_data_marker)
if [ "$synced_marker_content" == "$original_marker_content" ]; then
echo "deployment update test: data preserved - passed"
else
echo "deployment update test: data corrupted - FAILED"
exit 1
fi
echo "deployment update test: passed"
# Try to start the deployment
$TEST_TARGET_SO deployment --dir $test_deployment_dir start
# Check logs command works

View File

@ -125,49 +125,6 @@ fi
echo "dbfc7a4d-44a7-416d-b5f3-29842cc47650" > $test_deployment_dir/data/test-config/test_config
echo "deploy create output file test: passed"
# Test sync functionality: update deployment without destroying data
# First, create a marker file in the data directory to verify it's preserved
test_data_marker="$test_deployment_dir/data/test-data/sync-test-marker.txt"
mkdir -p "$test_deployment_dir/data/test-data"
echo "external-stack-data-$(date +%s)" > "$test_data_marker"
original_marker_content=$(<$test_data_marker)
# Record the cluster ID from deployment.yml (if present) so we can verify it is preserved
original_cluster_id=$(grep "cluster-id:" "$test_deployment_dir/deployment.yml" 2>/dev/null || echo "")
# Modify spec file to simulate an update
sed -i.bak 's/CERC_TEST_PARAM_1=PASSED/CERC_TEST_PARAM_1=UPDATED/' $test_deployment_spec
# Run sync to update deployment files without destroying data
$TEST_TARGET_SO_STACK deploy create --spec-file $test_deployment_spec --deployment-dir $test_deployment_dir --update
# Verify the spec file was updated in deployment dir
updated_deployed_spec=$(<$test_deployment_dir/spec.yml)
if [[ "$updated_deployed_spec" == *"UPDATED"* ]]; then
echo "deploy sync test: spec file updated"
else
echo "deploy sync test: spec file not updated - FAILED"
exit 1
fi
# Verify the data marker file still exists with original content
if [ ! -f "$test_data_marker" ]; then
echo "deploy sync test: data file deleted - FAILED"
exit 1
fi
synced_marker_content=$(<$test_data_marker)
if [ "$synced_marker_content" == "$original_marker_content" ]; then
echo "deploy sync test: data preserved - passed"
else
echo "deploy sync test: data corrupted - FAILED"
exit 1
fi
# Verify cluster ID was preserved (not regenerated)
new_cluster_id=$(grep "cluster-id:" "$test_deployment_dir/deployment.yml" 2>/dev/null || echo "")
if [ -n "$original_cluster_id" ] && [ "$original_cluster_id" == "$new_cluster_id" ]; then
echo "deploy sync test: cluster ID preserved - passed"
else
echo "deploy sync test: cluster ID not preserved - FAILED"
exit 1
fi
echo "deploy sync test: passed"
# Try to start the deployment
$TEST_TARGET_SO deployment --dir $test_deployment_dir start
# Check logs command works

View File

@ -40,7 +40,7 @@ sleep 3
wget --tries 20 --retry-connrefused --waitretry=3 -O test.before -m http://localhost:3000
docker logs $CONTAINER_ID
docker rm -f $CONTAINER_ID
docker remove -f $CONTAINER_ID
echo "Running app container test"
CONTAINER_ID=$(docker run -p 3000:80 -e CERC_WEBAPP_DEBUG=$CHECK -e CERC_SCRIPT_DEBUG=$CERC_SCRIPT_DEBUG -d ${app_image_name})
@ -48,7 +48,7 @@ sleep 3
wget --tries 20 --retry-connrefused --waitretry=3 -O test.after -m http://localhost:3000
docker logs $CONTAINER_ID
docker rm -f $CONTAINER_ID
docker remove -f $CONTAINER_ID
echo "###########################################################################"
echo ""