diff --git a/.gitea/workflows/test-deploy.yml b/.gitea/workflows/test-deploy.yml index 2ea72c08..b0d5194d 100644 --- a/.gitea/workflows/test-deploy.yml +++ b/.gitea/workflows/test-deploy.yml @@ -2,7 +2,8 @@ name: Deploy Test on: pull_request: - branches: '*' + branches: + - main push: branches: - main diff --git a/.gitea/workflows/test-k8s-deploy.yml b/.gitea/workflows/test-k8s-deploy.yml index a9964b72..bbd1d508 100644 --- a/.gitea/workflows/test-k8s-deploy.yml +++ b/.gitea/workflows/test-k8s-deploy.yml @@ -2,7 +2,8 @@ name: K8s Deploy Test on: pull_request: - branches: '*' + branches: + - main push: branches: '*' paths: diff --git a/.gitea/workflows/test-k8s-deployment-control.yml b/.gitea/workflows/test-k8s-deployment-control.yml index 9ab2526d..3784451b 100644 --- a/.gitea/workflows/test-k8s-deployment-control.yml +++ b/.gitea/workflows/test-k8s-deployment-control.yml @@ -2,7 +2,8 @@ name: K8s Deployment Control Test on: pull_request: - branches: '*' + branches: + - main push: branches: '*' paths: diff --git a/.gitea/workflows/test-webapp.yml b/.gitea/workflows/test-webapp.yml index 99c5138f..8a3a60f9 100644 --- a/.gitea/workflows/test-webapp.yml +++ b/.gitea/workflows/test-webapp.yml @@ -2,7 +2,8 @@ name: Webapp Test on: pull_request: - branches: '*' + branches: + - main push: branches: - main diff --git a/AI-FRIENDLY-PLAN.md b/AI-FRIENDLY-PLAN.md new file mode 100644 index 00000000..58d239f3 --- /dev/null +++ b/AI-FRIENDLY-PLAN.md @@ -0,0 +1,151 @@ +# Plan: Make Stack-Orchestrator AI-Friendly + +## Goal + +Make the stack-orchestrator repository easier for AI tools (Claude Code, Cursor, Copilot) to understand and use for generating stacks, including adding a `create-stack` command. + +--- + +## Part 1: Documentation & Context Files + +### 1.1 Add CLAUDE.md + +Create a root-level context file for AI assistants. + +**File:** `CLAUDE.md` + +Contents: +- Project overview (what stack-orchestrator does) +- Stack creation workflow (step-by-step) +- File naming conventions +- Required vs optional fields in stack.yml +- Common patterns and anti-patterns +- Links to example stacks (simple, medium, complex) + +### 1.2 Add JSON Schema for stack.yml + +Create formal validation schema. + +**File:** `schemas/stack-schema.json` + +Benefits: +- AI tools can validate generated stacks +- IDEs provide autocomplete +- CI can catch errors early + +### 1.3 Add Template Stack with Comments + +Create an annotated template for reference. + +**File:** `stack_orchestrator/data/stacks/_template/stack.yml` + +```yaml +# Stack definition template - copy this directory to create a new stack +version: "1.2" # Required: 1.0, 1.1, or 1.2 +name: my-stack # Required: lowercase, hyphens only +description: "Human-readable description" # Optional +repos: # Git repositories to clone + - github.com/org/repo +containers: # Container images to build (must have matching container-build/) + - cerc/my-container +pods: # Deployment units (must have matching docker-compose-{pod}.yml) + - my-pod +``` + +### 1.4 Document Validation Rules + +Create explicit documentation of constraints currently scattered in code. 
+ +**File:** `docs/stack-format.md` + +Contents: +- Container names must start with `cerc/` +- Pod names must match compose file: `docker-compose-{pod}.yml` +- Repository format: `host/org/repo[@ref]` +- Stack directory name should match `name` field +- Version field options and differences + +--- + +## Part 2: Add `create-stack` Command + +### 2.1 Command Overview + +```bash +laconic-so create-stack --repo github.com/org/my-app [--name my-app] [--type webapp] +``` + +**Behavior:** +1. Parse repo URL to extract app name (if --name not provided) +2. Create `stacks/{name}/stack.yml` +3. Create `container-build/cerc-{name}/Dockerfile` and `build.sh` +4. Create `compose/docker-compose-{name}.yml` +5. Update list files (repository-list.txt, container-image-list.txt, pod-list.txt) + +### 2.2 Files to Create + +| File | Purpose | +|------|---------| +| `stack_orchestrator/create/__init__.py` | Package init | +| `stack_orchestrator/create/create_stack.py` | Command implementation | + +### 2.3 Files to Modify + +| File | Change | +|------|--------| +| `stack_orchestrator/main.py` | Add import and `cli.add_command()` | + +### 2.4 Command Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--repo` | Yes | Git repository URL (e.g., github.com/org/repo) | +| `--name` | No | Stack name (defaults to repo name) | +| `--type` | No | Template type: webapp, service, empty (default: webapp) | +| `--force` | No | Overwrite existing files | + +### 2.5 Template Types + +| Type | Base Image | Port | Use Case | +|------|------------|------|----------| +| webapp | node:20-bullseye-slim | 3000 | React/Vue/Next.js apps | +| service | python:3.11-slim | 8080 | Python backend services | +| empty | none | none | Custom from scratch | + +--- + +## Part 3: Implementation Summary + +### New Files (6) + +1. `CLAUDE.md` - AI assistant context +2. `schemas/stack-schema.json` - Validation schema +3. `stack_orchestrator/data/stacks/_template/stack.yml` - Annotated template +4. `docs/stack-format.md` - Stack format documentation +5. `stack_orchestrator/create/__init__.py` - Package init +6. `stack_orchestrator/create/create_stack.py` - Command implementation + +### Modified Files (1) + +1. `stack_orchestrator/main.py` - Register create-stack command + +--- + +## Verification + +```bash +# 1. Command appears in help +laconic-so --help | grep create-stack + +# 2. Dry run works +laconic-so --dry-run create-stack --repo github.com/org/test-app + +# 3. Creates all expected files +laconic-so create-stack --repo github.com/org/test-app +ls stack_orchestrator/data/stacks/test-app/ +ls stack_orchestrator/data/container-build/cerc-test-app/ +ls stack_orchestrator/data/compose/docker-compose-test-app.yml + +# 4. Build works with generated stack +laconic-so --stack test-app build-containers +``` diff --git a/STACK-CREATION-GUIDE.md b/STACK-CREATION-GUIDE.md new file mode 100644 index 00000000..57d4c7f5 --- /dev/null +++ b/STACK-CREATION-GUIDE.md @@ -0,0 +1,413 @@ +# Implementing `laconic-so create-stack` Command + +A plan for adding a new CLI command to scaffold stack files automatically. 
+
+---
+
+## Overview
+
+Add a `create-stack` command that generates all required files for a new stack:
+
+```bash
+laconic-so create-stack --name my-stack --type webapp
+```
+
+**Output:**
+```
+stack_orchestrator/data/
+├── stacks/my-stack/stack.yml
+├── container-build/cerc-my-stack/
+│   ├── Dockerfile
+│   └── build.sh
+└── compose/docker-compose-my-stack.yml
+
+Updated: repository-list.txt, container-image-list.txt, pod-list.txt
+```
+
+---
+
+## CLI Architecture Summary
+
+### Command Registration Pattern
+
+Commands are Click functions registered in `main.py`:
+
+```python
+# main.py (line ~70)
+from stack_orchestrator.create import create_stack
+cli.add_command(create_stack.command, "create-stack")
+```
+
+### Global Options Access
+
+```python
+from stack_orchestrator.opts import opts
+
+if not opts.o.quiet:
+    print("message")
+if opts.o.dry_run:
+    print("(would create files)")
+```
+
+### Key Utilities
+
+| Function | Location | Purpose |
+|----------|----------|---------|
+| `get_yaml()` | `util.py` | YAML parser (ruamel.yaml) |
+| `get_stack_path(stack)` | `util.py` | Resolve stack directory path |
+| `error_exit(msg)` | `util.py` | Print error and exit(1) |
+
+---
+
+## Files to Create
+
+### 1. Command Module
+
+**`stack_orchestrator/create/__init__.py`**
+```python
+# Empty file to make this a package
+```
+
+**`stack_orchestrator/create/create_stack.py`**
+```python
+import click
+import os
+import re
+from pathlib import Path
+from stack_orchestrator.opts import opts
+from stack_orchestrator.util import error_exit, get_yaml
+
+# Template types
+STACK_TEMPLATES = {
+    "webapp": {
+        "description": "Web application with Node.js",
+        "base_image": "node:20-bullseye-slim",
+        "port": 3000,
+    },
+    "service": {
+        "description": "Backend service",
+        "base_image": "python:3.11-slim",
+        "port": 8080,
+    },
+    "empty": {
+        "description": "Minimal stack with no defaults",
+        "base_image": None,
+        "port": None,
+    },
+}
+
+
+def get_data_dir() -> Path:
+    """Get path to stack_orchestrator/data directory"""
+    return Path(__file__).absolute().parent.parent.joinpath("data")
+
+
+def validate_stack_name(name: str) -> None:
+    """Validate stack name follows conventions"""
+    # Lowercase alphanumeric with interior hyphens; no leading/trailing hyphen
+    if not re.match(r'^[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$', name):
+        error_exit(f"Invalid stack name '{name}'. Use lowercase alphanumeric with hyphens.")
+    if name.startswith("cerc-"):
+        error_exit("Stack name should not start with 'cerc-' (container names will add this prefix)")
+
+
+def create_stack_yml(stack_dir: Path, name: str, template: dict, repo_url: str) -> None:
+    """Create stack.yml file"""
+    config = {
+        "version": "1.2",
+        "name": name,
+        "description": template.get("description", f"Stack: {name}"),
+        "repos": [repo_url] if repo_url else [],
+        "containers": [f"cerc/{name}"],
+        "pods": [name],
+    }
+
+    stack_dir.mkdir(parents=True, exist_ok=True)
+    with open(stack_dir / "stack.yml", "w") as f:
+        get_yaml().dump(config, f)
+
+
+def create_dockerfile(container_dir: Path, name: str, template: dict) -> None:
+    """Create Dockerfile"""
+    base_image = template.get("base_image", "node:20-bullseye-slim")
+    port = template.get("port", 3000)
+
+    dockerfile_content = f'''# Build stage
+FROM {base_image} AS builder
+
+WORKDIR /app
+COPY package*.json ./
+RUN npm ci
+COPY . .
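+# Build the application (assumes the repo's package.json defines a `build` script)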
+RUN npm run build + +# Production stage +FROM {base_image} + +WORKDIR /app +COPY package*.json ./ +RUN npm ci --only=production +COPY --from=builder /app/dist ./dist + +EXPOSE {port} +CMD ["npm", "run", "start"] +''' + + container_dir.mkdir(parents=True, exist_ok=True) + with open(container_dir / "Dockerfile", "w") as f: + f.write(dockerfile_content) + + +def create_build_script(container_dir: Path, name: str) -> None: + """Create build.sh script""" + build_script = f'''#!/usr/bin/env bash +# Build cerc/{name} + +source ${{CERC_CONTAINER_BASE_DIR}}/build-base.sh + +SCRIPT_DIR=$( cd -- "$( dirname -- "${{BASH_SOURCE[0]}}" )" &> /dev/null && pwd ) + +docker build -t cerc/{name}:local \\ + -f ${{SCRIPT_DIR}}/Dockerfile \\ + ${{build_command_args}} \\ + ${{CERC_REPO_BASE_DIR}}/{name} +''' + + build_path = container_dir / "build.sh" + with open(build_path, "w") as f: + f.write(build_script) + + # Make executable + os.chmod(build_path, 0o755) + + +def create_compose_file(compose_dir: Path, name: str, template: dict) -> None: + """Create docker-compose file""" + port = template.get("port", 3000) + + compose_content = { + "version": "3.8", + "services": { + name: { + "image": f"cerc/{name}:local", + "restart": "unless-stopped", + "ports": [f"${{HOST_PORT:-{port}}}:{port}"], + "environment": { + "NODE_ENV": "${NODE_ENV:-production}", + }, + } + } + } + + with open(compose_dir / f"docker-compose-{name}.yml", "w") as f: + get_yaml().dump(compose_content, f) + + +def update_list_file(data_dir: Path, filename: str, entry: str) -> None: + """Add entry to a list file if not already present""" + list_path = data_dir / filename + + # Read existing entries + existing = set() + if list_path.exists(): + with open(list_path, "r") as f: + existing = set(line.strip() for line in f if line.strip()) + + # Add new entry + if entry not in existing: + with open(list_path, "a") as f: + f.write(f"{entry}\n") + + +@click.command() +@click.option("--name", required=True, help="Name of the new stack (lowercase, hyphens)") +@click.option("--type", "stack_type", default="webapp", + type=click.Choice(list(STACK_TEMPLATES.keys())), + help="Stack template type") +@click.option("--repo", help="Git repository URL (e.g., github.com/org/repo)") +@click.option("--force", is_flag=True, help="Overwrite existing files") +@click.pass_context +def command(ctx, name: str, stack_type: str, repo: str, force: bool): + """Create a new stack with all required files. 
+ + Examples: + + laconic-so create-stack --name my-app --type webapp + + laconic-so create-stack --name my-service --type service --repo github.com/org/repo + """ + # Validate + validate_stack_name(name) + + template = STACK_TEMPLATES[stack_type] + data_dir = get_data_dir() + + # Define paths + stack_dir = data_dir / "stacks" / name + container_dir = data_dir / "container-build" / f"cerc-{name}" + compose_dir = data_dir / "compose" + + # Check for existing files + if not force: + if stack_dir.exists(): + error_exit(f"Stack already exists: {stack_dir}\nUse --force to overwrite") + if container_dir.exists(): + error_exit(f"Container build dir exists: {container_dir}\nUse --force to overwrite") + + # Dry run check + if opts.o.dry_run: + print(f"Would create stack '{name}' with template '{stack_type}':") + print(f" - {stack_dir}/stack.yml") + print(f" - {container_dir}/Dockerfile") + print(f" - {container_dir}/build.sh") + print(f" - {compose_dir}/docker-compose-{name}.yml") + print(f" - Update repository-list.txt") + print(f" - Update container-image-list.txt") + print(f" - Update pod-list.txt") + return + + # Create files + if not opts.o.quiet: + print(f"Creating stack '{name}' with template '{stack_type}'...") + + create_stack_yml(stack_dir, name, template, repo) + if opts.o.verbose: + print(f" Created {stack_dir}/stack.yml") + + create_dockerfile(container_dir, name, template) + if opts.o.verbose: + print(f" Created {container_dir}/Dockerfile") + + create_build_script(container_dir, name) + if opts.o.verbose: + print(f" Created {container_dir}/build.sh") + + create_compose_file(compose_dir, name, template) + if opts.o.verbose: + print(f" Created {compose_dir}/docker-compose-{name}.yml") + + # Update list files + if repo: + update_list_file(data_dir, "repository-list.txt", repo) + if opts.o.verbose: + print(f" Added {repo} to repository-list.txt") + + update_list_file(data_dir, "container-image-list.txt", f"cerc/{name}") + if opts.o.verbose: + print(f" Added cerc/{name} to container-image-list.txt") + + update_list_file(data_dir, "pod-list.txt", name) + if opts.o.verbose: + print(f" Added {name} to pod-list.txt") + + # Summary + if not opts.o.quiet: + print(f"\nStack '{name}' created successfully!") + print(f"\nNext steps:") + print(f" 1. Edit {stack_dir}/stack.yml") + print(f" 2. Customize {container_dir}/Dockerfile") + print(f" 3. Run: laconic-so --stack {name} build-containers") + print(f" 4. Run: laconic-so --stack {name} deploy-system up") +``` + +### 2. Register Command in main.py + +**Edit `stack_orchestrator/main.py`** + +Add import: +```python +from stack_orchestrator.create import create_stack +``` + +Add command registration (after line ~78): +```python +cli.add_command(create_stack.command, "create-stack") +``` + +--- + +## Implementation Steps + +### Step 1: Create module structure +```bash +mkdir -p stack_orchestrator/create +touch stack_orchestrator/create/__init__.py +``` + +### Step 2: Create the command file +Create `stack_orchestrator/create/create_stack.py` with the code above. + +### Step 3: Register in main.py +Add the import and `cli.add_command()` line. 
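+
+After this step, the relevant region of `main.py` should look roughly like this sketch (surrounding imports and command registrations elided; the exact placement in the real file may differ):
+
+```python
+from stack_orchestrator.create import create_stack
+
+# ... existing imports and the top-level Click `cli` group ...
+
+# Register the new command under the name "create-stack"
+cli.add_command(create_stack.command, "create-stack")
+```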
+ +### Step 4: Test the command +```bash +# Show help +laconic-so create-stack --help + +# Dry run +laconic-so --dry-run create-stack --name test-app --type webapp + +# Create a stack +laconic-so create-stack --name test-app --type webapp --repo github.com/org/test-app + +# Verify +ls -la stack_orchestrator/data/stacks/test-app/ +cat stack_orchestrator/data/stacks/test-app/stack.yml +``` + +--- + +## Template Types + +| Type | Base Image | Port | Use Case | +|------|------------|------|----------| +| `webapp` | node:20-bullseye-slim | 3000 | React/Vue/Next.js apps | +| `service` | python:3.11-slim | 8080 | Python backend services | +| `empty` | none | none | Custom from scratch | + +--- + +## Future Enhancements + +1. **Interactive mode** - Prompt for values if not provided +2. **More templates** - Go, Rust, database stacks +3. **Template from existing** - `--from-stack existing-stack` +4. **External stack support** - Create in custom directory +5. **Validation command** - `laconic-so validate-stack --name my-stack` + +--- + +## Files Modified + +| File | Change | +|------|--------| +| `stack_orchestrator/create/__init__.py` | New (empty) | +| `stack_orchestrator/create/create_stack.py` | New (command implementation) | +| `stack_orchestrator/main.py` | Add import and `cli.add_command()` | + +--- + +## Verification + +```bash +# 1. Command appears in help +laconic-so --help | grep create-stack + +# 2. Dry run works +laconic-so --dry-run create-stack --name verify-test --type webapp + +# 3. Full creation works +laconic-so create-stack --name verify-test --type webapp +ls stack_orchestrator/data/stacks/verify-test/ +ls stack_orchestrator/data/container-build/cerc-verify-test/ +ls stack_orchestrator/data/compose/docker-compose-verify-test.yml + +# 4. Build works +laconic-so --stack verify-test build-containers + +# 5. Cleanup +rm -rf stack_orchestrator/data/stacks/verify-test +rm -rf stack_orchestrator/data/container-build/cerc-verify-test +rm stack_orchestrator/data/compose/docker-compose-verify-test.yml +``` diff --git a/docs/cli.md b/docs/cli.md index 1421291e..92cf776a 100644 --- a/docs/cli.md +++ b/docs/cli.md @@ -65,3 +65,71 @@ Force full rebuild of packages: ``` $ laconic-so build-npms --include --force-rebuild ``` + +## deploy + +The `deploy` command group manages persistent deployments. The general workflow is `deploy init` to generate a spec file, then `deploy create` to create a deployment directory from the spec, then runtime commands like `deploy up` and `deploy down`. 
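+
+For example, a complete cycle with the built-in `test` stack (spec and directory names here are illustrative):
+```
+$ laconic-so --stack test deploy init --output test-spec.yml
+$ laconic-so --stack test deploy create --spec-file test-spec.yml --deployment-dir test-deployment
+$ laconic-so deployment --dir test-deployment up
+$ laconic-so deployment --dir test-deployment down
+```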
+### deploy init
+
+Generate a deployment spec file from a stack definition:
+```
+$ laconic-so --stack <stack-name> deploy init --output <spec-file>
+```
+
+Options:
+- `--output` (required): write spec file here
+- `--config`: provide config variables for the deployment
+- `--config-file`: provide config variables in a file
+- `--kube-config`: provide a config file for a k8s deployment
+- `--image-registry`: provide a container image registry url for this k8s cluster
+- `--map-ports-to-host`: map ports to the host (`any-variable-random`, `localhost-same`, `any-same`, `localhost-fixed-random`, `any-fixed-random`)
+
+### deploy create
+
+Create a deployment directory from a spec file:
+```
+$ laconic-so --stack <stack-name> deploy create --spec-file <spec-file> --deployment-dir <deployment-dir>
+```
+
+Update an existing deployment in place (preserving data volumes and env file):
+```
+$ laconic-so --stack <stack-name> deploy create --spec-file <spec-file> --deployment-dir <deployment-dir> --update
+```
+
+Options:
+- `--spec-file` (required): spec file to use
+- `--deployment-dir`: target directory for deployment files
+- `--update`: update an existing deployment directory, preserving data volumes and env file. Changed files are backed up with a `.bak` suffix. The deployment's `config.env` and `deployment.yml` are also preserved.
+- `--network-dir`: network configuration supplied in this directory
+- `--initial-peers`: initial set of persistent peers
+
+### deploy up
+
+Start a deployment:
+```
+$ laconic-so deployment --dir <deployment-dir> up
+```
+
+### deploy down
+
+Stop a deployment:
+```
+$ laconic-so deployment --dir <deployment-dir> down
+```
+Use `--delete-volumes` to also remove data volumes.
+
+### deploy ps
+
+Show running services:
+```
+$ laconic-so deployment --dir <deployment-dir> ps
+```
+
+### deploy logs
+
+View service logs:
+```
+$ laconic-so deployment --dir <deployment-dir> logs
+```
+Use `-f` to follow and `-n <lines>` to tail.
diff --git a/docs/docker-compose-deployment.md b/docs/docker-compose-deployment.md
new file mode 100644
index 00000000..55688600
--- /dev/null
+++ b/docs/docker-compose-deployment.md
@@ -0,0 +1,550 @@
+# Docker Compose Deployment Guide
+
+## Introduction
+
+### What is a Deployer?
+
+In stack-orchestrator, a **deployer** provides a uniform interface for orchestrating containerized applications. This guide focuses on Docker Compose, the default and recommended deployment mode.
+
+While stack-orchestrator also supports Kubernetes (`k8s`) and Kind (`k8s-kind`) deployments, those are out of scope for this guide. See the [Kubernetes Enhancements](./k8s-deployment-enhancements.md) documentation for advanced deployment options.
+
+## Prerequisites
+
+To deploy stacks using Docker Compose, you need:
+
+- Docker Engine (20.10+)
+- Docker Compose plugin (v2.0+)
+- Python 3.8+
+- stack-orchestrator installed (`laconic-so`)
+
+**That's it!** No additional infrastructure is required. If you have Docker installed, you're ready to deploy.
+
+## Deployment Workflow
+
+The typical deployment workflow consists of four main steps:
+
+1. **Setup repositories and build containers** (first time only)
+2. **Initialize deployment specification**
+3. **Create deployment directory**
+4. 
**Start and manage services** + +## Quick Start Example + +Here's a complete example using the built-in `test` stack: + +```bash +# Step 1: Setup (first time only) +laconic-so --stack test setup-repositories +laconic-so --stack test build-containers + +# Step 2: Initialize deployment spec +laconic-so --stack test deploy init --output test-spec.yml + +# Step 3: Create deployment directory +laconic-so --stack test deploy create \ + --spec-file test-spec.yml \ + --deployment-dir test-deployment + +# Step 4: Start services +laconic-so deployment --dir test-deployment start + +# View running services +laconic-so deployment --dir test-deployment ps + +# View logs +laconic-so deployment --dir test-deployment logs + +# Stop services (preserves data) +laconic-so deployment --dir test-deployment stop +``` + +## Deployment Workflows + +Stack-orchestrator supports two deployment workflows: + +### 1. Deployment Directory Workflow (Recommended) + +This workflow creates a persistent deployment directory that contains all configuration and data. + +**When to use:** +- Production deployments +- When you need to preserve configuration +- When you want to manage multiple deployments +- When you need persistent volume data + +**Example:** + +```bash +# Initialize deployment spec +laconic-so --stack fixturenet-eth deploy init --output eth-spec.yml + +# Optionally edit eth-spec.yml to customize configuration + +# Create deployment directory +laconic-so --stack fixturenet-eth deploy create \ + --spec-file eth-spec.yml \ + --deployment-dir my-eth-deployment + +# Start the deployment +laconic-so deployment --dir my-eth-deployment start + +# Manage the deployment +laconic-so deployment --dir my-eth-deployment ps +laconic-so deployment --dir my-eth-deployment logs +laconic-so deployment --dir my-eth-deployment stop +``` + +### 2. Quick Deploy Workflow + +This workflow deploys directly without creating a persistent deployment directory. 
+
+**When to use:**
+- Quick testing
+- Temporary deployments
+- Simple stacks that don't require customization
+
+**Example:**
+
+```bash
+# Start the stack directly
+laconic-so --stack test deploy up
+
+# Show the host port mapped to container port 80 on the 'test' service
+laconic-so --stack test deploy port test 80
+
+# View logs
+laconic-so --stack test deploy logs
+
+# Stop (preserves volumes)
+laconic-so --stack test deploy down
+
+# Stop and remove volumes
+laconic-so --stack test deploy down --delete-volumes
+```
+
+## Real-World Example: Ethereum Fixturenet
+
+Deploy a local Ethereum testnet with Geth and Lighthouse:
+
+```bash
+# Setup (first time only)
+laconic-so --stack fixturenet-eth setup-repositories
+laconic-so --stack fixturenet-eth build-containers
+
+# Initialize with default configuration
+laconic-so --stack fixturenet-eth deploy init --output eth-spec.yml
+
+# Create deployment
+laconic-so --stack fixturenet-eth deploy create \
+  --spec-file eth-spec.yml \
+  --deployment-dir fixturenet-eth-deployment
+
+# Start the network
+laconic-so deployment --dir fixturenet-eth-deployment start
+
+# Check status
+laconic-so deployment --dir fixturenet-eth-deployment ps
+
+# Access logs from specific service
+laconic-so deployment --dir fixturenet-eth-deployment logs fixturenet-eth-geth-1
+
+# Stop the network (preserves blockchain data)
+laconic-so deployment --dir fixturenet-eth-deployment stop
+
+# Start again - blockchain data is preserved
+laconic-so deployment --dir fixturenet-eth-deployment start
+
+# Clean up everything including data
+laconic-so deployment --dir fixturenet-eth-deployment stop --delete-volumes
+```
+
+## Configuration
+
+### Passing Configuration Parameters
+
+Configuration can be passed in three ways:
+
+**1. At init time via `--config` flag:**
+
+```bash
+laconic-so --stack test deploy init --output spec.yml \
+  --config PARAM1=value1,PARAM2=value2
+```
+
+**2. Edit the spec file after init:**
+
+```bash
+# Initialize
+laconic-so --stack test deploy init --output spec.yml
+
+# Edit spec.yml
+vim spec.yml
+```
+
+Example spec.yml:
+```yaml
+stack: test
+config:
+  PARAM1: value1
+  PARAM2: value2
+```
+
+**3. Docker Compose defaults:**
+
+Environment variables defined in the stack's `docker-compose-*.yml` files are used as defaults. Configuration from the spec file overrides these defaults.
+
+### Port Mapping
+
+By default, services are accessible on randomly assigned host ports. To find the mapped port:
+
+```bash
+# Find the host port for container port 80 on service 'webapp'
+laconic-so deployment --dir my-deployment port webapp 80
+
+# Output example: 0.0.0.0:32768
+```
+
+To configure fixed ports, edit the spec file before creating the deployment:
+
+```yaml
+network:
+  ports:
+    webapp:
+      - '8080:80'   # Maps host port 8080 to container port 80
+    api:
+      - '3000:3000'
+```
+
+Then create the deployment:
+
+```bash
+laconic-so --stack my-stack deploy create \
+  --spec-file spec.yml \
+  --deployment-dir my-deployment
+```
+
+### Volume Persistence
+
+Volumes are preserved between stop/start cycles by default:
+
+```bash
+# Stop but keep data
+laconic-so deployment --dir my-deployment stop
+
+# Start again - data is still there
+laconic-so deployment --dir my-deployment start
+```
+
+To completely remove all data:
+
+```bash
+# Stop and delete all volumes
+laconic-so deployment --dir my-deployment stop --delete-volumes
+```
+
+Volume data is stored in `<deployment-dir>/data/<volume-name>/`.
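+
+For example, with the `test` stack (whose compose file defines a `test-data-bind` volume), volume contents can be inspected directly on the host; the deployment directory name here is illustrative:
+
+```bash
+# List the volumes for a deployment
+ls test-deployment/data/
+
+# Inspect one volume's contents
+ls test-deployment/data/test-data-bind/
+```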
+ +## Common Operations + +### Viewing Logs + +```bash +# All services, continuous follow +laconic-so deployment --dir my-deployment logs --follow + +# Last 100 lines from all services +laconic-so deployment --dir my-deployment logs --tail 100 + +# Specific service only +laconic-so deployment --dir my-deployment logs webapp + +# Combine options +laconic-so deployment --dir my-deployment logs --tail 50 --follow webapp +``` + +### Executing Commands in Containers + +```bash +# Execute a command in a running service +laconic-so deployment --dir my-deployment exec webapp ls -la + +# Interactive shell +laconic-so deployment --dir my-deployment exec webapp /bin/bash + +# Run command with specific environment variables +laconic-so deployment --dir my-deployment exec webapp env VAR=value command +``` + +### Checking Service Status + +```bash +# List all running services +laconic-so deployment --dir my-deployment ps + +# Check using Docker directly +docker ps +``` + +### Updating a Running Deployment + +If you need to change configuration after deployment: + +```bash +# 1. Edit the spec file +vim my-deployment/spec.yml + +# 2. Regenerate configuration +laconic-so deployment --dir my-deployment update + +# 3. Restart services to apply changes +laconic-so deployment --dir my-deployment stop +laconic-so deployment --dir my-deployment start +``` + +## Multi-Service Deployments + +Many stacks deploy multiple services that work together: + +```bash +# Deploy a stack with multiple services +laconic-so --stack laconicd-with-console deploy init --output spec.yml +laconic-so --stack laconicd-with-console deploy create \ + --spec-file spec.yml \ + --deployment-dir laconicd-deployment + +laconic-so deployment --dir laconicd-deployment start + +# View all services +laconic-so deployment --dir laconicd-deployment ps + +# View logs from specific services +laconic-so deployment --dir laconicd-deployment logs laconicd +laconic-so deployment --dir laconicd-deployment logs console +``` + +## ConfigMaps + +ConfigMaps allow you to mount configuration files into containers: + +```bash +# 1. Create the config directory in your deployment +mkdir -p my-deployment/data/my-config +echo "database_url=postgres://localhost" > my-deployment/data/my-config/app.conf + +# 2. Reference in spec file +vim my-deployment/spec.yml +``` + +Add to spec.yml: +```yaml +configmaps: + my-config: ./data/my-config +``` + +```bash +# 3. Restart to apply +laconic-so deployment --dir my-deployment stop +laconic-so deployment --dir my-deployment start +``` + +The files will be mounted in the container at `/config/` (or as specified by the stack). 
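+
+To confirm the mount from inside a running container (the service name `my-service` and the exact in-container path are illustrative; the stack's compose file determines both):
+
+```bash
+# Read the mounted config file from inside the service container
+laconic-so deployment --dir my-deployment exec my-service cat /config/app.conf
+```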
+
+## Deployment Directory Structure
+
+A typical deployment directory contains:
+
+```
+my-deployment/
+├── compose/
+│   └── docker-compose-*.yml   # Generated compose files
+├── config.env                 # Environment variables
+├── deployment.yml             # Deployment metadata
+├── spec.yml                   # Deployment specification
+└── data/                      # Volume mounts and configs
+    ├── service-data/          # Persistent service data
+    └── config-maps/           # ConfigMap files
+```
+
+## Troubleshooting
+
+### Common Issues
+
+**Problem: "Cannot connect to Docker daemon"**
+
+```bash
+# Ensure Docker is running
+docker ps
+
+# Start Docker if needed (macOS)
+open -a Docker
+
+# Start Docker (Linux)
+sudo systemctl start docker
+```
+
+**Problem: "Port already in use"**
+
+```bash
+# Either stop the conflicting service or use different ports
+# Edit spec.yml before creating deployment:
+
+network:
+  ports:
+    webapp:
+      - '8081:80'   # Use 8081 instead of 8080
+```
+
+**Problem: "Image not found"**
+
+```bash
+# Build containers first
+laconic-so --stack your-stack build-containers
+```
+
+**Problem: Volumes not persisting**
+
+```bash
+# Check if you used --delete-volumes when stopping
+# Volume data is in: <deployment-dir>/data/<volume-name>/
+
+# Don't use --delete-volumes if you want to keep data:
+laconic-so deployment --dir my-deployment stop
+
+# Only use --delete-volumes when you want to reset completely:
+laconic-so deployment --dir my-deployment stop --delete-volumes
+```
+
+**Problem: Services not starting**
+
+```bash
+# Check logs for errors
+laconic-so deployment --dir my-deployment logs
+
+# Check Docker container status
+docker ps -a
+
+# Try stopping and starting again
+laconic-so deployment --dir my-deployment stop
+laconic-so deployment --dir my-deployment start
+```
+
+### Inspecting Deployment State
+
+```bash
+# Check deployment directory structure
+ls -la my-deployment/
+
+# Check running containers
+docker ps
+
+# Check container details
+docker inspect <container-id>
+
+# Check networks
+docker network ls
+
+# Check volumes
+docker volume ls
+```
+
+## CLI Commands Reference
+
+### Stack Operations
+
+```bash
+# Clone required repositories
+laconic-so --stack <stack-name> setup-repositories
+
+# Build container images
+laconic-so --stack <stack-name> build-containers
+```
+
+### Deployment Initialization
+
+```bash
+# Initialize deployment spec with defaults
+laconic-so --stack <stack-name> deploy init --output <spec-file>
+
+# Initialize with configuration
+laconic-so --stack <stack-name> deploy init --output <spec-file> \
+  --config PARAM1=value1,PARAM2=value2
+```
+
+### Deployment Creation
+
+```bash
+# Create deployment directory from spec
+laconic-so --stack <stack-name> deploy create \
+  --spec-file <spec-file> \
+  --deployment-dir <deployment-dir>
+```
+
+### Deployment Management
+
+```bash
+# Start all services
+laconic-so deployment --dir <deployment-dir> start
+
+# Stop services (preserves volumes)
+laconic-so deployment --dir <deployment-dir> stop
+
+# Stop and remove volumes
+laconic-so deployment --dir <deployment-dir> stop --delete-volumes
+
+# List running services
+laconic-so deployment --dir <deployment-dir> ps
+
+# View logs
+laconic-so deployment --dir <deployment-dir> logs [--tail N] [--follow] [service]
+
+# Show mapped port
+laconic-so deployment --dir <deployment-dir> port <service> <container-port>
+
+# Execute command in service
+laconic-so deployment --dir <deployment-dir> exec <service> <command>
+
+# Update configuration
+laconic-so deployment --dir <deployment-dir> update
+```
+
+### Quick Deploy Commands
+
+```bash
+# Start stack directly
+laconic-so --stack <stack-name> deploy up
+
+# Stop stack
+laconic-so --stack <stack-name> deploy down [--delete-volumes]
+
+# View logs
+laconic-so --stack <stack-name> deploy logs
+
+# Show port mapping
+laconic-so --stack <stack-name> deploy port <service> <container-port>
+```
+
+## Related Documentation
+
+- [CLI Reference](./cli.md) - Complete CLI command documentation
+- 
[Adding a New Stack](./adding-a-new-stack.md) - Creating custom stacks +- [Specification](./spec.md) - Internal structure and design +- [Kubernetes Enhancements](./k8s-deployment-enhancements.md) - Advanced K8s deployment options +- [Web App Deployment](./webapp.md) - Deploying web applications + +## Examples + +For more examples, see the test scripts: +- `scripts/quick-deploy-test.sh` - Quick deployment example +- `tests/deploy/run-deploy-test.sh` - Comprehensive test showing all features + +## Summary + +- Docker Compose is the default and recommended deployment mode +- Two workflows: deployment directory (recommended) or quick deploy +- The standard workflow is: setup → build → init → create → start +- Configuration is flexible with multiple override layers +- Volume persistence is automatic unless explicitly deleted +- All deployment state is contained in the deployment directory +- For Kubernetes deployments, see separate K8s documentation + +You're now ready to deploy stacks using stack-orchestrator with Docker Compose! diff --git a/laconic-network-deployment.md b/laconic-network-deployment.md new file mode 100644 index 00000000..d30661b2 --- /dev/null +++ b/laconic-network-deployment.md @@ -0,0 +1,128 @@ +# Deploying to the Laconic Network + +## Overview + +The Laconic network uses a **registry-based deployment model** where everything is published as blockchain records. + +## Key Documentation in stack-orchestrator + +- `docs/laconicd-with-console.md` - Setting up a laconicd network +- `docs/webapp.md` - Webapp building/running +- `stack_orchestrator/deploy/webapp/` - Implementation (14 modules) + +## Core Concepts + +### LRN (Laconic Resource Name) +Format: `lrn://laconic/[namespace]/[name]` + +Examples: +- `lrn://laconic/deployers/my-deployer-name` +- `lrn://laconic/dns/example.com` +- `lrn://laconic/deployments/example.com` + +### Registry Record Types + +| Record Type | Purpose | +|-------------|---------| +| `ApplicationRecord` | Published app metadata | +| `WebappDeployer` | Deployment service offering | +| `ApplicationDeploymentRequest` | User's request to deploy | +| `ApplicationDeploymentAuction` | Optional bidding for deployers | +| `ApplicationDeploymentRecord` | Completed deployment result | + +## Deployment Workflows + +### 1. Direct Deployment + +``` +User publishes ApplicationDeploymentRequest + → targets specific WebappDeployer (by LRN) + → includes payment TX hash + → Deployer picks up request, builds, deploys, publishes result +``` + +### 2. Auction-Based Deployment + +``` +User publishes ApplicationDeploymentAuction + → Deployers bid (commit/reveal phases) + → Winner selected + → User publishes request targeting winner +``` + +## Key CLI Commands + +### Publish a Deployer Service +```bash +laconic-so publish-webapp-deployer --laconic-config config.yml \ + --api-url https://deployer-api.example.com \ + --name my-deployer \ + --payment-address laconic1... 
\
+  --minimum-payment 1000alnt
+```
+
+### Request Deployment (User Side)
+```bash
+laconic-so request-webapp-deployment --laconic-config config.yml \
+  --app lrn://laconic/apps/my-app \
+  --deployer lrn://laconic/deployers/xyz \
+  --make-payment auto
+```
+
+### Run Deployer Service (Deployer Side)
+```bash
+laconic-so deploy-webapp-from-registry --laconic-config config.yml --discover
+```
+
+## Laconic Config File
+
+All tools require a laconic config file (`laconic.toml`):
+
+```toml
+[cosmos]
+address_prefix = "laconic"
+chain_id = "laconic_9000-1"
+endpoint = "http://localhost:26657"
+key = ""
+password = ""
+```
+
+## Setting Up a Local Laconicd Network
+
+```bash
+# Clone and build
+laconic-so --stack fixturenet-laconic-loaded setup-repositories
+laconic-so --stack fixturenet-laconic-loaded build-containers
+
+# Create and start a deployment
+laconic-so --stack fixturenet-laconic-loaded deploy init --output laconic-loaded-spec.yml
+laconic-so --stack fixturenet-laconic-loaded deploy create --spec-file laconic-loaded-spec.yml --deployment-dir laconic-loaded-deployment
+laconic-so deployment --dir laconic-loaded-deployment start
+
+# Check status
+laconic-so deployment --dir laconic-loaded-deployment exec cli "laconic registry status"
+```
+
+## Key Implementation Files
+
+| File | Purpose |
+|------|---------|
+| `publish_webapp_deployer.py` | Register deployment service on network |
+| `publish_deployment_auction.py` | Create auction for deployers to bid on |
+| `handle_deployment_auction.py` | Monitor and bid on auctions (deployer-side) |
+| `request_webapp_deployment.py` | Create deployment request (user-side) |
+| `deploy_webapp_from_registry.py` | Process requests and deploy (deployer-side) |
+| `request_webapp_undeployment.py` | Request app removal |
+| `undeploy_webapp_from_registry.py` | Process removal requests |
+| `util.py` | LaconicRegistryClient - all registry interactions |
+
+## Payment System
+
+- **Token Denom**: `alnt` (Laconic network tokens)
+- **Payment Options**:
+  - `--make-payment`: Create new payment with amount (or "auto" for deployer's minimum)
+  - `--use-payment`: Reference existing payment TX
+
+## What's NOT Well-Documented
+
+1. No end-to-end tutorial for the full deployment workflow
+2. Stack publishing (vs webapp) process unclear
+3. LRN naming conventions not formally specified
+4. 
Payment economics and token mechanics
diff --git a/stack_orchestrator/data/compose/docker-compose-test.yml b/stack_orchestrator/data/compose/docker-compose-test.yml
index 7c8e8e95..ae11ca13 100644
--- a/stack_orchestrator/data/compose/docker-compose-test.yml
+++ b/stack_orchestrator/data/compose/docker-compose-test.yml
@@ -8,6 +8,8 @@ services:
       CERC_TEST_PARAM_2: "CERC_TEST_PARAM_2_VALUE"
       CERC_TEST_PARAM_3: ${CERC_TEST_PARAM_3:-FAILED}
     volumes:
+      - ../config/test/script.sh:/opt/run.sh
+      - ../config/test/settings.env:/opt/settings.env
       - test-data-bind:/data
       - test-data-auto:/data2
      - test-config:/config:ro
diff --git a/stack_orchestrator/data/config/test/script.sh b/stack_orchestrator/data/config/test/script.sh
new file mode 100644
index 00000000..34a19ab1
--- /dev/null
+++ b/stack_orchestrator/data/config/test/script.sh
@@ -0,0 +1,3 @@
+#!/bin/sh
+
+echo "Hello"
diff --git a/stack_orchestrator/data/config/test/settings.env b/stack_orchestrator/data/config/test/settings.env
new file mode 100644
index 00000000..813f7578
--- /dev/null
+++ b/stack_orchestrator/data/config/test/settings.env
@@ -0,0 +1 @@
+ANSWER=42
diff --git a/stack_orchestrator/data/container-build/cerc-test-container/Dockerfile b/stack_orchestrator/data/container-build/cerc-test-container/Dockerfile
index f4ef5506..46021613 100644
--- a/stack_orchestrator/data/container-build/cerc-test-container/Dockerfile
+++ b/stack_orchestrator/data/container-build/cerc-test-container/Dockerfile
@@ -1,9 +1,6 @@
-FROM ubuntu:latest
+FROM alpine:latest
 
-RUN apt-get update && export DEBIAN_FRONTEND=noninteractive && export DEBCONF_NOWARNINGS="yes" && \
-    apt-get install -y software-properties-common && \
-    apt-get install -y nginx && \
-    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
+RUN apk add --no-cache nginx
 
 EXPOSE 80
diff --git a/stack_orchestrator/data/container-build/cerc-test-container/run.sh b/stack_orchestrator/data/container-build/cerc-test-container/run.sh
index d06f4df4..b4da05ed 100755
--- a/stack_orchestrator/data/container-build/cerc-test-container/run.sh
+++ b/stack_orchestrator/data/container-build/cerc-test-container/run.sh
@@ -1,4 +1,4 @@
-#!/usr/bin/env bash
+#!/usr/bin/env sh
 set -e
 
 if [ -n "$CERC_SCRIPT_DEBUG" ]; then
@@ -8,14 +8,14 @@ fi
 echo "Test container starting"
 
 DATA_DEVICE=$(df | grep "/data$" | awk '{ print $1 }')
-if [[ -n "$DATA_DEVICE" ]]; then
+if [ -n "$DATA_DEVICE" ]; then
   echo "/data: MOUNTED dev=${DATA_DEVICE}"
 else
   echo "/data: not mounted"
 fi
 
 DATA2_DEVICE=$(df | grep "/data2$" | awk '{ print $1 }')
-if [[ -n "$DATA_DEVICE" ]]; then
+if [ -n "$DATA2_DEVICE" ]; then
   echo "/data2: MOUNTED dev=${DATA2_DEVICE}"
 else
   echo "/data2: not mounted"
@@ -23,7 +23,7 @@ fi
 
 # Test if the container's filesystem is old (run previously) or new
 for d in /data /data2; do
-  if [[ -f "$d/exists" ]];
+  if [ -f "$d/exists" ];
   then
     TIMESTAMP=`cat $d/exists`
     echo "$d filesystem is old, created: $TIMESTAMP"
@@ -52,7 +52,7 @@ fi
 if [ -d "/config" ]; then
   echo "/config: EXISTS"
   for f in /config/*; do
-    if [[ -f "$f" ]] || [[ -L "$f" ]]; then
+    if [ -f "$f" ] || [ -L "$f" ]; then
       echo "$f:"
       cat "$f"
       echo ""
@@ -64,4 +64,4 @@ else
 fi
 
 # Run nginx which will block here forever
-/usr/sbin/nginx -g "daemon off;"
+nginx -g "daemon off;"
diff --git a/stack_orchestrator/data/stacks/fixturenet-optimism/deploy/commands.py b/stack_orchestrator/data/stacks/fixturenet-optimism/deploy/commands.py
index a11a1d01..bbec0f81 100644
--- a/stack_orchestrator/data/stacks/fixturenet-optimism/deploy/commands.py
+++ 
b/stack_orchestrator/data/stacks/fixturenet-optimism/deploy/commands.py @@ -14,7 +14,6 @@ # along with this program. If not, see . from stack_orchestrator.deploy.deployment_context import DeploymentContext -from ruamel.yaml import YAML def create(context: DeploymentContext, extra_args): @@ -28,17 +27,12 @@ def create(context: DeploymentContext, extra_args): "compose", "docker-compose-fixturenet-eth.yml" ) - with open(fixturenet_eth_compose_file, "r") as yaml_file: - yaml = YAML() - yaml_data = yaml.load(yaml_file) - new_script = "../config/fixturenet-optimism/run-geth.sh:/opt/testnet/run.sh" - if new_script not in yaml_data["services"]["fixturenet-eth-geth-1"]["volumes"]: - yaml_data["services"]["fixturenet-eth-geth-1"]["volumes"].append(new_script) + def add_geth_volume(yaml_data): + if new_script not in yaml_data["services"]["fixturenet-eth-geth-1"]["volumes"]: + yaml_data["services"]["fixturenet-eth-geth-1"]["volumes"].append(new_script) - with open(fixturenet_eth_compose_file, "w") as yaml_file: - yaml = YAML() - yaml.dump(yaml_data, yaml_file) + context.modify_yaml(fixturenet_eth_compose_file, add_geth_volume) return None diff --git a/stack_orchestrator/data/stacks/test/stack.yml b/stack_orchestrator/data/stacks/test/stack.yml index ac724c89..93d3ecd3 100644 --- a/stack_orchestrator/data/stacks/test/stack.yml +++ b/stack_orchestrator/data/stacks/test/stack.yml @@ -2,7 +2,6 @@ version: "1.0" name: test description: "A test stack" repos: - - git.vdb.to/cerc-io/laconicd - git.vdb.to/cerc-io/test-project@test-branch containers: - cerc/test-container diff --git a/stack_orchestrator/deploy/deployment_context.py b/stack_orchestrator/deploy/deployment_context.py index ffe0f71f..79fc4bb9 100644 --- a/stack_orchestrator/deploy/deployment_context.py +++ b/stack_orchestrator/deploy/deployment_context.py @@ -47,20 +47,6 @@ class DeploymentContext: def get_compose_file(self, name: str): return self.get_compose_dir() / f"docker-compose-{name}.yml" - def modify_yaml(self, file_path: Path, modifier_func): - """Load a YAML from the deployment, apply a modifier, and write back.""" - if not file_path.absolute().is_relative_to(self.deployment_dir): - raise ValueError(f"File is not inside deployment directory: {file_path}") - - yaml = get_yaml() - with open(file_path, "r") as f: - yaml_data = yaml.load(f) - - modifier_func(yaml_data) - - with open(file_path, "w") as f: - yaml.dump(yaml_data, f) - def get_cluster_id(self): return self.id @@ -82,3 +68,17 @@ class DeploymentContext: unique_cluster_descriptor = f"{path},{self.get_stack_file()},None,None" hash = hashlib.md5(unique_cluster_descriptor.encode()).hexdigest()[:16] self.id = f"{constants.cluster_name_prefix}{hash}" + + def modify_yaml(self, file_path: Path, modifier_func): + """Load a YAML, apply a modification function, and write it back.""" + if not file_path.absolute().is_relative_to(self.deployment_dir): + raise ValueError(f"File is not inside deployment directory: {file_path}") + + yaml = get_yaml() + with open(file_path, "r") as f: + yaml_data = yaml.load(f) + + modifier_func(yaml_data) + + with open(file_path, "w") as f: + yaml.dump(yaml_data, f) diff --git a/stack_orchestrator/deploy/deployment_create.py b/stack_orchestrator/deploy/deployment_create.py index 772fd02b..e7586059 100644 --- a/stack_orchestrator/deploy/deployment_create.py +++ b/stack_orchestrator/deploy/deployment_create.py @@ -19,9 +19,12 @@ import os from pathlib import Path from typing import List import random -from shutil import copy, copyfile, copytree +from shutil import 
copy, copyfile, copytree, rmtree from secrets import token_hex import sys +import filecmp +import tempfile + from stack_orchestrator import constants from stack_orchestrator.opts import opts from stack_orchestrator.util import ( @@ -524,6 +527,12 @@ def _check_volume_definitions(spec): "--spec-file", required=True, help="Spec file to use to create this deployment" ) @click.option("--deployment-dir", help="Create deployment files in this directory") +@click.option( + "--update", + is_flag=True, + default=False, + help="Update existing deployment directory, preserving data volumes and env file", +) @click.option( "--helm-chart", is_flag=True, @@ -536,13 +545,21 @@ def _check_volume_definitions(spec): @click.argument("extra_args", nargs=-1, type=click.UNPROCESSED) @click.pass_context def create( - ctx, spec_file, deployment_dir, helm_chart, network_dir, initial_peers, extra_args + ctx, + spec_file, + deployment_dir, + update, + helm_chart, + network_dir, + initial_peers, + extra_args, ): deployment_command_context = ctx.obj return create_operation( deployment_command_context, spec_file, deployment_dir, + update, helm_chart, network_dir, initial_peers, @@ -556,9 +573,10 @@ def create_operation( deployment_command_context, spec_file, deployment_dir, - helm_chart, - network_dir, - initial_peers, + update=False, + helm_chart=False, + network_dir=None, + initial_peers=None, extra_args=(), ): parsed_spec = Spec( @@ -568,23 +586,23 @@ def create_operation( stack_name = parsed_spec["stack"] deployment_type = parsed_spec[constants.deploy_to_key] - stack_file = get_stack_path(stack_name).joinpath(constants.stack_file_name) - parsed_stack = get_parsed_stack_config(stack_name) if opts.o.debug: print(f"parsed spec: {parsed_spec}") + if deployment_dir is None: deployment_dir_path = _make_default_deployment_dir() else: deployment_dir_path = Path(deployment_dir) - if deployment_dir_path.exists(): - error_exit(f"{deployment_dir_path} already exists") - os.mkdir(deployment_dir_path) - # Copy spec file and the stack file into the deployment dir - copyfile(spec_file, deployment_dir_path.joinpath(constants.spec_file_name)) - copyfile(stack_file, deployment_dir_path.joinpath(constants.stack_file_name)) - # Create deployment.yml with cluster-id - _create_deployment_file(deployment_dir_path) + if deployment_dir_path.exists(): + if not update: + error_exit(f"{deployment_dir_path} already exists") + if opts.o.debug: + print(f"Updating existing deployment at {deployment_dir_path}") + else: + if update: + error_exit(f"--update requires that {deployment_dir_path} already exists") + os.mkdir(deployment_dir_path) # Branch to Helm chart generation flow if --helm-chart flag is set if deployment_type == "k8s" and helm_chart: @@ -595,104 +613,41 @@ def create_operation( generate_helm_chart(stack_name, spec_file, deployment_dir_path) return # Exit early for helm chart generation - # Existing deployment flow continues unchanged - # Copy any config varibles from the spec file into an env file suitable for compose - _write_config_file( - spec_file, deployment_dir_path.joinpath(constants.config_file_name) - ) - # Copy any k8s config file into the deployment dir - if deployment_type == "k8s": - _write_kube_config_file( - Path(parsed_spec[constants.kube_config_key]), - deployment_dir_path.joinpath(constants.kube_config_filename), - ) - # Copy the pod files into the deployment dir, fixing up content - pods = get_pod_list(parsed_stack) - destination_compose_dir = deployment_dir_path.joinpath("compose") - 
os.mkdir(destination_compose_dir) - destination_pods_dir = deployment_dir_path.joinpath("pods") - os.mkdir(destination_pods_dir) - yaml = get_yaml() - for pod in pods: - pod_file_path = get_pod_file_path(stack_name, parsed_stack, pod) - if pod_file_path is None: - continue - parsed_pod_file = yaml.load(open(pod_file_path, "r")) - extra_config_dirs = _find_extra_config_dirs(parsed_pod_file, pod) - destination_pod_dir = destination_pods_dir.joinpath(pod) - os.mkdir(destination_pod_dir) - if opts.o.debug: - print(f"extra config dirs: {extra_config_dirs}") - _fixup_pod_file(parsed_pod_file, parsed_spec, destination_compose_dir) - with open( - destination_compose_dir.joinpath("docker-compose-%s.yml" % pod), "w" - ) as output_file: - yaml.dump(parsed_pod_file, output_file) - # Copy the config files for the pod, if any - config_dirs = {pod} - config_dirs = config_dirs.union(extra_config_dirs) - for config_dir in config_dirs: - source_config_dir = resolve_config_dir(stack_name, config_dir) - if os.path.exists(source_config_dir): - destination_config_dir = deployment_dir_path.joinpath( - "config", config_dir - ) - # If the same config dir appears in multiple pods, it may already have - # been copied - if not os.path.exists(destination_config_dir): - copytree(source_config_dir, destination_config_dir) - # Copy the script files for the pod, if any - if pod_has_scripts(parsed_stack, pod): - destination_script_dir = destination_pod_dir.joinpath("scripts") - os.mkdir(destination_script_dir) - script_paths = get_pod_script_paths(parsed_stack, pod) - _copy_files_to_directory(script_paths, destination_script_dir) - if parsed_spec.is_kubernetes_deployment(): - for configmap in parsed_spec.get_configmaps(): - source_config_dir = resolve_config_dir(stack_name, configmap) - if os.path.exists(source_config_dir): - destination_config_dir = deployment_dir_path.joinpath( - "configmaps", configmap - ) - copytree( - source_config_dir, destination_config_dir, dirs_exist_ok=True - ) - else: - # TODO: We should probably only do this if the volume is marked :ro. - for volume_name, volume_path in parsed_spec.get_volumes().items(): - source_config_dir = resolve_config_dir(stack_name, volume_name) - # Only copy if the source exists and is _not_ empty. - if os.path.exists(source_config_dir) and os.listdir(source_config_dir): - destination_config_dir = deployment_dir_path.joinpath(volume_path) - # Only copy if the destination exists and _is_ empty. 
- if os.path.exists(destination_config_dir) and not os.listdir( - destination_config_dir - ): - copytree( - source_config_dir, - destination_config_dir, - dirs_exist_ok=True, - ) + if update: + # Sync mode: write to temp dir, then copy to deployment dir with backups + temp_dir = Path(tempfile.mkdtemp(prefix="deployment-sync-")) + try: + # Write deployment files to temp dir (skip deployment.yml to preserve cluster ID) + _write_deployment_files( + temp_dir, + Path(spec_file), + parsed_spec, + stack_name, + deployment_type, + include_deployment_file=False, + ) - # Copy the job files into the deployment dir (for Docker deployments) - jobs = get_job_list(parsed_stack) - if jobs and not parsed_spec.is_kubernetes_deployment(): - destination_compose_jobs_dir = deployment_dir_path.joinpath("compose-jobs") - os.mkdir(destination_compose_jobs_dir) - for job in jobs: - job_file_path = get_job_file_path(stack_name, parsed_stack, job) - if job_file_path and job_file_path.exists(): - parsed_job_file = yaml.load(open(job_file_path, "r")) - _fixup_pod_file(parsed_job_file, parsed_spec, destination_compose_dir) - with open( - destination_compose_jobs_dir.joinpath( - "docker-compose-%s.yml" % job - ), - "w", - ) as output_file: - yaml.dump(parsed_job_file, output_file) - if opts.o.debug: - print(f"Copied job compose file: {job}") + # Copy from temp to deployment dir, excluding data volumes and backing up changed files + # Exclude data/* to avoid touching user data volumes + # Exclude config file to preserve deployment settings (XXX breaks passing config vars + # from spec. could warn about this or not exclude...) + exclude_patterns = ["data", "data/*", constants.config_file_name] + _safe_copy_tree( + temp_dir, deployment_dir_path, exclude_patterns=exclude_patterns + ) + finally: + # Clean up temp dir + rmtree(temp_dir) + else: + # Normal mode: write directly to deployment dir + _write_deployment_files( + deployment_dir_path, + Path(spec_file), + parsed_spec, + stack_name, + deployment_type, + include_deployment_file=True, + ) # Delegate to the stack's Python code # The deploy create command doesn't require a --stack argument so we need @@ -712,6 +667,181 @@ def create_operation( ) +def _safe_copy_tree(src: Path, dst: Path, exclude_patterns: List[str] = None): + """ + Recursively copy a directory tree, backing up changed files with .bak suffix. 
+ + :param src: Source directory + :param dst: Destination directory + :param exclude_patterns: List of path patterns to exclude (relative to src) + """ + if exclude_patterns is None: + exclude_patterns = [] + + def should_exclude(path: Path) -> bool: + """Check if path matches any exclude pattern.""" + rel_path = path.relative_to(src) + for pattern in exclude_patterns: + if rel_path.match(pattern): + return True + return False + + def safe_copy_file(src_file: Path, dst_file: Path): + """Copy file, backing up destination if it differs.""" + if ( + dst_file.exists() + and not dst_file.is_dir() + and not filecmp.cmp(src_file, dst_file) + ): + os.rename(dst_file, f"{dst_file}.bak") + copy(src_file, dst_file) + + # Walk the source tree + for src_path in src.rglob("*"): + if should_exclude(src_path): + continue + + rel_path = src_path.relative_to(src) + dst_path = dst / rel_path + + if src_path.is_dir(): + dst_path.mkdir(parents=True, exist_ok=True) + else: + dst_path.parent.mkdir(parents=True, exist_ok=True) + safe_copy_file(src_path, dst_path) + + +def _write_deployment_files( + target_dir: Path, + spec_file: Path, + parsed_spec: Spec, + stack_name: str, + deployment_type: str, + include_deployment_file: bool = True, +): + """ + Write deployment files to target directory. + + :param target_dir: Directory to write files to + :param spec_file: Path to spec file + :param parsed_spec: Parsed spec object + :param stack_name: Name of stack + :param deployment_type: Type of deployment + :param include_deployment_file: Whether to create deployment.yml file (skip for update) + """ + stack_file = get_stack_path(stack_name).joinpath(constants.stack_file_name) + parsed_stack = get_parsed_stack_config(stack_name) + + # Copy spec file and the stack file into the target dir + copyfile(spec_file, target_dir.joinpath(constants.spec_file_name)) + copyfile(stack_file, target_dir.joinpath(constants.stack_file_name)) + + # Create deployment file if requested + if include_deployment_file: + _create_deployment_file(target_dir) + + # Copy any config variables from the spec file into an env file suitable for compose + _write_config_file(spec_file, target_dir.joinpath(constants.config_file_name)) + + # Copy any k8s config file into the target dir + if deployment_type == "k8s": + _write_kube_config_file( + Path(parsed_spec[constants.kube_config_key]), + target_dir.joinpath(constants.kube_config_filename), + ) + + # Copy the pod files into the target dir, fixing up content + pods = get_pod_list(parsed_stack) + destination_compose_dir = target_dir.joinpath("compose") + os.makedirs(destination_compose_dir, exist_ok=True) + destination_pods_dir = target_dir.joinpath("pods") + os.makedirs(destination_pods_dir, exist_ok=True) + yaml = get_yaml() + + for pod in pods: + pod_file_path = get_pod_file_path(stack_name, parsed_stack, pod) + if pod_file_path is None: + continue + parsed_pod_file = yaml.load(open(pod_file_path, "r")) + extra_config_dirs = _find_extra_config_dirs(parsed_pod_file, pod) + destination_pod_dir = destination_pods_dir.joinpath(pod) + os.makedirs(destination_pod_dir, exist_ok=True) + if opts.o.debug: + print(f"extra config dirs: {extra_config_dirs}") + _fixup_pod_file(parsed_pod_file, parsed_spec, destination_compose_dir) + with open( + destination_compose_dir.joinpath("docker-compose-%s.yml" % pod), "w" + ) as output_file: + yaml.dump(parsed_pod_file, output_file) + + # Copy the config files for the pod, if any + config_dirs = {pod} + config_dirs = config_dirs.union(extra_config_dirs) + for config_dir in 
config_dirs: + source_config_dir = resolve_config_dir(stack_name, config_dir) + if os.path.exists(source_config_dir): + destination_config_dir = target_dir.joinpath("config", config_dir) + copytree(source_config_dir, destination_config_dir, dirs_exist_ok=True) + + # Copy the script files for the pod, if any + if pod_has_scripts(parsed_stack, pod): + destination_script_dir = destination_pod_dir.joinpath("scripts") + os.makedirs(destination_script_dir, exist_ok=True) + script_paths = get_pod_script_paths(parsed_stack, pod) + _copy_files_to_directory(script_paths, destination_script_dir) + + if parsed_spec.is_kubernetes_deployment(): + for configmap in parsed_spec.get_configmaps(): + source_config_dir = resolve_config_dir(stack_name, configmap) + if os.path.exists(source_config_dir): + destination_config_dir = target_dir.joinpath( + "configmaps", configmap + ) + copytree( + source_config_dir, destination_config_dir, dirs_exist_ok=True + ) + else: + # TODO: + # this is odd - looks up config dir that matches a volume name, then copies as a mount dir? + # AFAICT this is not used by or relevant to any existing stack - roy + + # TODO: We should probably only do this if the volume is marked :ro. + for volume_name, volume_path in parsed_spec.get_volumes().items(): + source_config_dir = resolve_config_dir(stack_name, volume_name) + # Only copy if the source exists and is _not_ empty. + if os.path.exists(source_config_dir) and os.listdir(source_config_dir): + destination_config_dir = target_dir.joinpath(volume_path) + # Only copy if the destination exists and _is_ empty. + if os.path.exists(destination_config_dir) and not os.listdir( + destination_config_dir + ): + copytree( + source_config_dir, + destination_config_dir, + dirs_exist_ok=True, + ) + + # Copy the job files into the target dir (for Docker deployments) + jobs = get_job_list(parsed_stack) + if jobs and not parsed_spec.is_kubernetes_deployment(): + destination_compose_jobs_dir = target_dir.joinpath("compose-jobs") + os.makedirs(destination_compose_jobs_dir, exist_ok=True) + for job in jobs: + job_file_path = get_job_file_path(stack_name, parsed_stack, job) + if job_file_path and job_file_path.exists(): + parsed_job_file = yaml.load(open(job_file_path, "r")) + _fixup_pod_file(parsed_job_file, parsed_spec, destination_compose_dir) + with open( + destination_compose_jobs_dir.joinpath( + "docker-compose-%s.yml" % job + ), + "w", + ) as output_file: + yaml.dump(parsed_job_file, output_file) + if opts.o.debug: + print(f"Copied job compose file: {job}") + + # TODO: this code should be in the stack .py files but # we haven't yet figured out how to integrate click across # the plugin boundary diff --git a/stack_orchestrator/deploy/k8s/cluster_info.py b/stack_orchestrator/deploy/k8s/cluster_info.py index bce1c310..42c41b4b 100644 --- a/stack_orchestrator/deploy/k8s/cluster_info.py +++ b/stack_orchestrator/deploy/k8s/cluster_info.py @@ -216,23 +216,32 @@ class ClusterInfo: # TODO: suppoprt multiple services def get_service(self): - port = None - for pod_name in self.parsed_pod_yaml_map: - pod = self.parsed_pod_yaml_map[pod_name] - services = pod["services"] - for service_name in services: - service_info = services[service_name] - if "ports" in service_info: - port = int(service_info["ports"][0]) - if opts.o.debug: - print(f"service port: {port}") - if port is None: + # Collect all ports from http-proxy routes + ports_set = set() + http_proxy_list = self.spec.get_http_proxy() + if http_proxy_list: + for http_proxy in http_proxy_list: + for route in 
http_proxy.get("routes", []): + proxy_to = route.get("proxy-to", "") + if ":" in proxy_to: + port = int(proxy_to.split(":")[1]) + ports_set.add(port) + if opts.o.debug: + print(f"http-proxy route port: {port}") + + if not ports_set: return None + + service_ports = [ + client.V1ServicePort(port=p, target_port=p, name=f"port-{p}") + for p in sorted(ports_set) + ] + service = client.V1Service( metadata=client.V1ObjectMeta(name=f"{self.app_name}-service"), spec=client.V1ServiceSpec( type="ClusterIP", - ports=[client.V1ServicePort(port=port, target_port=port)], + ports=service_ports, selector={"app": self.app_name}, ), ) diff --git a/stack_orchestrator/deploy/k8s/deploy_k8s.py b/stack_orchestrator/deploy/k8s/deploy_k8s.py index cf8f564f..3d0b697c 100644 --- a/stack_orchestrator/deploy/k8s/deploy_k8s.py +++ b/stack_orchestrator/deploy/k8s/deploy_k8s.py @@ -241,7 +241,7 @@ class K8sDeployer(Deployer): service = self.cluster_info.get_service() if opts.o.debug: print(f"Sending this service: {service}") - if not opts.o.dry_run: + if service and not opts.o.dry_run: service_resp = self.core_api.create_namespaced_service( namespace=self.k8s_namespace, body=service ) diff --git a/stack_orchestrator/deploy/k8s/helpers.py b/stack_orchestrator/deploy/k8s/helpers.py index a125d4f5..f7603b5e 100644 --- a/stack_orchestrator/deploy/k8s/helpers.py +++ b/stack_orchestrator/deploy/k8s/helpers.py @@ -27,6 +27,48 @@ from stack_orchestrator.deploy.deployer import DeployerException from stack_orchestrator import constants +def is_host_path_mount(volume_name: str) -> bool: + """Check if a volume name is a host path mount (starts with /, ., or ~).""" + return volume_name.startswith(("/", ".", "~")) + + +def sanitize_host_path_to_volume_name(host_path: str) -> str: + """Convert a host path to a valid k8s volume name. + + K8s volume names must be lowercase, alphanumeric, with - allowed. + E.g., '../config/test/script.sh' -> 'host-path-config-test-script-sh' + """ + # Remove leading ./ or ../ + clean_path = re.sub(r"^\.+/", "", host_path) + # Replace path separators and dots with hyphens + name = re.sub(r"[/.]", "-", clean_path) + # Remove any non-alphanumeric characters except hyphens + name = re.sub(r"[^a-zA-Z0-9-]", "", name) + # Convert to lowercase + name = name.lower() + # Remove leading/trailing hyphens and collapse multiple hyphens + name = re.sub(r"-+", "-", name).strip("-") + # Prefix with 'host-path-' to distinguish from named volumes + return f"host-path-{name}" + + +def resolve_host_path_for_kind(host_path: str, deployment_dir: Path) -> Path: + """Resolve a host path mount (relative to compose file) to absolute path. + + Compose files are in deployment_dir/compose/, so '../config/foo' + resolves to deployment_dir/config/foo. + """ + # The path is relative to the compose directory + compose_dir = deployment_dir.joinpath("compose") + resolved = compose_dir.joinpath(host_path).resolve() + return resolved + + +def get_kind_host_path_mount_path(sanitized_name: str) -> str: + """Get the path inside the kind node where a host path mount will be available.""" + return f"/mnt/{sanitized_name}" + + def get_kind_cluster(): """Get an existing kind cluster, if any. 
diff --git a/stack_orchestrator/deploy/k8s/helpers.py b/stack_orchestrator/deploy/k8s/helpers.py
index a125d4f5..f7603b5e 100644
--- a/stack_orchestrator/deploy/k8s/helpers.py
+++ b/stack_orchestrator/deploy/k8s/helpers.py
@@ -27,6 +27,48 @@ from stack_orchestrator.deploy.deployer import DeployerException
 from stack_orchestrator import constants


+def is_host_path_mount(volume_name: str) -> bool:
+    """Check if a volume name is a host path mount (starts with /, ., or ~)."""
+    return volume_name.startswith(("/", ".", "~"))
+
+
+def sanitize_host_path_to_volume_name(host_path: str) -> str:
+    """Convert a host path to a valid k8s volume name.
+
+    K8s volume names must be lowercase, alphanumeric, with - allowed.
+    E.g., '../config/test/script.sh' -> 'host-path-config-test-script-sh'
+    """
+    # Remove leading ./ or ../
+    clean_path = re.sub(r"^\.+/", "", host_path)
+    # Replace path separators and dots with hyphens
+    name = re.sub(r"[/.]", "-", clean_path)
+    # Remove any non-alphanumeric characters except hyphens
+    name = re.sub(r"[^a-zA-Z0-9-]", "", name)
+    # Convert to lowercase
+    name = name.lower()
+    # Remove leading/trailing hyphens and collapse multiple hyphens
+    name = re.sub(r"-+", "-", name).strip("-")
+    # Prefix with 'host-path-' to distinguish from named volumes
+    return f"host-path-{name}"
+
+
+def resolve_host_path_for_kind(host_path: str, deployment_dir: Path) -> Path:
+    """Resolve a host path mount (relative to the compose file) to an absolute path.
+
+    Compose files are in deployment_dir/compose/, so '../config/foo'
+    resolves to deployment_dir/config/foo.
+    """
+    # The path is relative to the compose directory
+    compose_dir = deployment_dir.joinpath("compose")
+    resolved = compose_dir.joinpath(host_path).resolve()
+    return resolved
+
+
+def get_kind_host_path_mount_path(sanitized_name: str) -> str:
+    """Get the path inside the kind node where a host path mount will be available."""
+    return f"/mnt/{sanitized_name}"
+
+
 def get_kind_cluster():
     """Get an existing kind cluster, if any.
@@ -177,6 +219,7 @@ def volume_mounts_for_service(parsed_pod_files, service):
                 for mount_string in volumes:
                     # Looks like: test-data:/data
                     # or test-data:/data:ro or test-data:/data:rw
+                    # or ../config/file.sh:/opt/file.sh (host path mount)
                     if opts.o.debug:
                         print(f"mount_string: {mount_string}")
                     mount_split = mount_string.split(":")
@@ -185,13 +228,21 @@
                     mount_options = (
                         mount_split[2] if len(mount_split) == 3 else None
                     )
+                    # For host path mounts, use sanitized name
+                    if is_host_path_mount(volume_name):
+                        k8s_volume_name = sanitize_host_path_to_volume_name(
+                            volume_name
+                        )
+                    else:
+                        k8s_volume_name = volume_name
                     if opts.o.debug:
                         print(f"volume_name: {volume_name}")
+                        print(f"k8s_volume_name: {k8s_volume_name}")
                         print(f"mount path: {mount_path}")
                         print(f"mount options: {mount_options}")
                     volume_device = client.V1VolumeMount(
                         mount_path=mount_path,
-                        name=volume_name,
+                        name=k8s_volume_name,
                         read_only="ro" == mount_options,
                     )
                     result.append(volume_device)
@@ -200,8 +251,12 @@ def volumes_for_pod_files(parsed_pod_files, spec, app_name):
     result = []
+    seen_host_path_volumes = set()  # Track host path volumes to avoid duplicates
+
     for pod in parsed_pod_files:
         parsed_pod_file = parsed_pod_files[pod]
+
+        # Handle named volumes from top-level volumes section
         if "volumes" in parsed_pod_file:
             volumes = parsed_pod_file["volumes"]
             for volume_name in volumes.keys():
@@ -220,6 +275,35 @@ def volumes_for_pod_files(parsed_pod_files, spec, app_name):
                 name=volume_name, persistent_volume_claim=claim
             )
             result.append(volume)
+
+        # Handle host path mounts from service volumes
+        if "services" in parsed_pod_file:
+            services = parsed_pod_file["services"]
+            for service_name in services:
+                service_obj = services[service_name]
+                if "volumes" in service_obj:
+                    for mount_string in service_obj["volumes"]:
+                        mount_split = mount_string.split(":")
+                        volume_source = mount_split[0]
+                        if is_host_path_mount(volume_source):
+                            sanitized_name = sanitize_host_path_to_volume_name(
+                                volume_source
+                            )
+                            if sanitized_name not in seen_host_path_volumes:
+                                seen_host_path_volumes.add(sanitized_name)
+                                # Create hostPath volume for mount inside kind node
+                                kind_mount_path = get_kind_host_path_mount_path(
+                                    sanitized_name
+                                )
+                                host_path_source = client.V1HostPathVolumeSource(
+                                    path=kind_mount_path, type="FileOrCreate"
+                                )
+                                volume = client.V1Volume(
+                                    name=sanitized_name, host_path=host_path_source
+                                )
+                                result.append(volume)
+                                if opts.o.debug:
+                                    print(f"Created hostPath volume: {sanitized_name}")
     return result
@@ -238,6 +322,8 @@ def _make_absolute_host_path(data_mount_path: Path, deployment_dir: Path) -> Path:
 def _generate_kind_mounts(parsed_pod_files, deployment_dir, deployment_context):
     volume_definitions = []
     volume_host_path_map = _get_host_paths_for_volumes(deployment_context)
+    seen_host_path_mounts = set()  # Track to avoid duplicate mounts
+
     # Note these paths are relative to the location of the pod files (at present)
     # So we need to fix up to make them correct and absolute because kind assumes
     # relative to the cwd.
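
The sanitization pipeline above is deterministic, so the same compose mount string always maps to the same Kubernetes volume name across `volume_mounts_for_service`, `volumes_for_pod_files`, and the kind mount generation below. A standalone copy of the function to show expected outputs (the sample paths are illustrative):

```python
import re


def sanitize_host_path_to_volume_name(host_path: str) -> str:
    """Map a compose host path to a k8s-safe volume name."""
    clean_path = re.sub(r"^\.+/", "", host_path)    # strip leading ./ or ../
    name = re.sub(r"[/.]", "-", clean_path)         # separators and dots -> hyphens
    name = re.sub(r"[^a-zA-Z0-9-]", "", name)       # drop anything else
    name = name.lower()
    name = re.sub(r"-+", "-", name).strip("-")      # collapse and trim hyphens
    return f"host-path-{name}"


assert (
    sanitize_host_path_to_volume_name("../config/test/script.sh")
    == "host-path-config-test-script-sh"
)
assert sanitize_host_path_to_volume_name("./data/app.conf") == "host-path-data-app-conf"
```
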
@@ -252,28 +338,58 @@ def _generate_kind_mounts(parsed_pod_files, deployment_dir, deployment_context):
                 for mount_string in volumes:
                     # Looks like: test-data:/data
                     # or test-data:/data:ro or test-data:/data:rw
+                    # or ../config/file.sh:/opt/file.sh (host path mount)
                     if opts.o.debug:
                         print(f"mount_string: {mount_string}")
                     mount_split = mount_string.split(":")
                     volume_name = mount_split[0]
                     mount_path = mount_split[1]
-                    if opts.o.debug:
-                        print(f"volume_name: {volume_name}")
-                        print(f"map: {volume_host_path_map}")
-                        print(f"mount path: {mount_path}")
-                    if volume_name not in deployment_context.spec.get_configmaps():
-                        if volume_host_path_map[volume_name]:
-                            host_path = _make_absolute_host_path(
-                                volume_host_path_map[volume_name],
-                                deployment_dir,
-                            )
-                            container_path = get_kind_pv_bind_mount_path(
-                                volume_name
-                            )
-                            volume_definitions.append(
-                                f"  - hostPath: {host_path}\n"
-                                f"    containerPath: {container_path}\n"
-                            )
+
+                    if is_host_path_mount(volume_name):
+                        # Host path mount - add extraMount for kind
+                        sanitized_name = sanitize_host_path_to_volume_name(
+                            volume_name
+                        )
+                        if sanitized_name not in seen_host_path_mounts:
+                            seen_host_path_mounts.add(sanitized_name)
+                            # Resolve path relative to compose directory
+                            host_path = resolve_host_path_for_kind(
+                                volume_name, deployment_dir
+                            )
+                            container_path = get_kind_host_path_mount_path(
+                                sanitized_name
+                            )
+                            volume_definitions.append(
+                                f"  - hostPath: {host_path}\n"
+                                f"    containerPath: {container_path}\n"
+                            )
+                            if opts.o.debug:
+                                print(f"Added host path mount: {host_path}")
+                    else:
+                        # Named volume
+                        if opts.o.debug:
+                            print(f"volume_name: {volume_name}")
+                            print(f"map: {volume_host_path_map}")
+                            print(f"mount path: {mount_path}")
+                        if (
+                            volume_name
+                            not in deployment_context.spec.get_configmaps()
+                        ):
+                            if (
+                                volume_name in volume_host_path_map
+                                and volume_host_path_map[volume_name]
+                            ):
+                                host_path = _make_absolute_host_path(
+                                    volume_host_path_map[volume_name],
+                                    deployment_dir,
+                                )
+                                container_path = get_kind_pv_bind_mount_path(
+                                    volume_name
+                                )
+                                volume_definitions.append(
+                                    f"  - hostPath: {host_path}\n"
+                                    f"    containerPath: {container_path}\n"
+                                )
     return (
         ""
         if len(volume_definitions) == 0
diff --git a/stack_orchestrator/deploy/webapp/deploy_webapp.py b/stack_orchestrator/deploy/webapp/deploy_webapp.py
index 6d5ea6c2..6170dbe3 100644
--- a/stack_orchestrator/deploy/webapp/deploy_webapp.py
+++ b/stack_orchestrator/deploy/webapp/deploy_webapp.py
@@ -94,7 +94,7 @@ def create_deployment(
     # Add the TLS and DNS spec
     _fixup_url_spec(spec_file_name, url)
     create_operation(
-        deploy_command_context, spec_file_name, deployment_dir, False, None, None
+        deploy_command_context, spec_file_name, deployment_dir, False, False, None, None
     )
     # Fix up the container tag inside the deployment compose file
     _fixup_container_tag(deployment_dir, image)
diff --git a/tests/database/run-test.sh b/tests/database/run-test.sh
index 405f6d34..2b68cb2c 100755
--- a/tests/database/run-test.sh
+++ b/tests/database/run-test.sh
@@ -86,7 +86,7 @@ fi
 echo "deploy init test: passed"

 # Switch to a full path for the data dir so it gets provisioned as a host bind mounted volume and preserved beyond cluster lifetime
-sed -i "s|^\(\s*db-data:$\)$|\1 ${test_deployment_dir}/data/db-data|" $test_deployment_spec
+sed -i.bak "s|^\(\s*db-data:$\)$|\1 ${test_deployment_dir}/data/db-data|" $test_deployment_spec

 $TEST_TARGET_SO --stack ${stack} deploy create --spec-file $test_deployment_spec --deployment-dir $test_deployment_dir
 # Check the deployment dir exists
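
Putting the kind helpers together: a relative mount in a compose file resolves to an absolute path on the host, that path is mounted into the kind node at a fixed location, and the pod's `hostPath` volume points at the node-side path. A sketch of the round trip, using copies of the two helpers from the diff above (the deployment directory path is illustrative):

```python
from pathlib import Path


def resolve_host_path_for_kind(host_path: str, deployment_dir: Path) -> Path:
    # Mount strings are written relative to deployment_dir/compose/
    return deployment_dir.joinpath("compose").joinpath(host_path).resolve()


def get_kind_host_path_mount_path(sanitized_name: str) -> str:
    # Fixed location inside the kind node for this mount
    return f"/mnt/{sanitized_name}"


deployment_dir = Path("/srv/deployments/test-deployment-dir")  # illustrative
mount = "../config/test/script.sh"
host_side = resolve_host_path_for_kind(mount, deployment_dir)
node_side = get_kind_host_path_mount_path("host-path-config-test-script-sh")
print(host_side)  # /srv/deployments/test-deployment-dir/config/test/script.sh
print(node_side)  # /mnt/host-path-config-test-script-sh
```

The kind `extraMounts` entry bridges host to node; the `V1HostPathVolumeSource` created in `volumes_for_pod_files` bridges node to pod.
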
diff --git a/tests/deploy/run-deploy-test.sh b/tests/deploy/run-deploy-test.sh
index fb574b03..73f26da5 100755
--- a/tests/deploy/run-deploy-test.sh
+++ b/tests/deploy/run-deploy-test.sh
@@ -14,8 +14,13 @@ delete_cluster_exit () {
 # Test basic stack-orchestrator deploy
 echo "Running stack-orchestrator deploy test"
-# Bit of a hack, test the most recent package
-TEST_TARGET_SO=$( ls -t1 ./package/laconic-so* | head -1 )
+
+if [ "$1" == "from-path" ]; then
+  TEST_TARGET_SO="laconic-so"
+else
+  TEST_TARGET_SO=$( ls -t1 ./package/laconic-so* | head -1 )
+fi
+
 # Set a non-default repo dir
 export CERC_REPO_BASE_DIR=~/stack-orchestrator-test/repo-base-dir
 echo "Testing this package: $TEST_TARGET_SO"
@@ -29,6 +34,7 @@ mkdir -p $CERC_REPO_BASE_DIR
 # with and without volume removal
 $TEST_TARGET_SO --stack test setup-repositories
 $TEST_TARGET_SO --stack test build-containers
+
 # Test deploy command execution
 $TEST_TARGET_SO --stack test deploy setup $CERC_REPO_BASE_DIR
 # Check that we now have the expected output directory
@@ -80,6 +86,7 @@ else
   exit 1
 fi
 $TEST_TARGET_SO --stack test deploy down --delete-volumes
+
 # Basic test of creating a deployment
 test_deployment_dir=$CERC_REPO_BASE_DIR/test-deployment-dir
 test_deployment_spec=$CERC_REPO_BASE_DIR/test-deployment-spec.yml
@@ -117,6 +124,101 @@ fi
 echo "dbfc7a4d-44a7-416d-b5f3-29842cc47650" > $test_deployment_dir/data/test-config/test_config
 echo "deploy create output file test: passed"

+
+# Test sync functionality: update deployment without destroying data
+# First, create a marker file in the data directory to verify it's preserved
+test_data_marker="$test_deployment_dir/data/test-data-bind/sync-test-marker.txt"
+echo "original-data-$(date +%s)" > "$test_data_marker"
+original_marker_content=$(<$test_data_marker)
+
+# Modify a config file in the deployment to differ from source (to test backup)
+test_config_file="$test_deployment_dir/config/test/settings.env"
+test_config_file_original_content=$(<$test_config_file)
+test_config_file_changed_content="ANSWER=69"
+echo "$test_config_file_changed_content" > "$test_config_file"
+
+# Check a config file that matches the source (to test no backup for unchanged files)
+test_unchanged_config="$test_deployment_dir/config/test/script.sh"
+
+# Modify spec file to simulate an update
+sed -i.bak 's/CERC_TEST_PARAM_3:/CERC_TEST_PARAM_3: FASTER/' $test_deployment_spec
+
+# Create/modify config.env to test it isn't overwritten during sync
+config_env_file="$test_deployment_dir/config.env"
+config_env_persistent_content="PERSISTENT_VALUE=should-not-be-overwritten-$(date +%s)"
+echo "$config_env_persistent_content" >> "$config_env_file"
+original_config_env_content=$(<$config_env_file)
+
+# Run sync to update deployment files without destroying data
+$TEST_TARGET_SO --stack test deploy create --spec-file $test_deployment_spec --deployment-dir $test_deployment_dir --update
+
+# Verify config.env was not overwritten
+synced_config_env_content=$(<$config_env_file)
+if [ "$synced_config_env_content" == "$original_config_env_content" ]; then
+  echo "deployment update test: config.env preserved - passed"
+else
+  echo "deployment update test: config.env was overwritten - FAILED"
+  echo "Expected: $original_config_env_content"
+  echo "Got: $synced_config_env_content"
+  exit 1
+fi
+
+# Verify the spec file was updated in deployment dir
+updated_deployed_spec=$(<$test_deployment_dir/spec.yml)
+if [[ "$updated_deployed_spec" == *"FASTER"* ]]; then
+  echo "deployment update test: spec file updated"
+else
+  echo "deployment update test: spec file not updated - FAILED"
+  exit 1
+fi
-f "$test_config_backup" ]; then + backup_content=$(<$test_config_backup) + if [ "$backup_content" == "$test_config_file_changed_content" ]; then + echo "deployment update test: changed config file backed up - passed" + else + echo "deployment update test: backup content incorrect - FAILED" + exit 1 + fi +else + echo "deployment update test: backup file not created for changed file - FAILED" + exit 1 +fi + +# Verify unchanged config file was NOT backed up +test_unchanged_backup="$test_unchanged_config.bak" +if [ -f "$test_unchanged_backup" ]; then + echo "deployment update test: backup created for unchanged file - FAILED" + exit 1 +else + echo "deployment update test: no backup for unchanged file - passed" +fi + +# Verify the config file was updated from source +updated_config_content=$(<$test_config_file) +if [ "$updated_config_content" == "$test_config_file_original_content" ]; then + echo "deployment update test: config file updated from source - passed" +else + echo "deployment update test: config file not updated correctly - FAILED" + exit 1 +fi + +# Verify the data marker file still exists with original content +if [ ! -f "$test_data_marker" ]; then + echo "deployment update test: data file deleted - FAILED" + exit 1 +fi +synced_marker_content=$(<$test_data_marker) +if [ "$synced_marker_content" == "$original_marker_content" ]; then + echo "deployment update test: data preserved - passed" +else + echo "deployment update test: data corrupted - FAILED" + exit 1 +fi +echo "deployment update test: passed" + # Try to start the deployment $TEST_TARGET_SO deployment --dir $test_deployment_dir start # Check logs command works diff --git a/tests/external-stack/run-test.sh b/tests/external-stack/run-test.sh index 084f3b9d..54c6f18f 100755 --- a/tests/external-stack/run-test.sh +++ b/tests/external-stack/run-test.sh @@ -125,6 +125,49 @@ fi echo "dbfc7a4d-44a7-416d-b5f3-29842cc47650" > $test_deployment_dir/data/test-config/test_config echo "deploy create output file test: passed" + +# Test sync functionality: update deployment without destroying data +# First, create a marker file in the data directory to verify it's preserved +test_data_marker="$test_deployment_dir/data/test-data/sync-test-marker.txt" +mkdir -p "$test_deployment_dir/data/test-data" +echo "external-stack-data-$(date +%s)" > "$test_data_marker" +original_marker_content=$(<$test_data_marker) +# Verify deployment file exists and preserve its cluster ID +original_cluster_id=$(grep "cluster-id:" "$test_deployment_dir/deployment.yml" 2>/dev/null || echo "") +# Modify spec file to simulate an update +sed -i.bak 's/CERC_TEST_PARAM_1=PASSED/CERC_TEST_PARAM_1=UPDATED/' $test_deployment_spec +# Run sync to update deployment files without destroying data +$TEST_TARGET_SO_STACK deploy create --spec-file $test_deployment_spec --deployment-dir $test_deployment_dir --update +# Verify the spec file was updated in deployment dir +updated_deployed_spec=$(<$test_deployment_dir/spec.yml) +if [[ "$updated_deployed_spec" == *"UPDATED"* ]]; then + echo "deploy sync test: spec file updated" +else + echo "deploy sync test: spec file not updated - FAILED" + exit 1 +fi +# Verify the data marker file still exists with original content +if [ ! 
-f "$test_data_marker" ]; then + echo "deploy sync test: data file deleted - FAILED" + exit 1 +fi +synced_marker_content=$(<$test_data_marker) +if [ "$synced_marker_content" == "$original_marker_content" ]; then + echo "deploy sync test: data preserved - passed" +else + echo "deploy sync test: data corrupted - FAILED" + exit 1 +fi +# Verify cluster ID was preserved (not regenerated) +new_cluster_id=$(grep "cluster-id:" "$test_deployment_dir/deployment.yml" 2>/dev/null || echo "") +if [ -n "$original_cluster_id" ] && [ "$original_cluster_id" == "$new_cluster_id" ]; then + echo "deploy sync test: cluster ID preserved - passed" +else + echo "deploy sync test: cluster ID not preserved - FAILED" + exit 1 +fi +echo "deploy sync test: passed" + # Try to start the deployment $TEST_TARGET_SO deployment --dir $test_deployment_dir start # Check logs command works diff --git a/tests/webapp-test/run-webapp-test.sh b/tests/webapp-test/run-webapp-test.sh index 6a12d54c..f950334e 100755 --- a/tests/webapp-test/run-webapp-test.sh +++ b/tests/webapp-test/run-webapp-test.sh @@ -40,7 +40,7 @@ sleep 3 wget --tries 20 --retry-connrefused --waitretry=3 -O test.before -m http://localhost:3000 docker logs $CONTAINER_ID -docker remove -f $CONTAINER_ID +docker rm -f $CONTAINER_ID echo "Running app container test" CONTAINER_ID=$(docker run -p 3000:80 -e CERC_WEBAPP_DEBUG=$CHECK -e CERC_SCRIPT_DEBUG=$CERC_SCRIPT_DEBUG -d ${app_image_name}) @@ -48,7 +48,7 @@ sleep 3 wget --tries 20 --retry-connrefused --waitretry=3 -O test.after -m http://localhost:3000 docker logs $CONTAINER_ID -docker remove -f $CONTAINER_ID +docker rm -f $CONTAINER_ID echo "###########################################################################" echo ""