Compare commits


2 Commits

8afae1904b Add support for running jobs from a stack (#975)
Part of https://plan.wireit.in/deepstack/browse/VUL-265/

Reviewed-on: #975
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2025-12-04 06:13:28 +00:00
7acabb0743 Add support for generating Helm charts when creating a deployment (#974)
Part of https://plan.wireit.in/deepstack/browse/VUL-265/

- Added a flag `--helm-chart` to `deploy create` command
- Uses Kompose CLI wrapper to generate a helm chart from compose files in a stack
- To be handled in follow-on PR(s):
  - Templatize generated charts and generate a `values.yml` file with defaults

Reviewed-on: #974
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2025-11-27 06:43:07 +00:00
21 changed files with 896 additions and 1281 deletions


@@ -1,151 +0,0 @@
# Plan: Make Stack-Orchestrator AI-Friendly
## Goal
Make the stack-orchestrator repository easier for AI tools (Claude Code, Cursor, Copilot) to understand and use for generating stacks, including adding a `create-stack` command.
---
## Part 1: Documentation & Context Files
### 1.1 Add CLAUDE.md
Create a root-level context file for AI assistants.
**File:** `CLAUDE.md`
Contents:
- Project overview (what stack-orchestrator does)
- Stack creation workflow (step-by-step)
- File naming conventions
- Required vs optional fields in stack.yml
- Common patterns and anti-patterns
- Links to example stacks (simple, medium, complex)
### 1.2 Add JSON Schema for stack.yml
Create a formal validation schema (a validation sketch using it follows the benefits list).
**File:** `schemas/stack-schema.json`
Benefits:
- AI tools can validate generated stacks
- IDEs provide autocomplete
- CI can catch errors early
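To illustrate the first benefit, a tool could validate a generated stack against this schema with the third-party `jsonschema` package. This is a sketch under the assumption that the schema file above exists; `validate_stack_file` is a hypothetical helper, not part of stack-orchestrator:
```python
# Sketch: validate a stack.yml against schemas/stack-schema.json.
# Assumes the jsonschema and ruamel.yaml packages are installed.
import json
from pathlib import Path

from jsonschema import ValidationError, validate
from ruamel.yaml import YAML


def validate_stack_file(stack_file: Path, schema_file: Path) -> bool:
    schema = json.loads(schema_file.read_text())
    stack = YAML().load(stack_file.read_text())
    try:
        validate(instance=stack, schema=schema)
        return True
    except ValidationError as e:
        print(f"{stack_file}: {e.message}")
        return False
```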
### 1.3 Add Template Stack with Comments
Create an annotated template for reference.
**File:** `stack_orchestrator/data/stacks/_template/stack.yml`
```yaml
# Stack definition template - copy this directory to create a new stack
version: "1.2" # Required: 1.0, 1.1, or 1.2
name: my-stack # Required: lowercase, hyphens only
description: "Human-readable description" # Optional
repos:        # Git repositories to clone
  - github.com/org/repo
containers:   # Container images to build (must have matching container-build/)
  - cerc/my-container
pods:         # Deployment units (must have matching docker-compose-{pod}.yml)
  - my-pod
```
### 1.4 Document Validation Rules
Create explicit documentation of the constraints currently scattered in code (a code sketch of these checks follows the list).
**File:** `docs/stack-format.md`
Contents:
- Container names must start with `cerc/`
- Pod names must match compose file: `docker-compose-{pod}.yml`
- Repository format: `host/org/repo[@ref]`
- Stack directory name should match `name` field
- Version field options and differences
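A sketch of these rules expressed as code; the function name and return convention are hypothetical, and the regex mirrors the repository format listed above:
```python
# Hypothetical checker mirroring the documented constraints.
import re


def check_stack_conventions(stack: dict, compose_files: list) -> list:
    errors = []
    for container in stack.get("containers", []):
        if not container.startswith("cerc/"):
            errors.append(f"container name must start with 'cerc/': {container}")
    for pod in stack.get("pods", []):
        if f"docker-compose-{pod}.yml" not in compose_files:
            errors.append(f"missing compose file for pod: docker-compose-{pod}.yml")
    for repo in stack.get("repos", []):
        # host/org/repo with an optional @ref suffix
        if not re.match(r"^[^/]+/[^/]+/[^/@]+(@.+)?$", repo):
            errors.append(f"repository must be host/org/repo[@ref]: {repo}")
    return errors
```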
---
## Part 2: Add `create-stack` Command
### 2.1 Command Overview
```bash
laconic-so create-stack --repo github.com/org/my-app [--name my-app] [--type webapp]
```
**Behavior:**
1. Parse repo URL to extract app name (if `--name` not provided; see the sketch after this list)
2. Create `stacks/{name}/stack.yml`
3. Create `container-build/cerc-{name}/Dockerfile` and `build.sh`
4. Create `compose/docker-compose-{name}.yml`
5. Update list files (repository-list.txt, container-image-list.txt, pod-list.txt)
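Step 1 could be implemented roughly like this (hypothetical helper; the actual command may parse differently):
```python
# Hypothetical helper for step 1: derive the default stack name from --repo.
def name_from_repo_url(repo_url: str) -> str:
    # "github.com/org/my-app" or "github.com/org/my-app@v1" -> "my-app"
    tail = repo_url.rstrip("/").split("/")[-1]
    tail = tail.split("@")[0]       # drop an optional @ref
    if tail.endswith(".git"):
        tail = tail[:-4]            # drop a .git suffix
    return tail.lower()


assert name_from_repo_url("github.com/org/Test-App.git") == "test-app"
```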
### 2.2 Files to Create
| File | Purpose |
|------|---------|
| `stack_orchestrator/create/__init__.py` | Package init |
| `stack_orchestrator/create/create_stack.py` | Command implementation |
### 2.3 Files to Modify
| File | Change |
|------|--------|
| `stack_orchestrator/main.py` | Add import and `cli.add_command()` |
### 2.4 Command Options
| Option | Required | Description |
|--------|----------|-------------|
| `--repo` | Yes | Git repository URL (e.g., github.com/org/repo) |
| `--name` | No | Stack name (defaults to repo name) |
| `--type` | No | Template type: webapp, service, empty (default: webapp) |
| `--force` | No | Overwrite existing files |
### 2.5 Template Types
| Type | Base Image | Port | Use Case |
|------|------------|------|----------|
| webapp | node:20-bullseye-slim | 3000 | React/Vue/Next.js apps |
| service | python:3.11-slim | 8080 | Python backend services |
| empty | none | none | Custom from scratch |
---
## Part 3: Implementation Summary
### New Files (6)
1. `CLAUDE.md` - AI assistant context
2. `schemas/stack-schema.json` - Validation schema
3. `stack_orchestrator/data/stacks/_template/stack.yml` - Annotated template
4. `docs/stack-format.md` - Stack format documentation
5. `stack_orchestrator/create/__init__.py` - Package init
6. `stack_orchestrator/create/create_stack.py` - Command implementation
### Modified Files (1)
1. `stack_orchestrator/main.py` - Register create-stack command
---
## Verification
```bash
# 1. Command appears in help
laconic-so --help | grep create-stack
# 2. Dry run works
laconic-so --dry-run create-stack --repo github.com/org/test-app
# 3. Creates all expected files
laconic-so create-stack --repo github.com/org/test-app
ls stack_orchestrator/data/stacks/test-app/
ls stack_orchestrator/data/container-build/cerc-test-app/
ls stack_orchestrator/data/compose/docker-compose-test-app.yml
# 4. Build works with generated stack
laconic-so --stack test-app build-containers
```


@@ -1,413 +0,0 @@
# Implementing `laconic-so create-stack` Command
A plan for adding a new CLI command to scaffold stack files automatically.
---
## Overview
Add a `create-stack` command that generates all required files for a new stack:
```bash
laconic-so create-stack --name my-stack --type webapp
```
**Output:**
```
stack_orchestrator/data/
├── stacks/my-stack/stack.yml
├── container-build/cerc-my-stack/
│   ├── Dockerfile
│   └── build.sh
└── compose/docker-compose-my-stack.yml
Updated: repository-list.txt, container-image-list.txt, pod-list.txt
```
---
## CLI Architecture Summary
### Command Registration Pattern
Commands are Click functions registered in `main.py`:
```python
# main.py (line ~70)
from stack_orchestrator.create import create_stack
cli.add_command(create_stack.command, "create-stack")
```
### Global Options Access
```python
from stack_orchestrator.opts import opts
if not opts.o.quiet:
    print("message")
if opts.o.dry_run:
    print("(would create files)")
```
### Key Utilities
| Function | Location | Purpose |
|----------|----------|---------|
| `get_yaml()` | `util.py` | YAML parser (ruamel.yaml) |
| `get_stack_path(stack)` | `util.py` | Resolve stack directory path |
| `error_exit(msg)` | `util.py` | Print error and exit(1) |
---
## Files to Create
### 1. Command Module
**`stack_orchestrator/create/__init__.py`**
```python
# Empty file to make this a package
```
**`stack_orchestrator/create/create_stack.py`**
```python
import os
import re
from pathlib import Path

import click

from stack_orchestrator.opts import opts
from stack_orchestrator.util import error_exit, get_yaml

# Template types
STACK_TEMPLATES = {
    "webapp": {
        "description": "Web application with Node.js",
        "base_image": "node:20-bullseye-slim",
        "port": 3000,
    },
    "service": {
        "description": "Backend service",
        "base_image": "python:3.11-slim",
        "port": 8080,
    },
    "empty": {
        "description": "Minimal stack with no defaults",
        "base_image": None,
        "port": None,
    },
}


def get_data_dir() -> Path:
    """Get path to stack_orchestrator/data directory"""
    return Path(__file__).absolute().parent.parent.joinpath("data")


def validate_stack_name(name: str) -> None:
    """Validate stack name follows conventions"""
    if not re.match(r'^[a-z0-9]([a-z0-9-]*[a-z0-9])?$', name):
        error_exit(f"Invalid stack name '{name}'. Use lowercase alphanumeric with hyphens.")
    if name.startswith("cerc-"):
        error_exit("Stack name should not start with 'cerc-' (container names will add this prefix)")


def create_stack_yml(stack_dir: Path, name: str, template: dict, repo_url: str) -> None:
    """Create stack.yml file"""
    config = {
        "version": "1.2",
        "name": name,
        "description": template.get("description", f"Stack: {name}"),
        "repos": [repo_url] if repo_url else [],
        "containers": [f"cerc/{name}"],
        "pods": [name],
    }
    stack_dir.mkdir(parents=True, exist_ok=True)
    with open(stack_dir / "stack.yml", "w") as f:
        get_yaml().dump(config, f)


def create_dockerfile(container_dir: Path, name: str, template: dict) -> None:
    """Create Dockerfile"""
    # Fall back to webapp defaults when the template defines no image/port (e.g. "empty")
    base_image = template.get("base_image") or "node:20-bullseye-slim"
    port = template.get("port") or 3000
    dockerfile_content = f'''# Build stage
FROM {base_image} AS builder

WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM {base_image}

WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY --from=builder /app/dist ./dist

EXPOSE {port}
CMD ["npm", "run", "start"]
'''
    container_dir.mkdir(parents=True, exist_ok=True)
    with open(container_dir / "Dockerfile", "w") as f:
        f.write(dockerfile_content)


def create_build_script(container_dir: Path, name: str) -> None:
    """Create build.sh script"""
    build_script = f'''#!/usr/bin/env bash
# Build cerc/{name}
source ${{CERC_CONTAINER_BASE_DIR}}/build-base.sh
SCRIPT_DIR=$( cd -- "$( dirname -- "${{BASH_SOURCE[0]}}" )" &> /dev/null && pwd )
docker build -t cerc/{name}:local \\
    -f ${{SCRIPT_DIR}}/Dockerfile \\
    ${{build_command_args}} \\
    ${{CERC_REPO_BASE_DIR}}/{name}
'''
    build_path = container_dir / "build.sh"
    with open(build_path, "w") as f:
        f.write(build_script)
    # Make executable
    os.chmod(build_path, 0o755)


def create_compose_file(compose_dir: Path, name: str, template: dict) -> None:
    """Create docker-compose file"""
    port = template.get("port") or 3000
    compose_content = {
        "version": "3.8",
        "services": {
            name: {
                "image": f"cerc/{name}:local",
                "restart": "unless-stopped",
                "ports": [f"${{HOST_PORT:-{port}}}:{port}"],
                "environment": {
                    "NODE_ENV": "${NODE_ENV:-production}",
                },
            }
        }
    }
    with open(compose_dir / f"docker-compose-{name}.yml", "w") as f:
        get_yaml().dump(compose_content, f)


def update_list_file(data_dir: Path, filename: str, entry: str) -> None:
    """Add entry to a list file if not already present"""
    list_path = data_dir / filename
    # Read existing entries
    existing = set()
    if list_path.exists():
        with open(list_path, "r") as f:
            existing = set(line.strip() for line in f if line.strip())
    # Add new entry
    if entry not in existing:
        with open(list_path, "a") as f:
            f.write(f"{entry}\n")


@click.command()
@click.option("--name", required=True, help="Name of the new stack (lowercase, hyphens)")
@click.option("--type", "stack_type", default="webapp",
              type=click.Choice(list(STACK_TEMPLATES.keys())),
              help="Stack template type")
@click.option("--repo", help="Git repository URL (e.g., github.com/org/repo)")
@click.option("--force", is_flag=True, help="Overwrite existing files")
@click.pass_context
def command(ctx, name: str, stack_type: str, repo: str, force: bool):
    """Create a new stack with all required files.

    Examples:
        laconic-so create-stack --name my-app --type webapp
        laconic-so create-stack --name my-service --type service --repo github.com/org/repo
    """
    # Validate
    validate_stack_name(name)
    template = STACK_TEMPLATES[stack_type]
    data_dir = get_data_dir()

    # Define paths
    stack_dir = data_dir / "stacks" / name
    container_dir = data_dir / "container-build" / f"cerc-{name}"
    compose_dir = data_dir / "compose"

    # Check for existing files
    if not force:
        if stack_dir.exists():
            error_exit(f"Stack already exists: {stack_dir}\nUse --force to overwrite")
        if container_dir.exists():
            error_exit(f"Container build dir exists: {container_dir}\nUse --force to overwrite")

    # Dry run check
    if opts.o.dry_run:
        print(f"Would create stack '{name}' with template '{stack_type}':")
        print(f"  - {stack_dir}/stack.yml")
        print(f"  - {container_dir}/Dockerfile")
        print(f"  - {container_dir}/build.sh")
        print(f"  - {compose_dir}/docker-compose-{name}.yml")
        print("  - Update repository-list.txt")
        print("  - Update container-image-list.txt")
        print("  - Update pod-list.txt")
        return

    # Create files
    if not opts.o.quiet:
        print(f"Creating stack '{name}' with template '{stack_type}'...")

    create_stack_yml(stack_dir, name, template, repo)
    if opts.o.verbose:
        print(f"  Created {stack_dir}/stack.yml")

    create_dockerfile(container_dir, name, template)
    if opts.o.verbose:
        print(f"  Created {container_dir}/Dockerfile")

    create_build_script(container_dir, name)
    if opts.o.verbose:
        print(f"  Created {container_dir}/build.sh")

    create_compose_file(compose_dir, name, template)
    if opts.o.verbose:
        print(f"  Created {compose_dir}/docker-compose-{name}.yml")

    # Update list files
    if repo:
        update_list_file(data_dir, "repository-list.txt", repo)
        if opts.o.verbose:
            print(f"  Added {repo} to repository-list.txt")
    update_list_file(data_dir, "container-image-list.txt", f"cerc/{name}")
    if opts.o.verbose:
        print(f"  Added cerc/{name} to container-image-list.txt")
    update_list_file(data_dir, "pod-list.txt", name)
    if opts.o.verbose:
        print(f"  Added {name} to pod-list.txt")

    # Summary
    if not opts.o.quiet:
        print(f"\nStack '{name}' created successfully!")
        print("\nNext steps:")
        print(f"  1. Edit {stack_dir}/stack.yml")
        print(f"  2. Customize {container_dir}/Dockerfile")
        print(f"  3. Run: laconic-so --stack {name} build-containers")
        print(f"  4. Run: laconic-so --stack {name} deploy-system up")
```
### 2. Register Command in main.py
**Edit `stack_orchestrator/main.py`**
Add import:
```python
from stack_orchestrator.create import create_stack
```
Add command registration (after line ~78):
```python
cli.add_command(create_stack.command, "create-stack")
```
---
## Implementation Steps
### Step 1: Create module structure
```bash
mkdir -p stack_orchestrator/create
touch stack_orchestrator/create/__init__.py
```
### Step 2: Create the command file
Create `stack_orchestrator/create/create_stack.py` with the code above.
### Step 3: Register in main.py
Add the import and `cli.add_command()` line.
### Step 4: Test the command
```bash
# Show help
laconic-so create-stack --help
# Dry run
laconic-so --dry-run create-stack --name test-app --type webapp
# Create a stack
laconic-so create-stack --name test-app --type webapp --repo github.com/org/test-app
# Verify
ls -la stack_orchestrator/data/stacks/test-app/
cat stack_orchestrator/data/stacks/test-app/stack.yml
```
---
## Template Types
| Type | Base Image | Port | Use Case |
|------|------------|------|----------|
| `webapp` | node:20-bullseye-slim | 3000 | React/Vue/Next.js apps |
| `service` | python:3.11-slim | 8080 | Python backend services |
| `empty` | none | none | Custom from scratch |
---
## Future Enhancements
1. **Interactive mode** - Prompt for values if not provided
2. **More templates** - Go, Rust, database stacks
3. **Template from existing** - `--from-stack existing-stack`
4. **External stack support** - Create in custom directory
5. **Validation command** - `laconic-so validate-stack --name my-stack`
---
## Files Modified
| File | Change |
|------|--------|
| `stack_orchestrator/create/__init__.py` | New (empty) |
| `stack_orchestrator/create/create_stack.py` | New (command implementation) |
| `stack_orchestrator/main.py` | Add import and `cli.add_command()` |
---
## Verification
```bash
# 1. Command appears in help
laconic-so --help | grep create-stack
# 2. Dry run works
laconic-so --dry-run create-stack --name verify-test --type webapp
# 3. Full creation works
laconic-so create-stack --name verify-test --type webapp
ls stack_orchestrator/data/stacks/verify-test/
ls stack_orchestrator/data/container-build/cerc-verify-test/
ls stack_orchestrator/data/compose/docker-compose-verify-test.yml
# 4. Build works
laconic-so --stack verify-test build-containers
# 5. Cleanup
rm -rf stack_orchestrator/data/stacks/verify-test
rm -rf stack_orchestrator/data/container-build/cerc-verify-test
rm stack_orchestrator/data/compose/docker-compose-verify-test.yml
```


@@ -1,550 +0,0 @@
# Docker Compose Deployment Guide
## Introduction
### What is a Deployer?
In stack-orchestrator, a **deployer** provides a uniform interface for orchestrating containerized applications. This guide covers Docker Compose, the default and recommended deployment mode.
While stack-orchestrator also supports Kubernetes (`k8s`) and Kind (`k8s-kind`) deployments, those are out of scope for this guide. See the [Kubernetes Enhancements](./k8s-deployment-enhancements.md) documentation for advanced deployment options.
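As a rough sketch of that uniform interface: the `run` and `run_job` signatures below are taken from the `Deployer` abstract base class that appears in a diff later on this page; the remaining lifecycle methods (up, down, ps, logs, and so on) are omitted here:
```python
# Abbreviated sketch of the deployer abstraction. Each backend
# (Docker Compose, k8s, k8s-kind) implements the same abstract methods.
from abc import ABC, abstractmethod


class Deployer(ABC):
    @abstractmethod
    def run(self, image: str, command=None, user=None, volumes=None,
            entrypoint=None, env={}, ports=[], detach=False):
        pass

    @abstractmethod
    def run_job(self, job_name: str, release_name: str = None):
        pass
```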
## Prerequisites
To deploy stacks using Docker Compose, you need:
- Docker Engine (20.10+)
- Docker Compose plugin (v2.0+)
- Python 3.8+
- stack-orchestrator installed (`laconic-so`)
**That's it!** No additional infrastructure is required. If you have Docker installed, you're ready to deploy.
## Deployment Workflow
The typical deployment workflow consists of four main steps:
1. **Setup repositories and build containers** (first time only)
2. **Initialize deployment specification**
3. **Create deployment directory**
4. **Start and manage services**
## Quick Start Example
Here's a complete example using the built-in `test` stack:
```bash
# Step 1: Setup (first time only)
laconic-so --stack test setup-repositories
laconic-so --stack test build-containers
# Step 2: Initialize deployment spec
laconic-so --stack test deploy init --output test-spec.yml
# Step 3: Create deployment directory
laconic-so --stack test deploy create \
--spec-file test-spec.yml \
--deployment-dir test-deployment
# Step 4: Start services
laconic-so deployment --dir test-deployment start
# View running services
laconic-so deployment --dir test-deployment ps
# View logs
laconic-so deployment --dir test-deployment logs
# Stop services (preserves data)
laconic-so deployment --dir test-deployment stop
```
## Deployment Workflows
Stack-orchestrator supports two deployment workflows:
### 1. Deployment Directory Workflow (Recommended)
This workflow creates a persistent deployment directory that contains all configuration and data.
**When to use:**
- Production deployments
- When you need to preserve configuration
- When you want to manage multiple deployments
- When you need persistent volume data
**Example:**
```bash
# Initialize deployment spec
laconic-so --stack fixturenet-eth deploy init --output eth-spec.yml
# Optionally edit eth-spec.yml to customize configuration
# Create deployment directory
laconic-so --stack fixturenet-eth deploy create \
--spec-file eth-spec.yml \
--deployment-dir my-eth-deployment
# Start the deployment
laconic-so deployment --dir my-eth-deployment start
# Manage the deployment
laconic-so deployment --dir my-eth-deployment ps
laconic-so deployment --dir my-eth-deployment logs
laconic-so deployment --dir my-eth-deployment stop
```
### 2. Quick Deploy Workflow
This workflow deploys directly without creating a persistent deployment directory.
**When to use:**
- Quick testing
- Temporary deployments
- Simple stacks that don't require customization
**Example:**
```bash
# Start the stack directly
laconic-so --stack test deploy up
# Show the host port mapped to container port 80 on the test service
laconic-so --stack test deploy port test 80
# View logs
laconic-so --stack test deploy logs
# Stop (preserves volumes)
laconic-so --stack test deploy down
# Stop and remove volumes
laconic-so --stack test deploy down --delete-volumes
```
## Real-World Example: Ethereum Fixturenet
Deploy a local Ethereum testnet with Geth and Lighthouse:
```bash
# Setup (first time only)
laconic-so --stack fixturenet-eth setup-repositories
laconic-so --stack fixturenet-eth build-containers
# Initialize with default configuration
laconic-so --stack fixturenet-eth deploy init --output eth-spec.yml
# Create deployment
laconic-so --stack fixturenet-eth deploy create \
--spec-file eth-spec.yml \
--deployment-dir fixturenet-eth-deployment
# Start the network
laconic-so deployment --dir fixturenet-eth-deployment start
# Check status
laconic-so deployment --dir fixturenet-eth-deployment ps
# Access logs from specific service
laconic-so deployment --dir fixturenet-eth-deployment logs fixturenet-eth-geth-1
# Stop the network (preserves blockchain data)
laconic-so deployment --dir fixturenet-eth-deployment stop
# Start again - blockchain data is preserved
laconic-so deployment --dir fixturenet-eth-deployment start
# Clean up everything including data
laconic-so deployment --dir fixturenet-eth-deployment stop --delete-volumes
```
## Configuration
### Passing Configuration Parameters
Configuration can be passed in three ways:
**1. At init time via `--config` flag:**
```bash
laconic-so --stack test deploy init --output spec.yml \
--config PARAM1=value1,PARAM2=value2
```
**2. Edit the spec file after init:**
```bash
# Initialize
laconic-so --stack test deploy init --output spec.yml
# Edit spec.yml
vim spec.yml
```
Example spec.yml:
```yaml
stack: test
config:
PARAM1: value1
PARAM2: value2
```
**3. Docker Compose defaults:**
Environment variables defined in the stack's `docker-compose-*.yml` files are used as defaults. Configuration from the spec file overrides these defaults.
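Conceptually the layering behaves like a shallow dictionary merge, with spec-file values winning. This is an illustrative sketch of the precedence, not the actual implementation:
```python
# Illustrative only: spec-file config overrides compose-file defaults.
compose_defaults = {"PARAM1": "default1", "PARAM2": "default2"}
spec_config = {"PARAM1": "value1"}  # from the spec.yml config: section

effective = {**compose_defaults, **spec_config}
assert effective == {"PARAM1": "value1", "PARAM2": "default2"}
```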
### Port Mapping
By default, services are accessible on randomly assigned host ports. To find the mapped port:
```bash
# Find the host port for container port 80 on service 'webapp'
laconic-so deployment --dir my-deployment port webapp 80
# Output example: 0.0.0.0:32768
```
To configure fixed ports, edit the spec file before creating the deployment:
```yaml
network:
  ports:
    webapp:
      - '8080:80'   # Maps host port 8080 to container port 80
    api:
      - '3000:3000'
```
Then create the deployment:
```bash
laconic-so --stack my-stack deploy create \
--spec-file spec.yml \
--deployment-dir my-deployment
```
### Volume Persistence
Volumes are preserved between stop/start cycles by default:
```bash
# Stop but keep data
laconic-so deployment --dir my-deployment stop
# Start again - data is still there
laconic-so deployment --dir my-deployment start
```
To completely remove all data:
```bash
# Stop and delete all volumes
laconic-so deployment --dir my-deployment stop --delete-volumes
```
Volume data is stored in `<deployment-dir>/data/`.
## Common Operations
### Viewing Logs
```bash
# All services, continuous follow
laconic-so deployment --dir my-deployment logs --follow
# Last 100 lines from all services
laconic-so deployment --dir my-deployment logs --tail 100
# Specific service only
laconic-so deployment --dir my-deployment logs webapp
# Combine options
laconic-so deployment --dir my-deployment logs --tail 50 --follow webapp
```
### Executing Commands in Containers
```bash
# Execute a command in a running service
laconic-so deployment --dir my-deployment exec webapp ls -la
# Interactive shell
laconic-so deployment --dir my-deployment exec webapp /bin/bash
# Run command with specific environment variables
laconic-so deployment --dir my-deployment exec webapp env VAR=value command
```
### Checking Service Status
```bash
# List all running services
laconic-so deployment --dir my-deployment ps
# Check using Docker directly
docker ps
```
### Updating a Running Deployment
If you need to change configuration after deployment:
```bash
# 1. Edit the spec file
vim my-deployment/spec.yml
# 2. Regenerate configuration
laconic-so deployment --dir my-deployment update
# 3. Restart services to apply changes
laconic-so deployment --dir my-deployment stop
laconic-so deployment --dir my-deployment start
```
## Multi-Service Deployments
Many stacks deploy multiple services that work together:
```bash
# Deploy a stack with multiple services
laconic-so --stack laconicd-with-console deploy init --output spec.yml
laconic-so --stack laconicd-with-console deploy create \
--spec-file spec.yml \
--deployment-dir laconicd-deployment
laconic-so deployment --dir laconicd-deployment start
# View all services
laconic-so deployment --dir laconicd-deployment ps
# View logs from specific services
laconic-so deployment --dir laconicd-deployment logs laconicd
laconic-so deployment --dir laconicd-deployment logs console
```
## ConfigMaps
ConfigMaps allow you to mount configuration files into containers:
```bash
# 1. Create the config directory in your deployment
mkdir -p my-deployment/data/my-config
echo "database_url=postgres://localhost" > my-deployment/data/my-config/app.conf
# 2. Reference in spec file
vim my-deployment/spec.yml
```
Add to spec.yml:
```yaml
configmaps:
  my-config: ./data/my-config
```
```bash
# 3. Restart to apply
laconic-so deployment --dir my-deployment stop
laconic-so deployment --dir my-deployment start
```
The files will be mounted in the container at `/config/` (or as specified by the stack).
## Deployment Directory Structure
A typical deployment directory contains:
```
my-deployment/
├── compose/
│   └── docker-compose-*.yml   # Generated compose files
├── config.env                 # Environment variables
├── deployment.yml             # Deployment metadata
├── spec.yml                   # Deployment specification
└── data/                      # Volume mounts and configs
    ├── service-data/          # Persistent service data
    └── config-maps/           # ConfigMap files
```
## Troubleshooting
### Common Issues
**Problem: "Cannot connect to Docker daemon"**
```bash
# Ensure Docker is running
docker ps
# Start Docker if needed (macOS)
open -a Docker
# Start Docker (Linux)
sudo systemctl start docker
```
**Problem: "Port already in use"**
```bash
# Either stop the conflicting service or use different ports
# Edit spec.yml before creating deployment:
network:
  ports:
    webapp:
      - '8081:80'  # Use 8081 instead of 8080
```
**Problem: "Image not found"**
```bash
# Build containers first
laconic-so --stack your-stack build-containers
```
**Problem: Volumes not persisting**
```bash
# Check if you used --delete-volumes when stopping
# Volume data is in: <deployment-dir>/data/
# Don't use --delete-volumes if you want to keep data:
laconic-so deployment --dir my-deployment stop
# Only use --delete-volumes when you want to reset completely:
laconic-so deployment --dir my-deployment stop --delete-volumes
```
**Problem: Services not starting**
```bash
# Check logs for errors
laconic-so deployment --dir my-deployment logs
# Check Docker container status
docker ps -a
# Try stopping and starting again
laconic-so deployment --dir my-deployment stop
laconic-so deployment --dir my-deployment start
```
### Inspecting Deployment State
```bash
# Check deployment directory structure
ls -la my-deployment/
# Check running containers
docker ps
# Check container details
docker inspect <container-name>
# Check networks
docker network ls
# Check volumes
docker volume ls
```
## CLI Commands Reference
### Stack Operations
```bash
# Clone required repositories
laconic-so --stack <name> setup-repositories
# Build container images
laconic-so --stack <name> build-containers
```
### Deployment Initialization
```bash
# Initialize deployment spec with defaults
laconic-so --stack <name> deploy init --output <spec-file>
# Initialize with configuration
laconic-so --stack <name> deploy init --output <spec-file> \
--config PARAM1=value1,PARAM2=value2
```
### Deployment Creation
```bash
# Create deployment directory from spec
laconic-so --stack <name> deploy create \
--spec-file <spec-file> \
--deployment-dir <dir>
```
### Deployment Management
```bash
# Start all services
laconic-so deployment --dir <dir> start
# Stop services (preserves volumes)
laconic-so deployment --dir <dir> stop
# Stop and remove volumes
laconic-so deployment --dir <dir> stop --delete-volumes
# List running services
laconic-so deployment --dir <dir> ps
# View logs
laconic-so deployment --dir <dir> logs [--tail N] [--follow] [service]
# Show mapped port
laconic-so deployment --dir <dir> port <service> <private-port>
# Execute command in service
laconic-so deployment --dir <dir> exec <service> <command>
# Update configuration
laconic-so deployment --dir <dir> update
```
### Quick Deploy Commands
```bash
# Start stack directly
laconic-so --stack <name> deploy up
# Stop stack
laconic-so --stack <name> deploy down [--delete-volumes]
# View logs
laconic-so --stack <name> deploy logs
# Show port mapping
laconic-so --stack <name> deploy port <service> <port>
```
## Related Documentation
- [CLI Reference](./cli.md) - Complete CLI command documentation
- [Adding a New Stack](./adding-a-new-stack.md) - Creating custom stacks
- [Specification](./spec.md) - Internal structure and design
- [Kubernetes Enhancements](./k8s-deployment-enhancements.md) - Advanced K8s deployment options
- [Web App Deployment](./webapp.md) - Deploying web applications
## Examples
For more examples, see the test scripts:
- `scripts/quick-deploy-test.sh` - Quick deployment example
- `tests/deploy/run-deploy-test.sh` - Comprehensive test showing all features
## Summary
- Docker Compose is the default and recommended deployment mode
- Two workflows: deployment directory (recommended) or quick deploy
- The standard workflow is: setup → build → init → create → start
- Configuration is flexible with multiple override layers
- Volume persistence is automatic unless explicitly deleted
- All deployment state is contained in the deployment directory
- For Kubernetes deployments, see separate K8s documentation
You're now ready to deploy stacks using stack-orchestrator with Docker Compose!


@@ -0,0 +1,113 @@
# Helm Chart Generation
Generate Kubernetes Helm charts from stack compose files using Kompose.
## Prerequisites
Install Kompose:
```bash
# Linux
curl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-linux-amd64 -o kompose
chmod +x kompose
sudo mv kompose /usr/local/bin/
# macOS
brew install kompose
# Verify
kompose version
```
## Usage
### 1. Create spec file
```bash
laconic-so --stack <stack-name> deploy --deploy-to k8s init \
--kube-config ~/.kube/config \
--output spec.yml
```
### 2. Generate Helm chart
```bash
laconic-so --stack <stack-name> deploy create \
--spec-file spec.yml \
--deployment-dir my-deployment \
--helm-chart
```
### 3. Deploy to Kubernetes
```bash
helm install my-release my-deployment/chart
kubectl get pods -n zenith
```
## Output Structure
```bash
my-deployment/
├── spec.yml          # Reference
├── stack.yml         # Reference
└── chart/            # Helm chart
    ├── Chart.yaml
    ├── README.md
    └── templates/
        └── *.yaml
```
## Example
```bash
# Generate chart for stage1-zenithd
laconic-so --stack stage1-zenithd deploy --deploy-to k8s init \
--kube-config ~/.kube/config \
--output stage1-spec.yml
laconic-so --stack stage1-zenithd deploy create \
--spec-file stage1-spec.yml \
--deployment-dir stage1-deployment \
--helm-chart
# Deploy
helm install stage1-zenithd stage1-deployment/chart
```
## Production Deployment (TODO)
### Local Development
```bash
# Access services using port-forward
kubectl port-forward service/zenithd 26657:26657
kubectl port-forward service/nginx-api-proxy 1317:80
kubectl port-forward service/cosmos-explorer 4173:4173
```
### Production Access Options
- Option 1: Ingress + cert-manager (Recommended)
- Install ingress-nginx + cert-manager
- Point DNS to cluster LoadBalancer IP
- Auto-provisions Let's Encrypt TLS certs
- Access: `https://api.zenith.example.com`
- Option 2: Cloud LoadBalancer
- Use cloud provider's LoadBalancer service type
- Point DNS to assigned external IP
- Manual TLS cert management
- Option 3: Bare Metal (MetalLB + Ingress)
- MetalLB provides LoadBalancer IPs from local network
- Same Ingress setup as cloud
- Option 4: NodePort + External Proxy
- Expose services on 30000-32767 range
- External nginx/Caddy proxies 80/443 → NodePort
- Manual cert management
### Changes Needed
- Add Ingress template to charts
- Add TLS configuration to values.yaml
- Document cert-manager setup
- Add production deployment guide


@@ -1,128 +0,0 @@
# Deploying to the Laconic Network
## Overview
The Laconic network uses a **registry-based deployment model** where everything is published as blockchain records.
## Key Documentation in stack-orchestrator
- `docs/laconicd-with-console.md` - Setting up a laconicd network
- `docs/webapp.md` - Webapp building/running
- `stack_orchestrator/deploy/webapp/` - Implementation (14 modules)
## Core Concepts
### LRN (Laconic Resource Name)
Format: `lrn://laconic/[namespace]/[name]` (a parser sketch follows the examples)
Examples:
- `lrn://laconic/deployers/my-deployer-name`
- `lrn://laconic/dns/example.com`
- `lrn://laconic/deployments/example.com`
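A hypothetical parser for this format (the regex and helper are illustrative, not part of the tooling):
```python
# Hypothetical LRN parser for the format shown above.
import re

LRN_PATTERN = re.compile(r'^lrn://laconic/(?P<namespace>[^/]+)/(?P<name>.+)$')


def parse_lrn(lrn: str) -> dict:
    m = LRN_PATTERN.match(lrn)
    if not m:
        raise ValueError(f"not a valid LRN: {lrn}")
    return m.groupdict()


assert parse_lrn("lrn://laconic/deployers/my-deployer-name") == \
    {"namespace": "deployers", "name": "my-deployer-name"}
```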
### Registry Record Types
| Record Type | Purpose |
|-------------|---------|
| `ApplicationRecord` | Published app metadata |
| `WebappDeployer` | Deployment service offering |
| `ApplicationDeploymentRequest` | User's request to deploy |
| `ApplicationDeploymentAuction` | Optional bidding for deployers |
| `ApplicationDeploymentRecord` | Completed deployment result |
## Deployment Workflows
### 1. Direct Deployment
```
User publishes ApplicationDeploymentRequest
→ targets specific WebappDeployer (by LRN)
→ includes payment TX hash
→ Deployer picks up request, builds, deploys, publishes result
```
### 2. Auction-Based Deployment
```
User publishes ApplicationDeploymentAuction
→ Deployers bid (commit/reveal phases)
→ Winner selected
→ User publishes request targeting winner
```
## Key CLI Commands
### Publish a Deployer Service
```bash
laconic-so publish-webapp-deployer --laconic-config config.yml \
--api-url https://deployer-api.example.com \
--name my-deployer \
--payment-address laconic1... \
--minimum-payment 1000alnt
```
### Request Deployment (User Side)
```bash
laconic-so request-webapp-deployment --laconic-config config.yml \
--app lrn://laconic/apps/my-app \
--deployer lrn://laconic/deployers/xyz \
--make-payment auto
```
### Run Deployer Service (Deployer Side)
```bash
laconic-so deploy-webapp-from-registry --laconic-config config.yml --discover
```
## Laconic Config File
All tools require a laconic config file (`laconic.toml`):
```toml
[cosmos]
address_prefix = "laconic"
chain_id = "laconic_9000-1"
endpoint = "http://localhost:26657"
key = "<account-name>"
password = "<account-password>"
```
## Setting Up a Local Laconicd Network
```bash
# Clone and build
laconic-so --stack fixturenet-laconic-loaded setup-repositories
laconic-so --stack fixturenet-laconic-loaded build-containers
laconic-so --stack fixturenet-laconic-loaded deploy create
laconic-so deployment --dir laconic-loaded-deployment start
# Check status
laconic-so deployment --dir laconic-loaded-deployment exec cli "laconic registry status"
```
## Key Implementation Files
| File | Purpose |
|------|---------|
| `publish_webapp_deployer.py` | Register deployment service on network |
| `publish_deployment_auction.py` | Create auction for deployers to bid on |
| `handle_deployment_auction.py` | Monitor and bid on auctions (deployer-side) |
| `request_webapp_deployment.py` | Create deployment request (user-side) |
| `deploy_webapp_from_registry.py` | Process requests and deploy (deployer-side) |
| `request_webapp_undeployment.py` | Request app removal |
| `undeploy_webapp_from_registry.py` | Process removal requests |
| `util.py` | LaconicRegistryClient - all registry interactions |
## Payment System
- **Token Denom**: `alnt` (Laconic network tokens)
- **Payment Options**:
- `--make-payment`: Create new payment with amount (or "auto" for deployer's minimum)
- `--use-payment`: Reference existing payment TX
## What's NOT Well-Documented
1. No end-to-end tutorial for full deployment workflow
2. Stack publishing (vs webapp) process unclear
3. LRN naming conventions not formally specified
4. Payment economics and token mechanics


@@ -14,6 +14,7 @@
 # along with this program. If not, see <http:#www.gnu.org/licenses/>.

 from stack_orchestrator.deploy.deployment_context import DeploymentContext
+from ruamel.yaml import YAML


 def create(context: DeploymentContext, extra_args):
@@ -22,12 +23,17 @@ def create(context: DeploymentContext, extra_args):
     # deterministic-deployment-proxy contract, which itself is a prereq for Optimism contract deployment
     fixturenet_eth_compose_file = context.deployment_dir.joinpath('compose', 'docker-compose-fixturenet-eth.yml')
+    with open(fixturenet_eth_compose_file, 'r') as yaml_file:
+        yaml = YAML()
+        yaml_data = yaml.load(yaml_file)
     new_script = '../config/fixturenet-optimism/run-geth.sh:/opt/testnet/run.sh'

-    def add_geth_volume(yaml_data):
-        if new_script not in yaml_data['services']['fixturenet-eth-geth-1']['volumes']:
-            yaml_data['services']['fixturenet-eth-geth-1']['volumes'].append(new_script)
+    if new_script not in yaml_data['services']['fixturenet-eth-geth-1']['volumes']:
+        yaml_data['services']['fixturenet-eth-geth-1']['volumes'].append(new_script)

-    context.modify_yaml(fixturenet_eth_compose_file, add_geth_volume)
+    with open(fixturenet_eth_compose_file, 'w') as yaml_file:
+        yaml = YAML()
+        yaml.dump(yaml_data, yaml_file)
     return None


@@ -2,6 +2,7 @@ version: "1.0"
name: test
description: "A test stack"
repos:
- git.vdb.to/cerc-io/laconicd
- git.vdb.to/cerc-io/test-project@test-branch
containers:
- cerc/test-container


@@ -94,6 +94,40 @@ class DockerDeployer(Deployer):
         except DockerException as e:
             raise DeployerException(e)

+    def run_job(self, job_name: str, release_name: str = None):
+        # release_name is ignored for Docker deployments (only used for K8s/Helm)
+        if not opts.o.dry_run:
+            try:
+                # Find job compose file in compose-jobs directory
+                # The deployment should have compose-jobs/docker-compose-<job_name>.yml
+                if not self.docker.compose_files:
+                    raise DeployerException("No compose files configured")
+                # Deployment directory is parent of compose directory
+                compose_dir = Path(self.docker.compose_files[0]).parent
+                deployment_dir = compose_dir.parent
+                job_compose_file = deployment_dir / "compose-jobs" / f"docker-compose-{job_name}.yml"
+                if not job_compose_file.exists():
+                    raise DeployerException(f"Job compose file not found: {job_compose_file}")
+                if opts.o.verbose:
+                    print(f"Running job from: {job_compose_file}")
+                # Create a DockerClient for the job compose file with same project name and env file
+                # This allows the job to access volumes from the main deployment
+                job_docker = DockerClient(
+                    compose_files=[job_compose_file],
+                    compose_project_name=self.docker.compose_project_name,
+                    compose_env_file=self.docker.compose_env_file
+                )
+                # Run the job with --rm flag to remove container after completion
+                return job_docker.compose.run(service=job_name, remove=True, tty=True)
+            except DockerException as e:
+                raise DeployerException(e)


 class DockerDeployerConfigGenerator(DeployerConfigGenerator):


@@ -84,7 +84,22 @@ def create_deploy_context(
     # Extract the cluster name from the deployment, if we have one
     if deployment_context and cluster is None:
         cluster = deployment_context.get_cluster_id()
-    cluster_context = _make_cluster_context(global_context, stack, include, exclude, cluster, env_file)
+
+    # Check if this is a helm chart deployment (has chart/ but no compose/)
+    # TODO: Add a new deployment type for helm chart deployments
+    # To avoid relying on chart existence in such cases
+    is_helm_chart_deployment = False
+    if deployment_context:
+        chart_dir = deployment_context.deployment_dir / "chart"
+        compose_dir = deployment_context.deployment_dir / "compose"
+        is_helm_chart_deployment = chart_dir.exists() and not compose_dir.exists()
+
+    # For helm chart deployments, skip compose file loading
+    if is_helm_chart_deployment:
+        cluster_context = ClusterContext(global_context, cluster, [], [], [], None, env_file)
+    else:
+        cluster_context = _make_cluster_context(global_context, stack, include, exclude, cluster, env_file)
     deployer = getDeployer(deploy_to, deployment_context, compose_files=cluster_context.compose_files,
                            compose_project_name=cluster_context.cluster,
                            compose_env_file=cluster_context.env_file)
@@ -188,6 +203,17 @@ def logs_operation(ctx, tail: int, follow: bool, extra_args: str):
     print(stream_content.decode("utf-8"), end="")


+def run_job_operation(ctx, job_name: str, helm_release: str = None):
+    global_context = ctx.parent.parent.obj
+    if not global_context.dry_run:
+        print(f"Running job: {job_name}")
+        try:
+            ctx.obj.deployer.run_job(job_name, helm_release)
+        except Exception as e:
+            print(f"Error running job {job_name}: {e}")
+            sys.exit(1)
+
+
 @command.command()
 @click.argument('extra_args', nargs=-1)  # help: command: up <service1> <service2>
 @click.pass_context


@@ -55,6 +55,10 @@ class Deployer(ABC):
     def run(self, image: str, command=None, user=None, volumes=None, entrypoint=None, env={}, ports=[], detach=False):
         pass

+    @abstractmethod
+    def run_job(self, job_name: str, release_name: str = None):
+        pass
+

 class DeployerException(Exception):
     def __init__(self, *args: object) -> None:


@@ -167,3 +167,14 @@ def status(ctx):
 def update(ctx):
     ctx.obj = make_deploy_context(ctx)
     update_operation(ctx)
+
+
+@command.command()
+@click.argument('job_name')
+@click.option('--helm-release', help='Helm release name (only for k8s helm chart deployments, defaults to chart name)')
+@click.pass_context
+def run_job(ctx, job_name, helm_release):
+    '''run a one-time job from the stack'''
+    from stack_orchestrator.deploy.deploy import run_job_operation
+    ctx.obj = make_deploy_context(ctx)
+    run_job_operation(ctx, job_name, helm_release)


@@ -45,14 +45,11 @@ class DeploymentContext:
     def get_compose_dir(self):
         return self.deployment_dir.joinpath(constants.compose_dir_name)

-    def get_compose_file(self, name: str):
-        return self.get_compose_dir() / f"docker-compose-{name}.yml"
-
     def get_cluster_id(self):
         return self.id

-    def init(self, dir: Path):
-        self.deployment_dir = dir.absolute()
+    def init(self, dir):
+        self.deployment_dir = dir
         self.spec = Spec()
         self.spec.init_from_file(self.get_spec_file())
         self.stack = Stack(self.spec.obj["stack"])
@@ -69,19 +66,3 @@ class DeploymentContext:
         unique_cluster_descriptor = f"{path},{self.get_stack_file()},None,None"
         hash = hashlib.md5(unique_cluster_descriptor.encode()).hexdigest()[:16]
         self.id = f"{constants.cluster_name_prefix}{hash}"
-
-    def modify_yaml(self, file_path: Path, modifier_func):
-        """
-        Load a YAML from the deployment, apply a modification function, and write it back.
-        """
-        if not file_path.absolute().is_relative_to(self.deployment_dir):
-            raise ValueError(f"File is not inside deployment directory: {file_path}")
-        yaml = get_yaml()
-        with open(file_path, 'r') as f:
-            yaml_data = yaml.load(f)
-        modifier_func(yaml_data)
-        with open(file_path, 'w') as f:
-            yaml.dump(yaml_data, f)


@@ -27,7 +27,7 @@ from stack_orchestrator.opts import opts
 from stack_orchestrator.util import (get_stack_path, get_parsed_deployment_spec, get_parsed_stack_config,
                                      global_options, get_yaml, get_pod_list, get_pod_file_path, pod_has_scripts,
                                      get_pod_script_paths, get_plugin_code_paths, error_exit, env_var_map_from_file,
-                                     resolve_config_dir)
+                                     resolve_config_dir, get_job_list, get_job_file_path)
 from stack_orchestrator.deploy.spec import Spec
 from stack_orchestrator.deploy.deploy_types import LaconicStackSetupCommand
 from stack_orchestrator.deploy.deployer_factory import getDeployerConfigGenerator
@@ -443,20 +443,24 @@ def _check_volume_definitions(spec):
 @click.command()
 @click.option("--spec-file", required=True, help="Spec file to use to create this deployment")
 @click.option("--deployment-dir", help="Create deployment files in this directory")
-@click.argument('extra_args', nargs=-1, type=click.UNPROCESSED)
+@click.option("--helm-chart", is_flag=True, default=False, help="Generate Helm chart instead of deploying (k8s only)")
+# TODO: Hack
+@click.option("--network-dir", help="Network configuration supplied in this directory")
+@click.option("--initial-peers", help="Initial set of persistent peers")
 @click.pass_context
-def create(ctx, spec_file, deployment_dir, extra_args):
+def create(ctx, spec_file, deployment_dir, helm_chart, network_dir, initial_peers):
     deployment_command_context = ctx.obj
-    return create_operation(deployment_command_context, spec_file, deployment_dir, extra_args)
+    return create_operation(deployment_command_context, spec_file, deployment_dir, helm_chart, network_dir, initial_peers)


 # The init command's implementation is in a separate function so that we can
 # call it from other commands, bypassing the click decoration stuff
-def create_operation(deployment_command_context, spec_file, deployment_dir, extra_args):
+def create_operation(deployment_command_context, spec_file, deployment_dir, helm_chart, network_dir, initial_peers):
     parsed_spec = Spec(os.path.abspath(spec_file), get_parsed_deployment_spec(spec_file))
     _check_volume_definitions(parsed_spec)
     stack_name = parsed_spec["stack"]
     deployment_type = parsed_spec[constants.deploy_to_key]
     stack_file = get_stack_path(stack_name).joinpath(constants.stack_file_name)
     parsed_stack = get_parsed_stack_config(stack_name)
     if opts.o.debug:
@@ -471,7 +475,17 @@ def create_operation(deployment_command_context, spec_file, deployment_dir, extr
     # Copy spec file and the stack file into the deployment dir
     copyfile(spec_file, deployment_dir_path.joinpath(constants.spec_file_name))
     copyfile(stack_file, deployment_dir_path.joinpath(constants.stack_file_name))
+    # Create deployment.yml with cluster-id
     _create_deployment_file(deployment_dir_path)
+
+    # Branch to Helm chart generation flow if --helm-chart flag is set
+    if deployment_type == "k8s" and helm_chart:
+        from stack_orchestrator.deploy.k8s.helm.chart_generator import generate_helm_chart
+        generate_helm_chart(stack_name, spec_file, deployment_dir_path)
+        return  # Exit early for helm chart generation
+
+    # Existing deployment flow continues unchanged
     # Copy any config varibles from the spec file into an env file suitable for compose
     _write_config_file(spec_file, deployment_dir_path.joinpath(constants.config_file_name))
     # Copy any k8s config file into the deployment dir
@@ -529,6 +543,21 @@ def create_operation(deployment_command_context, spec_file, deployment_dir, extr
     if os.path.exists(destination_config_dir) and not os.listdir(destination_config_dir):
         copytree(source_config_dir, destination_config_dir, dirs_exist_ok=True)
+
+    # Copy the job files into the deployment dir (for Docker deployments)
+    jobs = get_job_list(parsed_stack)
+    if jobs and not parsed_spec.is_kubernetes_deployment():
+        destination_compose_jobs_dir = deployment_dir_path.joinpath("compose-jobs")
+        os.mkdir(destination_compose_jobs_dir)
+        for job in jobs:
+            job_file_path = get_job_file_path(stack_name, parsed_stack, job)
+            if job_file_path and job_file_path.exists():
+                parsed_job_file = yaml.load(open(job_file_path, "r"))
+                _fixup_pod_file(parsed_job_file, parsed_spec, destination_compose_dir)
+                with open(destination_compose_jobs_dir.joinpath("docker-compose-%s.yml" % job), "w") as output_file:
+                    yaml.dump(parsed_job_file, output_file)
+                if opts.o.debug:
+                    print(f"Copied job compose file: {job}")
     # Delegate to the stack's Python code
     # The deploy create command doesn't require a --stack argument so we need to insert the
     # stack member here.
@@ -539,7 +568,7 @@ def create_operation(deployment_command_context, spec_file, deployment_dir, extr
     deployer_config_generator = getDeployerConfigGenerator(deployment_type, deployment_context)
     # TODO: make deployment_dir_path a Path above
     deployer_config_generator.generate(deployment_dir_path)
-    call_stack_deploy_create(deployment_context, extra_args)
+    call_stack_deploy_create(deployment_context, [network_dir, initial_peers, deployment_command_context])
 # TODO: this code should be in the stack .py files but


@@ -510,6 +510,26 @@ class K8sDeployer(Deployer):
         # We need to figure out how to do this -- check why we're being called first
         pass

+    def run_job(self, job_name: str, helm_release: str = None):
+        if not opts.o.dry_run:
+            from stack_orchestrator.deploy.k8s.helm.job_runner import run_helm_job
+            # Check if this is a helm-based deployment
+            chart_dir = self.deployment_dir / "chart"
+            if not chart_dir.exists():
+                # TODO: Implement job support for compose-based K8s deployments
+                raise Exception(f"Job support is only available for helm-based deployments. Chart directory not found: {chart_dir}")
+            # Run the job using the helm job runner
+            run_helm_job(
+                chart_dir=chart_dir,
+                job_name=job_name,
+                release=helm_release,
+                namespace=self.k8s_namespace,
+                timeout=600,
+                verbose=opts.o.verbose
+            )
+
     def is_kind(self):
         return self.type == "k8s-kind"


@@ -0,0 +1,14 @@
# Copyright © 2025 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http:#www.gnu.org/licenses/>.


@@ -0,0 +1,320 @@
# Copyright © 2025 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http:#www.gnu.org/licenses/>.
from pathlib import Path
from stack_orchestrator import constants
from stack_orchestrator.opts import opts
from stack_orchestrator.util import (
get_parsed_stack_config,
get_pod_list,
get_pod_file_path,
get_job_list,
get_job_file_path,
error_exit
)
from stack_orchestrator.deploy.k8s.helm.kompose_wrapper import (
check_kompose_available,
get_kompose_version,
convert_to_helm_chart
)
from stack_orchestrator.util import get_yaml
def _wrap_job_templates_with_conditionals(chart_dir: Path, jobs: list) -> None:
"""
Wrap job templates with conditional checks so they are not created by default.
Jobs will only be created when explicitly enabled via --set jobs.<name>.enabled=true
"""
templates_dir = chart_dir / "templates"
if not templates_dir.exists():
return
for job_name in jobs:
# Find job template file (kompose generates <service-name>-job.yaml)
job_template_file = templates_dir / f"{job_name}-job.yaml"
if not job_template_file.exists():
if opts.o.debug:
print(f"Warning: Job template not found: {job_template_file}")
continue
# Read the template content
content = job_template_file.read_text()
# Wrap with conditional (default false)
# Use 'index' function to handle job names with dashes
# Provide default dict for .Values.jobs to handle case where it doesn't exist
condition = (
f"{{{{- if (index (.Values.jobs | default dict) "
f'"{job_name}" | default dict).enabled | default false }}}}'
)
wrapped_content = f"""{condition}
{content}{{{{- end }}}}
"""
# Write back
job_template_file.write_text(wrapped_content)
if opts.o.debug:
print(f"Wrapped job template with conditional: {job_template_file.name}")
def _post_process_chart(chart_dir: Path, chart_name: str, jobs: list) -> None:
"""
Post-process Kompose-generated chart to fix common issues.
Fixes:
1. Chart.yaml name, description and keywords
2. Add conditional wrappers to job templates (default: disabled)
TODO:
- Add defaultMode: 0755 to ConfigMap volumes containing scripts (.sh files)
"""
yaml = get_yaml()
# Fix Chart.yaml
chart_yaml_path = chart_dir / "Chart.yaml"
if chart_yaml_path.exists():
chart_yaml = yaml.load(open(chart_yaml_path, "r"))
# Fix name
chart_yaml["name"] = chart_name
# Fix description
chart_yaml["description"] = f"Generated Helm chart for {chart_name} stack"
# Fix keywords
if "keywords" in chart_yaml and isinstance(chart_yaml["keywords"], list):
chart_yaml["keywords"] = [chart_name]
with open(chart_yaml_path, "w") as f:
yaml.dump(chart_yaml, f)
# Process job templates: wrap with conditionals (default disabled)
if jobs:
_wrap_job_templates_with_conditionals(chart_dir, jobs)
def generate_helm_chart(stack_path: str, spec_file: str, deployment_dir_path: Path) -> None:
"""
Generate a self-sufficient Helm chart from stack compose files using Kompose.
Args:
stack_path: Path to the stack directory
spec_file: Path to the deployment spec file
deployment_dir_path: Deployment directory path (already created with deployment.yml)
Output structure:
deployment-dir/
deployment.yml # Contains cluster-id
spec.yml # Reference
stack.yml # Reference
chart/ # Self-sufficient Helm chart
Chart.yaml
README.md
templates/
*.yaml
TODO: Enhancements:
- Convert Deployments to StatefulSets for stateful services (zenithd, postgres)
- Add _helpers.tpl with common label/selector functions
- Enhance Chart.yaml with proper metadata (version, description, etc.)
"""
parsed_stack = get_parsed_stack_config(stack_path)
stack_name = parsed_stack.get("name", stack_path)
# 1. Check Kompose availability
if not check_kompose_available():
error_exit("kompose not found in PATH.\n")
# 2. Read cluster-id from deployment.yml
deployment_file = deployment_dir_path / constants.deployment_file_name
if not deployment_file.exists():
error_exit(f"Deployment file not found: {deployment_file}")
yaml = get_yaml()
deployment_config = yaml.load(open(deployment_file, "r"))
cluster_id = deployment_config.get(constants.cluster_id_key)
if not cluster_id:
error_exit(f"cluster-id not found in {deployment_file}")
# 3. Derive chart name from stack name + cluster-id suffix
# Sanitize stack name for use in chart name
sanitized_stack_name = stack_name.replace("_", "-").replace(" ", "-")
# Extract hex suffix from cluster-id (after the prefix)
# cluster-id format: "laconic-<hex>" -> extract the hex part
cluster_id_suffix = cluster_id.split("-", 1)[1] if "-" in cluster_id else cluster_id
# Combine to create human-readable + unique chart name
chart_name = f"{sanitized_stack_name}-{cluster_id_suffix}"
if opts.o.debug:
print(f"Cluster ID: {cluster_id}")
print(f"Chart name: {chart_name}")
# 4. Get compose files from stack (pods + jobs)
pods = get_pod_list(parsed_stack)
if not pods:
error_exit(f"No pods found in stack: {stack_path}")
jobs = get_job_list(parsed_stack)
if opts.o.debug:
print(f"Found {len(pods)} pod(s) in stack: {pods}")
if jobs:
print(f"Found {len(jobs)} job(s) in stack: {jobs}")
compose_files = []
for pod in pods:
pod_file = get_pod_file_path(stack_path, parsed_stack, pod)
if not pod_file.exists():
error_exit(f"Pod file not found: {pod_file}")
compose_files.append(pod_file)
if opts.o.debug:
print(f"Found compose file: {pod_file.name}")
# Add job compose files
job_files = []
for job in jobs:
job_file = get_job_file_path(stack_path, parsed_stack, job)
if not job_file.exists():
error_exit(f"Job file not found: {job_file}")
compose_files.append(job_file)
job_files.append(job_file)
if opts.o.debug:
print(f"Found job compose file: {job_file.name}")
try:
version = get_kompose_version()
print(f"Using kompose version: {version}")
except Exception as e:
error_exit(f"Failed to get kompose version: {e}")
# 5. Create chart directory and invoke Kompose
chart_dir = deployment_dir_path / "chart"
print(f"Converting {len(compose_files)} compose file(s) to Helm chart using Kompose...")
try:
output = convert_to_helm_chart(
compose_files=compose_files,
output_dir=chart_dir,
chart_name=chart_name
)
if opts.o.debug:
print(f"Kompose output:\n{output}")
except Exception as e:
error_exit(f"Helm chart generation failed: {e}")
# 6. Post-process generated chart
_post_process_chart(chart_dir, chart_name, jobs)
# 7. Generate README.md with basic installation instructions
readme_content = f"""# {chart_name} Helm Chart
Generated by laconic-so from stack: `{stack_path}`
## Prerequisites
- Kubernetes cluster (v1.27+)
- Helm (v3.12+)
- kubectl configured to access your cluster
## Installation
```bash
# Install the chart
helm install {chart_name} {chart_dir}
# Alternatively, install with your own release name
# helm install <your-release-name> {chart_dir}
# Check deployment status
kubectl get pods
```
## Upgrade
To apply changes made to chart, perform upgrade:
```bash
helm upgrade {chart_name} {chart_dir}
```
## Uninstallation
```bash
helm uninstall {chart_name}
```
## Configuration
The chart was generated from Docker Compose files using Kompose.
### Customization
Edit the generated template files in `templates/` to customize:
- Image repositories and tags
- Resource limits (CPU, memory)
- Persistent volume sizes
- Replica counts
"""
readme_path = chart_dir / "README.md"
readme_path.write_text(readme_content)
if opts.o.debug:
print(f"Generated README: {readme_path}")
# 7. Success message
print(f"\n{'=' * 60}")
print("✓ Helm chart generated successfully!")
print(f"{'=' * 60}")
print("\nChart details:")
print(f" Name: {chart_name}")
print(f" Location: {chart_dir.absolute()}")
print(f" Stack: {stack_path}")
# Count generated files
template_files = list((chart_dir / "templates").glob("*.yaml")) if (chart_dir / "templates").exists() else []
print(f" Files: {len(template_files)} template(s) generated")
print("\nDeployment directory structure:")
print(f" {deployment_dir_path}/")
print(" ├── deployment.yml (cluster-id)")
print(" ├── spec.yml (reference)")
print(" ├── stack.yml (reference)")
print(" └── chart/ (self-sufficient Helm chart)")
print("\nNext steps:")
print(" 1. Review the chart:")
print(f" cd {chart_dir}")
print(" cat Chart.yaml")
print("")
print(" 2. Review generated templates:")
print(" ls templates/")
print("")
print(" 3. Install to Kubernetes:")
print(f" helm install {chart_name} {chart_dir}")
print("")
print(" # Or use your own release name")
print(f" helm install <your-release-name> {chart_dir}")
print("")
print(" 4. Check deployment:")
print(" kubectl get pods")
print("")

View File

@ -0,0 +1,149 @@
# Copyright © 2025 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import subprocess
import tempfile
import os
import json
from pathlib import Path
from stack_orchestrator.util import get_yaml
def get_release_name_from_chart(chart_dir: Path) -> str:
"""
Read the chart name from Chart.yaml to use as the release name.
Args:
chart_dir: Path to the Helm chart directory
Returns:
Chart name from Chart.yaml
Raises:
Exception if Chart.yaml not found or name is missing
"""
chart_yaml_path = chart_dir / "Chart.yaml"
if not chart_yaml_path.exists():
raise Exception(f"Chart.yaml not found: {chart_yaml_path}")
yaml = get_yaml()
with open(chart_yaml_path, "r") as chart_yaml_file:
    chart_yaml = yaml.load(chart_yaml_file)
if "name" not in chart_yaml:
raise Exception(f"Chart name not found in {chart_yaml_path}")
return chart_yaml["name"]
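# Minimal usage sketch (hypothetical chart location): a Chart.yaml containing
# "name: my-stack-abc123def" yields that string as the release name:
#   release = get_release_name_from_chart(Path("my-deployment/chart"))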
def run_helm_job(
chart_dir: Path,
job_name: str,
release: str = None,
namespace: str = "default",
timeout: int = 600,
verbose: bool = False
) -> None:
"""
Run a one-time job from a Helm chart.
This function:
1. Uses provided release name, or reads it from Chart.yaml if not provided
2. Uses helm template to render the job manifest with the job enabled
3. Applies the job manifest to the cluster
4. Waits for the job to complete
Args:
chart_dir: Path to the Helm chart directory
job_name: Name of the job to run (without -job suffix)
release: Optional Helm release name (defaults to chart name from Chart.yaml)
namespace: Kubernetes namespace
timeout: Timeout in seconds for job completion (default: 600)
verbose: Enable verbose output
Raises:
Exception if the job fails or times out
"""
if not chart_dir.exists():
raise Exception(f"Chart directory not found: {chart_dir}")
# Use provided release name, or get it from Chart.yaml
if release is None:
release = get_release_name_from_chart(chart_dir)
if verbose:
print(f"Using release name from Chart.yaml: {release}")
else:
if verbose:
print(f"Using provided release name: {release}")
job_template_file = f"templates/{job_name}-job.yaml"
if verbose:
print(f"Running job '{job_name}' from helm chart: {chart_dir}")
# Use helm template to render the job manifest
with tempfile.NamedTemporaryFile(mode='w', suffix='.yaml', delete=False) as tmp_file:
try:
# Render job template with job enabled
# Use --set-json to properly handle job names with dashes
jobs_dict = {job_name: {"enabled": True}}
values_json = json.dumps(jobs_dict)
helm_cmd = [
"helm", "template", release, str(chart_dir),
"--show-only", job_template_file,
"--set-json", f"jobs={values_json}"
]
if verbose:
print(f"Running: {' '.join(helm_cmd)}")
result = subprocess.run(helm_cmd, check=True, capture_output=True, text=True)
tmp_file.write(result.stdout)
tmp_file.flush()
if verbose:
print(f"Generated job manifest:\n{result.stdout}")
# Parse the manifest to get the actual job name
yaml = get_yaml()
manifest = yaml.load(result.stdout)
actual_job_name = manifest.get("metadata", {}).get("name", job_name)
# Apply the job manifest
kubectl_apply_cmd = ["kubectl", "apply", "-f", tmp_file.name, "-n", namespace]
subprocess.run(kubectl_apply_cmd, check=True, capture_output=True, text=True)
if verbose:
print(f"Job {actual_job_name} created, waiting for completion...")
# Wait for job completion
wait_cmd = [
"kubectl", "wait", "--for=condition=complete",
f"job/{actual_job_name}",
f"--timeout={timeout}s",
"-n", namespace
]
subprocess.run(wait_cmd, check=True, capture_output=True, text=True)
if verbose:
print(f"Job {job_name} completed successfully")
except subprocess.CalledProcessError as e:
error_msg = e.stderr if e.stderr else str(e)
raise Exception(f"Job failed: {error_msg}")
finally:
# Clean up temp file
if os.path.exists(tmp_file.name):
os.unlink(tmp_file.name)
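# Minimal usage sketch (hypothetical paths and job name; assumes the chart's
# templates/ directory contains migrate-db-job.yaml gated on jobs.migrate-db.enabled,
# matching the --show-only and --set-json conventions used above):
#   run_helm_job(
#       chart_dir=Path("my-deployment/chart"),
#       job_name="migrate-db",
#       namespace="default",
#       timeout=300,
#       verbose=True,
#   )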

View File

@ -0,0 +1,109 @@
# Copyright © 2025 Vulcanize
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import subprocess
import shutil
from pathlib import Path
from typing import List
def check_kompose_available() -> bool:
"""Check if kompose binary is available in PATH."""
return shutil.which("kompose") is not None
def get_kompose_version() -> str:
"""
Get the installed kompose version.
Returns:
Version string (e.g., "1.34.0")
Raises:
Exception if kompose is not available
"""
if not check_kompose_available():
raise Exception("kompose not found in PATH")
result = subprocess.run(
["kompose", "version"],
capture_output=True,
text=True,
timeout=10
)
if result.returncode != 0:
raise Exception(f"Failed to get kompose version: {result.stderr}")
# Parse version from output like "1.34.0 (HEAD)" or just "1.34.0"
version_line = result.stdout.strip()
version = version_line.split()[0] if version_line else "unknown"
return version
def convert_to_helm_chart(compose_files: List[Path], output_dir: Path, chart_name: str = None) -> str:
"""
Invoke kompose to convert Docker Compose files to a Helm chart.
Args:
compose_files: List of paths to docker-compose.yml files
output_dir: Directory where the Helm chart will be generated
chart_name: Optional name for the chart (defaults to directory name)
Returns:
stdout from kompose command
Raises:
Exception if kompose conversion fails
"""
if not check_kompose_available():
raise Exception(
"kompose not found in PATH. "
"Install from: https://kompose.io/installation/"
)
# Ensure output directory exists
output_dir.mkdir(parents=True, exist_ok=True)
# Build kompose command
cmd = ["kompose", "convert"]
# Add all compose files
for compose_file in compose_files:
if not compose_file.exists():
raise Exception(f"Compose file not found: {compose_file}")
cmd.extend(["-f", str(compose_file)])
# Add chart flag and output directory
cmd.extend(["--chart", "-o", str(output_dir)])
# Execute kompose
result = subprocess.run(
cmd,
capture_output=True,
text=True,
timeout=60
)
if result.returncode != 0:
raise Exception(
f"Kompose conversion failed:\n"
f"Command: {' '.join(cmd)}\n"
f"Error: {result.stderr}"
)
return result.stdout
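# Minimal usage sketch (hypothetical file layout):
#   output = convert_to_helm_chart(
#       compose_files=[Path("compose/docker-compose-my-pod.yml")],
#       output_dir=Path("my-deployment/chart"),
#       chart_name="my-stack-abc123def",
#   )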

View File

@ -91,7 +91,9 @@ def create_deployment(ctx, deployment_dir, image, url, kube_config, image_regist
deploy_command_context,
spec_file_name,
deployment_dir,
False,
None,
None
)
# Fix up the container tag inside the deployment compose file
_fixup_container_tag(deployment_dir, image)

View File

@ -78,6 +78,22 @@ def get_pod_list(parsed_stack):
return result
def get_job_list(parsed_stack):
# Return list of jobs from stack config, or empty list if no jobs defined
if "jobs" not in parsed_stack:
return []
jobs = parsed_stack["jobs"]
if not jobs:
return []
if type(jobs[0]) is str:
result = jobs
else:
result = []
for job in jobs:
result.append(job["name"])
return result
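# Example stack.yml `jobs` fragments this accepts (illustrative):
#   jobs:
#     - migrate-db            # simple string form
#   jobs:
#     - name: migrate-db      # dict form with a "name" field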
def get_plugin_code_paths(stack) -> List[Path]:
parsed_stack = get_parsed_stack_config(stack)
pods = parsed_stack["pods"]
@ -119,6 +135,21 @@ def resolve_compose_file(stack, pod_name: str):
return compose_base.joinpath(f"docker-compose-{pod_name}.yml")
# Find a job compose file in compose-jobs directory
def resolve_job_compose_file(stack, job_name: str):
if stack_is_external(stack):
# First try looking in the external stack for the job compose file
compose_jobs_base = Path(stack).parent.parent.joinpath("compose-jobs")
proposed_file = compose_jobs_base.joinpath(f"docker-compose-{job_name}.yml")
if proposed_file.exists():
return proposed_file
# If we don't find it, fall through to the internal case
# TODO: Add internal compose-jobs directory support if needed
# For now, jobs are expected to be in external stacks only
compose_jobs_base = Path(stack).parent.parent.joinpath("compose-jobs")
return compose_jobs_base.joinpath(f"docker-compose-{job_name}.yml")
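# Example resolution (illustrative): for an external stack at
#   /path/to/my-repo/stacks/my-stack
# a job "migrate-db" resolves to
#   /path/to/my-repo/compose-jobs/docker-compose-migrate-db.yml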
def get_pod_file_path(stack, parsed_stack, pod_name: str):
pods = parsed_stack["pods"]
if type(pods[0]) is str:
@ -131,6 +162,18 @@ def get_pod_file_path(stack, parsed_stack, pod_name: str):
return result
def get_job_file_path(stack, parsed_stack, job_name: str):
if "jobs" not in parsed_stack or not parsed_stack["jobs"]:
return None
jobs = parsed_stack["jobs"]
if type(jobs[0]) is str:
result = resolve_job_compose_file(stack, job_name)
else:
# TODO: Support complex job definitions if needed
result = resolve_job_compose_file(stack, job_name)
return result
def get_pod_script_paths(parsed_stack, pod_name: str):
pods = parsed_stack["pods"]
result = []

View File

@ -14,13 +14,8 @@ delete_cluster_exit () {
# Test basic stack-orchestrator deploy
echo "Running stack-orchestrator deploy test"
if [ "$1" == "from-path" ]; then
TEST_TARGET_SO="laconic-so"
else
TEST_TARGET_SO=$( ls -t1 ./package/laconic-so* | head -1 )
fi
# Bit of a hack, test the most recent package
TEST_TARGET_SO=$( ls -t1 ./package/laconic-so* | head -1 )
# Set a non-default repo dir
export CERC_REPO_BASE_DIR=~/stack-orchestrator-test/repo-base-dir
echo "Testing this package: $TEST_TARGET_SO"