Compare commits
4 Commits
main...pm-custom-

| Author | SHA1 | Date |
|--------|------|------|
|        | 74b0792d00 | |
|        | a085ca8756 | |
|        | 27a6470ad9 | |
|        | 53a96defe0 | |
@@ -1,151 +0,0 @@

# Plan: Make Stack-Orchestrator AI-Friendly

## Goal

Make the stack-orchestrator repository easier for AI tools (Claude Code, Cursor, Copilot) to understand and use for generating stacks, including adding a `create-stack` command.

---

## Part 1: Documentation & Context Files

### 1.1 Add CLAUDE.md

Create a root-level context file for AI assistants.

**File:** `CLAUDE.md`

Contents:

- Project overview (what stack-orchestrator does)
- Stack creation workflow (step by step)
- File naming conventions
- Required vs. optional fields in stack.yml
- Common patterns and anti-patterns
- Links to example stacks (simple, medium, complex)

### 1.2 Add JSON Schema for stack.yml

Create a formal validation schema.

**File:** `schemas/stack-schema.json`

Benefits:

- AI tools can validate generated stacks (see the sketch below)
- IDEs provide autocomplete
- CI can catch errors early
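
As a rough illustration (field names follow the template in section 1.3; the real published schema may be stricter), a generated stack.yml can be checked with the `jsonschema` library:

```python
# Sketch only: what a minimal stack-schema.json could express, applied in Python.
from jsonschema import validate

STACK_SCHEMA = {
    "type": "object",
    "required": ["version", "name"],
    "properties": {
        "version": {"enum": ["1.0", "1.1", "1.2"]},
        "name": {"type": "string", "pattern": "^[a-z0-9][a-z0-9-]*$"},
        "description": {"type": "string"},
        "repos": {"type": "array", "items": {"type": "string"}},
        "containers": {"type": "array", "items": {"type": "string", "pattern": "^cerc/"}},
        "pods": {"type": "array", "items": {"type": "string"}},
    },
}

# Raises jsonschema.ValidationError if the generated stack.yml content is malformed
validate(
    instance={"version": "1.2", "name": "my-stack",
              "containers": ["cerc/my-container"], "pods": ["my-pod"]},
    schema=STACK_SCHEMA,
)
```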
### 1.3 Add Template Stack with Comments

Create an annotated template for reference.

**File:** `stack_orchestrator/data/stacks/_template/stack.yml`

```yaml
# Stack definition template - copy this directory to create a new stack
version: "1.2"                              # Required: 1.0, 1.1, or 1.2
name: my-stack                              # Required: lowercase, hyphens only
description: "Human-readable description"   # Optional
repos:                                      # Git repositories to clone
  - github.com/org/repo
containers:                                 # Container images to build (must have a matching container-build/ directory)
  - cerc/my-container
pods:                                       # Deployment units (must have a matching docker-compose-{pod}.yml)
  - my-pod
```

### 1.4 Document Validation Rules

Create explicit documentation of the constraints that are currently scattered through the code.

**File:** `docs/stack-format.md`

Contents (a checking sketch follows the list):

- Container names must start with `cerc/`
- Pod names must match the compose file name: `docker-compose-{pod}.yml`
- Repository format: `host/org/repo[@ref]`
- The stack directory name should match the `name` field
- Version field options and their differences
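
A minimal sketch of how these constraints could be checked mechanically (the regex and error wording are illustrative assumptions, not the tool's actual implementation):

```python
from pathlib import Path
import re

# host/org/repo with an optional @ref suffix
REPO_RE = re.compile(r"^[^/\s]+/[^/\s]+/[^/@\s]+(@\S+)?$")


def check_stack(stack: dict, data_dir: Path) -> list:
    """Return a list of human-readable problems found in a parsed stack.yml."""
    errors = []
    for repo in stack.get("repos", []):
        if not REPO_RE.match(repo):
            errors.append(f"bad repo reference: {repo}")
    for container in stack.get("containers", []):
        if not container.startswith("cerc/"):
            errors.append(f"container name must start with 'cerc/': {container}")
    for pod in stack.get("pods", []):
        compose = data_dir / "compose" / f"docker-compose-{pod}.yml"
        if not compose.exists():
            errors.append(f"missing compose file for pod '{pod}': {compose}")
    return errors
```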
---

## Part 2: Add `create-stack` Command

### 2.1 Command Overview

```bash
laconic-so create-stack --repo github.com/org/my-app [--name my-app] [--type webapp]
```

**Behavior:**

1. Parse the repo URL to extract the app name if `--name` is not provided (see the sketch below)
2. Create `stacks/{name}/stack.yml`
3. Create `container-build/cerc-{name}/Dockerfile` and `build.sh`
4. Create `compose/docker-compose-{name}.yml`
5. Update the list files (`repository-list.txt`, `container-image-list.txt`, `pod-list.txt`)
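
A minimal sketch of step 1, assuming the repo is given as `host/org/repo[@ref]` as in the example above:

```python
# Derive a default stack name from the --repo value (sketch only)
def default_stack_name(repo: str) -> str:
    repo_part = repo.split("@")[0]          # drop an optional @ref suffix
    name = repo_part.rstrip("/").split("/")[-1]
    return name.lower().replace("_", "-")   # normalise to lowercase-with-hyphens


assert default_stack_name("github.com/org/My_App@v1") == "my-app"
```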

### 2.2 Files to Create

| File | Purpose |
|------|---------|
| `stack_orchestrator/create/__init__.py` | Package init |
| `stack_orchestrator/create/create_stack.py` | Command implementation |

### 2.3 Files to Modify

| File | Change |
|------|--------|
| `stack_orchestrator/main.py` | Add import and `cli.add_command()` |

### 2.4 Command Options

| Option | Required | Description |
|--------|----------|-------------|
| `--repo` | Yes | Git repository URL (e.g., github.com/org/repo) |
| `--name` | No | Stack name (defaults to repo name) |
| `--type` | No | Template type: webapp, service, empty (default: webapp) |
| `--force` | No | Overwrite existing files |

### 2.5 Template Types

| Type | Base Image | Port | Use Case |
|------|------------|------|----------|
| webapp | node:20-bullseye-slim | 3000 | React/Vue/Next.js apps |
| service | python:3.11-slim | 8080 | Python backend services |
| empty | none | none | Custom from scratch |

---

## Part 3: Implementation Summary

### New Files (6)

1. `CLAUDE.md` - AI assistant context
2. `schemas/stack-schema.json` - Validation schema
3. `stack_orchestrator/data/stacks/_template/stack.yml` - Annotated template
4. `docs/stack-format.md` - Stack format documentation
5. `stack_orchestrator/create/__init__.py` - Package init
6. `stack_orchestrator/create/create_stack.py` - Command implementation

### Modified Files (1)

1. `stack_orchestrator/main.py` - Register create-stack command

---

## Verification

```bash
# 1. Command appears in help
laconic-so --help | grep create-stack

# 2. Dry run works
laconic-so --dry-run create-stack --repo github.com/org/test-app

# 3. Creates all expected files
laconic-so create-stack --repo github.com/org/test-app
ls stack_orchestrator/data/stacks/test-app/
ls stack_orchestrator/data/container-build/cerc-test-app/
ls stack_orchestrator/data/compose/docker-compose-test-app.yml

# 4. Build works with generated stack
laconic-so --stack test-app build-containers
```
@@ -1,413 +0,0 @@

# Implementing `laconic-so create-stack` Command

A plan for adding a new CLI command to scaffold stack files automatically.

---

## Overview

Add a `create-stack` command that generates all required files for a new stack:

```bash
laconic-so create-stack --name my-stack --type webapp
```

**Output:**

```
stack_orchestrator/data/
├── stacks/my-stack/stack.yml
├── container-build/cerc-my-stack/
│   ├── Dockerfile
│   └── build.sh
└── compose/docker-compose-my-stack.yml

Updated: repository-list.txt, container-image-list.txt, pod-list.txt
```

---

## CLI Architecture Summary

### Command Registration Pattern

Commands are Click functions registered in `main.py`:

```python
# main.py (line ~70)
from stack_orchestrator.create import create_stack
cli.add_command(create_stack.command, "create-stack")
```

### Global Options Access

```python
from stack_orchestrator.opts import opts

if not opts.o.quiet:
    print("message")
if opts.o.dry_run:
    print("(would create files)")
```

### Key Utilities

| Function | Location | Purpose |
|----------|----------|---------|
| `get_yaml()` | `util.py` | YAML parser (ruamel.yaml) |
| `get_stack_path(stack)` | `util.py` | Resolve the stack directory path |
| `error_exit(msg)` | `util.py` | Print an error and exit(1) |

---

## Files to Create

### 1. Command Module

**`stack_orchestrator/create/__init__.py`**

```python
# Empty file to make this a package
```

**`stack_orchestrator/create/create_stack.py`**
```python
import os
import re
from pathlib import Path

import click

from stack_orchestrator.opts import opts
from stack_orchestrator.util import error_exit, get_yaml

# Template types
STACK_TEMPLATES = {
    "webapp": {
        "description": "Web application with Node.js",
        "base_image": "node:20-bullseye-slim",
        "port": 3000,
    },
    "service": {
        "description": "Backend service",
        "base_image": "python:3.11-slim",
        "port": 8080,
    },
    "empty": {
        "description": "Minimal stack with no defaults",
        "base_image": None,
        "port": None,
    },
}


def get_data_dir() -> Path:
    """Get the path to the stack_orchestrator/data directory"""
    return Path(__file__).absolute().parent.parent.joinpath("data")


def validate_stack_name(name: str) -> None:
    """Validate that a stack name follows the conventions"""
    # Lowercase alphanumeric with hyphens, no leading/trailing hyphen
    if not re.match(r'^[a-z0-9]([a-z0-9-]*[a-z0-9])?$', name):
        error_exit(f"Invalid stack name '{name}'. Use lowercase alphanumeric with hyphens.")
    if name.startswith("cerc-"):
        error_exit("Stack name should not start with 'cerc-' (container names will add this prefix)")


def create_stack_yml(stack_dir: Path, name: str, template: dict, repo_url: str) -> None:
    """Create the stack.yml file"""
    config = {
        "version": "1.2",
        "name": name,
        "description": template.get("description", f"Stack: {name}"),
        "repos": [repo_url] if repo_url else [],
        "containers": [f"cerc/{name}"],
        "pods": [name],
    }

    stack_dir.mkdir(parents=True, exist_ok=True)
    with open(stack_dir / "stack.yml", "w") as f:
        get_yaml().dump(config, f)


def create_dockerfile(container_dir: Path, name: str, template: dict) -> None:
    """Create the Dockerfile"""
    base_image = template.get("base_image", "node:20-bullseye-slim")
    port = template.get("port", 3000)

    dockerfile_content = f'''# Build stage
FROM {base_image} AS builder

WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM {base_image}

WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY --from=builder /app/dist ./dist

EXPOSE {port}
CMD ["npm", "run", "start"]
'''

    container_dir.mkdir(parents=True, exist_ok=True)
    with open(container_dir / "Dockerfile", "w") as f:
        f.write(dockerfile_content)


def create_build_script(container_dir: Path, name: str) -> None:
    """Create the build.sh script"""
    build_script = f'''#!/usr/bin/env bash
# Build cerc/{name}

source ${{CERC_CONTAINER_BASE_DIR}}/build-base.sh

SCRIPT_DIR=$( cd -- "$( dirname -- "${{BASH_SOURCE[0]}}" )" &> /dev/null && pwd )

docker build -t cerc/{name}:local \\
    -f ${{SCRIPT_DIR}}/Dockerfile \\
    ${{build_command_args}} \\
    ${{CERC_REPO_BASE_DIR}}/{name}
'''

    build_path = container_dir / "build.sh"
    with open(build_path, "w") as f:
        f.write(build_script)

    # Make executable
    os.chmod(build_path, 0o755)


def create_compose_file(compose_dir: Path, name: str, template: dict) -> None:
    """Create the docker-compose file"""
    port = template.get("port", 3000)

    compose_content = {
        "version": "3.8",
        "services": {
            name: {
                "image": f"cerc/{name}:local",
                "restart": "unless-stopped",
                "ports": [f"${{HOST_PORT:-{port}}}:{port}"],
                "environment": {
                    "NODE_ENV": "${NODE_ENV:-production}",
                },
            }
        }
    }

    with open(compose_dir / f"docker-compose-{name}.yml", "w") as f:
        get_yaml().dump(compose_content, f)


def update_list_file(data_dir: Path, filename: str, entry: str) -> None:
    """Add an entry to a list file if it is not already present"""
    list_path = data_dir / filename

    # Read existing entries
    existing = set()
    if list_path.exists():
        with open(list_path, "r") as f:
            existing = set(line.strip() for line in f if line.strip())

    # Add the new entry
    if entry not in existing:
        with open(list_path, "a") as f:
            f.write(f"{entry}\n")


@click.command()
@click.option("--name", required=True, help="Name of the new stack (lowercase, hyphens)")
@click.option("--type", "stack_type", default="webapp",
              type=click.Choice(list(STACK_TEMPLATES.keys())),
              help="Stack template type")
@click.option("--repo", help="Git repository URL (e.g., github.com/org/repo)")
@click.option("--force", is_flag=True, help="Overwrite existing files")
@click.pass_context
def command(ctx, name: str, stack_type: str, repo: str, force: bool):
    """Create a new stack with all required files.

    Examples:

        laconic-so create-stack --name my-app --type webapp

        laconic-so create-stack --name my-service --type service --repo github.com/org/repo
    """
    # Validate
    validate_stack_name(name)

    template = STACK_TEMPLATES[stack_type]
    data_dir = get_data_dir()

    # Define paths
    stack_dir = data_dir / "stacks" / name
    container_dir = data_dir / "container-build" / f"cerc-{name}"
    compose_dir = data_dir / "compose"

    # Check for existing files
    if not force:
        if stack_dir.exists():
            error_exit(f"Stack already exists: {stack_dir}\nUse --force to overwrite")
        if container_dir.exists():
            error_exit(f"Container build dir exists: {container_dir}\nUse --force to overwrite")

    # Dry run check
    if opts.o.dry_run:
        print(f"Would create stack '{name}' with template '{stack_type}':")
        print(f" - {stack_dir}/stack.yml")
        print(f" - {container_dir}/Dockerfile")
        print(f" - {container_dir}/build.sh")
        print(f" - {compose_dir}/docker-compose-{name}.yml")
        print(" - Update repository-list.txt")
        print(" - Update container-image-list.txt")
        print(" - Update pod-list.txt")
        return

    # Create files
    if not opts.o.quiet:
        print(f"Creating stack '{name}' with template '{stack_type}'...")

    create_stack_yml(stack_dir, name, template, repo)
    if opts.o.verbose:
        print(f" Created {stack_dir}/stack.yml")

    create_dockerfile(container_dir, name, template)
    if opts.o.verbose:
        print(f" Created {container_dir}/Dockerfile")

    create_build_script(container_dir, name)
    if opts.o.verbose:
        print(f" Created {container_dir}/build.sh")

    create_compose_file(compose_dir, name, template)
    if opts.o.verbose:
        print(f" Created {compose_dir}/docker-compose-{name}.yml")

    # Update list files
    if repo:
        update_list_file(data_dir, "repository-list.txt", repo)
        if opts.o.verbose:
            print(f" Added {repo} to repository-list.txt")

    update_list_file(data_dir, "container-image-list.txt", f"cerc/{name}")
    if opts.o.verbose:
        print(f" Added cerc/{name} to container-image-list.txt")

    update_list_file(data_dir, "pod-list.txt", name)
    if opts.o.verbose:
        print(f" Added {name} to pod-list.txt")

    # Summary
    if not opts.o.quiet:
        print(f"\nStack '{name}' created successfully!")
        print("\nNext steps:")
        print(f" 1. Edit {stack_dir}/stack.yml")
        print(f" 2. Customize {container_dir}/Dockerfile")
        print(f" 3. Run: laconic-so --stack {name} build-containers")
        print(f" 4. Run: laconic-so --stack {name} deploy-system up")
```
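
A couple of focused tests keep the helpers honest. This is a sketch only (hypothetical `tests/test_create_stack.py`, assuming pytest and that `error_exit()` raises `SystemExit` via `sys.exit(1)`):

```python
# Hypothetical test sketch; assumes the module imports cleanly as
# stack_orchestrator.create.create_stack and that pytest is installed.
import pytest

from stack_orchestrator.create import create_stack


def test_update_list_file_is_idempotent(tmp_path):
    # Adding the same entry twice should leave exactly one line in the file
    create_stack.update_list_file(tmp_path, "pod-list.txt", "my-pod")
    create_stack.update_list_file(tmp_path, "pod-list.txt", "my-pod")
    assert (tmp_path / "pod-list.txt").read_text().splitlines() == ["my-pod"]


def test_validate_stack_name_rejects_cerc_prefix():
    # error_exit() is documented as "print error and exit(1)", so expect SystemExit
    with pytest.raises(SystemExit):
        create_stack.validate_stack_name("cerc-my-stack")
```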

### 2. Register Command in main.py

**Edit `stack_orchestrator/main.py`**

Add import:
```python
from stack_orchestrator.create import create_stack
```

Add command registration (after line ~78):
```python
cli.add_command(create_stack.command, "create-stack")
```

---

## Implementation Steps

### Step 1: Create module structure
```bash
mkdir -p stack_orchestrator/create
touch stack_orchestrator/create/__init__.py
```

### Step 2: Create the command file
Create `stack_orchestrator/create/create_stack.py` with the code above.

### Step 3: Register in main.py
Add the import and `cli.add_command()` line.

### Step 4: Test the command
```bash
# Show help
laconic-so create-stack --help

# Dry run
laconic-so --dry-run create-stack --name test-app --type webapp

# Create a stack
laconic-so create-stack --name test-app --type webapp --repo github.com/org/test-app

# Verify
ls -la stack_orchestrator/data/stacks/test-app/
cat stack_orchestrator/data/stacks/test-app/stack.yml
```

---

## Template Types

| Type | Base Image | Port | Use Case |
|------|------------|------|----------|
| `webapp` | node:20-bullseye-slim | 3000 | React/Vue/Next.js apps |
| `service` | python:3.11-slim | 8080 | Python backend services |
| `empty` | none | none | Custom from scratch |

---

## Future Enhancements

1. **Interactive mode** - Prompt for values if not provided
2. **More templates** - Go, Rust, database stacks
3. **Template from existing** - `--from-stack existing-stack`
4. **External stack support** - Create in a custom directory
5. **Validation command** - `laconic-so validate-stack --name my-stack`

---

## Files Modified

| File | Change |
|------|--------|
| `stack_orchestrator/create/__init__.py` | New (empty) |
| `stack_orchestrator/create/create_stack.py` | New (command implementation) |
| `stack_orchestrator/main.py` | Add import and `cli.add_command()` |

---

## Verification

```bash
# 1. Command appears in help
laconic-so --help | grep create-stack

# 2. Dry run works
laconic-so --dry-run create-stack --name verify-test --type webapp

# 3. Full creation works
laconic-so create-stack --name verify-test --type webapp
ls stack_orchestrator/data/stacks/verify-test/
ls stack_orchestrator/data/container-build/cerc-verify-test/
ls stack_orchestrator/data/compose/docker-compose-verify-test.yml

# 4. Build works
laconic-so --stack verify-test build-containers

# 5. Cleanup
rm -rf stack_orchestrator/data/stacks/verify-test
rm -rf stack_orchestrator/data/container-build/cerc-verify-test
rm stack_orchestrator/data/compose/docker-compose-verify-test.yml
```
@@ -1,550 +0,0 @@

# Docker Compose Deployment Guide

## Introduction

### What is a Deployer?

In stack-orchestrator, a **deployer** provides a uniform interface for orchestrating containerized applications. This guide focuses on Docker Compose deployment, which is the default and recommended deployment mode.

While stack-orchestrator also supports Kubernetes (`k8s`) and Kind (`k8s-kind`) deployments, those are out of scope for this guide. See the [Kubernetes Enhancements](./k8s-deployment-enhancements.md) documentation for advanced deployment options.

## Prerequisites

To deploy stacks using Docker Compose, you need:

- Docker Engine (20.10+)
- Docker Compose plugin (v2.0+)
- Python 3.8+
- stack-orchestrator installed (`laconic-so`)

**That's it!** No additional infrastructure is required. If you have Docker installed, you're ready to deploy.
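
A quick way to confirm those prerequisites on a new machine (the version minimums are the ones listed above):

```bash
# Prerequisite check
docker --version                 # needs Docker Engine 20.10+
docker compose version           # needs the Compose plugin v2.0+
python3 --version                # needs Python 3.8+
laconic-so --help > /dev/null && echo "laconic-so is installed"
```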

## Deployment Workflow

The typical deployment workflow consists of four main steps:

1. **Setup repositories and build containers** (first time only)
2. **Initialize a deployment specification**
3. **Create a deployment directory**
4. **Start and manage services**

## Quick Start Example

Here's a complete example using the built-in `test` stack:

```bash
# Step 1: Setup (first time only)
laconic-so --stack test setup-repositories
laconic-so --stack test build-containers

# Step 2: Initialize the deployment spec
laconic-so --stack test deploy init --output test-spec.yml

# Step 3: Create the deployment directory
laconic-so --stack test deploy create \
  --spec-file test-spec.yml \
  --deployment-dir test-deployment

# Step 4: Start services
laconic-so deployment --dir test-deployment start

# View running services
laconic-so deployment --dir test-deployment ps

# View logs
laconic-so deployment --dir test-deployment logs

# Stop services (preserves data)
laconic-so deployment --dir test-deployment stop
```

## Deployment Workflows

Stack-orchestrator supports two deployment workflows:

### 1. Deployment Directory Workflow (Recommended)

This workflow creates a persistent deployment directory that contains all configuration and data.

**When to use:**
- Production deployments
- When you need to preserve configuration
- When you want to manage multiple deployments
- When you need persistent volume data

**Example:**

```bash
# Initialize the deployment spec
laconic-so --stack fixturenet-eth deploy init --output eth-spec.yml

# Optionally edit eth-spec.yml to customize the configuration

# Create the deployment directory
laconic-so --stack fixturenet-eth deploy create \
  --spec-file eth-spec.yml \
  --deployment-dir my-eth-deployment

# Start the deployment
laconic-so deployment --dir my-eth-deployment start

# Manage the deployment
laconic-so deployment --dir my-eth-deployment ps
laconic-so deployment --dir my-eth-deployment logs
laconic-so deployment --dir my-eth-deployment stop
```

### 2. Quick Deploy Workflow

This workflow deploys directly, without creating a persistent deployment directory.

**When to use:**
- Quick testing
- Temporary deployments
- Simple stacks that don't require customization

**Example:**

```bash
# Start the stack directly
laconic-so --stack test deploy up

# Show the host port mapped to container port 80 on the 'test' service
laconic-so --stack test deploy port test 80

# View logs
laconic-so --stack test deploy logs

# Stop (preserves volumes)
laconic-so --stack test deploy down

# Stop and remove volumes
laconic-so --stack test deploy down --delete-volumes
```

## Real-World Example: Ethereum Fixturenet

Deploy a local Ethereum testnet with Geth and Lighthouse:

```bash
# Setup (first time only)
laconic-so --stack fixturenet-eth setup-repositories
laconic-so --stack fixturenet-eth build-containers

# Initialize with the default configuration
laconic-so --stack fixturenet-eth deploy init --output eth-spec.yml

# Create the deployment
laconic-so --stack fixturenet-eth deploy create \
  --spec-file eth-spec.yml \
  --deployment-dir fixturenet-eth-deployment

# Start the network
laconic-so deployment --dir fixturenet-eth-deployment start

# Check status
laconic-so deployment --dir fixturenet-eth-deployment ps

# Access logs from a specific service
laconic-so deployment --dir fixturenet-eth-deployment logs fixturenet-eth-geth-1

# Stop the network (preserves blockchain data)
laconic-so deployment --dir fixturenet-eth-deployment stop

# Start again - blockchain data is preserved
laconic-so deployment --dir fixturenet-eth-deployment start

# Clean up everything, including data
laconic-so deployment --dir fixturenet-eth-deployment stop --delete-volumes
```

## Configuration

### Passing Configuration Parameters

Configuration can be passed in three ways:

**1. At init time via the `--config` flag:**

```bash
laconic-so --stack test deploy init --output spec.yml \
  --config PARAM1=value1,PARAM2=value2
```

**2. Edit the spec file after init:**

```bash
# Initialize
laconic-so --stack test deploy init --output spec.yml

# Edit spec.yml
vim spec.yml
```

Example spec.yml:
```yaml
stack: test
config:
  PARAM1: value1
  PARAM2: value2
```

**3. Docker Compose defaults:**

Environment variables defined in the stack's `docker-compose-*.yml` files are used as defaults. Configuration from the spec file overrides these defaults.
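
For example, with hypothetical service and variable names, a default declared in the stack's compose file is only used when the spec leaves the variable unset:

```yaml
# compose/docker-compose-example.yml (hypothetical) - provides the default
services:
  webapp:
    environment:
      LOG_LEVEL: ${LOG_LEVEL:-info}

# spec.yml - setting the same variable here overrides the compose default
config:
  LOG_LEVEL: debug
```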

### Port Mapping

By default, services are accessible on randomly assigned host ports. To find the mapped port:

```bash
# Find the host port for container port 80 on service 'webapp'
laconic-so deployment --dir my-deployment port webapp 80

# Output example: 0.0.0.0:32768
```

To configure fixed ports, edit the spec file before creating the deployment:

```yaml
network:
  ports:
    webapp:
      - '8080:80'   # Maps host port 8080 to container port 80
    api:
      - '3000:3000'
```

Then create the deployment:

```bash
laconic-so --stack my-stack deploy create \
  --spec-file spec.yml \
  --deployment-dir my-deployment
```

### Volume Persistence

Volumes are preserved between stop/start cycles by default:

```bash
# Stop but keep data
laconic-so deployment --dir my-deployment stop

# Start again - data is still there
laconic-so deployment --dir my-deployment start
```

To completely remove all data:

```bash
# Stop and delete all volumes
laconic-so deployment --dir my-deployment stop --delete-volumes
```

Volume data is stored in `<deployment-dir>/data/`.
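
Since all of that state lives under the deployment directory, a plain file-system snapshot before a destructive reset is usually enough; the sketch below assumes the `my-deployment` example used above:

```bash
# Stop (keeps volumes), snapshot the data directory, then reset if really needed
laconic-so deployment --dir my-deployment stop
tar -czf my-deployment-data-backup.tar.gz my-deployment/data/
laconic-so deployment --dir my-deployment stop --delete-volumes
```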

## Common Operations

### Viewing Logs

```bash
# All services, continuous follow
laconic-so deployment --dir my-deployment logs --follow

# Last 100 lines from all services
laconic-so deployment --dir my-deployment logs --tail 100

# Specific service only
laconic-so deployment --dir my-deployment logs webapp

# Combine options
laconic-so deployment --dir my-deployment logs --tail 50 --follow webapp
```

### Executing Commands in Containers

```bash
# Execute a command in a running service
laconic-so deployment --dir my-deployment exec webapp ls -la

# Interactive shell
laconic-so deployment --dir my-deployment exec webapp /bin/bash

# Run a command with specific environment variables
laconic-so deployment --dir my-deployment exec webapp env VAR=value command
```

### Checking Service Status

```bash
# List all running services
laconic-so deployment --dir my-deployment ps

# Check using Docker directly
docker ps
```

### Updating a Running Deployment

If you need to change configuration after deployment:

```bash
# 1. Edit the spec file
vim my-deployment/spec.yml

# 2. Regenerate configuration
laconic-so deployment --dir my-deployment update

# 3. Restart services to apply changes
laconic-so deployment --dir my-deployment stop
laconic-so deployment --dir my-deployment start
```

## Multi-Service Deployments

Many stacks deploy multiple services that work together:

```bash
# Deploy a stack with multiple services
laconic-so --stack laconicd-with-console deploy init --output spec.yml
laconic-so --stack laconicd-with-console deploy create \
  --spec-file spec.yml \
  --deployment-dir laconicd-deployment

laconic-so deployment --dir laconicd-deployment start

# View all services
laconic-so deployment --dir laconicd-deployment ps

# View logs from specific services
laconic-so deployment --dir laconicd-deployment logs laconicd
laconic-so deployment --dir laconicd-deployment logs console
```

## ConfigMaps

ConfigMaps allow you to mount configuration files into containers:

```bash
# 1. Create the config directory in your deployment
mkdir -p my-deployment/data/my-config
echo "database_url=postgres://localhost" > my-deployment/data/my-config/app.conf

# 2. Reference it in the spec file
vim my-deployment/spec.yml
```

Add to spec.yml:
```yaml
configmaps:
  my-config: ./data/my-config
```

```bash
# 3. Restart to apply
laconic-so deployment --dir my-deployment stop
laconic-so deployment --dir my-deployment start
```

The files will be mounted in the container at `/config/` (or as specified by the stack).
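
In Docker Compose terms this is roughly a read-only bind mount; the snippet below is an assumed illustration of the effect, not the literal generated compose entry:

```yaml
# Illustrative only - roughly what the configmap above amounts to
services:
  webapp:
    volumes:
      - ./data/my-config:/config:ro
```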

## Deployment Directory Structure

A typical deployment directory contains:

```
my-deployment/
├── compose/
│   └── docker-compose-*.yml   # Generated compose files
├── config.env                 # Environment variables
├── deployment.yml             # Deployment metadata
├── spec.yml                   # Deployment specification
└── data/                      # Volume mounts and configs
    ├── service-data/          # Persistent service data
    └── config-maps/           # ConfigMap files
```

## Troubleshooting

### Common Issues

**Problem: "Cannot connect to Docker daemon"**

```bash
# Ensure Docker is running
docker ps

# Start Docker if needed (macOS)
open -a Docker

# Start Docker (Linux)
sudo systemctl start docker
```

**Problem: "Port already in use"**

```bash
# Either stop the conflicting service or use different ports.
# Edit spec.yml before creating the deployment:

network:
  ports:
    webapp:
      - '8081:80'   # Use 8081 instead of 8080
```

**Problem: "Image not found"**

```bash
# Build containers first
laconic-so --stack your-stack build-containers
```

**Problem: Volumes not persisting**

```bash
# Check whether you used --delete-volumes when stopping
# Volume data is in: <deployment-dir>/data/

# Don't use --delete-volumes if you want to keep data:
laconic-so deployment --dir my-deployment stop

# Only use --delete-volumes when you want to reset completely:
laconic-so deployment --dir my-deployment stop --delete-volumes
```

**Problem: Services not starting**

```bash
# Check the logs for errors
laconic-so deployment --dir my-deployment logs

# Check Docker container status
docker ps -a

# Try stopping and starting again
laconic-so deployment --dir my-deployment stop
laconic-so deployment --dir my-deployment start
```

### Inspecting Deployment State

```bash
# Check the deployment directory structure
ls -la my-deployment/

# Check running containers
docker ps

# Check container details
docker inspect <container-name>

# Check networks
docker network ls

# Check volumes
docker volume ls
```

## CLI Commands Reference

### Stack Operations

```bash
# Clone required repositories
laconic-so --stack <name> setup-repositories

# Build container images
laconic-so --stack <name> build-containers
```

### Deployment Initialization

```bash
# Initialize a deployment spec with defaults
laconic-so --stack <name> deploy init --output <spec-file>

# Initialize with configuration
laconic-so --stack <name> deploy init --output <spec-file> \
  --config PARAM1=value1,PARAM2=value2
```

### Deployment Creation

```bash
# Create a deployment directory from a spec
laconic-so --stack <name> deploy create \
  --spec-file <spec-file> \
  --deployment-dir <dir>
```

### Deployment Management

```bash
# Start all services
laconic-so deployment --dir <dir> start

# Stop services (preserves volumes)
laconic-so deployment --dir <dir> stop

# Stop and remove volumes
laconic-so deployment --dir <dir> stop --delete-volumes

# List running services
laconic-so deployment --dir <dir> ps

# View logs
laconic-so deployment --dir <dir> logs [--tail N] [--follow] [service]

# Show a mapped port
laconic-so deployment --dir <dir> port <service> <private-port>

# Execute a command in a service
laconic-so deployment --dir <dir> exec <service> <command>

# Update configuration
laconic-so deployment --dir <dir> update
```

### Quick Deploy Commands

```bash
# Start a stack directly
laconic-so --stack <name> deploy up

# Stop a stack
laconic-so --stack <name> deploy down [--delete-volumes]

# View logs
laconic-so --stack <name> deploy logs

# Show a port mapping
laconic-so --stack <name> deploy port <service> <port>
```

## Related Documentation

- [CLI Reference](./cli.md) - Complete CLI command documentation
- [Adding a New Stack](./adding-a-new-stack.md) - Creating custom stacks
- [Specification](./spec.md) - Internal structure and design
- [Kubernetes Enhancements](./k8s-deployment-enhancements.md) - Advanced K8s deployment options
- [Web App Deployment](./webapp.md) - Deploying web applications

## Examples

For more examples, see the test scripts:
- `scripts/quick-deploy-test.sh` - Quick deployment example
- `tests/deploy/run-deploy-test.sh` - Comprehensive test showing all features

## Summary

- Docker Compose is the default and recommended deployment mode
- Two workflows: deployment directory (recommended) or quick deploy
- The standard workflow is: setup → build → init → create → start
- Configuration is flexible, with multiple override layers
- Volume persistence is automatic unless explicitly deleted
- All deployment state is contained in the deployment directory
- For Kubernetes deployments, see the separate K8s documentation

You're now ready to deploy stacks using stack-orchestrator with Docker Compose!
@@ -1,128 +0,0 @@

# Deploying to the Laconic Network

## Overview

The Laconic network uses a **registry-based deployment model** where everything is published as blockchain records.

## Key Documentation in stack-orchestrator

- `docs/laconicd-with-console.md` - Setting up a laconicd network
- `docs/webapp.md` - Building and running webapps
- `stack_orchestrator/deploy/webapp/` - Implementation (14 modules)

## Core Concepts

### LRN (Laconic Resource Name)

Format: `lrn://laconic/[namespace]/[name]`

Examples (a parsing sketch follows the list):
- `lrn://laconic/deployers/my-deployer-name`
- `lrn://laconic/dns/example.com`
- `lrn://laconic/deployments/example.com`
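
Given that shape, splitting an LRN into its parts is straightforward; the sketch below assumes exactly one namespace segment:

```python
# Sketch: split an LRN of the form lrn://laconic/<namespace>/<name>
def parse_lrn(lrn: str):
    prefix = "lrn://laconic/"
    if not lrn.startswith(prefix):
        raise ValueError(f"not a laconic LRN: {lrn}")
    namespace, _, name = lrn[len(prefix):].partition("/")
    return namespace, name


assert parse_lrn("lrn://laconic/deployers/my-deployer-name") == ("deployers", "my-deployer-name")
```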

### Registry Record Types

| Record Type | Purpose |
|-------------|---------|
| `ApplicationRecord` | Published app metadata |
| `WebappDeployer` | Deployment service offering |
| `ApplicationDeploymentRequest` | User's request to deploy |
| `ApplicationDeploymentAuction` | Optional bidding for deployers |
| `ApplicationDeploymentRecord` | Completed deployment result |

## Deployment Workflows

### 1. Direct Deployment

```
User publishes ApplicationDeploymentRequest
  → targets a specific WebappDeployer (by LRN)
  → includes the payment TX hash
  → Deployer picks up the request, builds, deploys, publishes the result
```

### 2. Auction-Based Deployment

```
User publishes ApplicationDeploymentAuction
  → Deployers bid (commit/reveal phases)
  → Winner selected
  → User publishes a request targeting the winner
```

## Key CLI Commands

### Publish a Deployer Service
```bash
laconic-so publish-webapp-deployer --laconic-config config.yml \
  --api-url https://deployer-api.example.com \
  --name my-deployer \
  --payment-address laconic1... \
  --minimum-payment 1000alnt
```

### Request Deployment (User Side)
```bash
laconic-so request-webapp-deployment --laconic-config config.yml \
  --app lrn://laconic/apps/my-app \
  --deployer lrn://laconic/deployers/xyz \
  --make-payment auto
```

### Run Deployer Service (Deployer Side)
```bash
laconic-so deploy-webapp-from-registry --laconic-config config.yml --discover
```

## Laconic Config File

All tools require a laconic config file (`laconic.toml`):

```toml
[cosmos]
address_prefix = "laconic"
chain_id = "laconic_9000-1"
endpoint = "http://localhost:26657"
key = "<account-name>"
password = "<account-password>"
```

## Setting Up a Local Laconicd Network

```bash
# Clone, build, and deploy
laconic-so --stack fixturenet-laconic-loaded setup-repositories
laconic-so --stack fixturenet-laconic-loaded build-containers
laconic-so --stack fixturenet-laconic-loaded deploy create
laconic-so deployment --dir laconic-loaded-deployment start

# Check status
laconic-so deployment --dir laconic-loaded-deployment exec cli "laconic registry status"
```

## Key Implementation Files

| File | Purpose |
|------|---------|
| `publish_webapp_deployer.py` | Register a deployment service on the network |
| `publish_deployment_auction.py` | Create an auction for deployers to bid on |
| `handle_deployment_auction.py` | Monitor and bid on auctions (deployer-side) |
| `request_webapp_deployment.py` | Create a deployment request (user-side) |
| `deploy_webapp_from_registry.py` | Process requests and deploy (deployer-side) |
| `request_webapp_undeployment.py` | Request app removal |
| `undeploy_webapp_from_registry.py` | Process removal requests |
| `util.py` | LaconicRegistryClient - all registry interactions |

## Payment System

- **Token Denom**: `alnt` (Laconic network tokens)
- **Payment Options**:
  - `--make-payment`: Create a new payment with an amount (or "auto" for the deployer's minimum)
  - `--use-payment`: Reference an existing payment TX

## What's NOT Well-Documented

1. No end-to-end tutorial for the full deployment workflow
2. The stack publishing (vs. webapp) process is unclear
3. LRN naming conventions are not formally specified
4. Payment economics and token mechanics
@@ -26,14 +26,8 @@ fi
 SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
 WORK_DIR="${1:-/app}"
 
-if [ -f "${WORK_DIR}/build-webapp.sh" ]; then
-echo "Building webapp with ${WORK_DIR}/build-webapp.sh ..."
 cd "${WORK_DIR}" || exit 1
 
-./build-webapp.sh || exit 1
-exit 0
-fi
-
 if [ -f "next.config.mjs" ]; then
 NEXT_CONFIG_JS="next.config.mjs"
 IMPORT_OR_REQUIRE="import"
@@ -30,44 +30,36 @@ fi
 CERC_WEBAPP_FILES_DIR="${CERC_WEBAPP_FILES_DIR:-/app}"
 cd "$CERC_WEBAPP_FILES_DIR"
 
-if [ -f "./run-webapp.sh" ]; then
-echo "Running webapp with run-webapp.sh ..."
-cd "${WORK_DIR}" || exit 1
-./run-webapp.sh &
-tpid=$!
-wait $tpid
-else
-"$SCRIPT_DIR/apply-runtime-env.sh" "`pwd`" .next .next-r
-mv .next .next.old
-mv .next-r/.next .
+"$SCRIPT_DIR/apply-runtime-env.sh" "`pwd`" .next .next-r
+mv .next .next.old
+mv .next-r/.next .
 
 if [ "$CERC_NEXTJS_SKIP_GENERATE" != "true" ]; then
 jq -e '.scripts.cerc_generate' package.json >/dev/null
 if [ $? -eq 0 ]; then
 npm run cerc_generate > gen.out 2>&1 &
 tail -f gen.out &
 tpid=$!
 
 count=0
 generate_done="false"
 while [ $count -lt $CERC_MAX_GENERATE_TIME ] && [ "$generate_done" == "false" ]; do
 sleep 1
 count=$((count + 1))
 grep 'rendered as static' gen.out > /dev/null
 if [ $? -eq 0 ]; then
 generate_done="true"
-fi
-done
-if [ $generate_done != "true" ]; then
-echo "ERROR: 'npm run cerc_generate' not successful within CERC_MAX_GENERATE_TIME" 1>&2
-exit 1
 fi
+done
 
-kill $tpid $(ps -ef | grep node | grep next | grep generate | awk '{print $2}') 2>/dev/null
-tpid=""
+if [ $generate_done != "true" ]; then
+echo "ERROR: 'npm run cerc_generate' not successful within CERC_MAX_GENERATE_TIME" 1>&2
+exit 1
 fi
-fi
 
-$CERC_BUILD_TOOL start . -- -p ${CERC_LISTEN_PORT:-80}
+kill $tpid $(ps -ef | grep node | grep next | grep generate | awk '{print $2}') 2>/dev/null
+tpid=""
+fi
 fi
 
+$CERC_BUILD_TOOL start . -- -p ${CERC_LISTEN_PORT:-80}
@@ -14,6 +14,7 @@
 # along with this program. If not, see <http:#www.gnu.org/licenses/>.
 
 from stack_orchestrator.deploy.deployment_context import DeploymentContext
+from ruamel.yaml import YAML
 
 
 def create(context: DeploymentContext, extra_args):
@@ -22,12 +23,17 @@ def create(context: DeploymentContext, extra_args):
     # deterministic-deployment-proxy contract, which itself is a prereq for Optimism contract deployment
     fixturenet_eth_compose_file = context.deployment_dir.joinpath('compose', 'docker-compose-fixturenet-eth.yml')
 
+    with open(fixturenet_eth_compose_file, 'r') as yaml_file:
+        yaml = YAML()
+        yaml_data = yaml.load(yaml_file)
+
     new_script = '../config/fixturenet-optimism/run-geth.sh:/opt/testnet/run.sh'
 
-    def add_geth_volume(yaml_data):
-        if new_script not in yaml_data['services']['fixturenet-eth-geth-1']['volumes']:
-            yaml_data['services']['fixturenet-eth-geth-1']['volumes'].append(new_script)
+    if new_script not in yaml_data['services']['fixturenet-eth-geth-1']['volumes']:
+        yaml_data['services']['fixturenet-eth-geth-1']['volumes'].append(new_script)
 
-    context.modify_yaml(fixturenet_eth_compose_file, add_geth_volume)
+    with open(fixturenet_eth_compose_file, 'w') as yaml_file:
+        yaml = YAML()
+        yaml.dump(yaml_data, yaml_file)
 
     return None
@@ -2,6 +2,7 @@ version: "1.0"
 name: test
 description: "A test stack"
 repos:
+  - git.vdb.to/cerc-io/laconicd
   - git.vdb.to/cerc-io/test-project@test-branch
 containers:
   - cerc/test-container
@@ -45,22 +45,20 @@ class DeploymentContext:
     def get_compose_dir(self):
         return self.deployment_dir.joinpath(constants.compose_dir_name)
 
-    def get_compose_file(self, name: str):
-        return self.get_compose_dir() / f"docker-compose-{name}.yml"
-
     def get_cluster_id(self):
         return self.id
 
-    def init(self, dir: Path):
-        self.deployment_dir = dir.absolute()
+    def init(self, dir):
+        self.deployment_dir = dir
         self.spec = Spec()
         self.spec.init_from_file(self.get_spec_file())
         self.stack = Stack(self.spec.obj["stack"])
         self.stack.init_from_file(self.get_stack_file())
         deployment_file_path = self.get_deployment_file()
         if deployment_file_path.exists():
-            obj = get_yaml().load(open(deployment_file_path, "r"))
-            self.id = obj[constants.cluster_id_key]
+            with deployment_file_path:
+                obj = get_yaml().load(open(deployment_file_path, "r"))
+                self.id = obj[constants.cluster_id_key]
         # Handle the case of a legacy deployment with no file
         # Code below is intended to match the output from _make_default_cluster_name()
         # TODO: remove when we no longer need to support legacy deployments
@@ -69,19 +67,3 @@ class DeploymentContext:
             unique_cluster_descriptor = f"{path},{self.get_stack_file()},None,None"
             hash = hashlib.md5(unique_cluster_descriptor.encode()).hexdigest()[:16]
             self.id = f"{constants.cluster_name_prefix}{hash}"
-
-    def modify_yaml(self, file_path: Path, modifier_func):
-        """
-        Load a YAML from the deployment, apply a modification function, and write it back.
-        """
-        if not file_path.absolute().is_relative_to(self.deployment_dir):
-            raise ValueError(f"File is not inside deployment directory: {file_path}")
-
-        yaml = get_yaml()
-        with open(file_path, 'r') as f:
-            yaml_data = yaml.load(f)
-
-        modifier_func(yaml_data)
-
-        with open(file_path, 'w') as f:
-            yaml.dump(yaml_data, f)
@@ -443,16 +443,18 @@ def _check_volume_definitions(spec):
 @click.command()
 @click.option("--spec-file", required=True, help="Spec file to use to create this deployment")
 @click.option("--deployment-dir", help="Create deployment files in this directory")
-@click.argument('extra_args', nargs=-1, type=click.UNPROCESSED)
+# TODO: Hack
+@click.option("--network-dir", help="Network configuration supplied in this directory")
+@click.option("--initial-peers", help="Initial set of persistent peers")
 @click.pass_context
-def create(ctx, spec_file, deployment_dir, extra_args):
+def create(ctx, spec_file, deployment_dir, network_dir, initial_peers):
     deployment_command_context = ctx.obj
-    return create_operation(deployment_command_context, spec_file, deployment_dir, extra_args)
+    return create_operation(deployment_command_context, spec_file, deployment_dir, network_dir, initial_peers)
 
 
 # The init command's implementation is in a separate function so that we can
 # call it from other commands, bypassing the click decoration stuff
-def create_operation(deployment_command_context, spec_file, deployment_dir, extra_args):
+def create_operation(deployment_command_context, spec_file, deployment_dir, network_dir, initial_peers):
     parsed_spec = Spec(os.path.abspath(spec_file), get_parsed_deployment_spec(spec_file))
     _check_volume_definitions(parsed_spec)
     stack_name = parsed_spec["stack"]
```diff
@@ -539,7 +541,7 @@ def create_operation(deployment_command_context, spec_file, deployment_dir, extr
     deployer_config_generator = getDeployerConfigGenerator(deployment_type, deployment_context)
     # TODO: make deployment_dir_path a Path above
     deployer_config_generator.generate(deployment_dir_path)
-    call_stack_deploy_create(deployment_context, extra_args)
+    call_stack_deploy_create(deployment_context, [network_dir, initial_peers, deployment_command_context])
 
 
 # TODO: this code should be in the stack .py files but
```
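The changed call packs the two new values plus the command context into a positional list before handing it to the stack's create hook. The sketch below shows how a stack-side hook might unpack that list; the hook name, signature, and log messages are assumptions inferred from the call site, not code from this diff.

```python
# Hypothetical stack-side hook illustrating how the packed list above could be consumed.
def create(deployment_context, extra_args):
    network_dir, initial_peers, command_context = extra_args
    if network_dir is not None:
        print(f"Using network config from: {network_dir}")
    if initial_peers is not None:
        print(f"Seeding persistent peers: {initial_peers}")
    # command_context would give the hook access to wider CLI state if it needs it.


if __name__ == "__main__":
    create(object(), ["./network", "peer1@host:26656", None])
```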
```diff
@@ -92,8 +92,9 @@ class Spec:
         return self.obj.get(item, default)
 
     def init_from_file(self, file_path: Path):
-        self.obj = get_yaml().load(open(file_path, "r"))
-        self.file_path = file_path
+        with file_path:
+            self.obj = get_yaml().load(open(file_path, "r"))
+            self.file_path = file_path
 
     def get_image_registry(self):
         return self.obj.get(constants.image_registry_key)
```
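A note on the `with file_path:` form introduced here: `pathlib.Path` context-manager support was deprecated in Python 3.9 and removed in 3.13, so this statement raises a TypeError on newer interpreters, and in any case it does not close the file object created by `open()`. The sketch below shows one conventional alternative that scopes the file handle itself; the `SpecSketch` class name and the direct use of `ruamel.yaml` in place of `get_yaml()` are stand-ins, not the project's implementation.

```python
# Sketch of an alternative to "with file_path:", assuming ruamel.yaml is available.
from pathlib import Path
import ruamel.yaml


class SpecSketch:
    """Illustrative stand-in for Spec; not the project's class."""

    def init_from_file(self, file_path: Path):
        # The open file handle, not the Path, is the context manager here.
        with open(file_path, "r") as f:
            self.obj = ruamel.yaml.YAML().load(f)
        self.file_path = file_path


if __name__ == "__main__":
    p = Path("spec-example.yml")
    p.write_text("stack: example\n")
    s = SpecSketch()
    s.init_from_file(p)
    print(s.obj["stack"])
```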
```diff
@@ -27,4 +27,5 @@ class Stack:
         self.name = name
 
     def init_from_file(self, file_path: Path):
-        self.obj = get_yaml().load(open(file_path, "r"))
+        with file_path:
+            self.obj = get_yaml().load(open(file_path, "r"))
```
```diff
@@ -92,6 +92,7 @@ def create_deployment(ctx, deployment_dir, image, url, kube_config, image_regist
         spec_file_name,
         deployment_dir,
         None,
+        None
     )
     # Fix up the container tag inside the deployment compose file
     _fixup_container_tag(deployment_dir, image)
```
```diff
@@ -172,6 +172,7 @@ def process_app_deployment_request(
     logger.log(
         f"Creating webapp deployment in: {deployment_dir} with container id: {deployment_container_tag}"
     )
+    # CREATES DEPLOYMENT DIR, NOT SKIPPING FOR TESTING
     deploy_webapp.create_deployment(
         ctx,
         deployment_dir,
```
```diff
@@ -214,21 +215,24 @@ def process_app_deployment_request(
             # add_tags_to_image(image_registry, app_image_shared_tag, deployment_container_tag)
             logger.log("Tag complete")
         else:
-            extra_build_args = [] # TODO: pull from request
-            logger.log(f"Building container image: {deployment_container_tag}")
-            build_container_image(
-                app, deployment_container_tag, extra_build_args, logger
-            )
-            logger.log("Build complete")
-            logger.log(f"Pushing container image: {deployment_container_tag}")
-            push_container_image(deployment_dir, logger)
-            logger.log("Push complete")
-            # The build/push commands above will use the unique deployment tag, so now we need to add the shared tag.
-            logger.log(
-                f"(SKIPPED) Adding global app image tag: {app_image_shared_tag} to newly built image: {deployment_container_tag}"
-            )
-            # add_tags_to_image(image_registry, deployment_container_tag, app_image_shared_tag)
-            logger.log("Tag complete")
+            # SKIP BUILD
+            logger.log("TESTING: Skipping container build.")
+
+            # extra_build_args = [] # TODO: pull from request
+            # logger.log(f"Building container image: {deployment_container_tag}")
+            # build_container_image(
+            #     app, deployment_container_tag, extra_build_args, logger
+            # )
+            # logger.log("Build complete")
+            # logger.log(f"Pushing container image: {deployment_container_tag}")
+            # push_container_image(deployment_dir, logger)
+            # logger.log("Push complete")
+            # # The build/push commands above will use the unique deployment tag, so now we need to add the shared tag.
+            # logger.log(
+            #     f"(SKIPPED) Adding global app image tag: {app_image_shared_tag} to newly built image: {deployment_container_tag}"
+            # )
+            # # add_tags_to_image(image_registry, deployment_container_tag, app_image_shared_tag)
+            # logger.log("Tag complete")
     else:
         logger.log("Requested app is already deployed, skipping build and image push")
 
```
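This hunk disables the build/push path by commenting it out in place. As an alternative, the same calls could be gated behind an opt-in toggle so the test behaviour does not require editing source; the sketch below is only a suggestion, and the `CERC_SKIP_BUILD` variable name and `skip_build()` helper are hypothetical.

```python
# Sketch of an opt-in toggle as an alternative to commenting the build out wholesale.
import os


def skip_build() -> bool:
    """Return True when the operator has asked for container builds to be skipped."""
    return os.environ.get("CERC_SKIP_BUILD", "false").lower() in ("1", "true", "yes")


# At the call site, the original build/push block could then stay intact:
#
#     if skip_build():
#         logger.log("TESTING: Skipping container build.")
#     else:
#         build_container_image(app, deployment_container_tag, extra_build_args, logger)
#         push_container_image(deployment_dir, logger)
```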
```diff
@@ -241,7 +245,9 @@ def process_app_deployment_request(
 
     # 8. update k8s deployment
     if needs_k8s_deploy:
-        deploy_to_k8s(deployment_record, deployment_dir, recreate_on_deploy, logger)
+        # SKIP DEPLOY
+        logger.log("TESTING: Skipping deployment to k8s.")
+        # deploy_to_k8s(deployment_record, deployment_dir, recreate_on_deploy, logger)
 
     logger.log("Publishing deployment to registry.")
     publish_deployment(
```
```diff
@@ -72,11 +72,12 @@ def process_app_removal_request(
     # TODO(telackey): Call the function directly. The easiest way to build the correct click context is to
     # exec the process, but it would be better to refactor so we could just call down_operation with the
     # necessary parameters
-    down_command = [sys.argv[0], "deployment", "--dir", deployment_dir, "down"]
-    if delete_volumes:
-        down_command.append("--delete-volumes")
-    result = subprocess.run(down_command)
-    result.check_returncode()
+    main_logger.log("TESTING: Skipping stopping deployment.")
+    # down_command = [sys.argv[0], "deployment", "--dir", deployment_dir, "down"]
+    # if delete_volumes:
+    #     down_command.append("--delete-volumes")
+    # result = subprocess.run(down_command)
+    # result.check_returncode()
 
     removal_record = {
         "record": {
```
```diff
@@ -180,7 +180,9 @@ def get_k8s_dir():
 def get_parsed_deployment_spec(spec_file):
     spec_file_path = Path(spec_file)
     try:
-        return get_yaml().load(open(spec_file_path, "r"))
+        with spec_file_path:
+            deploy_spec = get_yaml().load(open(spec_file_path, "r"))
+            return deploy_spec
     except FileNotFoundError as error:
         # We try here to generate a useful diagnostic error
         print(f"Error: spec file: {spec_file_path} does not exist")
```
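This hunk uses the same `with spec_file_path:` form discussed above. Below is a compact sketch of the same load-with-diagnostics pattern using the file handle as the context manager; `ruamel.yaml` stands in for `get_yaml()`, the error text is copied from the hunk, and the `sys.exit(1)` is an assumption about how a caller would want to stop rather than code from this diff.

```python
# Sketch only: load a spec file with a scoped file handle and a friendly missing-file message.
from pathlib import Path
import sys
import ruamel.yaml


def get_parsed_deployment_spec(spec_file):
    spec_file_path = Path(spec_file)
    try:
        with open(spec_file_path, "r") as f:
            return ruamel.yaml.YAML().load(f)
    except FileNotFoundError:
        print(f"Error: spec file: {spec_file_path} does not exist")
        sys.exit(1)
```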
```diff
@@ -14,13 +14,8 @@ delete_cluster_exit () {
 
 # Test basic stack-orchestrator deploy
 echo "Running stack-orchestrator deploy test"
-if [ "$1" == "from-path" ]; then
-  TEST_TARGET_SO="laconic-so"
-else
-  TEST_TARGET_SO=$( ls -t1 ./package/laconic-so* | head -1 )
-fi
+# Bit of a hack, test the most recent package
+TEST_TARGET_SO=$( ls -t1 ./package/laconic-so* | head -1 )
 
 # Set a non-default repo dir
 export CERC_REPO_BASE_DIR=~/stack-orchestrator-test/repo-base-dir
 echo "Testing this package: $TEST_TARGET_SO"
```