Stack Orchestrator Template #976

opened 2026-01-15 16:55:04 +00:00 by rjk-laconic · 0 comments

Stack Orchestrator Complete Guide

Purpose: Take any repository and make it deployable via laconic-so with full CI/CD.

This guide is designed to be followed by Claude Code or any AI assistant to systematically containerize and deploy any application using Laconic's stack-orchestrator infrastructure.


Table of Contents

  1. Phase 1: Repository Analysis
  2. Phase 2: Stack Definition
  3. Phase 3: Container Build Files
  4. Phase 4: Docker Compose
  5. Phase 5: Local Testing
  6. Phase 6: CI/CD Integration
  7. Phase 7: Data Optimization
  8. Phase 8: PR Submission
  9. Reference: Common Patterns
  10. Reference: Troubleshooting

Phase 1: Repository Analysis

Step 1.1: Identify Tech Stack

Examine these files to determine the application type:

| File | Indicates |
|------|-----------|
| `package.json` | Node.js (check for next, vite, express, etc.) |
| `requirements.txt` / `pyproject.toml` | Python |
| `go.mod` | Go |
| `Cargo.toml` | Rust |
| `pom.xml` / `build.gradle` | Java |
| `Gemfile` | Ruby |
| `composer.json` | PHP |

Step 1.2: Extract Build Information

For Node.js apps, read package.json and find:

# Key fields to extract:
jq '.scripts.build' package.json      # Build command
jq '.scripts.start' package.json      # Start command
jq '.scripts.dev' package.json        # Dev command (hints at framework)
jq '.dependencies' package.json       # Runtime dependencies
jq '.devDependencies' package.json    # Build dependencies

Step 1.3: Identify Port

Search for port configuration:

# Common patterns
grep -r "PORT" --include="*.ts" --include="*.js" --include="*.env*" .
grep -r "listen(" --include="*.ts" --include="*.js" .
grep -r ":3000\|:5000\|:8080\|:4000" --include="*.ts" --include="*.js" .

Step 1.4: Identify Database Requirements

# Check for database dependencies
grep -i "postgres\|mysql\|mongo\|redis\|sqlite" package.json
grep -r "DATABASE_URL\|DB_HOST\|MONGO_URI\|REDIS_URL" --include="*.ts" --include="*.js" --include="*.env*" .

Step 1.5: Identify Environment Variables

# Find all env vars used
grep -r "process.env\." --include="*.ts" --include="*.js" . | sed 's/.*process.env.\([A-Z_]*\).*/\1/' | sort -u
# Or check for .env.example
cat .env.example 2>/dev/null || cat .env.sample 2>/dev/null

Step 1.6: Document Findings

Create analysis summary:

# Repository Analysis
repo_url: github.com/org/repo
tech_stack: node  # node, python, go, rust, etc.
framework: express  # next, vite, express, fastapi, gin, etc.
node_version: "20"  # or python_version, go_version, etc.

build:
  command: "npm run build"
  output: "dist/"  # or .next/, build/, etc.

start:
  command: "node dist/index.js"
  port: 5000

database:
  type: postgresql  # postgresql, mysql, mongodb, redis, none
  env_var: DATABASE_URL

environment_variables:
  required:
    - DATABASE_URL
    - NODE_ENV
  optional:
    - PORT
    - LOG_LEVEL

static_assets:
  - public/
  - dist/client/

Phase 2: Stack Definition

Step 2.1: Choose Stack Name

  • Use lowercase with hyphens: my-app-name
  • Should be descriptive but concise
  • Must be unique in stack-orchestrator
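The naming rules above can be sanity-checked with a small regex. This is a suggested convention check, not an official validator, and `my-app-name` is a placeholder:

```shell
# Accept lowercase alphanumeric segments separated by single hyphens
name="my-app-name"
if echo "$name" | grep -Eq '^[a-z0-9]+(-[a-z0-9]+)*$'; then
  echo "ok: $name"
else
  echo "invalid: $name"
fi
```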

Step 2.2: Create stack.yml

Location: stack-orchestrator/stack_orchestrator/data/stacks/<stack-name>/stack.yml

version: "1.0"
name: <stack-name>
description: "<One-line description of the application>"
repos:
  - github.com/<org>/<repo>
containers:
  - cerc/<app-name>
pods:
  - <pod-name>

Rules:

  • name must match directory name
  • repos use format github.com/org/repo (no https://)
  • containers must start with cerc/
  • pods must match docker-compose file name: docker-compose-<pod-name>.yml
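For illustration, a filled-in stack.yml for a hypothetical Express + PostgreSQL app might look like this (the org, repo, and app names are all placeholders):

```yaml
version: "1.0"
name: todo-api
description: "Express + PostgreSQL todo API"
repos:
  - github.com/example-org/todo-api
containers:
  - cerc/todo-api
pods:
  - todo-api
```

Here the pod name `todo-api` implies a compose file named docker-compose-todo-api.yml.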

Step 2.3: Create README.md

Location: stack-orchestrator/stack_orchestrator/data/stacks/<stack-name>/README.md

# <App Name> Stack

<Brief description>

## Quick Start

\`\`\`bash
laconic-so --stack <stack-name> setup-repositories
laconic-so --stack <stack-name> build-containers
laconic-so --stack <stack-name> deploy-system up
\`\`\`

## Access

- Web UI: http://localhost:<PORT>
- API: http://localhost:<PORT>/api

## Configuration

| Variable | Description | Default |
|----------|-------------|---------|
| HOST_PORT | External port | <PORT> |

## Services

- **<app-name>**: Main application
- **<db-name>**: Database (if applicable)

Phase 3: Container Build Files

Step 3.1: Create build.sh

Location: stack-orchestrator/stack_orchestrator/data/container-build/cerc-<app-name>/build.sh

#!/usr/bin/env bash
# Build script for <app-name>
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )

docker build -t cerc/<app-name>:local \
    -f ${SCRIPT_DIR}/Dockerfile \
    ${build_command_args} \
    ${CERC_REPO_BASE_DIR}/<repo-directory-name>

Note: <repo-directory-name> is the directory name after cloning (usually the repo name).

Step 3.2: Create Dockerfile

Location: stack-orchestrator/stack_orchestrator/data/container-build/cerc-<app-name>/Dockerfile

Node.js Template

# Build stage
FROM node:20-alpine AS builder

WORKDIR /app

# Install dependencies
COPY package*.json ./
RUN npm ci

# Copy source and build
COPY . .
RUN npm run build

# Production stage
FROM node:20-alpine AS runner

WORKDIR /app

# Install production dependencies only
COPY package*.json ./
RUN npm ci --omit=dev

# Copy built assets
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/public ./public

# Add any additional files needed at runtime
# COPY --from=builder /app/drizzle ./drizzle

ENV NODE_ENV=production
ENV PORT=5000

EXPOSE 5000

CMD ["node", "dist/index.js"]

Python Template

FROM python:3.11-slim

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application
COPY . .

ENV PYTHONUNBUFFERED=1

EXPOSE 8000

CMD ["python", "-m", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

Go Template

# Build stage
FROM golang:1.21-alpine AS builder

WORKDIR /app

COPY go.mod go.sum ./
RUN go mod download

COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o main .

# Production stage
FROM alpine:latest

WORKDIR /app

COPY --from=builder /app/main .

EXPOSE 8080

CMD ["./main"]

Step 3.3: Create Startup Script (if needed)

Location: stack-orchestrator/stack_orchestrator/data/container-build/cerc-<app-name>/scripts/start.sh

#!/usr/bin/env bash
set -e

if [ -n "$CERC_SCRIPT_DEBUG" ]; then
    set -x
fi

echo "Starting <app-name>..."

# Wait for database (if needed)
if [ -n "$DATABASE_URL" ]; then
    echo "Waiting for database..."

    DB_HOST=$(echo "$DATABASE_URL" | sed -e 's|.*@||' -e 's|:.*||' -e 's|/.*||')
    # Port is optional in the URL: extract it only when present, else default below
    DB_PORT=$(echo "$DATABASE_URL" | sed -n 's|.*@[^:/]*:\([0-9][0-9]*\).*|\1|p')
    : "${DB_PORT:=5432}"

    timeout=60
    counter=0
    until nc -z "$DB_HOST" "$DB_PORT" 2>/dev/null; do
        counter=$((counter + 1))
        if [ $counter -ge $timeout ]; then
            echo "Error: Database not available after ${timeout}s"
            exit 1
        fi
        echo "Waiting for database at ${DB_HOST}:${DB_PORT}... ($counter/$timeout)"
        sleep 1
    done
    echo "Database is available!"
fi

# Run migrations (if applicable)
if [ "${RUN_MIGRATIONS:-true}" = "true" ]; then
    echo "Running migrations..."
    npm run migrate 2>&1 || echo "Migration warning, continuing..."
fi

# Extract cached data (if applicable)
if [ -f "data/cache.tar.gz" ] && [ ! -d "cache" ]; then
    echo "Extracting cached data..."
    tar -xzf data/cache.tar.gz -C .
fi

# Start the application
echo "Starting application on port ${PORT:-5000}..."
exec node dist/index.js

Update Dockerfile to use startup script:

# Add to Dockerfile
COPY scripts/start.sh /app/start.sh
RUN chmod +x /app/start.sh

CMD ["/app/start.sh"]

Phase 4: Docker Compose

Step 4.1: Create docker-compose file

Location: stack-orchestrator/stack_orchestrator/data/compose/docker-compose-<pod-name>.yml

With PostgreSQL

version: "3.8"

services:
  <app-name>-db:
    image: postgres:15-alpine
    restart: unless-stopped
    environment:
      POSTGRES_USER: ${DB_USER:-appuser}
      POSTGRES_PASSWORD: ${DB_PASSWORD:-apppass}
      POSTGRES_DB: ${DB_NAME:-appdb}
    volumes:
      - <app-name>_db_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-appuser} -d ${DB_NAME:-appdb}"]
      interval: 10s
      timeout: 5s
      retries: 5

  <app-name>:
    image: cerc/<app-name>:local
    restart: unless-stopped
    depends_on:
      <app-name>-db:
        condition: service_healthy
    environment:
      NODE_ENV: production
      PORT: ${APP_PORT:-5000}
      DATABASE_URL: postgresql://${DB_USER:-appuser}:${DB_PASSWORD:-apppass}@<app-name>-db:5432/${DB_NAME:-appdb}
    ports:
      - "${HOST_PORT:-5000}:${APP_PORT:-5000}"
    healthcheck:
      test: ["CMD", "nc", "-z", "localhost", "${APP_PORT:-5000}"]
      interval: 30s
      timeout: 10s
      retries: 10

volumes:
  <app-name>_db_data:

Without Database

version: "3.8"

services:
  <app-name>:
    image: cerc/<app-name>:local
    restart: unless-stopped
    environment:
      NODE_ENV: production
      PORT: ${APP_PORT:-5000}
    ports:
      - "${HOST_PORT:-5000}:${APP_PORT:-5000}"
    healthcheck:
      test: ["CMD", "nc", "-z", "localhost", "${APP_PORT:-5000}"]
      interval: 30s
      timeout: 10s
      retries: 10

With Redis

version: "3.8"

services:
  <app-name>-redis:
    image: redis:7-alpine
    restart: unless-stopped
    volumes:
      - <app-name>_redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  <app-name>:
    image: cerc/<app-name>:local
    restart: unless-stopped
    depends_on:
      <app-name>-redis:
        condition: service_healthy
    environment:
      NODE_ENV: production
      REDIS_URL: redis://<app-name>-redis:6379
    ports:
      - "${HOST_PORT:-5000}:5000"

volumes:
  <app-name>_redis_data:

Phase 5: Local Testing

Step 5.1: Setup Repositories

laconic-so --stack <stack-name> setup-repositories

This clones repos to ~/cerc/ (or $CERC_REPO_BASE_DIR).

Step 5.2: Build Containers

laconic-so --stack <stack-name> build-containers

Watch for errors. Common fixes:

  • Missing dependencies: Add to Dockerfile
  • Build script errors: Check paths in build.sh
  • Permission issues: Add chmod +x for scripts

Step 5.3: Deploy Stack

laconic-so --stack <stack-name> deploy-system up

Step 5.4: Verify

# Check containers running
docker ps --filter "name=<app-name>"

# Check logs
laconic-so --stack <stack-name> deploy-system logs -f

# Test endpoints
curl http://localhost:<PORT>/
curl http://localhost:<PORT>/health
curl http://localhost:<PORT>/api/status

Step 5.5: Debug Common Issues

# View all logs
docker logs <container-name>

# Shell into container
docker exec -it <container-name> sh

# Check database
docker exec -it <db-container> psql -U <user> -d <db>

# Stop and clean
laconic-so --stack <stack-name> deploy-system down -v

Step 5.6: Iterate

Repeat build/deploy cycle until everything works:

laconic-so --stack <stack-name> deploy-system down -v
laconic-so --stack <stack-name> build-containers
laconic-so --stack <stack-name> deploy-system up

Phase 6: CI/CD Integration

Step 6.1: Add Playwright to package.json

{
  "devDependencies": {
    "@playwright/test": "^1.40.0"
  },
  "scripts": {
    "test:e2e": "playwright test"
  }
}

Run npm install to update lock file.
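To confirm the lock file actually picked up the new devDependency, you can query it with jq. This assumes an npm lockfileVersion 2 or 3 layout, where entries live under the top-level `packages` key:

```shell
# Print the resolved @playwright/test version from package-lock.json
# (lockfileVersion 2/3 keeps entries under the top-level "packages" object)
jq -r '.packages["node_modules/@playwright/test"].version' package-lock.json
```

A version string means the dependency is locked; `null` means npm install has not been run yet.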

Step 6.2: Create playwright.config.ts

import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './e2e',
  fullyParallel: true,
  forbidOnly: !!process.env.CI,
  retries: process.env.CI ? 1 : 0,
  workers: process.env.CI ? 4 : undefined,
  reporter: process.env.CI ? 'github' : 'html',
  timeout: 30000,

  use: {
    baseURL: process.env.BASE_URL || 'http://localhost:<PORT>',
    trace: 'on-first-retry',
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
  },

  projects: [
    {
      name: 'chromium',
      use: { ...devices['Desktop Chrome'] },
    },
  ],

  outputDir: 'test-results/',
});

Step 6.3: Create E2E Tests

Create e2e/smoke.spec.ts:

import { test, expect } from '@playwright/test';

test.describe('Smoke Tests', () => {
  test('homepage loads', async ({ page }) => {
    await page.goto('/');
    await page.waitForLoadState('domcontentloaded');
    await expect(page.locator('body')).not.toBeEmpty();
  });

  test('API health check', async ({ request }) => {
    const response = await request.get('/health');
    expect(response.ok()).toBeTruthy();
  });
});

test.describe('Page Tests', () => {
  const routes = [
    '/',
    '/about',
    // Add your routes
  ];

  for (const route of routes) {
    test(`${route} loads without error`, async ({ page }) => {
      const errors: string[] = [];
      page.on('pageerror', (err) => errors.push(err.message));

      await page.goto(route);
      await page.waitForLoadState('domcontentloaded');

      const criticalErrors = errors.filter(
        (e) => !e.includes('ResizeObserver')
      );
      expect(criticalErrors).toHaveLength(0);
    });
  }
});

Step 6.4: Create Integration Test Script

Create scripts/integration-test.sh:

#!/usr/bin/env bash
set +e

BASE_URL="${BASE_URL:-http://localhost:<PORT>}"
PASSED=0
FAILED=0

RED='\033[0;31m'
GREEN='\033[0;32m'
NC='\033[0m'

log_pass() { echo -e "${GREEN}[PASS]${NC} $1"; PASSED=$((PASSED + 1)); }
log_fail() { echo -e "${RED}[FAIL]${NC} $1"; FAILED=$((FAILED + 1)); }

test_endpoint() {
    local name="$1"
    local endpoint="$2"
    local expected="${3:-200}"

    status=$(curl -s -o /dev/null -w "%{http_code}" --max-time 30 "$BASE_URL$endpoint")

    if [ "$status" = "$expected" ]; then
        log_pass "$name (HTTP $status)"
    else
        log_fail "$name (expected $expected, got $status)"
    fi
}

echo "========================================"
echo "Integration Tests - $BASE_URL"
echo "========================================"

# Frontend
test_endpoint "Homepage" "/"
test_endpoint "Static Assets" "/favicon.ico"

# API
test_endpoint "Health Check" "/health"
test_endpoint "API Status" "/api/status"

# Error handling
test_endpoint "404 Handler" "/nonexistent" "404"

echo "========================================"
echo "Results: $PASSED passed, $FAILED failed"
echo "========================================"

[ $FAILED -eq 0 ] && exit 0 || exit 1

Step 6.5: Create GitHub Workflow

Create .github/workflows/ci.yml:

name: CI/CD

on:
  push:
    branches: [main]
    tags: ['v*']
  pull_request:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ghcr.io/${{ github.repository_owner }}/<app-name>

jobs:
  build-test:
    name: Build & Test via laconic-so
    runs-on: ubuntu-latest
    timeout-minutes: 45
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          path: <RepoName>

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install Playwright
        run: |
          cd <RepoName>
          npm ci
          npx playwright install --with-deps chromium          

      - name: Install laconic-so
        run: |
          pip install git+https://git.vdb.to/cerc-io/stack-orchestrator.git@main
          laconic-so version          

      - name: Setup repositories
        run: |
          mkdir -p ${{ github.workspace }}/repos
          cp -r ${{ github.workspace }}/<RepoName> ${{ github.workspace }}/repos/<RepoName>

          cd ${{ github.workspace }}/repos/<RepoName>
          git config user.email "ci@example.com"
          git config user.name "CI"
          git init && git add -A && git commit -m "CI" --allow-empty

          laconic-so --stack <stack-name> setup-repositories          
        env:
          CERC_REPO_BASE_DIR: ${{ github.workspace }}/repos

      - name: Build containers
        run: laconic-so --stack <stack-name> build-containers
        env:
          CERC_REPO_BASE_DIR: ${{ github.workspace }}/repos

      - name: Deploy stack
        run: laconic-so --stack <stack-name> deploy-system up
        env:
          CERC_REPO_BASE_DIR: ${{ github.workspace }}/repos

      - name: Wait for healthy
        run: |
          timeout 300 bash -c '
            until curl -sf http://localhost:<PORT>/health > /dev/null 2>&1; do
              sleep 3
            done
          '          

      - name: Integration tests
        run: |
          cd <RepoName>
          chmod +x scripts/integration-test.sh
          ./scripts/integration-test.sh          
        env:
          BASE_URL: http://localhost:<PORT>

      - name: Playwright tests
        run: |
          cd <RepoName>
          npm run test:e2e          
        env:
          BASE_URL: http://localhost:<PORT>

      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-results
          path: |
            <RepoName>/playwright-report/
            <RepoName>/test-results/            
          retention-days: 7

      - name: Logs on failure
        if: failure()
        run: laconic-so --stack <stack-name> deploy-system logs
        env:
          CERC_REPO_BASE_DIR: ${{ github.workspace }}/repos

      - name: Cleanup
        if: always()
        run: laconic-so --stack <stack-name> deploy-system down -v || true
        env:
          CERC_REPO_BASE_DIR: ${{ github.workspace }}/repos

      - name: Login to GHCR
        if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/'))
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Push image
        if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/'))
        run: |
          SOURCE="cerc/<app-name>:local"
          if [[ "${{ github.ref }}" == refs/tags/v* ]]; then
            docker tag $SOURCE ${{ env.IMAGE_NAME }}:${{ github.ref_name }}
            docker tag $SOURCE ${{ env.IMAGE_NAME }}:latest
            docker push ${{ env.IMAGE_NAME }}:${{ github.ref_name }}
            docker push ${{ env.IMAGE_NAME }}:latest
          else
            docker tag $SOURCE ${{ env.IMAGE_NAME }}:main
            docker push ${{ env.IMAGE_NAME }}:main
          fi          

Step 6.6: Update .gitignore

# Playwright
test-results/
playwright-report/
playwright/.cache/

Phase 7: Data Optimization

For repos with large cached data (images, JSON files, etc.):

Step 7.1: Identify Large Data

# Find large directories
du -sh */ | sort -hr | head -10

# Count files in directories
find . -type f -not -path "./node_modules/*" -not -path "./.git/*" | cut -d/ -f2 | sort | uniq -c | sort -rn

Step 7.2: Compress Data

mkdir -p data

# Compress directory
tar -czf data/cache-name.tar.gz -C . cache-directory/

# Check size
ls -lh data/

# If > 100MB, split for GitHub
if [ $(stat -f%z data/cache-name.tar.gz 2>/dev/null || stat -c%s data/cache-name.tar.gz) -gt 100000000 ]; then
    split -b 50m data/cache-name.tar.gz data/cache-name.tar.gz.
    rm data/cache-name.tar.gz
fi
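After splitting, it is worth verifying that the pieces still reassemble into a readable archive before committing. This streams the concatenated parts through tar without writing the joined file to disk (the `cache-name` paths match the example above):

```shell
# List the archive contents from the concatenated split parts;
# a zero exit status means the pieces reassemble cleanly
cat data/cache-name.tar.gz.* | tar -tzf - > /dev/null && echo "archive OK"
```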

Step 7.3: Update .gitignore

# Cached data (extracted at startup)
cache-directory/
image-cache/

# Track archives
*.tar.gz
!data/*.tar.gz
!data/*.tar.gz.*

Step 7.4: Remove from Git

git rm -r --cached cache-directory/
git add data/*.tar.gz* .gitignore
git commit -m "refactor: Compress cached data into archives"

Step 7.5: Update Startup Script

Add extraction to startup script:

# Single archive
if [ -f "data/cache.tar.gz" ] && [ ! -d "cache-directory" ]; then
    echo "Extracting cache..."
    tar -xzf data/cache.tar.gz -C .
fi

# Split archives
if [ -f "data/cache.tar.gz.aa" ] && [ ! -d "cache-directory" ]; then
    echo "Extracting split archives..."
    cat data/cache.tar.gz.* | tar -xzf - -C .
fi

Phase 8: PR Submission

Step 8.1: Stack PR to stack-orchestrator

Create branch and PR to cerc-io/stack-orchestrator:

cd ~/cerc/stack-orchestrator  # or wherever cloned
git checkout -b feat/<stack-name>-stack

# Files to add:
# - stack_orchestrator/data/stacks/<stack-name>/stack.yml
# - stack_orchestrator/data/stacks/<stack-name>/README.md
# - stack_orchestrator/data/container-build/cerc-<app-name>/build.sh
# - stack_orchestrator/data/container-build/cerc-<app-name>/Dockerfile
# - stack_orchestrator/data/container-build/cerc-<app-name>/scripts/start.sh (if needed)
# - stack_orchestrator/data/compose/docker-compose-<pod-name>.yml

git add .
git commit -m "feat: Add <app-name> stack"
git push origin feat/<stack-name>-stack

Step 8.2: CI/CD PR to App Repository

Create branch and PR to the application repository:

cd ~/cerc/<RepoName>
git checkout -b feat/ci-cd

# Files to add:
# - .github/workflows/ci.yml
# - playwright.config.ts
# - e2e/smoke.spec.ts
# - scripts/integration-test.sh
# - data/*.tar.gz* (if compressed data)

git add .
git commit -m "feat: Add laconic-so CI/CD pipeline"
git push origin feat/ci-cd

Step 8.3: Update CI Workflow Reference

Once stack PR is merged, update CI workflow:

# Change from branch reference:
pip install git+https://git.vdb.to/cerc-io/stack-orchestrator.git@feat/<stack-name>-stack

# To main:
pip install git+https://git.vdb.to/cerc-io/stack-orchestrator.git@main

Reference: Common Patterns

Next.js App

FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
ENV NODE_ENV=production
EXPOSE 3000
CMD ["node", "server.js"]

Vite/React App (Static)

FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Express + Vite Full-Stack

FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/public ./public
ENV NODE_ENV=production
EXPOSE 5000
CMD ["node", "dist/index.js"]

Python FastAPI

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

Go Gin

FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o main .

FROM alpine:latest
WORKDIR /app
COPY --from=builder /app/main .
EXPOSE 8080
CMD ["./main"]

Reference: Troubleshooting

| Problem | Solution |
|---------|----------|
| Cannot find module X | Add `RUN npm install X` to Dockerfile |
| ENOENT: no such file | Check COPY paths in Dockerfile, ensure files exist |
| Port already in use | Change `HOST_PORT` in docker-compose |
| Database connection refused | Add healthcheck + `depends_on` with condition |
| npm ci lock file error | Regenerate: `rm -rf node_modules package-lock.json && npm install` |
| Static files 404 | Set `NODE_ENV=production`, check COPY paths |
| Build works, runtime fails | Check CMD vs build output paths |
| Vite not found in production | Add vite to `dependencies` (not `devDependencies`) if imported at runtime |
| GitHub file too large | Split with `split -b 50m file.tar.gz file.tar.gz.` |
| Playwright timeout | Use `domcontentloaded` instead of `networkidle` |
| Screenshot tests flaky | Convert to content-based smoke tests |
| test.use() error | Move `test.use({ video: 'on' })` to top level, not inside describe |

Checklist

Stack Files (in stack-orchestrator)

  • stacks/<stack-name>/stack.yml
  • stacks/<stack-name>/README.md
  • container-build/cerc-<app>/build.sh
  • container-build/cerc-<app>/Dockerfile
  • container-build/cerc-<app>/scripts/start.sh (if needed)
  • compose/docker-compose-<pod>.yml

CI/CD Files (in app repo)

  • .github/workflows/ci.yml
  • playwright.config.ts
  • e2e/smoke.spec.ts
  • scripts/integration-test.sh
  • package.json has @playwright/test and test:e2e script
  • .gitignore updated for test artifacts

Testing

  • Stack builds locally with laconic-so
  • Stack deploys and app is accessible
  • Integration tests pass
  • Playwright tests pass
  • CI workflow passes on PR

Data (if applicable)

  • Large files compressed to data/*.tar.gz
  • Archives split if > 100MB
  • Startup script extracts archives
  • Original directories in .gitignore
# Stack Orchestrator Complete Guide **Purpose:** Take any repository and make it deployable via laconic-so with full CI/CD. This guide is designed to be followed by Claude Code or any AI assistant to systematically containerize and deploy any application using Laconic's stack-orchestrator infrastructure. --- ## Table of Contents 1. [Phase 1: Repository Analysis](#phase-1-repository-analysis) 2. [Phase 2: Stack Definition](#phase-2-stack-definition) 3. [Phase 3: Container Build Files](#phase-3-container-build-files) 4. [Phase 4: Docker Compose](#phase-4-docker-compose) 5. [Phase 5: Local Testing](#phase-5-local-testing) 6. [Phase 6: CI/CD Integration](#phase-6-cicd-integration) 7. [Phase 7: Data Optimization](#phase-7-data-optimization) 8. [Phase 8: PR Submission](#phase-8-pr-submission) 9. [Reference: Common Patterns](#reference-common-patterns) 10. [Reference: Troubleshooting](#reference-troubleshooting) --- ## Phase 1: Repository Analysis ### Step 1.1: Identify Tech Stack Examine these files to determine the application type: | File | Indicates | |------|-----------| | `package.json` | Node.js (check for next, vite, express, etc.) | | `requirements.txt` / `pyproject.toml` | Python | | `go.mod` | Go | | `Cargo.toml` | Rust | | `pom.xml` / `build.gradle` | Java | | `Gemfile` | Ruby | | `composer.json` | PHP | ### Step 1.2: Extract Build Information For Node.js apps, read `package.json` and find: ```bash # Key fields to extract: jq '.scripts.build' package.json # Build command jq '.scripts.start' package.json # Start command jq '.scripts.dev' package.json # Dev command (hints at framework) jq '.dependencies' package.json # Runtime dependencies jq '.devDependencies' package.json # Build dependencies ``` ### Step 1.3: Identify Port Search for port configuration: ```bash # Common patterns grep -r "PORT" --include="*.ts" --include="*.js" --include="*.env*" . grep -r "listen(" --include="*.ts" --include="*.js" . 
grep -r ":3000\|:5000\|:8080\|:4000" --include="*.ts" --include="*.js" . ``` ### Step 1.4: Identify Database Requirements ```bash # Check for database dependencies grep -i "postgres\|mysql\|mongo\|redis\|sqlite" package.json grep -r "DATABASE_URL\|DB_HOST\|MONGO_URI\|REDIS_URL" --include="*.ts" --include="*.js" --include="*.env*" . ``` ### Step 1.5: Identify Environment Variables ```bash # Find all env vars used grep -r "process.env\." --include="*.ts" --include="*.js" . | sed 's/.*process.env.\([A-Z_]*\).*/\1/' | sort -u # Or check for .env.example cat .env.example 2>/dev/null || cat .env.sample 2>/dev/null ``` ### Step 1.6: Document Findings Create analysis summary: ```yaml # Repository Analysis repo_url: github.com/org/repo tech_stack: node # node, python, go, rust, etc. framework: express # next, vite, express, fastapi, gin, etc. node_version: "20" # or python_version, go_version, etc. build: command: "npm run build" output: "dist/" # or .next/, build/, etc. start: command: "node dist/index.js" port: 5000 database: type: postgresql # postgresql, mysql, mongodb, redis, none env_var: DATABASE_URL environment_variables: required: - DATABASE_URL - NODE_ENV optional: - PORT - LOG_LEVEL static_assets: - public/ - dist/client/ ``` --- ## Phase 2: Stack Definition ### Step 2.1: Choose Stack Name - Use lowercase with hyphens: `my-app-name` - Should be descriptive but concise - Must be unique in stack-orchestrator ### Step 2.2: Create stack.yml Location: `stack-orchestrator/stack_orchestrator/data/stacks/<stack-name>/stack.yml` ```yaml version: "1.0" name: <stack-name> description: "<One-line description of the application>" repos: - github.com/<org>/<repo> containers: - cerc/<app-name> pods: - <pod-name> ``` **Rules:** - `name` must match directory name - `repos` use format `github.com/org/repo` (no https://) - `containers` must start with `cerc/` - `pods` must match docker-compose file name: `docker-compose-<pod-name>.yml` ### Step 2.3: Create README.md Location: 
`stack-orchestrator/stack_orchestrator/data/stacks/<stack-name>/README.md` ```markdown # <App Name> Stack <Brief description> ## Quick Start \`\`\`bash laconic-so --stack <stack-name> setup-repositories laconic-so --stack <stack-name> build-containers laconic-so --stack <stack-name> deploy-system up \`\`\` ## Access - Web UI: http://localhost:<PORT> - API: http://localhost:<PORT>/api ## Configuration | Variable | Description | Default | |----------|-------------|---------| | HOST_PORT | External port | <PORT> | ## Services - **<app-name>**: Main application - **<db-name>**: Database (if applicable) ``` --- ## Phase 3: Container Build Files ### Step 3.1: Create build.sh Location: `stack-orchestrator/stack_orchestrator/data/container-build/cerc-<app-name>/build.sh` ```bash #!/usr/bin/env bash # Build script for <app-name> source ${CERC_CONTAINER_BASE_DIR}/build-base.sh SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd ) docker build -t cerc/<app-name>:local \ -f ${SCRIPT_DIR}/Dockerfile \ ${build_command_args} \ ${CERC_REPO_BASE_DIR}/<repo-directory-name> ``` **Note:** `<repo-directory-name>` is the directory name after cloning (usually the repo name). ### Step 3.2: Create Dockerfile Location: `stack-orchestrator/stack_orchestrator/data/container-build/cerc-<app-name>/Dockerfile` #### Node.js Template ```dockerfile # Build stage FROM node:20-alpine AS builder WORKDIR /app # Install dependencies COPY package*.json ./ RUN npm ci # Copy source and build COPY . . 
RUN npm run build # Production stage FROM node:20-alpine AS runner WORKDIR /app # Install production dependencies only COPY package*.json ./ RUN npm ci --only=production # Copy built assets COPY --from=builder /app/dist ./dist COPY --from=builder /app/public ./public # Add any additional files needed at runtime # COPY --from=builder /app/drizzle ./drizzle ENV NODE_ENV=production ENV PORT=5000 EXPOSE 5000 CMD ["node", "dist/index.js"] ``` #### Python Template ```dockerfile FROM python:3.11-slim WORKDIR /app # Install dependencies COPY requirements.txt . RUN pip install --no-cache-dir -r requirements.txt # Copy application COPY . . ENV PYTHONUNBUFFERED=1 EXPOSE 8000 CMD ["python", "-m", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"] ``` #### Go Template ```dockerfile # Build stage FROM golang:1.21-alpine AS builder WORKDIR /app COPY go.mod go.sum ./ RUN go mod download COPY . . RUN CGO_ENABLED=0 GOOS=linux go build -o main . # Production stage FROM alpine:latest WORKDIR /app COPY --from=builder /app/main . EXPOSE 8080 CMD ["./main"] ``` ### Step 3.3: Create Startup Script (if needed) Location: `stack-orchestrator/stack_orchestrator/data/container-build/cerc-<app-name>/scripts/start.sh` ```bash #!/usr/bin/env bash set -e if [ -n "$CERC_SCRIPT_DEBUG" ]; then set -x fi echo "Starting <app-name>..." # Wait for database (if needed) if [ -n "$DATABASE_URL" ]; then echo "Waiting for database..." DB_HOST=$(echo $DATABASE_URL | sed -e 's|.*@||' -e 's|:.*||' -e 's|/.*||') DB_PORT=$(echo $DATABASE_URL | sed -e 's|.*@[^:]*:||' -e 's|/.*||') : ${DB_PORT:=5432} timeout=60 counter=0 until nc -z "$DB_HOST" "$DB_PORT" 2>/dev/null; do counter=$((counter + 1)) if [ $counter -ge $timeout ]; then echo "Error: Database not available after ${timeout}s" exit 1 fi echo "Waiting for database at ${DB_HOST}:${DB_PORT}... ($counter/$timeout)" sleep 1 done echo "Database is available!" 
fi

# Run migrations (if applicable)
if [ "${RUN_MIGRATIONS:-true}" = "true" ]; then
  echo "Running migrations..."
  npm run migrate 2>&1 || echo "Migration warning, continuing..."
fi

# Extract cached data (if applicable)
if [ -f "data/cache.tar.gz" ] && [ ! -d "cache" ]; then
  echo "Extracting cached data..."
  tar -xzf data/cache.tar.gz -C .
fi

# Start the application
echo "Starting application on port ${PORT:-5000}..."
exec node dist/index.js
```

Update Dockerfile to use startup script:

```dockerfile
# Add to Dockerfile
COPY scripts/start.sh /app/start.sh
RUN chmod +x /app/start.sh
CMD ["/app/start.sh"]
```

---

## Phase 4: Docker Compose

### Step 4.1: Create docker-compose file

Location: `stack-orchestrator/stack_orchestrator/data/compose/docker-compose-<pod-name>.yml`

#### With PostgreSQL

```yaml
version: "3.8"

services:
  <app-name>-db:
    image: postgres:15-alpine
    restart: unless-stopped
    environment:
      POSTGRES_USER: ${DB_USER:-appuser}
      POSTGRES_PASSWORD: ${DB_PASSWORD:-apppass}
      POSTGRES_DB: ${DB_NAME:-appdb}
    volumes:
      - <app-name>_db_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-appuser} -d ${DB_NAME:-appdb}"]
      interval: 10s
      timeout: 5s
      retries: 5

  <app-name>:
    image: cerc/<app-name>:local
    restart: unless-stopped
    depends_on:
      <app-name>-db:
        condition: service_healthy
    environment:
      NODE_ENV: production
      PORT: ${APP_PORT:-5000}
      DATABASE_URL: postgresql://${DB_USER:-appuser}:${DB_PASSWORD:-apppass}@<app-name>-db:5432/${DB_NAME:-appdb}
    ports:
      - "${HOST_PORT:-5000}:${APP_PORT:-5000}"
    healthcheck:
      test: ["CMD", "nc", "-z", "localhost", "${APP_PORT:-5000}"]
      interval: 30s
      timeout: 10s
      retries: 10

volumes:
  <app-name>_db_data:
```

#### Without Database

```yaml
version: "3.8"

services:
  <app-name>:
    image: cerc/<app-name>:local
    restart: unless-stopped
    environment:
      NODE_ENV: production
      PORT: ${APP_PORT:-5000}
    ports:
      - "${HOST_PORT:-5000}:${APP_PORT:-5000}"
    healthcheck:
      test: ["CMD", "nc", "-z", "localhost", "${APP_PORT:-5000}"]
      interval: 30s
      timeout: 10s
      retries: 10
```

#### With Redis

```yaml
version: "3.8"

services:
  <app-name>-redis:
    image: redis:7-alpine
    restart: unless-stopped
    volumes:
      - <app-name>_redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  <app-name>:
    image: cerc/<app-name>:local
    restart: unless-stopped
    depends_on:
      <app-name>-redis:
        condition: service_healthy
    environment:
      NODE_ENV: production
      REDIS_URL: redis://<app-name>-redis:6379
    ports:
      - "${HOST_PORT:-5000}:5000"

volumes:
  <app-name>_redis_data:
```

---

## Phase 5: Local Testing

### Step 5.1: Setup Repositories

```bash
laconic-so --stack <stack-name> setup-repositories
```

This clones repos to `~/cerc/` (or `$CERC_REPO_BASE_DIR`).

### Step 5.2: Build Containers

```bash
laconic-so --stack <stack-name> build-containers
```

Watch for errors. Common fixes:

- Missing dependencies: Add to Dockerfile
- Build script errors: Check paths in build.sh
- Permission issues: Add `chmod +x` for scripts

### Step 5.3: Deploy Stack

```bash
laconic-so --stack <stack-name> deploy-system up
```

### Step 5.4: Verify

```bash
# Check containers running
docker ps --filter "name=<app-name>"

# Check logs
laconic-so --stack <stack-name> deploy-system logs -f

# Test endpoints
curl http://localhost:<PORT>/
curl http://localhost:<PORT>/health
curl http://localhost:<PORT>/api/status
```

### Step 5.5: Debug Common Issues

```bash
# View all logs
docker logs <container-name>

# Shell into container
docker exec -it <container-name> sh

# Check database
docker exec -it <db-container> psql -U <user> -d <db>

# Stop and clean
laconic-so --stack <stack-name> deploy-system down -v
```

### Step 5.6: Iterate

Repeat build/deploy cycle until everything works:

```bash
laconic-so --stack <stack-name> deploy-system down -v
laconic-so --stack <stack-name> build-containers
laconic-so --stack <stack-name> deploy-system up
```

---

## Phase 6: CI/CD Integration

### Step 6.1: Add Playwright to package.json

```json
{
  "devDependencies": {
    "@playwright/test": "^1.40.0"
  },
  "scripts": {
    "test:e2e": "playwright test"
  }
}
```

Run `npm install` to update lock file.

### Step 6.2: Create playwright.config.ts

```typescript
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './e2e',
  fullyParallel: true,
  forbidOnly: !!process.env.CI,
  retries: process.env.CI ? 1 : 0,
  workers: process.env.CI ? 4 : undefined,
  reporter: process.env.CI ? 'github' : 'html',
  timeout: 30000,
  use: {
    baseURL: process.env.BASE_URL || 'http://localhost:<PORT>',
    trace: 'on-first-retry',
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
  },
  projects: [
    {
      name: 'chromium',
      use: { ...devices['Desktop Chrome'] },
    },
  ],
  outputDir: 'test-results/',
});
```

### Step 6.3: Create E2E Tests

Create `e2e/smoke.spec.ts`:

```typescript
import { test, expect } from '@playwright/test';

test.describe('Smoke Tests', () => {
  test('homepage loads', async ({ page }) => {
    await page.goto('/');
    await page.waitForLoadState('domcontentloaded');
    await expect(page.locator('body')).not.toBeEmpty();
  });

  test('API health check', async ({ request }) => {
    const response = await request.get('/health');
    expect(response.ok()).toBeTruthy();
  });
});

test.describe('Page Tests', () => {
  const routes = [
    '/',
    '/about',
    // Add your routes
  ];

  for (const route of routes) {
    test(`${route} loads without error`, async ({ page }) => {
      const errors: string[] = [];
      page.on('pageerror', (err) => errors.push(err.message));

      await page.goto(route);
      await page.waitForLoadState('domcontentloaded');

      const criticalErrors = errors.filter(
        (e) => !e.includes('ResizeObserver')
      );
      expect(criticalErrors).toHaveLength(0);
    });
  }
});
```

### Step 6.4: Create Integration Test Script

Create `scripts/integration-test.sh`:

```bash
#!/usr/bin/env bash
set +e

BASE_URL="${BASE_URL:-http://localhost:<PORT>}"
PASSED=0
FAILED=0

RED='\033[0;31m'
GREEN='\033[0;32m'
NC='\033[0m'

log_pass() { echo -e "${GREEN}[PASS]${NC} $1"; PASSED=$((PASSED + 1)); }
log_fail() { echo -e "${RED}[FAIL]${NC} $1"; FAILED=$((FAILED + 1)); }

test_endpoint() {
  local name="$1"
  local endpoint="$2"
  local expected="${3:-200}"

  status=$(curl -s -o /dev/null -w "%{http_code}" --max-time 30 "$BASE_URL$endpoint")

  if [ "$status" = "$expected" ]; then
    log_pass "$name (HTTP $status)"
  else
    log_fail "$name (expected $expected, got $status)"
  fi
}

echo "========================================"
echo "Integration Tests - $BASE_URL"
echo "========================================"

# Frontend
test_endpoint "Homepage" "/"
test_endpoint "Static Assets" "/favicon.ico"

# API
test_endpoint "Health Check" "/health"
test_endpoint "API Status" "/api/status"

# Error handling
test_endpoint "404 Handler" "/nonexistent" "404"

echo "========================================"
echo "Results: $PASSED passed, $FAILED failed"
echo "========================================"

[ $FAILED -eq 0 ] && exit 0 || exit 1
```

### Step 6.5: Create GitHub Workflow

Create `.github/workflows/ci.yml`:

```yaml
name: CI/CD

on:
  push:
    branches: [main]
    tags: ['v*']
  pull_request:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ghcr.io/${{ github.repository_owner }}/<app-name>

jobs:
  build-test:
    name: Build & Test via laconic-so
    runs-on: ubuntu-latest
    timeout-minutes: 45
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          path: <RepoName>

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install Playwright
        run: |
          cd <RepoName>
          npm ci
          npx playwright install --with-deps chromium

      - name: Install laconic-so
        run: |
          pip install git+https://git.vdb.to/cerc-io/stack-orchestrator.git@main
          laconic-so version

      - name: Setup repositories
        run: |
          mkdir -p ${{ github.workspace }}/repos
          cp -r ${{ github.workspace }}/<RepoName> ${{ github.workspace }}/repos/<RepoName>
          cd ${{ github.workspace }}/repos/<RepoName>
          git config user.email "ci@example.com"
          git config user.name "CI"
          git init && git add -A && git commit -m "CI" --allow-empty
          laconic-so --stack <stack-name> setup-repositories
        env:
          CERC_REPO_BASE_DIR: ${{ github.workspace }}/repos

      - name: Build containers
        run: laconic-so --stack <stack-name> build-containers
        env:
          CERC_REPO_BASE_DIR: ${{ github.workspace }}/repos

      - name: Deploy stack
        run: laconic-so --stack <stack-name> deploy-system up
        env:
          CERC_REPO_BASE_DIR: ${{ github.workspace }}/repos

      - name: Wait for healthy
        run: |
          timeout 300 bash -c '
            until curl -sf http://localhost:<PORT>/health > /dev/null 2>&1; do
              sleep 3
            done
          '

      - name: Integration tests
        run: |
          cd <RepoName>
          chmod +x scripts/integration-test.sh
          ./scripts/integration-test.sh
        env:
          BASE_URL: http://localhost:<PORT>

      - name: Playwright tests
        run: |
          cd <RepoName>
          npm run test:e2e
        env:
          BASE_URL: http://localhost:<PORT>

      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-results
          path: |
            <RepoName>/playwright-report/
            <RepoName>/test-results/
          retention-days: 7

      - name: Logs on failure
        if: failure()
        run: laconic-so --stack <stack-name> deploy-system logs
        env:
          CERC_REPO_BASE_DIR: ${{ github.workspace }}/repos

      - name: Cleanup
        if: always()
        run: laconic-so --stack <stack-name> deploy-system down -v || true
        env:
          CERC_REPO_BASE_DIR: ${{ github.workspace }}/repos

      - name: Login to GHCR
        if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/'))
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Push image
        if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/'))
        run: |
          SOURCE="cerc/<app-name>:local"
          if [[ "${{ github.ref }}" == refs/tags/v* ]]; then
            docker tag $SOURCE ${{ env.IMAGE_NAME }}:${{ github.ref_name }}
            docker tag $SOURCE ${{ env.IMAGE_NAME }}:latest
            docker push ${{ env.IMAGE_NAME }}:${{ github.ref_name }}
            docker push ${{ env.IMAGE_NAME }}:latest
          else
            docker tag $SOURCE ${{ env.IMAGE_NAME }}:main
            docker push ${{ env.IMAGE_NAME }}:main
          fi
```

### Step 6.6: Update .gitignore

```gitignore
# Playwright
test-results/
playwright-report/
playwright/.cache/
```

---

## Phase 7: Data Optimization

For repos with large cached data (images, JSON files, etc.):

### Step 7.1: Identify Large Data

```bash
# Find large directories
du -sh */ | sort -hr | head -10

# Count files in directories
find . -type f -not -path "./node_modules/*" -not -path "./.git/*" | cut -d/ -f2 | sort | uniq -c | sort -rn
```

### Step 7.2: Compress Data

```bash
mkdir -p data

# Compress directory
tar -czf data/cache-name.tar.gz -C . cache-directory/

# Check size
ls -lh data/

# If > 100MB, split for GitHub
if [ $(stat -f%z data/cache-name.tar.gz 2>/dev/null || stat -c%s data/cache-name.tar.gz) -gt 100000000 ]; then
  split -b 50m data/cache-name.tar.gz data/cache-name.tar.gz.
  rm data/cache-name.tar.gz
fi
```

### Step 7.3: Update .gitignore

```gitignore
# Cached data (extracted at startup)
cache-directory/
image-cache/

# Track archives
*.tar.gz
!data/*.tar.gz
!data/*.tar.gz.*
```

### Step 7.4: Remove from Git

```bash
git rm -r --cached cache-directory/
git add data/*.tar.gz* .gitignore
git commit -m "refactor: Compress cached data into archives"
```

### Step 7.5: Update Startup Script

Add extraction to startup script:

```bash
# Single archive
if [ -f "data/cache.tar.gz" ] && [ ! -d "cache-directory" ]; then
  echo "Extracting cache..."
  tar -xzf data/cache.tar.gz -C .
fi

# Split archives
if [ -f "data/cache.tar.gz.aa" ] && [ ! -d "cache-directory" ]; then
  echo "Extracting split archives..."
  cat data/cache.tar.gz.* | tar -xzf - -C .
fi
```

---

## Phase 8: PR Submission

### Step 8.1: Stack PR to stack-orchestrator

Create branch and PR to `cerc-io/stack-orchestrator`:

```bash
cd ~/cerc/stack-orchestrator  # or wherever cloned
git checkout -b feat/<stack-name>-stack

# Files to add:
# - stack_orchestrator/data/stacks/<stack-name>/stack.yml
# - stack_orchestrator/data/stacks/<stack-name>/README.md
# - stack_orchestrator/data/container-build/cerc-<app-name>/build.sh
# - stack_orchestrator/data/container-build/cerc-<app-name>/Dockerfile
# - stack_orchestrator/data/container-build/cerc-<app-name>/scripts/start.sh (if needed)
# - stack_orchestrator/data/compose/docker-compose-<pod-name>.yml

git add .
git commit -m "feat: Add <app-name> stack"
git push origin feat/<stack-name>-stack
```

### Step 8.2: CI/CD PR to App Repository

Create branch and PR to the application repository:

```bash
cd ~/cerc/<RepoName>
git checkout -b feat/ci-cd

# Files to add:
# - .github/workflows/ci.yml
# - playwright.config.ts
# - e2e/smoke.spec.ts
# - scripts/integration-test.sh
# - data/*.tar.gz* (if compressed data)

git add .
git commit -m "feat: Add laconic-so CI/CD pipeline"
git push origin feat/ci-cd
```

### Step 8.3: Update CI Workflow Reference

Once stack PR is merged, update CI workflow:

```yaml
# Change from branch reference:
pip install git+https://git.vdb.to/cerc-io/stack-orchestrator.git@feat/<stack-name>-stack

# To main:
pip install git+https://git.vdb.to/cerc-io/stack-orchestrator.git@main
```

---

## Reference: Common Patterns

### Next.js App

```dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
ENV NODE_ENV=production
EXPOSE 3000
CMD ["node", "server.js"]
```

### Vite/React App (Static)

```dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

### Express + Vite Full-Stack

```dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/public ./public
ENV NODE_ENV=production
EXPOSE 5000
CMD ["node", "dist/index.js"]
```

### Python FastAPI

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

### Go Gin

```dockerfile
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o main .

FROM alpine:latest
WORKDIR /app
COPY --from=builder /app/main .
EXPOSE 8080
CMD ["./main"]
```

---

## Reference: Troubleshooting

| Problem | Solution |
|---------|----------|
| `Cannot find module X` | Add `RUN npm install X` to Dockerfile |
| `ENOENT: no such file` | Check COPY paths in Dockerfile, ensure files exist |
| Port already in use | Change `HOST_PORT` in docker-compose |
| Database connection refused | Add healthcheck + depends_on with condition |
| `npm ci` lock file error | Regenerate: `rm -rf node_modules package-lock.json && npm install` |
| Static files 404 | Set `NODE_ENV=production`, check COPY paths |
| Build works, runtime fails | Check CMD vs build output paths |
| Vite not found in production | Add vite to dependencies (not devDependencies) if imported at runtime |
| GitHub file too large | Split with `split -b 50m file.tar.gz file.tar.gz.` |
| Playwright timeout | Use `domcontentloaded` instead of `networkidle` |
| Screenshot tests flaky | Convert to content-based smoke tests |
| `test.use()` error | Move `test.use({ video: 'on' })` to top level, not inside describe |

---

## Checklist

### Stack Files (in stack-orchestrator)

- [ ] `stacks/<stack-name>/stack.yml`
- [ ] `stacks/<stack-name>/README.md`
- [ ] `container-build/cerc-<app>/build.sh`
- [ ] `container-build/cerc-<app>/Dockerfile`
- [ ] `container-build/cerc-<app>/scripts/start.sh` (if needed)
- [ ] `compose/docker-compose-<pod>.yml`

### CI/CD Files (in app repo)

- [ ] `.github/workflows/ci.yml`
- [ ] `playwright.config.ts`
- [ ] `e2e/smoke.spec.ts`
- [ ] `scripts/integration-test.sh`
- [ ] `package.json` has `@playwright/test` and `test:e2e` script
- [ ] `.gitignore` updated for test artifacts

### Testing

- [ ] Stack builds locally with `laconic-so`
- [ ] Stack deploys and app is accessible
- [ ] Integration tests pass
- [ ] Playwright tests pass
- [ ] CI workflow passes on PR

### Data (if applicable)

- [ ] Large files compressed to `data/*.tar.gz`
- [ ] Archives split if > 100MB
- [ ] Startup script extracts archives
- [ ] Original directories in `.gitignore`
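The Phase 7 split-archive flow (compress, split, reassemble, extract) can be sanity-checked end to end before committing real data. This is a minimal sketch in a throwaway temp directory; the `cache-directory` and `item.txt` names are hypothetical placeholders, and tiny 1k chunks stand in for the real 50m split size:

```shell
# Round-trip check: compress, split, delete original, reassemble, extract, compare.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Fake "large cached data"
mkdir cache-directory
echo "sample payload" > cache-directory/item.txt

# Step 7.2: compress and split (1k chunks so the demo archive actually splits)
tar -czf cache.tar.gz cache-directory/
split -b 1k cache.tar.gz cache.tar.gz.
rm cache.tar.gz

# Step 7.5: reassemble the parts and extract
mkdir extracted
cat cache.tar.gz.* | tar -xzf - -C extracted

# Verify the extracted file matches the original
diff cache-directory/item.txt extracted/cache-directory/item.txt && echo "round-trip OK"
```

If the final `diff` is silent and "round-trip OK" prints, the split archives reassemble losslessly and the startup-script extraction logic is safe to adopt.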
Reference: cerc-io/stack-orchestrator#976