Merge branch 'main' into zach/mars
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 57s
Deploy Test / Run deploy test suite (pull_request) Successful in 7m3s
Webapp Test / Run webapp test suite (pull_request) Successful in 5m43s
Smoke Test / Run basic test suite (pull_request) Successful in 6m5s

David Boreham 2024-02-08 19:52:49 +00:00
commit f914baa913
52 changed files with 943 additions and 293 deletions

View File

@@ -6,6 +6,8 @@ on:
   paths:
     - '!**'
     - '.gitea/workflows/triggers/fixturenet-eth-plugeth-test'
+  schedule: # Note: coordinate with other tests to not overload runners at the same time of day
+    - cron: '2 14 * * *'
 
 # Needed until we can incorporate docker startup into the executor container
 env:

.gitea/workflows/lint.yml Normal file
View File

@@ -0,0 +1,21 @@
+name: Lint Checks
+on:
+  pull_request:
+    branches: '*'
+  push:
+    branches: '*'
+
+jobs:
+  test:
+    name: "Run linter"
+    runs-on: ubuntu-latest
+    steps:
+      - name: "Clone project repository"
+        uses: actions/checkout@v3
+      - name: "Install Python"
+        uses: actions/setup-python@v4
+        with:
+          python-version: '3.8'
+      - name: "Run flake8"
+        uses: py-actions/flake8@v2

View File

@@ -5,12 +5,16 @@ on:
     branches: '*'
   paths:
     - '!**'
-    - '.gitea/workflows/triggers/fixturenet-laconicd-test'
+    - '.gitea/workflows/triggers/test-k8s-deploy'
+    - '.gitea/workflows/test-k8s-deploy.yml'
+    - 'tests/k8s-deploy/run-deploy-test.sh'
+  schedule: # Note: coordinate with other tests to not overload runners at the same time of day
+    - cron: '3 15 * * *'
 
 jobs:
   test:
     name: "Run deploy test suite on kind/k8s"
-    runs-on: ubuntu-22.04-with-syn-ethdb
+    runs-on: ubuntu-22.04
     steps:
       - name: "Clone project repository"
         uses: actions/checkout@v3
@@ -34,9 +38,15 @@ jobs:
         run: ./scripts/create_build_tag_file.sh
       - name: "Build local shiv package"
        run: ./scripts/build_shiv_package.sh
+      - name: "Check cgroups version"
+        run: mount | grep cgroup
       - name: "Install kind"
         run: ./tests/scripts/install-kind.sh
       - name: "Install Kubectl"
         run: ./tests/scripts/install-kubectl.sh
       - name: "Run k8s deployment test"
-        run: ./tests/k8s-deploy/run-deploy-test.sh
+        run: |
+          source /opt/bash-utils/cgroup-helper.sh
+          join_cgroup
+          ./tests/k8s-deploy/run-deploy-test.sh

View File

@@ -1,2 +1,3 @@
 Change this file to trigger running the fixturenet-eth-plugeth-test CI job
 trigger
+trigger

View File

@@ -1 +1,2 @@
 Change this file to trigger running the test-k8s-deploy CI job
+Trigger test on PR branch

View File

@@ -29,10 +29,10 @@ chmod +x ~/.docker/cli-plugins/docker-compose
 Next decide on a directory where you would like to put the stack-orchestrator program. Typically this would be
 a "user" binary directory such as `~/bin` or perhaps `/usr/local/laconic` or possibly just the current working directory.
 
-Now, having selected that directory, download the latest release from [this page](https://github.com/cerc-io/stack-orchestrator/tags) into it (we're using `~/bin` below for concreteness but edit to suit if you selected a different directory). Also be sure that the destination directory exists and is writable:
+Now, having selected that directory, download the latest release from [this page](https://git.vdb.to/cerc-io/stack-orchestrator/tags) into it (we're using `~/bin` below for concreteness but edit to suit if you selected a different directory). Also be sure that the destination directory exists and is writable:
 
 ```bash
-curl -L -o ~/bin/laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
+curl -L -o ~/bin/laconic-so https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
 ```
 
 Give it execute permissions:
@@ -52,7 +52,7 @@ Version: 1.1.0-7a607c2-202304260513
 Save the distribution url to `~/.laconic-so/config.yml`:
 
 ```bash
 mkdir ~/.laconic-so
-echo "distribution-url: https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so" > ~/.laconic-so/config.yml
+echo "distribution-url: https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so" > ~/.laconic-so/config.yml
 ```
 
 ### Update

View File

@@ -26,7 +26,7 @@ In addition to the pre-requisites listed in the [README](/README.md), the follow
 
 1. Clone this repository:
    ```
-   $ git clone https://github.com/cerc-io/stack-orchestrator.git
+   $ git clone https://git.vdb.to/cerc-io/stack-orchestrator.git
    ```
 
 2. Enter the project directory:

View File

@@ -1,10 +1,10 @@
 # Adding a new stack
 
-See [this PR](https://github.com/cerc-io/stack-orchestrator/pull/434) for an example of how to currently add a minimal stack to stack orchestrator. The [reth stack](https://github.com/cerc-io/stack-orchestrator/pull/435) is another good example.
+See [this PR](https://git.vdb.to/cerc-io/stack-orchestrator/pull/434) for an example of how to currently add a minimal stack to stack orchestrator. The [reth stack](https://git.vdb.to/cerc-io/stack-orchestrator/pull/435) is another good example.
 
 For external developers, we recommend forking this repo and adding your stack directly to your fork. This initially requires running in "developer mode" as described [here](/docs/CONTRIBUTING.md). Check out the [Namada stack](https://github.com/vknowable/stack-orchestrator/blob/main/app/data/stacks/public-namada/digitalocean_quickstart.md) from Knowable to see how that is done.
 
-Core to the feature completeness of stack orchestrator is to [decouple the tool functionality from payload](https://github.com/cerc-io/stack-orchestrator/issues/315) which will no longer require forking to add a stack.
+Core to the feature completeness of stack orchestrator is to [decouple the tool functionality from payload](https://git.vdb.to/cerc-io/stack-orchestrator/issues/315) which will no longer require forking to add a stack.
 
 ## Example

View File

@@ -1,6 +1,6 @@
 # Specification
 
-Note: this page is out of date (but still useful) - it will no longer be useful once stacks are [decoupled from the tool functionality](https://github.com/cerc-io/stack-orchestrator/issues/315).
+Note: this page is out of date (but still useful) - it will no longer be useful once stacks are [decoupled from the tool functionality](https://git.vdb.to/cerc-io/stack-orchestrator/issues/315).
 
 ## Implementation

View File

@@ -10,3 +10,4 @@ pydantic==1.10.9
 tomli==2.0.1
 validators==0.22.0
 kubernetes>=28.1.0
+humanfriendly>=10.0
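
`humanfriendly` is presumably pulled in for the new spec resource handling further down in this commit, which parses human-readable size strings into byte counts (note the division by 1000 * 1000 when formatting memory back out). For example:

```python
import humanfriendly

humanfriendly.parse_size("200M")   # 200000000 (decimal units by default)
humanfriendly.parse_size("1 GiB")  # 1073741824 (explicit binary suffix)
```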

View File

@@ -41,4 +41,4 @@ runcmd:
   - apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
   - systemctl enable docker
   - systemctl start docker
-  - git clone https://github.com/cerc-io/stack-orchestrator.git /home/ubuntu/stack-orchestrator
+  - git clone https://git.vdb.to/cerc-io/stack-orchestrator.git /home/ubuntu/stack-orchestrator

View File

@@ -31,5 +31,5 @@ runcmd:
   - apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
   - systemctl enable docker
   - systemctl start docker
-  - curl -L -o /usr/local/bin/laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
+  - curl -L -o /usr/local/bin/laconic-so https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
   - chmod +x /usr/local/bin/laconic-so

View File

@@ -137,7 +137,7 @@ fi
 echo "**************************************************************************************"
 echo "Installing laconic-so"
 # install latest `laconic-so`
-distribution_url=https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
+distribution_url=https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
 install_filename=${install_dir}/laconic-so
 mkdir -p ${install_dir}
 curl -L -o ${install_filename} ${distribution_url}

View File

@@ -13,7 +13,7 @@ setup(
     description='Orchestrates deployment of the Laconic stack',
     long_description=long_description,
     long_description_content_type="text/markdown",
-    url='https://github.com/cerc-io/stack-orchestrator',
+    url='https://git.vdb.to/cerc-io/stack-orchestrator',
     py_modules=['stack_orchestrator'],
     packages=find_packages(),
     install_requires=[requirements],

View File

@@ -33,6 +33,7 @@ from stack_orchestrator.base import get_npm_registry_url
 
 # TODO: find a place for this
 #    epilog="Config provided either in .env or settings.ini or env vars: CERC_REPO_BASE_DIR (defaults to ~/cerc)"
+
 def make_container_build_env(dev_root_path: str,
                              container_build_dir: str,
                              debug: bool,
@@ -104,6 +105,9 @@ def process_container(stack: str,
             build_command = os.path.join(container_build_dir,
                                          "default-build.sh") + f" {default_container_tag} {repo_dir_or_build_dir}"
     if not dry_run:
+        # No PATH at all causes failures with podman.
+        if "PATH" not in container_build_env:
+            container_build_env["PATH"] = os.environ["PATH"]
        if verbose:
            print(f"Executing: {build_command} with environment: {container_build_env}")
        build_result = subprocess.run(build_command, shell=True, env=container_build_env)
@@ -119,6 +123,7 @@ def process_container(stack: str,
     else:
         print("Skipped")
 
+
 @click.command()
 @click.option('--include', help="only build these containers")
 @click.option('--exclude', help="don\'t build these containers")
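
The `PATH` guard matters because a dict passed as `env=` to `subprocess.run` replaces the inherited environment entirely, so commands invoked by the build script can no longer be located unless `PATH` is carried over explicitly. A minimal sketch of the failure mode and the fix, using the same names as the diff:

```python
import os
import subprocess

def run_build(build_command: str, container_build_env: dict):
    # An env dict without PATH gives the shell an empty search path;
    # podman's build wrappers then fail to find their own binaries.
    if "PATH" not in container_build_env:
        container_build_env["PATH"] = os.environ["PATH"]
    return subprocess.run(build_command, shell=True, env=container_build_env)
```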

View File

@@ -25,10 +25,11 @@ from decouple import config
 import click
 from pathlib import Path
 from stack_orchestrator.build import build_containers
+from stack_orchestrator.deploy.webapp.util import determine_base_container
 
 
 @click.command()
-@click.option('--base-container', default="cerc/nextjs-base")
+@click.option('--base-container')
 @click.option('--source-repo', help="directory containing the webapp to build", required=True)
 @click.option("--force-rebuild", is_flag=True, default=False, help="Override dependency checking -- always rebuild")
 @click.option("--extra-build-args", help="Supply extra arguments to build")
@@ -57,6 +58,9 @@ def command(ctx, base_container, source_repo, force_rebuild, extra_build_args, t
     if not quiet:
         print(f'Dev Root is: {dev_root_path}')
 
+    if not base_container:
+        base_container = determine_base_container(source_repo)
+
     # First build the base container.
     container_build_env = build_containers.make_container_build_env(dev_root_path, container_build_dir, debug,
                                                                     force_rebuild, extra_build_args)
@@ -64,7 +68,6 @@ def command(ctx, base_container, source_repo, force_rebuild, extra_build_args, t
     build_containers.process_container(None, base_container, container_build_dir, container_build_env, dev_root_path, quiet,
                                        verbose, dry_run, continue_on_error)
 
     # Now build the target webapp. We use the same build script, but with a different Dockerfile and work dir.
-
     container_build_env["CERC_WEBAPP_BUILD_RUNNING"] = "true"
     container_build_env["CERC_CONTAINER_BUILD_WORK_DIR"] = os.path.abspath(source_repo)
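
With `--base-container` now optional, the build infers the base image from the webapp source when the flag is omitted. The real `determine_base_container` lives in `stack_orchestrator.deploy.webapp.util` and is not shown in this diff; a purely hypothetical sketch of the kind of heuristic it might use:

```python
from pathlib import Path

def determine_base_container(source_repo: str) -> str:
    # Hypothetical: Next.js projects get the Next.js base image,
    # everything else falls back to the generic webapp base.
    repo = Path(source_repo)
    if (repo / "next.config.js").exists():
        return "cerc/nextjs-base"
    return "cerc/webapp-base"
```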

View File

@@ -5,10 +5,13 @@ services:
     environment:
       CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
       CERC_TEST_PARAM_1: ${CERC_TEST_PARAM_1:-FAILED}
+      CERC_TEST_PARAM_2: "CERC_TEST_PARAM_2_VALUE"
     volumes:
       - test-data:/data
+      - test-config:/config:ro
     ports:
       - "80"
 
 volumes:
   test-data:
+  test-config:

View File

@@ -30,13 +30,13 @@ RUN \
 
 # [Optional] Uncomment this section to install additional OS packages.
 RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
-    && apt-get -y install --no-install-recommends jq gettext-base
+    && apt-get -y install --no-install-recommends jq gettext-base procps
 
 # [Optional] Uncomment if you want to install more global node modules
 # RUN su node -c "npm install -g <your-package-list-here>"
 
 # Expose port for http
-EXPOSE 3000
+EXPOSE 80
 
 COPY /scripts /scripts

View File

@@ -58,4 +58,4 @@ if [ "$CERC_NEXTJS_SKIP_GENERATE" != "true" ]; then
   fi
 fi
 
-$CERC_BUILD_TOOL start . -p ${CERC_LISTEN_PORT:-3000}
+$CERC_BUILD_TOOL start . -- -p ${CERC_LISTEN_PORT:-80}

View File

@@ -17,5 +17,23 @@ fi
 if [ -n "$CERC_TEST_PARAM_1" ]; then
   echo "Test-param-1: ${CERC_TEST_PARAM_1}"
 fi
+if [ -n "$CERC_TEST_PARAM_2" ]; then
+  echo "Test-param-2: ${CERC_TEST_PARAM_2}"
+fi
+
+if [ -d "/config" ]; then
+  echo "/config: EXISTS"
+  for f in /config/*; do
+    if [[ -f "$f" ]] || [[ -L "$f" ]]; then
+      echo "$f:"
+      cat "$f"
+      echo ""
+      echo ""
+    fi
+  done
+else
+  echo "/config: does NOT EXIST"
+fi
+
 # Run nginx which will block here forever
 /usr/sbin/nginx -g "daemon off;"

View File

@@ -1,6 +1,6 @@
 # Originally from: https://github.com/devcontainers/images/blob/main/src/javascript-node/.devcontainer/Dockerfile
 # [Choice] Node.js version (use -bullseye variants on local arm64/Apple Silicon): 18, 16, 14, 18-bullseye, 16-bullseye, 14-bullseye, 18-buster, 16-buster, 14-buster
-ARG VARIANT=18-bullseye
+ARG VARIANT=20-bullseye
 FROM node:${VARIANT}
 
 ARG USERNAME=node
@@ -28,7 +28,7 @@ RUN \
 
 # [Optional] Uncomment this section to install additional OS packages.
 RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
-    && apt-get -y install --no-install-recommends jq
+    && apt-get -y install --no-install-recommends jq gettext-base
 
 # [Optional] Uncomment if you want to install an additional version of node using nvm
 # ARG EXTRA_NODE_VERSION=10
@@ -37,9 +37,7 @@ RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
 
 # We do this to get a yq binary from the published container, for the correct architecture we're building here
 COPY --from=docker.io/mikefarah/yq:latest /usr/bin/yq /usr/local/bin/yq
 
-RUN mkdir -p /scripts
-COPY ./apply-webapp-config.sh /scripts
-COPY ./start-serving-app.sh /scripts
+COPY scripts /scripts
 
 # [Optional] Uncomment if you want to install more global node modules
 # RUN su node -c "npm install -g <your-package-list-here>"

View File

@@ -0,0 +1,11 @@
+FROM cerc/webapp-base:local as builder
+ARG CERC_BUILD_TOOL
+
+WORKDIR /app
+COPY . .
+RUN rm -rf node_modules build .next*
+RUN /scripts/build-app.sh /app build /data
+
+FROM cerc/webapp-base:local
+COPY --from=builder /data /data

View File

@@ -1,9 +1,29 @@
 #!/usr/bin/env bash
-# Build cerc/laconic-registry-cli
+# Build cerc/webapp-base
 
 source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
 
 # See: https://stackoverflow.com/a/246128/1701505
 SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
 
-docker build -t cerc/webapp-base:local ${build_command_args} -f ${SCRIPT_DIR}/Dockerfile ${SCRIPT_DIR}
+CERC_CONTAINER_BUILD_WORK_DIR=${CERC_CONTAINER_BUILD_WORK_DIR:-$SCRIPT_DIR}
+CERC_CONTAINER_BUILD_DOCKERFILE=${CERC_CONTAINER_BUILD_DOCKERFILE:-$SCRIPT_DIR/Dockerfile}
+CERC_CONTAINER_BUILD_TAG=${CERC_CONTAINER_BUILD_TAG:-cerc/webapp-base:local}
+
+docker build -t $CERC_CONTAINER_BUILD_TAG ${build_command_args} -f $CERC_CONTAINER_BUILD_DOCKERFILE $CERC_CONTAINER_BUILD_WORK_DIR
+
+if [ $? -eq 0 ] && [ "$CERC_CONTAINER_BUILD_TAG" != "cerc/webapp-base:local" ]; then
+  cat <<EOF
+#################################################################
+
+Built host container for $CERC_CONTAINER_BUILD_WORK_DIR with tag:
+
+    $CERC_CONTAINER_BUILD_TAG
+
+To test locally run:
+
+    laconic-so run-webapp --image $CERC_CONTAINER_BUILD_TAG --env-file /path/to/environment.env
+
+EOF
+fi

View File

@@ -0,0 +1,33 @@
+#!/bin/bash
+
+if [ -n "$CERC_SCRIPT_DEBUG" ]; then
+  set -x
+fi
+
+WORK_DIR="${1:-./}"
+
+cd "${WORK_DIR}" || exit 1
+
+if [ -f ".env" ]; then
+  TMP_ENV=`mktemp`
+  declare -px > $TMP_ENV
+  set -a
+  source .env
+  source $TMP_ENV
+  set +a
+  rm -f $TMP_ENV
+fi
+
+for f in $(find . -regex ".*.[tj]sx?$" -type f | grep -v 'node_modules'); do
+  for e in $(cat "${f}" | tr -s '[:blank:]' '\n' | tr -s '[{},();]' '\n' | egrep -o -e '^"CERC_RUNTIME_ENV_[^\"]+"' -e '^"LACONIC_HOSTED_CONFIG_[^\"]+"'); do
+    orig_name=$(echo -n "${e}" | sed 's/"//g')
+    cur_name=$(echo -n "${orig_name}" | sed 's/CERC_RUNTIME_ENV_//g')
+    cur_val=$(echo -n "\$${cur_name}" | envsubst)
+    if [ "$CERC_RETAIN_ENV_QUOTES" != "true" ]; then
+      cur_val=$(sed "s/^[\"']//" <<< "$cur_val" | sed "s/[\"']//")
+    fi
+    esc_val=$(sed 's/[&/\]/\\&/g' <<< "$cur_val")
+    echo "$f: $cur_name=$cur_val"
+    sed -i "s/$orig_name/$esc_val/g" $f
+  done
+done

View File

@@ -0,0 +1,36 @@
+#!/bin/bash
+
+SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
+
+if [ -n "$CERC_SCRIPT_DEBUG" ]; then
+  set -x
+fi
+
+CERC_BUILD_TOOL="${CERC_BUILD_TOOL}"
+
+WORK_DIR="${1:-/app}"
+OUTPUT_DIR="${2:-build}"
+DEST_DIR="${3:-/data}"
+
+if [ -f "${WORK_DIR}/package.json" ]; then
+  echo "Building node-based webapp ..."
+  cd "${WORK_DIR}" || exit 1
+
+  if [ -z "$CERC_BUILD_TOOL" ]; then
+    if [ -f "yarn.lock" ]; then
+      CERC_BUILD_TOOL=yarn
+    else
+      CERC_BUILD_TOOL=npm
+    fi
+  fi
+
+  $CERC_BUILD_TOOL install || exit 1
+  $CERC_BUILD_TOOL build || exit 1
+
+  rm -rf "${DEST_DIR}"
+  mv "${WORK_DIR}/${OUTPUT_DIR}" "${DEST_DIR}"
+else
+  echo "Copying static app ..."
+  mv "${WORK_DIR}" "${DEST_DIR}"
+fi
+
+exit 0

View File

@@ -0,0 +1,15 @@
+#!/usr/bin/env bash
+if [ -n "$CERC_SCRIPT_DEBUG" ]; then
+  set -x
+fi
+
+CERC_WEBAPP_FILES_DIR="${CERC_WEBAPP_FILES_DIR:-/data}"
+CERC_ENABLE_CORS="${CERC_ENABLE_CORS:-false}"
+
+if [ "true" == "$CERC_ENABLE_CORS" ]; then
+  CERC_HTTP_EXTRA_ARGS="$CERC_HTTP_EXTRA_ARGS --cors"
+fi
+
+/scripts/apply-webapp-config.sh /config/config.yml ${CERC_WEBAPP_FILES_DIR}
+/scripts/apply-runtime-env.sh ${CERC_WEBAPP_FILES_DIR}
+http-server $CERC_HTTP_EXTRA_ARGS -p ${CERC_LISTEN_PORT:-80} ${CERC_WEBAPP_FILES_DIR}

View File

@@ -1,9 +0,0 @@
-#!/usr/bin/env bash
-if [ -n "$CERC_SCRIPT_DEBUG" ]; then
-  set -x
-fi
-
-CERC_WEBAPP_FILES_DIR="${CERC_WEBAPP_FILES_DIR:-/data}"
-
-/scripts/apply-webapp-config.sh /config/config.yml ${CERC_WEBAPP_FILES_DIR}
-http-server -p 80 ${CERC_WEBAPP_FILES_DIR}

View File

@@ -0,0 +1,9 @@
+#!/usr/bin/env bash
+# Build cerc/webapp-deployer-backend
+
+source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
+
+# See: https://stackoverflow.com/a/246128/1701505
+SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
+
+docker build -t cerc/webapp-deployer-backend:local ${build_command_args} ${CERC_REPO_BASE_DIR}/webapp-deployment-status-api

View File

@@ -1,6 +1,6 @@
 # fixturenet-eth
 
-Instructions for deploying a local a geth + lighthouse blockchain "fixturenet" for development and testing purposes using laconic-stack-orchestrator (the installation of which is covered [here](https://github.com/cerc-io/stack-orchestrator)):
+Instructions for deploying a local geth + lighthouse blockchain "fixturenet" for development and testing purposes using laconic-stack-orchestrator (the installation of which is covered [here](https://git.vdb.to/cerc-io/stack-orchestrator)):
 
 ## Clone required repositories

View File

@@ -7,11 +7,11 @@ Instructions for deploying a local Laconic blockchain "fixturenet" for development
 
 **Note:** For building some NPMs, access to the @lirewine repositories is required. If you don't have access, see [this tutorial](/docs/laconicd-fixturenet.md) to run this stack
 
 ## 1. Install Laconic Stack Orchestrator
-Installation is covered in detail [here](https://github.com/cerc-io/stack-orchestrator#user-mode) but if you're on Linux and already have docker installed it should be as simple as:
+Installation is covered in detail [here](https://git.vdb.to/cerc-io/stack-orchestrator#user-mode) but if you're on Linux and already have docker installed it should be as simple as:
 ```
 $ mkdir my-working-dir
 $ cd my-working-dir
-$ curl -L -o ./laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
+$ curl -L -o ./laconic-so https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
 $ chmod +x ./laconic-so
 $ export PATH=$PATH:$(pwd)  # Or move laconic-so to ~/bin or your favorite on-path directory
 ```

View File

@@ -3,11 +3,11 @@
 Instructions for deploying a local Laconic blockchain "fixturenet" for development and testing purposes using laconic-stack-orchestrator.
 
 ## 1. Install Laconic Stack Orchestrator
-Installation is covered in detail [here](https://github.com/cerc-io/stack-orchestrator#user-mode) but if you're on Linux and already have docker installed it should be as simple as:
+Installation is covered in detail [here](https://git.vdb.to/cerc-io/stack-orchestrator#user-mode) but if you're on Linux and already have docker installed it should be as simple as:
 ```
 $ mkdir my-working-dir
 $ cd my-working-dir
-$ curl -L -o ./laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
+$ curl -L -o ./laconic-so https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
 $ chmod +x ./laconic-so
 $ export PATH=$PATH:$(pwd)  # Or move laconic-so to ~/bin or your favorite on-path directory
 ```

View File

@@ -18,7 +18,7 @@ $ laconic-so --stack mainnet-eth build-containers
 
 ```
 $ laconic-so --stack mainnet-eth deploy init --map-ports-to-host any-same --output mainnet-eth-spec.yml
-$ laconic-so deploy create --spec-file mainnet-eth-spec.yml --deployment-dir mainnet-eth-deployment
+$ laconic-so deploy --stack mainnet-eth create --spec-file mainnet-eth-spec.yml --deployment-dir mainnet-eth-deployment
 ```
 
 ## Start the stack
 ```

View File

@@ -4,7 +4,7 @@ The MobyMask watcher is a Laconic Network component that provides efficient access
 
 ## Deploy the MobyMask Watcher
 
-The instructions below show how to deploy a MobyMask watcher using laconic-stack-orchestrator (the installation of which is covered [here](https://github.com/cerc-io/stack-orchestrator#install)).
+The instructions below show how to deploy a MobyMask watcher using laconic-stack-orchestrator (the installation of which is covered [here](https://git.vdb.to/cerc-io/stack-orchestrator#install)).
 
 This deployment expects that ipld-eth-server's endpoints are available on the local machine at http://ipld-eth-server.example.com:8083/graphql and http://ipld-eth-server.example.com:8082. More advanced configurations are supported by modifying the watcher's [config file](../../config/watcher-mobymask/mobymask-watcher.toml).

View File

@@ -0,0 +1,11 @@
+version: "1.0"
+name: webapp-deployer-backend
+description: "Deployer for webapps"
+repos:
+  - git.vdb.to/telackey/webapp-deployment-status-api
+containers:
+  - cerc/webapp-deployer-backend
+pods:
+  - name: webapp-deployer-backend
+    repository: git.vdb.to/telackey/webapp-deployment-status-api
+    path: ./

View File

@@ -17,6 +17,7 @@ from pathlib import Path
 from python_on_whales import DockerClient, DockerException
 from stack_orchestrator.deploy.deployer import Deployer, DeployerException, DeployerConfigGenerator
 from stack_orchestrator.deploy.deployment_context import DeploymentContext
+from stack_orchestrator.opts import opts
 
 
 class DockerDeployer(Deployer):
@@ -29,24 +30,28 @@ class DockerDeployer(Deployer):
         self.type = type
 
     def up(self, detach, services):
+        if not opts.o.dry_run:
             try:
                 return self.docker.compose.up(detach=detach, services=services)
             except DockerException as e:
                 raise DeployerException(e)
 
     def down(self, timeout, volumes):
+        if not opts.o.dry_run:
             try:
                 return self.docker.compose.down(timeout=timeout, volumes=volumes)
             except DockerException as e:
                 raise DeployerException(e)
 
     def update(self):
+        if not opts.o.dry_run:
             try:
                 return self.docker.compose.restart()
             except DockerException as e:
                 raise DeployerException(e)
 
     def status(self):
+        if not opts.o.dry_run:
             try:
                 for p in self.docker.compose.ps():
                     print(f"{p.name}\t{p.state.status}")
@@ -54,30 +59,35 @@ class DockerDeployer(Deployer):
                 raise DeployerException(e)
 
     def ps(self):
+        if not opts.o.dry_run:
             try:
                 return self.docker.compose.ps()
             except DockerException as e:
                 raise DeployerException(e)
 
     def port(self, service, private_port):
+        if not opts.o.dry_run:
             try:
                 return self.docker.compose.port(service=service, private_port=private_port)
             except DockerException as e:
                 raise DeployerException(e)
 
     def execute(self, service, command, tty, envs):
+        if not opts.o.dry_run:
             try:
                 return self.docker.compose.execute(service=service, command=command, tty=tty, envs=envs)
             except DockerException as e:
                 raise DeployerException(e)
 
     def logs(self, services, tail, follow, stream):
+        if not opts.o.dry_run:
             try:
                 return self.docker.compose.logs(services=services, tail=tail, follow=follow, stream=stream)
             except DockerException as e:
                 raise DeployerException(e)
 
     def run(self, image: str, command=None, user=None, volumes=None, entrypoint=None, env={}, ports=[], detach=False):
+        if not opts.o.dry_run:
             try:
                 return self.docker.run(image=image, command=command, user=user, volumes=volumes,
                                        entrypoint=entrypoint, envs=env, detach=detach, publish=ports, publish_all=len(ports) == 0)
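
One consequence of pushing the dry-run check into the deployer: under `--dry-run` each method now falls through and implicitly returns `None`, so any caller that consumes a return value still needs its own guard. That is why `ps_operation` in the command module below keeps its `dry_run` test while the purely side-effecting operations lose theirs. Roughly:

```python
container_list = ctx.obj.deployer.ps()  # returns None under --dry-run
if container_list:                      # guard before iterating
    for c in container_list:
        print(c.name)
```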

View File

@@ -85,7 +85,6 @@ def create_deploy_context(
 def up_operation(ctx, services_list, stay_attached=False):
     global_context = ctx.parent.parent.obj
     deploy_context = ctx.obj
-    if not global_context.dry_run:
     cluster_context = deploy_context.cluster_context
     container_exec_env = _make_runtime_env(global_context)
     for attr, value in container_exec_env.items():
@@ -101,10 +100,6 @@ def up_operation(ctx, services_list, stay_attached=False):
 
 def down_operation(ctx, delete_volumes, extra_args_list):
-    global_context = ctx.parent.parent.obj
-    if not global_context.dry_run:
-        if global_context.verbose:
-            print("Running compose down")
     timeout_arg = None
     if extra_args_list:
         timeout_arg = extra_args_list[0]
@@ -113,26 +108,16 @@ def down_operation(ctx, delete_volumes, extra_args_list):
 
 def status_operation(ctx):
-    global_context = ctx.parent.parent.obj
-    if not global_context.dry_run:
-        if global_context.verbose:
-            print("Running compose status")
     ctx.obj.deployer.status()
 
 
 def update_operation(ctx):
-    global_context = ctx.parent.parent.obj
-    if not global_context.dry_run:
-        if global_context.verbose:
-            print("Running compose update")
     ctx.obj.deployer.update()
 
 
 def ps_operation(ctx):
     global_context = ctx.parent.parent.obj
     if not global_context.dry_run:
-        if global_context.verbose:
-            print("Running compose ps")
         container_list = ctx.obj.deployer.ps()
         if len(container_list) > 0:
             print("Running containers:")
@@ -187,11 +172,7 @@ def exec_operation(ctx, extra_args):
 
 def logs_operation(ctx, tail: int, follow: bool, extra_args: str):
-    global_context = ctx.parent.parent.obj
     extra_args_list = list(extra_args) or None
-    if not global_context.dry_run:
-        if global_context.verbose:
-            print("Running compose logs")
     services_list = extra_args_list if extra_args_list is not None else []
     logs_stream = ctx.obj.deployer.logs(services=services_list, tail=tail, follow=follow, stream=True)
     for stream_type, stream_content in logs_stream:
@@ -347,8 +328,8 @@ def _make_cluster_context(ctx, stack, include, exclude, cluster, env_file):
     else:
         if deployment:
             compose_file_name = os.path.join(compose_dir, f"docker-compose-{pod_name}.yml")
-            pod_pre_start_command = pod["pre_start_command"]
-            pod_post_start_command = pod["post_start_command"]
+            pod_pre_start_command = pod.get("pre_start_command")
+            pod_post_start_command = pod.get("post_start_command")
             script_dir = compose_dir.parent.joinpath("pods", pod_name, "scripts")
             if pod_pre_start_command is not None:
                 pre_start_commands.append(os.path.join(script_dir, pod_pre_start_command))
@@ -357,8 +338,8 @@ def _make_cluster_context(ctx, stack, include, exclude, cluster, env_file):
         else:
             pod_root_dir = os.path.join(dev_root_path, pod_repository.split("/")[-1], pod["path"])
             compose_file_name = os.path.join(pod_root_dir, f"docker-compose-{pod_name}.yml")
-            pod_pre_start_command = pod["pre_start_command"]
-            pod_post_start_command = pod["post_start_command"]
+            pod_pre_start_command = pod.get("pre_start_command")
+            pod_post_start_command = pod.get("post_start_command")
             if pod_pre_start_command is not None:
                 pre_start_commands.append(os.path.join(pod_root_dir, pod_pre_start_command))
             if pod_post_start_command is not None:
@@ -463,7 +444,7 @@ def _orchestrate_cluster_config(ctx, cluster_config, deployer, container_exec_env
                     tty=False,
                     envs=container_exec_env)
                 waiting_for_data = False
-            if ctx.debug:
+            if ctx.debug and not waiting_for_data:
                 print(f"destination output: {destination_output}")
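
Switching from subscripting to `dict.get` makes `pre_start_command` and `post_start_command` optional keys in a pod definition; the new `webapp-deployer-backend` stack above, for instance, declares neither. A minimal illustration of the difference:

```python
pod = {"name": "webapp-deployer-backend", "path": "./"}

# pod["pre_start_command"] would raise KeyError: 'pre_start_command'
pod_pre_start_command = pod.get("pre_start_command")  # None when absent
if pod_pre_start_command is not None:
    print("would run:", pod_pre_start_command)
```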

View File

@@ -18,11 +18,11 @@ from stack_orchestrator.deploy.k8s.deploy_k8s import K8sDeployer, K8sDeployerConfigGenerator
 from stack_orchestrator.deploy.compose.deploy_docker import DockerDeployer, DockerDeployerConfigGenerator
 
 
-def getDeployerConfigGenerator(type: str):
+def getDeployerConfigGenerator(type: str, deployment_context):
     if type == "compose" or type is None:
         return DockerDeployerConfigGenerator(type)
     elif type == constants.k8s_deploy_type or type == constants.k8s_kind_deploy_type:
-        return K8sDeployerConfigGenerator(type)
+        return K8sDeployerConfigGenerator(type, deployment_context)
     else:
         print(f"ERROR: deploy-to {type} is not valid")

View File

@@ -54,19 +54,44 @@ def _get_ports(stack):
 
 def _get_named_volumes(stack):
     # Parse the compose files looking for named volumes
-    named_volumes = []
+    named_volumes = {
+        "rw": [],
+        "ro": []
+    }
     parsed_stack = get_parsed_stack_config(stack)
     pods = get_pod_list(parsed_stack)
     yaml = get_yaml()
+
+    def find_vol_usage(parsed_pod_file, vol):
+        ret = {}
+        if "services" in parsed_pod_file:
+            for svc_name, svc in parsed_pod_file["services"].items():
+                if "volumes" in svc:
+                    for svc_volume in svc["volumes"]:
+                        parts = svc_volume.split(":")
+                        if parts[0] == vol:
+                            ret[svc_name] = {
+                                "volume": parts[0],
+                                "mount": parts[1],
+                                "options": parts[2] if len(parts) == 3 else None
+                            }
+        return ret
+
     for pod in pods:
         pod_file_path = get_pod_file_path(parsed_stack, pod)
         parsed_pod_file = yaml.load(open(pod_file_path, "r"))
         if "volumes" in parsed_pod_file:
             volumes = parsed_pod_file["volumes"]
             for volume in volumes.keys():
-                # Volume definition looks like:
-                # 'laconicd-data': None
-                named_volumes.append(volume)
+                for vu in find_vol_usage(parsed_pod_file, volume).values():
+                    read_only = vu["options"] == "ro"
+                    if read_only:
+                        if vu["volume"] not in named_volumes["rw"] and vu["volume"] not in named_volumes["ro"]:
+                            named_volumes["ro"].append(vu["volume"])
+                    else:
+                        if vu["volume"] not in named_volumes["rw"]:
+                            named_volumes["rw"].append(vu["volume"])
+
     return named_volumes
@@ -104,6 +129,18 @@ def _fixup_pod_file(pod, spec, compose_dir):
                 }
             }
             pod["volumes"][volume] = new_volume_spec
+
+    # Fix up configmaps
+    if "configmaps" in spec:
+        spec_cfgmaps = spec["configmaps"]
+        if "volumes" in pod:
+            pod_volumes = pod["volumes"]
+            for volume in pod_volumes.keys():
+                if volume in spec_cfgmaps:
+                    volume_cfg = spec_cfgmaps[volume]
+                    # Just make the dir (if necessary)
+                    _create_bind_dir_if_relative(volume, volume_cfg, compose_dir)
+
     # Fix up ports
     if "network" in spec and "ports" in spec["network"]:
         spec_ports = spec["network"]["ports"]
@@ -319,9 +356,18 @@ def init_operation(deploy_command_context, stack, deployer_type, config,
     named_volumes = _get_named_volumes(stack)
     if named_volumes:
         volume_descriptors = {}
-        for named_volume in named_volumes:
+        configmap_descriptors = {}
+        for named_volume in named_volumes["rw"]:
             volume_descriptors[named_volume] = f"./data/{named_volume}"
+        for named_volume in named_volumes["ro"]:
+            if "k8s" in deployer_type and "config" in named_volume:
+                configmap_descriptors[named_volume] = f"./data/{named_volume}"
+            else:
+                volume_descriptors[named_volume] = f"./data/{named_volume}"
+        if volume_descriptors:
             spec_file_content["volumes"] = volume_descriptors
+        if configmap_descriptors:
+            spec_file_content["configmaps"] = configmap_descriptors
 
     if opts.o.debug:
         print(f"Creating spec file for stack: {stack} with content: {spec_file_content}")
@@ -441,7 +487,7 @@ def create_operation(deployment_command_context, spec_file, deployment_dir, network_dir
     deployment_context = DeploymentContext()
     deployment_context.init(deployment_dir_path)
     # Call the deployer to generate any deployer-specific files (e.g. for kind)
-    deployer_config_generator = getDeployerConfigGenerator(deployment_type)
+    deployer_config_generator = getDeployerConfigGenerator(deployment_type, deployment_context)
     # TODO: make deployment_dir_path a Path above
     deployer_config_generator.generate(deployment_dir_path)
     call_stack_deploy_create(deployment_context, [network_dir, initial_peers, deployment_command_context])
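
Taken together with the compose change above, `deploy init` for a k8s deployer now sorts named volumes by usage: the read-write `test-data` mount stays a volume, while `test-config` (mounted read-only and named with "config") becomes a configmap. For the test pod, the generated spec content would contain roughly:

```python
spec_file_content = {
    # ...other init-generated settings elided...
    "volumes": {"test-data": "./data/test-data"},
    "configmaps": {"test-config": "./data/test-config"},
}
```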

View File

@@ -31,7 +31,8 @@ def _image_needs_pushed(image: str):
 
 def remote_tag_for_image(image: str, remote_repo_url: str):
     # Turns image tags of the form: foo/bar:local into remote.repo/org/bar:deploy
-    (org, image_name_with_version) = image.split("/")
+    major_parts = image.split("/", 2)
+    image_name_with_version = major_parts[1] if 2 == len(major_parts) else major_parts[0]
     (image_name, image_version) = image_name_with_version.split(":")
     if image_version == "local":
         return f"{remote_repo_url}/{image_name}:deploy"
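
The rewritten split makes the org prefix optional: the old tuple unpacking raised `ValueError` whenever an image tag did not contain exactly one `/`. A self-contained check of the new extraction logic:

```python
def image_name_with_version(image: str) -> str:
    # Accept both "bar:local" and "org/bar:local"; the previous
    # `(org, rest) = image.split("/")` required exactly one slash.
    major_parts = image.split("/", 2)
    return major_parts[1] if 2 == len(major_parts) else major_parts[0]

assert image_name_with_version("cerc/webapp-base:local") == "webapp-base:local"
assert image_name_with_version("webapp-base:local") == "webapp-base:local"
```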

View File

@@ -13,6 +13,8 @@
 # You should have received a copy of the GNU Affero General Public License
 # along with this program. If not, see <http:#www.gnu.org/licenses/>.
 
+import os
+
 from kubernetes import client
 from typing import Any, List, Set
 
@@ -20,12 +22,41 @@ from stack_orchestrator.opts import opts
 from stack_orchestrator.util import env_var_map_from_file
 from stack_orchestrator.deploy.k8s.helpers import named_volumes_from_pod_files, volume_mounts_for_service, volumes_for_pod_files
 from stack_orchestrator.deploy.k8s.helpers import get_node_pv_mount_path
-from stack_orchestrator.deploy.k8s.helpers import envs_from_environment_variables_map
+from stack_orchestrator.deploy.k8s.helpers import envs_from_environment_variables_map, envs_from_compose_file, merge_envs
 from stack_orchestrator.deploy.deploy_util import parsed_pod_files_map_from_file_names, images_for_deployment
 from stack_orchestrator.deploy.deploy_types import DeployEnvVars
-from stack_orchestrator.deploy.spec import Spec
+from stack_orchestrator.deploy.spec import Spec, Resources, ResourceLimits
 from stack_orchestrator.deploy.images import remote_tag_for_image
 
+DEFAULT_VOLUME_RESOURCES = Resources({
+    "reservations": {"storage": "2Gi"}
+})
+
+DEFAULT_CONTAINER_RESOURCES = Resources({
+    "reservations": {"cpus": "0.1", "memory": "200M"},
+    "limits": {"cpus": "1.0", "memory": "2000M"},
+})
+
+
+def to_k8s_resource_requirements(resources: Resources) -> client.V1ResourceRequirements:
+    def to_dict(limits: ResourceLimits):
+        if not limits:
+            return None
+
+        ret = {}
+        if limits.cpus:
+            ret["cpu"] = str(limits.cpus)
+        if limits.memory:
+            ret["memory"] = f"{int(limits.memory / (1000 * 1000))}M"
+        if limits.storage:
+            ret["storage"] = f"{int(limits.storage / (1000 * 1000))}M"
+        return ret
+
+    return client.V1ResourceRequirements(
+        requests=to_dict(resources.reservations),
+        limits=to_dict(resources.limits)
+    )
 
 class ClusterInfo:
     parsed_pod_yaml_map: Any
@@ -112,6 +143,7 @@
             services = pod["services"]
             for service_name in services:
                 service_info = services[service_name]
+                if "ports" in service_info:
                     port = int(service_info["ports"][0])
                     if opts.o.debug:
                         print(f"service port: {port}")
@@ -130,39 +162,84 @@
 
     def get_pvcs(self):
         result = []
-        volumes = named_volumes_from_pod_files(self.parsed_pod_yaml_map)
+        spec_volumes = self.spec.get_volumes()
+        named_volumes = named_volumes_from_pod_files(self.parsed_pod_yaml_map)
+        resources = self.spec.get_volume_resources()
+        if not resources:
+            resources = DEFAULT_VOLUME_RESOURCES
         if opts.o.debug:
-            print(f"Volumes: {volumes}")
-        for volume_name in volumes:
+            print(f"Spec Volumes: {spec_volumes}")
+            print(f"Named Volumes: {named_volumes}")
+            print(f"Resources: {resources}")
+        for volume_name in spec_volumes:
+            if volume_name not in named_volumes:
+                if opts.o.debug:
+                    print(f"{volume_name} not in pod files")
+                continue
             spec = client.V1PersistentVolumeClaimSpec(
                 access_modes=["ReadWriteOnce"],
                 storage_class_name="manual",
-                resources=client.V1ResourceRequirements(
-                    requests={"storage": "2Gi"}
-                ),
-                volume_name=volume_name
+                resources=to_k8s_resource_requirements(resources),
+                volume_name=f"{self.app_name}-{volume_name}"
             )
             pvc = client.V1PersistentVolumeClaim(
-                metadata=client.V1ObjectMeta(name=volume_name,
-                                             labels={"volume-label": volume_name}),
+                metadata=client.V1ObjectMeta(name=f"{self.app_name}-{volume_name}",
+                                             labels={"volume-label": f"{self.app_name}-{volume_name}"}),
                 spec=spec,
             )
             result.append(pvc)
         return result
 
+    def get_configmaps(self):
+        result = []
+        spec_configmaps = self.spec.get_configmaps()
+        named_volumes = named_volumes_from_pod_files(self.parsed_pod_yaml_map)
+        for cfg_map_name, cfg_map_path in spec_configmaps.items():
+            if cfg_map_name not in named_volumes:
+                if opts.o.debug:
+                    print(f"{cfg_map_name} not in pod files")
+                continue
+            if not cfg_map_path.startswith("/"):
+                cfg_map_path = os.path.join(os.path.dirname(self.spec.file_path), cfg_map_path)
+            # Read in all the files at a single-level of the directory. This mimics the behavior
+            # of `kubectl create configmap foo --from-file=/path/to/dir`
+            data = {}
+            for f in os.listdir(cfg_map_path):
+                full_path = os.path.join(cfg_map_path, f)
+                if os.path.isfile(full_path):
+                    data[f] = open(full_path, 'rt').read()
+            spec = client.V1ConfigMap(
+                metadata=client.V1ObjectMeta(name=f"{self.app_name}-{cfg_map_name}",
+                                             labels={"configmap-label": cfg_map_name}),
+                data=data
+            )
+            result.append(spec)
+        return result
     def get_pvs(self):
         result = []
-        volumes = named_volumes_from_pod_files(self.parsed_pod_yaml_map)
-        for volume_name in volumes:
+        spec_volumes = self.spec.get_volumes()
+        named_volumes = named_volumes_from_pod_files(self.parsed_pod_yaml_map)
+        resources = self.spec.get_volume_resources()
+        if not resources:
+            resources = DEFAULT_VOLUME_RESOURCES
+        for volume_name in spec_volumes:
+            if volume_name not in named_volumes:
+                if opts.o.debug:
+                    print(f"{volume_name} not in pod files")
+                continue
             spec = client.V1PersistentVolumeSpec(
                 storage_class_name="manual",
                 access_modes=["ReadWriteOnce"],
-                capacity={"storage": "2Gi"},
+                capacity=to_k8s_resource_requirements(resources).requests,
                 host_path=client.V1HostPathVolumeSource(path=get_node_pv_mount_path(volume_name))
             )
             pv = client.V1PersistentVolume(
-                metadata=client.V1ObjectMeta(name=volume_name,
-                                             labels={"volume-label": volume_name}),
+                metadata=client.V1ObjectMeta(name=f"{self.app_name}-{volume_name}",
+                                             labels={"volume-label": f"{self.app_name}-{volume_name}"}),
                 spec=spec,
             )
             result.append(pv)
@@ -171,6 +248,9 @@ class ClusterInfo:
     # TODO: put things like image pull policy into an object-scope struct
     def get_deployment(self, image_pull_policy: str = None):
         containers = []
+        resources = self.spec.get_container_resources()
+        if not resources:
+            resources = DEFAULT_CONTAINER_RESOURCES
         for pod_name in self.parsed_pod_yaml_map:
             pod = self.parsed_pod_yaml_map[pod_name]
             services = pod["services"]
@@ -178,10 +258,18 @@
                 container_name = service_name
                 service_info = services[service_name]
                 image = service_info["image"]
+                if "ports" in service_info:
                     port = int(service_info["ports"][0])
                     if opts.o.debug:
                         print(f"image: {image}")
                         print(f"service port: {port}")
+                merged_envs = merge_envs(
+                    envs_from_compose_file(
+                        service_info["environment"]), self.environment_variables.map
+                ) if "environment" in service_info else self.environment_variables.map
+                envs = envs_from_environment_variables_map(merged_envs)
+                if opts.o.debug:
+                    print(f"Merged envs: {envs}")
                 # Re-write the image tag for remote deployment
                 image_to_use = remote_tag_for_image(
                     image, self.spec.get_image_registry()) if self.spec.get_image_registry() is not None else image
@@ -190,16 +278,13 @@
                     name=container_name,
                     image=image_to_use,
                     image_pull_policy=image_pull_policy,
-                    env=envs_from_environment_variables_map(self.environment_variables.map),
+                    env=envs,
                     ports=[client.V1ContainerPort(container_port=port)],
                     volume_mounts=volume_mounts,
-                    resources=client.V1ResourceRequirements(
-                        requests={"cpu": "100m", "memory": "200Mi"},
-                        limits={"cpu": "500m", "memory": "500Mi"},
-                    ),
+                    resources=to_k8s_resource_requirements(resources),
                 )
                 containers.append(container)
-        volumes = volumes_for_pod_files(self.parsed_pod_yaml_map)
+        volumes = volumes_for_pod_files(self.parsed_pod_yaml_map, self.spec, self.app_name)
         image_pull_secrets = [client.V1LocalObjectReference(name="laconic-registry")]
         template = client.V1PodTemplateSpec(
             metadata=client.V1ObjectMeta(labels={"app": self.app_name}),
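
`merge_envs` and `envs_from_compose_file` are imported from `helpers` but their bodies are not part of this diff; based on how they are called above, a plausible sketch is that deployment-level variables overlay the service's compose-file `environment:` entries:

```python
def merge_envs(compose_envs: dict, deployment_envs: dict) -> dict:
    # Assumed semantics: the service's compose entries form the base,
    # deployment-wide variables win on conflict.
    merged = dict(compose_envs)
    merged.update(deployment_envs)
    return merged
```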

View File

@@ -1,5 +1,4 @@
 # Copyright © 2023 Vulcanize
-
 # This program is free software: you can redistribute it and/or modify
 # it under the terms of the GNU Affero General Public License as published by
 # the Free Software Foundation, either version 3 of the License, or
@@ -82,20 +81,13 @@ class K8sDeployer(Deployer):
             self.apps_api = client.AppsV1Api()
             self.custom_obj_api = client.CustomObjectsApi()
 
-    def up(self, detach, services):
-        if self.is_kind():
-            # Create the kind cluster
-            create_cluster(self.kind_cluster_name, self.deployment_dir.joinpath(constants.kind_config_filename))
-            # Ensure the referenced containers are copied into kind
-            load_images_into_kind(self.kind_cluster_name, self.cluster_info.image_set)
-        self.connect_api()
-
+    def _create_volume_data(self):
         # Create the host-path-mounted PVs for this deployment
         pvs = self.cluster_info.get_pvs()
         for pv in pvs:
             if opts.o.debug:
                 print(f"Sending this pv: {pv}")
+            if not opts.o.dry_run:
                 pv_resp = self.core_api.create_persistent_volume(body=pv)
                 if opts.o.debug:
                     print("PVs created:")
@@ -106,15 +98,34 @@ class K8sDeployer(Deployer):
         for pvc in pvcs:
             if opts.o.debug:
                 print(f"Sending this pvc: {pvc}")
+            if not opts.o.dry_run:
                 pvc_resp = self.core_api.create_namespaced_persistent_volume_claim(body=pvc, namespace=self.k8s_namespace)
                 if opts.o.debug:
                     print("PVCs created:")
                     print(f"{pvc_resp}")
 
+        # Figure out the ConfigMaps for this deployment
+        config_maps = self.cluster_info.get_configmaps()
+        for cfg_map in config_maps:
+            if opts.o.debug:
+                print(f"Sending this ConfigMap: {cfg_map}")
+            if not opts.o.dry_run:
+                cfg_rsp = self.core_api.create_namespaced_config_map(
+                    body=cfg_map,
+                    namespace=self.k8s_namespace
+                )
+                if opts.o.debug:
+                    print("ConfigMap created:")
+                    print(f"{cfg_rsp}")
+
+    def _create_deployment(self):
         # Process compose files into a Deployment
         deployment = self.cluster_info.get_deployment(image_pull_policy=None if self.is_kind() else "Always")
         # Create the k8s objects
         if opts.o.debug:
             print(f"Sending this deployment: {deployment}")
+        if not opts.o.dry_run:
             deployment_resp = self.apps_api.create_namespaced_deployment(
                 body=deployment, namespace=self.k8s_namespace
             )
@@ -124,6 +135,9 @@ class K8sDeployer(Deployer):
                     {deployment_resp.metadata.generation} {deployment_resp.spec.template.spec.containers[0].image}")
 
         service: client.V1Service = self.cluster_info.get_service()
+        if opts.o.debug:
+            print(f"Sending this service: {service}")
+        if not opts.o.dry_run:
             service_resp = self.core_api.create_namespaced_service(
                 namespace=self.k8s_namespace,
                 body=service
@@ -132,11 +146,27 @@ class K8sDeployer(Deployer):
                 print("Service created:")
                 print(f"{service_resp}")
 
+    def up(self, detach, services):
+        if not opts.o.dry_run:
+            if self.is_kind():
+                # Create the kind cluster
+                create_cluster(self.kind_cluster_name, self.deployment_dir.joinpath(constants.kind_config_filename))
+                # Ensure the referenced containers are copied into kind
+                load_images_into_kind(self.kind_cluster_name, self.cluster_info.image_set)
+            self.connect_api()
+        else:
+            print("Dry run mode enabled, skipping k8s API connect")
+
+        self._create_volume_data()
+        self._create_deployment()
+
         if not self.is_kind():
             ingress: client.V1Ingress = self.cluster_info.get_ingress()
+            if ingress:
                 if opts.o.debug:
                     print(f"Sending this ingress: {ingress}")
+                if not opts.o.dry_run:
                     ingress_resp = self.networking_api.create_namespaced_ingress(
                         namespace=self.k8s_namespace,
                         body=ingress
@@ -144,8 +174,11 @@ class K8sDeployer(Deployer):
                     if opts.o.debug:
                         print("Ingress created:")
                         print(f"{ingress_resp}")
+            else:
+                if opts.o.debug:
+                    print("No ingress configured")
 
-    def down(self, timeout, volumes):
+    def down(self, timeout, volumes):  # noqa: C901
         self.connect_api()
         # Delete the k8s objects
         # Create the host-path-mounted PVs for this deployment
@@ -175,6 +208,22 @@ class K8sDeployer(Deployer):
                     print(f"{pvc_resp}")
             except client.exceptions.ApiException as e:
                 _check_delete_exception(e)
+
+        # Figure out the ConfigMaps for this deployment
+        cfg_maps = self.cluster_info.get_configmaps()
+        for cfg_map in cfg_maps:
+            if opts.o.debug:
+                print(f"Deleting this ConfigMap: {cfg_map}")
+            try:
+                cfg_map_resp = self.core_api.delete_namespaced_config_map(
+                    name=cfg_map.metadata.name, namespace=self.k8s_namespace
+                )
+                if opts.o.debug:
+                    print("ConfigMap deleted:")
+                    print(f"{cfg_map_resp}")
+            except client.exceptions.ApiException as e:
+                _check_delete_exception(e)
+
         deployment = self.cluster_info.get_deployment()
         if opts.o.debug:
             print(f"Deleting this deployment: {deployment}")
@ -198,6 +247,7 @@ class K8sDeployer(Deployer):
if not self.is_kind(): if not self.is_kind():
ingress: client.V1Ingress = self.cluster_info.get_ingress() ingress: client.V1Ingress = self.cluster_info.get_ingress()
if ingress:
if opts.o.debug: if opts.o.debug:
print(f"Deleting this ingress: {ingress}") print(f"Deleting this ingress: {ingress}")
try: try:
@ -206,6 +256,9 @@ class K8sDeployer(Deployer):
) )
except client.exceptions.ApiException as e: except client.exceptions.ApiException as e:
_check_delete_exception(e) _check_delete_exception(e)
else:
if opts.o.debug:
print("No ingress to delete")
if self.is_kind(): if self.is_kind():
# Destroy the kind cluster # Destroy the kind cluster
@ -353,8 +406,9 @@ class K8sDeployer(Deployer):
class K8sDeployerConfigGenerator(DeployerConfigGenerator): class K8sDeployerConfigGenerator(DeployerConfigGenerator):
type: str type: str
def __init__(self, type: str) -> None: def __init__(self, type: str, deployment_context) -> None:
self.type = type self.type = type
self.deployment_context = deployment_context
super().__init__() super().__init__()
def generate(self, deployment_dir: Path): def generate(self, deployment_dir: Path):
@ -362,7 +416,7 @@ class K8sDeployerConfigGenerator(DeployerConfigGenerator):
if self.type == "k8s-kind": if self.type == "k8s-kind":
# Check the file isn't already there # Check the file isn't already there
# Get the config file contents # Get the config file contents
content = generate_kind_config(deployment_dir) content = generate_kind_config(deployment_dir, self.deployment_context)
if opts.o.debug: if opts.o.debug:
print(f"kind config is: {content}") print(f"kind config is: {content}")
config_file = deployment_dir.joinpath(constants.kind_config_filename) config_file = deployment_dir.joinpath(constants.kind_config_filename)

View File

@ -17,6 +17,7 @@ from kubernetes import client
import os import os
from pathlib import Path from pathlib import Path
import subprocess import subprocess
import re
from typing import Set, Mapping, List from typing import Set, Mapping, List
from stack_orchestrator.opts import opts from stack_orchestrator.opts import opts
@ -73,7 +74,7 @@ def named_volumes_from_pod_files(parsed_pod_files):
parsed_pod_file = parsed_pod_files[pod] parsed_pod_file = parsed_pod_files[pod]
if "volumes" in parsed_pod_file: if "volumes" in parsed_pod_file:
volumes = parsed_pod_file["volumes"] volumes = parsed_pod_file["volumes"]
for volume in volumes.keys(): for volume, value in volumes.items():
# Volume definition looks like: # Volume definition looks like:
# 'laconicd-data': None # 'laconicd-data': None
named_volumes.append(volume) named_volumes.append(volume)
@ -97,21 +98,36 @@ def volume_mounts_for_service(parsed_pod_files, service):
if "volumes" in service_obj: if "volumes" in service_obj:
volumes = service_obj["volumes"] volumes = service_obj["volumes"]
for mount_string in volumes: for mount_string in volumes:
# Looks like: test-data:/data # Looks like: test-data:/data or test-data:/data:ro or test-data:/data:rw
(volume_name, mount_path) = mount_string.split(":") if opts.o.debug:
volume_device = client.V1VolumeMount(mount_path=mount_path, name=volume_name) print(f"mount_string: {mount_string}")
mount_split = mount_string.split(":")
volume_name = mount_split[0]
mount_path = mount_split[1]
mount_options = mount_split[2] if len(mount_split) == 3 else None
if opts.o.debug:
print(f"volumne_name: {volume_name}")
print(f"mount path: {mount_path}")
print(f"mount options: {mount_options}")
volume_device = client.V1VolumeMount(
mount_path=mount_path, name=volume_name, read_only="ro" == mount_options)
result.append(volume_device) result.append(volume_device)
return result return result
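# Illustrative sketch (not part of this change): the compose-style mount strings
# parsed above follow "name:path[:options]", e.g.
#   "test-data:/data"    -> ("test-data", "/data", None)
#   "test-data:/data:ro" -> ("test-data", "/data", "ro"), yielding read_only=True
parts = "test-data:/data:ro".split(":")
assert (parts[0], parts[1], parts[2] if len(parts) == 3 else None) == ("test-data", "/data", "ro")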
def volumes_for_pod_files(parsed_pod_files): def volumes_for_pod_files(parsed_pod_files, spec, app_name):
result = [] result = []
for pod in parsed_pod_files: for pod in parsed_pod_files:
parsed_pod_file = parsed_pod_files[pod] parsed_pod_file = parsed_pod_files[pod]
if "volumes" in parsed_pod_file: if "volumes" in parsed_pod_file:
volumes = parsed_pod_file["volumes"] volumes = parsed_pod_file["volumes"]
for volume_name in volumes.keys(): for volume_name in volumes.keys():
claim = client.V1PersistentVolumeClaimVolumeSource(claim_name=volume_name) if volume_name in spec.get_configmaps():
config_map = client.V1ConfigMapVolumeSource(name=f"{app_name}-{volume_name}")
volume = client.V1Volume(name=volume_name, config_map=config_map)
result.append(volume)
else:
claim = client.V1PersistentVolumeClaimVolumeSource(claim_name=f"{app_name}-{volume_name}")
volume = client.V1Volume(name=volume_name, persistent_volume_claim=claim) volume = client.V1Volume(name=volume_name, persistent_volume_claim=claim)
result.append(volume) result.append(volume)
return result return result
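# Note (descriptive, not part of this change): volumes listed under the spec's
# "configmaps" key are mounted from a ConfigMap named "<app_name>-<volume_name>";
# all other volumes are backed by a PVC with the same "<app_name>-<volume_name>" claim name.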
@ -125,6 +141,7 @@ def _get_host_paths_for_volumes(parsed_pod_files):
volumes = parsed_pod_file["volumes"] volumes = parsed_pod_file["volumes"]
for volume_name in volumes.keys(): for volume_name in volumes.keys():
volume_definition = volumes[volume_name] volume_definition = volumes[volume_name]
if volume_definition and "driver_opts" in volume_definition:
host_path = volume_definition["driver_opts"]["device"] host_path = volume_definition["driver_opts"]["device"]
result[volume_name] = host_path result[volume_name] = host_path
return result return result
@ -138,7 +155,7 @@ def _make_absolute_host_path(data_mount_path: Path, deployment_dir: Path) -> Pat
return Path.cwd().joinpath(deployment_dir.joinpath("compose").joinpath(data_mount_path)).resolve() return Path.cwd().joinpath(deployment_dir.joinpath("compose").joinpath(data_mount_path)).resolve()
def _generate_kind_mounts(parsed_pod_files, deployment_dir): def _generate_kind_mounts(parsed_pod_files, deployment_dir, deployment_context):
volume_definitions = [] volume_definitions = []
volume_host_path_map = _get_host_paths_for_volumes(parsed_pod_files) volume_host_path_map = _get_host_paths_for_volumes(parsed_pod_files)
# Note these paths are relative to the location of the pod files (at present) # Note these paths are relative to the location of the pod files (at present)
@ -153,11 +170,20 @@ def _generate_kind_mounts(parsed_pod_files, deployment_dir):
if "volumes" in service_obj: if "volumes" in service_obj:
volumes = service_obj["volumes"] volumes = service_obj["volumes"]
for mount_string in volumes: for mount_string in volumes:
# Looks like: test-data:/data # Looks like: test-data:/data or test-data:/data:ro or test-data:/data:rw
(volume_name, mount_path) = mount_string.split(":") if opts.o.debug:
print(f"mount_string: {mount_string}")
mount_split = mount_string.split(":")
volume_name = mount_split[0]
mount_path = mount_split[1]
if opts.o.debug:
print(f"volumne_name: {volume_name}")
print(f"map: {volume_host_path_map}")
print(f"mount path: {mount_path}")
if volume_name not in deployment_context.spec.get_configmaps():
volume_definitions.append( volume_definitions.append(
f" - hostPath: {_make_absolute_host_path(volume_host_path_map[volume_name], deployment_dir)}\n" f" - hostPath: {_make_absolute_host_path(volume_host_path_map[volume_name], deployment_dir)}\n"
f" containerPath: {get_node_pv_mount_path(volume_name)}" f" containerPath: {get_node_pv_mount_path(volume_name)}\n"
) )
return ( return (
"" if len(volume_definitions) == 0 else ( "" if len(volume_definitions) == 0 else (
@ -180,7 +206,7 @@ def _generate_kind_port_mappings(parsed_pod_files):
for port_string in ports: for port_string in ports:
# TODO handle the complex cases # TODO handle the complex cases
# Looks like: 80 or something more complicated # Looks like: 80 or something more complicated
port_definitions.append(f" - containerPort: {port_string}\n hostPort: {port_string}") port_definitions.append(f" - containerPort: {port_string}\n hostPort: {port_string}\n")
return ( return (
"" if len(port_definitions) == 0 else ( "" if len(port_definitions) == 0 else (
" extraPortMappings:\n" " extraPortMappings:\n"
@ -189,6 +215,33 @@ def _generate_kind_port_mappings(parsed_pod_files):
) )
# Note: this makes any duplicate definition in b overwrite a
def merge_envs(a: Mapping[str, str], b: Mapping[str, str]) -> Mapping[str, str]:
result = {**a, **b}
return result
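# Example of the override semantics noted above (illustrative, not part of this change):
assert merge_envs({"FOO": "a", "BAR": "b"}, {"FOO": "c"}) == {"FOO": "c", "BAR": "b"}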
def _expand_shell_vars(raw_val: str) -> str:
# could be: <string> or ${<env-var-name>} or ${<env-var-name>:-<default-value>}
# TODO: implement support for variable substitution and default values
# if raw_val is like ${<something>} print a warning and substitute an empty string
# otherwise return raw_val
match = re.search(r"^\$\{(.*)\}$", raw_val)
if match:
print(f"WARNING: found unimplemented environment variable substitution: {raw_val}")
else:
return raw_val
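# Illustrative behavior of the helper above (a sketch, assuming the empty-string
# fallback on a match): plain values pass through, ${...} forms are stubbed out
# until substitution support is implemented.
assert _expand_shell_vars("plain-value") == "plain-value"
assert _expand_shell_vars("${HOME}") == ""          # warning printed
assert _expand_shell_vars("${HOME:-/root}") == ""   # default values are TODO too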
# TODO: handle the case where the same env var is defined in multiple places
def envs_from_compose_file(compose_file_envs: Mapping[str, str]) -> Mapping[str, str]:
result = {}
for env_var, env_val in compose_file_envs.items():
expanded_env_val = _expand_shell_vars(env_val)
result.update({env_var: expanded_env_val})
return result
def envs_from_environment_variables_map(map: Mapping[str, str]) -> List[client.V1EnvVar]: def envs_from_environment_variables_map(map: Mapping[str, str]) -> List[client.V1EnvVar]:
result = [] result = []
for env_var, env_val in map.items(): for env_var, env_val in map.items():
@ -214,13 +267,13 @@ def envs_from_environment_variables_map(map: Mapping[str, str]) -> List[client.V
# extraMounts: # extraMounts:
# - hostPath: /path/to/my/files # - hostPath: /path/to/my/files
# containerPath: /files # containerPath: /files
def generate_kind_config(deployment_dir: Path): def generate_kind_config(deployment_dir: Path, deployment_context):
compose_file_dir = deployment_dir.joinpath("compose") compose_file_dir = deployment_dir.joinpath("compose")
# TODO: this should come from the stack file, not this way # TODO: this should come from the stack file, not this way
pod_files = [p for p in compose_file_dir.iterdir() if p.is_file()] pod_files = [p for p in compose_file_dir.iterdir() if p.is_file()]
parsed_pod_files_map = parsed_pod_files_map_from_file_names(pod_files) parsed_pod_files_map = parsed_pod_files_map_from_file_names(pod_files)
port_mappings_yml = _generate_kind_port_mappings(parsed_pod_files_map) port_mappings_yml = _generate_kind_port_mappings(parsed_pod_files_map)
mounts_yml = _generate_kind_mounts(parsed_pod_files_map, deployment_dir) mounts_yml = _generate_kind_mounts(parsed_pod_files_map, deployment_dir, deployment_context)
return ( return (
"kind: Cluster\n" "kind: Cluster\n"
"apiVersion: kind.x-k8s.io/v1alpha4\n" "apiVersion: kind.x-k8s.io/v1alpha4\n"

View File

@ -13,15 +13,64 @@
# You should have received a copy of the GNU Affero General Public License # You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http:#www.gnu.org/licenses/>. # along with this program. If not, see <http:#www.gnu.org/licenses/>.
from pathlib import Path
import typing import typing
import humanfriendly
from pathlib import Path
from stack_orchestrator.util import get_yaml from stack_orchestrator.util import get_yaml
from stack_orchestrator import constants from stack_orchestrator import constants
class ResourceLimits:
cpus: float = None
memory: int = None
storage: int = None
def __init__(self, obj={}):
if "cpus" in obj:
self.cpus = float(obj["cpus"])
if "memory" in obj:
self.memory = humanfriendly.parse_size(obj["memory"])
if "storage" in obj:
self.storage = humanfriendly.parse_size(obj["storage"])
def __len__(self):
return len(self.__dict__)
def __iter__(self):
for k in self.__dict__:
yield k, self.__dict__[k]
def __repr__(self):
return str(self.__dict__)
class Resources:
limits: ResourceLimits = None
reservations: ResourceLimits = None
def __init__(self, obj={}):
if "reservations" in obj:
self.reservations = ResourceLimits(obj["reservations"])
if "limits" in obj:
self.limits = ResourceLimits(obj["limits"])
def __len__(self):
return len(self.__dict__)
def __iter__(self):
for k in self.__dict__:
yield k, self.__dict__[k]
def __repr__(self):
return str(self.__dict__)
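# Illustrative (not part of this change): humanfriendly accepts both decimal and
# binary unit suffixes, so spec files can say e.g. "memory: 1GB" or "memory: 512MiB".
res = Resources({"limits": {"cpus": "0.5", "memory": "512MiB"}})
assert res.limits.cpus == 0.5
assert res.limits.memory == 536870912  # 512 * 1024 * 1024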
class Spec: class Spec:
obj: typing.Any obj: typing.Any
file_path: Path
def __init__(self) -> None: def __init__(self) -> None:
pass pass
@ -29,12 +78,29 @@ class Spec:
def init_from_file(self, file_path: Path): def init_from_file(self, file_path: Path):
with file_path: with file_path:
self.obj = get_yaml().load(open(file_path, "r")) self.obj = get_yaml().load(open(file_path, "r"))
self.file_path = file_path
def get_image_registry(self): def get_image_registry(self):
return (self.obj[constants.image_resigtry_key] return (self.obj[constants.image_resigtry_key]
if self.obj and constants.image_resigtry_key in self.obj if self.obj and constants.image_resigtry_key in self.obj
else None) else None)
def get_volumes(self):
return (self.obj["volumes"]
if self.obj and "volumes" in self.obj
else {})
def get_configmaps(self):
return (self.obj["configmaps"]
if self.obj and "configmaps" in self.obj
else {})
def get_container_resources(self):
return Resources(self.obj.get("resources", {}).get("containers", {}))
def get_volume_resources(self):
return Resources(self.obj.get("resources", {}).get("volumes", {}))
def get_http_proxy(self): def get_http_proxy(self):
return (self.obj[constants.network_key][constants.http_proxy_key] return (self.obj[constants.network_key][constants.http_proxy_key]
if self.obj and constants.network_key in self.obj if self.obj and constants.network_key in self.obj

View File

@ -19,6 +19,8 @@ import shlex
import shutil import shutil
import sys import sys
import tempfile import tempfile
import time
import uuid
import click import click
@ -27,7 +29,7 @@ from stack_orchestrator.deploy.webapp.util import (LaconicRegistryClient,
build_container_image, push_container_image, build_container_image, push_container_image,
file_hash, deploy_to_k8s, publish_deployment, file_hash, deploy_to_k8s, publish_deployment,
hostname_for_deployment_request, generate_hostname_for_app, hostname_for_deployment_request, generate_hostname_for_app,
match_owner) match_owner, skip_by_tag)
def process_app_deployment_request( def process_app_deployment_request(
@ -39,8 +41,19 @@ def process_app_deployment_request(
dns_suffix, dns_suffix,
deployment_parent_dir, deployment_parent_dir,
kube_config, kube_config,
image_registry image_registry,
log_parent_dir
): ):
run_id = f"{app_deployment_request.id}-{str(time.time()).split('.')[0]}-{str(uuid.uuid4()).split('-')[0]}"
log_file = None
if log_parent_dir:
log_dir = os.path.join(log_parent_dir, app_deployment_request.id)
if not os.path.exists(log_dir):
os.mkdir(log_dir)
log_file_path = os.path.join(log_dir, f"{run_id}.log")
print(f"Directing build logs to: {log_file_path}")
log_file = open(log_file_path, "wt")
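# Note (descriptive, not part of this change): run_id has the shape
# "<request-id>-<epoch-seconds>-<uuid-prefix>", so each processing attempt of a
# request gets its own log file under <log-parent-dir>/<request-id>/.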
# 1. look up application # 1. look up application
app = laconic.get_record(app_deployment_request.attributes.application, require=True) app = laconic.get_record(app_deployment_request.attributes.application, require=True)
@ -59,8 +72,8 @@ def process_app_deployment_request(
dns_record = laconic.get_record(dns_crn) dns_record = laconic.get_record(dns_crn)
if dns_record: if dns_record:
matched_owner = match_owner(app_deployment_request, dns_record) matched_owner = match_owner(app_deployment_request, dns_record)
if not matched_owner and dns_record.request: if not matched_owner and dns_record.attributes.request:
matched_owner = match_owner(app_deployment_request, laconic.get_record(dns_record.request, require=True)) matched_owner = match_owner(app_deployment_request, laconic.get_record(dns_record.attributes.request, require=True))
if matched_owner: if matched_owner:
print("Matched DnsRecord ownership:", matched_owner) print("Matched DnsRecord ownership:", matched_owner)
@ -102,8 +115,10 @@ def process_app_deployment_request(
needs_k8s_deploy = False needs_k8s_deploy = False
# 6. build container (if needed) # 6. build container (if needed)
if not deployment_record or deployment_record.attributes.application != app.id: if not deployment_record or deployment_record.attributes.application != app.id:
build_container_image(app, deployment_container_tag) # TODO: pull from request
push_container_image(deployment_dir) extra_build_args = []
build_container_image(app, deployment_container_tag, extra_build_args, log_file)
push_container_image(deployment_dir, log_file)
needs_k8s_deploy = True needs_k8s_deploy = True
# 7. update config (if needed) # 7. update config (if needed)
@ -116,6 +131,7 @@ def process_app_deployment_request(
deploy_to_k8s( deploy_to_k8s(
deployment_record, deployment_record,
deployment_dir, deployment_dir,
log_file
) )
publish_deployment( publish_deployment(
@ -136,13 +152,17 @@ def load_known_requests(filename):
return {} return {}
def dump_known_requests(filename, requests): def dump_known_requests(filename, requests, status="SEEN"):
if not filename: if not filename:
return return
known_requests = load_known_requests(filename) known_requests = load_known_requests(filename)
for r in requests: for r in requests:
known_requests[r.id] = r.createTime known_requests[r.id] = {
json.dump(known_requests, open(filename, "w")) "createTime": r.createTime,
"status": status
}
with open(filename, "w") as f:
json.dump(known_requests, f)
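# Illustrative state-file shape after this change (not part of this commit):
# {
#   "<request-id>": {"createTime": "<ts>", "status": "SEEN" | "DEPLOYING" | "DEPLOYED" | "ERROR"}
# }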
@click.command() @click.command()
@ -158,10 +178,14 @@ def dump_known_requests(filename, requests):
@click.option("--record-namespace-dns", help="eg, crn://laconic/dns") @click.option("--record-namespace-dns", help="eg, crn://laconic/dns")
@click.option("--record-namespace-deployments", help="eg, crn://laconic/deployments") @click.option("--record-namespace-deployments", help="eg, crn://laconic/deployments")
@click.option("--dry-run", help="Don't do anything, just report what would be done.", is_flag=True) @click.option("--dry-run", help="Don't do anything, just report what would be done.", is_flag=True)
@click.option("--include-tags", help="Only include requests with matching tags (comma-separated).", default="")
@click.option("--exclude-tags", help="Exclude requests with matching tags (comma-separated).", default="")
@click.option("--log-dir", help="Output build/deployment logs to directory.", default=None)
@click.pass_context @click.pass_context
def command(ctx, kube_config, laconic_config, image_registry, deployment_parent_dir, def command(ctx, kube_config, laconic_config, image_registry, deployment_parent_dir, # noqa: C901
request_id, discover, state_file, only_update_state, request_id, discover, state_file, only_update_state,
dns_suffix, record_namespace_dns, record_namespace_deployments, dry_run): dns_suffix, record_namespace_dns, record_namespace_deployments, dry_run,
include_tags, exclude_tags, log_dir):
if request_id and discover: if request_id and discover:
print("Cannot specify both --request-id and --discover", file=sys.stderr) print("Cannot specify both --request-id and --discover", file=sys.stderr)
sys.exit(2) sys.exit(2)
@ -179,6 +203,10 @@ def command(ctx, kube_config, laconic_config, image_registry, deployment_parent_
print("--dns-suffix, --record-namespace-dns, and --record-namespace-deployments are all required", file=sys.stderr) print("--dns-suffix, --record-namespace-dns, and --record-namespace-deployments are all required", file=sys.stderr)
sys.exit(2) sys.exit(2)
# Split CSV and clean up values.
include_tags = [tag.strip() for tag in include_tags.split(",") if tag]
exclude_tags = [tag.strip() for tag in exclude_tags.split(",") if tag]
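# e.g. --include-tags "a, b,,c" yields ["a", "b", "c"] (empty entries dropped, whitespace trimmed):
assert [t.strip() for t in "a, b,,c".split(",") if t] == ["a", "b", "c"]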
laconic = LaconicRegistryClient(laconic_config) laconic = LaconicRegistryClient(laconic_config)
# Find deployment requests. # Find deployment requests.
@ -200,7 +228,9 @@ def command(ctx, kube_config, laconic_config, image_registry, deployment_parent_
requests.sort(key=lambda r: r.createTime) requests.sort(key=lambda r: r.createTime)
requests.reverse() requests.reverse()
requests_by_name = {} requests_by_name = {}
skipped_by_name = {}
for r in requests: for r in requests:
# TODO: Do this _after_ filtering deployments and cancellations to minimize round trips.
app = laconic.get_record(r.attributes.application) app = laconic.get_record(r.attributes.application)
if not app: if not app:
print("Skipping request %s, cannot locate app." % r.id) print("Skipping request %s, cannot locate app." % r.id)
@ -211,17 +241,20 @@ def command(ctx, kube_config, laconic_config, image_registry, deployment_parent_
requested_name = generate_hostname_for_app(app) requested_name = generate_hostname_for_app(app)
print("Generating name %s for request %s." % (requested_name, r.id)) print("Generating name %s for request %s." % (requested_name, r.id))
if requested_name not in requests_by_name: if requested_name in skipped_by_name or requested_name in requests_by_name:
print( print("Ignoring request %s, it has been superseded." % r.id)
"Found request %s to run application %s on %s." continue
% (r.id, r.attributes.application, requested_name)
) if skip_by_tag(r, include_tags, exclude_tags):
print("Skipping request %s, filtered by tag (include %s, exclude %s, present %s)" % (r.id,
include_tags,
exclude_tags,
r.attributes.tags))
skipped_by_name[requested_name] = r
continue
print("Found request %s to run application %s on %s." % (r.id, r.attributes.application, requested_name))
requests_by_name[requested_name] = r requests_by_name[requested_name] = r
else:
print(
"Ignoring request %s, it is superseded by %s."
% (r.id, requests_by_name[requested_name].id)
)
# Find deployments. # Find deployments.
deployments = laconic.app_deployments() deployments = laconic.app_deployments()
@ -256,6 +289,8 @@ def command(ctx, kube_config, laconic_config, image_registry, deployment_parent_
if not dry_run: if not dry_run:
for r in requests_to_execute: for r in requests_to_execute:
dump_known_requests(state_file, [r], "DEPLOYING")
status = "ERROR"
try: try:
process_app_deployment_request( process_app_deployment_request(
ctx, ctx,
@ -266,7 +301,9 @@ def command(ctx, kube_config, laconic_config, image_registry, deployment_parent_
dns_suffix, dns_suffix,
os.path.abspath(deployment_parent_dir), os.path.abspath(deployment_parent_dir),
kube_config, kube_config,
image_registry image_registry,
log_dir
) )
status = "DEPLOYED"
finally: finally:
dump_known_requests(state_file, [r]) dump_known_requests(state_file, [r], status)

View File

@ -27,7 +27,7 @@ from dotenv import dotenv_values
from stack_orchestrator import constants from stack_orchestrator import constants
from stack_orchestrator.deploy.deployer_factory import getDeployer from stack_orchestrator.deploy.deployer_factory import getDeployer
WEBAPP_PORT = 3000 WEBAPP_PORT = 80
@click.command() @click.command()

View File

@ -20,7 +20,7 @@ import sys
import click import click
from stack_orchestrator.deploy.webapp.util import LaconicRegistryClient, match_owner from stack_orchestrator.deploy.webapp.util import LaconicRegistryClient, match_owner, skip_by_tag
def process_app_removal_request(ctx, def process_app_removal_request(ctx,
@ -40,8 +40,8 @@ def process_app_removal_request(ctx,
matched_owner = match_owner(app_removal_request, deployment_record, dns_record) matched_owner = match_owner(app_removal_request, deployment_record, dns_record)
# Or of the original deployment request. # Or of the original deployment request.
if not matched_owner and deployment_record.request: if not matched_owner and deployment_record.attributes.request:
matched_owner = match_owner(app_removal_request, laconic.get_record(deployment_record.request, require=True)) matched_owner = match_owner(app_removal_request, laconic.get_record(deployment_record.attributes.request, require=True))
if matched_owner: if matched_owner:
print("Matched deployment ownership:", matched_owner) print("Matched deployment ownership:", matched_owner)
@ -107,10 +107,12 @@ def dump_known_requests(filename, requests):
@click.option("--delete-names/--preserve-names", help="Delete all names associated with removed deployments.", default=True) @click.option("--delete-names/--preserve-names", help="Delete all names associated with removed deployments.", default=True)
@click.option("--delete-volumes/--preserve-volumes", default=True, help="delete data volumes") @click.option("--delete-volumes/--preserve-volumes", default=True, help="delete data volumes")
@click.option("--dry-run", help="Don't do anything, just report what would be done.", is_flag=True) @click.option("--dry-run", help="Don't do anything, just report what would be done.", is_flag=True)
@click.option("--include-tags", help="Only include requests with matching tags (comma-separated).", default="")
@click.option("--exclude-tags", help="Exclude requests with matching tags (comma-separated).", default="")
@click.pass_context @click.pass_context
def command(ctx, laconic_config, deployment_parent_dir, def command(ctx, laconic_config, deployment_parent_dir,
request_id, discover, state_file, only_update_state, request_id, discover, state_file, only_update_state,
delete_names, delete_volumes, dry_run): delete_names, delete_volumes, dry_run, include_tags, exclude_tags):
if request_id and discover: if request_id and discover:
print("Cannot specify both --request-id and --discover", file=sys.stderr) print("Cannot specify both --request-id and --discover", file=sys.stderr)
sys.exit(2) sys.exit(2)
@ -123,6 +125,10 @@ def command(ctx, laconic_config, deployment_parent_dir,
print("--only-update-state requires --state-file", file=sys.stderr) print("--only-update-state requires --state-file", file=sys.stderr)
sys.exit(2) sys.exit(2)
# Split CSV and clean up values.
include_tags = [tag.strip() for tag in include_tags.split(",") if tag]
exclude_tags = [tag.strip() for tag in exclude_tags.split(",") if tag]
laconic = LaconicRegistryClient(laconic_config) laconic = LaconicRegistryClient(laconic_config)
# Find deployment removal requests. # Find deployment removal requests.
@ -155,10 +161,22 @@ def command(ctx, laconic_config, deployment_parent_dir,
# TODO: should we handle CRNs? # TODO: should we handle CRNs?
removals_by_deployment[r.attributes.deployment] = r removals_by_deployment[r.attributes.deployment] = r
requests_to_execute = [] one_per_deployment = {}
for r in requests: for r in requests:
if not r.attributes.deployment: if not r.attributes.deployment:
print(f"Skipping removal request {r.id} since it was a cancellation.") print(f"Skipping removal request {r.id} since it was a cancellation.")
elif r.attributes.deployment in one_per_deployment:
print(f"Skipping removal request {r.id} since it was superseded.")
else:
one_per_deployment[r.attributes.deployment] = r
requests_to_execute = []
for r in one_per_deployment.values():
if skip_by_tag(r, include_tags, exclude_tags):
print("Skipping removal request %s, filtered by tag (include %s, exclude %s, present %s)" % (r.id,
include_tags,
exclude_tags,
r.attributes.tags))
elif r.id in removals_by_request: elif r.id in removals_by_request:
print(f"Found satisfied request for {r.id} at {removals_by_request[r.id].id}") print(f"Found satisfied request for {r.id} at {removals_by_request[r.id].id}")
elif r.attributes.deployment in removals_by_deployment: elif r.attributes.deployment in removals_by_deployment:

View File

@ -195,7 +195,24 @@ def file_hash(filename):
return hashlib.sha1(open(filename).read().encode()).hexdigest() return hashlib.sha1(open(filename).read().encode()).hexdigest()
def build_container_image(app_record, tag, extra_build_args=[]): def determine_base_container(clone_dir, app_type="webapp"):
if not app_type or not app_type.startswith("webapp"):
raise Exception(f"Unsupported app_type {app_type}")
base_container = "cerc/webapp-base"
if app_type == "webapp/next":
base_container = "cerc/nextjs-base"
elif app_type == "webapp":
pkg_json_path = os.path.join(clone_dir, "package.json")
if os.path.exists(pkg_json_path):
pkg_json = json.load(open(pkg_json_path))
if "next" in pkg_json.get("dependencies", {}):
base_container = "cerc/nextjs-base"
return base_container
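# Illustrative mapping (a sketch of the rules above; the directory argument is hypothetical):
#   determine_base_container(d, "webapp/next")                        -> "cerc/nextjs-base"
#   determine_base_container(d, "webapp") w/ "next" in package.json   -> "cerc/nextjs-base"
#   determine_base_container(d, "webapp") otherwise                   -> "cerc/webapp-base"
#   determine_base_container(d, "static")                             -> raises Exception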
def build_container_image(app_record, tag, extra_build_args=[], log_file=None):
tmpdir = tempfile.mkdtemp() tmpdir = tempfile.mkdtemp()
try: try:
@ -210,37 +227,46 @@ def build_container_image(app_record, tag, extra_build_args=[]):
git_env = dict(os.environ.copy()) git_env = dict(os.environ.copy())
# Never prompt # Never prompt
git_env["GIT_TERMINAL_PROMPT"] = "0" git_env["GIT_TERMINAL_PROMPT"] = "0"
subprocess.check_call(["git", "clone", repo, clone_dir], env=git_env) subprocess.check_call(["git", "clone", repo, clone_dir], env=git_env, stdout=log_file, stderr=log_file)
subprocess.check_call(["git", "checkout", ref], cwd=clone_dir, env=git_env) subprocess.check_call(["git", "checkout", ref], cwd=clone_dir, env=git_env, stdout=log_file, stderr=log_file)
else: else:
result = subprocess.run(["git", "clone", "--depth", "1", repo, clone_dir]) result = subprocess.run(["git", "clone", "--depth", "1", repo, clone_dir], stdout=log_file, stderr=log_file)
result.check_returncode() result.check_returncode()
base_container = determine_base_container(clone_dir, app_record.attributes.app_type)
print("Building webapp ...") print("Building webapp ...")
build_command = [sys.argv[0], "build-webapp", "--source-repo", clone_dir, "--tag", tag] build_command = [
sys.argv[0], "build-webapp",
"--source-repo", clone_dir,
"--tag", tag,
"--base-container", base_container
]
if extra_build_args: if extra_build_args:
build_command.append("--extra-build-args") build_command.append("--extra-build-args")
build_command.append(" ".join(extra_build_args)) build_command.append(" ".join(extra_build_args))
result = subprocess.run(build_command) result = subprocess.run(build_command, stdout=log_file, stderr=log_file)
result.check_returncode() result.check_returncode()
finally: finally:
cmd("rm", "-rf", tmpdir) cmd("rm", "-rf", tmpdir)
def push_container_image(deployment_dir): def push_container_image(deployment_dir, log_file=None):
print("Pushing image ...") print("Pushing image ...")
result = subprocess.run([sys.argv[0], "deployment", "--dir", deployment_dir, "push-images"]) result = subprocess.run([sys.argv[0], "deployment", "--dir", deployment_dir, "push-images"],
stdout=log_file, stderr=log_file)
result.check_returncode() result.check_returncode()
def deploy_to_k8s(deploy_record, deployment_dir): def deploy_to_k8s(deploy_record, deployment_dir, log_file=None):
if not deploy_record: if not deploy_record:
command = "up" command = "up"
else: else:
command = "update" command = "update"
result = subprocess.run([sys.argv[0], "deployment", "--dir", deployment_dir, command]) result = subprocess.run([sys.argv[0], "deployment", "--dir", deployment_dir, command],
stdout=log_file, stderr=log_file)
result.check_returncode() result.check_returncode()
@ -325,3 +351,15 @@ def generate_hostname_for_app(app):
else: else:
m.update(app.attributes.repository.encode()) m.update(app.attributes.repository.encode())
return "%s-%s" % (last_part, m.hexdigest()[0:10]) return "%s-%s" % (last_part, m.hexdigest()[0:10])
def skip_by_tag(r, include_tags, exclude_tags):
for tag in exclude_tags:
if tag and r.attributes.tags and tag in r.attributes.tags:
return True
for tag in include_tags:
if tag and (not r.attributes.tags or tag not in r.attributes.tags):
return True
return False
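# Illustrative semantics (not part of this change): any excluded tag present skips
# the request; any included tag missing also skips it.
from types import SimpleNamespace
_r = SimpleNamespace(attributes=SimpleNamespace(tags=["prod"]))
assert skip_by_tag(_r, [], ["prod"]) is True       # excluded tag present
assert skip_by_tag(_r, ["prod"], []) is False      # required tag present
assert skip_by_tag(_r, ["staging"], []) is True    # required tag missing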

View File

@ -19,7 +19,7 @@ import sys
import ruamel.yaml import ruamel.yaml
from pathlib import Path from pathlib import Path
from dotenv import dotenv_values from dotenv import dotenv_values
from typing import Mapping from typing import Mapping, Set, List
def include_exclude_check(s, include, exclude): def include_exclude_check(s, include, exclude):
@ -81,17 +81,17 @@ def get_pod_list(parsed_stack):
return result return result
def get_plugin_code_paths(stack): def get_plugin_code_paths(stack) -> List[Path]:
parsed_stack = get_parsed_stack_config(stack) parsed_stack = get_parsed_stack_config(stack)
pods = parsed_stack["pods"] pods = parsed_stack["pods"]
result = [] result: Set[Path] = set()
for pod in pods: for pod in pods:
if type(pod) is str: if type(pod) is str:
result.append(get_stack_file_path(stack).parent) result.add(get_stack_file_path(stack).parent)
else: else:
pod_root_dir = os.path.join(get_dev_root_path(None), pod["repository"].split("/")[-1], pod["path"]) pod_root_dir = os.path.join(get_dev_root_path(None), pod["repository"].split("/")[-1], pod["path"])
result.append(Path(os.path.join(pod_root_dir, "stack"))) result.add(Path(os.path.join(pod_root_dir, "stack")))
return result return list(result)
def get_pod_file_path(parsed_stack, pod_name: str): def get_pod_file_path(parsed_stack, pod_name: str):
@ -139,6 +139,13 @@ def get_compose_file_dir():
return source_compose_dir return source_compose_dir
def get_config_file_dir():
# TODO: refactor to use common code with deploy command
data_dir = Path(__file__).absolute().parent.joinpath("data")
source_config_dir = data_dir.joinpath("config")
return source_config_dir
def get_parsed_deployment_spec(spec_file): def get_parsed_deployment_spec(spec_file):
spec_file_path = Path(spec_file) spec_file_path = Path(spec_file)
try: try:

View File

@ -6,6 +6,12 @@ fi
# Dump environment variables for debugging # Dump environment variables for debugging
echo "Environment variables:" echo "Environment variables:"
env env
delete_cluster_exit () {
$TEST_TARGET_SO deployment --dir $test_deployment_dir stop --delete-volumes
exit 1
}
# Test basic stack-orchestrator deploy # Test basic stack-orchestrator deploy
echo "Running stack-orchestrator deploy test" echo "Running stack-orchestrator deploy test"
# Bit of a hack, test the most recent package # Bit of a hack, test the most recent package
@ -106,6 +112,10 @@ if [ ! "$create_file_content" == "create-command-output-data" ]; then
echo "deploy create test: FAILED" echo "deploy create test: FAILED"
exit 1 exit 1
fi fi
# Add a config file to be picked up by the ConfigMap before starting.
echo "dbfc7a4d-44a7-416d-b5f3-29842cc47650" > $test_deployment_dir/data/test-config/test_config
echo "deploy create output file test: passed" echo "deploy create output file test: passed"
# Try to start the deployment # Try to start the deployment
$TEST_TARGET_SO deployment --dir $test_deployment_dir start $TEST_TARGET_SO deployment --dir $test_deployment_dir start
@ -124,6 +134,37 @@ else
echo "deployment config test: FAILED" echo "deployment config test: FAILED"
exit 1 exit 1
fi fi
# Check the config variable CERC_TEST_PARAM_2 was passed correctly from the compose file
if [[ "$log_output_3" == *"Test-param-2: CERC_TEST_PARAM_2_VALUE"* ]]; then
echo "deployment compose config test: passed"
else
echo "deployment compose config test: FAILED"
exit 1
fi
# Check that the ConfigMap is mounted and contains the expected content.
log_output_4=$( $TEST_TARGET_SO deployment --dir $test_deployment_dir logs )
if [[ "$log_output_4" == *"/config/test_config:"* ]] && [[ "$log_output_4" == *"dbfc7a4d-44a7-416d-b5f3-29842cc47650"* ]]; then
echo "deployment ConfigMap test: passed"
else
echo "deployment ConfigMap test: FAILED"
delete_cluster_exit
fi
# Stop then start again and check the volume was preserved
$TEST_TARGET_SO deployment --dir $test_deployment_dir stop
# Sleep a bit just in case
# sleep for longer to check if that's why the subsequent create cluster fails
sleep 20
$TEST_TARGET_SO deployment --dir $test_deployment_dir start
log_output_5=$( $TEST_TARGET_SO deployment --dir $test_deployment_dir logs )
if [[ "$log_output_5" == *"Filesystem is old"* ]]; then
echo "Retain volumes test: passed"
else
echo "Retain volumes test: FAILED"
delete_cluster_exit
fi
# Stop and clean up # Stop and clean up
$TEST_TARGET_SO deployment --dir $test_deployment_dir stop --delete-volumes $TEST_TARGET_SO deployment --dir $test_deployment_dir stop --delete-volumes
echo "Test passed" echo "Test passed"

View File

@ -9,7 +9,7 @@ fi
# Helper functions: TODO move into a separate file # Helper functions: TODO move into a separate file
wait_for_pods_started () { wait_for_pods_started () {
for i in {1..5} for i in {1..50}
do do
local ps_output=$( $TEST_TARGET_SO deployment --dir $test_deployment_dir ps ) local ps_output=$( $TEST_TARGET_SO deployment --dir $test_deployment_dir ps )
@ -27,7 +27,7 @@ wait_for_pods_started () {
} }
wait_for_log_output () { wait_for_log_output () {
for i in {1..5} for i in {1..50}
do do
local log_output=$( $TEST_TARGET_SO deployment --dir $test_deployment_dir logs ) local log_output=$( $TEST_TARGET_SO deployment --dir $test_deployment_dir logs )
@ -97,6 +97,10 @@ if [ ! "$create_file_content" == "create-command-output-data" ]; then
echo "deploy create test: FAILED" echo "deploy create test: FAILED"
exit 1 exit 1
fi fi
# Add a config file to be picked up by the ConfigMap before starting.
echo "dbfc7a4d-44a7-416d-b5f3-29842cc47650" > $test_deployment_dir/data/test-config/test_config
echo "deploy create output file test: passed" echo "deploy create output file test: passed"
# Try to start the deployment # Try to start the deployment
$TEST_TARGET_SO deployment --dir $test_deployment_dir start $TEST_TARGET_SO deployment --dir $test_deployment_dir start
@ -110,6 +114,7 @@ else
echo "deployment logs test: FAILED" echo "deployment logs test: FAILED"
delete_cluster_exit delete_cluster_exit
fi fi
# Check the config variable CERC_TEST_PARAM_1 was passed correctly # Check the config variable CERC_TEST_PARAM_1 was passed correctly
if [[ "$log_output_3" == *"Test-param-1: PASSED"* ]]; then if [[ "$log_output_3" == *"Test-param-1: PASSED"* ]]; then
echo "deployment config test: passed" echo "deployment config test: passed"
@ -117,15 +122,34 @@ else
echo "deployment config test: FAILED" echo "deployment config test: FAILED"
delete_cluster_exit delete_cluster_exit
fi fi
# Check the config variable CERC_TEST_PARAM_2 was passed correctly from the compose file
if [[ "$log_output_3" == *"Test-param-2: CERC_TEST_PARAM_2_VALUE"* ]]; then
echo "deployment compose config test: passed"
else
echo "deployment compose config test: FAILED"
exit 1
fi
# Check that the ConfigMap is mounted and contains the expected content.
log_output_4=$( $TEST_TARGET_SO deployment --dir $test_deployment_dir logs )
if [[ "$log_output_4" == *"/config/test_config:"* ]] && [[ "$log_output_4" == *"dbfc7a4d-44a7-416d-b5f3-29842cc47650"* ]]; then
echo "deployment ConfigMap test: passed"
else
echo "deployment ConfigMap test: FAILED"
delete_cluster_exit
fi
# Stop then start again and check the volume was preserved # Stop then start again and check the volume was preserved
$TEST_TARGET_SO deployment --dir $test_deployment_dir stop $TEST_TARGET_SO deployment --dir $test_deployment_dir stop
# Sleep a bit just in case # Sleep a bit just in case
sleep 2 # sleep for longer to check if that's why the subsequent create cluster fails
sleep 20
$TEST_TARGET_SO deployment --dir $test_deployment_dir start $TEST_TARGET_SO deployment --dir $test_deployment_dir start
wait_for_pods_started wait_for_pods_started
wait_for_log_output wait_for_log_output
log_output_4=$( $TEST_TARGET_SO deployment --dir $test_deployment_dir logs ) log_output_5=$( $TEST_TARGET_SO deployment --dir $test_deployment_dir logs )
if [[ "$log_output_4" == *"Filesystem is old"* ]]; then if [[ "$log_output_5" == *"Filesystem is old"* ]]; then
echo "Retain volumes test: passed" echo "Retain volumes test: passed"
else else
echo "Retain volumes test: FAILED" echo "Retain volumes test: FAILED"

View File

@ -30,14 +30,14 @@ CHECK="SPECIAL_01234567890_TEST_STRING"
set +e set +e
CONTAINER_ID=$(docker run -p 3000:3000 -d -e CERC_SCRIPT_DEBUG=$CERC_SCRIPT_DEBUG cerc/test-progressive-web-app:local) CONTAINER_ID=$(docker run -p 3000:80 -d -e CERC_SCRIPT_DEBUG=$CERC_SCRIPT_DEBUG cerc/test-progressive-web-app:local)
sleep 3 sleep 3
wget -t 7 -O test.before -m http://localhost:3000 wget -t 7 -O test.before -m http://localhost:3000
docker logs $CONTAINER_ID docker logs $CONTAINER_ID
docker remove -f $CONTAINER_ID docker remove -f $CONTAINER_ID
CONTAINER_ID=$(docker run -p 3000:3000 -e CERC_WEBAPP_DEBUG=$CHECK -e CERC_SCRIPT_DEBUG=$CERC_SCRIPT_DEBUG -d cerc/test-progressive-web-app:local) CONTAINER_ID=$(docker run -p 3000:80 -e CERC_WEBAPP_DEBUG=$CHECK -e CERC_SCRIPT_DEBUG=$CERC_SCRIPT_DEBUG -d cerc/test-progressive-web-app:local)
sleep 3 sleep 3
wget -t 7 -O test.after -m http://localhost:3000 wget -t 7 -O test.after -m http://localhost:3000