Merge branch 'main' into telackey/tags
All checks were successful
Lint Checks / Run linter (pull_request) Successful in 32s
Webapp Test / Run webapp test suite (pull_request) Successful in 3m9s
Smoke Test / Run basic test suite (pull_request) Successful in 3m11s
Deploy Test / Run deploy test suite (pull_request) Successful in 4m31s

Thomas E Lackey 2024-02-07 01:22:11 +00:00
commit 40e82f3bd7
18 changed files with 206 additions and 157 deletions

.gitea/workflows/lint.yml (new file, 21 lines)
View File

@@ -0,0 +1,21 @@
name: Lint Checks
on:
pull_request:
branches: '*'
push:
branches: '*'
jobs:
test:
name: "Run linter"
runs-on: ubuntu-latest
steps:
- name: "Clone project repository"
uses: actions/checkout@v3
- name: "Install Python"
uses: actions/setup-python@v4
with:
python-version: '3.8'
- name : "Run flake8"
uses: py-actions/flake8@v2
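
This workflow gates every push and pull request on flake8. The same check can be reproduced locally before pushing; a minimal sketch, assuming flake8 is installed in the active Python environment (`pip install flake8`):

```python
# Local equivalent of the "Run flake8" CI step above.
import subprocess
import sys

# py-actions/flake8@v2 lints the repository checkout; do the same from the repo root.
result = subprocess.run([sys.executable, "-m", "flake8", "."])
sys.exit(result.returncode)
```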

View File

@@ -29,10 +29,10 @@ chmod +x ~/.docker/cli-plugins/docker-compose
 Next decide on a directory where you would like to put the stack-orchestrator program. Typically this would be
 a "user" binary directory such as `~/bin` or perhaps `/usr/local/laconic` or possibly just the current working directory.
-Now, having selected that directory, download the latest release from [this page](https://github.com/cerc-io/stack-orchestrator/tags) into it (we're using `~/bin` below for concreteness but edit to suit if you selected a different directory). Also be sure that the destination directory exists and is writable:
+Now, having selected that directory, download the latest release from [this page](https://git.vdb.to/cerc-io/stack-orchestrator/tags) into it (we're using `~/bin` below for concreteness but edit to suit if you selected a different directory). Also be sure that the destination directory exists and is writable:
 ```bash
-curl -L -o ~/bin/laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
+curl -L -o ~/bin/laconic-so https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
 ```
 Give it execute permissions:
@@ -52,7 +52,7 @@ Version: 1.1.0-7a607c2-202304260513
 Save the distribution url to `~/.laconic-so/config.yml`:
 ```bash
 mkdir ~/.laconic-so
-echo "distribution-url: https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so" > ~/.laconic-so/config.yml
+echo "distribution-url: https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so" > ~/.laconic-so/config.yml
 ```
 ### Update
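
The `distribution-url` key tells the self-update mechanism where to fetch new builds. For illustration only, a sketch of how an updater could consume that setting — the config key and path come from the snippet above, but the updater code here is a hypothetical reconstruction, not the tool's actual implementation:

```python
# Hypothetical sketch: read distribution-url from ~/.laconic-so/config.yml
# and fetch the binary it points at. Not the tool's real update code.
import urllib.request
from pathlib import Path

import ruamel.yaml  # already a stack-orchestrator dependency

config_path = Path.home() / ".laconic-so" / "config.yml"
config = ruamel.yaml.YAML(typ="safe").load(config_path.read_text())
urllib.request.urlretrieve(config["distribution-url"], Path.home() / "bin" / "laconic-so")
```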

View File

@@ -26,7 +26,7 @@ In addition to the pre-requisites listed in the [README](/README.md), the following
 1. Clone this repository:
 ```
-$ git clone https://github.com/cerc-io/stack-orchestrator.git
+$ git clone https://git.vdb.to/cerc-io/stack-orchestrator.git
 ```
 2. Enter the project directory:

View File

@@ -1,10 +1,10 @@
 # Adding a new stack
-See [this PR](https://github.com/cerc-io/stack-orchestrator/pull/434) for an example of how to currently add a minimal stack to stack orchestrator. The [reth stack](https://github.com/cerc-io/stack-orchestrator/pull/435) is another good example.
+See [this PR](https://git.vdb.to/cerc-io/stack-orchestrator/pull/434) for an example of how to currently add a minimal stack to stack orchestrator. The [reth stack](https://git.vdb.to/cerc-io/stack-orchestrator/pull/435) is another good example.
 For external developers, we recommend forking this repo and adding your stack directly to your fork. This initially requires running in "developer mode" as described [here](/docs/CONTRIBUTING.md). Check out the [Namada stack](https://github.com/vknowable/stack-orchestrator/blob/main/app/data/stacks/public-namada/digitalocean_quickstart.md) from Knowable to see how that is done.
-Core to the feature completeness of stack orchestrator is to [decouple the tool functionality from payload](https://github.com/cerc-io/stack-orchestrator/issues/315) which will no longer require forking to add a stack.
+Core to the feature completeness of stack orchestrator is to [decouple the tool functionality from payload](https://git.vdb.to/cerc-io/stack-orchestrator/issues/315) which will no longer require forking to add a stack.
 ## Example

View File

@@ -1,6 +1,6 @@
 # Specification
-Note: this page is out of date (but still useful) - it will no longer be useful once stacks are [decoupled from the tool functionality](https://github.com/cerc-io/stack-orchestrator/issues/315).
+Note: this page is out of date (but still useful) - it will no longer be useful once stacks are [decoupled from the tool functionality](https://git.vdb.to/cerc-io/stack-orchestrator/issues/315).
 ## Implementation

View File

@@ -41,4 +41,4 @@ runcmd:
 - apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
 - systemctl enable docker
 - systemctl start docker
-- git clone https://github.com/cerc-io/stack-orchestrator.git /home/ubuntu/stack-orchestrator
+- git clone https://git.vdb.to/cerc-io/stack-orchestrator.git /home/ubuntu/stack-orchestrator

View File

@@ -31,5 +31,5 @@ runcmd:
 - apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
 - systemctl enable docker
 - systemctl start docker
-- curl -L -o /usr/local/bin/laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
+- curl -L -o /usr/local/bin/laconic-so https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
 - chmod +x /usr/local/bin/laconic-so

View File

@@ -137,7 +137,7 @@ fi
 echo "**************************************************************************************"
 echo "Installing laconic-so"
 # install latest `laconic-so`
-distribution_url=https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
+distribution_url=https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
 install_filename=${install_dir}/laconic-so
 mkdir -p ${install_dir}
 curl -L -o ${install_filename} ${distribution_url}

View File

@@ -13,7 +13,7 @@ setup(
     description='Orchestrates deployment of the Laconic stack',
     long_description=long_description,
     long_description_content_type="text/markdown",
-    url='https://github.com/cerc-io/stack-orchestrator',
+    url='https://git.vdb.to/cerc-io/stack-orchestrator',
     py_modules=['stack_orchestrator'],
     packages=find_packages(),
     install_requires=[requirements],

View File

@@ -1,6 +1,6 @@
 # fixturenet-eth
-Instructions for deploying a local a geth + lighthouse blockchain "fixturenet" for development and testing purposes using laconic-stack-orchestrator (the installation of which is covered [here](https://github.com/cerc-io/stack-orchestrator)):
+Instructions for deploying a local a geth + lighthouse blockchain "fixturenet" for development and testing purposes using laconic-stack-orchestrator (the installation of which is covered [here](https://git.vdb.to/cerc-io/stack-orchestrator)):
 ## Clone required repositories

View File

@@ -7,11 +7,11 @@ Instructions for deploying a local Laconic blockchain "fixturenet" for development
 **Note:** For building some NPMs, access to the @lirewine repositories is required. If you don't have access, see [this tutorial](/docs/laconicd-fixturenet.md) to run this stack
 ## 1. Install Laconic Stack Orchestrator
-Installation is covered in detail [here](https://github.com/cerc-io/stack-orchestrator#user-mode) but if you're on Linux and already have docker installed it should be as simple as:
+Installation is covered in detail [here](https://git.vdb.to/cerc-io/stack-orchestrator#user-mode) but if you're on Linux and already have docker installed it should be as simple as:
 ```
 $ mkdir my-working-dir
 $ cd my-working-dir
-$ curl -L -o ./laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
+$ curl -L -o ./laconic-so https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
 $ chmod +x ./laconic-so
 $ export PATH=$PATH:$(pwd) # Or move laconic-so to ~/bin or your favorite on-path directory
 ```

View File

@@ -3,11 +3,11 @@
 Instructions for deploying a local Laconic blockchain "fixturenet" for development and testing purposes using laconic-stack-orchestrator.
 ## 1. Install Laconic Stack Orchestrator
-Installation is covered in detail [here](https://github.com/cerc-io/stack-orchestrator#user-mode) but if you're on Linux and already have docker installed it should be as simple as:
+Installation is covered in detail [here](https://git.vdb.to/cerc-io/stack-orchestrator#user-mode) but if you're on Linux and already have docker installed it should be as simple as:
 ```
 $ mkdir my-working-dir
 $ cd my-working-dir
-$ curl -L -o ./laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
+$ curl -L -o ./laconic-so https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
 $ chmod +x ./laconic-so
 $ export PATH=$PATH:$(pwd) # Or move laconic-so to ~/bin or your favorite on-path directory
 ```

View File

@@ -4,7 +4,7 @@ The MobyMask watcher is a Laconic Network component that provides efficient access
 ## Deploy the MobyMask Watcher
-The instructions below show how to deploy a MobyMask watcher using laconic-stack-orchestrator (the installation of which is covered [here](https://github.com/cerc-io/stack-orchestrator#install)).
+The instructions below show how to deploy a MobyMask watcher using laconic-stack-orchestrator (the installation of which is covered [here](https://git.vdb.to/cerc-io/stack-orchestrator#install)).
 This deployment expects that ipld-eth-server's endpoints are available on the local machine at http://ipld-eth-server.example.com:8083/graphql and http://ipld-eth-server.example.com:8082. More advanced configurations are supported by modifying the watcher's [config file](../../config/watcher-mobymask/mobymask-watcher.toml).

View File

@@ -17,6 +17,7 @@ from pathlib import Path
 from python_on_whales import DockerClient, DockerException
 from stack_orchestrator.deploy.deployer import Deployer, DeployerException, DeployerConfigGenerator
 from stack_orchestrator.deploy.deployment_context import DeploymentContext
+from stack_orchestrator.opts import opts

 class DockerDeployer(Deployer):
@@ -29,60 +30,69 @@ class DockerDeployer(Deployer):
         self.type = type

     def up(self, detach, services):
-        try:
-            return self.docker.compose.up(detach=detach, services=services)
-        except DockerException as e:
-            raise DeployerException(e)
+        if not opts.o.dry_run:
+            try:
+                return self.docker.compose.up(detach=detach, services=services)
+            except DockerException as e:
+                raise DeployerException(e)

     def down(self, timeout, volumes):
-        try:
-            return self.docker.compose.down(timeout=timeout, volumes=volumes)
-        except DockerException as e:
-            raise DeployerException(e)
+        if not opts.o.dry_run:
+            try:
+                return self.docker.compose.down(timeout=timeout, volumes=volumes)
+            except DockerException as e:
+                raise DeployerException(e)

     def update(self):
-        try:
-            return self.docker.compose.restart()
-        except DockerException as e:
-            raise DeployerException(e)
+        if not opts.o.dry_run:
+            try:
+                return self.docker.compose.restart()
+            except DockerException as e:
+                raise DeployerException(e)

     def status(self):
-        try:
-            for p in self.docker.compose.ps():
-                print(f"{p.name}\t{p.state.status}")
-        except DockerException as e:
-            raise DeployerException(e)
+        if not opts.o.dry_run:
+            try:
+                for p in self.docker.compose.ps():
+                    print(f"{p.name}\t{p.state.status}")
+            except DockerException as e:
+                raise DeployerException(e)

     def ps(self):
-        try:
-            return self.docker.compose.ps()
-        except DockerException as e:
-            raise DeployerException(e)
+        if not opts.o.dry_run:
+            try:
+                return self.docker.compose.ps()
+            except DockerException as e:
+                raise DeployerException(e)

     def port(self, service, private_port):
-        try:
-            return self.docker.compose.port(service=service, private_port=private_port)
-        except DockerException as e:
-            raise DeployerException(e)
+        if not opts.o.dry_run:
+            try:
+                return self.docker.compose.port(service=service, private_port=private_port)
+            except DockerException as e:
+                raise DeployerException(e)

     def execute(self, service, command, tty, envs):
-        try:
-            return self.docker.compose.execute(service=service, command=command, tty=tty, envs=envs)
-        except DockerException as e:
-            raise DeployerException(e)
+        if not opts.o.dry_run:
+            try:
+                return self.docker.compose.execute(service=service, command=command, tty=tty, envs=envs)
+            except DockerException as e:
+                raise DeployerException(e)

     def logs(self, services, tail, follow, stream):
-        try:
-            return self.docker.compose.logs(services=services, tail=tail, follow=follow, stream=stream)
-        except DockerException as e:
-            raise DeployerException(e)
+        if not opts.o.dry_run:
+            try:
+                return self.docker.compose.logs(services=services, tail=tail, follow=follow, stream=stream)
+            except DockerException as e:
+                raise DeployerException(e)

     def run(self, image: str, command=None, user=None, volumes=None, entrypoint=None, env={}, ports=[], detach=False):
-        try:
-            return self.docker.run(image=image, command=command, user=user, volumes=volumes,
-                                   entrypoint=entrypoint, envs=env, detach=detach, publish=ports, publish_all=len(ports) == 0)
-        except DockerException as e:
-            raise DeployerException(e)
+        if not opts.o.dry_run:
+            try:
+                return self.docker.run(image=image, command=command, user=user, volumes=volumes,
+                                       entrypoint=entrypoint, envs=env, detach=detach, publish=ports, publish_all=len(ports) == 0)
+            except DockerException as e:
+                raise DeployerException(e)

 class DockerDeployerConfigGenerator(DeployerConfigGenerator):
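
Every method above now gates its side-effecting compose call behind a shared dry-run flag imported from `stack_orchestrator.opts`. A minimal sketch of the pattern in isolation, with stand-in names for the options object:

```python
# Sketch of the dry-run guard pattern used by each DockerDeployer method.
# `Opts` here is a stand-in for the real options object in stack_orchestrator.opts.
from dataclasses import dataclass


@dataclass
class Opts:
    dry_run: bool = False


opts = Opts(dry_run=True)


def guarded(action):
    """Run a side-effecting action only when not in dry-run mode."""
    if not opts.dry_run:
        return action()
    # Under dry-run the guard falls through and returns None,
    # so callers must tolerate a None result.


print(guarded(lambda: "compose up"))  # prints None when dry_run=True
```

One visible consequence: in dry-run mode these methods return None, which is why `ps_operation` in the next file keeps its own dry-run check rather than iterating over the deployer's result.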

View File

@@ -85,54 +85,39 @@ def create_deploy_context(
 def up_operation(ctx, services_list, stay_attached=False):
     global_context = ctx.parent.parent.obj
     deploy_context = ctx.obj
-    if not global_context.dry_run:
-        cluster_context = deploy_context.cluster_context
-        container_exec_env = _make_runtime_env(global_context)
-        for attr, value in container_exec_env.items():
-            os.environ[attr] = value
-        if global_context.verbose:
-            print(f"Running compose up with container_exec_env: {container_exec_env}, extra_args: {services_list}")
-        for pre_start_command in cluster_context.pre_start_commands:
-            _run_command(global_context, cluster_context.cluster, pre_start_command)
-        deploy_context.deployer.up(detach=not stay_attached, services=services_list)
-        for post_start_command in cluster_context.post_start_commands:
-            _run_command(global_context, cluster_context.cluster, post_start_command)
-        _orchestrate_cluster_config(global_context, cluster_context.config, deploy_context.deployer, container_exec_env)
+    cluster_context = deploy_context.cluster_context
+    container_exec_env = _make_runtime_env(global_context)
+    for attr, value in container_exec_env.items():
+        os.environ[attr] = value
+    if global_context.verbose:
+        print(f"Running compose up with container_exec_env: {container_exec_env}, extra_args: {services_list}")
+    for pre_start_command in cluster_context.pre_start_commands:
+        _run_command(global_context, cluster_context.cluster, pre_start_command)
+    deploy_context.deployer.up(detach=not stay_attached, services=services_list)
+    for post_start_command in cluster_context.post_start_commands:
+        _run_command(global_context, cluster_context.cluster, post_start_command)
+    _orchestrate_cluster_config(global_context, cluster_context.config, deploy_context.deployer, container_exec_env)

 def down_operation(ctx, delete_volumes, extra_args_list):
-    global_context = ctx.parent.parent.obj
-    if not global_context.dry_run:
-        if global_context.verbose:
-            print("Running compose down")
-        timeout_arg = None
-        if extra_args_list:
-            timeout_arg = extra_args_list[0]
-        # Specify shutdown timeout (default 10s) to give services enough time to shutdown gracefully
-        ctx.obj.deployer.down(timeout=timeout_arg, volumes=delete_volumes)
+    timeout_arg = None
+    if extra_args_list:
+        timeout_arg = extra_args_list[0]
+    # Specify shutdown timeout (default 10s) to give services enough time to shutdown gracefully
+    ctx.obj.deployer.down(timeout=timeout_arg, volumes=delete_volumes)

 def status_operation(ctx):
-    global_context = ctx.parent.parent.obj
-    if not global_context.dry_run:
-        if global_context.verbose:
-            print("Running compose status")
-        ctx.obj.deployer.status()
+    ctx.obj.deployer.status()

 def update_operation(ctx):
-    global_context = ctx.parent.parent.obj
-    if not global_context.dry_run:
-        if global_context.verbose:
-            print("Running compose update")
-        ctx.obj.deployer.update()
+    ctx.obj.deployer.update()

 def ps_operation(ctx):
     global_context = ctx.parent.parent.obj
     if not global_context.dry_run:
-        if global_context.verbose:
-            print("Running compose ps")
         container_list = ctx.obj.deployer.ps()
         if len(container_list) > 0:
             print("Running containers:")
@@ -187,15 +172,11 @@ def exec_operation(ctx, extra_args):

 def logs_operation(ctx, tail: int, follow: bool, extra_args: str):
-    global_context = ctx.parent.parent.obj
     extra_args_list = list(extra_args) or None
-    if not global_context.dry_run:
-        if global_context.verbose:
-            print("Running compose logs")
-        services_list = extra_args_list if extra_args_list is not None else []
-        logs_stream = ctx.obj.deployer.logs(services=services_list, tail=tail, follow=follow, stream=True)
-        for stream_type, stream_content in logs_stream:
-            print(stream_content.decode("utf-8"), end="")
+    services_list = extra_args_list if extra_args_list is not None else []
+    logs_stream = ctx.obj.deployer.logs(services=services_list, tail=tail, follow=follow, stream=True)
+    for stream_type, stream_content in logs_stream:
+        print(stream_content.decode("utf-8"), end="")

 @command.command()
@@ -463,7 +444,7 @@ def _orchestrate_cluster_config(ctx, cluster_config, deployer, container_exec_env
                                 tty=False,
                                 envs=container_exec_env)
             waiting_for_data = False
-        if ctx.debug:
+        if ctx.debug and not waiting_for_data:
             print(f"destination output: {destination_output}")
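
With the dry-run and verbose plumbing removed, `down_operation` reduces to deriving the compose shutdown timeout from its extra CLI arguments. That convention in miniature (values here are hypothetical):

```python
# Miniature of down_operation's timeout convention: the first extra
# argument, if any, is the shutdown timeout; None means compose's 10s default.
def timeout_from_extra_args(extra_args_list):
    return extra_args_list[0] if extra_args_list else None


assert timeout_from_extra_args(["30"]) == "30"
assert timeout_from_extra_args([]) is None
```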

View File

@@ -81,69 +81,84 @@ class K8sDeployer(Deployer):
         self.apps_api = client.AppsV1Api()
         self.custom_obj_api = client.CustomObjectsApi()

-    def up(self, detach, services):
-        if self.is_kind():
-            # Create the kind cluster
-            create_cluster(self.kind_cluster_name, self.deployment_dir.joinpath(constants.kind_config_filename))
-            # Ensure the referenced containers are copied into kind
-            load_images_into_kind(self.kind_cluster_name, self.cluster_info.image_set)
-        self.connect_api()
+    def _create_volume_data(self):
         # Create the host-path-mounted PVs for this deployment
         pvs = self.cluster_info.get_pvs()
         for pv in pvs:
             if opts.o.debug:
                 print(f"Sending this pv: {pv}")
-            pv_resp = self.core_api.create_persistent_volume(body=pv)
-            if opts.o.debug:
-                print("PVs created:")
-                print(f"{pv_resp}")
+            if not opts.o.dry_run:
+                pv_resp = self.core_api.create_persistent_volume(body=pv)
+                if opts.o.debug:
+                    print("PVs created:")
+                    print(f"{pv_resp}")

         # Figure out the PVCs for this deployment
         pvcs = self.cluster_info.get_pvcs()
         for pvc in pvcs:
             if opts.o.debug:
                 print(f"Sending this pvc: {pvc}")
-            pvc_resp = self.core_api.create_namespaced_persistent_volume_claim(body=pvc, namespace=self.k8s_namespace)
-            if opts.o.debug:
-                print("PVCs created:")
-                print(f"{pvc_resp}")
+            if not opts.o.dry_run:
+                pvc_resp = self.core_api.create_namespaced_persistent_volume_claim(body=pvc, namespace=self.k8s_namespace)
+                if opts.o.debug:
+                    print("PVCs created:")
+                    print(f"{pvc_resp}")

         # Figure out the ConfigMaps for this deployment
         config_maps = self.cluster_info.get_configmaps()
         for cfg_map in config_maps:
             if opts.o.debug:
                 print(f"Sending this ConfigMap: {cfg_map}")
-            cfg_rsp = self.core_api.create_namespaced_config_map(
-                body=cfg_map,
-                namespace=self.k8s_namespace
-            )
-            if opts.o.debug:
-                print("ConfigMap created:")
-                print(f"{cfg_rsp}")
+            if not opts.o.dry_run:
+                cfg_rsp = self.core_api.create_namespaced_config_map(
+                    body=cfg_map,
+                    namespace=self.k8s_namespace
+                )
+                if opts.o.debug:
+                    print("ConfigMap created:")
+                    print(f"{cfg_rsp}")

+    def _create_deployment(self):
         # Process compose files into a Deployment
         deployment = self.cluster_info.get_deployment(image_pull_policy=None if self.is_kind() else "Always")
         # Create the k8s objects
         if opts.o.debug:
             print(f"Sending this deployment: {deployment}")
-        deployment_resp = self.apps_api.create_namespaced_deployment(
-            body=deployment, namespace=self.k8s_namespace
-        )
-        if opts.o.debug:
-            print("Deployment created:")
-            print(f"{deployment_resp.metadata.namespace} {deployment_resp.metadata.name} \
-            {deployment_resp.metadata.generation} {deployment_resp.spec.template.spec.containers[0].image}")
+        if not opts.o.dry_run:
+            deployment_resp = self.apps_api.create_namespaced_deployment(
+                body=deployment, namespace=self.k8s_namespace
+            )
+            if opts.o.debug:
+                print("Deployment created:")
+                print(f"{deployment_resp.metadata.namespace} {deployment_resp.metadata.name} \
+                {deployment_resp.metadata.generation} {deployment_resp.spec.template.spec.containers[0].image}")

         service: client.V1Service = self.cluster_info.get_service()
-        service_resp = self.core_api.create_namespaced_service(
-            namespace=self.k8s_namespace,
-            body=service
-        )
         if opts.o.debug:
-            print("Service created:")
-            print(f"{service_resp}")
+            print(f"Sending this service: {service}")
+        if not opts.o.dry_run:
+            service_resp = self.core_api.create_namespaced_service(
+                namespace=self.k8s_namespace,
+                body=service
+            )
+            if opts.o.debug:
+                print("Service created:")
+                print(f"{service_resp}")

+    def up(self, detach, services):
+        if not opts.o.dry_run:
+            if self.is_kind():
+                # Create the kind cluster
+                create_cluster(self.kind_cluster_name, self.deployment_dir.joinpath(constants.kind_config_filename))
+                # Ensure the referenced containers are copied into kind
+                load_images_into_kind(self.kind_cluster_name, self.cluster_info.image_set)
+            self.connect_api()
+        else:
+            print("Dry run mode enabled, skipping k8s API connect")
+        self._create_volume_data()
+        self._create_deployment()

         if not self.is_kind():
             ingress: client.V1Ingress = self.cluster_info.get_ingress()
@@ -151,13 +166,14 @@ class K8sDeployer(Deployer):
             if ingress:
                 if opts.o.debug:
                     print(f"Sending this ingress: {ingress}")
-                ingress_resp = self.networking_api.create_namespaced_ingress(
-                    namespace=self.k8s_namespace,
-                    body=ingress
-                )
-                if opts.o.debug:
-                    print("Ingress created:")
-                    print(f"{ingress_resp}")
+                if not opts.o.dry_run:
+                    ingress_resp = self.networking_api.create_namespaced_ingress(
+                        namespace=self.k8s_namespace,
+                        body=ingress
+                    )
+                    if opts.o.debug:
+                        print("Ingress created:")
+                        print(f"{ingress_resp}")
             else:
                 if opts.o.debug:
                     print("No ingress configured")
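
The Kubernetes deployer follows the same convention as the Docker one: manifests are still built (and printed under debug), but API calls are skipped in dry-run mode. The shape of each block, condensed into a standalone sketch with stand-in names:

```python
# Condensed sketch of the build/print/maybe-send shape used for PVs, PVCs,
# ConfigMaps, the Deployment, the Service, and the Ingress above.
def apply_resources(resources, api_create, dry_run=False, debug=False):
    for resource in resources:
        if debug:
            print(f"Sending this resource: {resource}")
        if not dry_run:
            resp = api_create(body=resource)
            if debug:
                print(f"Created: {resp}")


# Usage with a fake API call standing in for the kubernetes client:
apply_resources(["pv-a", "pv-b"], lambda body: f"created {body}", dry_run=True, debug=True)
```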

View File

@@ -97,11 +97,17 @@ def volume_mounts_for_service(parsed_pod_files, service):
                 if "volumes" in service_obj:
                     volumes = service_obj["volumes"]
                     for mount_string in volumes:
-                        # Looks like: test-data:/data
-                        parts = mount_string.split(":")
-                        volume_name = parts[0]
-                        mount_path = parts[1]
-                        mount_options = parts[2] if len(parts) == 3 else None
+                        # Looks like: test-data:/data or test-data:/data:ro or test-data:/data:rw
+                        if opts.o.debug:
+                            print(f"mount_string: {mount_string}")
+                        mount_split = mount_string.split(":")
+                        volume_name = mount_split[0]
+                        mount_path = mount_split[1]
+                        mount_options = mount_split[2] if len(mount_split) == 3 else None
+                        if opts.o.debug:
+                            print(f"volumne_name: {volume_name}")
+                            print(f"mount path: {mount_path}")
+                            print(f"mount options: {mount_options}")
                         volume_device = client.V1VolumeMount(
                             mount_path=mount_path, name=volume_name, read_only="ro" == mount_options)
                         result.append(volume_device)
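
The new parsing accepts an optional third `ro`/`rw` field in compose mount strings. The same split logic as a standalone function, with usage:

```python
# Standalone version of the mount-string parsing shown above.
def parse_mount_string(mount_string: str):
    # Looks like: test-data:/data or test-data:/data:ro or test-data:/data:rw
    mount_split = mount_string.split(":")
    volume_name = mount_split[0]
    mount_path = mount_split[1]
    mount_options = mount_split[2] if len(mount_split) == 3 else None
    return volume_name, mount_path, mount_options


assert parse_mount_string("test-data:/data") == ("test-data", "/data", None)
assert parse_mount_string("test-data:/data:ro") == ("test-data", "/data", "ro")
```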
@@ -163,12 +169,20 @@ def _generate_kind_mounts(parsed_pod_files, deployment_dir, deployment_context):
                 if "volumes" in service_obj:
                     volumes = service_obj["volumes"]
                     for mount_string in volumes:
-                        # Looks like: test-data:/data
-                        volume_name = mount_string.split(":")[0]
+                        # Looks like: test-data:/data or test-data:/data:ro or test-data:/data:rw
+                        if opts.o.debug:
+                            print(f"mount_string: {mount_string}")
+                        mount_split = mount_string.split(":")
+                        volume_name = mount_split[0]
+                        mount_path = mount_split[1]
+                        if opts.o.debug:
+                            print(f"volumne_name: {volume_name}")
+                            print(f"map: {volume_host_path_map}")
+                            print(f"mount path: {mount_path}")
                         if volume_name not in deployment_context.spec.get_configmaps():
                             volume_definitions.append(
                                 f" - hostPath: {_make_absolute_host_path(volume_host_path_map[volume_name], deployment_dir)}\n"
-                                f" containerPath: {get_node_pv_mount_path(volume_name)}"
+                                f" containerPath: {get_node_pv_mount_path(volume_name)}\n"
                             )
     return (
         "" if len(volume_definitions) == 0 else (
@@ -191,7 +205,7 @@ def _generate_kind_port_mappings(parsed_pod_files):
         for port_string in ports:
             # TODO handle the complex cases
             # Looks like: 80 or something more complicated
-            port_definitions.append(f" - containerPort: {port_string}\n hostPort: {port_string}")
+            port_definitions.append(f" - containerPort: {port_string}\n hostPort: {port_string}\n")
     return (
         "" if len(port_definitions) == 0 else (
             " extraPortMappings:\n"

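The trailing `\n` added to each port definition matters because the entries are concatenated verbatim into kind's cluster config YAML; without it, consecutive mappings would run together on one line. What the fixed generator emits for two ports (indentation here is illustrative):

```python
# Illustrative output of the fixed kind port-mapping generator for two ports.
port_definitions = [f"  - containerPort: {p}\n    hostPort: {p}\n" for p in (80, 8080)]
print(" extraPortMappings:\n" + "".join(port_definitions), end="")
# extraPortMappings:
#   - containerPort: 80
#     hostPort: 80
#   - containerPort: 8080
#     hostPort: 8080
```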
View File

@@ -19,7 +19,7 @@ import sys
 import ruamel.yaml
 from pathlib import Path
 from dotenv import dotenv_values
-from typing import Mapping
+from typing import Mapping, Set, List

 def include_exclude_check(s, include, exclude):
@@ -81,17 +81,17 @@ def get_pod_list(parsed_stack):
     return result

-def get_plugin_code_paths(stack):
+def get_plugin_code_paths(stack) -> List[Path]:
     parsed_stack = get_parsed_stack_config(stack)
     pods = parsed_stack["pods"]
-    result = []
+    result: Set[Path] = set()
     for pod in pods:
         if type(pod) is str:
-            result.append(get_stack_file_path(stack).parent)
+            result.add(get_stack_file_path(stack).parent)
         else:
             pod_root_dir = os.path.join(get_dev_root_path(None), pod["repository"].split("/")[-1], pod["path"])
-            result.append(Path(os.path.join(pod_root_dir, "stack")))
+            result.add(Path(os.path.join(pod_root_dir, "stack")))
-    return result
+    return list(result)

 def get_pod_file_path(parsed_stack, pod_name: str):
@@ -139,6 +139,13 @@ def get_compose_file_dir():
     return source_compose_dir

+def get_config_file_dir():
+    # TODO: refactor to use common code with deploy command
+    data_dir = Path(__file__).absolute().parent.joinpath("data")
+    source_config_dir = data_dir.joinpath("config")
+    return source_config_dir

 def get_parsed_deployment_spec(spec_file):
     spec_file_path = Path(spec_file)
     try:
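
The `get_plugin_code_paths` change collects paths into a `Set` so that several pods sharing one stack file contribute a single plugin directory, while callers still receive a `List[Path]`. The behavior in miniature:

```python
# Miniature of the dedup change in get_plugin_code_paths: accumulate
# into a set, hand back a list.
from pathlib import Path
from typing import List, Set


def unique_paths(raw_paths) -> List[Path]:
    result: Set[Path] = set()
    for p in raw_paths:
        result.add(Path(p))
    return list(result)


# Two pods pointing at the same stack directory yield one entry.
assert len(unique_paths(["stacks/foo", "stacks/foo", "stacks/bar"])) == 2
```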