Merge branch 'main' into dboreham/test-database-stack
All checks were successful

commit b5656b8c8f
.gitea/workflows/lint.yml (new file, 21 lines)
@@ -0,0 +1,21 @@
+name: Lint Checks
+
+on:
+  pull_request:
+    branches: '*'
+  push:
+    branches: '*'
+
+jobs:
+  test:
+    name: "Run linter"
+    runs-on: ubuntu-latest
+    steps:
+      - name: "Clone project repository"
+        uses: actions/checkout@v3
+      - name: "Install Python"
+        uses: actions/setup-python@v4
+        with:
+          python-version: '3.8'
+      - name: "Run flake8"
+        uses: py-actions/flake8@v2
@@ -29,10 +29,10 @@ chmod +x ~/.docker/cli-plugins/docker-compose
 Next decide on a directory where you would like to put the stack-orchestrator program. Typically this would be
 a "user" binary directory such as `~/bin` or perhaps `/usr/local/laconic` or possibly just the current working directory.
 
-Now, having selected that directory, download the latest release from [this page](https://github.com/cerc-io/stack-orchestrator/tags) into it (we're using `~/bin` below for concreteness but edit to suit if you selected a different directory). Also be sure that the destination directory exists and is writable:
+Now, having selected that directory, download the latest release from [this page](https://git.vdb.to/cerc-io/stack-orchestrator/tags) into it (we're using `~/bin` below for concreteness but edit to suit if you selected a different directory). Also be sure that the destination directory exists and is writable:
 
 ```bash
-curl -L -o ~/bin/laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
+curl -L -o ~/bin/laconic-so https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
 ```
 
 Give it execute permissions:
@@ -52,7 +52,7 @@ Version: 1.1.0-7a607c2-202304260513
 Save the distribution url to `~/.laconic-so/config.yml`:
 ```bash
 mkdir ~/.laconic-so
-echo "distribution-url: https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so" >  ~/.laconic-so/config.yml
+echo "distribution-url: https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so" >  ~/.laconic-so/config.yml
 ```
 
 ### Update
@@ -26,7 +26,7 @@ In addition to the pre-requisites listed in the [README](/README.md), the follow
 
 1. Clone this repository:
    ```
-   $ git clone https://github.com/cerc-io/stack-orchestrator.git
+   $ git clone https://git.vdb.to/cerc-io/stack-orchestrator.git
    ```
 
 2. Enter the project directory:
@@ -1,10 +1,10 @@
 # Adding a new stack
 
-See [this PR](https://github.com/cerc-io/stack-orchestrator/pull/434) for an example of how to currently add a minimal stack to stack orchestrator. The [reth stack](https://github.com/cerc-io/stack-orchestrator/pull/435) is another good example.
+See [this PR](https://git.vdb.to/cerc-io/stack-orchestrator/pull/434) for an example of how to currently add a minimal stack to stack orchestrator. The [reth stack](https://git.vdb.to/cerc-io/stack-orchestrator/pull/435) is another good example.
 
 For external developers, we recommend forking this repo and adding your stack directly to your fork. This initially requires running in "developer mode" as described [here](/docs/CONTRIBUTING.md). Check out the [Namada stack](https://github.com/vknowable/stack-orchestrator/blob/main/app/data/stacks/public-namada/digitalocean_quickstart.md) from Knowable to see how that is done.
 
-Core to the feature completeness of stack orchestrator is to [decouple the tool functionality from payload](https://github.com/cerc-io/stack-orchestrator/issues/315) which will no longer require forking to add a stack.
+Core to the feature completeness of stack orchestrator is to [decouple the tool functionality from payload](https://git.vdb.to/cerc-io/stack-orchestrator/issues/315) which will no longer require forking to add a stack.
 
 ## Example
 
@@ -1,6 +1,6 @@
 # Specification
 
-Note: this page is out of date (but still useful) - it will no longer be useful once stacks are [decoupled from the tool functionality](https://github.com/cerc-io/stack-orchestrator/issues/315).
+Note: this page is out of date (but still useful) - it will no longer be useful once stacks are [decoupled from the tool functionality](https://git.vdb.to/cerc-io/stack-orchestrator/issues/315).
 
 ## Implementation
 
@@ -10,3 +10,4 @@ pydantic==1.10.9
 tomli==2.0.1
 validators==0.22.0
 kubernetes>=28.1.0
+humanfriendly>=10.0
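The new `humanfriendly` dependency is used by the spec-parsing code later in this commit to turn human-readable sizes such as `200M` or `2Gi` into byte counts. As a rough sketch of the behavior relied upon (a simplified stand-in written for illustration, not the library's actual implementation):

```python
import re

# Simplified stand-in for humanfriendly.parse_size: decimal units (k/M/G)
# are powers of 1000; binary units (Ki/Mi/Gi) are powers of 1024.
_UNITS = {
    "": 1,
    "k": 1000, "m": 1000**2, "g": 1000**3,
    "ki": 1024, "mi": 1024**2, "gi": 1024**3,
}


def parse_size(text: str) -> int:
    match = re.fullmatch(r"\s*([\d.]+)\s*([A-Za-z]*)\s*", text)
    if not match:
        raise ValueError(f"cannot parse size: {text!r}")
    number, unit = match.groups()
    unit = unit.lower().rstrip("b")  # accept "MB", "M", "MiB", "Mi", ...
    if unit not in _UNITS:
        raise ValueError(f"unknown unit in: {text!r}")
    return int(float(number) * _UNITS[unit])


print(parse_size("200M"))  # 200000000
print(parse_size("2Gi"))   # 2147483648
```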
@@ -41,4 +41,4 @@ runcmd:
   - apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
   - systemctl enable docker
   - systemctl start docker
-  - git clone https://github.com/cerc-io/stack-orchestrator.git /home/ubuntu/stack-orchestrator
+  - git clone https://git.vdb.to/cerc-io/stack-orchestrator.git /home/ubuntu/stack-orchestrator
@@ -31,5 +31,5 @@ runcmd:
   - apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
   - systemctl enable docker
   - systemctl start docker
-  - curl -L -o /usr/local/bin/laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
+  - curl -L -o /usr/local/bin/laconic-so https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
   - chmod +x /usr/local/bin/laconic-so
@@ -137,7 +137,7 @@ fi
 echo "**************************************************************************************"
 echo "Installing laconic-so"
 # install latest `laconic-so`
-distribution_url=https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
+distribution_url=https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
 install_filename=${install_dir}/laconic-so
 mkdir -p  ${install_dir}
 curl -L -o ${install_filename} ${distribution_url}
setup.py (2 changed lines)
@@ -13,7 +13,7 @@ setup(
     description='Orchestrates deployment of the Laconic stack',
     long_description=long_description,
     long_description_content_type="text/markdown",
-    url='https://github.com/cerc-io/stack-orchestrator',
+    url='https://git.vdb.to/cerc-io/stack-orchestrator',
     py_modules=['stack_orchestrator'],
     packages=find_packages(),
     install_requires=[requirements],
@@ -5,6 +5,7 @@ services:
     environment:
       CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
       CERC_TEST_PARAM_1: ${CERC_TEST_PARAM_1:-FAILED}
+      CERC_TEST_PARAM_2: "CERC_TEST_PARAM_2_VALUE"
     volumes:
       - test-data:/data
       - test-config:/config:ro
@@ -17,6 +17,9 @@ fi
 if [ -n "$CERC_TEST_PARAM_1" ]; then
   echo "Test-param-1: ${CERC_TEST_PARAM_1}"
 fi
+if [ -n "$CERC_TEST_PARAM_2" ]; then
+  echo "Test-param-2: ${CERC_TEST_PARAM_2}"
+fi
 
 if [ -d "/config" ]; then
   echo "/config: EXISTS"
@@ -1,6 +1,6 @@
 # fixturenet-eth
 
-Instructions for deploying a local geth + lighthouse blockchain "fixturenet" for development and testing purposes using laconic-stack-orchestrator (the installation of which is covered [here](https://github.com/cerc-io/stack-orchestrator)):
+Instructions for deploying a local geth + lighthouse blockchain "fixturenet" for development and testing purposes using laconic-stack-orchestrator (the installation of which is covered [here](https://git.vdb.to/cerc-io/stack-orchestrator)):
 
 ## Clone required repositories
 
@@ -7,11 +7,11 @@ Instructions for deploying a local Laconic blockchain "fixturenet" for developme
 **Note:** For building some NPMs, access to the @lirewine repositories is required. If you don't have access, see [this tutorial](/docs/laconicd-fixturenet.md) to run this stack
 
 ## 1. Install Laconic Stack Orchestrator
-Installation is covered in detail [here](https://github.com/cerc-io/stack-orchestrator#user-mode) but if you're on Linux and already have docker installed it should be as simple as:
+Installation is covered in detail [here](https://git.vdb.to/cerc-io/stack-orchestrator#user-mode) but if you're on Linux and already have docker installed it should be as simple as:
 ```
 $ mkdir my-working-dir
 $ cd my-working-dir
-$ curl -L -o ./laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
+$ curl -L -o ./laconic-so https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
 $ chmod +x ./laconic-so
 $ export PATH=$PATH:$(pwd)  # Or move laconic-so to ~/bin or your favorite on-path directory
 ```
@@ -3,11 +3,11 @@
 Instructions for deploying a local Laconic blockchain "fixturenet" for development and testing purposes using laconic-stack-orchestrator.
 
 ## 1. Install Laconic Stack Orchestrator
-Installation is covered in detail [here](https://github.com/cerc-io/stack-orchestrator#user-mode) but if you're on Linux and already have docker installed it should be as simple as:
+Installation is covered in detail [here](https://git.vdb.to/cerc-io/stack-orchestrator#user-mode) but if you're on Linux and already have docker installed it should be as simple as:
 ```
 $ mkdir my-working-dir
 $ cd my-working-dir
-$ curl -L -o ./laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
+$ curl -L -o ./laconic-so https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
 $ chmod +x ./laconic-so
 $ export PATH=$PATH:$(pwd)  # Or move laconic-so to ~/bin or your favorite on-path directory
 ```
@@ -4,7 +4,7 @@ The MobyMask watcher is a Laconic Network component that provides efficient acce
 
 ## Deploy the MobyMask Watcher
 
-The instructions below show how to deploy a MobyMask watcher using laconic-stack-orchestrator (the installation of which is covered [here](https://github.com/cerc-io/stack-orchestrator#install)).
+The instructions below show how to deploy a MobyMask watcher using laconic-stack-orchestrator (the installation of which is covered [here](https://git.vdb.to/cerc-io/stack-orchestrator#install)).
 
 This deployment expects that ipld-eth-server's endpoints are available on the local machine at http://ipld-eth-server.example.com:8083/graphql and http://ipld-eth-server.example.com:8082. More advanced configurations are supported by modifying the watcher's [config file](../../config/watcher-mobymask/mobymask-watcher.toml).
 
@@ -2,10 +2,10 @@ version: "1.0"
 name: webapp-deployer-backend
 description: "Deployer for webapps"
 repos:
-  - git.vdb.to:telackey/webapp-deployment-status-api
+  - git.vdb.to/telackey/webapp-deployment-status-api
 containers:
   - cerc/webapp-deployer-backend
 pods:
   - name: webapp-deployer-backend
-    repository: git.vdb.to:telackey/webapp-deployment-status-api
+    repository: git.vdb.to/telackey/webapp-deployment-status-api
     path: ./
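This hunk rewrites scp-style `host:path` repo references into plain `host/path` form, which downstream tooling can turn into an HTTPS clone URL. As a hedged illustration of that normalization (the helper name and rule are mine, not taken from the repo):

```python
import re


def to_https_url(repo_ref: str) -> str:
    """Convert an scp-style ref like 'git.vdb.to:user/repo' or a bare
    'host/user/repo' path into an https clone URL; pass full URLs through."""
    if repo_ref.startswith(("http://", "https://")):
        return repo_ref
    # scp-style separator: replace the first 'host:' colon with a slash
    repo_ref = re.sub(r"^([^/:]+):", r"\1/", repo_ref)
    return f"https://{repo_ref}"


print(to_https_url("git.vdb.to:telackey/webapp-deployment-status-api"))
# https://git.vdb.to/telackey/webapp-deployment-status-api
```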
@@ -22,12 +22,41 @@ from stack_orchestrator.opts import opts
 from stack_orchestrator.util import env_var_map_from_file
 from stack_orchestrator.deploy.k8s.helpers import named_volumes_from_pod_files, volume_mounts_for_service, volumes_for_pod_files
 from stack_orchestrator.deploy.k8s.helpers import get_node_pv_mount_path
-from stack_orchestrator.deploy.k8s.helpers import envs_from_environment_variables_map
+from stack_orchestrator.deploy.k8s.helpers import envs_from_environment_variables_map, envs_from_compose_file, merge_envs
 from stack_orchestrator.deploy.deploy_util import parsed_pod_files_map_from_file_names, images_for_deployment
 from stack_orchestrator.deploy.deploy_types import DeployEnvVars
-from stack_orchestrator.deploy.spec import Spec
+from stack_orchestrator.deploy.spec import Spec, Resources, ResourceLimits
 from stack_orchestrator.deploy.images import remote_tag_for_image
 
+DEFAULT_VOLUME_RESOURCES = Resources({
+    "reservations": {"storage": "2Gi"}
+})
+
+DEFAULT_CONTAINER_RESOURCES = Resources({
+    "reservations": {"cpus": "0.1", "memory": "200M"},
+    "limits": {"cpus": "1.0", "memory": "2000M"},
+})
+
+
+def to_k8s_resource_requirements(resources: Resources) -> client.V1ResourceRequirements:
+    def to_dict(limits: ResourceLimits):
+        if not limits:
+            return None
+
+        ret = {}
+        if limits.cpus:
+            ret["cpu"] = str(limits.cpus)
+        if limits.memory:
+            ret["memory"] = f"{int(limits.memory / (1000 * 1000))}M"
+        if limits.storage:
+            ret["storage"] = f"{int(limits.storage / (1000 * 1000))}M"
+        return ret
+
+    return client.V1ResourceRequirements(
+        requests=to_dict(resources.reservations),
+        limits=to_dict(resources.limits)
+    )
+
 
 class ClusterInfo:
     parsed_pod_yaml_map: Any
@@ -135,9 +164,13 @@ class ClusterInfo:
         result = []
         spec_volumes = self.spec.get_volumes()
         named_volumes = named_volumes_from_pod_files(self.parsed_pod_yaml_map)
+        resources = self.spec.get_volume_resources()
+        if not resources:
+            resources = DEFAULT_VOLUME_RESOURCES
         if opts.o.debug:
             print(f"Spec Volumes: {spec_volumes}")
             print(f"Named Volumes: {named_volumes}")
+            print(f"Resources: {resources}")
         for volume_name in spec_volumes:
             if volume_name not in named_volumes:
                 if opts.o.debug:
@@ -146,9 +179,7 @@ class ClusterInfo:
             spec = client.V1PersistentVolumeClaimSpec(
                 access_modes=["ReadWriteOnce"],
                 storage_class_name="manual",
-                resources=client.V1ResourceRequirements(
-                    requests={"storage": "2Gi"}
-                ),
+                resources=to_k8s_resource_requirements(resources),
                 volume_name=f"{self.app_name}-{volume_name}"
             )
             pvc = client.V1PersistentVolumeClaim(
@@ -192,6 +223,9 @@ class ClusterInfo:
         result = []
         spec_volumes = self.spec.get_volumes()
         named_volumes = named_volumes_from_pod_files(self.parsed_pod_yaml_map)
+        resources = self.spec.get_volume_resources()
+        if not resources:
+            resources = DEFAULT_VOLUME_RESOURCES
         for volume_name in spec_volumes:
             if volume_name not in named_volumes:
                 if opts.o.debug:
@@ -200,7 +234,7 @@ class ClusterInfo:
             spec = client.V1PersistentVolumeSpec(
                 storage_class_name="manual",
                 access_modes=["ReadWriteOnce"],
-                capacity={"storage": "2Gi"},
+                capacity=to_k8s_resource_requirements(resources).requests,
                 host_path=client.V1HostPathVolumeSource(path=get_node_pv_mount_path(volume_name))
             )
             pv = client.V1PersistentVolume(
@@ -214,6 +248,9 @@ class ClusterInfo:
     # TODO: put things like image pull policy into an object-scope struct
     def get_deployment(self, image_pull_policy: str = None):
         containers = []
+        resources = self.spec.get_container_resources()
+        if not resources:
+            resources = DEFAULT_CONTAINER_RESOURCES
         for pod_name in self.parsed_pod_yaml_map:
             pod = self.parsed_pod_yaml_map[pod_name]
             services = pod["services"]
@@ -226,6 +263,13 @@ class ClusterInfo:
                     if opts.o.debug:
                         print(f"image: {image}")
                         print(f"service port: {port}")
+                merged_envs = merge_envs(
+                    envs_from_compose_file(
+                        service_info["environment"]), self.environment_variables.map
+                        ) if "environment" in service_info else self.environment_variables.map
+                envs = envs_from_environment_variables_map(merged_envs)
+                if opts.o.debug:
+                    print(f"Merged envs: {envs}")
                 # Re-write the image tag for remote deployment
                 image_to_use = remote_tag_for_image(
                     image, self.spec.get_image_registry()) if self.spec.get_image_registry() is not None else image
@@ -234,13 +278,10 @@ class ClusterInfo:
                     name=container_name,
                     image=image_to_use,
                     image_pull_policy=image_pull_policy,
-                    env=envs_from_environment_variables_map(self.environment_variables.map),
+                    env=envs,
                     ports=[client.V1ContainerPort(container_port=port)],
                     volume_mounts=volume_mounts,
-                    resources=client.V1ResourceRequirements(
-                        requests={"cpu": "100m", "memory": "200Mi"},
-                        limits={"cpu": "1000m", "memory": "2000Mi"},
-                    ),
+                    resources=to_k8s_resource_requirements(resources),
                 )
                 containers.append(container)
         volumes = volumes_for_pod_files(self.parsed_pod_yaml_map, self.spec, self.app_name)
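The `to_k8s_resource_requirements` helper in this hunk renders byte counts back into Kubernetes quantity strings using decimal megabytes. A standalone sketch of that conversion, with the `kubernetes` client object replaced by a plain dict purely for illustration:

```python
def limits_to_k8s_dict(cpus=None, memory=None, storage=None):
    """Mirror of the to_dict() closure in the diff: memory and storage are
    byte counts rendered as decimal megabytes ('M'); cpus becomes a string."""
    ret = {}
    if cpus:
        ret["cpu"] = str(cpus)
    if memory:
        ret["memory"] = f"{int(memory / (1000 * 1000))}M"
    if storage:
        ret["storage"] = f"{int(storage / (1000 * 1000))}M"
    return ret


# Note the rounding: a 2Gi (2147483648-byte) reservation comes out as 2147
# decimal megabytes, not "2Gi".
print(limits_to_k8s_dict(storage=2147483648))              # {'storage': '2147M'}
print(limits_to_k8s_dict(cpus=1.0, memory=2_000_000_000))  # {'cpu': '1.0', 'memory': '2000M'}
```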
@@ -17,6 +17,7 @@ from kubernetes import client
 import os
 from pathlib import Path
 import subprocess
+import re
 from typing import Set, Mapping, List
 
 from stack_orchestrator.opts import opts
@@ -214,6 +215,33 @@ def _generate_kind_port_mappings(parsed_pod_files):
     )
 
 
+# Note: this makes any duplicate definition in b overwrite a
+def merge_envs(a: Mapping[str, str], b: Mapping[str, str]) -> Mapping[str, str]:
+    result = {**a, **b}
+    return result
+
+
+def _expand_shell_vars(raw_val: str) -> str:
+    # could be: <string> or ${<env-var-name>} or ${<env-var-name>:-<default-value>}
+    # TODO: implement support for variable substitution and default values
+    # if raw_val is like ${<something>} print a warning and substitute an empty string
+    # otherwise return raw_val
+    match = re.search(r"^\$\{(.*)\}$", raw_val)
+    if match:
+        print(f"WARNING: found unimplemented environment variable substitution: {raw_val}")
+    else:
+        return raw_val
+
+
+# TODO: handle the case where the same env var is defined in multiple places
+def envs_from_compose_file(compose_file_envs: Mapping[str, str]) -> Mapping[str, str]:
+    result = {}
+    for env_var, env_val in compose_file_envs.items():
+        expanded_env_val = _expand_shell_vars(env_val)
+        result.update({env_var: expanded_env_val})
+    return result
+
+
 def envs_from_environment_variables_map(map: Mapping[str, str]) -> List[client.V1EnvVar]:
     result = []
     for env_var, env_val in map.items():
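The `_expand_shell_vars` added in this hunk only warns on `${...}` references; its TODO leaves real substitution unimplemented. One possible implementation of the compose-style `${VAR}` and `${VAR:-default}` forms might look like the sketch below (an assumption that only these two forms need support, not code from the repo):

```python
import re


def expand_shell_vars(raw_val: str, env: dict) -> str:
    """Expand ${NAME} and ${NAME:-default} occurrences using the given env map.
    Unset variables expand to the default if one is given, else to ''."""
    def replace(match):
        name, default = match.group(1), match.group(2)
        if name in env:
            return env[name]
        return default if default is not None else ""
    return re.sub(r"\$\{([A-Za-z_][A-Za-z0-9_]*)(?::-([^}]*))?\}", replace, raw_val)


env = {"CERC_SCRIPT_DEBUG": "true"}
print(expand_shell_vars("${CERC_SCRIPT_DEBUG}", env))          # true
print(expand_shell_vars("${CERC_TEST_PARAM_1:-FAILED}", env))  # FAILED
print(expand_shell_vars("literal", env))                       # literal
```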
@@ -13,12 +13,60 @@
 # You should have received a copy of the GNU Affero General Public License
 # along with this program.  If not, see <http:#www.gnu.org/licenses/>.
 
-from pathlib import Path
 import typing
+import humanfriendly
+
+from pathlib import Path
+
 from stack_orchestrator.util import get_yaml
 from stack_orchestrator import constants
 
 
+class ResourceLimits:
+    cpus: float = None
+    memory: int = None
+    storage: int = None
+
+    def __init__(self, obj={}):
+        if "cpus" in obj:
+            self.cpus = float(obj["cpus"])
+        if "memory" in obj:
+            self.memory = humanfriendly.parse_size(obj["memory"])
+        if "storage" in obj:
+            self.storage = humanfriendly.parse_size(obj["storage"])
+
+    def __len__(self):
+        return len(self.__dict__)
+
+    def __iter__(self):
+        for k in self.__dict__:
+            yield k, self.__dict__[k]
+
+    def __repr__(self):
+        return str(self.__dict__)
+
+
+class Resources:
+    limits: ResourceLimits = None
+    reservations: ResourceLimits = None
+
+    def __init__(self, obj={}):
+        if "reservations" in obj:
+            self.reservations = ResourceLimits(obj["reservations"])
+        if "limits" in obj:
+            self.limits = ResourceLimits(obj["limits"])
+
+    def __len__(self):
+        return len(self.__dict__)
+
+    def __iter__(self):
+        for k in self.__dict__:
+            yield k, self.__dict__[k]
+
+    def __repr__(self):
+        return str(self.__dict__)
+
+
 class Spec:
 
     obj: typing.Any
@@ -47,6 +95,12 @@ class Spec:
                 if self.obj and "configmaps" in self.obj
                 else {})
 
+    def get_container_resources(self):
+        return Resources(self.obj.get("resources", {}).get("containers", {}))
+
+    def get_volume_resources(self):
+        return Resources(self.obj.get("resources", {}).get("volumes", {}))
+
     def get_http_proxy(self):
         return (self.obj[constants.network_key][constants.http_proxy_key]
                 if self.obj and constants.network_key in self.obj
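The new `ResourceLimits`/`Resources` classes above turn human-readable size strings from the deployment spec into numeric values. A minimal standalone sketch of that parsing follows; `humanfriendly.parse_size` is stubbed with a tiny decimal-unit parser so the example has no third-party dependency, and the spec fragment shown is a hypothetical shape matching what `get_container_resources()` reads:

```python
# Standalone sketch of the resource parsing introduced above.
# NOTE: the real code uses humanfriendly.parse_size; parse_size below is a
# minimal decimal-unit stand-in so this example runs without dependencies.

def parse_size(text):
    """Parse sizes like '1G' or '512M' into bytes (decimal units)."""
    units = {"K": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}
    text = str(text).strip()
    if text and text[-1].upper() in units:
        return int(float(text[:-1]) * units[text[-1].upper()])
    return int(text)


class ResourceLimits:
    def __init__(self, obj={}):
        self.cpus = float(obj["cpus"]) if "cpus" in obj else None
        self.memory = parse_size(obj["memory"]) if "memory" in obj else None
        self.storage = parse_size(obj["storage"]) if "storage" in obj else None


class Resources:
    def __init__(self, obj={}):
        self.limits = ResourceLimits(obj["limits"]) if "limits" in obj else None
        self.reservations = ResourceLimits(obj["reservations"]) if "reservations" in obj else None


# Hypothetical spec fragment, shaped like the "resources.containers" section.
spec = {"resources": {"containers": {"reservations": {"cpus": "0.5", "memory": "512M"},
                                     "limits": {"cpus": "2", "memory": "1G"}}}}
res = Resources(spec["resources"]["containers"])
print(res.limits.cpus, res.limits.memory)  # 2.0 1000000000
```

Sizes here use decimal units (`1G` = 10^9 bytes); `humanfriendly.parse_size` also accepts binary suffixes like `1GiB`.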
@@ -19,6 +19,8 @@ import shlex
 import shutil
 import sys
 import tempfile
+import time
+import uuid
 
 import click
 
@@ -27,7 +29,7 @@ from stack_orchestrator.deploy.webapp.util import (LaconicRegistryClient,
                                                    build_container_image, push_container_image,
                                                    file_hash, deploy_to_k8s, publish_deployment,
                                                    hostname_for_deployment_request, generate_hostname_for_app,
-                                                   match_owner)
+                                                   match_owner, skip_by_tag)
 
 
 def process_app_deployment_request(
@@ -39,8 +41,19 @@ def process_app_deployment_request(
     dns_suffix,
     deployment_parent_dir,
     kube_config,
-    image_registry
+    image_registry,
+    log_parent_dir
 ):
+    run_id = f"{app_deployment_request.id}-{str(time.time()).split('.')[0]}-{str(uuid.uuid4()).split('-')[0]}"
+    log_file = None
+    if log_parent_dir:
+        log_dir = os.path.join(log_parent_dir, app_deployment_request.id)
+        if not os.path.exists(log_dir):
+            os.mkdir(log_dir)
+        log_file_path = os.path.join(log_dir, f"{run_id}.log")
+        print(f"Directing build logs to: {log_file_path}")
+        log_file = open(log_file_path, "wt")
+
     # 1. look up application
     app = laconic.get_record(app_deployment_request.attributes.application, require=True)
 
@@ -102,8 +115,10 @@ def process_app_deployment_request(
     needs_k8s_deploy = False
     # 6. build container (if needed)
     if not deployment_record or deployment_record.attributes.application != app.id:
-        build_container_image(app, deployment_container_tag)
-        push_container_image(deployment_dir)
+        # TODO: pull from request
+        extra_build_args = []
+        build_container_image(app, deployment_container_tag, extra_build_args, log_file)
+        push_container_image(deployment_dir, log_file)
         needs_k8s_deploy = True
 
     # 7. update config (if needed)
@@ -116,6 +131,7 @@ def process_app_deployment_request(
         deploy_to_k8s(
             deployment_record,
             deployment_dir,
+            log_file
         )
 
     publish_deployment(
@@ -162,10 +178,14 @@ def dump_known_requests(filename, requests, status="SEEN"):
 @click.option("--record-namespace-dns", help="eg, crn://laconic/dns")
 @click.option("--record-namespace-deployments", help="eg, crn://laconic/deployments")
 @click.option("--dry-run", help="Don't do anything, just report what would be done.", is_flag=True)
+@click.option("--include-tags", help="Only include requests with matching tags (comma-separated).", default="")
+@click.option("--exclude-tags", help="Exclude requests with matching tags (comma-separated).", default="")
+@click.option("--log-dir", help="Output build/deployment logs to directory.", default=None)
 @click.pass_context
-def command(ctx, kube_config, laconic_config, image_registry, deployment_parent_dir,
+def command(ctx, kube_config, laconic_config, image_registry, deployment_parent_dir,  # noqa: C901
             request_id, discover, state_file, only_update_state,
-            dns_suffix, record_namespace_dns, record_namespace_deployments, dry_run):
+            dns_suffix, record_namespace_dns, record_namespace_deployments, dry_run,
+            include_tags, exclude_tags, log_dir):
     if request_id and discover:
         print("Cannot specify both --request-id and --discover", file=sys.stderr)
         sys.exit(2)
@@ -183,6 +203,10 @@ def command(ctx, kube_config, laconic_config, image_registry, deployment_parent_
             print("--dns-suffix, --record-namespace-dns, and --record-namespace-deployments are all required", file=sys.stderr)
             sys.exit(2)
 
+    # Split CSV and clean up values.
+    include_tags = [tag.strip() for tag in include_tags.split(",") if tag]
+    exclude_tags = [tag.strip() for tag in exclude_tags.split(",") if tag]
+
     laconic = LaconicRegistryClient(laconic_config)
 
     # Find deployment requests.
@@ -204,6 +228,7 @@ def command(ctx, kube_config, laconic_config, image_registry, deployment_parent_
     requests.sort(key=lambda r: r.createTime)
     requests.reverse()
     requests_by_name = {}
+    skipped_by_name = {}
     for r in requests:
         # TODO: Do this _after_ filtering deployments and cancellations to minimize round trips.
         app = laconic.get_record(r.attributes.application)
@@ -216,17 +241,20 @@ def command(ctx, kube_config, laconic_config, image_registry, deployment_parent_
             requested_name = generate_hostname_for_app(app)
             print("Generating name %s for request %s." % (requested_name, r.id))
 
-        if requested_name not in requests_by_name:
-            print(
-                "Found request %s to run application %s on %s."
-                % (r.id, r.attributes.application, requested_name)
-            )
-            requests_by_name[requested_name] = r
-        else:
-            print(
-                "Ignoring request %s, it is superseded by %s."
-                % (r.id, requests_by_name[requested_name].id)
-            )
+        if requested_name in skipped_by_name or requested_name in requests_by_name:
+            print("Ignoring request %s, it has been superseded." % r.id)
+            continue
+
+        if skip_by_tag(r, include_tags, exclude_tags):
+            print("Skipping request %s, filtered by tag (include %s, exclude %s, present %s)" % (r.id,
+                                                                                                 include_tags,
+                                                                                                 exclude_tags,
+                                                                                                 r.attributes.tags))
+            skipped_by_name[requested_name] = r
+            continue
+
+        print("Found request %s to run application %s on %s." % (r.id, r.attributes.application, requested_name))
+        requests_by_name[requested_name] = r
 
     # Find deployments.
     deployments = laconic.app_deployments()
@@ -273,7 +301,8 @@ def command(ctx, kube_config, laconic_config, image_registry, deployment_parent_
                     dns_suffix,
                     os.path.abspath(deployment_parent_dir),
                     kube_config,
-                    image_registry
+                    image_registry,
+                    log_dir
                 )
                 status = "DEPLOYED"
             finally:
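The `run_id` introduced above combines the request id, the current Unix time in seconds, and the first segment of a random UUID, so repeated deployment attempts for the same request each get a distinct log file under `<log-dir>/<request-id>/`. A small runnable sketch of that scheme (the request id shown is hypothetical):

```python
import os
import time
import uuid


def make_run_id(request_id):
    # Same format as the run_id above: <request-id>-<unix-seconds>-<short-uuid>,
    # unique per attempt so each run of one request gets its own log.
    return f"{request_id}-{str(time.time()).split('.')[0]}-{str(uuid.uuid4()).split('-')[0]}"


def log_path_for(log_parent_dir, request_id):
    # Mirrors the layout used above: one directory per request, one file per run.
    return os.path.join(log_parent_dir, request_id, f"{make_run_id(request_id)}.log")


run_id = make_run_id("bafyreiexample")  # hypothetical request id
parts = run_id.split("-")
print(parts[0], len(parts[2]))  # bafyreiexample 8
```

The short UUID segment is always 8 hex characters, which keeps the file names compact while still making collisions within one second effectively impossible.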
@@ -20,7 +20,7 @@ import sys
 
 import click
 
-from stack_orchestrator.deploy.webapp.util import LaconicRegistryClient, match_owner
+from stack_orchestrator.deploy.webapp.util import LaconicRegistryClient, match_owner, skip_by_tag
 
 
 def process_app_removal_request(ctx,
@@ -107,10 +107,12 @@ def dump_known_requests(filename, requests):
 @click.option("--delete-names/--preserve-names", help="Delete all names associated with removed deployments.", default=True)
 @click.option("--delete-volumes/--preserve-volumes", default=True, help="delete data volumes")
 @click.option("--dry-run", help="Don't do anything, just report what would be done.", is_flag=True)
+@click.option("--include-tags", help="Only include requests with matching tags (comma-separated).", default="")
+@click.option("--exclude-tags", help="Exclude requests with matching tags (comma-separated).", default="")
 @click.pass_context
 def command(ctx, laconic_config, deployment_parent_dir,
             request_id, discover, state_file, only_update_state,
-            delete_names, delete_volumes, dry_run):
+            delete_names, delete_volumes, dry_run, include_tags, exclude_tags):
     if request_id and discover:
         print("Cannot specify both --request-id and --discover", file=sys.stderr)
         sys.exit(2)
@@ -123,6 +125,10 @@ def command(ctx, laconic_config, deployment_parent_dir,
         print("--only-update-state requires --state-file", file=sys.stderr)
         sys.exit(2)
 
+    # Split CSV and clean up values.
+    include_tags = [tag.strip() for tag in include_tags.split(",") if tag]
+    exclude_tags = [tag.strip() for tag in exclude_tags.split(",") if tag]
+
     laconic = LaconicRegistryClient(laconic_config)
 
     # Find deployment removal requests.
@@ -155,10 +161,22 @@ def command(ctx, laconic_config, deployment_parent_dir,
             # TODO: should we handle CRNs?
             removals_by_deployment[r.attributes.deployment] = r
 
-    requests_to_execute = []
+    one_per_deployment = {}
     for r in requests:
         if not r.attributes.deployment:
             print(f"Skipping removal request {r.id} since it was a cancellation.")
+        elif r.attributes.deployment in one_per_deployment:
+            print(f"Skipping removal request {r.id} since it was superseded.")
+        else:
+            one_per_deployment[r.attributes.deployment] = r
+
+    requests_to_execute = []
+    for r in one_per_deployment.values():
+        if skip_by_tag(r, include_tags, exclude_tags):
+            print("Skipping removal request %s, filtered by tag (include %s, exclude %s, present %s)" % (r.id,
+                                                                                                         include_tags,
+                                                                                                         exclude_tags,
+                                                                                                         r.attributes.tags))
         elif r.id in removals_by_request:
            print(f"Found satisfied request for {r.id} at {removals_by_request[r.id].id}")
         elif r.attributes.deployment in removals_by_deployment:
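Both commands apply the same cleanup to the `--include-tags`/`--exclude-tags` option values before filtering: split on commas, drop empty fields, and strip surrounding whitespace. Extracted as a runnable one-liner:

```python
def parse_tags(csv):
    # Same cleanup applied to --include-tags/--exclude-tags above:
    # split on commas, drop empty fields, strip whitespace around each tag.
    return [tag.strip() for tag in csv.split(",") if tag]


print(parse_tags("prod, eu ,,staging"))  # ['prod', 'eu', 'staging']
print(parse_tags(""))                    # []
```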
@@ -212,7 +212,7 @@ def determine_base_container(clone_dir, app_type="webapp"):
     return base_container
 
 
-def build_container_image(app_record, tag, extra_build_args=[]):
+def build_container_image(app_record, tag, extra_build_args=[], log_file=None):
     tmpdir = tempfile.mkdtemp()
 
     try:
@@ -227,10 +227,10 @@ def build_container_image(app_record, tag, extra_build_args=[]):
             git_env = dict(os.environ.copy())
             # Never prompt
             git_env["GIT_TERMINAL_PROMPT"] = "0"
-            subprocess.check_call(["git", "clone", repo, clone_dir], env=git_env)
-            subprocess.check_call(["git", "checkout", ref], cwd=clone_dir, env=git_env)
+            subprocess.check_call(["git", "clone", repo, clone_dir], env=git_env, stdout=log_file, stderr=log_file)
+            subprocess.check_call(["git", "checkout", ref], cwd=clone_dir, env=git_env, stdout=log_file, stderr=log_file)
         else:
-            result = subprocess.run(["git", "clone", "--depth", "1", repo, clone_dir])
+            result = subprocess.run(["git", "clone", "--depth", "1", repo, clone_dir], stdout=log_file, stderr=log_file)
             result.check_returncode()
 
         base_container = determine_base_container(clone_dir, app_record.attributes.app_type)
@@ -246,25 +246,27 @@ def build_container_image(app_record, tag, extra_build_args=[]):
             build_command.append("--extra-build-args")
             build_command.append(" ".join(extra_build_args))
 
-        result = subprocess.run(build_command)
+        result = subprocess.run(build_command, stdout=log_file, stderr=log_file)
         result.check_returncode()
     finally:
         cmd("rm", "-rf", tmpdir)
 
 
-def push_container_image(deployment_dir):
+def push_container_image(deployment_dir, log_file=None):
     print("Pushing image ...")
-    result = subprocess.run([sys.argv[0], "deployment", "--dir", deployment_dir, "push-images"])
+    result = subprocess.run([sys.argv[0], "deployment", "--dir", deployment_dir, "push-images"],
+                            stdout=log_file, stderr=log_file)
     result.check_returncode()
 
 
-def deploy_to_k8s(deploy_record, deployment_dir):
+def deploy_to_k8s(deploy_record, deployment_dir, log_file=None):
     if not deploy_record:
         command = "up"
     else:
         command = "update"
 
-    result = subprocess.run([sys.argv[0], "deployment", "--dir", deployment_dir, command])
+    result = subprocess.run([sys.argv[0], "deployment", "--dir", deployment_dir, command],
+                            stdout=log_file, stderr=log_file)
     result.check_returncode()
 
 
@@ -349,3 +351,15 @@ def generate_hostname_for_app(app):
     else:
         m.update(app.attributes.repository.encode())
     return "%s-%s" % (last_part, m.hexdigest()[0:10])
+
+
+def skip_by_tag(r, include_tags, exclude_tags):
+    for tag in exclude_tags:
+        if tag and r.attributes.tags and tag in r.attributes.tags:
+            return True
+
+    for tag in include_tags:
+        if tag and (not r.attributes.tags or tag not in r.attributes.tags):
+            return True
+
+    return False
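The new `skip_by_tag` helper implements the filtering semantics shared by both commands: any excluded tag present on the record skips it, and if include tags are given, every one of them must be present. The logic below is the same as the helper above, exercised with a `SimpleNamespace` stand-in for a registry record (real records come from `LaconicRegistryClient`):

```python
from types import SimpleNamespace


def skip_by_tag(r, include_tags, exclude_tags):
    # Same logic as the new util.skip_by_tag: an excluded tag on the record
    # always skips it; every include tag must be present or the record is skipped.
    for tag in exclude_tags:
        if tag and r.attributes.tags and tag in r.attributes.tags:
            return True
    for tag in include_tags:
        if tag and (not r.attributes.tags or tag not in r.attributes.tags):
            return True
    return False


# Hypothetical record with two tags.
r = SimpleNamespace(attributes=SimpleNamespace(tags=["prod", "eu"]))
print(skip_by_tag(r, ["prod"], []))     # False: required tag present
print(skip_by_tag(r, [], ["eu"]))       # True: excluded tag present
print(skip_by_tag(r, ["staging"], []))  # True: required tag missing
```

Note that exclusion is checked first, so a record carrying both an included and an excluded tag is skipped.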
@@ -6,6 +6,12 @@ fi
 # Dump environment variables for debugging
 echo "Environment variables:"
 env
+
+delete_cluster_exit () {
+    $TEST_TARGET_SO deployment --dir $test_deployment_dir stop --delete-volumes
+    exit 1
+}
+
 # Test basic stack-orchestrator deploy
 echo "Running stack-orchestrator deploy test"
 # Bit of a hack, test the most recent package
@@ -106,6 +112,10 @@ if [ ! "$create_file_content" == "create-command-output-data"  ]; then
     echo "deploy create test: FAILED"
     exit 1
 fi
+
+# Add a config file to be picked up by the ConfigMap before starting.
+echo "dbfc7a4d-44a7-416d-b5f3-29842cc47650" > $test_deployment_dir/data/test-config/test_config
+
 echo "deploy create output file test: passed"
 # Try to start the deployment
 $TEST_TARGET_SO deployment --dir $test_deployment_dir start
@@ -124,6 +134,37 @@ else
     echo "deployment config test: FAILED"
     exit 1
 fi
+# Check the config variable CERC_TEST_PARAM_2 was passed correctly from the compose file
+if [[ "$log_output_3" == *"Test-param-2: CERC_TEST_PARAM_2_VALUE"* ]]; then
+    echo "deployment compose config test: passed"
+else
+    echo "deployment compose config test: FAILED"
+    exit 1
+fi
+
+# Check that the ConfigMap is mounted and contains the expected content.
+log_output_4=$( $TEST_TARGET_SO deployment --dir $test_deployment_dir logs )
+if [[ "$log_output_4" == *"/config/test_config:"* ]] && [[ "$log_output_4" == *"dbfc7a4d-44a7-416d-b5f3-29842cc47650"* ]]; then
+    echo "deployment ConfigMap test: passed"
+else
+    echo "deployment ConfigMap test: FAILED"
+    delete_cluster_exit
+fi
+
+# Stop then start again and check the volume was preserved
+$TEST_TARGET_SO deployment --dir $test_deployment_dir stop
+# Sleep a bit just in case
+# sleep for longer to check if that's why the subsequent create cluster fails
+sleep 20
+$TEST_TARGET_SO deployment --dir $test_deployment_dir start
+log_output_5=$( $TEST_TARGET_SO deployment --dir $test_deployment_dir logs )
+if [[ "$log_output_5" == *"Filesystem is old"* ]]; then
+    echo "Retain volumes test: passed"
+else
+    echo "Retain volumes test: FAILED"
+    delete_cluster_exit
+fi
+
 # Stop and clean up
 $TEST_TARGET_SO deployment --dir $test_deployment_dir stop --delete-volumes
 echo "Test passed"
@@ -114,6 +114,7 @@ else
     echo "deployment logs test: FAILED"
     delete_cluster_exit
 fi
+
 # Check the config variable CERC_TEST_PARAM_1 was passed correctly
 if [[ "$log_output_3" == *"Test-param-1: PASSED"* ]]; then
     echo "deployment config test: passed"
@@ -122,6 +123,14 @@ else
     echo "deployment config test: FAILED"
     delete_cluster_exit
 fi
 
+# Check the config variable CERC_TEST_PARAM_2 was passed correctly from the compose file
+if [[ "$log_output_3" == *"Test-param-2: CERC_TEST_PARAM_2_VALUE"* ]]; then
+    echo "deployment compose config test: passed"
+else
+    echo "deployment compose config test: FAILED"
+    exit 1
+fi
+
 # Check that the ConfigMap is mounted and contains the expected content.
 log_output_4=$( $TEST_TARGET_SO deployment --dir $test_deployment_dir logs )
 if [[ "$log_output_4" == *"/config/test_config:"* ]] && [[ "$log_output_4" == *"dbfc7a4d-44a7-416d-b5f3-29842cc47650"* ]]; then
|  | |||||||
		Loading…
	
		Reference in New Issue
	
	Block a user