Ansible role for deploying K3S and RKE2 clusters

This commit is contained in:
srw 2024-08-21 01:34:56 +00:00
commit 7ffd4393ff
37 changed files with 1531 additions and 0 deletions

1
.gitignore vendored Normal file

@@ -0,0 +1 @@
__pycache__

20
LICENSE Normal file

@@ -0,0 +1,20 @@
The MIT License (MIT)

Copyright (c) 2024 Shane Wadleigh

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

243
README.md Normal file

@@ -0,0 +1,243 @@
# ansible-role-k8s
Ansible role for configuring K3s and RKE2 Kubernetes clusters
- https://docs.k3s.io/
- https://docs.rke2.io/
- https://kube-vip.io/
- https://github.com/sbstp/kubie
- https://kubernetes.io/docs/tasks/tools/
- https://helm.sh/
## Requirements
There is an included helper script, `scripts/get-kube-tools.sh`, that installs the common tools:
- `yq` required on the local system for the kubeconfig formatting task, which places an updated kubeconfig in the local ~/.kube
- `kubectl` required on the local system for basic cluster management and application of locally stored manifests or secrets
- `helm` required on the local system for helm deployments that use locally stored value files; otherwise this is handled on the bootstrap node
- `kubie` recommended on the local system for context management after deployment
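The helper script can be run directly; it uses `sudo` to place each binary in `/usr/local/bin`:
```
./scripts/get-kube-tools.sh
```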
## Setup
There is a helper script, `scripts/token-vault.sh`, which pre-generates a cluster token and places it in an encrypted vault file.
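Run it with no argument to print an encrypted `k8s_cluster_token` string suitable for pasting into group vars, or pass a path ending in `.yml` (the path below is only an example) to write a complete encrypted vault file:
```
./scripts/token-vault.sh
./scripts/token-vault.sh group_vars/k8s_somecluster/vault.yml
```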
## Cluster Example
cluster hosts
```
[k8s_somecluster]
somecluster_control k8s_node_type=bootstrap
somecluster_agent_smith k8s_node_type=agent k8s_external_ip=x.x.x.x
somecluster_agent_jones k8s_node_type=agent k8s_external_ip=x.x.x.x
```
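The pre-generated cluster token (see Setup) can then be supplied through group vars; the vault path below is only an example:
```
# group_vars/k8s_somecluster/vault.yml, encrypted with ansible-vault
---
k8s_cluster_token: <generated token>
```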
cluster tasks
```
- name: Setup k8s server node
  hosts: somehost
  become: true
  roles:
    - role: k8s
      k8s_type: rke2
      k8s_cluster_name: somecluster
      k8s_cluster_url: somecluster.somewhere
      k8s_cni_interface: enp1s0
      k8s_selinux: true
    - role: firewalld
      firewalld_add:
        - name: internal
          interfaces:
            - enp1s0
          masquerade: true
          forward: true
          services:
            - dhcpv6-client
            - ssh
            - http
            - https
          ports:
            - 6443/tcp # kubernetes API
            - 9345/tcp # supervisor API
            - 10250/tcp # kubelet metrics
            - 2379/tcp # etcd client
            - 2380/tcp # etcd peer
            - 30000-32767/tcp # NodePort range
            - 8472/udp # canal/flannel vxlan
            - 9099/tcp # canal health checks
        - name: trusted
          sources:
            - 10.42.0.0/16
            - 10.43.0.0/16
        - name: public
          masquerade: true
          forward: true
          interfaces:
            - enp7s0
          services:
            - http
            - https
      firewalld_remove:
        - name: public
          interfaces:
            - enp1s0
          services:
            - dhcpv6-client
            - ssh
```
## Retrieve kube config from an existing cluster
This task retrieves and formats the kubectl config for an existing cluster; it also runs automatically during cluster creation.
- `k8s_cluster_name` sets the cluster context
- `k8s_cluster_url` sets the server address
```
ansible-playbook -i prod/ site.yml --tags=k8s-get-kubeconf --limit=k8s_somecluster
```
## Basic Cluster Interaction
```
kubie ctx <cluster-name>
kubectl get node -o wide
kubectl get pods,svc,ds --all-namespaces
```
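The fetched config is written to `~/.kube/config-<cluster-name>.yaml`, so plain kubectl works as well, for example:
```
kubectl --kubeconfig ~/.kube/config-somecluster.yaml get nodes
```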
## Deployment and Removal
Deploy
```
ansible-playbook -i hosts site.yml --tags=firewalld,k8s --limit=k8s_somecluster
```
To add a node, simply add the new host to the cluster group with its defined role and deploy
```
ansible-playbook -i hosts site.yml --tags=firewalld,k8s --limit=just_the_new_host
```
Remove firewall role
```
ansible-playbook -i hosts site.yml --tags=firewalld,k8s --extra-vars "firewall_action=remove" --limit=somehost
```
There is a task to completely destroy an existing cluster; it asks for interactive user confirmation and should be used with caution.
```
ansible-playbook -i prod/ site.yml --tags=k8s --extra-vars 'k8s_action=destroy' --limit=some_innocent_cluster
```
Manual removal commands
```
/usr/local/bin/k3s-uninstall.sh
/usr/local/bin/k3s-agent-uninstall.sh
/usr/local/bin/rke2-uninstall.sh
/usr/local/bin/rke2-agent-uninstall.sh
```
## Managing K3S Services
servers
```
systemctl status k3s.service
journalctl -u k3s.service -f
```
agents
```
systemctl status k3s-agent.service
journalctl -u k3s-agent -f
```
uninstall servers
```
/usr/local/bin/k3s-uninstall.sh
```
uninstall agents
```
/usr/local/bin/k3s-agent-uninstall.sh
```
## Managing RKE2 Services
servers
```
systemctl status rke2-server.service
journalctl -u rke2-server -f
```
agents
```
systemctl status rke2-agent.service
journalctl -u rke2-agent -f
```
uninstall servers
```
/usr/bin/rke2-uninstall.sh
```
uninstall agents
```
/usr/local/bin/rke2-uninstall.sh
```
Override default Canal options
```
# /var/lib/rancher/rke2/server/manifests/rke2-canal-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-canal
  namespace: kube-system
spec:
  valuesContent: |-
    flannel:
      iface: "eth1"
```
Enable flannel's WireGuard support under Canal, then restart the daemonset:
`kubectl rollout restart ds rke2-canal -n kube-system`
```
# /var/lib/rancher/rke2/server/manifests/rke2-canal-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-canal
  namespace: kube-system
spec:
  valuesContent: |-
    flannel:
      backend: "wireguard"
```
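To watch the restart complete and confirm the new backend (assuming wireguard-tools are installed on the node):
```
kubectl -n kube-system rollout status ds rke2-canal
sudo wg show
```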

176
defaults/main.yml Normal file

@@ -0,0 +1,176 @@
---
# this toggle provides a dangerous way to quickly destroy an entire cluster
# ansible-playbook -i prod/ site.yml --tags=k8s --extra-vars 'k8s_action=destroy' --limit=k3s_innocent_cluster
# create | destroy
k8s_action: create
# k3s | rke2
k8s_type: k3s
k8s_cluster_name: default
k8s_cluster_url: localhost
# Additionally define k8s_external_ip to provide a specific node an external route
k8s_node_ip: "{{ ansible_host }}"
# paths
# used for placing nm related configs
k8s_nm_path: /etc/NetworkManager/conf.d
# used by k8s binaries, depends on installation method: rpm vs tar
k8s_cmd_path: /usr/local/bin
# location of install scripts and other tools
k8s_install_path: /usr/local/bin
k8s_install_script: "{{ k8s_install_path }}/{{ k8s_type }}-install.sh"
k8s_manifests_path: "/var/lib/rancher/{{ k8s_type }}/server/manifests/"
k8s_config_path: "/etc/rancher/{{ k8s_type }}"
k8s_helm_install_url: https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
k8s_helm_install_script: "{{ k8s_install_path }}/get_helm.sh"
# automatically fetch kubeconfig and update context according to k8s_cluster_name
k8s_kubeconfig_fetch: true
k8s_kubeconfig_update_context: true
# apply CriticalAddonsOnly:NoExecute to control plane nodes
k8s_taint_servers: false
# apply label role=agent to agent nodes
k8s_label_agents: false
# shared k8s api port
k8s_api_port: 6443
# rke2 server listens on a dedicated port for new nodes to register
k8s_supervisor_port: 9345
# sysctl set fs.inotify.max_user_instances
k8s_inotify_max: 1024
# hardcoded kubelet default value is 110
k8s_pod_limit: 110
# if the host is using an http proxy for external access
k8s_http_proxy: false
# kubeconfig chmod
k8s_config_mode: 600
k8s_disable_kube_proxy: false
k8s_debug: false
k8s_kubelet_args:
  - "max-pods={{ k8s_pod_limit }}"
# local-path-storage default settings, see templates/shared/local-path-storage.yaml.j2
# k8s_local_path_image: rancher/local-path-provisioner:master-head
# k8s_local_path_image_pull_policy: IfNotPresent
# k8s_local_path_default_class: true
# k8s_local_path_reclaim_policy: Retain
# k8s_local_path_bind_mode: WaitForFirstConsumer
# k8s_local_path_priority_class: system-node-critical
# k8s_local_path_dir: /opt/local-path-provisioner
# cluster issuers
# k8s_cluster_issuers:
#   - name: letsencrypt-prod
#     url: https://acme-v02.api.letsencrypt.org/directory
#     solvers:
#       - type: http
#         ingress: nginx
#       - type: dns
#         provider: cloudflare
#         tokenref: apiTokenSecretRef
#         secret_name: cloudflare-api-token
#         secret_key: api-token
# cluster secrets
# k8s_secrets:
#   - name: cloudflare-api-token
#     namespace: cert-manager
#     data: api-token
#     value: ZG9wX3Y...
# k8s_kubelet_args:
#   - "kube-reserved=cpu=500m,memory=1Gi,ephemeral-storage=2Gi"
#   - "system-reserved=cpu=500m,memory=1Gi,ephemeral-storage=2Gi"
#   - "eviction-hard=memory.available<500Mi,nodefs.available<10%"
#   - "max-pods={{ k8s_pod_limit }}"
#   - "v=2"
# Optional variables, define as needed
# Default is assumed false, set by vars/systems/
# k8s_selinux: false
# k8s_acme_email
# you can pre-generate this in a vault with the token-vault.sh script
# k8s_cluster_token
# stable, latest, testing, ...
# k8s_channel: stable
# k8s_version to deploy a specific version
# k8s_version: v1.27.7+k3s2
# bootstrap | server | agent
# k8s_node_type: bootstrap
# if defined, install the listed manifests; sources may be a local file, a template, or a url
# the keys below match what tasks/shared/manifests.yml consumes
# k8s_manifests:
#   - name: cert-manager
#     type: url
#     source: https://github.com/cert-manager/cert-manager/releases/download/v1.14.5/cert-manager.yaml
# k8s_node_taints, equivalent to --node-taint CriticalAddonsOnly=true:NoExecute
# k8s_node_taints:
#   - key: CriticalAddonsOnly
#     value: true
#     effect: NoExecute
# K3S
# flannel-backend: 'vxlan', 'host-gw', 'wireguard-native', 'none'
# k8s_flannel_backend: vxlan
# k8s_flannel_ipv6_masq: false
# k8s_flannel_external_ip: false
# k8s_disable_network_policy: true
# disable builtin services
# k8s_disable:
#   - traefik
#   - servicelb
# RKE2
# Default is false, if the host is using network manager, overridden by vars/systems/
# k8s_has_nm: true
# canal, cilium, calico, flannel
# k8s_cni_type: canal
# apply cni custom template
# canal-config.yaml | cilium-config.yaml | calico-config.yaml
# k8s_cni_custom_template: canal-config.yaml
# when using canal enable wg backend
# k8s_canal_wireguard: true
# cilium
# k8s_cilium_hubble: true
# k8s_cilium_eni: true
# disable builtin services
# k8s_disable:
#   - rke2-coredns
#   - rke2-ingress-nginx
#   - rke2-metrics-server
#   - rke2-snapshot-controller
#   - rke2-snapshot-controller-crd
#   - rke2-snapshot-validation-webhook
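# helm chart deployments, handled by tasks/shared/charts.yml using the keys below
# the chart shown here is only an illustrative example
# k8s_charts:
#   - name: ingress-nginx
#     repo_name: ingress-nginx
#     repo_url: https://kubernetes.github.io/ingress-nginx
#     chart: ingress-nginx/ingress-nginx
#     namespace: ingress-nginx
#     settings:
#       - key: controller.kind
#         value: DaemonSet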


@@ -0,0 +1,20 @@
apiVersion: v1
kind: Service
metadata:
  name: rke2-ingress-nginx-controller
  namespace: kube-system
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: rke2-ingress-nginx
    app.kubernetes.io/name: rke2-ingress-nginx
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443

21
files/scripts/get-kube-tools.sh Executable file

@@ -0,0 +1,21 @@
#!/bin/sh
# install common local tooling: kubectl, kubie, yq, and helm
INSTALL_PATH="/usr/local/bin"
INSTALL_ARCH="amd64"
KUBECTL_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)
KUBIE_VERSION="latest"
YQ_VERSION="latest"
HELM_URL="https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3"

# kubectl, pinned to the current stable release
sudo wget -qO ${INSTALL_PATH}/kubectl https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/${INSTALL_ARCH}/kubectl
sudo chmod a+x ${INSTALL_PATH}/kubectl

# kubie, for kubectl context management
sudo wget -qO ${INSTALL_PATH}/kubie https://github.com/sbstp/kubie/releases/${KUBIE_VERSION}/download/kubie-linux-${INSTALL_ARCH}
sudo chmod a+x ${INSTALL_PATH}/kubie

# yq, used to rewrite fetched kubeconfigs
sudo wget -qO ${INSTALL_PATH}/yq https://github.com/mikefarah/yq/releases/${YQ_VERSION}/download/yq_linux_${INSTALL_ARCH}
sudo chmod a+x ${INSTALL_PATH}/yq

# helm, via the official installer script
curl -fsSL -o get_helm.sh ${HELM_URL}
chmod 700 get_helm.sh
./get_helm.sh

19
files/scripts/get-secret.sh Executable file

@@ -0,0 +1,19 @@
#!/bin/bash
# apply a secret manifest, decrypting it first if it is vault encrypted
# env expected to be supplied via ansible task:
# PLAYBOOK_DIR
# KUBECONTEXT
# SECRET
KUBECONF="$HOME/.kube/config-${KUBECONTEXT}.yaml"
SECRET_FILE="${PLAYBOOK_DIR}/files/manifests/${SECRET}"

apply_secret() {
    kubectl apply --kubeconfig="${KUBECONF}" --context="${KUBECONTEXT}" -f "$1"
}

# if the file is vault encrypted, decrypt to stdout and apply from stdin
if ansible-vault view "${SECRET_FILE}" &> /dev/null; then
    ansible-vault decrypt --output=- "${SECRET_FILE}" | apply_secret -
else
    apply_secret "${SECRET_FILE}"
fi

38
files/scripts/token-vault.sh Executable file

@@ -0,0 +1,38 @@
#!/bin/bash
# generate a random cluster token and emit it as an ansible-vault
# encrypted string, or as a complete encrypted vault file
vault_output="$1"
vault_regex=".*\.yml$"
vault_var_name="k8s_cluster_token"
token="$(openssl rand -hex 16)"

print_token() {
    echo "$token"
}

print_yaml() {
    printf -- "---\n$vault_var_name: %s\n" "$token"
}

encrypt_token() {
    ansible-vault encrypt_string "$token" --name "$vault_var_name"
}

encrypt_yaml() {
    print_yaml | ansible-vault encrypt
}

if [ -n "$vault_output" ]; then
    if [[ $vault_output =~ $vault_regex ]]; then
        if [ -f "$vault_output" ]; then
            echo "output file already exists, no token generated"
            exit 0
        else
            encrypt_yaml > "$vault_output"
        fi
    else
        echo "supplied output file should end with .yml"
        exit 1
    fi
else
    encrypt_token
fi


@@ -0,0 +1,10 @@
import base64


# jinja filter used by templates/shared/secret.yaml.j2
def base64_encode(string):
    return base64.b64encode(string.encode('utf-8')).decode('utf-8')


class FilterModule(object):
    def filters(self):
        return {
            'base64_encode': base64_encode
        }

38
meta/main.yml Normal file

@@ -0,0 +1,38 @@
---
dependencies: []

galaxy_info:
  role_name: k8s
  author: srw
  description: Ansible role for configuring k3s and rke2 kubernetes clusters
  company: "NMD, LLC"
  license: MIT
  min_ansible_version: "2.10"
  platforms:
    - name: Fedora
      versions:
        - all
    - name: Debian
      versions:
        - buster
        - bullseye
        - bookworm
    - name: Ubuntu
      versions:
        - bionic
        - focal
        - jammy
    - name: Alpine
      versions:
        - all
    - name: ArchLinux
      versions:
        - all
  galaxy_tags:
    - server
    - system
    - containers
    - kubernetes
    - k8s
    - k3s
    - rke2

8
tasks/k3s/config.yml Normal file

@@ -0,0 +1,8 @@
---
# PRE-DEPLOY
# - name: template k3s kubelet config
#   ansible.builtin.template:
#     src: "templates/k3s/kubelet.config.j2"
#     dest: "/etc/rancher/k3s/kubelet.config"
#     mode: 0644

22
tasks/k3s/main.yml Normal file

@@ -0,0 +1,22 @@
---
# BOOTSTRAP
- name: k3s bootstrap initial server node
  ansible.builtin.shell: "{{ k8s_install_script }}"
  environment: "{{ k8s_env }}"
  when:
    - k8s_node_type == "bootstrap"

# ADD SERVERS
- name: k3s add additional server nodes
  ansible.builtin.shell: "{{ k8s_install_script }}"
  environment: "{{ k8s_env }}"
  when:
    - k8s_node_type == "server"

# ADD AGENTS
- name: k3s add agent nodes
  ansible.builtin.shell: "{{ k8s_install_script }}"
  environment: "{{ k8s_env }}"
  when:
    - k8s_node_type == "agent"

189
tasks/main.yml Normal file

@@ -0,0 +1,189 @@
---
- name: Setup Environment
  tags:
    - k8s
    - k8s-config
  block:
    - name: gather local facts
      set_fact:
        local_user: "{{ lookup('env', 'USER') }}"
      delegate_to: localhost
    # resolve the actual node type, bootstrap is not a recognized type
    - name: set true node type
      set_fact:
        node_type: "{{ 'server' if k8s_node_type == 'bootstrap' else k8s_node_type }}"
    - name: load type specific values
      ansible.builtin.include_vars:
        file: "types/{{ k8s_type }}.yml"
    - name: load system specific values
      ansible.builtin.include_vars: "{{ item }}"
      with_first_found:
        - files:
            - "systems/{{ ansible_os_family }}-{{ ansible_distribution_major_version }}.yml"
            - "systems/{{ ansible_os_family }}.yml"
            - "systems/{{ ansible_distribution }}.yml"
            - "systems/{{ ansible_system }}.yml"
          skip: true

#
# CREATE CLUSTER
#
- name: Create Cluster
  tags:
    - k8s
  block:
    - name: add generic server taint
      ansible.builtin.include_vars:
        file: "server-taint.yml"
      when:
        - k8s_taint_servers and k8s_node_type != "agent"
    - name: add generic agent label
      ansible.builtin.include_vars:
        file: "agent-label.yml"
      when:
        - k8s_label_agents and k8s_node_type == "agent"
    - name: increase open file limit
      ansible.posix.sysctl:
        name: fs.inotify.max_user_instances
        value: "{{ k8s_inotify_max }}"
        state: present
    - name: download install script
      ansible.builtin.get_url:
        url: "{{ k8s_install_url | d(k8s_default_install_url) }}"
        timeout: 120
        dest: "{{ k8s_install_script }}"
        owner: root
        group: root
        mode: 0755
    # CLUSTER CONFIG
    - name: check config paths
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        mode: 0755
      loop:
        - "{{ k8s_config_path }}"
        - "{{ k8s_manifests_path }}"
      tags:
        - k8s-config
    - name: template cluster config
      ansible.builtin.template:
        src: "templates/{{ k8s_type }}/config.yaml.j2"
        dest: "{{ k8s_config_path }}/config.yaml"
        mode: 0600
      tags:
        - k8s-config
    - name: type specific configuration
      ansible.builtin.include_tasks: "{{ k8s_type }}/config.yml"
      tags:
        - k8s-config
    # DEPLOY CLUSTER
    - name: beginning cluster creation
      ansible.builtin.include_tasks: "{{ k8s_type }}/main.yml"
  # END Cluster Creation
  when:
    - k8s_action == "create"

#
# POST-DEPLOY
#
- name: Post Deployments
  tags:
    - k8s
    - k8s-post-deploy
  block:
    - name: include kubeconf block
      ansible.builtin.include_tasks: "shared/kubeconf.yml"
      when:
        - k8s_node_type == "bootstrap"
        - k8s_kubeconfig_fetch
      tags:
        - k8s-get-kubeconf
    - name: include secret block
      ansible.builtin.include_tasks: "shared/secrets.yml"
      when:
        - k8s_node_type == "bootstrap"
        - k8s_secrets is defined
      tags:
        - k8s-apply-secrets
    - name: include manifest block
      ansible.builtin.include_tasks: "shared/manifests.yml"
      when:
        - k8s_node_type == "bootstrap"
        - k8s_manifests is defined
      tags:
        - k8s-apply-manifests
    - name: include chart block
      ansible.builtin.include_tasks: "shared/charts.yml"
      when:
        - k8s_node_type == "bootstrap"
        - k8s_charts is defined
      tags:
        - k8s-apply-charts
  # END Post Deployments
  when:
    - k8s_action == "create"

#
# DESTROY CLUSTER
#
# this is very dangerous and should be handled with care when not actively testing with disposable cluster iterations
- name: Destroy Cluster
  tags: k8s
  block:
    - name: confirm cluster destruction
      delegate_to: localhost
      run_once: true
      become: false
      pause:
        prompt: "=== WARNING === Are you sure you want to DESTROY the cluster: {{ k8s_cluster_name | string | upper }}? (yes/no)"
      register: destroy_confirmation
    - name: set confirmation fact
      set_fact:
        cluster_destruction: "{{ destroy_confirmation.user_input }}"
    - name: delete cluster config
      delegate_to: localhost
      run_once: true
      become: false
      file:
        path: "~/.kube/config-{{ k8s_cluster_name }}.yaml"
        state: absent
      when:
        - cluster_destruction
    - name: destroy nodes
      ansible.builtin.shell: "{{ k8s_type }}-uninstall.sh"
      when:
        - k8s_node_type != "agent" or k8s_type == "rke2"
        - cluster_destruction
    - name: destroy k3s agent nodes
      ansible.builtin.shell: "{{ k8s_type }}-agent-uninstall.sh"
      when:
        - k8s_node_type == "agent" and k8s_type == "k3s"
        - cluster_destruction
  # END Cluster Destruction
  when:
    - k8s_action == "destroy"

19
tasks/rke2/config.yml Normal file

@@ -0,0 +1,19 @@
---
# HTTP PROXY
- name: http proxy tasks
  ansible.builtin.include_tasks: "{{ k8s_type }}/proxy.yml"
  tags:
    - k8s-config

# CANAL NM CONFIG
- name: template canal network-manager config
  ansible.builtin.template:
    src: "templates/{{ k8s_type }}/canal.conf.j2"
    dest: "{{ k8s_nm_path }}/{{ k8s_type }}-canal.conf"
    mode: 0600
  when:
    - k8s_cni_type is not defined or k8s_cni_type == "canal"
    - k8s_has_nm is defined and k8s_has_nm
  tags:
    - k8s-config

42
tasks/rke2/main.yml Normal file

@@ -0,0 +1,42 @@
---
# BOOTSTRAP
- name: rke2 bootstrap initial server node
  ansible.builtin.shell: "{{ k8s_install_script }}"
  environment: "{{ k8s_env }}"
  when:
    - k8s_node_type == "bootstrap"

- name: rke2 cni custom template
  ansible.builtin.template:
    src: "templates/{{ k8s_type }}/cni/{{ k8s_cni_custom_template }}.j2"
    dest: "{{ k8s_manifests_path }}/{{ k8s_type }}-{{ k8s_cni_custom_template }}"
    mode: 0600
  when:
    - k8s_cni_custom_template is defined
    #- k8s_node_type == "bootstrap"

- name: rke2 start bootstrap node
  ansible.builtin.include_tasks: shared/start.yml
  when:
    - k8s_node_type == "bootstrap"

# ADD SERVERS
- name: rke2 add additional server nodes
  ansible.builtin.shell: "{{ k8s_install_script }}"
  environment: "{{ k8s_env }}"
  when:
    - k8s_node_type == "server"

# ADD AGENTS
- name: rke2 add agent nodes
  ansible.builtin.shell: "{{ k8s_install_script }}"
  environment: "{{ k8s_env }}"
  when:
    - k8s_node_type == "agent"

# POST-DEPLOY
- name: rke2 start additional nodes
  ansible.builtin.include_tasks: shared/start.yml
  when:
    - k8s_node_type != "bootstrap"

50
tasks/rke2/proxy.yml Normal file

@@ -0,0 +1,50 @@
---
- name: http proxy detection and setup
  tags:
    - k8s
    - k8s-config
  block:
    - name: check for existing http_proxy
      shell: echo $http_proxy
      register: http_proxy
      ignore_errors: true
      changed_when: false
    - name: check for existing https_proxy
      shell: echo $https_proxy
      register: https_proxy
      ignore_errors: true
      changed_when: false
    - name: check for existing no_proxy
      shell: echo $no_proxy
      register: no_proxy
      ignore_errors: true
      changed_when: false
    - name: Set fact for HTTP_PROXY
      set_fact:
        k8s_http_proxy: "{{ http_proxy.stdout | default('') }}"
      when:
        - http_proxy.stdout != ""
    - name: Set fact for HTTPS_PROXY
      set_fact:
        k8s_https_proxy: "{{ https_proxy.stdout | default('') }}"
      when:
        - https_proxy.stdout != ""
    - name: Set fact for NO_PROXY
      set_fact:
        k8s_no_proxy: "{{ no_proxy.stdout | default('') }}"
      when: no_proxy.stdout != ""
    - name: template rke2 http proxy
      ansible.builtin.template:
        src: "templates/{{ k8s_type }}/proxy.j2"
        dest: "/etc/default/{{ k8s_type }}-{{ node_type }}"
        mode: 0644
      when:
        - http_proxy.stdout != ""
        - https_proxy.stdout != ""

43
tasks/shared/charts.yml Normal file

@@ -0,0 +1,43 @@
---
- name: beginning chart deployments
  run_once: true
  tags:
    - k8s
    - k8s-apply-charts
  block:
    - name: download helm install script
      ansible.builtin.get_url:
        url: "{{ k8s_helm_install_url }}"
        timeout: 120
        dest: "{{ k8s_helm_install_script }}"
        owner: root
        group: root
        mode: 0700
    - name: install helm
      ansible.builtin.shell: "{{ k8s_helm_install_script }}"
      environment:
        PATH: "{{ ansible_env.PATH }}:/usr/local/bin"
    - name: add chart repos
      kubernetes.core.helm_repository:
        name: "{{ item.repo_name }}"
        repo_url: "{{ item.repo_url }}"
      environment:
        PATH: "{{ ansible_env.PATH }}:/usr/local/bin"
      loop: "{{ k8s_charts }}"
      when:
        - item.repo_name is defined
        - item.repo_url is defined
    - name: apply helm charts
      ansible.builtin.shell: |
        helm repo update
        helm upgrade --kubeconfig {{ k8s_config_path }}/{{ k8s_type }}.yaml --namespace {{ item.namespace | d('default') }} --create-namespace --install {{ item.name }} {{ item.chart }} {% if item.chart_version is defined %}--version {{ item.chart_version }}{% endif %} {% if item.settings is defined %}{% for setting in item.settings %}--set {{ setting.key }}={{ setting.value }} {% endfor %}{% endif %}
      environment:
        PATH: "{{ ansible_env.PATH }}:/usr/local/bin"
      loop: "{{ k8s_charts }}"
      when:
        - item.name is defined
        - item.chart is defined

25
tasks/shared/kubeconf.yml Normal file

@@ -0,0 +1,25 @@
---
- name: fetch and update kubeconf
  run_once: true
  tags:
    - k8s
    - k8s-get-kubeconf
  block:
    - name: fetch kubeconfig
      ansible.builtin.fetch:
        src: "{{ k8s_config_path }}/{{ k8s_type }}.yaml"
        dest: "~/.kube/config-{{ k8s_cluster_name }}.yaml"
        flat: yes
    - name: update local kubeconfig
      delegate_to: localhost
      connection: local
      become: false
      ansible.builtin.shell: |
        yq e '.clusters[].name = "{{ k8s_cluster_name }}"' -i ~/.kube/config-{{ k8s_cluster_name }}.yaml
        yq e '.contexts[].name = "{{ k8s_cluster_context | d(k8s_cluster_name) }}"' -i ~/.kube/config-{{ k8s_cluster_name }}.yaml
        yq e '(.clusters[] | select(.name == "{{ k8s_cluster_name }}")).cluster.server = "https://{{ k8s_cluster_url }}:{{ k8s_api_port }}"' -i ~/.kube/config-{{ k8s_cluster_name }}.yaml
        yq e '(.contexts[] | select(.name == "{{ k8s_cluster_context | d(k8s_cluster_name) }}")).context.cluster = "{{ k8s_cluster_name }}"' -i ~/.kube/config-{{ k8s_cluster_name }}.yaml
      when:
        - k8s_kubeconfig_update_context

42
tasks/shared/manifests.yml Normal file

@@ -0,0 +1,42 @@
---
- name: beginning manifest deployments
  run_once: true
  tags:
    - k8s
    - k8s-apply-manifests
  block:
    - name: apply local manifests
      ansible.builtin.copy:
        src: "manifests/{{ item.source | default(item.name + '.yaml') }}"
        dest: "{{ k8s_manifests_path }}/{{ item.name }}.yaml"
        owner: root
        group: root
        mode: 0600
      loop: "{{ k8s_manifests }}"
      when:
        - item.type is undefined or item.type == "file"
        - item.source is defined or item.name is defined
    - name: apply template manifests
      ansible.builtin.template:
        src: "templates/{{ item.source | default('shared/' + item.name + '.yaml') }}.j2"
        dest: "{{ k8s_manifests_path }}/{{ item.name }}.yaml"
        mode: 0600
      loop: "{{ k8s_manifests }}"
      when:
        - item.type is defined and item.type == "template"
        - item.source is defined or item.name is defined
    - name: apply remote manifests
      ansible.builtin.get_url:
        url: "{{ item.source }}"
        timeout: 120
        dest: "{{ k8s_manifests_path }}/{{ item.name }}.yaml"
        owner: root
        group: root
        mode: 0600
      loop: "{{ k8s_manifests }}"
      when:
        - item.type is defined and item.type == "url"
        - item.source is defined

35
tasks/shared/secrets.yml Normal file

@@ -0,0 +1,35 @@
---
- name: beginning secret deployments
  run_once: true
  no_log: true
  tags:
    - k8s
    - k8s-apply-secrets
  block:
    # this could be adapted to allow for any encrypted manifest type files
    - name: apply locally stored secrets
      delegate_to: localhost
      connection: local
      become: false
      ansible.builtin.shell: "{{ role_path }}/files/scripts/get-secret.sh"
      args:
        chdir: "{{ playbook_dir }}"
      environment:
        PLAYBOOK_DIR: "{{ playbook_dir }}"
        KUBECONTEXT: "{{ k8s_cluster_name }}"
        SECRET: "{{ item.source | default(item.name + '.yaml') }}"
      loop: "{{ k8s_secrets }}"
      when:
        - item.type is undefined or item.type == "file"
        - item.values is defined
    - name: apply template based secrets
      ansible.builtin.template:
        src: "templates/shared/secret.yaml.j2"
        dest: "{{ k8s_manifests_path }}/{{ item.name }}-secret.yaml"
        mode: 0600
      loop: "{{ k8s_secrets }}"
      when:
        - item.type is defined and item.type == "template"
        - item.values is defined

6
tasks/shared/start.yml Normal file

@@ -0,0 +1,6 @@
---
- name: enable "{{ k8s_type }}" service
  ansible.builtin.systemd:
    name: "{{ k8s_type }}-{{ node_type }}"
    state: restarted
    enabled: true

85
templates/k3s/config.yaml.j2 Normal file

@@ -0,0 +1,85 @@
# template generated via ansible by {{ local_user }} at {{ ansible_date_time.time }} {{ ansible_date_time.date }}
token: {{ k8s_cluster_token }}
{% if k8s_cluster_url is defined and k8s_node_type != "bootstrap" -%}
server: https://{{ k8s_cluster_url }}:{{ k8s_api_port }}
{% endif -%}
{% if k8s_node_type == "bootstrap" -%}
cluster-init: true
{% endif -%}
debug: {{ k8s_debug | string | lower }}
{% if k8s_node_type != "agent" -%}
write-kubeconfig-mode: {{ k8s_config_mode }}
{% if k8s_tls_san is defined -%}
tls-san:
{% for san in k8s_tls_san -%}
- "{{ san }}"
{% endfor -%}
{% elif k8s_cluster_url is defined -%}
tls-san: {{ k8s_cluster_url }}
{% endif %}
{% if k8s_selinux is defined and k8s_selinux -%}
selinux: true
{% endif -%}
{% if k8s_disable_kube_proxy -%}
disable-kube-proxy: true
{% endif -%}
{% if k8s_disable_network_policy is defined and k8s_disable_network_policy -%}
disable-network-policy: true
{% endif %}
{% if k8s_disable is defined %}
# disable builtin services
disable:
{% for disable in k8s_disable %}
- {{ disable }}
{% endfor -%}
{% endif -%}
{% endif %}
{% if k8s_flannel_backend is defined and k8s_node_type != "agent" -%}
# configure or disable flannel cni
flannel-backend: {{ k8s_flannel_backend | d('vxlan') }}
flannel-ipv6-masq: {{ k8s_flannel_ipv6_masq | d('false') }}
flannel-external-ip: {{ k8s_flannel_external_ip | d('false') }}
{% endif %}
# node network
{% if k8s_node_ip is defined -%}
node-ip: {{ k8s_node_ip }}
{% endif -%}
{% if k8s_external_ip is defined -%}
node-external-ip: {{ k8s_external_ip }}
{% endif -%}
{% if k8s_node_taints is defined -%}
# initial node taints
node-taint:
{% for taint in k8s_node_taints -%}
- "{{ taint.key }}={{ taint.value }}:{{ taint.effect }}"
{% endfor -%}
{% endif %}
{% if k8s_node_lables is defined -%}
# initial node labels
node-label:
{% for label in k8s_node_lables -%}
- "{{ label.key }}={{ label.value }}"
{% endfor -%}
{% endif %}
{% if k8s_kubelet_args is defined %}
# kubelet configuration
kubelet-arg:
{% for kubelet_arg in k8s_kubelet_args %}
- "{{ kubelet_arg }}"
{% endfor -%}
{% endif %}
{% if k8s_additional_configs is defined %}
{% for k8s_config in k8s_additional_configs %}
{{ k8s_config.key }}:
- "{{ k8s_config.value }}"
{% endfor -%}
{% endif %}

5
templates/k3s/kubelet.config.j2 Normal file

@@ -0,0 +1,5 @@
# template generated via ansible by {{ local_user }} at {{ ansible_date_time.time }} {{ ansible_date_time.date }}
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: {{ k8s_pod_limit }}

4
templates/rke2/canal.conf.j2 Normal file

@@ -0,0 +1,4 @@
# template generated via ansible by {{ local_user }} at {{ ansible_date_time.time }} {{ ansible_date_time.date }}
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:flannel*

13
templates/rke2/cni/calico-config.yaml.j2 Normal file

@@ -0,0 +1,13 @@
# template generated via ansible by {{ local_user }} at {{ ansible_date_time.time }} {{ ansible_date_time.date }}
# /var/lib/rancher/rke2/server/manifests/rke2-calico-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-calico
  namespace: kube-system
spec:
  valuesContent: |-
    installation:
      calicoNetwork:
        mtu: 9000

19
templates/rke2/cni/canal-config.yaml.j2 Normal file

@@ -0,0 +1,19 @@
# template generated via ansible by {{ local_user }} at {{ ansible_date_time.time }} {{ ansible_date_time.date }}
# /var/lib/rancher/rke2/server/manifests/rke2-canal-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-canal
  namespace: kube-system
spec:
  valuesContent: |-
    flannel:
{% if k8s_canal_wireguard is defined and k8s_canal_wireguard %}
      backend: "wireguard"
{% else %}
{% if k8s_cni_interface is defined %}
      iface: "{{ k8s_cni_interface }}"
{% endif %}
{% endif %}

27
templates/rke2/cni/cilium-config.yaml.j2 Normal file

@@ -0,0 +1,27 @@
# template generated via ansible by {{ local_user }} at {{ ansible_date_time.time }} {{ ansible_date_time.date }}
# /var/lib/rancher/rke2/server/manifests/rke2-cilium-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
{% if k8s_cilium_eni is defined and k8s_cilium_eni %}
    eni:
      enabled: true
{% endif -%}
{% if k8s_disable_kube_proxy %}
    kubeProxyReplacement: true
    k8sServiceHost: {{ k8s_cluster_url }}
    k8sServicePort: {{ k8s_api_port }}
{% endif -%}
{% if k8s_cilium_hubble is defined and k8s_cilium_hubble %}
    hubble:
      enabled: true
      relay:
        enabled: true
      ui:
        enabled: true
{% endif -%}

74
templates/rke2/config.yaml.j2 Normal file

@@ -0,0 +1,74 @@
# template generated via ansible by {{ local_user }} at {{ ansible_date_time.time }} {{ ansible_date_time.date }}
token: {{ k8s_cluster_token }}
{% if k8s_cluster_url is defined and k8s_node_type != "bootstrap" -%}
server: https://{{ k8s_cluster_url }}:{{ k8s_supervisor_port }}
{% endif -%}
debug: {{ k8s_debug | string | lower }}
{% if k8s_node_type != "agent" -%}
write-kubeconfig-mode: {{ k8s_config_mode }}
{% if k8s_tls_san is defined -%}
tls-san:
{% for san in k8s_tls_san -%}
- "{{ san }}"
{% endfor -%}
{% elif k8s_cluster_url is defined -%}
tls-san: {{ k8s_cluster_url }}
{% endif %}
{% if k8s_selinux is defined and k8s_selinux -%}
selinux: true
{% endif -%}
{% if k8s_cni_type is defined -%}
cni: {{ k8s_cni_type }}
{% endif -%}
{% if k8s_disable_kube_proxy %}
disable-kube-proxy: true
{% endif -%}
{% if k8s_disable is defined %}
# disable builtin services
disable:
{% for disable in k8s_disable %}
- {{ disable }}
{% endfor -%}
{% endif -%}
{% endif %}
# node network
{% if k8s_node_ip is defined -%}
node-ip: {{ k8s_node_ip }}
{% endif -%}
{% if k8s_external_ip is defined -%}
node-external-ip: {{ k8s_external_ip }}
{% endif -%}
{% if k8s_node_taints is defined -%}
# initial node taints
node-taint:
{% for taint in k8s_node_taints -%}
- "{{ taint.key }}={{ taint.value }}:{{ taint.effect }}"
{% endfor -%}
{% endif %}
{% if k8s_node_lables is defined -%}
# initial node labels
node-label:
{% for label in k8s_node_lables -%}
- "{{ label.key }}={{ label.value }}"
{% endfor -%}
{% endif %}
{% if k8s_kubelet_args is defined %}
# kubelet configuration
kubelet-arg:
{% for kubelet_arg in k8s_kubelet_args %}
- "{{ kubelet_arg }}"
{% endfor -%}
{% endif %}
{% if k8s_additional_configs is defined %}
{% for k8s_config in k8s_additional_configs %}
{{ k8s_config.key }}:
- "{{ k8s_config.value }}"
{% endfor -%}
{% endif %}

5
templates/rke2/proxy.j2 Normal file

@@ -0,0 +1,5 @@
# template generated via ansible by {{ local_user }} at {{ ansible_date_time.time }} {{ ansible_date_time.date }}
HTTP_PROXY={{ k8s_http_proxy | d() }}
HTTPS_PROXY={{ k8s_https_proxy | d() }}
NO_PROXY={{ k8s_no_proxy | d() }}


@@ -0,0 +1,24 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: {{ item.name }}
spec:
  acme:
    server: {{ item.server | d('https://acme-v02.api.letsencrypt.org/directory') }}
    email: {{ item.email | d(k8s_acme_email) }}
    privateKeySecretRef:
      name: {{ item.name }}-private-key
    solvers:
{% for solver in item.solvers %}
{% if solver.type == "http" %}
      - http01:
          ingress:
            class: {{ solver.ingress }}
{% elif solver.type == "dns" %}
      - dns01:
          {{ solver.provider }}:
            {{ solver.tokenref }}:
              name: {{ solver.secret_name }}
              key: {{ solver.secret_key }}
{% endif -%}
{% endfor -%}

161
templates/shared/local-path-storage.yaml.j2 Normal file

@@ -0,0 +1,161 @@
apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: local-path-provisioner-role
  namespace: local-path-storage
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "create", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
  - apiGroups: [""]
    resources: ["nodes", "persistentvolumeclaims", "configmaps", "pods", "pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "patch", "update", "delete"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: local-path-provisioner-bind
  namespace: local-path-storage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
        - name: local-path-provisioner
          image: {{ k8s_local_path_image | default('rancher/local-path-provisioner:master-head') }}
          imagePullPolicy: {{ k8s_local_path_image_pull_policy | default('IfNotPresent') }}
          command:
            - local-path-provisioner
            - --debug
            - start
            - --config
            - /etc/config/config.json
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "{{ k8s_local_path_default_class | default('true') }}"
provisioner: rancher.io/local-path
volumeBindingMode: {{ k8s_local_path_bind_mode | default('WaitForFirstConsumer') }}
reclaimPolicy: {{ k8s_local_path_reclaim_policy | default('Retain') }}
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["{{ k8s_local_path_dir | default('/opt/local-path-provisioner') }}"]
        }
      ]
    }
  setup: |-
    #!/bin/sh
    set -eu
    mkdir -m 0777 -p "$VOL_DIR"
  teardown: |-
    #!/bin/sh
    set -eu
    rm -rf "$VOL_DIR"
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      priorityClassName: {{ k8s_local_path_priority_class | default('system-node-critical') }}
      tolerations:
        - key: node.kubernetes.io/disk-pressure
          operator: Exists
          effect: NoSchedule
      containers:
        - name: helper-pod
          image: busybox
          imagePullPolicy: {{ k8s_local_path_image_pull_policy | default('IfNotPresent') }}

9
templates/shared/secret.yaml.j2 Normal file

@@ -0,0 +1,9 @@
apiVersion: v1
kind: Secret
metadata:
  name: {{ item.name }}
  namespace: {{ item.namespace | d('default') }}
data:
{% for secret in item.secrets %}
  {{ secret.key }}: {{ secret.value | base64_encode }}
{% endfor -%}

4
vars/agent-label.yml Normal file

@@ -0,0 +1,4 @@
---
k8s_node_lables:
  - key: role
    value: agent

5
vars/server-taint.yml Normal file

@@ -0,0 +1,5 @@
---
k8s_node_taints:
  - key: CriticalAddonsOnly
    value: "true"
    effect: NoExecute

4
vars/systems/RedHat.yml Normal file

@@ -0,0 +1,4 @@
---
k8s_selinux: true
k8s_has_nm: true
#k8s_cmd_path: /bin

12
vars/types/k3s.yml Normal file

@@ -0,0 +1,12 @@
---
# See https://docs.k3s.io/
k8s_default_install_url: https://get.k3s.io
k8s_default_channel_url: https://update.k3s.io/v1-release/channels
k8s_env:
  INSTALL_K3S_CHANNEL_URL: "{{ k8s_channel_url | d(k8s_default_channel_url) }}"
  INSTALL_K3S_CHANNEL: "{{ k8s_channel | d('stable') }}"
  INSTALL_K3S_VERSION: "{{ k8s_version | d() }}"
  INSTALL_K3S_EXEC: "{{ node_type | d('server') }}"
  INSTALL_K3S_SKIP_START: "{{ k8s_skip_start | d('false') }}"

13
vars/types/rke2.yml Normal file

@@ -0,0 +1,13 @@
---
# See https://docs.rke2.io/
k8s_default_install_url: https://get.rke2.io
k8s_default_channel_url: https://update.rke2.io/v1-release/channels
#k8s_cmd_path: /bin
k8s_env:
  INSTALL_RKE2_CHANNEL_URL: "{{ k8s_channel_url | d(k8s_default_channel_url) }}"
  INSTALL_RKE2_CHANNEL: "{{ k8s_channel | d('stable') }}"
  INSTALL_RKE2_VERSION: "{{ k8s_version | d() }}"
  INSTALL_RKE2_TYPE: "{{ node_type | d('server') }}"