fix(k8s): persist Caddy TLS certificates with PVC

Caddy ingress was using emptyDir for /data storage, causing TLS
certificates to be lost on pod restarts or cluster recreations.
This led to Let's Encrypt rate limit issues from repeatedly
requesting new certificates.

Add a PersistentVolumeClaim for Caddy's data directory to persist
ACME certificates across redeployments.
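A minimal sketch of how the fix can be verified on a cluster, assuming the manifests from this commit are applied. The deployment name `caddy-ingress-controller` and the `/data/caddy/certificates` path are assumptions based on the labels in the diff and Caddy's default data layout; adjust them to match the actual deployment.

```shell
# Confirm the claim from this commit is bound to a volume.
kubectl -n caddy-system get pvc caddy-data-pvc

# List the certificates Caddy has obtained so far
# (path assumes Caddy's default storage layout under /data).
kubectl -n caddy-system exec deploy/caddy-ingress-controller -- \
  ls /data/caddy/certificates

# Restart the pod, then check that the same certificate files are
# still present rather than being re-requested from Let's Encrypt.
kubectl -n caddy-system rollout restart deploy/caddy-ingress-controller
kubectl -n caddy-system rollout status deploy/caddy-ingress-controller
kubectl -n caddy-system exec deploy/caddy-ingress-controller -- \
  ls /data/caddy/certificates
```

If the second `ls` shows the same files with unchanged timestamps, the certificates survived the restart and no new ACME orders were placed.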

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
A. F. Dudley 2026-01-24 18:57:55 -05:00
parent 55b76b9b57
commit d5e1a6652c

@@ -243,10 +243,26 @@ spec:
           mountPath: /config
       volumes:
       - name: caddy-data
-        emptyDir: {}
+        persistentVolumeClaim:
+          claimName: caddy-data-pvc
       - name: caddy-config
         emptyDir: {}
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: caddy-data-pvc
+  namespace: caddy-system
+  labels:
+    app.kubernetes.io/name: caddy-ingress-controller
+    app.kubernetes.io/instance: caddy-ingress
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 1Gi
 ---
 apiVersion: networking.k8s.io/v1
 kind: IngressClass
 metadata: