Add Job and secrets support for k8s-kind deployments #995
Reference: cerc-io/stack-orchestrator#995
Part of https://plan.wireit.in/deepstack/browse/VUL-315
The secrets: {} key added by init_operation for k8s deployments became the last key in the spec file, breaking the raw string append that assumed network: was always last. Replace with a proper YAML load/modify/dump.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

pods_in_deployment() and containers_in_pod() hardcoded namespace="default", but pods are created in the deployment-specific namespace (laconic-{cluster-id}). This caused logs() to report "Pods not running" even when pods were healthy. Add a namespace parameter to both functions and pass self.k8s_namespace from the logs() caller.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

a312bb5ee7 to 183a188874

The deployment control test queried pods with raw kubectl but did not specify the namespace. Since pods now live in laconic-{deployment_id} instead of default, the query returned empty results.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Add a job compose file for the test stack and extend the k8s deploy test to verify the new features:
- Namespace isolation: the pod exists in laconic-{id}, not default
- Stack labels: the app.kubernetes.io/stack label is set on pods
- Job completion: test-job runs to completion (status.succeeded=1)
- Secrets: a secrets: key in the spec results in an envFrom secretRef on the pod

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

deploy init already writes 'secrets: {}' into the spec file. The test was appending a second secrets block via heredoc, which ruamel.yaml rejects as a duplicate key. Use sed to replace the empty value instead.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Job pod templates used the same app={deployment_id} label as deployment pods, causing pods_in_deployment() to return both. This made the logs command warn about multiple pods and pick the wrong one. Use app={deployment_id}-job for job pod templates so they are not matched by pods_in_deployment(). The Job metadata itself retains the original app label for stack-level queries.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>