fix(k8s): use distinct app label for job pods

Job pod templates used the same app={deployment_id} label as
deployment pods, causing pods_in_deployment() to return both.
This made the logs command warn about multiple pods and pick
the wrong one.
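
For context, a minimal sketch of what pods_in_deployment() is assumed to do (the signature and client usage here are illustrative, not the project's actual code): it lists pods by the app label, so before this fix job pods matched the same selector as deployment pods.

from kubernetes import client

def pods_in_deployment(core_api: client.CoreV1Api, namespace: str, app_name: str) -> list[str]:
    # Assumed behaviour: select every pod whose app label equals the
    # deployment id. Job pods carrying the same label were returned too.
    pods = core_api.list_namespaced_pod(
        namespace=namespace,
        label_selector=f"app={app_name}",
    )
    return [pod.metadata.name for pod in pods.items]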

Use app={deployment_id}-job for job pod templates so they are
not matched by pods_in_deployment(). The Job metadata itself
retains the original app label for stack-level queries.
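
As an illustration of that split (the builder function below is hypothetical, not the project's actual helper), the Job object keeps app={deployment_id} on its own metadata while the pod template it wraps carries the -job suffix:

from kubernetes import client

def build_job(app_name: str, job_name: str, template: client.V1PodTemplateSpec) -> client.V1Job:
    # Hypothetical helper: the Job metadata keeps the original app label so
    # stack-level queries still find it, while the pod template (labelled
    # app={app_name}-job, see the diff below) stays out of pods_in_deployment().
    return client.V1Job(
        metadata=client.V1ObjectMeta(name=job_name, labels={"app": app_name}),
        spec=client.V1JobSpec(template=template),
    )

Job pods can still be listed explicitly with a selector such as app={deployment_id}-job, which no longer overlaps with the deployment-pod query.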

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: Prathamesh Musale
Date:   2026-03-10 06:26:03 +00:00
parent 68ef9de016
commit 91f4e5fe38


@@ -710,7 +710,9 @@ class ClusterInfo:
         elif job_name.endswith(".yaml"):
             job_name = job_name[: -len(".yaml")]
-        pod_labels = {"app": self.app_name, **({"app.kubernetes.io/stack": self.stack_name} if self.stack_name else {})}
+        # Use a distinct app label for job pods so they don't get
+        # picked up by pods_in_deployment() which queries app={app_name}.
+        pod_labels = {"app": f"{self.app_name}-job", **({"app.kubernetes.io/stack": self.stack_name} if self.stack_name else {})}
         template = client.V1PodTemplateSpec(
             metadata=client.V1ObjectMeta(
                 labels=pod_labels