Running Jenkins as Pods on a Kubernetes Cluster
A practical walkthrough of how I deployed Jenkins on Kubernetes using Helm — storage, plugins, agents, and how the setup fits into a broader CI/CD platform.
Case Study: Converting Jenkins into a Kubernetes-Native CI/CD Platform
The goal for this deployment was to move from a single VM-based Jenkins controller to a Kubernetes-native setup that could scale agents on demand and be managed declaratively.
Architecture
- Jenkins controller deployed via Helm chart as a StatefulSet with persistent storage.
- Jenkins agents running as Kubernetes pods using dynamic pod templates.
- Ingress configured to expose the Jenkins UI securely on the internal network.
- Configuration-as-code (JCasC) used for plugins, credentials, and system settings; a values.yaml sketch follows this list.
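To make that concrete, below is a minimal values.yaml sketch, assuming the community jenkins/jenkins chart from https://charts.jenkins.io. The key names follow that chart's layout, but the hostname, plugin pins, storage class, and sizes are placeholders for a lab, so verify them against your chart version.

```yaml
# values.yaml -- minimal sketch for the community jenkins/jenkins chart.
# Key names follow that chart's layout; values here are lab placeholders.
controller:
  # Under-provisioning the controller shows up as a slow UI and a stalled
  # build queue (see Lessons Learned), so set explicit requests/limits.
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 4Gi
  ingress:
    enabled: true
    hostName: jenkins.lab.internal    # placeholder hostname
  installPlugins:
    - kubernetes:latest               # dynamic pod-based agents
    - configuration-as-code:latest    # declarative system config (JCasC)
    - workflow-aggregator:latest      # pipeline support

# The chart renders the controller as a StatefulSet; this PVC is JENKINS_HOME.
persistence:
  enabled: true
  storageClass: standard              # placeholder storage class
  size: 20Gi
```

With the repo added via `helm repo add jenkins https://charts.jenkins.io`, the whole deployment reduces to `helm upgrade --install jenkins jenkins/jenkins -n jenkins -f values.yaml`, which is what makes dev, QA, and the personal lab reproducible from the same file.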
Key Decisions
- StatefulSet + PVC: Ensured Jenkins home survived pod restarts and node rotations.
- Pod-based build agents: Each pipeline ran in an isolated pod with the required tools baked into its Docker image (a pod template sketch follows this list).
- Helm values: Centralized all configuration in values.yaml for reproducible environments (dev, QA, personal lab).
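As an illustration of the pod-based agent decision, here is a sketch of one such pod template, assuming the Kubernetes plugin and a Java build; the file path, image tag, and resource numbers are hypothetical choices, not part of the original setup.

```yaml
# pod-templates/maven.yaml -- hypothetical path; one template per build type.
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: maven
      image: maven:3.9-eclipse-temurin-17  # build tools baked into the image
      command: ["sleep"]                   # keep the container alive so the
      args: ["infinity"]                   # plugin can exec build steps into it
      resources:
        requests:
          cpu: 500m
          memory: 1Gi
        limits:
          cpu: "2"
          memory: 2Gi
```

A declarative Jenkinsfile can then reference it with `agent { kubernetes { yamlFile 'pod-templates/maven.yaml' } }` and run steps inside the container via `container('maven') { sh 'mvn -B verify' }`; the pod exists only for the duration of the build.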
Lessons Learned
- Resource requests/limits for the controller are critical to avoid a sluggish UI and build-queue delays.
- Using separate Docker images for different build types (Java, Node, Infra) keeps agents clean and predictable.
- Storing Jenkins configuration in Git reduces “snowflake” servers and makes recovery much faster; a JCasC fragment is sketched below.
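To show what config-in-Git looks like in practice, here is an illustrative JCasC fragment of the kind this setup keeps in the repo; the namespace, URL, and agent cap are placeholders, and the chart can load such fragments through controller.JCasC.configScripts.

```yaml
# casc/jenkins.yaml -- illustrative JCasC fragment kept in Git.
jenkins:
  systemMessage: "Managed by configuration-as-code; change via Git, not the UI."
  numExecutors: 0               # run no builds on the controller itself
  clouds:
    - kubernetes:
        name: kubernetes
        namespace: jenkins                                        # placeholder
        jenkinsUrl: http://jenkins.jenkins.svc.cluster.local:8080 # placeholder
        containerCapStr: "10"   # cap on concurrent agent pods
```

Recovery then amounts to reinstalling the chart against the Git copies of values.yaml and the JCasC files, plus restoring the JENKINS_HOME volume if build history matters.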
This setup now acts as a central CI/CD hub that can build Maven projects and Docker images and trigger Helm- or FluxCD-based deployments into Kubernetes.