DevOps & Cloud · Real-world platforms

Hi, I’m Suresh Kumar.
I build, optimize, and automate cloud-native DevOps platforms.

This site is a record of platforms I’ve actually deployed — Jenkins, SonarQube, Nexus, FluxCD, and Ansible-based automation — plus the notes and lessons that came from running them in real environments.

Deployed DevOps platforms: Kubernetes · Jenkins · SonarQube · Nexus · FluxCD · Ansible
Suresh Kumar
DevOps Engineer

About me

Experienced in AWS, Azure, and end-to-end DevOps automation.

DevOps Engineer focused on designing, building, and operating cloud-native platforms on AWS and Azure, with an emphasis on CI/CD, infrastructure as code, and secure, observable delivery pipelines.

I work across the full DevOps stack, from configuration management and container orchestration to monitoring, logging, and incident response. My focus is on turning manual workflows into reliable pipelines, standardizing environments, and using automation to deliver faster, safer, and more scalable releases.

Deployed DevOps Platforms

Platforms I’ve already deployed and used in real environments.

Deployed · Ansible

Automated DevOps Tooling for Linux Servers

This repository contains Ansible playbooks I use to bootstrap Linux servers with a DevOps toolchain. It standardizes installation and configuration so every server is ready for CI/CD, builds, and deployments without manual setup.

Ansible · Linux · Tooling · Automation
Deployed · Jenkins

Jenkins CI/CD Platform on Kubernetes Using Helm

I used this Helm-based configuration to deploy Jenkins as pods on a Kubernetes cluster. It includes persistence, plugin handling, and agent setup so pipelines run inside the cluster with scalable executors.

Jenkins · Helm · Kubernetes · CI/CD
Deployed · SonarQube

SonarQube Code Quality Platform on Kubernetes

This Helm setup is what I used to deploy SonarQube as a Kubernetes workload. It’s designed to plug into CI/CD pipelines and provide continuous code quality and security analysis with proper storage and resource configuration.
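
For reference, the values I tune in a setup like this look roughly as follows. This is a sketch, not the exact config: key names follow community SonarQube chart conventions and vary by chart version, so verify against "helm show values" for your chart.

    # Illustrative values.yaml for a SonarQube Helm deployment. Key names
    # follow community chart conventions and vary by chart version.
    persistence:
      enabled: true            # keep Elasticsearch indexes across restarts
      size: 10Gi
    postgresql:
      enabled: true            # bundled database; swap for an external one in prod
    resources:
      requests:
        cpu: 500m
        memory: 2Gi
      limits:
        memory: 4Gi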

SonarQube · Helm · Kubernetes · Code Quality
Deployed · FluxCD

GitOps Platform with FluxCD for Kubernetes

This FluxCD repository represents the GitOps model I used to manage Kubernetes workloads declaratively. It separates infrastructure, shared services, and apps so changes flow from Git to clusters in a controlled way.

FluxCD · GitOps · Helm · Kubernetes
Deployed · Nexus

Nexus Repository Manager on Kubernetes Using Helm

This Helm configuration is what I use to deploy Nexus Repository Manager as a pod on a Kubernetes cluster. I use it as a private artifact registry for Maven artifacts and container images, integrated directly with Jenkins and CI/CD pipelines.
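
The values file has a similar shape to the other platform charts. Treat this sketch as illustrative only: Nexus chart schemas differ quite a bit between chart sources, so it shows the shape of the config rather than exact keys.

    # Illustrative values.yaml for a Nexus deployment. Exact key names
    # depend on which chart you use; this is the shape, not the schema.
    persistence:
      enabled: true            # blob stores and metadata must survive restarts
      size: 50Gi
    service:
      type: ClusterIP          # reached through Ingress, not a NodePort
    resources:
      requests:
        memory: 2Gi            # Nexus is JVM-based and memory-hungry
      limits:
        memory: 4Gi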

Nexus · Artifact Registry · Helm · Kubernetes
Deployed · Maven & Helm

Maven Application Built, Containerized & Deployed on Kubernetes

This repository shows a full path from a Maven project to a running application on Kubernetes: build with Maven, package into a Docker image, and deploy using a Helm chart so the app runs as pods in the cluster.
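
The chart side reduces to pointing the Deployment at the image the Maven and Docker stages produced. A minimal sketch using the standard "helm create" scaffold keys; the registry, image name, and port are placeholders:

    # Illustrative values.yaml for the application chart, using the
    # standard "helm create" scaffold keys. Names are placeholders.
    replicaCount: 2
    image:
      repository: registry.example.com/demo-app   # pushed by the Docker build step
      tag: "1.0.0"                                # usually the Maven project version
    service:
      type: ClusterIP
      port: 8080               # the port the packaged app listens on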

Maven · Docker · Helm · Kubernetes
Live App

GitHub Actions CI/CD Pipeline

A browser-based image generator deployed as a static site on GitHub Pages. Users can enter prompts and generate images directly from the web UI, showcasing how I ship and host lightweight front-end applications.
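
The deployment itself is a small GitHub Actions workflow built on GitHub's official Pages actions. A sketch of its shape; the ./site path is a placeholder for wherever the built files land:

    # Illustrative .github/workflows/pages.yml deploying the static site
    # to GitHub Pages on every push to main, via GitHub's official actions.
    name: Deploy to GitHub Pages
    on:
      push:
        branches: [main]
    permissions:
      contents: read
      pages: write             # required by actions/deploy-pages
      id-token: write
    jobs:
      deploy:
        runs-on: ubuntu-latest
        environment:
          name: github-pages
        steps:
          - uses: actions/checkout@v4
          - uses: actions/upload-pages-artifact@v3
            with:
              path: ./site     # placeholder for the built site folder
          - uses: actions/deploy-pages@v4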

Web App · Frontend · GitHub Pages · Side Project

Case Studies & Notes

Write-ups based on these deployed platforms and what I learned from them.

Running Jenkins as Pods on a Kubernetes Cluster

A practical walkthrough of how I deployed Jenkins on Kubernetes using Helm — storage, plugins, agents, and how the setup fits into a broader CI/CD platform.

Case Study: Converting Jenkins into a Kubernetes-Native CI/CD Platform

The goal for this deployment was to move from a single VM-based Jenkins master to a Kubernetes-native setup that could scale agents on demand and be managed declaratively.

Architecture

  • Jenkins controller deployed via Helm chart as a StatefulSet with persistent storage.
  • Jenkins agents running as Kubernetes pods using dynamic pod templates (sketched after this list).
  • Ingress configured to expose the Jenkins UI securely on the internal network.
  • Configuration-as-code used for plugins, credentials, and system settings.
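
On the agent side, the Kubernetes plugin accepts pod templates as raw pod YAML. A minimal sketch; the image and resources are examples, and the plugin injects the jnlp agent container automatically:

    # Illustrative agent pod template, written as the raw pod YAML the
    # Jenkins Kubernetes plugin accepts. Image and resources are examples.
    apiVersion: v1
    kind: Pod
    spec:
      containers:
        - name: maven
          image: maven:3.9-eclipse-temurin-17   # build container for Java pipelines
          command: ["sleep"]
          args: ["infinity"]                    # idle until pipeline steps run in it
          resources:
            requests:
              cpu: 500m
              memory: 1Gi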

Key Decisions

  • StatefulSet + PVC: Ensured Jenkins home survived pod restarts and node rotations.
  • Pod-based build agents: Each pipeline ran in an isolated container with required tools baked into Docker images.
  • Helm values: Centralized all configuration in values.yaml for reproducible environments (dev, QA, personal lab); see the sketch below.
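
A trimmed sketch of what such a values.yaml covers, assuming the official Jenkins chart (jenkinsci/helm-charts). Plugin versions are placeholders, and key names can shift between chart releases:

    # Illustrative values.yaml for the official Jenkins chart
    # (jenkinsci/helm-charts). Plugin versions are placeholders.
    controller:
      resources:
        requests:
          cpu: "1"
          memory: 2Gi          # undersizing shows up as slow UI and queue delays
        limits:
          memory: 4Gi
      installPlugins:
        - kubernetes:<version>             # dynamic pod agents; pin a real version
        - configuration-as-code:<version>  # JCasC for declarative config
    persistence:
      enabled: true
      size: 20Gi               # Jenkins home survives pod restarts and node rotations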

Lessons Learned

  • Resource requests/limits for the controller are critical to avoid slow UI and queue delays.
  • Using separate Docker images for different build types (Java, Node, Infra) keeps agents clean and predictable.
  • Storing Jenkins config in Git reduces “snowflake” servers and makes recovery much faster.

This setup now acts as a central CI/CD hub that can build Maven projects, Docker images, and trigger Helm or FluxCD-based deployments into Kubernetes.

GitOps with FluxCD: How I Structure My Repos

Lessons from structuring a FluxCD GitOps repo around environments, shared services, and applications — and how that helped with rollbacks and audits in real use.

Case Study: FluxCD GitOps Repository Structure

The FluxCD setup is based on separating infrastructure, shared services, and applications so that each type of change has a clear owner and blast radius.

Repo Layout

  • clusters/<env>/ – Entry point for each Kubernetes cluster (development, shared services, etc.); see the Kustomization sketch after this list.
  • infrastructure/ – Core components such as Ingress, MetalLB, monitoring stack, and storage classes.
  • shared-services/ – Jenkins, SonarQube, Nexus, and other platform tools.
  • apps/ – Application HelmReleases and Kustomizations per namespace or team.
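
Each cluster entry point is essentially one Flux Kustomization per layer. A sketch of the one that syncs infrastructure/; the name and interval are examples, and the sourceRef matches a standard flux bootstrap:

    # Illustrative Flux Kustomization: the entry point under clusters/<env>/
    # that syncs the infrastructure/ layer.
    apiVersion: kustomize.toolkit.fluxcd.io/v1
    kind: Kustomization
    metadata:
      name: infrastructure
      namespace: flux-system
    spec:
      interval: 10m            # how often Flux reconciles this path
      path: ./infrastructure
      prune: true              # delete cluster resources removed from Git
      sourceRef:
        kind: GitRepository
        name: flux-system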

Operational Benefits

  • Each change is traceable to a Git commit and pull request.
  • Rollback is as simple as reverting a commit and letting Flux reconcile.
  • Environments can be recreated from scratch by pointing a new cluster at the same Git refs.

Patterns That Worked Well

  • Using separate namespaces for platform tools versus application tenants.
  • Standardizing HelmRelease patterns (values files, image overrides, resources); example below.
  • Keeping secrets out of the repo and integrating with external secret managers.
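
A sketch of the standardized HelmRelease shape, with placeholder names throughout; the HelmRepository could, for example, be backed by the Nexus registry above:

    # Illustrative HelmRelease following the standardized pattern.
    # Chart, repository, and namespace names are placeholders.
    apiVersion: helm.toolkit.fluxcd.io/v2
    kind: HelmRelease
    metadata:
      name: demo-app
      namespace: apps
    spec:
      interval: 10m
      chart:
        spec:
          chart: demo-app
          version: "1.x"
          sourceRef:
            kind: HelmRepository
            name: internal-charts   # e.g. a chart repo served from Nexus
            namespace: flux-system
      values:
        image:
          tag: "1.0.0"             # image override lives in values, per the pattern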

This GitOps structure gives a clear path for adding new clusters or tenants without redesigning the entire repository each time.

Bootstrap Linux Servers with Ansible for DevOps Work

How I turned new Linux servers into DevOps-ready machines using Ansible — from installing Java and Docker to standardizing tools across environments.

Case Study: Ansible-Based DevOps Bootstrap for Linux

This playbook set is designed to take a fresh Linux server (VM or bare metal) and turn it into a consistent DevOps node in one run.

What the Playbooks Install

  • Multiple Java versions and Maven for build workloads.
  • Docker / container runtime with proper group permissions (see the task sketch after this list).
  • CLI tools such as Git, kubectl, Helm, and other utilities.
  • System hardening and basic OS tuning where required.
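
A couple of representative tasks, to show the shape of these playbooks. The module names are real Ansible builtins; the package names assume Debian/Ubuntu, and ci_user is a hypothetical variable for the build account:

    - name: Install build and container tooling
      ansible.builtin.apt:
        name:
          - openjdk-17-jdk    # one of several Java versions the roles can install
          - maven
          - docker.io
          - git
        state: present
        update_cache: true

    - name: Let the CI user run Docker without sudo
      ansible.builtin.user:
        name: "{{ ci_user }}"   # hypothetical variable for the build account
        groups: docker
        append: true            # join the docker group without dropping others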

Why This Matters

  • Reduces manual setup time when adding new runners or build agents.
  • Ensures every node has the same versions and configurations.
  • Makes it easy to rebuild or replace nodes without special knowledge.

Integration with the Rest of the Platform

  • Jenkins agents and GitHub runners are built on top of these prepared hosts or images.
  • The same Ansible roles can be reused in cloud-init/user data for cloud environments, as sketched below.
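
A sketch of what that reuse looks like as cloud-init user data, pulling the same playbooks with ansible-pull; the repository URL and playbook name are placeholders:

    #cloud-config
    # Illustrative user data reusing the same roles via ansible-pull.
    # The repository URL and playbook name are placeholders.
    package_update: true
    packages:
      - ansible
    runcmd:
      - ansible-pull -U https://github.com/example/devops-bootstrap.git site.yml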

Over time, this approach becomes a catalog of reusable roles that can be combined for different types of infrastructure nodes (CI agents, admin boxes, monitoring hosts, and more).