Managing Kubernetes Clusters in Beam: The DevOps Terminal Workflow
DevOps engineers live in the terminal. On any given day, you're running kubectl against three different clusters, tailing logs with stern, deploying Helm charts, writing Terraform modules, building Docker images, and debugging failing pods. Each Kubernetes cluster -- dev, staging, production -- needs its own terminal context, its own kubectl configuration, its own set of running commands. Add Claude Code to the mix for generating configs, writing IaC, and diagnosing incidents, and you're suddenly managing fifteen or more terminal sessions with no structure whatsoever.
Beam fixes this. By organizing your Kubernetes workflow into dedicated workspaces -- one per cluster environment, one for infrastructure -- you eliminate the single most dangerous problem in DevOps: running a command against the wrong cluster. Here's the complete workflow.
The Kubernetes Terminal Problem
Every DevOps engineer knows this scenario. You have fifteen terminals open. Some are running kubectl, some are tailing logs with stern, one is running a helm upgrade, another has a Terraform plan in progress, and somewhere in the chaos there's a terminal where you're building a Docker image. They all look the same: monospace text on a dark background with no labels and no grouping.
The problem is not just disorganization -- it's danger. Kubernetes contexts are invisible. You think you're pointing at the dev cluster, but your kubectl context was set to production three hours ago when you were investigating an issue and never switched back. You run kubectl delete deployment api-gateway and the next thing you hear is the on-call Slack channel exploding. You just took down production.
This is not a hypothetical. Stale kubectl contexts are one of the most common causes of accidental cross-environment incidents in Kubernetes operations. The tools themselves provide almost no guardrails. Your terminal prompt might show the current context if you've configured it, but once you have a dozen terminals open, you're not reading prompts anymore. You're muscle-memorying your way through commands at speed, and that's exactly when mistakes happen.
Now add Claude Code to the equation. You're asking Claude to generate deployment manifests, write Helm charts, debug CrashLoopBackOff errors, and create Terraform modules. Each of those conversations needs its own terminal. Claude Code is generating YAML that you're going to kubectl apply somewhere, and if "somewhere" is the wrong cluster because you lost track of which terminal is which, you have a problem that no amount of AI assistance can undo.
The Workspace-Per-Cluster Model
The solution is architectural. Instead of running all your Kubernetes terminals in a flat, unlabeled pile, you create one Beam workspace per cluster environment. Each workspace is a self-contained context for a single environment, and switching between them is a deliberate, visible action rather than a silent, invisible kubectl config use-context buried in a terminal you're not looking at.
Here's the model:
Workspace "Dev Cluster" (Green)
- Tab 1: Claude Code -- generating and iterating on Kubernetes manifests for development
- Tab 2: kubectl -- pointed at the dev cluster context, for applying manifests and checking resources
- Tab 3: stern -- tailing logs from the dev namespace, color-coded by pod
- Tab 4: helm -- managing Helm releases in the dev environment
Workspace "Staging" (Yellow)
- Tab 1: Claude Code -- debugging staging-specific issues, generating configs with staging values
- Tab 2: kubectl -- pointed at the staging cluster, for deployment verification and rollbacks
- Tab 3: stern -- tailing staging logs, watching for integration test failures
- Tab 4: helm -- managing staging Helm releases with staging-specific values
Workspace "Production" (Red)
- Tab 1: Claude Code -- analyzing production logs, diagnosing incidents, generating hotfixes
- Tab 2: kubectl -- read-heavy operations in production: get, describe, top
- Tab 3: stern -- tailing production logs, filtering by error level
- Tab 4: k9s -- real-time cluster dashboard for production overview
Workspace "Infrastructure" (Blue)
- Tab 1: Claude Code -- writing Terraform modules, generating IaC, creating CI/CD configs
- Tab 2: Terraform -- running terraform plan and terraform apply
- Tab 3: Docker -- building and pushing container images
- Tab 4: Git -- version control for infrastructure-as-code repositories
The color coding is critical. When you switch to the Production workspace, the red indicator is an immediate visual signal that you're operating in the danger zone. It's the terminal equivalent of painting the production server rack red. You never forget where you are because the environment tells you.
Pro Tip: Set kubectl Context Per Workspace
In each workspace's first terminal, run export KUBECONFIG=~/.kube/config-dev (or staging, prod) to ensure every tab in that workspace inherits the correct cluster context. Combined with Beam's workspace color coding, this makes cross-cluster mistakes nearly impossible. You can also add kubectl config use-context dev-cluster to your shell startup for each workspace's saved layout.
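A minimal sketch of this setup, assuming kubeconfig files named config-dev, config-staging, and config-prod (the file names are illustrative, not Beam defaults):

```shell
# Per-workspace kubeconfig isolation. Run the matching line in each
# workspace's first terminal so every tab in that workspace inherits it.
export KUBECONFIG="$HOME/.kube/config-dev"        # Dev Cluster workspace
# export KUBECONFIG="$HOME/.kube/config-staging"  # Staging workspace
# export KUBECONFIG="$HOME/.kube/config-prod"     # Production workspace

# Optional second guardrail: show the active config file in the prompt.
PS1='[$(basename "${KUBECONFIG:-default}")] \w \$ '
```

Because KUBECONFIG is an environment variable, it can never silently leak between workspaces the way a shared kubectl config use-context can.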
Using Claude Code for Kubernetes
Claude Code is remarkably effective at Kubernetes work. It understands the Kubernetes API, knows the YAML schema for every resource type, and can generate production-quality manifests from plain-language descriptions. But the real power comes from having it operate within a structured workspace where you can immediately test what it generates.
Here are the prompts that DevOps engineers use most often with Claude Code in a Kubernetes workflow:
Generating deployments with best practices: "Generate a Kubernetes deployment for a Go API with 3 replicas, CPU and memory resource limits, liveness and readiness health checks on /health, and a Horizontal Pod Autoscaler that scales to 10 replicas at 70% CPU." Claude doesn't just generate the Deployment YAML. It creates the HPA as a separate resource, sets sensible resource requests alongside the limits, configures the health check endpoints with appropriate initialDelaySeconds and periodSeconds values, and adds standard labels for service discovery.
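A sketch of what such a prompt might produce -- the image, port, and threshold values are illustrative, not actual Claude output:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-api
  labels:
    app.kubernetes.io/name: go-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: go-api
  template:
    metadata:
      labels:
        app.kubernetes.io/name: go-api
    spec:
      containers:
        - name: go-api
          image: registry.example.com/go-api:1.0.0   # placeholder image
          resources:
            requests: { cpu: 250m, memory: 256Mi }    # requests alongside limits
            limits:   { cpu: 500m, memory: 512Mi }
          livenessProbe:
            httpGet: { path: /health, port: 8080 }
            initialDelaySeconds: 10
            periodSeconds: 15
          readinessProbe:
            httpGet: { path: /health, port: 8080 }
            initialDelaySeconds: 5
            periodSeconds: 10
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: go-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: go-api
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```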
Writing Helm charts: "Write a Helm chart for this service with values files for dev, staging, and prod. Dev should have 1 replica with no resource limits. Staging should have 2 replicas with moderate limits. Prod should have 3 replicas with strict limits and pod disruption budgets." Claude generates the entire chart structure -- Chart.yaml, templates with Go templating, and three separate values-*.yaml files with environment-appropriate configurations.
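An illustrative values-staging.yaml for a chart like this -- the key names mirror a common chart layout and are assumptions, not a fixed schema:

```yaml
replicaCount: 2
resources:
  requests: { cpu: 250m, memory: 256Mi }
  limits:   { cpu: 500m, memory: 512Mi }   # moderate limits for staging
podDisruptionBudget:
  enabled: false   # enabled with minAvailable: 2 only in values-prod.yaml
```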
Debugging CrashLoopBackOff: Copy your pod logs from the stern tab and paste them into Claude Code: "This pod is in CrashLoopBackOff. Here are the last 50 lines of logs." Claude reads the stack trace, identifies the root cause -- maybe a missing environment variable, a failed database migration, or an OOM kill -- and suggests the specific fix. If it's an OOM kill, Claude will recommend updated resource limits. If it's a missing config, Claude will generate the ConfigMap or Secret you need.
Network policies: "Create a NetworkPolicy that only allows ingress traffic to the API pods from the API gateway namespace, and egress traffic to the Postgres service on port 5432 and the Redis service on port 6379." Network policies are one of the most syntax-heavy resources in Kubernetes, and getting the label selectors, namespace selectors, and port specifications right is tedious. Claude handles it cleanly.
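One way the requested policy might look -- the namespace and pod labels (api-gateway, app: api, app: postgres, app: redis) are assumptions about your labeling scheme:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-ingress-egress
  namespace: api
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes: [Ingress, Egress]
  ingress:
    # Only traffic originating in the api-gateway namespace may reach API pods.
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: api-gateway
  egress:
    # Outbound traffic is limited to Postgres and Redis on their known ports.
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - podSelector:
            matchLabels:
              app: redis
      ports:
        - protocol: TCP
          port: 6379
```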
Terraform for cluster provisioning: "Write a Terraform module for an EKS cluster with managed node groups, using m5.xlarge instances, autoscaling from 3 to 10 nodes, with the VPC CNI and CoreDNS add-ons." Claude generates the module with proper variable definitions, the EKS cluster resource, node group configuration, IAM roles with the minimum required policies, and outputs for the cluster endpoint and certificate authority.
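A minimal sketch using the community terraform-aws-modules/eks module; the module version and exact variable names should be checked against its documentation before use:

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = var.cluster_name
  cluster_version = "1.29"
  vpc_id          = var.vpc_id
  subnet_ids      = var.private_subnet_ids

  # VPC CNI and CoreDNS add-ons, as requested in the prompt.
  cluster_addons = {
    vpc-cni = {}
    coredns = {}
  }

  eks_managed_node_groups = {
    default = {
      instance_types = ["m5.xlarge"]
      min_size       = 3
      desired_size   = 3
      max_size       = 10
    }
  }
}

output "cluster_endpoint" {
  value = module.eks.cluster_endpoint
}
```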
Split Pane Workflows for Kubernetes
Split panes are where Beam's workspace model becomes a genuine operational advantage for Kubernetes work. Instead of switching between tabs or windows to see cause and effect, you put the cause in one pane and the effect in the other. The feedback loop is immediate and visual.
Left: Claude Code generating YAML. Right: kubectl apply watching it deploy. Claude writes a deployment manifest. You copy it, paste it into the right pane with kubectl apply -f -, and watch the pods come up in real time. If the deployment fails -- image pull error, resource quota exceeded, invalid YAML -- the error appears right next to Claude, who can read it and produce a corrected manifest without you switching contexts.
Left: stern following pod logs. Right: Claude Code analyzing errors. The logs stream on the left. You see a pattern -- repeated connection timeouts, memory warnings, or a specific error message appearing across multiple pods. You copy the relevant lines, paste them into Claude on the right. Claude reads the log pattern, identifies the root cause, and suggests the fix. The conversation happens in real time alongside the live logs.
Top: k9s cluster overview. Bottom: Claude Code writing fixes. You're watching the cluster dashboard and notice a node is running hot, or a deployment's available replicas just dropped. In the bottom pane, you ask Claude what's happening: "The go-api deployment shows 2/3 replicas available and one pod is in CrashLoopBackOff. Here's the describe output." Claude diagnoses and fixes while you watch the cluster recover in real time above.
Press ⌘⌥⌃T to split any tab into panes. This keyboard shortcut becomes muscle memory fast when you're doing Kubernetes operations all day. Every tab in your workspace can become a two-pane workstation tuned to a specific task.
Helm Chart Development
Helm charts are one of the areas where the workspace-per-cluster model pays for itself immediately. A typical Helm chart development workflow involves generating templates, testing them against different environments, and iterating on values files until the configuration is correct for each target cluster. Without organization, this means running helm template and helm install commands in random terminals, constantly switching kubectl contexts, and losing track of which environment you last deployed to.
In Beam, the workflow is clean. You develop the chart in the Dev Cluster workspace. Claude Code generates the templates and base values in one tab. You run helm template . -f values-dev.yaml in the helm tab to verify the rendered YAML looks correct. Then you run helm upgrade --install my-service . -f values-dev.yaml and check the stern tab to confirm the pods are healthy.
When the chart works in dev, you switch to the Staging workspace with ⌘⌥→. The context switches visually and operationally. Your kubectl tab is already pointed at the staging cluster. You run helm upgrade --install my-service . -f values-staging.yaml here, and any staging-specific issues surface immediately in the staging stern tab. If something fails, you switch back to the Dev workspace where Claude Code is still running, describe the staging failure, and Claude updates the templates or values file to handle the difference.
The values files for each environment are where Claude Code really accelerates the process. Tell Claude: "The staging environment uses an internal load balancer annotation instead of a public one, the database endpoint is different, and we need to add a podAntiAffinity rule for high availability." Claude updates the values-staging.yaml with the correct annotations, endpoints, and affinity rules without touching the values-dev.yaml that's already working.
Pro Tip: Helm Diff Before Deploy
Install the helm-diff plugin and keep a split pane showing helm diff upgrade my-service . -f values-staging.yaml before you run the actual upgrade. You'll see exactly what changes Helm is about to make to your cluster. Combined with Claude Code explaining what each change does, this gives you a complete audit trail before any manifest hits the cluster.
Terraform with Beam
Infrastructure provisioning with Terraform follows the same workspace model, but the workspaces map to infrastructure layers rather than cluster environments. Your cloud infrastructure has layers -- networking, compute, database, monitoring, security -- and each layer has its own Terraform state, its own variables, and its own blast radius if something goes wrong. Mixing them in the same terminal context is asking for trouble.
Workspace "Networking": Claude Code writing VPC configurations, subnet layouts, NAT gateways, and security groups. A Terraform tab running terraform plan to preview changes. A split pane showing the plan output next to Claude's explanation of what each change means.
Workspace "Compute": Claude Code generating EKS cluster configurations, node group definitions, and instance type selections. Terraform tab for planning and applying. A kubectl tab to verify the cluster comes up correctly after provisioning.
Workspace "Database": Claude Code writing RDS configurations, ElastiCache setups, and backup policies. Terraform tab for the database layer. A terminal for connectivity testing -- can the cluster reach the database on the right port?
Workspace "Monitoring": Claude Code generating Prometheus rules, Grafana dashboard JSON, and alerting configurations. Terraform tab for deploying the monitoring stack. A browser-testing terminal for verifying dashboards load correctly.
The split-pane workflow is especially valuable for Terraform. In the left pane, Claude Code generates a Terraform module. In the right pane, you run terraform plan and see the exact resources that will be created, modified, or destroyed. If the plan shows unexpected changes -- a resource being replaced when you expected an in-place update -- you ask Claude about it in the left pane. Claude reads the plan output and explains why Terraform wants to replace the resource (maybe a computed attribute changed, or you modified an immutable field). This back-and-forth between Claude and Terraform happens entirely within a single tab, with no window switching.
Incident Response Workflow
When something breaks in production at 3 AM, organization is not a luxury -- it's the difference between a five-minute fix and a fifty-minute scramble. The workspace model shines during incidents because every tool you need is already in the right place, labeled, and ready to go.
Here's the incident response workflow step by step:
- Switch to the Production workspace. Press ⌘P, type "prod," hit Enter. You're immediately in the Production context. The red workspace indicator confirms you're in the right place. No fumbling with kubectl config use-context, no wondering which terminal is pointed where.
- Check pod status. Your kubectl tab is already pointed at the production cluster. Run kubectl get pods -n api --field-selector=status.phase!=Running to see what's unhealthy. You immediately see three pods in CrashLoopBackOff.
- Tail the logs. Switch to the stern tab. It's already configured to tail the right namespace. Run stern go-api -n api --since 5m to see recent log output. You spot a panic: nil pointer dereference at a specific line in the handler code.
- Ask Claude Code to diagnose. Switch to the Claude Code tab. Paste the panic trace: "Three pods are crash-looping in production. Here's the stack trace from stern." Claude reads the trace, identifies the nil pointer dereference, and traces it to a recent code change that didn't handle a nullable field from the database correctly.
- Generate the fix. Claude produces a corrected handler with proper nil checks. You review it, apply it to the codebase, build a new image, and push it to the registry.
- Deploy and verify. Back in the kubectl tab, run kubectl set image deployment/go-api go-api=registry/go-api:hotfix-1. Switch to stern and watch the new pods come up with healthy logs. Verify in k9s that all replicas are running.
- Document the incident. Ask Claude Code to generate an incident report based on the timeline: root cause, impact, fix applied, and follow-up items. Claude produces a structured post-mortem document you can paste directly into your incident tracking system.
The entire incident lifecycle -- detection, diagnosis, fix, deployment, verification, documentation -- happens within a single workspace with clearly labeled tabs. No context switching, no wondering which cluster you're looking at, no accidentally running a fix against staging when you meant production.
Pro Tip: Save an Incident Response Layout
Create a dedicated "Incident Response" saved layout with the production workspace pre-configured: kubectl in one split pane, stern in the other, Claude Code in a separate tab, and k9s in a third tab. When an incident hits, restore the layout with ⌘S and you're ready to investigate in seconds instead of spending the first five minutes of an outage setting up your terminals.
GitOps with ArgoCD and Flux
GitOps workflows add another layer of terminal complexity to Kubernetes operations. Instead of applying manifests directly with kubectl, you push changes to a Git repository and let ArgoCD or Flux sync them to the cluster. This means you need terminals for Git operations, terminals for watching sync status, and terminals for Claude Code generating the manifests that get committed.
The Beam workspace for a GitOps workflow looks like this:
- Tab 1: Claude Code -- generating Kubernetes manifests, Kustomize overlays, or ArgoCD Application resources. Claude understands the GitOps model and will generate manifests that are intended for Git commit rather than direct kubectl apply.
- Tab 2: Git -- staging changes, committing manifests, pushing to the GitOps repository. This is where the rendered YAML from Claude gets committed.
- Tab 3: ArgoCD CLI or kubectl -- watching sync status with argocd app get my-app or kubectl get applications -n argocd. After you push a commit, this tab shows whether ArgoCD picked up the change and whether the sync succeeded.
- Tab 4: stern -- tailing application logs to verify the synced manifests actually work. This is your ground truth: the sync succeeded, but are the pods actually healthy?
The split-pane workflow for GitOps is powerful. Left pane: Claude Code generates an ArgoCD Application resource that points to a specific path in your GitOps repo, with automated sync and self-heal enabled. Right pane: you git add, git commit, and git push the generated YAML. Within seconds, the ArgoCD tab shows the application syncing. You're watching the entire GitOps pipeline from commit to deployment in a single workspace.
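An ArgoCD Application with automated sync and self-heal might look like this -- the repo URL, path, and names are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-repo.git   # placeholder repo
    targetRevision: main
    path: apps/my-app/overlays/staging
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift back to the Git state
```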
Claude Code is especially useful for generating Kustomize overlays. Tell Claude: "Create a Kustomize overlay for the staging environment that patches the deployment to use 2 replicas, sets the image tag to the latest staging build, and adds an annotation for the internal load balancer." Claude generates the kustomization.yaml and the patch files, correctly structured for the Kustomize directory layout. You commit them, and ArgoCD takes care of the rest.
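An illustrative overlays/staging/kustomization.yaml for that prompt -- the base path, image name, tag, and annotation key are assumptions about your repo and cloud provider:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
replicas:
  - name: my-service
    count: 2
images:
  - name: registry.example.com/my-service
    newTag: staging-latest          # placeholder staging build tag
patches:
  # Strategic merge patch adding the internal load balancer annotation.
  - patch: |-
      apiVersion: v1
      kind: Service
      metadata:
        name: my-service
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```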
Multi-Cluster Operations
The workspace-per-cluster model scales naturally to multi-cluster architectures. If you're running a service mesh across three clusters, or managing a fleet of edge clusters, or operating separate clusters for different teams, each one gets its own workspace in Beam. The mental model stays the same regardless of how many clusters you manage.
For teams managing many clusters -- say, ten or more -- the Quick Switcher becomes essential. Press ⌘P and type "prod-us-east" to jump directly to the US East production cluster workspace. Type "staging-eu" to switch to the European staging environment. The fuzzy search matches workspace names instantly, so naming your workspaces descriptively (with region and environment) makes navigation effortless even at scale.
Each workspace's kubectl tab should have its KUBECONFIG set to the specific cluster's config file. This ensures that no matter how many workspaces you have open, there's zero chance of running a command against the wrong cluster. The workspace boundary is also the security boundary.
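A hypothetical shell helper for mapping workspace names to kubeconfig files at fleet scale -- the workspace names and file paths are examples, not a Beam feature:

```shell
# Map a Beam workspace name to its dedicated kubeconfig file.
workspace_kubeconfig() {
  case "$1" in
    prod-us-east) echo "$HOME/.kube/config-prod-us-east" ;;
    staging-eu)   echo "$HOME/.kube/config-staging-eu" ;;
    dev)          echo "$HOME/.kube/config-dev" ;;
    *)            echo "$HOME/.kube/config" ;;   # fallback config
  esac
}

# In each workspace's first terminal:
export KUBECONFIG="$(workspace_kubeconfig dev)"
```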
Project Memory for Kubernetes Conventions
Every Kubernetes team has conventions. Maybe all services must have resource limits defined. Maybe pod disruption budgets are required for production deployments. Maybe labels must follow a specific schema: app.kubernetes.io/name, app.kubernetes.io/version, app.kubernetes.io/managed-by. Maybe your team uses Istio and every service needs sidecar injection annotations. These rules live in runbooks, wiki pages, or people's heads -- and they get violated every time someone is in a hurry.
Beam's Project Memory feature solves this. Create a memory file that documents your Kubernetes conventions: required labels, resource limit standards, namespace naming patterns, Helm chart structure, Terraform module conventions, and deployment rollout strategies. When Claude Code starts a session, it reads this memory file automatically and follows your conventions from the very first prompt.
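A memory file encoding conventions like these might look as follows -- the file name and every rule here are illustrative, not a required format:

```markdown
# Kubernetes conventions (Project Memory)

- Every Deployment MUST define resource requests and limits.
- Labels MUST include app.kubernetes.io/name, app.kubernetes.io/version,
  and app.kubernetes.io/managed-by.
- Production Deployments MUST ship with a PodDisruptionBudget
  (minAvailable: 2).
- Health checks live at /health; liveness and readiness probes are required.
- Helm charts provide values-dev.yaml, values-staging.yaml,
  and values-prod.yaml.
- Namespaces follow the pattern <team>-<environment>.
```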
Tell Claude: "Generate a deployment for the payment service." Without memory, Claude will generate a reasonable deployment with generic best practices. With your team's memory file loaded, Claude generates a deployment with your exact label schema, your standard resource limits for that service tier, your required annotations for monitoring and service mesh integration, your pod disruption budget, and your specific health check conventions. Every manifest Claude produces is compliant with your team's standards from the moment it's generated.
This is especially powerful for on-call engineers who might not know every convention by heart. During an incident at 3 AM, they don't need to remember that production deployments require a PDB with minAvailable: 2. Claude already knows, because the memory file told it.
Organize Your Kubernetes Terminal Workflow
Download Beam and never accidentally deploy to the wrong cluster again. One workspace per environment. Color-coded. Saved layouts. Instant switching.
Download Beam for macOS

Summary
Kubernetes operations are inherently multi-terminal, multi-cluster, and multi-tool. You're running kubectl, helm, terraform, stern, k9s, docker, and Claude Code -- often all at the same time, often across multiple cluster environments. Without structure, this leads to the most dangerous class of DevOps errors: running commands against the wrong cluster.
Beam's workspace model eliminates this class of error entirely:
- Use one workspace per cluster environment -- Dev, Staging, Production, Infrastructure -- each with its own kubectl context and color coding
- Use Claude Code inside each workspace to generate deployments, Helm charts, Terraform modules, network policies, and debug failing pods with full cluster context
- Use split panes for real-time feedback -- Claude generating YAML on the left, kubectl applying it on the right, stern showing the result live
- Use saved layouts (⌘S) to restore your entire multi-cluster terminal setup in seconds, including an incident response layout ready for emergencies
- Use Project Memory to ensure Claude Code follows your team's Kubernetes conventions -- labels, resource limits, annotations, PDBs -- from the first prompt
- Use Quick Switcher (⌘P) to jump between clusters and environments instantly, even when managing ten or more cluster workspaces
The key insight is that Kubernetes safety comes from context isolation. When every environment has its own workspace with its own visual identity and its own kubectl context, the accidental cross-environment command becomes nearly impossible. Beam doesn't just organize your terminals -- it makes your Kubernetes operations fundamentally safer.