Managing Go Microservices in Beam: 5 Services, One Screen
You're building a Go microservices platform. The architecture looks clean on the whiteboard: an API gateway routes traffic to a user service, an order service, a notification service, and a background worker. Five binaries, five domains, five teams' worth of complexity. But on your screen? It's ten or more terminal windows: five `go run` processes, five Claude Code sessions, maybe a Docker Compose log stream, maybe a database shell. You've lost track of which tab is which, and you just accidentally Ctrl+C'd the wrong service.
Every Go microservices developer has been here. Beam fixes this by giving each service its own isolated workspace, each with its own terminals, Claude Code session, and running process -- all switchable in a keystroke. This guide walks you through the entire workflow.
The Microservices Terminal Problem
Go microservices are lightweight by design. Each service compiles to a single binary, starts in milliseconds, and consumes minimal memory. That architectural elegance creates a practical problem: if it's easy to spin up services, you end up with a lot of them.
Here's what a typical Go microservices development session looks like without Beam:
- Terminals 1-5 -- Five `go run` processes, one per service. Each needs its own terminal because each runs its own HTTP/gRPC server.
- Terminals 6-10 -- Five Claude Code sessions. Each service has different code, different conventions, different bugs. You want AI assistance scoped to each service's directory.
- Terminal 11 -- Docker Compose for Postgres, Redis, and NATS (or whatever your infrastructure stack looks like).
- Terminal 12+ -- Ad hoc terminals for `curl`, `grpcurl`, database shells, log tailing, and git operations.
That's twelve or more terminal windows. In iTerm2, that's a wall of tabs where "zsh" tells you nothing about which service lives in which tab. In the default macOS Terminal, it's a pile of overlapping windows. You're spending more time finding the right terminal than writing code.
The worst part: you Ctrl+C what you think is the notification service, but it was actually the API gateway. Now the entire system is broken and you're restarting everything, trying to remember which ports go where.
The Beam Workspace-Per-Service Model
Beam solves this with workspaces. Each workspace is a named, isolated group of terminal tabs. One workspace per microservice means every service gets its own labeled environment that you can switch to instantly.
Here's the layout:
Workspace: "API Gateway"
- Tab 1: Claude Code -- Scoped to `/services/gateway`. Handles routing logic, middleware, authentication, rate limiting.
- Tab 2: `go run ./cmd/gateway` -- The running gateway process on port 8080. You see request logs in real time.
- Tab 3: Testing -- Running `curl` or `httpie` commands against the gateway, or tailing structured logs.
Workspace: "User Service"
- Tab 1: Claude Code -- Scoped to `/services/users`. User CRUD, authentication, profile management.
- Tab 2: `go run ./cmd/users` -- Running on port 8081. gRPC server for internal calls, REST for external.
- Tab 3: Database -- `psql` connected to the users database for inspecting data while developing.
Workspace: "Order Service"
- Tab 1: Claude Code -- Scoped to `/services/orders`. Order lifecycle, payment integration, inventory checks.
- Tab 2: `go run ./cmd/orders` -- Running on port 8082.
- Tab 3: Logs -- Filtered order-specific logs or message queue monitoring.
Workspace: "Notification Service"
- Tab 1: Claude Code -- Scoped to `/services/notifications`. Email, push, SMS dispatch logic.
- Tab 2: `go run ./cmd/notifications` -- Running on port 8083. Consumes events from the message queue.
- Tab 3: Queue monitor -- Watching NATS or RabbitMQ subjects to see events flowing in.
Workspace: "Worker"
- Tab 1: Claude Code -- Scoped to `/services/worker`. Background jobs, scheduled tasks, data processing.
- Tab 2: `go run ./cmd/worker` -- Long-running process consuming job queues.
- Tab 3: Monitoring -- Job queue status, retry counts, dead letter inspection.
Workspace: "Infrastructure"
- Tab 1: `docker compose up` -- Postgres, Redis, NATS, and any other dependencies running locally.
- Tab 2: Docker logs -- Filtered logs from specific containers.
- Tab 3: Ad hoc -- Migrations, seed scripts, infrastructure debugging.
Switch between adjacent workspaces with ⌘⌥← and ⌘⌥→, or press ⌘P to open the Quick Switcher, type "orders", and hit Enter. You're in the order service workspace in under a second, with all its tabs and context exactly as you left them.
Pro Tip: Name Your Tabs Too
Double-click any tab in Beam to rename it. Instead of five tabs that all say "zsh", you'll see "Claude Code", "go run", "logs" -- instantly recognizable. When you combine named workspaces with named tabs, there's zero ambiguity about what's running where.
Setting Up the Architecture with Claude Code
Before you can organize five services in Beam, you need to scaffold them. This is where Claude Code earns its keep. Open a Claude Code session and ask it to generate the entire monorepo structure.
A typical Go microservices monorepo looks like this:
```
project-root/
  services/
    gateway/
      cmd/gateway/main.go
      internal/handlers/
      internal/middleware/
    users/
      cmd/users/main.go
      internal/handlers/
      internal/repository/
      internal/service/
    orders/
      cmd/orders/main.go
      internal/handlers/
      internal/repository/
      internal/service/
    notifications/
      cmd/notifications/main.go
      internal/handlers/
      internal/consumers/
    worker/
      cmd/worker/main.go
      internal/jobs/
  pkg/
    shared/
      models/
      middleware/
      logging/
  proto/
    user.proto
    order.proto
    notification.proto
  docker-compose.yml
  Makefile
  go.work
```
Claude Code can scaffold this in one prompt. It'll generate the go.work file that ties the monorepo modules together, the base main.go files with signal handling and graceful shutdown, and the shared packages that every service imports.
The key is the Makefile. Ask Claude Code to generate per-service targets:
```make
.PHONY: run-gateway run-users run-orders run-notifications run-worker

run-gateway:
	cd services/gateway && go run ./cmd/gateway

run-users:
	cd services/users && go run ./cmd/users

run-orders:
	cd services/orders && go run ./cmd/orders

run-notifications:
	cd services/notifications && go run ./cmd/notifications

run-worker:
	cd services/worker && go run ./cmd/worker

run-all:
	$(MAKE) -j5 run-gateway run-users run-orders run-notifications run-worker

test-all:
	cd services/gateway && go test ./...
	cd services/users && go test ./...
	cd services/orders && go test ./...
	cd services/notifications && go test ./...
	cd services/worker && go test ./...

proto:
	protoc --go_out=. --go-grpc_out=. proto/*.proto
```
Now each Beam workspace tab can run its respective `make run-*` target. Clean, predictable, and you never have to remember which port belongs to which service.
Service-to-Service Communication
The real complexity in Go microservices isn't any single service -- it's how they talk to each other. Claude Code is remarkably effective at generating the glue code that connects services together.
gRPC Service Definitions
In your "User Service" workspace, ask Claude Code to generate the protobuf definitions and Go implementations. It'll produce the .proto file, run protoc to generate the Go code, and implement the server:
```protobuf
// proto/user.proto
syntax = "proto3";

package user;

option go_package = "pkg/shared/userpb";

service UserService {
  rpc GetUser(GetUserRequest) returns (User);
  rpc CreateUser(CreateUserRequest) returns (User);
  rpc ListUsers(ListUsersRequest) returns (ListUsersResponse);
}
```
Then switch to the "Order Service" workspace (⌘P, type "order"), and ask Claude Code there to generate the gRPC client that calls the user service. Because each Claude Code session is scoped to its service directory, it generates the client code in the right place with the right imports.
HTTP Client Wrappers
For services that communicate over REST, Claude Code generates typed HTTP clients with proper error handling, retries, and circuit breaking:
```go
// pkg/shared/clients/orders.go
package clients

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
)

// Order and CreateOrderRequest are defined elsewhere in the shared package.
type OrderClient struct {
	baseURL    string
	httpClient *http.Client
}

func (c *OrderClient) CreateOrder(ctx context.Context, req CreateOrderRequest) (*Order, error) {
	body, err := json.Marshal(req)
	if err != nil {
		return nil, fmt.Errorf("marshal request: %w", err)
	}
	httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, c.baseURL+"/api/orders", bytes.NewReader(body))
	if err != nil {
		return nil, fmt.Errorf("create request: %w", err)
	}
	httpReq.Header.Set("Content-Type", "application/json")
	resp, err := c.httpClient.Do(httpReq)
	if err != nil {
		return nil, fmt.Errorf("execute request: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return nil, fmt.Errorf("unexpected status: %d", resp.StatusCode)
	}
	var order Order
	if err := json.NewDecoder(resp.Body).Decode(&order); err != nil {
		return nil, fmt.Errorf("decode response: %w", err)
	}
	return &order, nil
}
```
Message Queue Integration
For event-driven communication, ask Claude Code to wire up NATS or RabbitMQ publishers and subscribers. In the "Order Service" workspace, generate the event publisher:
```go
// services/orders/internal/events/publisher.go
func (p *Publisher) OrderCreated(ctx context.Context, order *models.Order) error {
	event := OrderCreatedEvent{
		OrderID:   order.ID,
		UserID:    order.UserID,
		Total:     order.Total,
		CreatedAt: order.CreatedAt,
	}
	data, err := json.Marshal(event)
	if err != nil {
		return fmt.Errorf("marshal event: %w", err)
	}
	// p.nc is the service's *nats.Conn
	return p.nc.Publish("orders.created", data)
}
```
Then switch to the "Notification Service" workspace and ask Claude Code to generate the corresponding subscriber that listens for orders.created events and sends confirmation emails. The workspace isolation keeps your mental model clean: you're thinking about notifications in the notification workspace, orders in the order workspace.
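The subscriber's core logic is easy to sketch without the broker: decode the event, build the notification. The NATS wiring (`nc.Subscribe("orders.created", ...)`) would call a handler like the one below -- the type, function name, and message format are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// OrderCreatedEvent mirrors the payload the order service publishes.
type OrderCreatedEvent struct {
	OrderID   string    `json:"order_id"`
	UserID    string    `json:"user_id"`
	Total     int64     `json:"total"`
	CreatedAt time.Time `json:"created_at"`
}

// HandleOrderCreated decodes an orders.created message and returns the
// confirmation text a mailer would send. Keeping it broker-agnostic makes
// it trivially unit-testable.
func HandleOrderCreated(data []byte) (string, error) {
	var ev OrderCreatedEvent
	if err := json.Unmarshal(data, &ev); err != nil {
		return "", fmt.Errorf("decode event: %w", err)
	}
	return fmt.Sprintf("Order %s confirmed for user %s", ev.OrderID, ev.UserID), nil
}
```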
Split Pane Workflows
Within each workspace, Beam's split panes let you see multiple things at once without switching tabs. Here are the workflows that Go microservices developers use most.
Build and Test Side by Side
Press ⌘⌥⌃T to split the current tab into two panes. On the left, Claude Code is building a new endpoint for the order service. On the right, you're running the test:
- Left pane: Claude Code adding a `CancelOrder` handler with validation, database updates, and event publishing.
- Right pane: `go test -v -run TestCancelOrder ./internal/handlers/...` -- watching it go from failing to passing as Claude Code iterates.
You see the test output update in real time while Claude Code works. No tab switching, no losing context.
Code and curl
- Left pane: Claude Code implementing a new `POST /api/orders` endpoint.
- Right pane: Testing it live with `curl -X POST http://localhost:8082/api/orders -d '{"user_id": "abc", "items": [...]}'`
The moment Claude Code says the endpoint is ready, you fire the curl command and verify it works. Instant feedback loop.
Logs and Debugging
- Top pane: Service logs streaming with `go run ./cmd/orders 2>&1 | jq .` for pretty-printed structured JSON logs.
- Bottom pane: Claude Code analyzing a stack trace, suggesting a fix, and applying it. You watch the logs clear up as the fix takes effect on the next hot reload.
Project Memory Per Service
One of Claude Code's most powerful features is CLAUDE.md -- a project memory file that gives Claude Code persistent context about your codebase. In a microservices architecture, each service is different enough that it deserves its own memory file.
In each service directory, create a CLAUDE.md that describes:
```markdown
# User Service

## Architecture
- gRPC server on :8081 for internal service-to-service calls
- REST API on :8080 for external/gateway calls
- PostgreSQL for persistence (users, profiles, sessions)
- Redis for session caching

## Conventions
- All handlers in internal/handlers/
- Repository pattern: internal/repository/ interfaces, implementations
- Request validation with go-playground/validator
- Errors wrapped with fmt.Errorf and the %w verb
- Structured logging with slog

## Testing
- Table-driven tests for all handlers
- Testcontainers for integration tests against real Postgres
- go test -race on all packages

## Dependencies
- Called by: API Gateway (REST), Order Service (gRPC)
- Calls: Notification Service (NATS events)
```
When you switch to the "User Service" workspace in Beam and start talking to Claude Code, it reads this CLAUDE.md and immediately understands the service's architecture, conventions, and dependencies. It won't suggest patterns that belong in a different service. It won't generate REST endpoints when the service uses gRPC internally.
Each workspace in Beam has its Claude Code session scoped to its service directory. That means each session reads its own CLAUDE.md. You effectively have five specialized AI assistants, one per service, each with deep knowledge of its domain.
Pro Tip: Cross-Service Memory
Put a CLAUDE.md in the project root too, describing the overall architecture, shared packages, and inter-service communication patterns. Claude Code reads both the root and the local CLAUDE.md, so each service session understands both the big picture and its specific domain.
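A root CLAUDE.md might look like this -- the contents are illustrative, to be adapted to your actual stack:

```markdown
# Go Microservices Monorepo

## Layout
- services/*: one Go module per service, tied together by go.work
- pkg/shared: models, middleware, and logging used by every service
- proto/: protobuf definitions; regenerate with `make proto`

## Communication
- External traffic enters through the gateway (REST)
- Internal calls use gRPC; async events use NATS subjects like orders.created

## Conventions
- Run services with the per-service Makefile targets (make run-users, ...)
- Errors wrapped with %w; structured logging with slog everywhere
```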
Running Everything Together
Getting five Go services, three infrastructure containers, and a message broker all running locally is a coordination problem. Here's the Beam workflow that makes it painless.
Step 1: Infrastructure First
Switch to your "Infrastructure" workspace and start the dependencies:
```yaml
# docker-compose.yml
services:
  postgres:
    image: postgres:16-alpine
    ports: ["5432:5432"]
    environment:
      POSTGRES_PASSWORD: dev
      POSTGRES_DB: microservices
  redis:
    image: redis:7-alpine
    ports: ["6379:6379"]
  nats:
    image: nats:2-alpine
    ports: ["4222:4222", "8222:8222"]
```
Run docker compose up in the first tab. The second tab shows filtered logs. Third tab is for running migrations.
Step 2: Hot Reload Each Service
In each service workspace's "go run" tab, use `air` or `watchexec` for hot reloading instead of plain `go run`:
```sh
# Install air for hot reloading
go install github.com/air-verse/air@latest

# In each service tab:
cd services/gateway && air
cd services/users && air
cd services/orders && air
cd services/notifications && air
cd services/worker && air
```
Now when Claude Code edits a file in any service, the service automatically rebuilds and restarts. You see the restart in the adjacent tab, verify the change, and move on.
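If you want air to ignore generated files or use a per-service build command, drop an `.air.toml` next to each service's go.mod. A minimal example -- paths are illustrative and should match your own layout:

```toml
# services/gateway/.air.toml
root = "."
tmp_dir = "tmp"

[build]
  cmd = "go build -o ./tmp/main ./cmd/gateway"
  bin = "./tmp/main"
  include_ext = ["go"]
  exclude_dir = ["tmp"]
```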
Step 3: Save the Entire Layout
Once all six workspaces are set up -- five services plus infrastructure -- press ⌘S to save the layout. Name it "Go Microservices Dev". Tomorrow, or next week, you open Beam, restore this layout, and every workspace, every tab, every split pane is back exactly how you left it. Start docker compose up and air in each tab, and you're developing in under a minute.
Layout Restore Saves Hours
Setting up a 5-service microservices environment from scratch takes 15-20 minutes of creating terminals, navigating to directories, and starting processes. With a saved Beam layout, you restore in one keystroke and just start the processes. Over a week of development, that's over an hour saved.
Debugging Across Services
The hardest part of microservices development is debugging problems that span multiple services. A request hits the API gateway, calls the user service, triggers an order, publishes an event to the notification service -- and somewhere in that chain, something goes wrong. Beam's workspace model turns this cross-service debugging from nightmare to manageable.
Distributed Tracing with OpenTelemetry
Ask Claude Code (in any workspace) to add OpenTelemetry tracing to the shared middleware package. Every service imports it, so every service gets tracing:
```go
// pkg/shared/middleware/tracing.go
package middleware

import (
	"net/http"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
)

func TracingMiddleware(serviceName string) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			ctx, span := otel.Tracer(serviceName).Start(r.Context(), r.URL.Path)
			defer span.End()
			span.SetAttributes(
				attribute.String("http.method", r.Method),
				attribute.String("http.url", r.URL.String()),
				attribute.String("service.name", serviceName),
			)
			next.ServeHTTP(w, r.WithContext(ctx))
		})
	}
}
```
Now when a request fails, you can trace it across services. Switch to the workspace where the error originated (the Quick Switcher makes this instant), ask Claude Code to analyze the trace, and fix the bug in the right service.
Debugging Race Conditions
Go's concurrency model means race conditions are a real concern in microservices. When two services update the same resource concurrently, you need to see what's happening in both services at the same time.
The Beam workflow: open the two relevant workspaces and use split panes in each to show Claude Code and the service logs side by side. Ask Claude Code in the first service to add detailed logging around the concurrent access point, then switch to the second service and do the same. The structured logs from both services, visible in their respective workspaces, tell you exactly what's happening and in what order.
Then ask Claude Code to implement the fix -- optimistic locking, distributed locks with Redis, or whatever pattern fits your architecture. The fix goes into the right service because you're in the right workspace.
End-to-End Request Tracing
When debugging a full request flow, use this workflow:
- Start in the "API Gateway" workspace. Check the access logs for the failing request. Note the request ID.
- ⌘P, type "users" -- switch to the User Service workspace. Search logs for that request ID.
- ⌘P, type "orders" -- switch to the Order Service workspace. Find where the request ID shows up and where it fails.
- Ask Claude Code in that workspace to analyze the error and fix it.
- Test the fix from the "API Gateway" workspace by replaying the original request.
The entire debugging session hops between workspaces, but each workspace preserves its state. Your Claude Code session in the User Service is still right where you left it. Your logs in the Order Service are still scrolled to the right line. Nothing is lost.
Scaling the Workflow
Five services is a starting point. Real microservices architectures grow to ten, twenty, fifty services. Beam's workspace model scales because:
- Quick Switcher is fuzzy -- Type "notif" and it finds the Notification Service workspace. You don't need to remember exact names or positions.
- Workspaces are lazy -- You don't need all workspaces active at once. Only open the ones you're currently working on. Save the full layout and restore individual workspaces as needed.
- Layouts compose -- Save a "Core Services" layout with gateway, users, and orders. Save an "Event System" layout with notifications, worker, and infrastructure. Load whichever set you need for today's work.
- Claude Code stays contextual -- Each session reads its own `CLAUDE.md`. Adding a sixth or seventh service is just another workspace with its own memory file.
Tame Your Go Microservices Workflow
Stop drowning in terminal tabs. Beam gives every service its own workspace, every workspace its own Claude Code session, and every layout a one-keystroke restore. Download free for macOS.
Download Beam for macOS

Summary
Managing Go microservices development doesn't have to mean drowning in terminal windows. With Beam:
- One workspace per service -- API Gateway, User Service, Order Service, Notification Service, Worker, and Infrastructure each get their own isolated workspace with Claude Code, running process, and logs.
- Quick Switcher (⌘P) -- Jump between services instantly by typing a few characters. No more hunting through tab bars.
- Split panes -- Watch Claude Code build an endpoint on the left while you test it on the right. See logs and debugging side by side.
- Project memory per service -- Each service's `CLAUDE.md` gives its Claude Code session specialized knowledge. Five services, five AI assistants, each an expert in its domain.
- Saved layouts (⌘S) -- Save your entire 6-workspace microservices setup and restore it tomorrow with one keystroke.
- Hot reloading -- Use `air` in each service tab so Claude Code's changes take effect immediately.
- Cross-service debugging -- Trace requests across services by hopping between workspaces, each preserving its own state and context.
Go microservices are architecturally elegant. Your terminal setup should be too.