AI-Assisted Go Testing: Table-Driven Tests, Benchmarks, and Fuzz Tests with Claude Code

February 2026 • 15 min read

Go has the best built-in testing story of any programming language. The testing package, go test, the race detector, the coverage tool, benchmarks, and fuzz testing are all first-class citizens in the standard toolchain. No third-party frameworks required. No test runner configuration files. Just write a function that starts with Test, run go test, and you're done.

Claude Code understands all of it. It generates idiomatic Go tests faster than you can write them by hand, covers edge cases you'd miss, and knows the difference between a table-driven test and a fuzz test without you having to explain the pattern. Pair it with Beam's split-pane terminal and you get an instant feedback loop that makes testing feel effortless.

This guide walks through every major Go testing pattern, shows you how to leverage Claude Code for each one, and demonstrates the Beam workspace setup that ties it all together.

go test -v -cover -bench=. ./...

=== RUN   TestParseURL
=== RUN   TestParseURL/valid_https_url
--- PASS: TestParseURL/valid_https_url (0.00s)
=== RUN   TestParseURL/empty_string
--- PASS: TestParseURL/empty_string (0.00s)
=== RUN   TestParseURL/unicode_path
--- PASS: TestParseURL/unicode_path (0.00s)
=== RUN   TestParseURL/missing_scheme
--- PASS: TestParseURL/missing_scheme (0.00s)
--- PASS: TestParseURL (0.00s)
BenchmarkParseURL/short-10    5765414      207.3 ns/op     128 B/op    3 allocs/op
BenchmarkParseURL/long-10     2841926      421.8 ns/op     256 B/op    5 allocs/op
fuzz: elapsed: 3s, execs: 48291 (16097/sec), new interesting: 12
coverage: 87.2% of statements
PASS
ok      myproject/urlparser    4.217s

Setting Up Your Testing Workspace in Beam

Before writing a single test, set up a workspace in Beam that gives you an instant feedback loop. The goal: Claude Code generates tests on the left, test results appear on the right, and you never leave the terminal.

Workspace: "Go Testing"

  1. Left pane: Claude Code -- This is where you ask Claude to write, modify, and analyze tests. Press ⌘⌥⌃T to split your tab into two panes.
  2. Right pane: Test watcher -- Run your tests in watch mode so they re-execute every time a file changes.

For continuous test feedback, use watchexec or entr in the right pane:

# Using watchexec (recommended)
watchexec -e go -- go test -v -count=1 ./...

# Using entr
find . -name '*.go' | entr -c go test -v ./...

# Using a simple loop with fswatch
fswatch -o . | xargs -n1 -I{} go test -v ./...

Every time Claude Code writes a test file on the left, the watcher on the right picks it up and runs the suite. You see results in real time without lifting a finger.

Pro Tip: Targeted Watch Mode

If your project is large, scope the watcher to a single package while you work on it: watchexec -e go -- go test -v -count=1 ./pkg/urlparser/.... This keeps feedback under a second even on big codebases. You can always run the full suite in a third Beam tab when you're ready.

Table-Driven Tests: The Go Community's Gold Standard

Table-driven tests are the idiomatic Go testing pattern. Every experienced Go developer writes them. They're concise, easy to extend, and produce clear output when a test fails. Claude Code generates them natively -- you don't need to explain the pattern.

Let's say you have a URL parser function you need to test:

// urlparser.go
package urlparser

import (
    "errors"
    "net/url"
    "strings"
)

var (
    ErrEmptyURL    = errors.New("url cannot be empty")
    ErrNoScheme    = errors.New("url must have a scheme")
    ErrInvalidHost = errors.New("url must have a valid host")
)

type ParsedURL struct {
    Scheme   string
    Host     string
    Port     string
    Path     string
    RawQuery string
}

func ParseURL(rawURL string) (*ParsedURL, error) {
    rawURL = strings.TrimSpace(rawURL)
    if rawURL == "" {
        return nil, ErrEmptyURL
    }

    u, err := url.Parse(rawURL)
    if err != nil {
        return nil, err
    }

    if u.Scheme == "" {
        return nil, ErrNoScheme
    }

    if u.Host == "" && u.Opaque == "" {
        return nil, ErrInvalidHost
    }

    return &ParsedURL{
        Scheme:   u.Scheme,
        Host:     u.Hostname(),
        Port:     u.Port(),
        Path:     u.Path,
        RawQuery: u.RawQuery,
    }, nil
}

Now ask Claude Code to generate table-driven tests:

Write table-driven tests for the ParseURL function in urlparser.go. Cover happy paths, empty input, malformed URLs, unicode characters, special characters, missing scheme, missing host, URLs with ports, query parameters, and boundary values.

Claude Code produces something like this:

// urlparser_test.go
package urlparser

import (
    "errors"
    "testing"
)

func TestParseURL(t *testing.T) {
    tests := []struct {
        name    string
        input   string
        want    *ParsedURL
        wantErr error
    }{
        {
            name:  "valid https url",
            input: "https://example.com/path",
            want: &ParsedURL{
                Scheme: "https",
                Host:   "example.com",
                Path:   "/path",
            },
        },
        {
            name:  "url with port and query",
            input: "http://localhost:8080/api?key=value",
            want: &ParsedURL{
                Scheme:   "http",
                Host:     "localhost",
                Port:     "8080",
                Path:     "/api",
                RawQuery: "key=value",
            },
        },
        {
            name:  "unicode path",
            input: "https://example.com/café/münchen",
            want: &ParsedURL{
                Scheme: "https",
                Host:   "example.com",
                Path:   "/café/münchen",
            },
        },
        {
            name:  "url with special characters in query",
            input: "https://search.com/q?term=hello+world&lang=en",
            want: &ParsedURL{
                Scheme:   "https",
                Host:     "search.com",
                Path:     "/q",
                RawQuery: "term=hello+world&lang=en",
            },
        },
        {
            name:    "empty string",
            input:   "",
            wantErr: ErrEmptyURL,
        },
        {
            name:    "whitespace only",
            input:   "   \t\n  ",
            wantErr: ErrEmptyURL,
        },
        {
            name:    "missing scheme",
            input:   "example.com/path",
            wantErr: ErrNoScheme,
        },
        {
            name:    "scheme only",
            input:   "https://",
            wantErr: ErrInvalidHost,
        },
        {
            name:  "ip address with port",
            input: "http://192.168.1.1:3000/health",
            want: &ParsedURL{
                Scheme: "http",
                Host:   "192.168.1.1",
                Port:   "3000",
                Path:   "/health",
            },
        },
        {
            name:  "ftp scheme",
            input: "ftp://files.example.com/docs/report.pdf",
            want: &ParsedURL{
                Scheme: "ftp",
                Host:   "files.example.com",
                Path:   "/docs/report.pdf",
            },
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            got, err := ParseURL(tt.input)

            if tt.wantErr != nil {
                if err == nil {
                    t.Fatalf("expected error %v, got nil", tt.wantErr)
                }
                if !errors.Is(err, tt.wantErr) {
                    t.Fatalf("expected error %v, got %v", tt.wantErr, err)
                }
                return
            }

            if err != nil {
                t.Fatalf("unexpected error: %v", err)
            }

            if got.Scheme != tt.want.Scheme {
                t.Errorf("Scheme = %q, want %q", got.Scheme, tt.want.Scheme)
            }
            if got.Host != tt.want.Host {
                t.Errorf("Host = %q, want %q", got.Host, tt.want.Host)
            }
            if got.Port != tt.want.Port {
                t.Errorf("Port = %q, want %q", got.Port, tt.want.Port)
            }
            if got.Path != tt.want.Path {
                t.Errorf("Path = %q, want %q", got.Path, tt.want.Path)
            }
            if got.RawQuery != tt.want.RawQuery {
                t.Errorf("RawQuery = %q, want %q", got.RawQuery, tt.want.RawQuery)
            }
        })
    }
}

Notice the pattern. A slice of anonymous structs with name, input, want, and wantErr fields. A loop that calls t.Run with the test name as the subtest label. Descriptive names that read like sentences in the test output. This is exactly what the Go community expects.

Why Table-Driven Tests Win

  1. Adding a case is one struct literal -- no new test function, no duplicated assertions.
  2. Each case gets its own name via t.Run, so failures read like TestParseURL/empty_string in the output.
  3. You can re-run a single failing case with go test -run 'TestParseURL/empty_string'.
  4. The table doubles as documentation of the function's contract.

Subtests and Test Helpers

Table-driven tests are the starting point. Real-world test suites need helpers, fixtures, and organization. Claude Code handles all of these patterns.

t.Helper() for Clean Stack Traces

When you extract assertion logic into a helper function, call t.Helper() so that error messages point to the calling test, not the helper:

func assertParsedURL(t *testing.T, got, want *ParsedURL) {
    t.Helper()
    if got.Scheme != want.Scheme {
        t.Errorf("Scheme = %q, want %q", got.Scheme, want.Scheme)
    }
    if got.Host != want.Host {
        t.Errorf("Host = %q, want %q", got.Host, want.Host)
    }
    if got.Port != want.Port {
        t.Errorf("Port = %q, want %q", got.Port, want.Port)
    }
    if got.Path != want.Path {
        t.Errorf("Path = %q, want %q", got.Path, want.Path)
    }
}
Refactor the test assertions into a helper function with t.Helper() for cleaner error reporting.

Claude Code knows to add t.Helper() at the top of any helper function without being asked. It's part of idiomatic Go testing that the model has deeply internalized.

The testdata Directory Pattern

Go ignores directories named testdata during builds but makes them available to tests. This is the standard place for fixtures, golden files, and seed corpora:

myproject/
  urlparser/
    urlparser.go
    urlparser_test.go
    testdata/
      golden/
        valid_https.json
        unicode_path.json
      fixtures/
        malformed_urls.txt
        edge_cases.txt
Generate golden file tests for ParseURL. Write expected outputs as JSON files in testdata/golden/ and compare against them in the test.

Claude Code generates the test code that reads from testdata/, the golden files themselves, and an update flag (-update) so you can regenerate golden files when the output intentionally changes. This is a common pattern in the Go standard library itself.

Organizing with Nested Subtests

For complex functions with multiple categories of behavior, nest your subtests:

func TestParseURL(t *testing.T) {
    t.Run("valid URLs", func(t *testing.T) {
        t.Run("https", func(t *testing.T) { /* ... */ })
        t.Run("http", func(t *testing.T) { /* ... */ })
        t.Run("ftp", func(t *testing.T) { /* ... */ })
    })

    t.Run("invalid URLs", func(t *testing.T) {
        t.Run("empty", func(t *testing.T) { /* ... */ })
        t.Run("no scheme", func(t *testing.T) { /* ... */ })
        t.Run("no host", func(t *testing.T) { /* ... */ })
    })

    t.Run("edge cases", func(t *testing.T) {
        t.Run("unicode", func(t *testing.T) { /* ... */ })
        t.Run("very long URL", func(t *testing.T) { /* ... */ })
    })
}

You can run a specific group with go test -run TestParseURL/invalid_URLs. Claude Code generates this structure naturally when it sees functions with many distinct behaviors.

Benchmark Tests: Measuring What Matters

Go benchmarks are built into the testing package. No separate tool, no configuration. Write a function starting with Benchmark, and go test -bench=. runs it. Claude Code generates benchmarks that follow Go conventions and include the sub-benchmarks for different input sizes that make results actually useful.

When to Benchmark

Don't benchmark everything. Benchmark when:

  1. You're choosing between two implementations of the same function.
  2. A function sits on a hot path and its cost actually matters to users.
  3. You want proof that an optimization helped -- and didn't regress allocations.
  4. You need to track memory behavior (B/op, allocs/op) over time.

Let's say you have two implementations of a string reversal and want to compare them:

// stringutil.go
package stringutil

import "strings"

// ReverseRunes reverses using rune slice conversion
func ReverseRunes(s string) string {
    runes := []rune(s)
    for i, j := 0, len(runes)-1; i < j; i, j = i+1, j-1 {
        runes[i], runes[j] = runes[j], runes[i]
    }
    return string(runes)
}

// ReverseBuilder reverses using strings.Builder
func ReverseBuilder(s string) string {
    runes := []rune(s)
    var b strings.Builder
    b.Grow(len(s))
    for i := len(runes) - 1; i >= 0; i-- {
        b.WriteRune(runes[i])
    }
    return b.String()
}
Write benchmarks comparing ReverseRunes and ReverseBuilder with sub-benchmarks for short (10 chars), medium (100 chars), and long (10000 chars) inputs. Include allocation tracking.

Claude Code generates:

// stringutil_test.go
package stringutil

import (
    "strings"
    "testing"
)

func generateInput(n int) string {
    return strings.Repeat("a", n)
}

func BenchmarkReverseRunes(b *testing.B) {
    sizes := []struct {
        name string
        size int
    }{
        {"short_10", 10},
        {"medium_100", 100},
        {"long_10000", 10000},
    }

    for _, s := range sizes {
        input := generateInput(s.size)
        b.Run(s.name, func(b *testing.B) {
            b.ReportAllocs()
            for i := 0; i < b.N; i++ {
                ReverseRunes(input)
            }
        })
    }
}

func BenchmarkReverseBuilder(b *testing.B) {
    sizes := []struct {
        name string
        size int
    }{
        {"short_10", 10},
        {"medium_100", 100},
        {"long_10000", 10000},
    }

    for _, s := range sizes {
        input := generateInput(s.size)
        b.Run(s.name, func(b *testing.B) {
            b.ReportAllocs()
            for i := 0; i < b.N; i++ {
                ReverseBuilder(input)
            }
        })
    }
}

Reading Benchmark Results

Run with go test -bench=. -benchmem ./... and you get output like:

BenchmarkReverseRunes/short_10-10       18427634        64.93 ns/op       48 B/op     2 allocs/op
BenchmarkReverseRunes/medium_100-10      3125678       384.2 ns/op       448 B/op     2 allocs/op
BenchmarkReverseRunes/long_10000-10        27015     44318 ns/op     40960 B/op     2 allocs/op
BenchmarkReverseBuilder/short_10-10     14582341        82.17 ns/op       64 B/op     2 allocs/op
BenchmarkReverseBuilder/medium_100-10    2894721       414.6 ns/op       448 B/op     3 allocs/op
BenchmarkReverseBuilder/long_10000-10      23892     50284 ns/op     49152 B/op     4 allocs/op

Here's how to read each column:

  1. Name -- the benchmark and sub-benchmark, with a -N suffix showing GOMAXPROCS (here, 10).
  2. Iterations -- how many times the loop body ran to reach a stable measurement.
  3. ns/op -- average nanoseconds per operation; lower is better.
  4. B/op -- bytes allocated per operation (shown with -benchmem or b.ReportAllocs()).
  5. allocs/op -- heap allocations per operation; every allocation is extra work for the garbage collector.

In this case, ReverseRunes wins at every input size. The rune-slice swap avoids the Builder's extra allocation overhead. Claude Code can analyze these results for you:

Analyze these benchmark results and tell me which implementation is faster and why. Suggest optimizations.

Pro Tip: Benchmark in Beam's Right Pane

Keep benchmarks running in the right pane with watchexec -e go -- go test -bench=. -benchmem ./pkg/stringutil/. As Claude Code optimizes the implementation on the left, you see ns/op drop in real time. It's immensely satisfying to watch performance improve with each iteration.

b.ResetTimer and b.StopTimer

If your benchmark requires expensive setup that shouldn't be measured:

func BenchmarkParseURLFromFile(b *testing.B) {
    data, err := os.ReadFile("testdata/urls.txt")
    if err != nil {
        b.Fatal(err)
    }
    urls := strings.Split(string(data), "\n")

    b.ResetTimer() // Don't count file reading in the benchmark
    for i := 0; i < b.N; i++ {
        for _, u := range urls {
            ParseURL(u)
        }
    }
}

Claude Code adds b.ResetTimer() automatically when it sees setup code in a benchmark. It also knows when to use b.StopTimer() and b.StartTimer() for benchmarks that need mid-loop pauses for cleanup.

Fuzz Testing: Let the Machine Find Your Bugs

Go 1.18 introduced native fuzz testing. Instead of you thinking of edge cases, the fuzzer generates random inputs, runs your code, and reports any input that causes a crash, panic, or unexpected behavior. It's the most underused feature in Go's testing toolkit, and Claude Code makes it trivially easy to add.

How Fuzzing Works in Go

A fuzz test has two parts: a seed corpus (known inputs that exercise specific paths) and a fuzz target (the function that receives random mutations of those seeds). The fuzzer mutates the seed inputs, tracking which mutations explore new code paths, and saving any input that triggers a failure to testdata/fuzz/.

Add fuzz tests for the ParseURL function. Include seed corpus entries for valid URLs, malformed URLs, and edge cases. The fuzz target should verify that ParseURL never panics on any input.

Claude Code generates:

func FuzzParseURL(f *testing.F) {
    // Seed corpus: known inputs that exercise different paths
    seeds := []string{
        "https://example.com",
        "http://localhost:8080/path?q=1",
        "ftp://files.example.com/doc.pdf",
        "",
        "   ",
        "not-a-url",
        "://missing-scheme",
        "https://",
        "https://example.com/caf\u00e9",
        "https://example.com/" + strings.Repeat("a", 10000),
        "https://user:pass@example.com:443/path",
        "https://example.com/path#fragment",
        "https://example.com/path?q=hello+world&lang=en",
        "file:///home/user/document.txt",
    }

    for _, s := range seeds {
        f.Add(s)
    }

    f.Fuzz(func(t *testing.T, input string) {
        result, err := ParseURL(input)

        // ParseURL must never panic -- that's the primary assertion.
        // If it returns an error, we're done.
        if err != nil {
            return
        }

        // If it succeeds, the result must be non-nil
        if result == nil {
            t.Fatal("ParseURL returned nil result with nil error")
        }

        // Scheme must not be empty for a valid parse
        if result.Scheme == "" {
            t.Error("parsed URL has empty scheme")
        }

        // Round-trip: re-parsing should not fail. Skip opaque URLs
        // (empty host, e.g. mailto:) and IPv6 hosts, whose brackets
        // Hostname() strips, since reassembly would be lossy.
        if result.Host == "" || strings.Contains(result.Host, ":") {
            return
        }
        reparsed, err := ParseURL(result.Scheme + "://" + result.Host + result.Path)
        if err != nil {
            t.Errorf("round-trip failed: %v", err)
        }
        if reparsed != nil && reparsed.Scheme != result.Scheme {
            t.Errorf("round-trip scheme mismatch: got %q, want %q",
                reparsed.Scheme, result.Scheme)
        }
    })
}

Running Fuzz Tests

# Run the fuzz test for 30 seconds
# (-fuzz targets a single package, so point it at the package directory, not ./...)
go test -fuzz=FuzzParseURL -fuzztime=30s ./urlparser

# Run with more workers for faster exploration
go test -fuzz=FuzzParseURL -fuzztime=2m -parallel=8 ./urlparser

# Run seed corpus only (like a regular test)
go test -run=FuzzParseURL ./...

When the fuzzer finds a failing input, it saves it to testdata/fuzz/FuzzParseURL/ as a text file. From that point on, every run of go test includes that input as a regression test -- no manual work required.
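Those saved files use Go's corpus encoding: a "go test fuzz v1" version header followed by one Go-syntax literal per fuzz argument. A saved crasher for a single string argument looks like this (the input value is illustrative):

```
go test fuzz v1
string("\xff\xfe://\x00\x01")
```

The files are plain text, so you can commit them to version control and even hand-write new seed entries.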

Analyzing Fuzz Failures with Claude Code

When a fuzz test discovers a crash, the output can be cryptic. This is where Claude Code shines. Paste the failure into Beam's left pane:

The fuzz test found a crash with this input: "\xff\xfe://\x00\x01". Analyze why ParseURL panics on this input and fix it.

Claude Code traces the input through your code, identifies the issue (in this case, perhaps invalid UTF-8 not being handled before the url.Parse call), and generates both the fix and a regression test for that specific input.

Why Fuzz Testing Matters

Fuzz testing finds bugs that humans don't think to test for. It's particularly valuable for parsers, encoders/decoders, serialization code, and anything that accepts untrusted input. Claude Code can add fuzz tests to any function in seconds. There's no reason not to fuzz your critical code paths.

Race Detection: Catching Concurrency Bugs

Go's race detector is one of the most valuable tools in the ecosystem. It instruments your code at compile time to detect data races at runtime -- and it's as simple as adding -race to your test command.

# Run all tests with race detection
go test -race -v ./...

# Run specific tests with race detection
go test -race -run TestConcurrent -v ./...

Claude Code can generate concurrent tests specifically designed to trigger race conditions:

Write concurrent tests for this Cache struct that exercise race conditions. Use goroutines to simultaneously read and write to the cache, then run with -race to verify it's safe.
func TestCacheConcurrency(t *testing.T) {
    c := NewCache()
    var wg sync.WaitGroup

    // Concurrent writers
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            key := fmt.Sprintf("key-%d", n)
            c.Set(key, n)
        }(i)
    }

    // Concurrent readers
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            key := fmt.Sprintf("key-%d", n%50)
            c.Get(key)
        }(i)
    }

    // Concurrent deleters
    for i := 0; i < 50; i++ {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            key := fmt.Sprintf("key-%d", n)
            c.Delete(key)
        }(i)
    }

    wg.Wait()
}

If the race detector fires, the output tells you exactly which goroutines are racing and on which memory address. Claude Code can interpret race detector output and suggest the correct fix -- whether it's a sync.Mutex, a sync.RWMutex, or a redesign using channels.

The race detector found a data race in Cache.Set and Cache.Get. Here's the output. Fix it using the most appropriate synchronization primitive.

Claude Code picks sync.RWMutex for a read-heavy cache (allowing concurrent reads) rather than a plain sync.Mutex (which would serialize everything). It understands the performance implications of each choice.

Test Coverage: Finding the Gaps

Go's coverage tool shows you exactly which lines of code are exercised by your tests. It's built into go test and produces both text and HTML reports.

# Quick coverage percentage
go test -cover ./...

# Generate a coverage profile
go test -coverprofile=coverage.out ./...

# View coverage as HTML (opens in browser)
go tool cover -html=coverage.out

# Show coverage by function
go tool cover -func=coverage.out

The HTML output highlights covered lines in green and uncovered lines in red. It's the fastest way to spot gaps in your test suite.

Using Claude Code to Close Coverage Gaps

Here's the powerful workflow. Generate a coverage report, then ask Claude Code to fill the gaps:

Run go test -coverprofile=coverage.out ./... and then go tool cover -func=coverage.out. Show me which functions have less than 80% coverage, then write tests to cover the uncovered lines.

Claude Code reads the coverage report, identifies the uncovered branches (often error paths, edge cases in switch statements, or rarely-hit conditions), and generates targeted tests for exactly those lines. It doesn't write useless tests that just inflate the number -- it writes tests that exercise the uncovered logic.

Coverage Target: 80% Is a Good Default

Don't chase 100% coverage. Some code (like main() functions, OS-specific error handling, and panic recovery) is hard to test and not worth the effort. Aim for 80%+ on your core business logic and critical paths. Use Claude Code to close the gap efficiently rather than writing tests that don't add value.

Coverprofile in CI

Add coverage reporting to your CI pipeline so regressions are caught automatically:

# In your CI script or Makefile
go test -race -coverprofile=coverage.out -covermode=atomic ./...
go tool cover -func=coverage.out | tail -1

# Fail if coverage drops below threshold
COVERAGE=$(go tool cover -func=coverage.out | tail -1 | awk '{print $3}' | tr -d '%')
if [ "$(echo "$COVERAGE < 80" | bc)" -eq 1 ]; then
    echo "Coverage $COVERAGE% is below 80% threshold"
    exit 1
fi

Integration and End-to-End Tests

Unit tests verify individual functions. Integration tests verify that components work together. Go provides excellent tools for both, and Claude Code generates idiomatic integration tests that use real HTTP servers, test databases, and proper setup/teardown.

httptest for HTTP Handler Testing

The net/http/httptest package creates real HTTP servers in-memory for testing handlers without network I/O:

Write integration tests for my HTTP API handlers using httptest. Cover GET, POST, PUT, DELETE operations with proper request bodies, status code assertions, and response body validation.
func TestGetUserHandler(t *testing.T) {
    // Setup
    store := NewMemoryStore()
    store.CreateUser(User{ID: "1", Name: "Alice", Email: "alice@example.com"})
    handler := NewUserHandler(store)

    tests := []struct {
        name       string
        method     string
        path       string
        body       string
        wantStatus int
        wantBody   string
    }{
        {
            name:       "get existing user",
            method:     "GET",
            path:       "/users/1",
            wantStatus: http.StatusOK,
            wantBody:   `{"id":"1","name":"Alice","email":"alice@example.com"}`,
        },
        {
            name:       "get nonexistent user",
            method:     "GET",
            path:       "/users/999",
            wantStatus: http.StatusNotFound,
        },
        {
            name:       "create user",
            method:     "POST",
            path:       "/users",
            body:       `{"name":"Bob","email":"bob@example.com"}`,
            wantStatus: http.StatusCreated,
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            var body io.Reader
            if tt.body != "" {
                body = strings.NewReader(tt.body)
            }

            req := httptest.NewRequest(tt.method, tt.path, body)
            req.Header.Set("Content-Type", "application/json")
            w := httptest.NewRecorder()

            handler.ServeHTTP(w, req)

            if w.Code != tt.wantStatus {
                t.Errorf("status = %d, want %d", w.Code, tt.wantStatus)
            }
            if tt.wantBody != "" {
                got := strings.TrimSpace(w.Body.String())
                if got != tt.wantBody {
                    t.Errorf("body = %s, want %s", got, tt.wantBody)
                }
            }
        })
    }
}

TestMain for Setup and Teardown

When your integration tests need shared setup (database connections, test containers, configuration loading), use TestMain:

func TestMain(m *testing.M) {
    // Setup: start test database
    db, cleanup := setupTestDB()
    testDB = db

    // Run all tests in this package
    code := m.Run()

    // Teardown: clean up resources
    cleanup()
    os.Exit(code)
}

func setupTestDB() (*sql.DB, func()) {
    db, err := sql.Open("sqlite3", ":memory:")
    if err != nil {
        log.Fatalf("failed to open test db: %v", err)
    }

    // Run migrations
    if _, err := db.Exec(schema); err != nil {
        log.Fatalf("failed to run migrations: %v", err)
    }

    return db, func() { db.Close() }
}
Generate a TestMain function that starts a PostgreSQL testcontainer, runs migrations, and tears down after all tests complete.

testcontainers-go for Database Integration Tests

For real database integration tests, testcontainers-go spins up Docker containers on demand:

func setupPostgresContainer(t *testing.T) (*sql.DB, func()) {
    t.Helper()
    ctx := context.Background()

    req := testcontainers.ContainerRequest{
        Image:        "postgres:16-alpine",
        ExposedPorts: []string{"5432/tcp"},
        Env: map[string]string{
            "POSTGRES_PASSWORD": "test",
            "POSTGRES_DB":       "testdb",
        },
        WaitingFor: wait.ForLog("database system is ready to accept connections").
            WithOccurrence(2).
            WithStartupTimeout(30 * time.Second),
    }

    container, err := testcontainers.GenericContainer(ctx,
        testcontainers.GenericContainerRequest{
            ContainerRequest: req,
            Started:          true,
        })
    if err != nil {
        t.Fatalf("failed to start container: %v", err)
    }

    host, _ := container.Host(ctx)
    port, _ := container.MappedPort(ctx, "5432")
    dsn := fmt.Sprintf("postgres://postgres:test@%s:%s/testdb?sslmode=disable",
        host, port.Port())

    db, err := sql.Open("pgx", dsn)
    if err != nil {
        t.Fatalf("failed to connect: %v", err)
    }

    return db, func() {
        db.Close()
        container.Terminate(ctx)
    }
}

Claude Code generates the full testcontainers setup, including wait strategies, port mapping, and teardown functions. It knows the common container images and their configuration options.

Mock Generation: Testing with Interfaces

Go's interfaces enable powerful testing through dependency injection. Define a small interface, implement it for production, and create a mock for tests. Claude Code generates both the interface and the mock without external tools.

The Interface Pattern

// Define a small, focused interface
type UserStore interface {
    GetUser(ctx context.Context, id string) (*User, error)
    CreateUser(ctx context.Context, u *User) error
    DeleteUser(ctx context.Context, id string) error
}

// Your handler depends on the interface, not a concrete type
type UserHandler struct {
    store UserStore
}
Generate a mock implementation of the UserStore interface for testing. Include configurable return values and call tracking so I can assert which methods were called with which arguments.

Claude Code generates a clean, hand-written mock:

type MockUserStore struct {
    GetUserFn    func(ctx context.Context, id string) (*User, error)
    CreateUserFn func(ctx context.Context, u *User) error
    DeleteUserFn func(ctx context.Context, id string) error

    // Call tracking
    GetUserCalls    []string
    CreateUserCalls []*User
    DeleteUserCalls []string
}

func (m *MockUserStore) GetUser(ctx context.Context, id string) (*User, error) {
    m.GetUserCalls = append(m.GetUserCalls, id)
    if m.GetUserFn != nil {
        return m.GetUserFn(ctx, id)
    }
    return nil, errors.New("not implemented")
}

func (m *MockUserStore) CreateUser(ctx context.Context, u *User) error {
    m.CreateUserCalls = append(m.CreateUserCalls, u)
    if m.CreateUserFn != nil {
        return m.CreateUserFn(ctx, u)
    }
    return nil
}

func (m *MockUserStore) DeleteUser(ctx context.Context, id string) error {
    m.DeleteUserCalls = append(m.DeleteUserCalls, id)
    if m.DeleteUserFn != nil {
        return m.DeleteUserFn(ctx, id)
    }
    return nil
}

Table-Driven Tests with Mocked Dependencies

The real power emerges when you combine table-driven tests with mocks. Each test case configures the mock differently:

func TestUserHandler_GetUser(t *testing.T) {
    tests := []struct {
        name       string
        userID     string
        mockFn     func(ctx context.Context, id string) (*User, error)
        wantStatus int
    }{
        {
            name:   "user found",
            userID: "1",
            mockFn: func(ctx context.Context, id string) (*User, error) {
                return &User{ID: "1", Name: "Alice"}, nil
            },
            wantStatus: http.StatusOK,
        },
        {
            name:   "user not found",
            userID: "999",
            mockFn: func(ctx context.Context, id string) (*User, error) {
                return nil, ErrNotFound
            },
            wantStatus: http.StatusNotFound,
        },
        {
            name:   "database error",
            userID: "1",
            mockFn: func(ctx context.Context, id string) (*User, error) {
                return nil, errors.New("connection refused")
            },
            wantStatus: http.StatusInternalServerError,
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            mock := &MockUserStore{GetUserFn: tt.mockFn}
            handler := NewUserHandler(mock)

            req := httptest.NewRequest("GET", "/users/"+tt.userID, nil)
            w := httptest.NewRecorder()
            handler.ServeHTTP(w, req)

            if w.Code != tt.wantStatus {
                t.Errorf("status = %d, want %d", w.Code, tt.wantStatus)
            }

            // Verify the mock was called with the right ID
            if len(mock.GetUserCalls) != 1 || mock.GetUserCalls[0] != tt.userID {
                t.Errorf("GetUser called with %v, want [%s]",
                    mock.GetUserCalls, tt.userID)
            }
        })
    }
}

This pattern -- table-driven tests with configurable mocks -- is the backbone of Go testing in production codebases. Claude Code generates it fluently.

When to Use mockgen or moq

For large interfaces or when you want compile-time safety that your mock stays in sync with the interface, tools like mockgen or moq auto-generate mock implementations:

# Using mockgen
go install go.uber.org/mock/mockgen@latest
mockgen -source=store.go -destination=mock_store_test.go -package=mypackage

# Using moq
go install github.com/matryer/moq@latest
moq -out mock_store_test.go . UserStore
Add a go:generate directive for mockgen to auto-generate mocks for the UserStore interface. Then write tests using the generated mock.

Claude Code adds the //go:generate comment, generates the test code using the mock's API, and knows the difference between mockgen's EXPECT() API and moq's function-field API.
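The directive lives next to the interface it mocks, so go generate ./... regenerates the mock whenever the interface changes. A sketch, with file and package names mirroring the mockgen invocation above:

```go
// store.go
package mypackage

import "context"

//go:generate mockgen -source=store.go -destination=mock_store_test.go -package=mypackage

// UserStore is the interface the generated mock will implement.
type UserStore interface {
    GetUser(ctx context.Context, id string) (*User, error)
    CreateUser(ctx context.Context, u *User) error
    DeleteUser(ctx context.Context, id string) error
}
```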

The Best Go Testing Workflow Starts with Beam

Claude Code generates your tests. Beam gives you the split-pane terminal to watch them pass in real time. Download free for macOS.

Download Beam for macOS

Go Testing Best Practices Checklist

Here's everything we covered, distilled into a checklist you can reference for every Go project:

  1. Default to table-driven tests with t.Run subtests and descriptive case names.
  2. Call t.Helper() in every test helper so failures point at the calling test.
  3. Keep fixtures, golden files, and fuzz corpora in testdata/.
  4. Benchmark hot paths with sub-benchmarks per input size and b.ReportAllocs().
  5. Fuzz every parser, decoder, and anything that accepts untrusted input.
  6. Run the suite with -race, locally and in CI.
  7. Aim for 80%+ coverage on core logic and enforce the threshold in CI.
  8. Use httptest for handlers, TestMain and testcontainers-go for integration tests.
  9. Depend on small interfaces and mock them -- hand-written, mockgen, or moq.

Go made testing a first-class experience. Claude Code makes it fast. Beam makes it organized. Together, they turn testing from a chore into the most productive part of your development workflow.