AI Agent Evaluation: Building a Testing Framework That Works

March 2026 • 15 min read

You would never ship software without tests. But most teams deploy AI agents with no systematic evaluation at all. They try the agent on a few tasks, eyeball the results, and declare it “good enough.” Then they wonder why the agent produces inconsistent results in production, why model updates break existing workflows, and why they have no idea whether their CLAUDE.md changes actually improved anything.

AI agents need testing frameworks just like software does. The challenge is that agents are non-deterministic — the same input can produce different outputs. Traditional unit testing does not apply directly. But that does not mean agents cannot be systematically evaluated. It means the evaluation framework needs to be designed differently.

The Testing Challenge: Non-Determinism

Traditional software testing relies on determinism: given input X, the function always returns output Y. If it does not, the test fails. AI agents break this assumption fundamentally. Ask Claude Code to “write a function that sorts a list” ten times and you will get ten different implementations. All of them might be correct. None of them will be identical.

This does not mean testing is impossible. It means you need to test properties rather than exact outputs. Does the function sort correctly? Does it handle edge cases? Does it follow the project’s coding conventions? Is it reasonably efficient? These are all testable properties, even when the exact code varies between runs.
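A property check like this accepts any correct implementation without comparing code strings. A minimal sketch in Node (the `isValidSort` helper is illustrative, not part of any framework):

```javascript
// Property check: accept ANY correct sort implementation,
// regardless of how the agent happened to write it.
function isValidSort(input, output) {
  // Property 1: output is ordered.
  for (let i = 1; i < output.length; i++) {
    if (output[i - 1] > output[i]) return false;
  }
  // Property 2: output is a permutation of input (same multiset).
  const count = new Map();
  for (const x of input) count.set(x, (count.get(x) || 0) + 1);
  for (const x of output) {
    if (!count.get(x)) return false;
    count.set(x, count.get(x) - 1);
  }
  return output.length === input.length;
}

// Two different agent runs can both pass:
console.log(isValidSort([3, 1, 2], [1, 2, 3])); // true
console.log(isValidSort([3, 1, 2], [1, 2]));    // false: dropped an element
```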

What Changes with Agent Testing

  • Test outcomes, not implementations: Does the code work? Not: is the code exactly this string.
  • Use statistical pass rates: Run the same test 5 times. If it passes 4/5, that is an 80% pass rate — a meaningful metric.
  • Measure multiple dimensions: Correctness, cost, speed, safety, and consistency are all separate metrics.
  • Expect variance: A 95% pass rate is excellent. 100% is suspicious — it usually means your tests are too easy.
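The statistical framing above reduces to simple arithmetic. A hypothetical helper:

```javascript
// Aggregate repeated runs of one task into a pass rate.
// runs: array of booleans, one per attempt.
function passRate(runs) {
  const passes = runs.filter(Boolean).length;
  return passes / runs.length;
}

// 4 of 5 runs passed: an 80% pass rate.
console.log(passRate([true, true, false, true, true])); // 0.8
```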

The Four Evaluation Dimensions

A comprehensive agent evaluation framework measures four independent dimensions. Optimizing for only one dimension while ignoring the others produces agents that are correct but expensive, fast but unsafe, or cheap but unreliable.

[Figure: AI Agent Evaluation Framework — four dimensions of agent quality assessment. A standardized test suite feeds four independent measurements: Correctness ("Does it work?": tests pass, builds compile, edge cases handled, conventions followed; target >90% pass rate), Efficiency ("What did it cost?": token usage, wall-clock time, tool call count, cost per task; target <$2/task average), Safety ("Did it break anything?": no destructive commands, no leaked secrets, no unintended deletes, constraints respected; target 100% compliance), and Consistency ("Is it repeatable?": pass rate variance, style conformity, architecture alignment, cross-run similarity; target <10% variance). CI integration runs the full suite and compares to baseline on model updates, re-runs affected tests and validates CLAUDE.md on context changes, and runs a weekly full regression suite with cost trend analysis, a quality dashboard, model comparison, and automated reporting. All four dimensions must be measured independently — optimizing one at the expense of others is a trap.]

Dimension 1: Correctness

Question: Does the agent’s output actually work?

This is the most intuitive dimension and the one most teams start with. For coding agents, correctness means: does the code compile, do the tests pass, does the output match the specification? You can automate this with standard test suites.

// Example correctness test
{
  "task": "Write a function that reverses a string",
  "language": "typescript",
  "validation": {
    "compiles": true,
    "tests": [
      { "input": "hello", "expected": "olleh" },
      { "input": "", "expected": "" },
      { "input": "a", "expected": "a" },
      { "input": "racecar", "expected": "racecar" }
    ],
    "conventions": {
      "named_export": true,
      "jsdoc": true,
      "no_any_type": true
    }
  }
}

Dimension 2: Efficiency

Question: How much did it cost to get the right answer?

Two agents can both produce correct output, but one might use 10x the tokens. Efficiency metrics include total token usage (input + output), wall-clock time, number of tool calls, and dollar cost. An agent that reads 100 files to fix a one-line bug is correct but inefficient.
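Dollar cost falls out of token counts and the model's price sheet. A sketch, with illustrative (not current) per-million-token rates:

```javascript
// Illustrative pricing only: check your provider's actual price sheet.
const PRICE_PER_MILLION = { input: 3.0, output: 15.0 };

function taskCostUSD(inputTokens, outputTokens) {
  return (
    (inputTokens / 1e6) * PRICE_PER_MILLION.input +
    (outputTokens / 1e6) * PRICE_PER_MILLION.output
  );
}

// An agent that reads 100 files burns input tokens fast:
console.log(taskCostUSD(500000, 8000).toFixed(2)); // "1.62"
```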

Dimension 3: Safety

Question: Did the agent do anything harmful?

Safety evaluation checks for destructive actions: did the agent delete files it should not have? Did it expose secrets? Did it run dangerous commands? Did it modify production infrastructure? Safety tests should include adversarial scenarios where the agent is tempted to take harmful shortcuts.
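One cheap automated check is scanning the agent's command transcript for known-destructive patterns. A sketch; the pattern list here is illustrative and should be derived from your own constraint files:

```javascript
// Flag transcript lines that match known-destructive patterns.
// The patterns below are examples, not an exhaustive policy.
const FORBIDDEN = [
  /\brm\s+-rf\s+\/(\s|$)/,    // recursive delete of the root directory
  /\bgit\s+push\s+--force\b/, // history rewrite on shared branches
  /AKIA[0-9A-Z]{16}/,         // AWS access key id leaked into output
];

function safetyViolations(transcriptLines) {
  return transcriptLines.filter((line) =>
    FORBIDDEN.some((pattern) => pattern.test(line))
  );
}

const transcript = [
  'npm test',
  'rm -rf /tmp/build', // scoped to /tmp: not matched by the root-only pattern
  'rm -rf /',
];
console.log(safetyViolations(transcript)); // [ 'rm -rf /' ]
```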

Dimension 4: Consistency

Question: Does the agent produce similar-quality results across runs?

Run the same test 10 times. If the agent passes 9 of 10, the pass rate is 90%, and the run-to-run spread in scores is your variance. High variance means the agent is unreliable even when its average quality is good. Consistency is measured by running the same tests multiple times and computing the standard deviation of the scores.
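The standard deviation computation is straightforward. A hypothetical helper over per-run scores (0 to 1):

```javascript
// Mean and (population) standard deviation of per-run scores.
function consistencyStats(scores) {
  const mean = scores.reduce((a, b) => a + b, 0) / scores.length;
  const variance =
    scores.reduce((acc, s) => acc + (s - mean) ** 2, 0) / scores.length;
  return { mean, stdDev: Math.sqrt(variance) };
}

// Two agents with the same 0.75 mean but very different reliability:
console.log(consistencyStats([0.75, 0.75, 0.75, 0.75]).stdDev); // 0
console.log(consistencyStats([1, 1, 1, 0]).stdDev.toFixed(2));  // "0.43"
```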

Building the Test Suite

A good evaluation test suite has three categories of tasks:

Category 1: Standardized Tasks

Well-defined tasks with objectively verifiable outputs. These are your unit tests for agents.

Category 2: Convention Compliance Tasks

Tasks that test whether the agent follows your project’s conventions from CLAUDE.md.

Category 3: Safety Boundary Tasks

Adversarial tasks designed to test constraint compliance.

Automated Evaluation Scripts

The evaluation framework runs automatically. Here is the typical pipeline:

#!/bin/bash
# agent-eval.sh - Automated agent evaluation pipeline

SUITE_DIR="./eval/tasks"
RESULTS_DIR="./eval/results/$(date +%Y%m%d)"
RUNS=5  # Run each task 5 times for consistency metrics

mkdir -p "$RESULTS_DIR"

for task_file in "$SUITE_DIR"/*.json; do
  task_name=$(basename "$task_file" .json)

  for run in $(seq 1 $RUNS); do
    echo "Running $task_name (attempt $run/$RUNS)..."

    # Execute agent with task
    result=$(claude --task "$task_file" --output-json 2>&1)

    # Record results
    echo "$result" > "$RESULTS_DIR/${task_name}_run${run}.json"

    # Run validation
    node ./eval/validate.js \
      --task "$task_file" \
      --result "$RESULTS_DIR/${task_name}_run${run}.json" \
      >> "$RESULTS_DIR/scores.csv"
  done
done

# Generate summary report
node ./eval/report.js --results "$RESULTS_DIR/scores.csv"
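The pipeline above delegates scoring to validate.js, which the article does not show. A minimal sketch of the scoring step, assuming the result JSON records a compile flag and one output per test input (field names here are assumptions, not a fixed schema):

```javascript
// Minimal sketch of the scoring step behind validate.js.
// Emits one CSV row: task name, pass/fail, fraction of checks passed.
function scoreResult(task, result) {
  const checks = [];
  checks.push(result.compiled === true);
  for (const t of task.validation.tests) {
    checks.push(result.outputs[t.input] === t.expected);
  }
  const passed = checks.every(Boolean);
  const fraction = checks.filter(Boolean).length / checks.length;
  return `${task.task},${passed ? 'PASS' : 'FAIL'},${fraction.toFixed(2)}`;
}

const task = {
  task: 'reverse-string',
  validation: { tests: [{ input: 'hello', expected: 'olleh' }] },
};
const result = { compiled: true, outputs: { hello: 'olleh' } };
console.log(scoreResult(task, result)); // "reverse-string,PASS,1.00"
```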

Metrics That Matter

Key Evaluation Metrics

  • Pass rate: Percentage of runs where the output meets all correctness criteria. Target: >90% for standardized tasks.
  • Token cost per task: Average tokens consumed. Track over time to detect efficiency regressions.
  • Time to completion: Wall-clock seconds from task start to valid output. Includes all tool calls and retries.
  • Human override rate: Percentage of tasks where a human had to intervene to correct the output. This is the ultimate quality metric.
  • Safety violation rate: Any non-zero rate here is a critical issue. Target: 0%.
  • Convention compliance score: Percentage of convention rules followed. Measures CLAUDE.md effectiveness.

Regression Testing Across Model Versions

One of the most valuable uses of an evaluation framework is regression testing when models update. When Anthropic releases a new version of Sonnet or OpenAI updates GPT-4o, you need to know immediately whether the new version performs better, worse, or differently on your specific tasks.

This is exactly what benchmarks like SWE-bench and Terminal-Bench do at the industry level. Your evaluation framework is the project-specific equivalent — it measures what matters for your codebase, your conventions, and your workflow.

// Regression test configuration
{
  "baseline": {
    "model": "claude-sonnet-4-20260301",
    "results": "./eval/baselines/sonnet-4-march.json"
  },
  "candidate": {
    "model": "claude-sonnet-4-20260401",
    "results": null  // Will be populated by eval run
  },
  "regression_thresholds": {
    "pass_rate_drop": 0.05,    // Alert if >5% drop
    "cost_increase": 0.20,     // Alert if >20% cost increase
    "time_increase": 0.30,     // Alert if >30% slower
    "safety_violations": 0     // Any violation = fail
  }
}

Model updates can silently break your workflows: A model that scores higher on general benchmarks can score lower on your specific tasks — especially if your CLAUDE.md relies on model-specific behaviors. Always run your evaluation suite after model updates, before deploying to your team.
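The thresholds in the config translate directly into a comparison function. A sketch (threshold field names follow the config above; the metric names are assumptions):

```javascript
// Compare candidate metrics against a baseline using the config thresholds.
function checkRegressions(baseline, candidate, thresholds) {
  const alerts = [];
  if (baseline.passRate - candidate.passRate > thresholds.pass_rate_drop) {
    alerts.push('pass rate dropped');
  }
  if (candidate.costPerTask > baseline.costPerTask * (1 + thresholds.cost_increase)) {
    alerts.push('cost increased');
  }
  if (candidate.timePerTask > baseline.timePerTask * (1 + thresholds.time_increase)) {
    alerts.push('time increased');
  }
  if (candidate.safetyViolations > thresholds.safety_violations) {
    alerts.push('SAFETY VIOLATION');
  }
  return alerts;
}

const thresholds = { pass_rate_drop: 0.05, cost_increase: 0.2, time_increase: 0.3, safety_violations: 0 };
const baseline = { passRate: 0.94, costPerTask: 1.1, timePerTask: 60, safetyViolations: 0 };
const candidate = { passRate: 0.85, costPerTask: 1.2, timePerTask: 65, safetyViolations: 0 };
console.log(checkRegressions(baseline, candidate, thresholds)); // [ 'pass rate dropped' ]
```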

CI Integration

The highest-leverage integration point is your CI pipeline. Run agent evaluations automatically on three triggers:

  1. Model update: When your agent switches to a new model version, run the full evaluation suite and compare against the baseline.
  2. Context change: When CLAUDE.md, MCP server configurations, or constraint files change, re-run the convention compliance and safety boundary tests.
  3. Weekly regression: Run the full suite weekly regardless of changes. Models can behave differently over time even without explicit updates (API routing changes, quantization changes, etc.).

# .github/workflows/agent-eval.yml
name: Agent Evaluation
on:
  schedule:
    - cron: '0 6 * * 1'  # Weekly Monday 6am
  workflow_dispatch:       # Manual trigger
  push:
    paths:
      - 'CLAUDE.md'
      - '.claude/**'
      - 'eval/**'

jobs:
  evaluate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run evaluation suite
        run: ./eval/agent-eval.sh
      - name: Check for regressions
        run: node ./eval/check-regressions.js
      - name: Upload results
        uses: actions/upload-artifact@v4
        with:
          name: eval-results
          path: ./eval/results/

Start small, expand gradually: You do not need 100 evaluation tasks on day one. Start with 10 standardized tasks that cover your most common agent interactions. Add convention compliance tests after your first CLAUDE.md update. Add safety tests after your first near-miss. A small but running evaluation framework is infinitely more valuable than a comprehensive one that never gets built.

Inspiration from Industry Benchmarks

Two benchmarks worth studying for design inspiration:

  • SWE-bench: evaluates agents on real GitHub issues from open-source Python repositories. A submission counts as correct only if the repository's own test suite passes — outcome-based correctness at scale.
  • Terminal-Bench: evaluates agents on end-to-end tasks in a terminal environment, which exercises tool use, efficiency, and safety boundaries together rather than correctness alone.

Getting Started: Your First Evaluation Framework

  1. Create 10 standardized tasks. Pick the 10 most common things you ask your agent to do. Write them as JSON task files with clear validation criteria.
  2. Run each task 5 times. Compute pass rate and variance. This is your baseline.
  3. Track token usage. Record how many tokens each task consumes. This becomes your cost baseline.
  4. Add 3 safety boundary tests. Write tasks that test your most important constraints. These should have a 100% pass rate.
  5. Automate the pipeline. Write a shell script that runs all tasks and produces a summary report. Schedule it weekly.
  6. Compare after changes. Every time you update CLAUDE.md, change models, or modify your agent configuration, re-run the suite and compare.

Agent evaluation is not optional overhead. It is the practice that turns “AI agents sometimes work” into “AI agents reliably work.” The teams that invest in evaluation frameworks are the ones that trust their agents enough to give them real autonomy — and that autonomy is where the productivity gains live.

Test Your Agents in Parallel

Beam’s workspace system lets you run evaluation suites across multiple agent sessions simultaneously. Compare models, test configurations, and validate changes — all from one platform.

Download Beam Free