Go Testing
by @pitchinnate · 🖥️ Coding · 3d ago · 1 views
A skill to help write tests for Go.
Claude · Codex · Opencode — coding, golang, testing
---
name: go-testing
description: "Use this skill whenever you are writing, reviewing, or generating tests for Go code. Triggers include: any request to write unit tests, table-driven tests, fuzz tests, benchmarks, or test helpers in Go; requests to improve test coverage; questions about mocking, test structure, subtests, or t.Cleanup. Ensures all test output follows idiomatic Go testing conventions. Do NOT use for non-Go test frameworks, integration test infrastructure, or general Go code that is not test-related (use the go skill for that)."
---
# Go Testing Skill
## Step 1 – File and Package Layout
- Test files end in `_test.go` and live in the same directory as the code under test.
- Use the same package name for white-box tests (access to unexported symbols): `package mypackage`
- Use a `_test` suffix package for black-box tests (only exported API): `package mypackage_test`
- One test file per source file is a good default: `parser.go` → `parser_test.go`.
---
## Step 2 – Naming Conventions
Follow the standard Go testing naming rules precisely:
```go
// Top-level function
func helloWorld() {}
func TestHelloWorld(t *testing.T) {}
// Method on a type
type FooStruct struct{}
func (f *FooStruct) Bar() {}
func TestFooStruct_Bar(t *testing.T) {}
// Subtest names use t.Run with a descriptive string
t.Run("returns error when input is empty", func(t *testing.T) { ... })
// Benchmarks
func BenchmarkHelloWorld(b *testing.B) {}
// Fuzz tests
func FuzzHelloWorld(f *testing.F) {}
```
- Test names must start with `Test`, `Benchmark`, or `Fuzz` – no exceptions.
- Subtest names should read as plain English descriptions of the scenario.
---
## Step 3 – Test Structure: Given / When / Then
Every test body follows a three-part structure. Comment each section:
```go
func TestAdd(t *testing.T) {
	// given
	a, b := 2, 3
	// when
	result := Add(a, b)
	// then
	if result != 5 {
		t.Errorf("Add(%d, %d) = %d; want 5", a, b, result)
	}
}
```
This maps to BDD's **given / when / then** and keeps tests scannable and consistent.
---
## Step 4 – Subtests with `t.Run`
Use `t.Run` to group related scenarios under one test function instead of writing separate top-level functions. This keeps setup shared and output organized.
```go
func TestDivide(t *testing.T) {
	t.Run("divides two positive numbers", func(t *testing.T) {
		// given / when / then
	})
	t.Run("returns error on division by zero", func(t *testing.T) {
		// given / when / then
	})
}
```
- Subtests run independently and can be targeted: `go test -run TestDivide/returns_error`.
- Never write `TestDivide_PositiveNumbers` and `TestDivide_DivByZero` as separate top-level functions when they test the same unit.
---
## Step 5 – Table-Driven Tests
For any function with multiple input/output cases, use table-driven tests. This reduces repetition and makes adding cases trivial.
```go
func TestMultiply(t *testing.T) {
	testCases := []struct {
		name     string
		a, b     int
		expected int
	}{
		{name: "positive numbers", a: 3, b: 4, expected: 12},
		{name: "multiply by zero", a: 5, b: 0, expected: 0},
		{name: "negative numbers", a: -2, b: 3, expected: -6},
	}
	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			// given (tc fields)
			// when
			result := Multiply(tc.a, tc.b)
			// then
			if result != tc.expected {
				t.Errorf("Multiply(%d, %d) = %d; want %d", tc.a, tc.b, result, tc.expected)
			}
		})
	}
}
```
**Rules for table-driven tests:**
- Always use `t.Run` inside the loop – not a bare assertion – so each case is individually named and re-runnable.
- Name each case descriptively in the `name` field.
- Keep the table definition and the loop body in the same function.
---
## Step 6 – What to Test
**Test observable behavior, not implementation details.**
Ask: *"If I enter X and Y, do I get Z?"*
Not: *"Does the method call class A, then class B, then return A+B?"*
- Test the public contract (inputs → outputs / errors).
- Do not assert that specific private methods were called unless it is essential.
- Do not mirror the internal structure of production code in your tests – tests that are too tightly coupled to implementation break every time the internals change.
---
## Step 7 – Error Assertions
- Check `err != nil` – do not assert on the exact error message string.
- Error messages are for humans; they change. The presence or absence of an error is the contract.
```go
// CORRECT
assert.Error(t, err)
assert.NoError(t, err)

// FRAGILE: error message may change
if err.Error() != "user not found" { ... }
```
- Use `errors.Is` or `errors.As` when the *type* or *sentinel value* of the error matters.
---
## Step 8 – Mocking
Mock dependencies at layer boundaries so each unit can be tested in isolation (solitary tests).
**Interface-based mocking pattern:**
```go
// Define the interface in production code.
type UserStore interface {
	FindByID(ctx context.Context, id string) (*User, error)
}

// Implement a mock (manually or with testify/mock).
type MockUserStore struct {
	mock.Mock
}

func (m *MockUserStore) FindByID(ctx context.Context, id string) (*User, error) {
	args := m.Called(ctx, id)
	// Comma-ok type assertion so a nil return value doesn't panic.
	user, _ := args.Get(0).(*User)
	return user, args.Error(1)
}

// In the test, inject the mock.
func TestUserService_Get(t *testing.T) {
	store := new(MockUserStore)
	svc := NewUserService(store)

	t.Run("returns user when found", func(t *testing.T) {
		// given
		store.On("FindByID", mock.Anything, "123").Return(&User{ID: "123"}, nil).Once()
		// when
		user, err := svc.Get(context.Background(), "123")
		// then
		assert.NoError(t, err)
		assert.Equal(t, "123", user.ID)
	})
}
```
**Solitary vs Sociable tests:**
- **Solitary**: mock all dependencies – fast, deterministic, pure unit test.
- **Sociable**: use real lower-layer implementations – appropriate when the real interaction (file I/O, git, parsing) is the point being tested and mocking it would require reimplementing the logic.
Use `.Once()` on mock expectations so each expected call is matched exactly once, and call `store.AssertExpectations(t)` at the end of the test so unmet expectations fail it.
---
## Step 9 – Cleanup with `t.Cleanup`
Prefer `t.Cleanup` over `defer` for resource teardown in tests. (For temporary directories specifically, the built-in `t.TempDir()` creates one and removes it automatically.)
```go
func TestWithTempDir(t *testing.T) {
	dir := createTempDir(t)
	t.Cleanup(func() {
		os.RemoveAll(dir)
	})
	// test body...
}
```
- `t.Cleanup` functions run after the test and all of its subtests complete, so a helper can register teardown that outlives the helper's own return; a `defer` inside a helper fires as soon as the helper returns.
- Register cleanup immediately after acquiring the resource, before any call that could fail the test.
- Prefer helper functions that register their own cleanup:
```go
func tempDir(t *testing.T) string {
	t.Helper()
	dir, err := os.MkdirTemp("", "test-*")
	if err != nil {
		t.Fatal(err)
	}
	t.Cleanup(func() { os.RemoveAll(dir) })
	return dir
}
```
---
## Step 10 – Before / After with `TestMain`
Use `TestMain` (not `init`) when you need package-level setup or teardown:
```go
func TestMain(m *testing.M) {
	// before: set up shared resources (DB connection, test server, etc.)
	setup()
	code := m.Run() // run all tests in the package
	// after: teardown
	teardown()
	os.Exit(code)
}
```
- `TestMain` is declared once per package.
- Since Go 1.15, returning from `TestMain` exits with the status of `m.Run()` automatically; call `os.Exit(code)` explicitly (as above) when teardown must run first, and note that `os.Exit` skips deferred calls.
- Do not use `init()` for test setup – it cannot handle teardown and runs before `TestMain`.
---
## Step 11 – Test Helpers
Mark helper functions with `t.Helper()` so failure output points to the call site, not inside the helper:
```go
func assertEqual(t *testing.T, got, want int) {
	t.Helper()
	if got != want {
		t.Errorf("got %d; want %d", got, want)
	}
}
```
---
## Step 12 – Fuzz Testing
Use fuzz tests to find edge cases the developer would not think to write manually:
```go
func FuzzReverse(f *testing.F) {
	// Seed corpus: known interesting inputs.
	f.Add("hello")
	f.Add("")
	f.Add("日本語")
	f.Fuzz(func(t *testing.T, input string) {
		// Property that must always hold: reversing preserves rune count.
		result := Reverse(input)
		if len([]rune(result)) != len([]rune(input)) {
			t.Errorf("rune count mismatch after reverse")
		}
	})
}
```
Run with: `go test -fuzz=FuzzReverse`
Run seed corpus only (CI): `go test` (no `-fuzz` flag)
Good candidates for fuzzing: string parsers, encoders/decoders, input validators, anything handling untrusted data.
---
## Step 13 – Avoiding Flaky Tests
Tests must be **deterministic**. Common flakiness causes and fixes:
| Cause | Fix |
|---|---|
| Time-dependent logic | Inject a clock interface; use fixed timestamps in tests |
| Goroutine races | Use `sync.WaitGroup` or channels; run with `-race` flag |
| Test order dependency | Never share mutable global state between tests |
| Async operations | Use polling helpers with timeout, not `time.Sleep` |
| Random input | Seed random with a fixed value, or use fuzz testing properly |
Always run tests with the race detector during development: `go test -race ./...`
---
## Step 14 – Code Coverage
```bash
# Coverage for each package
go test -cover ./...
# Full coverage profile across all packages
go test -coverpkg=./... -coverprofile=cover.out ./...
# Print total coverage percentage
go tool cover -func=cover.out | tail -1
# Write an interactive HTML report
go tool cover -html=cover.out -o cover.html
```
- Aim for coverage of all meaningful branches, not 100% line coverage.
- Coverage does not guarantee correctness – a test without assertions contributes coverage but catches nothing.
- Green (covered) vs red (uncovered) in the HTML report highlights untested branches at a glance.
---
## Step 15 – CI Integration
Run tests in CI on every pull request:
```yaml
- name: Run tests
run: go test -race -cover ./...
```
- Always include `-race` in CI to catch data races.
- Fail the build if tests fail – never merge with a red test suite.
- Run fuzz seed corpus in CI (`go test ./...`) even if full fuzzing runs separately.
---
## Quick Anti-Pattern Checklist
| Anti-Pattern | Correct Approach |
|---|---|
| `TestDoThingSuccess` and `TestDoThingFailure` as separate functions | One `TestDoThing` with `t.Run` subtests |
| Asserting `err.Error() == "some message"` | `assert.Error(t)` or `errors.Is` |
| `defer cleanup()` in tests | `t.Cleanup(func() { cleanup() })` |
| `init()` for test setup | `TestMain(m *testing.M)` |
| `time.Sleep` to wait for async work | Channel signal or polling with timeout |
| No `-race` flag | Always use `go test -race ./...` |
| Mocking every single thing including what you're testing | Mock dependencies, test real logic |
| Table test loop without `t.Run` | Always wrap each case in `t.Run(tc.name, ...)` |
| Missing `t.Helper()` in helper functions | Add `t.Helper()` as first line |
| Tests that pass with no assertions | Every test must assert something meaningful |
submitted March 31, 2026