This document provides a comprehensive guide to testing in the pvetui project. We use Go's built-in testing framework along with the testify library for assertions and mocking.
- Overview
- Running Tests
- Test Structure
- Writing Tests
- Test Utilities
- Coverage
- Best Practices
- Continuous Integration
## Overview

Our testing strategy follows Go best practices and includes:
- Unit Tests: Test individual functions and methods in isolation
- Integration Tests: Test interactions between components
- Table-Driven Tests: Comprehensive test cases using Go's table-driven pattern
- Mocking: Mock external dependencies using testify/mock
- Test Coverage: Measure and maintain good test coverage
Tooling:
- Go Testing: the built-in `testing` package
- Testify: assertions, mocks, and test suites (`github.com/stretchr/testify`)
  - `assert`: rich assertions
  - `require`: assertions that stop test execution on failure
  - `mock`: mocking framework
  - `suite`: test suites for setup/teardown
## Running Tests

```bash
# Run all tests
make test

# Run tests with coverage
make test-coverage

# Run tests for a specific package
go test ./internal/config

# Run tests with verbose output
go test -v ./...

# Run a specific test
go test -run TestConfig_Validate ./internal/config

# Run tests in parallel
go test -parallel 4 ./...
```

```bash
# Generate coverage report
make test-coverage

# View coverage in browser (after running test-coverage)
open coverage.html

# Get coverage percentage only
go test -cover ./...
```

## Test Structure

Tests are organized alongside the code they test:
```
internal/
├── config/
│   ├── config.go
│   └── config_test.go
├── cache/
│   ├── cache.go
│   └── cache_test.go
└── adapters/
    ├── adapters.go
    └── adapters_test.go
pkg/
├── api/
│   ├── utils.go
│   ├── utils_test.go
│   └── testutils/
│       └── mocks.go
```
Naming conventions:
- Test files end with `_test.go`
- Test functions start with `Test`
- Benchmark functions start with `Benchmark`
- Example functions start with `Example`
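Benchmark functions normally run under `go test -bench`, but the same pattern can be driven from a standalone program via `testing.Benchmark`, which is handy for quick experiments. A minimal sketch (the `joinIDs` helper is hypothetical, not part of pvetui):

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// joinIDs is a hypothetical helper, used here only as something to measure.
func joinIDs(ids []string) string {
	return strings.Join(ids, ",")
}

// BenchmarkJoinIDs shows the standard benchmark shape: the body runs b.N times.
func BenchmarkJoinIDs(b *testing.B) {
	ids := []string{"100", "101", "102"}
	for i := 0; i < b.N; i++ {
		joinIDs(ids)
	}
}

func main() {
	// testing.Benchmark runs the benchmark outside of `go test`.
	result := testing.Benchmark(BenchmarkJoinIDs)
	fmt.Printf("ran %d iterations\n", result.N)
}
```

Under `go test`, the same function would be picked up automatically and run with `go test -bench=JoinIDs ./...`.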
## Writing Tests

```go
func TestFunctionName(t *testing.T) {
    // Arrange
    input := "test input"
    expected := "expected output"

    // Act
    result := FunctionToTest(input)

    // Assert
    assert.Equal(t, expected, result)
}
```

Use table-driven tests for comprehensive coverage:
```go
func TestConfig_Validate(t *testing.T) {
    tests := []struct {
        name        string
        config      *Config
        expectError bool
        errorMsg    string
    }{
        {
            name: "valid config",
            config: &Config{
                Addr:     "https://example.com",
                User:     "user",
                Password: "pass",
            },
            expectError: false,
        },
        {
            name: "missing address",
            config: &Config{
                User:     "user",
                Password: "pass",
            },
            expectError: true,
            errorMsg:    "address required",
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            err := tt.config.Validate()
            if tt.expectError {
                assert.Error(t, err)
                assert.Contains(t, err.Error(), tt.errorMsg)
            } else {
                assert.NoError(t, err)
            }
        })
    }
}
```

Use mocks to isolate units under test:
```go
func TestClientWithMocks(t *testing.T) {
    // Create mocks
    mockLogger := &testutils.MockLogger{}
    mockCache := &testutils.MockCache{}
    mockConfig := &testutils.MockConfig{}

    // Set expectations
    mockConfig.On("GetAddr").Return("https://test.com")
    mockCache.On("Get", "key", mock.Anything).Return(false, nil)

    // Test your code
    // ... test implementation

    // Verify expectations
    mockConfig.AssertExpectations(t)
    mockCache.AssertExpectations(t)
}
```

Use `t.TempDir()` for tests that need file system operations:
```go
func TestFileOperations(t *testing.T) {
    tempDir := t.TempDir() // Automatically cleaned up
    filePath := filepath.Join(tempDir, "test.txt")

    err := os.WriteFile(filePath, []byte("test"), 0644)
    require.NoError(t, err)

    // Test your file operations
}
```

Always test both success and error paths:
```go
func TestParseVMID(t *testing.T) {
    tests := []struct {
        name        string
        input       interface{}
        expected    int
        expectError bool
    }{
        {
            name:        "valid integer",
            input:       123,
            expected:    123,
            expectError: false,
        },
        {
            name:        "invalid string",
            input:       "not-a-number",
            expected:    0,
            expectError: true,
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            result, err := ParseVMID(tt.input)
            if tt.expectError {
                assert.Error(t, err)
            } else {
                assert.NoError(t, err)
                assert.Equal(t, tt.expected, result)
            }
        })
    }
}
```

## Test Utilities

Located in `pkg/api/testutils/`:
Mock logger:

```go
mockLogger := &testutils.MockLogger{}
mockLogger.On("Debug", mock.Anything, mock.Anything).Return()
```

Mock cache:

```go
mockCache := &testutils.MockCache{}
mockCache.On("Get", "key", mock.Anything).Return(true, nil)
mockCache.On("Set", "key", "value", time.Hour).Return(nil)
```

Mock config:

```go
mockConfig := &testutils.MockConfig{}
mockConfig.On("GetAddr").Return("https://test.com")
```

Test configs:

```go
// Simple test config with sensible defaults
config := testutils.NewTestConfig()

// Test config with token auth
tokenConfig := testutils.NewTestConfigWithToken()
```

Test logger:

```go
logger := testutils.NewTestLogger()

// Use logger in tests
logger.Debug("test message")

// Check logged messages
assert.Contains(t, logger.DebugMessages[0], "test message")
```

In-memory cache:

```go
cache := testutils.NewInMemoryCache()
cache.Set("key", "value", time.Hour)
```

When creating new test utilities:
- Place them in the appropriate `testutils` package
- Follow naming conventions (`Mock*`, `Test*`, `New*`)
- Implement relevant interfaces
- Provide sensible defaults
- Document usage with examples
## Coverage

Current coverage by package:
- Config Package: ~66% coverage
- Cache Package: ~32% coverage
- Adapters Package: ~95% coverage
- API Utils: ~4% coverage (mostly utility functions)
Coverage goals:
- Aim for 80%+ coverage on core business logic
- 100% coverage on critical paths (authentication, configuration)
- Focus on meaningful coverage over raw percentage
To improve coverage:
- Identify uncovered lines: `go tool cover -html=coverage.out`
- Add tests for uncovered branches
- Test error paths and edge cases
- Add integration tests for complex workflows
## Best Practices

- One test file per source file: `config.go` → `config_test.go`
- Group related tests: use subtests with `t.Run()`
- Clear test names: describe what is being tested
- Arrange-Act-Assert: structure tests clearly
- Use table-driven tests for multiple scenarios
- Test edge cases: empty strings, nil values, boundary conditions
- Test error conditions: invalid inputs, network failures, etc.
- Use realistic test data, representative of actual usage
- Use appropriate assertions:
  - `assert.Equal()` for value comparison
  - `assert.NoError()` / `assert.Error()` for error checking
  - `assert.Contains()` for substring/element checking
  - `require.*()` when the test should stop on failure
- Provide meaningful messages:
  `assert.Equal(t, expected, actual, "Config validation should pass for valid input")`
- Mock external dependencies: HTTP clients, databases, file systems
- Don't mock value objects: simple structs, data containers
- Verify mock expectations with `AssertExpectations()`
- Reset mocks between tests to avoid test pollution
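When a full testify mock is overkill, the same isolation can come from a hand-rolled fake behind an interface. A sketch under assumed names (`NodeClient`, `Describe`, and the fake are hypothetical, not pvetui APIs):

```go
package main

import (
	"errors"
	"fmt"
)

// NodeClient is a hypothetical interface over an external HTTP API.
type NodeClient interface {
	NodeStatus(name string) (string, error)
}

// fakeClient is a hand-rolled stub: it records calls and returns canned data.
type fakeClient struct {
	calls    []string
	statuses map[string]string
}

func (f *fakeClient) NodeStatus(name string) (string, error) {
	f.calls = append(f.calls, name)
	s, ok := f.statuses[name]
	if !ok {
		return "", errors.New("unknown node")
	}
	return s, nil
}

// Describe is the unit under test: it depends only on the interface,
// so tests can swap in the fake for the real HTTP client.
func Describe(c NodeClient, name string) string {
	s, err := c.NodeStatus(name)
	if err != nil {
		return name + ": unreachable"
	}
	return name + ": " + s
}

func main() {
	fake := &fakeClient{statuses: map[string]string{"pve1": "online"}}
	fmt.Println(Describe(fake, "pve1")) // prints "pve1: online"
	fmt.Println(Describe(fake, "pve2")) // prints "pve2: unreachable"
	fmt.Println(len(fake.calls))        // prints "2"
}
```

The recorded `calls` slice plays the role of `AssertExpectations()`: the test can check exactly which calls were made.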
- Use `t.Parallel()` for independent tests
- Avoid expensive operations in test setup
- Cache test fixtures when appropriate
- Use benchmarks for performance-critical code
- Use `t.TempDir()` for temporary files
- Defer cleanup operations
- Reset global state between tests
- Close resources properly
## Continuous Integration

Tests run automatically on:
- Pull requests
- Pushes to main/develop branches
- Scheduled runs (nightly)
Run tests before committing:

```bash
# Quick test run
make test

# Full test with coverage
make test-coverage

# Lint and format
make lint
make format
```

Requirements:
- All tests must pass
- Coverage should not decrease
- New features require tests
- Bug fixes should include regression tests
## Example Tests

Loading configuration from a file:

```go
func TestConfig_LoadFromFile(t *testing.T) {
    tempDir := t.TempDir()
    configFile := filepath.Join(tempDir, "config.yml")

    configContent := `
addr: "https://test.example.com:8006"
user: "testuser"
password: "testpass"
`
    err := os.WriteFile(configFile, []byte(configContent), 0644)
    require.NoError(t, err)

    config := &Config{}
    err = config.MergeWithFile(configFile)

    assert.NoError(t, err)
    assert.Equal(t, "https://test.example.com:8006", config.Addr)
    assert.Equal(t, "testuser", config.User)
    assert.Equal(t, "testpass", config.Password)
}
```

Cache round-trip:

```go
func TestCache_SetAndGet(t *testing.T) {
    cache := NewInMemoryCache()
    key := "test-key"
    value := "test-value"

    // Test Set
    err := cache.Set(key, value, time.Hour)
    assert.NoError(t, err)

    // Test Get
    var result string
    found, err := cache.Get(key, &result)
    assert.NoError(t, err)
    assert.True(t, found)
    assert.Equal(t, value, result)
}
```

Reading configuration from the environment:

```go
func TestConfig_FromEnvironment(t *testing.T) {
    // Save original environment
    originalAddr := os.Getenv("PVETUI_ADDR")
    defer os.Setenv("PVETUI_ADDR", originalAddr)

    // Set test environment
    os.Setenv("PVETUI_ADDR", "https://test.com")

    config := NewConfig()
    assert.Equal(t, "https://test.com", config.Addr)
}
```

On Go 1.17+, `t.Setenv("PVETUI_ADDR", "https://test.com")` achieves the same save-and-restore automatically.

## Troubleshooting
Tests fail in CI but pass locally:
- Check for race conditions
- Verify environment differences
- Look for hardcoded paths/values

Flaky tests:
- Add proper synchronization
- Increase timeouts for timing-sensitive tests
- Use deterministic test data

Low coverage:
- Check for untested error paths
- Add tests for edge cases
- Test private functions through public interfaces

Slow tests:
- Use `t.Parallel()` for independent tests
- Mock expensive operations
- Optimize test setup/teardown

Getting help:
- Check existing tests for patterns
- Review Go testing documentation
- Ask team members for guidance
- Use `go test -v` for detailed output
This testing guide should help you understand and contribute to the test suite. Remember: good tests are an investment in code quality and developer productivity!