This directory contains the testing infrastructure for the CodeMie CLI project.
## Directory Structure

```
tests/
├── integration/          # Integration tests (CLI commands)
│   ├── cli-commands.test.ts
│   └── agent-shortcuts.test.ts
├── fixtures/             # Test data and fixtures
│   └── configs/          # Sample configuration files
├── helpers/              # Reusable test utilities
│   ├── cli-runner.ts     # CLI command execution helper
│   ├── temp-workspace.ts # Temporary workspace management
│   └── index.ts          # Exports
└── README.md             # This file

src/
└── **/__tests__/         # Unit tests (co-located with source)
    ├── env/__tests__/    # Configuration tests
    ├── agents/__tests__/ # Agent registry tests
    └── utils/__tests__/  # Utility tests
```

## Integration Tests

Integration tests verify the system works correctly by executing CLI commands directly. These tests:
- Test real user interactions
- Are less brittle (don't break on refactoring)
- Provide high confidence in functionality
- Are easy to maintain
Example:
```typescript
it('should list all available agents', () => {
  const output = cli.run('list');
  expect(output).toContain('claude');
  expect(output).toContain('gemini');
});
```

## Unit Tests

Unit tests focus on critical business logic and complex algorithms. These tests:
- Verify configuration loading and validation
- Test type guards and parsers
- Check agent registry functionality
- Validate utility functions
Example:
```typescript
it('should identify multi-provider config', () => {
  const config = {
    version: 2,
    activeProfile: 'default',
    profiles: { default: {} },
  };
  expect(isMultiProviderConfig(config)).toBe(true);
});
```

## Running Tests

```bash
# Run all tests
npm test

# Run all tests once (no watch mode)
npm run test:run

# Run only unit tests
npm run test:unit

# Run only integration tests
npm run test:integration

# Run tests with coverage
npm run test:coverage

# Run tests in watch mode
npm run test:watch

# Run tests with UI
npm run test:ui
```

## Test Helpers

### CLI Runner

Helper for executing CLI commands in tests:
```typescript
import { createCLIRunner } from '../helpers';

const cli = createCLIRunner();

// Run command and get output
const output = cli.run('doctor');

// Run command silently (no throw on error)
const result = cli.runSilent('invalid-command');
console.log(result.exitCode); // 1
console.log(result.error);    // Error message

// Check if command succeeds
if (cli.succeeds('version')) {
  console.log('Command succeeded');
}
```

### Temp Workspace

Helper for creating isolated test environments:
```typescript
import { createTempWorkspace } from '../helpers';

const workspace = createTempWorkspace();

// Write files
workspace.writeFile('test.txt', 'content');
workspace.writeJSON('data.json', { key: 'value' });
workspace.writeConfig({ provider: 'openai' });

// Read files
const content = workspace.readFile('test.txt');
const data = workspace.readJSON('data.json');

// Create directories
workspace.mkdir('src/utils');

// Clean up
workspace.cleanup();
```

## Writing Tests

### Adding an Integration Test

- Create a new file in `tests/integration/`
- Import `createCLIRunner` from helpers
- Execute CLI commands and verify output
- Keep tests simple: test behavior, not implementation
Example:

```typescript
import { describe, it, expect } from 'vitest';
import { createCLIRunner } from '../helpers';

const cli = createCLIRunner();

describe('My Feature', () => {
  it('should do something', () => {
    const output = cli.run('my-command');
    expect(output).toContain('expected text');
  });
});
```

### Adding a Unit Test

- Create a `__tests__` directory next to the source code
- Test critical logic, edge cases, and error handling
- Use `TempWorkspace` for file system operations
- Mock external dependencies if needed
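On the last point, one lightweight alternative to module-level mocking is to inject the external dependency as a parameter, so a test can substitute a stub. A minimal sketch (the `formatAgentList` and `ListAgents` names are hypothetical, not part of this repo):

```typescript
// A function that receives its external dependency as a parameter,
// so tests can pass a stub instead of the real implementation.
type ListAgents = () => string[];

function formatAgentList(listAgents: ListAgents): string {
  return listAgents().map((name) => `- ${name}`).join('\n');
}

// In a test, pass a stub instead of the real agent discovery:
const output = formatAgentList(() => ['claude', 'gemini']);
console.log(output); // "- claude\n- gemini"
```

This keeps unit tests free of module-mocking machinery and works the same under any test runner.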
Example:

```typescript
import { describe, it, expect } from 'vitest';
import { myFunction } from '../my-module';

describe('myFunction', () => {
  it('should handle edge case', () => {
    expect(myFunction(null)).toBeUndefined();
  });
});
```

## Best Practices

### Do

- Test behavior, not implementation details
- Use real CLI commands in integration tests
- Keep tests simple and readable
- Test happy paths and critical error cases
- Use descriptive test names
- Clean up resources (temp files, workspaces)
### Don't

- Mock everything in integration tests
- Test internal implementation details
- Write tests for every edge case
- Make tests dependent on each other
- Leave temporary files after tests
- Use hardcoded paths or timestamps
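On the last two points, when a test needs the file system outside the provided helpers, a unique temp directory avoids hardcoded paths and collisions between parallel runs. A sketch using only Node built-ins (the `codemie-test-` prefix is illustrative):

```typescript
import { mkdtempSync, writeFileSync, rmSync, existsSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// mkdtempSync appends a random suffix, so parallel test runs never collide
const dir = mkdtempSync(join(tmpdir(), 'codemie-test-'));

try {
  writeFileSync(join(dir, 'config.json'), JSON.stringify({ provider: 'openai' }));
  // ... exercise the code under test against `dir` ...
} finally {
  // Always remove the workspace, even if the test body throws
  rmSync(dir, { recursive: true, force: true });
}
```

The `try`/`finally` shape gives the same guarantee as an `afterEach` cleanup hook when a helper workspace is not available.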
## Coverage Targets

- Overall: 60-70% (pragmatic coverage)
- Configuration System: 90%+
- Type Guards: 100%
- Agent Registry: 80%+
- CLI Commands: Basic execution verified
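If the project wants these floors enforced rather than just tracked, Vitest can fail a run that drops below configured thresholds. A hedged `vitest.config.ts` sketch (the numbers are illustrative, and the `coverage.thresholds` option assumes a recent Vitest version):

```typescript
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      // Illustrative floors based on the targets above
      thresholds: {
        lines: 60,
        functions: 60,
        branches: 60,
        statements: 60,
      },
    },
  },
});
```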
## Continuous Integration

Tests run automatically on:
- Pull requests
- Pushes to main/debug_mode branches
- Manual workflow triggers
The CI pipeline:
- Installs dependencies
- Runs linting
- Builds the project
- Runs unit tests
- Runs integration tests
- Generates coverage reports
## Troubleshooting

If tests time out, increase the timeout in `vitest.config.ts`:

```typescript
testTimeout: 30000, // 30 seconds
```

If tests fail unexpectedly:

- Check the Node.js version (must be >=20.0.0)
- Ensure a clean state (no leftover config files)
- Run `npm run build` before testing
Make sure to build first:

```bash
npm run build
npm test
```

Always use `afterEach` or `finally` blocks to clean up:

```typescript
afterEach(() => {
  workspace.cleanup();
});
```

## Contributing

When adding new features:
- Write an integration test first (if it's a CLI command)
- Add unit tests for complex logic
- Ensure tests pass: `npm run test:run`
- Check coverage: `npm run test:coverage`
- Update this README if needed