Workflow ID: _bmad/bmm/testarch/automate
Version: 4.0 (BMad v6)
Expands test automation coverage by generating comprehensive test suites at appropriate levels (E2E, API, Component, Unit) with supporting infrastructure. This workflow operates in dual mode: BMad-Integrated (driven by a story and its artifacts) or Standalone (driven by `{target_feature}` / `{target_files}`).
Core Principle: Generate prioritized, deterministic tests that avoid duplicate coverage and follow testing best practices.
Flexible: This workflow can run with minimal prerequisites. Only HALT if framework is completely missing.
Prerequisite: a scaffolded test framework (run the framework workflow if missing). If the framework is missing: HALT with message: "Framework scaffolding required. Run `bmad tea *framework` first."
Check if BMad artifacts are available:
- `{story_file}` variable is set → BMad-Integrated Mode
- `{target_feature}` or `{target_files}` set → Standalone Mode
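A minimal sketch of this mode resolution (the function and variable shapes are illustrative, not part of the workflow):

```typescript
// Hypothetical sketch of the mode resolution described above.
type Mode = 'bmad-integrated' | 'standalone' | 'auto-discover';

function resolveMode(vars: {
  story_file?: string;
  target_feature?: string;
  target_files?: string[];
}): Mode {
  if (vars.story_file) return 'bmad-integrated';
  if (vars.target_feature || vars.target_files?.length) return 'standalone';
  return 'auto-discover'; // fall back to scanning {source_dir} for features
}
```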
BMad-Integrated Mode:
- Load `{story_file}`
- Load the tech spec if `{use_tech_spec}` is true
- Load the test design if `{use_test_design}` is true
- Load the PRD if `{use_prd}` is true

Standalone Mode:
Load Framework Configuration
- Resolve `{test_dir}` from the framework configuration and confirm `{test_dir}` exists

Analyze Existing Test Coverage
If {analyze_coverage} is true:
- Scan `{test_dir}` for existing test files
- Read `{config_source}` and check `config.tea_use_playwright_utils`
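A minimal sketch of the coverage scan (assumes Node 18.17+ for recursive `readdirSync`; the helper is illustrative):

```typescript
// Hypothetical sketch: inventory existing spec files so coverage gaps can be identified.
import { readdirSync } from 'node:fs';
import { join } from 'node:path';

function listExistingSpecs(testDir: string): string[] {
  return readdirSync(testDir, { recursive: true, encoding: 'utf8' })
    .filter((entry) => /\.(spec|test)\.(ts|tsx)$/.test(entry))
    .map((entry) => join(testDir, entry));
}
```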
Critical: Consult {project-root}/_bmad/bmm/testarch/tea-index.csv to load:
Core Testing Patterns (Always load):
- `test-levels-framework.md` - Test level selection (E2E vs API vs Component vs Unit with decision matrix, 467 lines, 4 examples)
- `test-priorities-matrix.md` - Priority classification (P0-P3 with automated scoring, risk mapping, 389 lines, 2 examples)
- `data-factories.md` - Factory patterns with faker (overrides, nested factories, API seeding, 498 lines, 5 examples)
- `selective-testing.md` - Targeted test execution strategies (tag-based, spec filters, diff-based, promotion rules, 727 lines, 4 examples)
- `ci-burn-in.md` - Flaky test detection patterns (10-iteration burn-in, sharding, selective execution, 678 lines, 4 examples)
- `test-quality.md` - Test design principles (deterministic, isolated, explicit assertions, length/time limits, 658 lines, 5 examples)

If `config.tea_use_playwright_utils: true` (Playwright Utils Integration - All Utilities):
- `overview.md` - Playwright utils installation, design principles, fixture patterns
- `api-request.md` - Typed HTTP client with schema validation
- `network-recorder.md` - HAR record/playback for offline testing
- `auth-session.md` - Token persistence and multi-user support
- `intercept-network-call.md` - Network spy/stub with automatic JSON parsing
- `recurse.md` - Cypress-style polling for async conditions
- `log.md` - Playwright report-integrated logging
- `file-utils.md` - CSV/XLSX/PDF/ZIP reading and validation
- `burn-in.md` - Smart test selection (relevant for CI test generation)
- `network-error-monitor.md` - Automatic HTTP error detection
- `fixtures-composition.md` - mergeTests composition patterns

If `config.tea_use_playwright_utils: false` (Traditional Patterns):
- `fixture-architecture.md` - Test fixture patterns (pure function → fixture → mergeTests, auto-cleanup, 406 lines, 5 examples)
- `network-first.md` - Route interception patterns (intercept before navigate, HAR capture, deterministic waiting, 489 lines, 5 examples)

Healing Knowledge (If {auto_heal_failures} is true):
- `test-healing-patterns.md` - Common failure patterns and automated fixes (stale selectors, race conditions, dynamic data, network errors, hard waits, 648 lines, 5 examples)
- `selector-resilience.md` - Selector debugging and refactoring guide (data-testid > ARIA > text > CSS hierarchy, anti-patterns, 541 lines, 4 examples)
- `timing-debugging.md` - Race condition identification and fixes (network-first, deterministic waiting, async debugging, 370 lines, 3 examples)

BMad-Integrated Mode (story available):
- Derive automation targets from the story's acceptance criteria (skip scenarios already covered by the `*atdd` workflow)

Standalone Mode (no story):
- `{target_feature}` specified: Analyze that specific feature
- `{target_files}` specified: Analyze those specific files
- `{auto_discover_features}` is true: Scan `{source_dir}` for features

Knowledge Base Reference: test-levels-framework.md
For each feature or acceptance criterion, determine appropriate test level:
E2E (End-to-End): critical user journeys exercised through the full stack.
API (Integration): service contracts, status codes, and error handling at the HTTP layer.
Component: UI behavior in isolation (rendering, interaction states).
Unit: pure logic and functions with no I/O.
Critical principle: Don't test the same behavior at multiple levels unless necessary.
Example: verify the login happy path once at E2E; cover credential variations (invalid, missing fields) at the API level; cover form state (enabled/disabled) at the component level; cover validation logic at the unit level.
Knowledge Base Reference: test-priorities-matrix.md
P0 (Critical - Every commit):
P1 (High - PR to main):
P2 (Medium - Nightly):
P3 (Low - On-demand):
Priority Variables:
- `{include_p0}` - Always include (default: true)
- `{include_p1}` - High priority (default: true)
- `{include_p2}` - Medium priority (default: true)
- `{include_p3}` - Low priority (default: false)

Document what will be tested at each level with priorities (a sketch for turning these flags into a grep filter follows the coverage plan below):
## Test Coverage Plan
### E2E Tests (P0)
- User login with valid credentials → Dashboard loads
- User logout → Redirects to login page
### API Tests (P1)
- POST /auth/login - valid credentials → 200 + JWT token
- POST /auth/login - invalid credentials → 401 + error message
- POST /auth/login - missing fields → 400 + validation errors
### Component Tests (P1)
- LoginForm - empty fields → submit button disabled
- LoginForm - valid input → submit button enabled
### Unit Tests (P2)
- validateEmail() - valid email → returns true
- validateEmail() - malformed email → returns false
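Where selective execution is needed, the include flags can be folded into one grep pattern; a minimal sketch (assumes the flags resolve to booleans):

```typescript
// Hypothetical sketch: build a --grep pattern from the {include_p*} flags.
const include = { P0: true, P1: true, P2: true, P3: false }; // defaults from above
const pattern = Object.entries(include)
  .filter(([, enabled]) => enabled)
  .map(([tag]) => `\\[${tag}\\]`) // matches [P0], [P1], ... in test titles
  .join('|');
// e.g. pass to Playwright: npx playwright test --grep "${pattern}"
```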
Knowledge Base Reference: fixture-architecture.md
Check existing fixtures in tests/support/fixtures/:
- Follow the `test.extend()` pattern

Common fixtures to create/enhance:
Example fixture:
```typescript
// tests/support/fixtures/auth.fixture.ts
import { test as base } from '@playwright/test';
import { createUser, deleteUser } from '../factories/user.factory';

export const test = base.extend({
  authenticatedUser: async ({ page }, use) => {
    // Setup: Create and authenticate user
    const user = await createUser();
    await page.goto('/login');
    await page.fill('[data-testid="email"]', user.email);
    await page.fill('[data-testid="password"]', user.password);
    await page.click('[data-testid="login-button"]');
    await page.waitForURL('/dashboard');

    // Provide to test
    await use(user);

    // Cleanup: Delete user automatically
    await deleteUser(user.id);
  },
});
```
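Usage in a spec might look like this (the selector and asserted field are illustrative):

```typescript
// Hypothetical usage: the fixture provides a logged-in user and cleans up afterwards.
import { expect } from '@playwright/test';
import { test } from '../support/fixtures/auth.fixture';

test('[P1] shows the signed-in user name', async ({ page, authenticatedUser }) => {
  await expect(page.locator('[data-testid="user-name"]')).toHaveText(authenticatedUser.name);
});
```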
Knowledge Base Reference: data-factories.md
Check existing factories in tests/support/factories/:
- Use `@faker-js/faker` for all random data (no hardcoded values)

Common factories to create/enhance:
Example factory:
```typescript
// tests/support/factories/user.factory.ts
import { faker } from '@faker-js/faker';

export const createUser = (overrides = {}) => ({
  id: faker.number.int(),
  email: faker.internet.email(),
  password: faker.internet.password(),
  name: faker.person.fullName(),
  role: 'user',
  createdAt: faker.date.recent().toISOString(),
  ...overrides,
});

export const createUsers = (count: number) => Array.from({ length: count }, () => createUser());

// API helper for cleanup
export const deleteUser = async (userId: number) => {
  await fetch(`/api/users/${userId}`, { method: 'DELETE' });
};
```
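Usage: tests override only the fields they care about; everything else stays random:

```typescript
// Hypothetical usage of the factory above (import path per the structure in this doc).
import { createUser, createUsers } from './user.factory';

const admin = createUser({ role: 'admin' }); // targeted override
const batch = createUsers(5); // five unique random users
```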
If {update_helpers} is true:
Check tests/support/helpers/ for common utilities:
Example helper:
```typescript
// tests/support/helpers/wait-for.ts
export const waitFor = async (condition: () => Promise<boolean>, timeout = 5000, interval = 100): Promise<void> => {
  const startTime = Date.now();
  while (Date.now() - startTime < timeout) {
    if (await condition()) return;
    await new Promise((resolve) => setTimeout(resolve, interval));
  }
  throw new Error(`Condition not met within ${timeout}ms`);
};
```
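Usage, polling a hypothetical health endpoint until it responds OK:

```typescript
// Hypothetical usage of the waitFor helper above.
import { waitFor } from './wait-for';

await waitFor(async () => (await fetch('/api/health')).ok, 10_000);
```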
Create Test File Structure
```
tests/
├── e2e/
│   └── {feature-name}.spec.ts        # E2E tests (P0-P1)
├── api/
│   └── {feature-name}.api.spec.ts    # API tests (P1-P2)
├── component/
│   └── {ComponentName}.test.tsx      # Component tests (P1-P2)
├── unit/
│   └── {module-name}.test.ts         # Unit tests (P2-P3)
└── support/
    ├── fixtures/                     # Test fixtures
    ├── factories/                    # Data factories
    └── helpers/                      # Utility functions
```
Write E2E Tests (If Applicable)
Follow Given-When-Then format:
```typescript
import { test, expect } from '@playwright/test';

test.describe('User Authentication', () => {
  test('[P0] should login with valid credentials and load dashboard', async ({ page }) => {
    // GIVEN: User is on login page
    await page.goto('/login');

    // WHEN: User submits valid credentials
    await page.fill('[data-testid="email-input"]', 'user@example.com');
    await page.fill('[data-testid="password-input"]', 'Password123!');
    await page.click('[data-testid="login-button"]');

    // THEN: User is redirected to dashboard
    await expect(page).toHaveURL('/dashboard');
    await expect(page.locator('[data-testid="user-name"]')).toBeVisible();
  });

  test('[P1] should display error for invalid credentials', async ({ page }) => {
    // GIVEN: User is on login page
    await page.goto('/login');

    // WHEN: User submits invalid credentials
    await page.fill('[data-testid="email-input"]', 'invalid@example.com');
    await page.fill('[data-testid="password-input"]', 'wrongpassword');
    await page.click('[data-testid="login-button"]');

    // THEN: Error message is displayed
    await expect(page.locator('[data-testid="error-message"]')).toHaveText('Invalid email or password');
  });
});
```
Critical patterns:
- Priority tags [P0], [P1], [P2], [P3] in test name

Write API Tests (If Applicable)
```typescript
import { test, expect } from '@playwright/test';

test.describe('User Authentication API', () => {
  test('[P1] POST /api/auth/login - should return token for valid credentials', async ({ request }) => {
    // GIVEN: Valid user credentials
    const credentials = {
      email: 'user@example.com',
      password: 'Password123!',
    };

    // WHEN: Logging in via API
    const response = await request.post('/api/auth/login', {
      data: credentials,
    });

    // THEN: Returns 200 and JWT token
    expect(response.status()).toBe(200);
    const body = await response.json();
    expect(body).toHaveProperty('token');
    expect(body.token).toMatch(/^[A-Za-z0-9-_]+\.[A-Za-z0-9-_]+\.[A-Za-z0-9-_]+$/); // JWT format
  });

  test('[P1] POST /api/auth/login - should return 401 for invalid credentials', async ({ request }) => {
    // GIVEN: Invalid credentials
    const credentials = {
      email: 'invalid@example.com',
      password: 'wrongpassword',
    };

    // WHEN: Attempting login
    const response = await request.post('/api/auth/login', {
      data: credentials,
    });

    // THEN: Returns 401 with error
    expect(response.status()).toBe(401);
    const body = await response.json();
    expect(body).toMatchObject({
      error: 'Invalid credentials',
    });
  });
});
```
Write Component Tests (If Applicable)
Knowledge Base Reference: component-tdd.md
```tsx
import { test, expect } from '@playwright/experimental-ct-react';
import { LoginForm } from './LoginForm';

test.describe('LoginForm Component', () => {
  test('[P1] should disable submit button when fields are empty', async ({ mount }) => {
    // GIVEN: LoginForm is mounted
    const component = await mount(<LoginForm />);

    // WHEN: Form is initially rendered
    const submitButton = component.locator('button[type="submit"]');

    // THEN: Submit button is disabled
    await expect(submitButton).toBeDisabled();
  });

  test('[P1] should enable submit button when fields are filled', async ({ mount }) => {
    // GIVEN: LoginForm is mounted
    const component = await mount(<LoginForm />);

    // WHEN: User fills in email and password
    await component.locator('[data-testid="email-input"]').fill('user@example.com');
    await component.locator('[data-testid="password-input"]').fill('Password123!');

    // THEN: Submit button is enabled
    const submitButton = component.locator('button[type="submit"]');
    await expect(submitButton).toBeEnabled();
  });
});
```
Write Unit Tests (If Applicable)
```typescript
import { describe, test, expect } from 'vitest';
import { validateEmail } from './validation';

describe('Email Validation', () => {
  test('[P2] should return true for valid email', () => {
    // GIVEN: Valid email address
    const email = 'user@example.com';

    // WHEN: Validating email
    const result = validateEmail(email);

    // THEN: Returns true
    expect(result).toBe(true);
  });

  test('[P2] should return false for malformed email', () => {
    // GIVEN: Malformed email addresses
    const invalidEmails = ['notanemail', '@example.com', 'user@', 'user @example.com'];

    // WHEN/THEN: Each should fail validation
    invalidEmails.forEach((email) => {
      expect(validateEmail(email)).toBe(false);
    });
  });
});
```
Apply Network-First Pattern (E2E tests)
Knowledge Base Reference: network-first.md
Critical pattern to prevent race conditions:
```typescript
test('should load user dashboard after login', async ({ page }) => {
  // CRITICAL: Intercept routes BEFORE navigation
  await page.route('**/api/user', (route) =>
    route.fulfill({
      status: 200,
      body: JSON.stringify({ id: 1, name: 'Test User' }),
    }),
  );

  // NOW navigate
  await page.goto('/dashboard');
  await expect(page.locator('[data-testid="user-name"]')).toHaveText('Test User');
});
```
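network-first.md also covers HAR capture; a minimal replay sketch (the HAR path and selector are illustrative):

```typescript
import { test, expect } from '@playwright/test';

test('replays recorded dashboard traffic', async ({ page }) => {
  // Serve responses from a previously recorded HAR instead of the live backend.
  await page.routeFromHAR('tests/support/har/dashboard.har', { update: false });
  await page.goto('/dashboard');
  await expect(page.locator('[data-testid="user-name"]')).toBeVisible();
});
```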
For every test:
Forbidden patterns:
- `await page.waitForTimeout(2000)`
- `if (await element.isVisible()) { ... }`

Purpose: Automatically validate generated tests and heal common failures before delivery.
Always validate (auto_validate is always true):
Execute the full test suite that was just generated:
```bash
npx playwright test {generated_test_files}
```
Capture results (a sketch follows the pass/fail branches below):
If ALL tests pass:
If tests FAIL:
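A minimal sketch of capturing results programmatically (assumes Playwright's JSON reporter; the `stats` shape may vary across versions):

```typescript
// Hypothetical sketch: run the suite and collect pass/fail counts for the healing loop.
import { execSync } from 'node:child_process';

let raw: string;
try {
  raw = execSync('npx playwright test --reporter=json', { encoding: 'utf8' });
} catch (err: any) {
  raw = err.stdout; // Playwright exits non-zero on failures; the report is still on stdout
}
const { expected = 0, unexpected = 0, flaky = 0 } = JSON.parse(raw).stats ?? {};
console.log(`passing: ${expected}, failing: ${unexpected}, flaky: ${flaky}`);
```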
Iteration limit: 3 attempts per test (constant)
For each failing test:
A. Load Healing Knowledge Fragments
Consult tea-index.csv to load healing patterns:
- `test-healing-patterns.md` - Common failure patterns and fixes
- `selector-resilience.md` - Selector debugging and refactoring
- `timing-debugging.md` - Race condition identification and fixes

B. Identify Failure Pattern
Analyze error message and stack trace to classify failure type:
Stale Selector Failure:
Fix (from `selector-resilience.md`):
- Replace brittle CSS selectors with `page.getByTestId()`
- Disambiguate multiple matches with `filter({ hasText })`

Race Condition Failure:
Fix (from `timing-debugging.md`):
- Replace `waitForTimeout()` with `waitForResponse()`
- Wait on element state explicitly (`waitFor({ state: 'visible' })`)

Dynamic Data Failure:
Fix (from `test-healing-patterns.md`):
- Replace hardcoded values with pattern matching (e.g., `/User \d+/`)

Network Error Failure:
Fix (from `test-healing-patterns.md`):
- Add `page.route()` or `cy.intercept()` for API mocking

Hard Wait Detection:
- Detect `page.waitForTimeout()`, `cy.wait(number)`, `sleep()`

Fix (from `timing-debugging.md`): replace hard waits with deterministic waiting.
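To make these fix categories concrete, a before/after sketch for the stale-selector case (selectors are hypothetical):

```typescript
// Hypothetical before/after for a stale-selector heal, inside a Playwright test.
// Before: brittle CSS class selector that broke when styles changed
await page.click('.btn-primary.submit');

// After: resilient accessors, per the data-testid > ARIA > text > CSS hierarchy
await page.getByTestId('submit-button').click();
// Or disambiguate multiple matches with filter({ hasText })
await page.getByRole('button').filter({ hasText: 'Submit' }).click();
```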
C. MCP Healing Mode (If MCP Tools Available)
If Playwright MCP tools are available in your IDE:
Use MCP tools for interactive healing:
- `playwright_test_debug_test`: Pause on failure for visual inspection
- `browser_snapshot`: Capture visual context at failure point
- `browser_console_messages`: Retrieve console logs for JS errors
- `browser_network_requests`: Analyze network activity
- `browser_generate_locator`: Generate better selectors interactively

Apply MCP-generated fixes to test code.
D. Pattern-Based Healing Mode (Fallback)
If MCP unavailable, use pattern-based analysis:
- `selector-resilience.md`
- `timing-debugging.md`
- `test-healing-patterns.md`

E. Apply Healing Fix
F. Iteration Limit Handling
After 3 failed healing attempts:
Always mark unfixable tests:
- Use `test.fixme()` instead of `test()`
- Add a detailed comment explaining the failure, the attempted fixes, and that manual investigation is needed
```typescript
test.fixme('[P1] should handle complex interaction', async ({ page }) => {
  // FIXME: Test healing failed after 3 attempts
  // Failure: "Locator 'button[data-action="submit"]' resolved to 0 elements"
  // Attempted fixes:
  //   1. Replaced with page.getByTestId('submit-button') - still failing
  //   2. Replaced with page.getByRole('button', { name: 'Submit' }) - still failing
  //   3. Added waitForLoadState('networkidle') - still failing
  // Manual investigation needed: Selector may require application code changes
  // TODO: Review with team, may need data-testid added to button component

  // Original test code...
});
```
Note: Workflow continues even with unfixable tests (marked as test.fixme() for manual review)
Document healing outcomes:
## Test Healing Report
**Auto-Heal Enabled**: {auto_heal_failures}
**Healing Mode**: {use_mcp_healing ? "MCP-assisted" : "Pattern-based"}
**Iterations Allowed**: {max_healing_iterations}
### Validation Results
- **Total tests**: {total_tests}
- **Passing**: {passing_tests}
- **Failing**: {failing_tests}
### Healing Outcomes
**Successfully Healed ({healed_count} tests):**
- `tests/e2e/login.spec.ts:15` - Stale selector (CSS class → data-testid)
- `tests/e2e/checkout.spec.ts:42` - Race condition (added network-first interception)
- `tests/api/users.spec.ts:28` - Dynamic data (hardcoded ID → regex pattern)
**Unable to Heal ({unfixable_count} tests):**
- `tests/e2e/complex-flow.spec.ts:67` - Marked as test.fixme() with manual investigation needed
- Failure: Locator not found after 3 healing attempts
- Requires application code changes (add data-testid to component)
### Healing Patterns Applied
- **Selector fixes**: 2 (CSS class → data-testid, nth() → filter())
- **Timing fixes**: 1 (added network-first interception)
- **Data fixes**: 1 (hardcoded ID → regex)
### Knowledge Base References
- `test-healing-patterns.md` - Common failure patterns
- `selector-resilience.md` - Selector refactoring guide
- `timing-debugging.md` - Race condition prevention
- Unfixable tests marked with `test.fixme()` and detailed comments

If {update_readme} is true:
Create or update tests/README.md with:
Example section:
## Running Tests
```bash
# Run all tests
npm run test:e2e
# Run by priority
npm run test:e2e -- --grep "\[P0\]"
npm run test:e2e -- --grep "\[P1\]"
# Run specific file
npm run test:e2e -- user-authentication.spec.ts
# Run in headed mode
npm run test:e2e -- --headed
# Debug specific test
npm run test:e2e -- user-authentication.spec.ts --debug
```
## Priority Tags
[P0]: Critical, run on every commit
[P1]: High priority, run on PRs to main
[P2]: Medium priority, run nightly
[P3]: Low priority, run on-demand
If {update_package_scripts} is true:
Add or update test execution scripts:
```json
{
  "scripts": {
    "test:e2e": "playwright test",
    "test:e2e:p0": "playwright test --grep '\\[P0\\]'",
    "test:e2e:p1": "playwright test --grep '\\[P1\\]|\\[P0\\]'",
    "test:api": "playwright test tests/api",
    "test:component": "playwright test tests/component",
    "test:unit": "vitest"
  }
}
```
If {run_tests_after_generation} is true:
Save to {output_summary} with:
BMad-Integrated Mode:
# Automation Summary - {feature_name}
**Date:** {date}
**Story:** {story_id}
**Coverage Target:** {coverage_target}
## Tests Created
### E2E Tests (P0-P1)
- `tests/e2e/user-authentication.spec.ts` (2 tests, 87 lines)
- [P0] Login with valid credentials → Dashboard loads
- [P1] Display error for invalid credentials
### API Tests (P1-P2)
- `tests/api/auth.api.spec.ts` (3 tests, 102 lines)
- [P1] POST /auth/login - valid credentials → 200 + token
- [P1] POST /auth/login - invalid credentials → 401 + error
- [P2] POST /auth/login - missing fields → 400 + validation
### Component Tests (P1)
- `tests/component/LoginForm.test.tsx` (2 tests, 45 lines)
- [P1] Empty fields → submit button disabled
- [P1] Valid input → submit button enabled
## Infrastructure Created
### Fixtures
- `tests/support/fixtures/auth.fixture.ts` - authenticatedUser with auto-cleanup
### Factories
- `tests/support/factories/user.factory.ts` - createUser(), deleteUser()
### Helpers
- `tests/support/helpers/wait-for.ts` - Polling helper for complex conditions
## Test Execution
```bash
# Run all new tests
npm run test:e2e
# Run by priority
npm run test:e2e:p0 # Critical paths only
npm run test:e2e:p1 # P0 + P1 tests
```
## Coverage Analysis
Total Tests: 7
Test Levels:
Coverage Status:
## Definition of Done
## Next Steps
- Run tests locally: `npm run test:e2e`
- Run quality gate: `bmad tea *gate`
- Monitor for flaky tests in burn-in loop
**Standalone Mode:**
```markdown
# Automation Summary - {target_feature}
**Date:** {date}
**Target:** {target_feature} (standalone analysis)
**Coverage Target:** {coverage_target}
## Feature Analysis
**Source Files Analyzed:**
- `src/auth/login.ts` - Login logic and validation
- `src/auth/session.ts` - Session management
- `src/auth/validation.ts` - Email/password validation
**Existing Coverage:**
- E2E tests: 0 found
- API tests: 0 found
- Component tests: 0 found
- Unit tests: 0 found
**Coverage Gaps Identified:**
- ❌ No E2E tests for login flow
- ❌ No API tests for /auth/login endpoint
- ❌ No component tests for LoginForm
- ❌ No unit tests for validateEmail()
## Tests Created
{Same structure as BMad-Integrated Mode}
## Recommendations
1. **High Priority (P0-P1):**
- Add E2E test for password reset flow
- Add API tests for token refresh endpoint
- Add component tests for logout button
2. **Medium Priority (P2):**
- Add unit tests for session timeout logic
- Add E2E test for "remember me" functionality
3. **Future Enhancements:**
- Consider contract testing for auth API
- Add visual regression tests for login page
- Set up burn-in loop for flaky test detection
## Definition of Done
{Same checklist as BMad-Integrated Mode}
```
Provide Summary to User
Output concise summary:
## Automation Complete
**Coverage:** {total_tests} tests created across {test_levels} levels
**Priority Breakdown:** P0: {p0_count}, P1: {p1_count}, P2: {p2_count}, P3: {p3_count}
**Infrastructure:** {fixture_count} fixtures, {factory_count} factories
**Output:** {output_summary}
**Run tests:** `npm run test:e2e`
**Next steps:** Review tests, run in CI, integrate with quality gate
BMad-Integrated Mode (story available):
Standalone Mode (no story):
Auto-discover Mode (no targets specified):
Critical principle: Don't test the same behavior at multiple levels.
Good coverage:
- E2E: one happy-path login journey
- API: credential variations (invalid, missing fields)
- Unit: validation logic edge cases
Bad coverage (duplicate):
- The same "valid login succeeds" behavior asserted at E2E, API, Component, and Unit
Use E2E sparingly for critical paths. Use API/Component for variations and edge cases.
Tag every test with priority in test name:
```typescript
test('[P0] should login with valid credentials', async ({ page }) => { ... });
test('[P1] should display error for invalid credentials', async ({ page }) => { ... });
test('[P2] should remember login preference', async ({ page }) => { ... });
```
Enables selective test execution:
```bash
# Run only P0 tests (critical paths)
npm run test:e2e -- --grep "\[P0\]"

# Run P0 + P1 tests (pre-merge)
npm run test:e2e -- --grep "\[P0\]|\[P1\]"
```
Do NOT create page object classes. Keep tests simple and direct:
```typescript
// ✅ CORRECT: Direct test
test('should login', async ({ page }) => {
  await page.goto('/login');
  await page.fill('[data-testid="email"]', 'user@example.com');
  await page.click('[data-testid="login-button"]');
  await expect(page).toHaveURL('/dashboard');
});

// ❌ WRONG: Page object abstraction
class LoginPage {
  async login(email, password) { ... }
}
```
Use fixtures for setup/teardown, not page objects for actions.
No flaky patterns allowed:
```typescript
// ❌ WRONG: Hard wait
await page.waitForTimeout(2000);

// ✅ CORRECT: Explicit wait
await page.waitForSelector('[data-testid="user-name"]');
await expect(page.locator('[data-testid="user-name"]')).toBeVisible();

// ❌ WRONG: Conditional flow
if (await element.isVisible()) {
  await element.click();
}

// ✅ CORRECT: Deterministic assertion
await expect(element).toBeVisible();
await element.click();

// ❌ WRONG: Try-catch for test logic
try {
  await element.click();
} catch (e) {
  // Test shouldn't catch errors
}

// ✅ CORRECT: Let test fail if element not found
await element.click();
```
Every test must clean up its data:
```typescript
// ✅ CORRECT: Fixture with auto-cleanup
export const test = base.extend({
  testUser: async ({ page }, use) => {
    const user = await createUser();
    await use(user);
    await deleteUser(user.id); // Auto-cleanup
  },
});

// ❌ WRONG: Manual cleanup (can be forgotten)
test('should login', async ({ page }) => {
  const user = await createUser();
  // ... test logic ...
  // Forgot to delete user!
});
```
Keep test files lean (under {max_file_lines} lines):
Core Fragments (Auto-loaded in Step 1):
- `test-levels-framework.md` - E2E vs API vs Component vs Unit decision framework with characteristics matrix (467 lines, 4 examples)
- `test-priorities-matrix.md` - P0-P3 classification with automated scoring and risk mapping (389 lines, 2 examples)
- `fixture-architecture.md` - Pure function → fixture → mergeTests composition with auto-cleanup (406 lines, 5 examples)
- `data-factories.md` - Factory patterns with faker: overrides, nested factories, API seeding (498 lines, 5 examples)
- `selective-testing.md` - Tag-based, spec filters, diff-based selection, promotion rules (727 lines, 4 examples)
- `ci-burn-in.md` - 10-iteration burn-in loop, parallel sharding, selective execution (678 lines, 4 examples)
- `test-quality.md` - Deterministic tests, isolated with cleanup, explicit assertions, length/time optimization (658 lines, 5 examples)
- `network-first.md` - Intercept before navigate, HAR capture, deterministic waiting strategies (489 lines, 5 examples)

Healing Fragments (Auto-loaded if {auto_heal_failures} enabled):
- `test-healing-patterns.md` - Common failure patterns: stale selectors, race conditions, dynamic data, network errors, hard waits (648 lines, 5 examples)
- `selector-resilience.md` - Selector hierarchy (data-testid > ARIA > text > CSS), dynamic patterns, anti-patterns refactoring (541 lines, 4 examples)
- `timing-debugging.md` - Race condition prevention, deterministic waiting, async debugging techniques (370 lines, 3 examples)

Manual Reference (Optional):
- Consult `tea-index.csv` to find additional specialized fragments as needed

After completing this workflow, provide a summary:
## Automation Complete
**Mode:** {standalone_mode ? "Standalone" : "BMad-Integrated"}
**Target:** {story_id || target_feature || "Auto-discovered features"}
**Tests Created:**
- E2E: {e2e_count} tests ({p0_count} P0, {p1_count} P1, {p2_count} P2)
- API: {api_count} tests ({p0_count} P0, {p1_count} P1, {p2_count} P2)
- Component: {component_count} tests ({p1_count} P1, {p2_count} P2)
- Unit: {unit_count} tests ({p2_count} P2, {p3_count} P3)
**Infrastructure:**
- Fixtures: {fixture_count} created/enhanced
- Factories: {factory_count} created/enhanced
- Helpers: {helper_count} created/enhanced
**Documentation Updated:**
- ✅ Test README with execution instructions
- ✅ package.json scripts for test execution
**Test Execution:**
```bash
# Run all tests
npm run test:e2e
# Run by priority
npm run test:e2e:p0 # Critical paths only
npm run test:e2e:p1 # P0 + P1 tests
# Run specific file
npm run test:e2e -- {first_test_file}
```
Coverage Status:
Quality Checks:
Output File: {output_summary}
Next Steps:
- Run quality gate: `bmad tea *gate`

Knowledge Base References Applied:
- Test quality principles
---
## Validation
After completing all steps, verify:
- [ ] Execution mode determined (BMad-Integrated, Standalone, or Auto-discover)
- [ ] BMad artifacts loaded if available (story, tech-spec, test-design, PRD)
- [ ] Framework configuration loaded
- [ ] Existing test coverage analyzed (gaps identified)
- [ ] Knowledge base fragments loaded (test-levels, test-priorities, fixture-architecture, data-factories, selective-testing)
- [ ] Automation targets identified (what needs testing)
- [ ] Test levels selected appropriately (E2E, API, Component, Unit)
- [ ] Duplicate coverage avoided (same behavior not tested at multiple levels)
- [ ] Test priorities assigned (P0, P1, P2, P3)
- [ ] Fixture architecture created/enhanced (with auto-cleanup)
- [ ] Data factories created/enhanced (using faker)
- [ ] Helper utilities created/enhanced (if needed)
- [ ] E2E tests written (Given-When-Then, priority tags, data-testid selectors)
- [ ] API tests written (Given-When-Then, priority tags, comprehensive coverage)
- [ ] Component tests written (Given-When-Then, priority tags, UI behavior)
- [ ] Unit tests written (Given-When-Then, priority tags, pure logic)
- [ ] Network-first pattern applied (route interception before navigation)
- [ ] Quality standards enforced (no hard waits, no flaky patterns, self-cleaning, deterministic)
- [ ] Test README updated (execution instructions, priority tagging, patterns)
- [ ] package.json scripts updated (test execution commands)
- [ ] Test suite run locally (results captured)
- [ ] Tests validated (if auto_validate enabled)
- [ ] Failures healed (if auto_heal_failures enabled)
- [ ] Healing report generated (if healing attempted)
- [ ] Unfixable tests marked with test.fixme() (if any)
- [ ] Automation summary created (tests, infrastructure, coverage, healing, DoD)
- [ ] Output file formatted correctly
Refer to `checklist.md` for comprehensive validation criteria.