ATDD Workflow Validation Checklist
Use this checklist to validate that the ATDD workflow has been executed correctly and all deliverables meet quality standards.
Prerequisites
Before starting this workflow, verify:
Halt if missing: Framework scaffolding or story acceptance criteria
Step 1: Story Context and Requirements
Step 2: Test Level Selection and Strategy
Step 3: Failing Tests Generated
Test File Structure Created
E2E Tests (If Applicable)
API Tests (If Applicable)
Component Tests (If Applicable)
Test Quality Validation
Step 4: Data Infrastructure Built
Data Factories Created
Test Fixtures Created
Mock Requirements Documented
data-testid Requirements Listed
Step 5: Implementation Checklist Created
Step 6: Deliverables Generated
ATDD Checklist Document Created
All Tests Verified to Fail (RED Phase)
Summary Provided
Quality Checks
Test Design Quality
Knowledge Base Integration
Code Quality
Integration Points
With DEV Agent
With Story Workflow
With Framework Workflow
With test-design Workflow (If Available)
Completion Criteria
All of the following must be true before marking this workflow as complete:
Common Issues and Resolutions
Issue: Tests pass before implementation
Problem: A test passes even though no implementation code exists yet.
Resolution:
- Review test to ensure it's testing actual behavior, not mocked/stubbed behavior
- Check if test is accidentally using existing functionality
- Verify test assertions are correct and meaningful
- Rewrite test to fail until implementation is complete
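For reference, a minimal sketch of a test that stays red until implementation exists, assuming Playwright; the route and `data-testid` values are hypothetical:

```ts
import { test, expect } from '@playwright/test';

// Hypothetical signup flow: the URL and test IDs are illustrative.
// The assertion targets real rendered output rather than a mock or stub,
// so the test fails (RED) until the confirmation UI is implemented.
test('shows confirmation after submitting signup form', async ({ page }) => {
  await page.goto('/signup');
  await page.getByTestId('signup-email').fill('user@example.com');
  await page.getByTestId('signup-submit').click();
  await expect(page.getByTestId('signup-confirmation')).toBeVisible();
});
```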
Issue: Network-first pattern not applied
Problem: Route interception happens after navigation, causing race conditions.
Resolution:
- Move `await page.route()` calls BEFORE `await page.goto()`
- Review the `network-first.md` knowledge fragment
- Update all E2E tests to follow network-first pattern
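A minimal network-first sketch in Playwright (the `/api/users` endpoint and response shape are assumptions):

```ts
import { test, expect } from '@playwright/test';

test('renders users returned by the API', async ({ page }) => {
  // Network-first: register the interception BEFORE navigation so it is
  // in place when the page fires its first request.
  await page.route('**/api/users', (route) =>
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify([{ id: 1, name: 'Test User' }]),
    }),
  );

  // Navigate only after the route is registered; reversing these two
  // awaits is the race condition this issue describes.
  await page.goto('/users');

  await expect(page.getByText('Test User')).toBeVisible();
});
```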
Issue: Hardcoded test data in tests
Problem: Tests use hardcoded strings/numbers instead of factories.
Resolution:
- Replace all hardcoded data with factory function calls
- Use `faker` for all random data generation
- Update data-factories to support all required test scenarios
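A factory sketch using `@faker-js/faker`; the `User` shape and field names are hypothetical:

```ts
import { faker } from '@faker-js/faker';

// Illustrative domain type -- replace with the project's real model.
export interface User {
  id: string;
  name: string;
  email: string;
  role: 'admin' | 'member';
}

// All values come from faker, never hardcoded literals; overrides let a
// test pin only the fields its single assertion depends on.
export function createUser(overrides: Partial<User> = {}): User {
  return {
    id: faker.string.uuid(),
    name: faker.person.fullName(),
    email: faker.internet.email(),
    role: 'member',
    ...overrides,
  };
}

// Usage: const admin = createUser({ role: 'admin' });
```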
Issue: Fixtures missing auto-cleanup
Problem: Fixtures create data but don't clean it up in teardown.
Resolution:
- Add cleanup logic after `await use(data)` in the fixture
- Call deletion/cleanup functions in teardown
- Verify cleanup works by checking database/storage after test run
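A fixture sketch with auto-cleanup, assuming a REST API at `/api/users` and the `createUser` factory sketched above:

```ts
import { test as base } from '@playwright/test';
import { createUser, type User } from './factories'; // hypothetical module

export const test = base.extend<{ seededUser: User }>({
  seededUser: async ({ request }, use) => {
    // Setup: seed the record this test needs via the (assumed) API.
    const response = await request.post('/api/users', { data: createUser() });
    const user = (await response.json()) as User;

    await use(user); // the test body runs here

    // Teardown: auto-cleanup runs even when the test fails, so no data
    // leaks into the next test.
    await request.delete(`/api/users/${user.id}`);
  },
});
```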
Issue: Tests have multiple assertions
Problem: A single test verifies multiple behaviors (not atomic).
Resolution:
- Split into separate tests (one assertion per test)
- Each test should verify exactly one behavior
- Use descriptive test names to clarify what each test verifies
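A splitting sketch, assuming a hypothetical `saveProfile` helper shared by both tests:

```ts
import { test, expect, type Page } from '@playwright/test';

// Hypothetical shared setup; selectors and URLs are illustrative.
async function saveProfile(page: Page): Promise<void> {
  await page.goto('/profile');
  await page.getByTestId('save-button').click();
}

// One assertion per test: a failure now points at exactly one behavior.
test('shows success toast after saving profile', async ({ page }) => {
  await saveProfile(page);
  await expect(page.getByTestId('toast-success')).toBeVisible();
});

test('redirects to dashboard after saving profile', async ({ page }) => {
  await saveProfile(page);
  await expect(page).toHaveURL(/\/dashboard/);
});
```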
Issue: Tests depend on execution order
Problem: Tests fail when run in isolation or in a different order.
Resolution:
- Remove shared state between tests
- Each test should create its own test data
- Use fixtures for consistent setup across tests
- Verify each test passes when run in isolation with the `.only` flag
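An isolation sketch: each test seeds its own data rather than reading state left by an earlier test (the endpoint and response shape are assumptions):

```ts
import { test, expect } from '@playwright/test';
import { createUser } from './factories'; // hypothetical factory module

// Self-contained: the test creates its own user, so it passes when run
// alone with .only or in any shuffled order.
test('displays the seeded user name on the profile page', async ({ page, request }) => {
  const response = await request.post('/api/users', { data: createUser() });
  const user = await response.json();

  await page.goto(`/users/${user.id}`);
  await expect(page.getByTestId('profile-name')).toHaveText(user.name);
});
```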
Notes for TEA Agent
- Preflight halt is critical: Do not proceed if story has no acceptance criteria or framework is missing
- RED phase verification is mandatory: Tests must fail before sharing with DEV team
- Network-first pattern: Route interception BEFORE navigation prevents race conditions
- One assertion per test: Atomic tests provide clear failure diagnosis
- Auto-cleanup is non-negotiable: Every fixture must clean up data in teardown
- Use knowledge base: Load relevant fragments (fixture-architecture, data-factories, network-first, component-tdd, test-quality) for guidance
- Share with DEV agent: ATDD checklist provides implementation roadmap from red to green