Workflow: testarch-trace
Purpose: Generate requirements-to-tests traceability matrix, analyze coverage gaps, and make quality gate decisions (PASS/CONCERNS/FAIL/WAIVED)
Agent: Test Architect (TEA)
Format: Pure Markdown v4.0 (no XML blocks)
This workflow operates in two sequential phases to validate test coverage and deployment readiness:
PHASE 1 - REQUIREMENTS TRACEABILITY: Create comprehensive traceability matrix mapping acceptance criteria to implemented tests, identify coverage gaps, and provide actionable recommendations.
PHASE 2 - QUALITY GATE DECISION: Use traceability results combined with test execution evidence to make gate decisions (PASS/CONCERNS/FAIL/WAIVED) that determine deployment readiness.
Key Capabilities:
Required (Phase 1):
Required (Phase 2 - if enable_gate_decision: true):
Recommended:
- test-design.md (for risk assessment and priority context)
- nfr-assessment.md (for release-level gates)
- tech-spec.md (for technical implementation context)

Halt Conditions:
- If no tests exist for the story, halt and recommend running the *atdd workflow first.

Note: *trace never runs *atdd automatically; it only recommends running it when tests are missing.
This phase focuses on mapping requirements to tests, analyzing coverage, and identifying gaps.
Actions:
Load relevant knowledge fragments from {project-root}/_bmad/bmm/testarch/tea-index.csv:
- test-priorities-matrix.md - P0/P1/P2/P3 risk framework with automated priority calculation, risk-based mapping, tagging strategy (389 lines, 2 examples)
- risk-governance.md - Risk-based testing approach: 6 categories (TECH, SEC, PERF, DATA, BUS, OPS), automated scoring, gate decision engine, coverage traceability (625 lines, 4 examples)
- probability-impact.md - Risk scoring methodology: probability × impact matrix, automated classification, dynamic re-assessment, gate integration (604 lines, 4 examples)
- test-quality.md - Definition of Done for tests: deterministic, isolated with cleanup, explicit assertions, length/time limits (658 lines, 5 examples)
- selective-testing.md - Duplicate coverage patterns: tag-based, spec filters, diff-based selection, promotion rules (727 lines, 4 examples)

Read story file (if provided):
Read related BMad artifacts (if available):
- test-design.md - Risk assessment and test priorities
- tech-spec.md - Technical implementation details
- PRD.md - Product requirements context

Output: Complete understanding of requirements, priorities, and existing context
Actions:
Auto-discover test files related to the story:
- Search for test IDs that match the story pattern (e.g., 1.3-E2E-001, 1.3-UNIT-005)
- Use glob to find test files in {test_dir} (see the discovery sketch below)

Categorize tests by level:
Extract test metadata:
Output: Complete catalog of all tests for this feature
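
The discovery step above can be scripted. A minimal TypeScript sketch, assuming a Node.js environment with the `glob` package installed; the `tests` default directory and the `.spec.ts` extension are illustrative assumptions, not workflow requirements:

```typescript
import { glob } from 'glob';
import { readFileSync } from 'node:fs';

type TestLevel = 'E2E' | 'API' | 'INTEGRATION' | 'UNIT';

interface DiscoveredTest {
  id: string;    // e.g. "1.3-E2E-001"
  file: string;
  level: TestLevel;
}

// Scan spec files for test IDs that belong to the story.
async function discoverTests(storyId: string, testDir = 'tests'): Promise<DiscoveredTest[]> {
  const files = await glob(`${testDir}/**/*.spec.ts`);
  const escaped = storyId.replace(/\./g, '\\.'); // "1.3" -> "1\.3"
  const idPattern = new RegExp(`${escaped}-(E2E|API|INTEGRATION|UNIT)-\\d{3}`, 'g');

  const found: DiscoveredTest[] = [];
  for (const file of files) {
    const source = readFileSync(file, 'utf8');
    for (const match of source.matchAll(idPattern)) {
      found.push({ id: match[0], file, level: match[1] as TestLevel });
    }
  }
  return found;
}
```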
Actions:
For each acceptance criterion:
Build traceability matrix:
| Criterion ID | Description | Test ID | Test File | Test Level | Coverage Status |
| ------------ | ----------- | ----------- | ---------------- | ---------- | --------------- |
| AC-1 | User can... | 1.3-E2E-001 | e2e/auth.spec.ts | E2E | FULL |
Classify coverage status for each criterion:
Check for duplicate coverage:
Output: Complete traceability matrix with coverage classifications
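
Explicit mapping is easiest when the test ID from the matrix appears verbatim in the test title. A hypothetical Playwright spec for AC-1; the selectors and credentials are illustrative, not part of the workflow:

```typescript
import { test, expect } from '@playwright/test';

// AC-1 (P0): User can login with email and password
test.describe('1.3-E2E-001: login with valid credentials', () => {
  test('redirects to dashboard after successful login', async ({ page }) => {
    // Given: a user with valid credentials (seeded by a fixture in a real suite)
    await page.goto('/login');

    // When: the user submits the login form
    await page.getByLabel('Email').fill('user@example.com');
    await page.getByLabel('Password').fill('example-password');
    await page.getByRole('button', { name: 'Log in' }).click();

    // Then: the user lands on the dashboard
    await expect(page).toHaveURL(/\/dashboard/);
  });
});
```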
Actions:
Identify coverage gaps:
Recommend specific tests to add:
- Use the test ID convention for recommended tests (e.g., 1.3-E2E-004)

Calculate coverage metrics (see the sketch below):
Check against quality gates:
Output: Prioritized gap analysis with actionable recommendations and coverage metrics
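
A small sketch of the coverage calculation and threshold check, assuming each criterion has already been classified; the `Criterion` shape and helper names are assumptions:

```typescript
type Coverage = 'FULL' | 'PARTIAL' | 'NONE' | 'UNIT-ONLY' | 'INTEGRATION-ONLY';
type Priority = 'P0' | 'P1' | 'P2' | 'P3';

interface Criterion {
  id: string;        // e.g. "AC-1"
  priority: Priority;
  coverage: Coverage;
}

// Percentage of criteria (optionally scoped to one priority) with FULL coverage.
function coveragePercent(criteria: Criterion[], priority?: Priority): number {
  const scoped = priority ? criteria.filter((c) => c.priority === priority) : criteria;
  if (scoped.length === 0) return 100;
  const full = scoped.filter((c) => c.coverage === 'FULL').length;
  return Math.round((full / scoped.length) * 100);
}

// Thresholds mirror the workflow defaults: P0 = 100%, P1 ≥ 90%, overall ≥ 80%.
function phase1Status(criteria: Criterion[]): 'PASS' | 'WARN' | 'FAIL' {
  if (coveragePercent(criteria, 'P0') < 100) return 'FAIL';
  if (coveragePercent(criteria, 'P1') < 90 || coveragePercent(criteria) < 80) return 'WARN';
  return 'PASS';
}
```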
Actions:
For each mapped test, verify:
Flag quality issues:
Reference knowledge fragments:
- test-quality.md for Definition of Done
- fixture-architecture.md for self-cleaning patterns
- network-first.md for Playwright best practices
- data-factories.md for test data patterns

Output: Quality assessment for each test with improvement recommendations
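
Two of the quality checks above (hard waits, file length) are mechanical enough to script. A minimal sketch; the `waitForTimeout` pattern and 300-line limit mirror the quality criteria cited in this workflow, while the function name is illustrative:

```typescript
import { readFileSync } from 'node:fs';

interface QualityIssue {
  file: string;
  issue: string;
}

// Flag two mechanical Definition-of-Done violations: hard waits and oversized files.
function auditTestFile(file: string, maxLines = 300): QualityIssue[] {
  const source = readFileSync(file, 'utf8');
  const issues: QualityIssue[] = [];

  if (/\bwaitForTimeout\(/.test(source)) {
    issues.push({ file, issue: 'Hard wait detected; prefer event- or network-based waiting' });
  }
  if (source.split('\n').length > maxLines) {
    issues.push({ file, issue: `File exceeds ${maxLines} lines; split into focused specs` });
  }
  return issues;
}
```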
Actions:
Create traceability matrix markdown file:
- Use trace-template.md as the structure
- Save to {output_folder}/traceability-matrix.md

Generate gate YAML snippet (if enabled):
```yaml
traceability:
  story_id: '1.3'
  coverage:
    overall: 85%
    p0: 100%
    p1: 90%
    p2: 75%
  gaps:
    critical: 0
    high: 1
    medium: 2
  status: 'PASS' # or "FAIL" if P0 < 100%
```
Create coverage badge/metric (if enabled):
Update story file (if enabled):
Output: Complete Phase 1 traceability deliverables
Next: If enable_gate_decision: true, proceed to Phase 2. Otherwise, workflow complete.
This phase uses traceability results to make a quality gate decision (PASS/CONCERNS/FAIL/WAIVED) based on evidence and decision rules.
When Phase 2 Runs: Automatically after Phase 1 if enable_gate_decision: true (default: true)
Skip Conditions: If test execution results (test_results) are not provided, warn and skip Phase 2.
Actions:
Load Phase 1 traceability results (inherited context):
Load test execution results (if test_results provided):
- P0 pass rate = (P0 passed / P0 total) * 100
- P1 pass rate = (P1 passed / P1 total) * 100
- Overall pass rate = (All passed / All total) * 100

Load NFR assessment (if nfr_file provided):
- nfr-assessment.md or similar

Load supporting artifacts:
- test-design.md → Risk priorities, DoD checklist
- story-*.md or Epics.md → Requirements context
- bmm-workflow-status.md → Workflow completion status (if check_all_workflows_complete: true)

Validate evidence freshness (if validate_evidence_freshness: true):
Check prerequisite workflows (if check_all_workflows_complete: true):
Output: Consolidated evidence bundle with all quality signals
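
The pass-rate formulas above reduce to a single helper once results are tagged with priorities. A sketch, assuming test results have already been parsed from the CI report into a simple list:

```typescript
interface TestResult {
  id: string;                          // e.g. "1.3-E2E-001"
  priority: 'P0' | 'P1' | 'P2' | 'P3';
  passed: boolean;
}

// Pass rate = (passed / total) * 100, scoped to one priority when given.
function passRate(results: TestResult[], priority?: TestResult['priority']): number {
  const scoped = priority ? results.filter((r) => r.priority === priority) : results;
  if (scoped.length === 0) return 100;
  return Math.round((scoped.filter((r) => r.passed).length / scoped.length) * 100);
}
```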
If decision_mode: "deterministic" (rule-based - default):
Decision rules (based on workflow.yaml thresholds):
PASS if ALL of the following are true:
- P0 coverage ≥ min_p0_coverage (default: 100%)
- P1 coverage ≥ min_p1_coverage (default: 90%)
- Overall coverage ≥ min_overall_coverage (default: 80%)
- P0 pass rate ≥ min_p0_pass_rate (default: 100%)
- P1 pass rate ≥ min_p1_pass_rate (default: 95%)
- Overall pass rate ≥ min_overall_pass_rate (default: 90%)
- All critical NFRs pass (if nfr_file provided)
- Security issues ≤ max_security_issues (default: 0)

CONCERNS if ANY of the following are true:
FAIL if ANY of the following are true:
- Critical NFR failures (max_critical_nfrs_fail exceeded)
- Security issues present (max_security_issues exceeded)

WAIVED (only if allow_waivers: true):
Risk tolerance adjustments:
- allow_p2_failures: true → P2 test failures do NOT affect gate decision
- allow_p3_failures: true → P3 test failures do NOT affect gate decision
- escalate_p1_failures: true → P1 failures require explicit manager/lead approval

If decision_mode: "manual":
Output: Gate decision (PASS/CONCERNS/FAIL/WAIVED) with rule-based rationale
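
A deterministic sketch of these rules using the default thresholds; WAIVED is excluded because it requires a human approver. The `Evidence` shape is an assumption about how the consolidated evidence bundle is represented:

```typescript
type Decision = 'PASS' | 'CONCERNS' | 'FAIL';

interface Evidence {
  p0Coverage: number;
  p1Coverage: number;
  overallCoverage: number;
  p0PassRate: number;
  p1PassRate: number;
  overallPassRate: number;
  criticalNfrFailures: number;
  securityIssues: number;
}

// Defaults mirror workflow.yaml thresholds; WAIVED is applied manually afterwards.
function decideGate(e: Evidence): Decision {
  // FAIL: hard blockers.
  if (e.p0Coverage < 100 || e.p0PassRate < 100) return 'FAIL';
  if (e.criticalNfrFailures > 0 || e.securityIssues > 0) return 'FAIL';
  if (e.p1Coverage < 80) return 'FAIL';

  // PASS: every remaining threshold met.
  if (
    e.p1Coverage >= 90 && e.overallCoverage >= 80 &&
    e.p1PassRate >= 95 && e.overallPassRate >= 90
  ) {
    return 'PASS';
  }

  // Otherwise: minor, non-blocking gaps.
  return 'CONCERNS';
}
```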
Actions:
Create gate decision document:
- Save to gate_output_file (default: {output_folder}/gate-decision-{gate_type}-{story_id}.md)

Document structure:
# Quality Gate Decision: {gate_type} {story_id/epic_num/release_version}
**Decision**: [PASS / CONCERNS / FAIL / WAIVED]
**Date**: {date}
**Decider**: {decision_mode} (deterministic | manual)
**Evidence Date**: {test_results_date}
---
## Summary
[1-2 sentence summary of decision and key factors]
---
## Decision Criteria
| Criterion | Threshold | Actual | Status |
| ----------------- | --------- | -------- | ------- |
| P0 Coverage | ≥100% | 100% | ✅ PASS |
| P1 Coverage | ≥90% | 88% | ⚠️ FAIL |
| Overall Coverage | ≥80% | 92% | ✅ PASS |
| P0 Pass Rate | 100% | 100% | ✅ PASS |
| P1 Pass Rate | ≥95% | 98% | ✅ PASS |
| Overall Pass Rate | ≥90% | 96% | ✅ PASS |
| Critical NFRs | All Pass | All Pass | ✅ PASS |
| Security Issues | 0 | 0 | ✅ PASS |
**Overall Status**: 7/8 criteria met → Decision: **CONCERNS**
---
## Evidence Summary
### Test Coverage (from Phase 1 Traceability)
- **P0 Coverage**: 100% (5/5 criteria fully covered)
- **P1 Coverage**: 88% (7/8 criteria fully covered)
- **Overall Coverage**: 92% (12/13 criteria covered)
- **Gap**: AC-5 (P1) missing E2E test
### Test Execution Results
- **P0 Pass Rate**: 100% (12/12 tests passed)
- **P1 Pass Rate**: 98% (45/46 tests passed)
- **Overall Pass Rate**: 96% (67/70 tests passed)
- **Failures**: 3 tests (1 P1, 2 P2), all non-blocking
### Non-Functional Requirements
- Performance: ✅ PASS (response time <500ms)
- Security: ✅ PASS (no vulnerabilities)
- Scalability: ✅ PASS (handles 10K users)
### Test Quality
- All tests have explicit assertions ✅
- No hard waits detected ✅
- Test files <300 lines ✅
- Test IDs follow convention ✅
---
## Decision Rationale
**Why CONCERNS (not PASS)**:
- P1 coverage at 88% is below 90% threshold
- AC-5 (P1 priority) missing E2E test for error handling scenario
- This is a known gap from test-design phase
**Why CONCERNS (not FAIL)**:
- P0 coverage is 100% (critical paths validated)
- Overall coverage is 92% (above 80% threshold)
- Test pass rate is excellent (96% overall)
- Gap is isolated to one P1 criterion (not systemic)
**Recommendation**:
- Acknowledge gap and proceed with deployment
- Add missing AC-5 E2E test in next sprint
- Create follow-up story: "Add E2E test for AC-5 error handling"
---
## Next Steps
- [ ] Create follow-up story for AC-5 E2E test
- [ ] Deploy to staging environment
- [ ] Monitor production for edge cases related to AC-5
- [ ] Update traceability matrix after follow-up test added
---
## References
- Traceability Matrix: `_bmad/output/traceability-matrix.md`
- Test Design: `_bmad/output/test-design-epic-2.md`
- Test Results: `ci-artifacts/test-report-2025-01-15.xml`
- NFR Assessment: `_bmad/output/nfr-assessment-release-1.2.md`
Include evidence links (if require_evidence: true):
Waiver documentation (if decision is WAIVED):
Output: Complete gate decision document with evidence and rationale
Actions:
Update workflow status (if append_to_history: true):
- Append to bmm-workflow-status.md under the "Gate History" section

Format:
## Gate History
### Story 1.3 - User Login (2025-01-15)
- **Decision**: CONCERNS
- **Reason**: P1 coverage 88% (below 90%)
- **Document**: [gate-decision-story-1.3.md](_bmad/output/gate-decision-story-1.3.md)
- **Action**: Deploy with follow-up story for AC-5
Generate stakeholder notification (if notify_stakeholders: true):
Format for Slack/email/chat:
🚦 Quality Gate Decision: Story 1.3 - User Login
Decision: ⚠️ CONCERNS
- P0 Coverage: ✅ 100%
- P1 Coverage: ⚠️ 88% (below 90%)
- Test Pass Rate: ✅ 96%
Action Required:
- Create follow-up story for AC-5 E2E test
- Deploy to staging for validation
Full Report: _bmad/output/gate-decision-story-1.3.md
Request sign-off (if require_sign_off: true):
Output: Status tracking updated, stakeholders notified, sign-off obtained (if required)
Workflow Complete: Both Phase 1 (traceability) and Phase 2 (gate decision) deliverables generated.
| Scenario | P0 Cov | P1 Cov | Overall Cov | P0 Pass | P1 Pass | Overall Pass | NFRs | Decision |
|---|---|---|---|---|---|---|---|---|
| All green | 100% | ≥90% | ≥80% | 100% | ≥95% | ≥90% | Pass | PASS |
| Minor gap | 100% | 80-89% | ≥80% | 100% | 90-94% | 85-89% | Pass | CONCERNS |
| Missing P0 | <100% | - | - | - | - | - | - | FAIL |
| P0 test fail | 100% | - | - | <100% | - | - | - | FAIL |
| P1 gap | 100% | <80% | - | 100% | - | - | - | FAIL |
| NFR fail | 100% | ≥90% | ≥80% | 100% | ≥95% | ≥90% | Fail | FAIL |
| Security issue | - | - | - | - | - | - | Yes | FAIL |
| Business waiver | [FAIL conditions] | - | - | - | - | - | - | WAIVED |
When to use waivers:
Waiver approval process:
Waiver does NOT apply to:
Decision: ✅ PASS
Summary: All quality criteria met. Story 1.3 is ready for production deployment.
Evidence:
- P0 Coverage: 100% (5/5 criteria)
- P1 Coverage: 95% (19/20 criteria)
- Overall Coverage: 92% (24/26 criteria)
- P0 Pass Rate: 100% (12/12 tests)
- P1 Pass Rate: 98% (45/46 tests)
- Overall Pass Rate: 96% (67/70 tests)
- NFRs: All pass (performance, security, scalability)
Action: Deploy to production ✅
Decision: ⚠️ CONCERNS
Summary: P1 coverage slightly below threshold (88% vs 90%). Recommend deploying with follow-up story.
Evidence:
- P0 Coverage: 100% ✅
- P1 Coverage: 88% ⚠️ (below 90%)
- Overall Coverage: 92% ✅
- Test Pass Rate: 96% ✅
- Gap: AC-5 (P1) missing E2E test
Action:
- Deploy to staging for validation
- Create follow-up story for AC-5 E2E test
- Monitor production for edge cases related to AC-5
Decision: ❌ FAIL
Summary: P0 coverage incomplete. Missing critical validation test. BLOCKING deployment.
Evidence:
- P0 Coverage: 80% ❌ (4/5 criteria, AC-2 missing)
- AC-2: "User cannot login with invalid credentials" (P0 priority)
- No tests validate login security for invalid credentials
- This is a critical security gap
Action:
- Add P0 test for AC-2: 1.3-E2E-004 (invalid credentials)
- Re-run traceability after test added
- Re-evaluate gate decision after P0 coverage = 100%
Deployment BLOCKED until P0 gap resolved ❌
Decision: ⚠️ WAIVED
Summary: P1 coverage below threshold (75% vs 90%), but waived for MVP launch.
Evidence:
- P0 Coverage: 100% ✅
- P1 Coverage: 75% ❌ (below 90%)
- Gap: 5 P1 criteria missing E2E tests (error handling, edge cases)
Waiver:
- Approver: Jane Doe, Engineering Manager
- Date: 2025-01-15
- Justification: Time-boxed MVP for investor demo. Core functionality (P0) fully validated. P1 gaps are low-risk edge cases.
- Mitigation: Manual QA testing for P1 scenarios, follow-up stories created for automated tests in v1.1
- Evidence: Slack #engineering 2025-01-15 3:42pm
Action:
- Deploy to production with manual QA validation ✅
- Add 5 E2E tests for P1 gaps in v1.1 sprint
- Monitor production logs for edge case occurrences
Minimal Examples: This workflow provides principles and patterns, not rigid templates. Teams should adapt the traceability and gate decision formats to their needs.
Key Patterns to Follow:
Extend as Needed:
Use selective testing principles from selective-testing.md:
Acceptable Overlap:
Unacceptable Duplication:
Recommendation Pattern:
# Traceability Matrix - Story 1.3
**Story:** User Authentication
**Date:** 2025-10-14
**Status:** 79% Coverage (1 HIGH gap)
## Coverage Summary
| Priority | Total Criteria | FULL Coverage | Coverage % | Status |
| --------- | -------------- | ------------- | ---------- | ------- |
| P0 | 3 | 3 | 100% | ✅ PASS |
| P1 | 5 | 4 | 80% | ⚠️ WARN |
| P2 | 4 | 3 | 75% | ✅ PASS |
| P3 | 2 | 1 | 50% | ✅ PASS |
| **Total** | **14** | **11** | **79%** | ⚠️ WARN |
## Detailed Mapping
### AC-1: User can login with email and password (P0)
- **Coverage:** FULL ✅
- **Tests:**
- `1.3-E2E-001` - tests/e2e/auth.spec.ts:12
- Given: User has valid credentials
- When: User submits login form
- Then: User is redirected to dashboard
- `1.3-UNIT-001` - tests/unit/auth-service.spec.ts:8
- Given: Valid email and password hash
- When: validateCredentials is called
- Then: Returns user object
### AC-2: User sees error for invalid credentials (P0)
- **Coverage:** FULL ✅
- **Tests:**
- `1.3-E2E-002` - tests/e2e/auth.spec.ts:28
- Given: User has invalid password
- When: User submits login form
- Then: Error message is displayed
- `1.3-UNIT-002` - tests/unit/auth-service.spec.ts:18
- Given: Invalid password hash
- When: validateCredentials is called
- Then: Throws AuthenticationError
### AC-3: User can reset password via email (P1)
- **Coverage:** PARTIAL ⚠️
- **Tests:**
- `1.3-E2E-003` - tests/e2e/auth.spec.ts:44
- Given: User requests password reset
- When: User clicks reset link
- Then: User can set new password
- **Gaps:**
- Missing: Email delivery validation
- Missing: Expired token handling
- Missing: Unit test for token generation
- **Recommendation:** Add `1.3-API-001` for email service integration and `1.3-UNIT-003` for token logic
## Gap Analysis
### Critical Gaps (BLOCKER)
- None ✅
### High Priority Gaps (PR BLOCKER)
1. **AC-3: Password reset email edge cases**
- Missing tests for expired tokens, invalid tokens, email failures
- Recommend: `1.3-API-001` (email service integration) and `1.3-E2E-004` (error paths)
- Impact: Users may not be able to recover accounts in error scenarios
### Medium Priority Gaps (Nightly)
1. **AC-7: Session timeout handling** - UNIT-ONLY coverage (missing E2E validation)
## Quality Assessment
### Tests with Issues
- `1.3-E2E-001` ⚠️ - 145 seconds (exceeds 90s target) - Optimize fixture setup
- `1.3-UNIT-005` ⚠️ - 320 lines (exceeds 300 line limit) - Split into multiple test files
### Tests Passing Quality Gates
- 11/13 tests (85%) meet all quality criteria ✅
## Gate YAML Snippet
```yaml
traceability:
story_id: '1.3'
coverage:
overall: 79%
p0: 100%
p1: 80%
p2: 75%
p3: 50%
gaps:
critical: 0
high: 1
medium: 1
low: 1
status: 'WARN' # P1 coverage below 90% threshold
recommendations:
- 'Add 1.3-API-001 for email service integration'
- 'Add 1.3-E2E-004 for password reset error paths'
- 'Optimize 1.3-E2E-001 performance (145s → <90s)'
```
## Recommendations

- Optimize `1.3-E2E-001` to use faster fixture setup
- Split `1.3-UNIT-005` into focused test files
- Enhance P2 Coverage: Add E2E validation for session timeout (currently UNIT-ONLY)
---
## Validation Checklist
Before completing this workflow, verify:
**Phase 1 (Traceability):**
- ✅ All acceptance criteria are mapped to tests (or gaps are documented)
- ✅ Coverage status is classified (FULL, PARTIAL, NONE, UNIT-ONLY, INTEGRATION-ONLY)
- ✅ Gaps are prioritized by risk level (P0/P1/P2/P3)
- ✅ P0 coverage is 100% or blockers are documented
- ✅ Duplicate coverage is identified and flagged
- ✅ Test quality is assessed (assertions, structure, performance)
- ✅ Traceability matrix is generated and saved
**Phase 2 (Gate Decision - if enabled):**
- ✅ Test execution results loaded and pass rates calculated
- ✅ NFR assessment results loaded (if applicable)
- ✅ Decision rules applied consistently (PASS/CONCERNS/FAIL/WAIVED)
- ✅ Gate decision document created with evidence
- ✅ Waiver documented if decision is WAIVED (approver, justification, mitigation)
- ✅ Workflow status updated (bmm-workflow-status.md)
- ✅ Stakeholders notified (if enabled)
---
## Notes
**Phase 1 (Traceability):**
- **Explicit Mapping:** Require tests to reference criteria explicitly (test IDs, describe blocks) for maintainability
- **Risk-Based Prioritization:** Use test-priorities framework (P0/P1/P2/P3) to determine gap severity
- **Quality Over Quantity:** Better to have fewer high-quality tests with FULL coverage than many low-quality tests with PARTIAL coverage
- **Selective Testing:** Avoid duplicate coverage - test each behavior at the appropriate level only
**Phase 2 (Gate Decision):**
- **Deterministic Rules:** Use consistent thresholds (P0=100%, P1≥90%, overall≥80%) for objectivity
- **Evidence-Based:** Every decision must cite specific metrics (coverage %, pass rates, NFRs)
- **Waiver Discipline:** Waivers require approver name, justification, mitigation plan, and evidence link
- **Non-Blocking CONCERNS:** Use CONCERNS for minor gaps that don't justify blocking deployment (e.g., P1 at 88% vs 90%)
- **Automate in CI/CD:** Generate YAML snippets that can be consumed by CI/CD pipelines for automated quality gates
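
As one way to consume the gate YAML snippet in CI, a short Node.js sketch using `js-yaml`; the snippet file name is an assumption:

```typescript
import { readFileSync } from 'node:fs';
import { load } from 'js-yaml';

// Fail the pipeline when the gate snippet reports FAIL (file name is an assumption).
const gate = load(readFileSync('gate-snippet.yaml', 'utf8')) as {
  traceability: { status: string; coverage: { p0: string } };
};

if (gate.traceability.status === 'FAIL') {
  console.error(`Quality gate FAIL: P0 coverage ${gate.traceability.coverage.p0}`);
  process.exit(1);
}
console.log(`Quality gate ${gate.traceability.status}`);
```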
---
## Troubleshooting
### "No tests found for this story"
- Run `*atdd` workflow first to generate failing acceptance tests
- Check test file naming conventions (may not match story ID pattern)
- Verify test directory path is correct
### "Cannot determine coverage status"
- Tests may lack explicit mapping to criteria (no test IDs, unclear describe blocks)
- Review test structure and add Given-When-Then narrative
- Add test IDs in format: `{STORY_ID}-{LEVEL}-{SEQ}` (e.g., 1.3-E2E-001)
### "P0 coverage below 100%"
- This is a **BLOCKER** - do not release
- Identify missing P0 tests in gap analysis
- Run `*atdd` workflow to generate missing tests
- Verify with stakeholders that P0 classification is correct
### "Duplicate coverage detected"
- Review selective testing principles in `selective-testing.md`
- Determine if overlap is acceptable (defense in depth) or wasteful (same validation at multiple levels)
- Consolidate tests at appropriate level (logic → unit, integration → API, journey → E2E)
### "Test execution results missing" (Phase 2)
- Phase 2 gate decision requires `test_results` (CI/CD test reports)
- If missing, Phase 2 will be skipped with warning
- Provide JUnit XML, TAP, or JSON test report path via `test_results` variable
### "Gate decision is FAIL but deployment needed urgently"
- Request business waiver (if `allow_waivers: true`)
- Document approver, justification, mitigation plan
- Create follow-up stories to address gaps
- Use WAIVED decision only for non-P0 gaps
---
## Related Workflows
**Prerequisites:**
- `testarch-test-design` - Define test priorities (P0/P1/P2/P3) before tracing (required for Phase 2)
- `testarch-atdd` or `testarch-automate` - Generate tests before tracing coverage
**Complements:**
- `testarch-nfr-assess` - Non-functional requirements validation (recommended for release gates)
- `testarch-test-review` - Review test quality issues flagged in traceability
**Next Steps:**
- If gate decision is PASS/CONCERNS → Deploy and monitor
- If gate decision is FAIL → Add missing tests, re-run trace workflow
- If gate decision is WAIVED → Deploy with mitigation, create follow-up stories
---
<!-- Powered by BMAD-CORE™ -->