Automatically detect HTTP 4xx/5xx errors during test execution and fail the affected tests. Think of it as Sentry for tests: it catches silent backend failures even when every UI assertion passes.
Traditional Playwright tests assert only on the UI, so they can pass while the backend quietly returns errors. The network-error-monitor closes that gap by watching every HTTP response during a test and failing the test when 4xx/5xx errors occur.
```ts
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';

// That's it! Network monitoring is automatically enabled
test('my test', async ({ page }) => {
  await page.goto('/dashboard');
  // If any HTTP 4xx/5xx errors occur, the test will fail
});
```
Context: Automatically fail tests when backend errors occur.
Implementation:
```ts
import { expect } from '@playwright/test';
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';

// Monitoring automatically enabled
test('should load dashboard', async ({ page }) => {
  await page.goto('/dashboard');
  await expect(page.locator('h1')).toContainText('Dashboard');
  // Passes if no HTTP errors occur.
  // Fails if any 4xx/5xx errors are detected, with a clear message:
  //   "Network errors detected: 2 request(s) failed"
  //   Failed requests:
  //     GET 500 https://api.example.com/users
  //     POST 503 https://api.example.com/metrics
});
```
Key Points: monitoring requires no per-test setup (importing the fixture's test is enough), and failures include a message listing every failed request.
Context: Some tests expect errors (validation, error handling, edge cases).
Implementation:
```ts
import { expect } from '@playwright/test';
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';

// Opt out with an annotation
test(
  'should show error on invalid input',
  { annotation: [{ type: 'skipNetworkMonitoring' }] },
  async ({ page }) => {
    await page.goto('/form');
    await page.click('#submit'); // Triggers a 400 error
    // Monitoring disabled - the test won't fail on the 400
    await expect(page.getByText('Invalid input')).toBeVisible();
  }
);

// Or opt out an entire describe block
test.describe('error handling', { annotation: [{ type: 'skipNetworkMonitoring' }] }, () => {
  test('handles 404', async ({ page }) => {
    // All tests in this block skip monitoring
  });
  test('handles 500', async ({ page }) => {
    // Monitoring disabled
  });
});
```
Key Points:
- The annotation type is { type: 'skipNetworkMonitoring' }; it can be applied to a single test or to an entire describe block.

Context: The monitor respects final test statuses to avoid suppressing important test outcomes.
Behavior by test status:
- failed: Network errors logged as additional context, not thrown
- timedOut: Network errors logged as additional context
- skipped: Network errors logged, skip status preserved
- interrupted: Network errors logged, interrupted status preserved
- passed: Network errors throw and fail the test

Example with test.skip():
```ts
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';

test('feature gated test', async ({ page }) => {
  const featureEnabled = await checkFeatureFlag(); // your own feature-flag helper
  test.skip(!featureEnabled, 'Feature not enabled');
  // If skipped, network errors won't turn this into a failure
  await page.goto('/new-feature');
});
```
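The same status-awareness means an assertion failure is never masked: if the test already failed, network errors are attached as extra context rather than replacing the original error. A small illustration:

```ts
import { expect } from '@playwright/test';
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';

test('dashboard renders', async ({ page }) => {
  await page.goto('/dashboard');
  // If this assertion fails while a 500 also occurred, the assertion error is
  // kept as the failure; the network errors are logged as additional context.
  await expect(page.locator('h1')).toContainText('Dashboard');
});
```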
Context: Some endpoints legitimately return 4xx/5xx responses.
Implementation:
```ts
import { test as base } from '@playwright/test';
import { createNetworkErrorMonitorFixture } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';

export const test = base.extend(
  createNetworkErrorMonitorFixture({
    excludePatterns: [
      /email-cluster\/ml-app\/has-active-run/, // ML service returns 404 when no active run
      /idv\/session-templates\/list/, // IDV service returns 404 when not configured
      /sentry\.io\/api/, // External Sentry errors should not fail tests
    ],
  })
);
```
For merged fixtures:
```ts
import { test as base, mergeTests } from '@playwright/test';
import { test as authFixture } from '@seontechnologies/playwright-utils/auth-session/fixtures';
import { createNetworkErrorMonitorFixture } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';

const networkErrorMonitor = base.extend(
  createNetworkErrorMonitorFixture({
    excludePatterns: [/analytics\.google\.com/, /cdn\.example\.com/],
  })
);

export const test = mergeTests(authFixture, networkErrorMonitor);
```
Context: One failing endpoint shouldn't fail all tests.
Implementation:
```ts
import { test as base } from '@playwright/test';
import { createNetworkErrorMonitorFixture } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';

export const test = base.extend(
  createNetworkErrorMonitorFixture({
    excludePatterns: [], // Required when using maxTestsPerError
    maxTestsPerError: 1, // Only the first test fails per error pattern; the rest just log
  })
);
```
How it works:
When /api/v2/case-management/cases starts returning 500, the first test that hits that error pattern fails; subsequent tests hitting the same pattern only log a warning. Error patterns are grouped by method + status + base path:
- GET /api/v2/case-management/cases/123 -> Pattern: GET:500:/api/v2/case-management
- GET /api/v2/case-management/quota -> Pattern: GET:500:/api/v2/case-management (same group!)
- POST /api/v2/case-management/cases -> Pattern: POST:500:/api/v2/case-management (different group!)

Why include the HTTP method? A GET 404 and a POST 404 can represent different issues:
- GET 404 /api/users/123 -> User not found (expected in some tests)
- POST 404 /api/users -> Endpoint doesn't exist (critical error)

Output for subsequent tests:
```
Warning: Network errors detected but not failing test (maxTestsPerError limit reached):
  GET 500 https://api.example.com/api/v2/case-management/cases
```
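To make the grouping concrete, here is a minimal sketch of how such a pattern key could be derived. The errorPatternKey helper below is hypothetical, not part of the library's API:

```ts
// Hypothetical illustration of the grouping rule described above; the
// library's real implementation may differ.
function errorPatternKey(method: string, status: number, url: string): string {
  const path = new URL(url).pathname; // e.g. /api/v2/case-management/cases/123
  // Keep the first three path segments as the base path
  const basePath = path.split('/').slice(0, 4).join('/'); // /api/v2/case-management
  return `${method}:${status}:${basePath}`;
}

errorPatternKey('GET', 500, 'https://api.example.com/api/v2/case-management/cases/123');
// -> 'GET:500:/api/v2/case-management'
```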
Recommended configuration:
```ts
createNetworkErrorMonitorFixture({
  excludePatterns: [...], // Required - known broken endpoints (can be empty [])
  maxTestsPerError: 1, // Stop the domino effect (requires excludePatterns)
})
```
Understanding worker-level state:
Error pattern counts are stored in worker-level global state:
```ts
// test-file-1.spec.ts (runs in Worker 1)
test('test A', () => {
  /* triggers GET:500:/api/v2/cases */
}); // FAILS

// test-file-2.spec.ts (runs later in Worker 1)
test('test B', () => {
  /* triggers GET:500:/api/v2/cases */
}); // PASSES (limit reached)

// test-file-3.spec.ts (runs in Worker 2 - different worker)
test('test C', () => {
  /* triggers GET:500:/api/v2/cases */
}); // FAILS (fresh worker)
```
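Since the counters live in per-worker state, the worker count in your Playwright config determines how often they reset. A minimal illustration (the value here is an assumption, not a recommendation):

```ts
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Each worker keeps its own maxTestsPerError counters, so with two workers
  // the same error pattern can fail at most one test per worker.
  workers: 2,
});
```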
Context: Combine network-error-monitor with other utilities.
Implementation:
```ts
// playwright/support/merged-fixtures.ts
import { mergeTests } from '@playwright/test';
import { test as authFixture } from '@seontechnologies/playwright-utils/auth-session/fixtures';
import { test as networkErrorMonitorFixture } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';

export const test = mergeTests(
  authFixture,
  networkErrorMonitorFixture
  // Add other fixtures
);
export { expect } from '@playwright/test';
```

```ts
// In tests
import { test, expect } from '../support/merged-fixtures';

test('authenticated with monitoring', async ({ page, authToken }) => {
  // Both auth and network monitoring active
  await page.goto('/protected');
  // Fails if the backend returns errors during the auth flow
});
```
Key Points:
- mergeTests composes the monitor with other fixtures without manual wiring; it stays auto-enabled in the merged test.

Context: Debugging failed tests with network error artifacts.
When a test fails due to network errors, a network-errors.json artifact is attached:
```json
[
  {
    "url": "https://api.example.com/users",
    "status": 500,
    "method": "GET",
    "timestamp": "2025-11-10T12:34:56.789Z"
  },
  {
    "url": "https://api.example.com/metrics",
    "status": 503,
    "method": "POST",
    "timestamp": "2025-11-10T12:34:57.123Z"
  }
]
```
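If you want these artifacts surfaced in CI logs as well, a small custom reporter can read the attachment. This is a sketch built on Playwright's public Reporter API; only the artifact name network-errors.json is taken from the output above:

```ts
import type { Reporter, TestCase, TestResult } from '@playwright/test/reporter';

// Logs any network-errors.json attachment so failures are visible in CI output.
class NetworkErrorReporter implements Reporter {
  onTestEnd(test: TestCase, result: TestResult) {
    const artifact = result.attachments.find((a) => a.name === 'network-errors.json');
    if (artifact?.body) {
      console.log(`Network errors in "${test.title}":\n${artifact.body.toString()}`);
    }
  }
}

export default NetworkErrorReporter;
```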
Under the hood, the monitor is wired up as:
- base.extend() with auto: true, so the fixture is active for every test without explicit setup
- a page.on('response') listener registered at test start
- a context.on('page') listener, so pages opened during the test are monitored as well

(A minimal sketch of this mechanism follows the comparison table below.)

The monitor has minimal performance impact. Compared with a manual afterEach check:
| Approach | Network Error Monitor | Manual afterEach |
|---|---|---|
| Setup Required | Zero (auto-enabled) | Every test file |
| Catches Silent Failures | Yes | Yes (if configured) |
| Structured Artifacts | JSON attached | Custom impl |
| Test Failure Safety | Try/finally | afterEach may not run |
| Opt-Out Mechanism | Annotation | Custom logic |
| Status Aware | Respects skip/failed | No |
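For reference, here is the minimal sketch of the listener mechanism described above, written as a simplified auto fixture. It illustrates the approach only and is not the library's actual source:

```ts
import { test as base } from '@playwright/test';

type FailedRequest = { url: string; status: number; method: string; timestamp: string };

export const test = base.extend<{ networkErrorMonitor: void }>({
  networkErrorMonitor: [
    async ({ page }, use, testInfo) => {
      const failures: FailedRequest[] = [];
      // Record every 4xx/5xx response seen during the test
      page.on('response', (response) => {
        if (response.status() >= 400) {
          failures.push({
            url: response.url(),
            status: response.status(),
            method: response.request().method(),
            timestamp: new Date().toISOString(),
          });
        }
      });
      try {
        await use();
      } finally {
        if (failures.length > 0) {
          await testInfo.attach('network-errors.json', {
            body: JSON.stringify(failures, null, 2),
            contentType: 'application/json',
          });
          // Only turn errors into a failure if the test would otherwise pass
          if (testInfo.status === 'passed') {
            throw new Error(`Network errors detected: ${failures.length} request(s) failed`);
          }
        }
      }
    },
    { auto: true },
  ],
});
```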
Auto-enabled for: every test that imports test from the network-error-monitor fixture, or from a merged fixture that includes it.
Opt-out for: tests that intentionally trigger 4xx/5xx responses (validation, error handling, edge cases).
Troubleshooting:
- Test fails but the UI looked fine: the errors might be happening during page load or in background polling. Check the network-errors.json artifact in your test report for full details, including timestamps.
- A known endpoint legitimately returns errors: configure exclusion patterns as shown in the "Excluding Legitimate Errors" section above.
- Monitoring never kicks in: ensure you're importing the test from the correct fixture:
```ts
// Correct
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures';

// Wrong - this won't have network monitoring
import { test } from '@playwright/test';
```
Related docs:
- overview.md - Installation and fixtures
- fixtures-composition.md - Merging with other utilities
- error-handling.md - Traditional error handling patterns

DON'T opt out of monitoring globally:
```ts
// Every test skips monitoring
test.use({ annotation: [{ type: 'skipNetworkMonitoring' }] });
```
DO opt out only for specific error tests:
```ts
test.describe('error scenarios', { annotation: [{ type: 'skipNetworkMonitoring' }] }, () => {
  // Only these tests skip monitoring
});
```
DON'T ignore network error artifacts:
```ts
// Test fails, artifact shows 500 errors
// Developer: "Works on my machine" ¯\_(ツ)_/¯
```
DO check artifacts for root cause:
```ts
// Read the network-errors.json artifact
// Identify the failing endpoint: GET /api/users -> 500
// Fix the backend issue before merging
```