Acceptance Testing Guidelines
Comprehensive acceptance testing strategies for full-stack applications at Synapse Studios using Playwright and clean architecture principles
Implements: Clean Architecture | Dependency Inversion & Ports/Adapters
Table of Contents
- Architecture & Design Principles
- Testing Tools & Framework
- Test Planning & Organization
- Driver Layer Patterns
- Test Categories
- Environment Configuration
- Best Practices
- Test Data Management
- Continuous Testing
- Interactive Testing with MCP
Architecture & Design Principles
Layered Architecture
Acceptance tests should follow a layered architecture that separates test intent from implementation details, drawing on Clean Architecture principles and the acceptance-testing guidance from Continuous Delivery:
```
┌─────────────────────┐
│  Acceptance Tests   │  ← Declarative, business-focused tests
├─────────────────────┤
│    Driver Layer     │  ← Abstractions hiding implementation details (adapters)
├─────────────────────┤
│   Infrastructure    │  ← Environment setup, configuration, utilities
└─────────────────────┘
```

Core Design Principles
- Tests are specifications: Written in business language, readable by stakeholders
- Driver layer provides stability: UI/API changes only affect driver code, not tests
- Clean abstractions: Workflows, API clients, and page objects encapsulate complexity
- Environment agnostic: Same tests run locally, in CI, and in containerized environments
- Independence: Tests can run in any order without dependencies
Project Structure
```
acceptance-tests/
├── acceptance/        # Test specifications
│   ├── e2e/           # End-to-end browser tests
│   └── api/           # Direct API tests
├── driver/            # Driver layer abstractions
│   ├── workflows/     # High-level business process abstractions
│   ├── api-clients/   # Type-safe API interaction abstractions
│   ├── page-objects/  # UI component abstractions
│   ├── simulators/    # External system simulations
│   └── builders/      # Test data creation abstractions
├── infrastructure/    # Infrastructure layer
│   ├── config/        # Environment and test configuration
│   ├── database/      # Database management utilities
│   ├── fixtures/      # Test data and mock responses
│   └── utils/         # Shared utilities
└── reports/           # Test execution reports
```

Testing Tools & Framework
Playwright as Primary Framework
We use Playwright for acceptance testing due to its:
- Cross-browser support (Chromium, Firefox, WebKit)
- Built-in auto-waiting and retry mechanisms
- Powerful debugging capabilities
- TypeScript-first design
- API testing capabilities
- Excellent CI/CD integration
Setup and Configuration
```bash
# Initial setup
npm install -D @playwright/test
npx playwright install

# Create playwright.config.ts
```

Test Projects Configuration
Configure multiple test projects for different testing strategies:
```typescript
// playwright.config.ts
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    {
      name: 'e2e',
      testMatch: '**/acceptance/e2e/**/*.spec.ts',
      use: { ...devices['Desktop Chrome'] }
    },
    {
      name: 'api',
      testMatch: '**/acceptance/api/**/*.spec.ts'
    }
  ]
});
```

Essential Scripts
```json
{
  "scripts": {
    "test": "playwright test",
    "test:e2e": "playwright test --project=e2e",
    "test:api": "playwright test --project=api",
    "test:headed": "playwright test --headed",
    "test:debug": "playwright test --debug",
    "test:ui": "playwright test --ui",
    "test:report": "playwright show-report"
  }
}
```

Test Planning & Organization
Writing Tests as Specifications
Tests should read like executable specifications using business domain language:
```typescript
test('User can join an organization they have access to', async ({ page }) => {
  // Given: User has access to multiple organizations
  await authWorkflows.loginWithAccount('multi-org-user');

  // When: User selects an available organization
  await orgWorkflows.selectAvailableOrganization('acme-corp');

  // Then: User successfully joins the organization
  await orgWorkflows.verifyUserJoinedSuccessfully('acme-corp');
});
```

Test Structure Guidelines
- Use Given/When/Then structure for clarity
- Focus on what happens, not how it happens
- Use business domain terminology, not technical details
- Keep tests short and focused on single acceptance criteria
- Make tests independent - able to run in any order
Test Organization Pattern
```typescript
test.describe('Feature Name', () => {
  // Setup shared resources
  test.beforeEach(async ({ page }) => {
    // Initialize workflows with page context
  });

  test('Specific acceptance criterion', async ({ page }) => {
    // Test implementation using workflows
  });
});
```

Driver Layer Patterns
The driver layer implements the Ports and Adapters pattern, providing clean abstractions that hide implementation complexity:
1. Workflows (Business Process Abstractions)
High-level business workflows that encapsulate complete user journeys:
```typescript
export class AuthenticationWorkflows {
  constructor(private page: Page) {}

  async loginWithAccount(username: string): Promise<void> {
    // Complex login flow abstracted
  }

  async logout(): Promise<void> {
    // Logout process
  }

  async verifyUserIsAuthenticated(): Promise<void> {
    // Authentication verification
  }
}
```

2. API Clients (Type-Safe API Interactions)
Type-safe API interactions for setup and verification:
```typescript
export class OrganizationApiClient extends BaseApiClient {
  async getMyOrganizations(): Promise<UserOrganizationDto[]> {
    return this.get('/api/organizations/my');
  }

  async joinOrganization(request: JoinOrgRequest): Promise<UserOrganizationDto> {
    return this.post('/api/organizations/join', request);
  }

  async verifyUserInOrganization(orgSlug: string): Promise<boolean> {
    const orgs = await this.getMyOrganizations();
    return orgs.some(org => org.slug === orgSlug);
  }
}
```

3. Page Objects (UI Abstractions)
Encapsulate page structure and UI interactions:
```typescript
export class OrganizationSelectionPage {
  constructor(private page: Page) {}

  async selectOrganization(orgName: string): Promise<void> {
    await this.page.getByRole('button', { name: orgName }).click();
  }

  async getAvailableOrganizations(): Promise<string[]> {
    // allTextContents() resolves to string[], avoiding the null entries
    // that mapping over textContent() would produce
    return this.page.getByRole('listitem').allTextContents();
  }

  async verifyLoadingState(): Promise<void> {
    await expect(this.page.getByRole('progressbar')).toBeVisible();
  }
}
```

4. Simulators (External System Mocking)
Simulate external dependencies and complex processes:
```typescript
export class GitHubApiSimulator {
  async simulateUserWithOrganizations(
    userId: string,
    orgs: GitHubOrg[]
  ): Promise<void> {
    // Mock GitHub API responses
  }

  async simulateApiFailure(
    endpoint: string,
    errorType: string
  ): Promise<void> {
    // Simulate various failure scenarios
  }
}
```

5. Data Builders (Test Data Creation)
Fluent interfaces for creating test data:
```typescript
export class UserBuilder {
  private user: Partial<User> = { /* defaults */ };

  static aUser(): UserBuilder {
    return new UserBuilder();
  }

  withGitHubId(id: string): this {
    this.user.githubId = id;
    return this;
  }

  withOrganizations(orgs: string[]): this {
    this.user.organizations = orgs;
    return this;
  }

  build(): User {
    return this.user as User;
  }
}

// Usage
const testUser = UserBuilder.aUser()
  .withGitHubId('12345')
  .withOrganizations(['acme-corp', 'dev-team'])
  .build();
```

Test Categories
E2E Tests (End-to-End Browser Tests)
Full user journey testing with browser automation.
Structure: Follow the 3-phase pattern:
- Arrange: Use API to set up test data (fast and reliable)
- Act: Use UI to perform user actions
- Assert: Use UI to verify outcomes
```typescript
test('User can update organization settings', async ({ page }) => {
  // Arrange - Use API for setup
  const org = await orgApi.createOrganization({ name: 'Test Org' });
  await userApi.addUserToOrganization(testUser, org);

  // Act - Use UI for user actions
  await page.goto('/organizations/settings');
  await page.getByLabel('Display Name').fill('New Name');
  await page.getByRole('button', { name: 'Save' }).click();

  // Assert - Use UI for verification
  await expect(page.getByText('Settings saved')).toBeVisible();
  await expect(page.getByLabel('Display Name')).toHaveValue('New Name');
});
```

When to use:
- Complete user workflows
- Cross-component integration
- Visual validation
- Browser compatibility testing
API Tests (Direct HTTP Testing)
Direct HTTP endpoint testing without browser overhead.
```typescript
test('API returns correct organization list', async ({ request }) => {
  const response = await request.get('/api/organizations');

  expect(response.status()).toBe(200);
  const orgs = await response.json();
  expect(orgs).toHaveLength(3);
  expect(orgs[0]).toHaveProperty('name');
});
```

When to use:
- Request/response validation
- Authentication flows
- Error scenarios
- Performance baselines
Benefits: Fast feedback, precise error isolation, no browser complexity
Choosing Between E2E and API Tests
- E2E tests are expensive - use sparingly for critical user journeys
- API tests are fast - use liberally for comprehensive coverage
- Never use API to verify UI changes in E2E tests
- Prefer API for test setup in E2E tests (faster and more reliable)
Environment Configuration
Multiple Environment Support
Tests should support multiple environments with automatic service detection:
| Environment | Frontend URL | API URL | Database | Use Case |
|---|---|---|---|---|
| local | localhost:5173 | localhost:3000 | localhost:5432 | Development |
| ci | ui:5173 | api:3000 | postgres:5432 | GitHub Actions |
| staging | staging.app | api.staging | RDS | Pre-production |
Environment Variables
```bash
# .env.test
TEST_ENV=local
BASE_URL=http://localhost:5173
API_BASE_URL=http://localhost:3000
TEST_DATABASE_URL=postgresql://user:pass@localhost:5432/test_db
```

Environment Detection
```typescript
// infrastructure/config/environments.ts
export const getEnvironmentConfig = () => {
  const env = process.env.TEST_ENV || 'local';
  return {
    local: {
      baseUrl: 'http://localhost:5173',
      apiUrl: 'http://localhost:3000',
      database: 'postgresql://localhost:5432/test'
    },
    ci: {
      baseUrl: 'http://ui:5173',
      apiUrl: 'http://api:3000',
      database: 'postgresql://postgres:5432/test'
    }
  }[env];
};
```

Service Health Checks
```typescript
// infrastructure/config/global-setup.ts
export default async function globalSetup() {
  const config = getEnvironmentConfig();

  // Wait for API to be ready
  await waitForService(config.apiUrl + '/health', {
    timeout: 30000,
    retryInterval: 1000
  });

  // Verify frontend is accessible
  await verifyFrontendAccessible(config.baseUrl);
}
```

Best Practices
DO ✅
- Write tests like specifications - readable by business stakeholders
- Use the driver layer - workflows, API clients, page objects
- Focus on acceptance criteria - what the system should do
- Keep tests independent - any order, any environment
- Use meaningful test names - describe the business value
- Handle async operations properly - wait for elements, not fixed delays
- Use data builders - create test data cleanly and consistently
- Test unhappy paths - error scenarios are crucial
- Clean up after tests - prevent test pollution
- Version control test data - fixtures should be in git
DON'T ❌
- Test implementation details - use business concepts instead
- Hard-code test data - use builders and fixtures
- Skip error scenarios - test unhappy paths too
- Make tests depend on each other - ensure isolation
- Use CSS selectors directly in tests - abstract through page objects
- Ignore flaky tests - fix or remove unreliable tests
- Use fixed timeouts - use proper waits and expectations
- Commit sensitive data - use environment variables
- Mix concerns - separate test intent from implementation
- Over-test through UI - use API tests where appropriate
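The "no fixed timeouts" rule generalizes beyond Playwright's web-first assertions: poll a condition against a deadline instead of sleeping a fixed amount. A sketch — the helper name and defaults are assumptions; inside Playwright tests themselves, prefer built-in auto-retrying assertions such as `await expect(locator).toBeVisible()`:

```typescript
// Generic condition poller: retries the predicate until it passes
// or the deadline expires, instead of a one-shot fixed sleep.
export async function expectEventually(
  predicate: () => boolean | Promise<boolean>,
  { timeout = 5000, interval = 100 } = {}
): Promise<void> {
  const deadline = Date.now() + timeout;
  while (true) {
    if (await predicate()) return; // condition met: stop immediately
    if (Date.now() >= deadline) {
      throw new Error(`Condition not met within ${timeout}ms`);
    }
    await new Promise((resolve) => setTimeout(resolve, interval));
  }
}
```

Unlike a fixed `waitForTimeout(5000)`, this returns as soon as the condition holds and only pays the full timeout on genuine failure.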
Error Handling Patterns
```typescript
test('User receives helpful message when service is unavailable', async () => {
  // Given: External service is experiencing issues
  await githubSimulator.simulateApiFailure('/user/orgs', 'service_unavailable');

  // When: User attempts action requiring external service
  await orgWorkflows.navigateToOrganizationSelection();

  // Then: Helpful error message is shown
  await orgWorkflows.verifyServiceUnavailableMessage();
  await orgWorkflows.verifyRetryOptionAvailable();
});
```

Debugging Strategies
Local Development:
```bash
npm run test:headed   # Run with visible browser
npm run test:debug    # Enable debugging with breakpoints
npm run test:ui       # Interactive mode with step-through
```

CI/CD Debugging:
- Screenshots captured on failure
- Video recordings for failed tests
- Trace files for detailed execution analysis
- Comprehensive error context in reports
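These failure artifacts are opt-in; a minimal `playwright.config.ts` fragment that captures them only when something goes wrong (the specific values shown are one sensible choice, not the only one):

```typescript
// playwright.config.ts (fragment): capture diagnostics only on failure
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    screenshot: 'only-on-failure', // PNG attached when a test fails
    video: 'retain-on-failure',    // recordings kept only for failing tests
    trace: 'on-first-retry'        // full trace recorded when a test is retried
  }
});
```

Keeping artifacts failure-only avoids the disk and upload cost of recording every passing run in CI.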
Test Data Management
Builder Pattern for Test Data
Use the builder pattern to create test data cleanly:
```typescript
// driver/builders/UserBuilder.ts
import { faker } from '@faker-js/faker';

export class UserBuilder {
  private user: Partial<User> = {
    // faker.string.uuid() supersedes the deprecated faker.datatype.uuid()
    id: faker.string.uuid(),
    email: faker.internet.email(),
    name: faker.person.fullName()
  };

  static aUser(): UserBuilder {
    return new UserBuilder();
  }

  withEmail(email: string): this {
    this.user.email = email;
    return this;
  }

  withOrganizations(orgs: Organization[]): this {
    this.user.organizations = orgs;
    return this;
  }

  build(): User {
    return this.user as User;
  }
}
```

Fixture Management
Store static test data as fixtures:
```jsonc
// infrastructure/fixtures/test-users.json
{
  "adminUser": {
    "email": "admin@test.com",
    "role": "admin",
    "permissions": ["*"]
  },
  "regularUser": {
    "email": "user@test.com",
    "role": "member",
    "permissions": ["read", "write"]
  }
}
```

Database Management
```typescript
// infrastructure/database/cleanup.ts
export class TestDatabase {
  async reset(): Promise<void> {
    await this.truncateAllTables();
    await this.seedBaseData();
  }

  async createTransaction(): Promise<Transaction> {
    // Create transaction for test isolation
  }

  async rollback(transaction: Transaction): Promise<void> {
    // Rollback changes after test
  }
}
```

Test Isolation Strategies
```typescript
test.describe('Organization Management', () => {
  let transaction: Transaction;

  test.beforeEach(async () => {
    // Start transaction for isolation
    transaction = await testDb.createTransaction();
  });

  test.afterEach(async () => {
    // Rollback to clean state
    await testDb.rollback(transaction);
  });

  test('User can create organization', async () => {
    // Test runs in isolated transaction
  });
});
```

Continuous Testing
CI/CD Integration
GitHub Actions Configuration
```yaml
# .github/workflows/acceptance-tests.yml
name: Acceptance Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: test
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Install Playwright browsers
        run: npx playwright install --with-deps
      - name: Run acceptance tests
        run: npm test
        env:
          TEST_ENV: ci
      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: |
            reports/
            test-results/
```

Reporting Formats
Configure multiple report formats for different purposes:
```typescript
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [
    ['html', { outputFolder: 'reports/html' }],
    ['junit', { outputFile: 'reports/junit.xml' }],
    ['json', { outputFile: 'reports/results.json' }],
    ['github'], // Native GitHub Actions integration
    ['line']    // Console output
  ]
});
```

Test Result Analysis
```bash
# View HTML report locally
npm run test:report

# Parse JUnit for CI integration
npx junit-viewer --results=reports/junit.xml

# Analyze JSON results programmatically
node scripts/analyze-test-results.js reports/results.json
```

Performance Monitoring
```typescript
// Track test execution times
test('Performance: Page load time', async ({ page }) => {
  const startTime = Date.now();
  await page.goto('/dashboard');
  const loadTime = Date.now() - startTime;

  // Log to performance tracking system
  await metricsClient.recordMetric('dashboard.load_time', loadTime);

  // Assert performance requirement
  expect(loadTime).toBeLessThan(3000);
});
```

Interactive Testing with MCP
Model Context Protocol Integration
The Model Context Protocol (MCP) enables interactive testing with AI assistants like Claude Code and GitHub Copilot.
Setup for Claude Code
Claude Code has built-in Playwright MCP support:
```bash
# Verify MCP is configured
claude mcp list

# Should show: playwright: npx @playwright/mcp
```

Setup for VS Code with Copilot
Install Playwright MCP:
```bash
npm install -g @playwright/mcp
```

Configure VS Code (settings.json):

```json
{
  "github.copilot.chat.experimental.mcp.servers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp"],
      "cwd": "${workspaceFolder}/acceptance-tests"
    }
  }
}
```
Interactive Testing Capabilities
Exploratory Testing
"Use playwright mcp to open the application and explore the user registration flow"

Bug Reproduction

"Navigate to the settings page and reproduce the issue where form validation doesn't work"

Cross-Browser Testing

"Test the checkout flow in Firefox to verify browser compatibility"

Data-Driven Testing

"Create test users with the UserBuilder and verify they can access their dashboards"

MCP Usage Patterns
Starting a session:
"I want to test the application using playwright mcp. Please open a browser and navigate to the app."

Using driver abstractions:

"Use the AuthenticationWorkflows to login, then use OrganizationWorkflows to test organization switching"

Combining API and UI:

"First use the HealthApiClient to check API status, then verify the UI reflects the correct state"

Benefits of MCP Integration
- Natural language testing - Describe tests conversationally
- Rapid prototyping - Quick exploration of new features
- Interactive debugging - Real-time investigation with AI analysis
- Test generation - Generate test scenarios through exploration
- Cross-layer testing - Seamlessly combine different testing approaches
Authentication in Interactive Mode
Since MCP uses a visible browser window:
- Have the assistant navigate to login page
- Login manually with your credentials
- Cookies persist for the session
- Continue testing with authenticated state
This combines the flexibility of manual authentication with the power of automated testing.
Future Enhancements
The architecture supports easy extension for:
- Visual regression testing - Screenshot comparison
- Accessibility testing - Automated a11y checks
- Performance testing - Load and stress testing
- Mobile testing - Device emulation
- Contract testing - API contract validation
- Security testing - Automated security scans
- Chaos engineering - Failure injection testing
Summary
This comprehensive acceptance testing approach provides:
- Clean architecture separating concerns
- Business-focused tests readable by stakeholders
- Maintainable abstractions through the driver layer
- Flexible test categories for different needs
- Robust CI/CD integration for continuous quality
- Interactive testing capabilities with AI assistants
By following these guidelines, teams can build acceptance test suites that are valuable assets, documenting system behavior while providing confidence in deployments.
This document represents our standard approach to acceptance testing. Please contribute improvements and examples as we refine these practices.