I have been using Claude Code alongside Playwright for the past several months, and it has fundamentally changed how fast I can build and maintain test suites. In this article I will share exactly how I use AI assistance in my QA workflow — the prompts that work, the patterns I have settled on, and the places where human judgment still wins over AI.

Why AI + Playwright Is a Powerful Combination

Playwright is powerful but verbose. Writing a full Page Object Model class with all locators, helper methods, and error handling takes time. Claude Code can generate the scaffolding in seconds, letting you focus on what actually matters: defining the right tests and reviewing the logic for correctness.

The key insight: AI does not replace the QA engineer's judgment about what to test. It accelerates the mechanical work of writing, scaffolding, and refactoring test code. The thinking is still yours.

Setting Up Claude Code for Your Playwright Project

First, create a CLAUDE.md file in your project root. This is the instruction manual Claude reads at the start of every session:

CLAUDE.md

```markdown
# Project: My Playwright Suite

## Stack
- Playwright with TypeScript
- Page Object Model pattern
- Custom fixtures in fixtures/base.fixture.ts
- Test data in utils/test-data.ts

## Conventions
- All locators use data-testid attributes
- Page classes extend BasePage
- Tests import from fixtures, not directly from @playwright/test
- Describe blocks group related scenarios
- Test names are written in plain English sentences

## Folder Structure
- tests/ui/     → E2E UI tests
- tests/api/    → API tests
- pages/        → Page Object Models
- fixtures/     → Shared fixtures
- utils/        → Helpers and test data
```
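The conventions above mention a BasePage that every page class extends. The exact shape of that class varies per project; here is a minimal sketch of what it might contain (the Page interface below is a structural stand-in for Playwright's Page type so the snippet is self-contained, and the `open`/`byTestId` helpers are illustrative assumptions, not a fixed API):

```typescript
// Hypothetical minimal BasePage matching the CLAUDE.md conventions.
// `Page` is a structural stand-in for the type from '@playwright/test'.
interface Page {
  goto(url: string): Promise<unknown>;
  locator(selector: string): unknown;
}

export abstract class BasePage {
  constructor(protected readonly page: Page) {}

  // Each page class declares its own path relative to the base URL.
  protected abstract readonly path: string;

  // Shared navigation helper so tests never hard-code URLs.
  async open(baseUrl: string): Promise<void> {
    await this.page.goto(`${baseUrl}${this.path}`);
  }

  // Enforces the "all locators use data-testid attributes" convention.
  protected byTestId(id: string) {
    return this.page.locator(`[data-testid="${id}"]`);
  }
}
```

With this in place, a new page class only needs to declare its path and its locators, which is exactly the boilerplate Claude is good at generating.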

High-Value Prompts for QA Engineers

Generating a Full Page Object from a URL

Open Claude Code in your project directory and run:

```text
Look at the checkout page at https://www.saucedemo.com/checkout-step-one.html
and generate a complete CheckoutPage.ts Page Object Model with:
- All form field locators using data-test attributes
- A fillShippingInfo(firstName, lastName, postalCode) method
- A clickContinue() method
- A getValidationError() method
- TypeScript types throughout
```
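The output usually looks something like the sketch below. To keep it self-contained here, Locator and Page are minimal structural stand-ins for the '@playwright/test' types, and the selectors shown are what I'd expect for this page; the ones Claude actually picks depend on the live markup, so always verify them:

```typescript
// Sketch of a generated pages/CheckoutPage.ts. In a real suite, Page and
// Locator come from '@playwright/test'; the local interfaces below are
// stand-ins so this snippet runs anywhere.
interface Locator {
  fill(value: string): Promise<void>;
  click(): Promise<void>;
  textContent(): Promise<string | null>;
}
interface Page {
  locator(selector: string): Locator;
}

export class CheckoutPage {
  private readonly firstName: Locator;
  private readonly lastName: Locator;
  private readonly postalCode: Locator;
  private readonly continueButton: Locator;
  private readonly errorMessage: Locator;

  constructor(page: Page) {
    this.firstName = page.locator('[data-test="firstName"]');
    this.lastName = page.locator('[data-test="lastName"]');
    this.postalCode = page.locator('[data-test="postalCode"]');
    this.continueButton = page.locator('[data-test="continue"]');
    this.errorMessage = page.locator('[data-test="error"]');
  }

  async fillShippingInfo(firstName: string, lastName: string, postalCode: string): Promise<void> {
    await this.firstName.fill(firstName);
    await this.lastName.fill(lastName);
    await this.postalCode.fill(postalCode);
  }

  async clickContinue(): Promise<void> {
    await this.continueButton.click();
  }

  async getValidationError(): Promise<string | null> {
    return this.errorMessage.textContent();
  }
}
```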

Writing Tests from Test Cases

```text
Using the CheckoutPage and CartPage POMs, write tests for:
TC21 - Complete checkout with valid data → expect order confirmation message
TC22 - Submit with empty first name → expect validation error
TC23 - Submit with empty postal code → expect validation error
TC25 - Verify order summary shows correct item, subtotal, tax, total

Use the test fixture from fixtures/base.fixture.ts.
Each test should be independent with its own setup.
```
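For illustration, here is the shape of the tests Claude typically produces for TC22 and TC23, reduced to a self-contained sketch: a fake checkout page and a bare runner stand in for the real fixture and browser. In the actual suite, `test` and `expect` come from fixtures/base.fixture.ts, and the page object drives a live browser. The point to check when reviewing is the independence: each test builds its own page state rather than depending on a previous test.

```typescript
// Self-contained sketch. FakeCheckoutPage mimics the validation behaviour the
// real CheckoutPage exposes via getValidationError(); the exact error strings
// are assumptions about the app under test.
class FakeCheckoutPage {
  private error: string | null = null;

  async fillShippingInfo(first: string, _last: string, zip: string): Promise<void> {
    if (first === '') this.error = 'Error: First Name is required';
    else if (zip === '') this.error = 'Error: Postal Code is required';
    else this.error = null;
  }

  async getValidationError(): Promise<string | null> {
    return this.error;
  }
}

// Minimal runner: every test gets a fresh page object, so no state leaks
// between scenarios, which is the independence the prompt asks for.
export async function runTest(
  name: string,
  body: (page: FakeCheckoutPage) => Promise<void>,
): Promise<string> {
  await body(new FakeCheckoutPage());
  return `ok - ${name}`;
}

export const tc22 = runTest('TC22: empty first name shows validation error', async (page) => {
  await page.fillShippingInfo('', 'Doe', '12345');
  const error = await page.getValidationError();
  if (error !== 'Error: First Name is required') throw new Error(`unexpected: ${error}`);
});

export const tc23 = runTest('TC23: empty postal code shows validation error', async (page) => {
  await page.fillShippingInfo('Jane', 'Doe', '');
  const error = await page.getValidationError();
  if (error !== 'Error: Postal Code is required') throw new Error(`unexpected: ${error}`);
});
```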

Refactoring Flaky Tests

When a test fails intermittently, paste it into Claude Code and ask:

```text
This test is flaky in CI. Review it and identify:
1. Any hard-coded waits that should be replaced with expect conditions
2. Race conditions in the async flow
3. Any selector that might be unstable
4. Missing error state handling

[paste test code here]
```

Where Human Judgment Still Wins

AI generates code fast, but I always review, and often rewrite, what Claude produces in a few areas: the assertions, the test data, and the edge-case coverage.

My workflow in practice: I use Claude Code to generate 80% of the scaffolding and boilerplate, then review and refine the assertions, test data, and edge case coverage myself. This combination is 2–3x faster than writing everything from scratch.

Key Takeaways