I have been using Claude Code alongside Playwright for the past several months, and it has fundamentally changed how fast I can build and maintain test suites. In this article I will share exactly how I use AI assistance in my QA workflow — the prompts that work, the patterns I have settled on, and the places where human judgment still wins over AI.
Why AI + Playwright Is a Powerful Combination
Playwright is powerful but verbose. Writing a full Page Object Model class with all locators, helper methods, and error handling takes time. Claude Code can generate the scaffolding in seconds, letting you focus on what actually matters: defining the right tests and reviewing the logic for correctness.
Setting Up Claude Code for Your Playwright Project
First, create a CLAUDE.md file in your project root. This is the instruction manual Claude reads at the start of every session:
```markdown
# Project: My Playwright Suite

## Stack
- Playwright with TypeScript
- Page Object Model pattern
- Custom fixtures in fixtures/base.fixture.ts
- Test data in utils/test-data.ts

## Conventions
- All locators use data-testid attributes
- Page classes extend BasePage
- Tests import from fixtures, not directly from @playwright/test
- Describe blocks group related scenarios
- Test names are written in plain English sentences

## Folder Structure
- tests/ui/   → E2E UI tests
- tests/api/  → API tests
- pages/      → Page Object Models
- fixtures/   → Shared fixtures
- utils/      → Helpers and test data
```
High-Value Prompts for QA Engineers
Generating a Full Page Object from a URL
Open Claude Code in your project directory and run:
```
Look at the checkout page at https://www.saucedemo.com/checkout-step-one.html
and generate a complete CheckoutPage.ts Page Object Model with:
- All form field locators using data-test attributes
- A fillShippingInfo(firstName, lastName, postalCode) method
- A clickContinue() method
- A getValidationError() method
- TypeScript types throughout
```
Writing Tests from Test Cases
```
Using the CheckoutPage and CartPage POMs, write tests for:

TC21 - Complete checkout with valid data → expect order confirmation message
TC22 - Submit with empty first name → expect validation error
TC23 - Submit with empty postal code → expect validation error
TC25 - Verify order summary shows correct item, subtotal, tax, total

Use the test fixture from fixtures/base.fixture.ts.
Each test should be independent with its own setup.
```
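A generated test for TC22 usually looks something like the sketch below. It uses the stock @playwright/test imports so it is self-contained (in the real suite it would import from fixtures/base.fixture.ts), and the exact error text is an assumption to verify against the app. Note that saucedemo requires logging in first, so the setup step here is a simplification.

```typescript
// In the real suite: import { test, expect } from '../fixtures/base.fixture';
import { test, expect } from '@playwright/test';

test.describe('Checkout validation', () => {
  test('TC22 - submitting with an empty first name shows a validation error', async ({ page }) => {
    // Independent setup: each test navigates and seeds its own state.
    // (On saucedemo this would be preceded by a login step, typically
    // handled in a fixture or beforeEach.)
    await page.goto('https://www.saucedemo.com/checkout-step-one.html');

    // Fill every field except first name.
    await page.locator('[data-test="lastName"]').fill('Doe');
    await page.locator('[data-test="postalCode"]').fill('12345');
    await page.locator('[data-test="continue"]').click();

    // Error text is an assumption — confirm against the real app.
    await expect(page.locator('[data-test="error"]')).toContainText('First Name is required');
  });
});
```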
Refactoring Flaky Tests
When a test fails intermittently, paste it into Claude Code and ask:
```
This test is flaky in CI. Review it and identify:
1. Any hard-coded waits that should be replaced with expect conditions
2. Race conditions in the async flow
3. Any selector that might be unstable
4. Missing error state handling

[paste test code here]
```
Where Human Judgment Still Wins
AI generates code fast. But there are areas where I always review and often rewrite what Claude produces:
- Test data strategy: AI does not know your data model deeply enough to choose the right test scenarios. You must define the boundary cases and edge cases yourself.
- Assertions: AI tends to write shallow assertions. Always review whether the assertions are actually verifying meaningful outcomes, not just that the page loaded.
- Test independence: AI sometimes generates tests that share state. Always verify each test can run in isolation without depending on previous tests.
- Business logic coverage: AI covers the obvious paths. Your domain knowledge tells you which business rules are most critical to verify.
Key Takeaways
- A well-written CLAUDE.md file is essential — it tells Claude your conventions so generated code fits your project immediately.
- Use Claude to scaffold Page Objects, write test boilerplate, and refactor flaky tests.
- Always review AI-generated assertions — they tend to be shallow and need strengthening.
- Human judgment is irreplaceable for test strategy, edge case identification, and business logic coverage.