35 ChatGPT Prompts for QA Engineers: Test Cases, Bug Reports, and Automation Scripts That Actually Work
Quality assurance is the job where you're never done. There's always another edge case to find, another regression to catch, another sprint's worth of test cases to write before the team ships.
ChatGPT doesn't replace QA judgment. It can't tell you whether a bug is critical in your business context. But it can write the first draft of a 50-case test plan in 3 minutes, generate edge cases you missed, and help you write bug reports clear enough that developers actually fix the right thing.
These 35 prompts are built for working QA engineers — manual testers, automation engineers, and QA leads. Use the fill-in-the-blank [BRACKETS] to customize for your actual system.
Section 1: Test Case Generation
Writing test cases from scratch for every feature is the fastest way to fall behind on coverage. These prompts scaffold the work.
Generate [NUMBER] test cases for [FEATURE NAME] in [SYSTEM/APPLICATION].
Cover: happy path, edge cases, negative tests, and boundary conditions.
Format each test case with: ID, description, preconditions, steps, expected result.
Feature description: "[DESCRIBE WHAT THE FEATURE DOES]"
User roles involved: [LIST ROLES]
Inputs involved: [LIST KEY INPUTS/FIELDS]
I'm testing a [FORM/API ENDPOINT/WORKFLOW] that accepts [INPUT TYPE —
e.g., user registration, file upload, payment amount].
Generate test cases covering: valid inputs, invalid inputs, empty/null values,
boundary values, and SQL injection/XSS attempts.
Field specifications: [PASTE THE FIELD RULES/VALIDATION REQUIREMENTS]
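To sanity-check what ChatGPT hands back, it helps to have the boundary and negative cases in runnable form. Here's a minimal sketch assuming a hypothetical username field (3–20 characters, alphanumeric plus underscore) — the validator and its rules are placeholders for whatever field specifications you paste into the prompt.

```python
import re

# Hypothetical validator -- a stand-in for the field rules you'd paste
# into the prompt. Assumes: 3-20 chars, alphanumeric plus underscore.
def validate_username(value):
    if not isinstance(value, str):
        return False
    return re.fullmatch(r"[A-Za-z0-9_]{3,20}", value) is not None

# The case classes the prompt asks ChatGPT to enumerate.
cases = [
    ("abc", True),           # minimum length boundary
    ("ab", False),           # just below minimum
    ("a" * 20, True),        # maximum length boundary
    ("a" * 21, False),       # just above maximum
    ("", False),             # empty value
    (None, False),           # null value
    ("' OR '1'='1", False),  # SQL injection attempt
    ("<script>", False),     # XSS attempt
]

for value, expected in cases:
    assert validate_username(value) == expected, value
```

The table format also makes it obvious when the AI's generated cases skip a class entirely (null handling is the one it forgets most often).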
Create a risk-based test case priority matrix for [FEATURE/MODULE].
Identify the highest-risk scenarios and explain why they deserve priority
coverage based on: business impact, technical complexity, and historical defect areas.
Feature context: [DESCRIPTION]
Known risks: [LIST ANY YOU'RE AWARE OF]
Previous defects in this area: [MENTION IF ANY]
Write test cases for the integration between [SYSTEM A] and [SYSTEM B].
Cover: successful data flow, timeout handling, authentication failures,
malformed payload handling, and partial failure scenarios.
Integration type: [REST API/MESSAGE QUEUE/DATABASE SYNC/etc.]
Data being exchanged: [DESCRIPTION]
SLA requirements: [RESPONSE TIME/DATA ACCURACY REQUIREMENTS]
Generate exploratory testing charters for [FEATURE/MODULE].
Each charter should define: mission (what to explore), areas (what to focus on),
time box, and exit criteria.
Feature: [NAME]
Risk areas I've identified: [LIST]
Section 2: Bug Reports and Defect Documentation
A well-written bug report is fixed faster. A poorly written one gets marked "cannot reproduce."
Write a professional bug report for this defect. Use this format:
Summary, Environment, Steps to Reproduce, Expected Result, Actual Result,
Severity, Priority, Attachments notes.
What I observed: [DESCRIBE THE BUG IN YOUR OWN WORDS]
Steps I took: [LIST WHAT YOU DID]
Environment: [BROWSER/OS/VERSION/DEVICE]
How often it reproduces: [ALWAYS/SOMETIMES/ONCE]
Rewrite this bug report to be clearer and more reproducible for the
development team. Remove ambiguity, make steps atomic, and sharpen
the expected vs actual distinction:
Original report: [PASTE YOUR DRAFT]
Help me classify this defect. Based on the description, suggest:
severity level (Critical/High/Medium/Low), defect type (functional/UI/performance/security/data),
root cause category, and which component likely owns it.
Defect description: [DESCRIBE WHAT'S BROKEN]
System area: [MODULE/COMPONENT]
User impact: [WHO IS AFFECTED AND HOW]
Write a regression test scenario to validate that the fix for this bug
doesn't break related functionality:
Bug that was fixed: [DESCRIPTION]
Fix applied: [WHAT WAS CHANGED]
Related features at risk: [LIST]
Create a defect trend summary report for [SPRINT/RELEASE] covering:
total defects by severity, top defect categories, which components had
the most defects, and 3 quality observations worth raising in retrospective.
Raw defect data: [PASTE YOUR NUMBERS OR SUMMARY]
Sprint/release: [NAME OR NUMBER]
Section 3: Test Planning and Strategy
QA without a strategy is just random clicking. These prompts help you plan systematically.
Create a test strategy document for [PROJECT/RELEASE NAME]. Cover:
test scope, out-of-scope items, test types (functional, integration, performance,
security, UAT), entry/exit criteria, risk assumptions, and environments needed.
Project description: [BRIEF DESCRIPTION]
Team size: [QA TEAM SIZE]
Timeline: [SPRINT LENGTH/RELEASE DATE]
Tech stack: [FRONTEND/BACKEND/DATABASE]
Write a test plan for [FEATURE OR MODULE]. Include: test objectives,
resources required, test schedule, test cases summary, defect management approach,
and sign-off criteria.
Feature summary: [DESCRIPTION]
Acceptance criteria from product: [PASTE IF AVAILABLE]
Dependencies: [WHAT MUST BE READY FIRST]
Design a smoke test suite for [APPLICATION/MODULE] that can be run
in under 15 minutes after each deployment. Include the 10 most critical
scenarios that indicate the build is stable enough for full testing.
Application type: [WEB APP/API/MOBILE/DESKTOP]
Core flows: [LIST THE 5 MOST IMPORTANT USER WORKFLOWS]
Create a UAT test script for [FEATURE] that a non-technical business user
can execute. Use plain language, no QA jargon, include screenshot hints
where useful, and provide clear pass/fail criteria for each step.
Feature: [NAME AND BRIEF DESCRIPTION]
Business user profile: [WHO WILL RUN THE UAT]
Acceptance criteria: [PASTE FROM USER STORY]
Identify the 10 highest-risk areas for regression testing in [APPLICATION]
given this set of changes in the upcoming release:
Changes: [LIST THE FEATURES/FIXES IN THIS RELEASE]
Application architecture: [MONOLITH/MICROSERVICES/etc.]
Areas historically prone to regression: [LIST IF KNOWN]
Section 4: API Testing
API testing is where QA catches bugs before they become UI problems. These prompts help cover the API layer thoroughly.
Generate Postman-style test cases (request + assertions) for this API endpoint:
Endpoint: [METHOD + PATH — e.g., POST /api/users/register]
Request body schema: [PASTE JSON SCHEMA OR EXAMPLE]
Authentication: [NONE/API KEY/JWT/OAUTH]
Expected success response: [STATUS CODE + RESPONSE BODY EXAMPLE]
Cover: happy path, missing required fields, invalid data types,
unauthorized access, duplicate records if applicable.
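Whatever framework the prompt targets, the assertions it generates should reduce to checks like these. A framework-agnostic sketch, applied to an already-parsed response — the status codes and field names are assumptions about a hypothetical registration endpoint, not a real API:

```python
# Framework-agnostic sketch of the assertions the prompt asks for.
# Status code and field names are assumptions for illustration.
def check_register_response(status_code, body):
    errors = []
    if status_code != 201:
        errors.append(f"expected 201, got {status_code}")
    for field in ("id", "email", "created_at"):
        if field not in body:
            errors.append(f"missing field: {field}")
    if "password" in body:
        errors.append("password must not be echoed back")
    return errors

# Happy path: no errors expected.
ok = check_register_response(
    201, {"id": 7, "email": "a@b.com", "created_at": "2024-01-01"}
)
# Negative path: wrong status, missing fields, leaked secret.
bad = check_register_response(400, {"password": "secret"})
```

Collecting errors into a list (rather than failing on the first assert) gives you the full picture of a broken response in one run.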
Write test assertions for this API response in [TEST FRAMEWORK —
Postman/Jest/pytest/RestAssured]. Validate: status code, response time,
response schema, specific field values, and error message format on failure.
Endpoint: [NAME/PATH]
Expected response: [PASTE EXAMPLE JSON]
SLA: [MAX RESPONSE TIME IN MS]
Design a test scenario for [API WORKFLOW] that chains multiple endpoints.
The scenario should: authenticate, create a resource, read it back,
update it, verify the update, and delete it. Flag what to assert at each step.
API endpoints involved: [LIST THEM]
Resource being managed: [DESCRIPTION]
Authentication method: [DESCRIBE]
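The chained scenario the prompt describes has a recognizable shape regardless of framework. Here's a runnable sketch with an in-memory stub standing in for the real API, so the chaining and per-step assertions are visible without a server — the client's method names are illustrative:

```python
# In-memory stub standing in for the real API, so the chaining pattern
# is runnable without a server. Method names are illustrative.
class FakeApiClient:
    def __init__(self):
        self._store, self._next_id = {}, 1

    def create(self, payload):
        rid = self._next_id
        self._next_id += 1
        self._store[rid] = dict(payload)
        return rid

    def read(self, rid):
        return self._store.get(rid)

    def update(self, rid, changes):
        self._store[rid].update(changes)

    def delete(self, rid):
        del self._store[rid]

def run_crud_scenario(client):
    # Create -> assert the resource round-trips with the fields we sent.
    rid = client.create({"name": "widget", "qty": 1})
    assert client.read(rid)["name"] == "widget"
    # Update -> assert only the changed field moved.
    client.update(rid, {"qty": 5})
    after = client.read(rid)
    assert after["qty"] == 5 and after["name"] == "widget"
    # Delete -> assert the read-back now fails.
    client.delete(rid)
    assert client.read(rid) is None
    return rid

run_crud_scenario(FakeApiClient())
```

The key habit the prompt enforces: assert something at every step, not just at the end, so a failure pinpoints which link in the chain broke.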
What contract testing should I implement between [SERVICE A] and [SERVICE B]?
Describe the consumer-driven contract approach, what interactions to test,
and how to set up the test structure.
Services: [BRIEF DESCRIPTION OF EACH]
Data exchanged: [DESCRIPTION]
Team ownership: [WHO OWNS EACH SERVICE]
Create an API load test script outline in [K6/JMeter/Locust] for [ENDPOINT].
Include: ramp-up pattern, target concurrent users, think time, assertions
to check under load, and pass/fail SLA thresholds.
Endpoint: [NAME]
Expected load: [USERS CONCURRENT/PEAK]
SLA: [RESPONSE TIME TARGET AT LOAD]
Section 5: Automation Framework and Scripting
Writing automation from scratch is slow. Use AI to scaffold the boilerplate faster.
Write a [Playwright/Cypress/Selenium in Python/Java] test script for this
user flow. The test should: log in, navigate to [PAGE], perform [ACTION],
and assert [EXPECTED OUTCOME].
Application URL: [URL]
Login credentials approach: [ENV VARS/CONFIG FILE]
Element selectors (if known): [IDs or data-testid values]
Flow steps: [DESCRIBE STEP BY STEP]
Create a reusable page object model (POM) class for [PAGE NAME] in
[LANGUAGE/FRAMEWORK]. Include: element locators, action methods
(click, fill, navigate), and assertion helpers.
Page description: [WHAT THIS PAGE DOES]
Key elements: [LIST INPUT FIELDS, BUTTONS, MODALS ON THIS PAGE]
Framework: [PLAYWRIGHT/SELENIUM/CYPRESS]
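For reference, the shape you'd expect back looks something like this minimal sketch: a Playwright-style page object where the driver is injected, so the class itself can be unit-tested with a recording fake. The selectors and URL are hypothetical placeholders, not from any real app.

```python
# Minimal page-object sketch. `page` is any driver exposing fill/click
# (Playwright-style); the selectors are hypothetical placeholders.
class LoginPage:
    URL = "/login"
    EMAIL = "[data-testid=email]"
    PASSWORD = "[data-testid=password]"
    SUBMIT = "[data-testid=submit]"

    def __init__(self, page):
        self.page = page

    def login(self, email, password):
        self.page.fill(self.EMAIL, email)
        self.page.fill(self.PASSWORD, password)
        self.page.click(self.SUBMIT)

# A recording fake lets the page object be tested without a browser.
class FakePage:
    def __init__(self):
        self.calls = []

    def fill(self, selector, value):
        self.calls.append(("fill", selector, value))

    def click(self, selector):
        self.calls.append(("click", selector))

fake = FakePage()
LoginPage(fake).login("qa@example.com", "hunter2")
```

Injecting the driver instead of constructing it inside the class is what keeps the POM reusable across real browser runs and fast unit checks.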
Write a parameterized test that runs [TEST SCENARIO] with these data sets.
Use [PYTEST/JEST/TESTNG] data-driven approach.
Test scenario: [DESCRIBE WHAT IT TESTS]
Data sets to run: [LIST THE VARIATIONS — e.g., different user roles,
different input values, different environments]
Framework: [NAME]
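The data-driven shape is the same in every framework: one table of cases, one test body. Under pytest the table would feed `@pytest.mark.parametrize`; the sketch below uses a plain loop so it runs anywhere. The discount rule being tested is entirely hypothetical.

```python
# Data-driven test sketch: one table of cases, one test body.
# Under pytest the CASES table would feed @pytest.mark.parametrize;
# a plain loop keeps this dependency-free. The discount rule is made up.
def discount(role, amount):
    rates = {"guest": 0.0, "member": 0.05, "vip": 0.10}
    return round(amount * (1 - rates[role]), 2)

CASES = [
    # (role, amount, expected) -- each row is one parameterized run
    ("guest", 100.0, 100.0),
    ("member", 100.0, 95.0),
    ("vip", 100.0, 90.0),
    ("vip", 0.0, 0.0),
]

def test_discount_cases():
    for role, amount, expected in CASES:
        assert discount(role, amount) == expected, (role, amount)

test_discount_cases()
```

When you feed ChatGPT this structure, ask it to fill the table, not to duplicate the test body per case — that's the difference between 4 rows and 40 lines of copy-paste.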
Review this test automation code and identify: flaky test risks,
hardcoded values that should be config-driven, missing assertions,
and opportunities to improve readability.
Code: [PASTE YOUR TEST CODE]
Framework: [NAME]
Create a GitHub Actions workflow that runs our [FRAMEWORK] test suite
on [TRIGGER — PR open, merge to main, schedule]. Include:
environment setup, test execution, reporting step, and failure notification.
Test framework: [NAME]
Language: [PYTHON/JS/JAVA]
Notification method: [SLACK/EMAIL/GITHUB COMMENT]
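For orientation, the skeleton you'd expect back resembles this hedged sketch — the job name, Python version, file paths, and the notification step are all placeholders to adapt, not a working config:

```yaml
# Sketch of the workflow the prompt describes. Job names, paths, and
# the notification step are placeholders, not a drop-in config.
name: test-suite
on:
  pull_request:
  push:
    branches: [main]
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest --junitxml=report.xml
      - if: failure()
        run: echo "notify the team here (e.g., via a Slack webhook step)"
```

The `if: failure()` guard is the piece ChatGPT most often omits — without it, the notification step only runs on green builds, which defeats the purpose.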
Section 6: Performance and Security Testing Basics
Every QA engineer should have baseline performance and security test skills.
Design a performance test plan for [FEATURE/ENDPOINT/FLOW] using
[TOOL — k6, JMeter, Gatling]. Define: load profile (concurrent users,
ramp-up, duration), metrics to capture, and pass/fail thresholds.
Expected load: [PEAK USERS OR TPS]
SLA requirements: [RESPONSE TIME, ERROR RATE, THROUGHPUT]
Test environment: [STAGING/PRODUCTION-LIKE]
Generate an OWASP Top 10 security test checklist tailored to
[APPLICATION TYPE — web app, REST API, mobile backend].
For each risk, describe a specific test scenario I can run manually
or with [BURP SUITE/OWASP ZAP].
Write test cases for [AUTHENTICATION/AUTHORIZATION] security scenarios.
Cover: brute force protection, session management, token expiry,
privilege escalation attempts, and IDOR vulnerabilities.
Auth mechanism: [JWT/SESSION COOKIE/OAUTH]
User roles: [LIST ROLES AND PERMISSIONS]
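One of these checks — token expiry — is easy to verify by hand. The sketch below decodes a JWT's payload and inspects its `exp` claim; it deliberately skips signature verification (that's a separate test) and builds its tokens inline so it's self-contained. The claim values are illustrative.

```python
import base64
import json
import time

# Self-contained sketch of a token-expiry check. It only inspects the
# `exp` claim -- signature verification is a separate test entirely.
def b64url(data: dict) -> str:
    raw = json.dumps(data).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def is_expired(token: str, now: float) -> bool:
    payload_part = token.split(".")[1]
    # JWT base64url segments drop padding; restore it before decoding.
    padded = payload_part + "=" * (-len(payload_part) % 4)
    payload = json.loads(base64.urlsafe_b64decode(padded))
    return payload["exp"] <= now

# Hand-built tokens with illustrative claims (signature is a dummy).
header = b64url({"alg": "HS256", "typ": "JWT"})
live = f"{header}.{b64url({'sub': 'qa', 'exp': 2_000_000_000})}.sig"
stale = f"{header}.{b64url({'sub': 'qa', 'exp': 1_000_000_000})}.sig"
```

A useful negative test to pair with it: confirm the server actually rejects the stale token, because plenty of backends decode the claim and then forget to enforce it.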
I want to test [APPLICATION] for SQL injection vulnerabilities manually
before we set up automated scanning. Give me 10 payloads to test in
[INPUT FIELDS — search box, login form, filter params] and what to look
for in the response.
Create a test checklist for validating that [FEATURE] complies with
[GDPR/CCPA/HIPAA] data privacy requirements. Focus on: data minimization,
consent flows, data deletion, and access controls.
Feature handling personal data: [DESCRIBE WHAT DATA IT COLLECTS/PROCESSES]
Regulation: [WHICH COMPLIANCE STANDARD]
Section 7: QA Process, Reporting, and Leadership
Senior QA engineers spend as much time on process as on testing.
Write a QA release readiness report for [RELEASE NAME/VERSION].
Format it for executive stakeholders. Cover: test coverage summary,
open defect count by severity, go/no-go recommendation, and known risks
being accepted.
Test results summary: [PASTE YOUR NUMBERS]
Open critical/high bugs: [LIST OR COUNT]
Your recommendation: [GO / NO-GO / CONDITIONAL GO]
Create a presentation outline for a QA retrospective on [SPRINT/RELEASE].
Include: what went well (testing effectiveness), what didn't (coverage gaps,
late defects, process breakdowns), and 3 action items for next cycle.
Retrospective data: [PASTE YOUR NOTES OR METRICS]
Team size: [QA TEAM]
Key incidents this cycle: [ANY ESCAPED DEFECTS OR CRITICAL BUGS]
Write a test coverage analysis that maps our current test cases to the
acceptance criteria in these user stories. Identify gaps.
User stories: [PASTE THEM]
Existing test cases: [PASTE TEST IDS AND DESCRIPTIONS]
I'm onboarding a new QA engineer to [PROJECT]. Create a 2-week onboarding
plan that covers: application overview, test environment setup, existing
test suite walkthrough, first solo testing assignment, and success criteria
for the first month.
Project type: [WEB/API/MOBILE/etc.]
Their experience level: [JUNIOR/MID/SENIOR]
Write a quality metrics dashboard proposal for our QA team.
Define 8 metrics we should track, explain why each matters,
and suggest visualization type (trend line, bar chart, threshold gauge).
Team context: [AGILE/WATERFALL, TEAM SIZE, PRODUCT TYPE]
Current pain points: [WHAT ARE WE BAD AT MEASURING]
The Complete QA Prompt Library
These 35 prompts cover the most time-intensive QA writing tasks. For the full library — including API contract testing templates, automation framework comparisons, and security testing playbooks — grab the complete pack below.
→ 35 ChatGPT Prompts for QA Engineers
Organized by testing phase. Ready to use in any framework or tech stack.
Use LAUNCH30 for 30% off — limited uses remaining.