Working as a QA with APIs can be… well, kind of a nightmare sometimes. APIs are always changing: endpoints get added, status codes get updated, and keeping your tests in sync feels like chasing a moving target.
If you only look at your task board, it's easy to lose track of what actually changed and what still needs testing.
On the projects I worked on, we had Swagger available for the API. And I thought: wait a minute… why not use AI and Swagger to save time generating tests?
And that's how this little project started. In this post, I'll walk you through how I did it, the challenges I faced, and some cool things you can do next.
The Idea
The goal was simple: take the Swagger spec and extract all the useful info, like:
- HTTP methods
- Expected status codes
- Query parameters
- Request bodies
…and then generate both positive and negative test scenarios automatically.
For example, for a simple GET /users/{id} endpoint, I wanted the output to look like this:
GET /users/{id}
✅ Scenario: Retrieve a user with a valid ID
❌ Scenario: Validate 404 for user not found
❌ Scenario: Missing ID parameter
❌ Scenario: Invalid format for ID
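For reference, the slice of the Swagger spec that drives those scenarios looks roughly like this. This is a hand-written illustration of the fields the generator reads, not an excerpt from a real spec:

{
  "paths": {
    "/users/{id}": {
      "get": {
        "tags": ["Users"],
        "parameters": [
          { "name": "id", "in": "path", "required": true, "schema": { "type": "integer" } }
        ],
        "responses": {
          "200": { "description": "User found" },
          "404": { "description": "User not found" }
        }
      }
    }
  }
}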
To make this work nicely, I used AI to create the scenarios based on the endpoint's Swagger specification, following a template I defined.
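The template lives in assets/theme.py and pins down the output format so every generated file looks the same. The exact template isn't shown here, but a hypothetical version could look like this:

# assets/theme.py: hypothetical sketch of the prompt template
# (the real template in the repo may differ)
PROMPT_TEMPLATE = """
Return only a list of test scenarios in this exact format:
<METHOD> <endpoint path>
<marker> Scenario: <one-line description>
Use a check mark for positive scenarios and an X for negative ones.
Cover valid inputs, missing or invalid parameters, and expected error status codes.
"""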
About the project
Stack
- Python – fast, easy to parse data, integrate stuff
- Rich / Typer (CLI UX) – because a pretty CLI makes life better
- Gemini AI – super simple Python integration for AI prompts
- dotenv – to keep the Gemini API key safe
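A requirements.txt for this stack would look roughly like this (package names as published on PyPI; pin versions as you see fit):

rich
typer
python-dotenv
google-genai
requests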
Project Structure
api-test-generator/
├── README.md             # Project documentation
├── requirements.txt      # Python dependencies
├── main.py               # Main entry point
│
├── output/               # Folder with generated tests
│   ├── get_Books.txt
│   └── post_Books.txt
│
├── functions/            # Main functions of the project
│   ├── navigation.py     # CLI navigation
│   ├── read_swagger.py   # Read Swagger specs from files and URLs
│   └── test_generator.py # Generate tests and save them to files
│
└── assets/               # Theme and example files for the project
    ├── swaggerexample.json
    └── theme.py
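For context, main.py is just the entry point that hands control to the CLI menu. A minimal sketch of how it might be wired together, where the function names are my assumptions based on the structure above, not the exact repo code:

# main.py: minimal sketch of the entry point (names are assumptions)
import typer

from functions import navigation

app = typer.Typer()


@app.command()
def start():
    """Launch the interactive, Rich-powered menu."""
    navigation.main_menu()  # hypothetical function in functions/navigation.py


if __name__ == "__main__":
    app()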
How it works
┌──────────────────────────────┐
│          User / QA           │
│   (CLI Interaction - Rich)   │
└──────────────┬───────────────┘
               │
               ▼
┌──────────────────────────────┐
│        CLI Interface         │
│     (Typer + Rich Menu)      │
└──────────────┬───────────────┘
               │
               ▼
┌──────────────────────────────┐
│   Swagger/OpenAPI Loader     │
│ - URL, Manual, or Local JSON │
│ - Validation & Parsing       │
└──────────────┬───────────────┘
               │
               ▼
┌──────────────────────────────┐
│  API Specification Parser    │
│  - Endpoints                 │
│  - Methods                   │
│  - Parameters                │
│  - Responses / Status Codes  │
└──────────────┬───────────────┘
               │
               ▼
┌──────────────────────────────┐
│        Gemini AI API         │
│   (Test Case Generation)     │
└──────────────┬───────────────┘
               │
               ▼
┌──────────────────────────────┐
│       Output Generator       │
│  - Text file export (.txt)   │
│  - Structured scenarios      │
└──────────────────────────────┘
So basically: user interacts with CLI → loads Swagger → parses specs → builds a prompt → sends to AI → AI returns tests → saves to file.
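The loader step is straightforward in principle. Here is a rough sketch of what a read_swagger.py-style loader can look like; this is my simplified approximation, assuming requests as a dependency, while the real module also handles manual input and validation:

import json

import requests  # assumed dependency for fetching specs over HTTP


def read_swagger(source):
    """Load a Swagger/OpenAPI spec from a URL or a local JSON file."""
    if source.startswith(("http://", "https://")):
        response = requests.get(source, timeout=10)
        response.raise_for_status()  # fail fast on HTTP errors
        return response.json()
    with open(source, encoding="utf-8") as f:
        return json.load(f)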
Code Highlights
The test generator
The core idea here was: extract as much info as possible from Swagger so the AI could generate meaningful tests.
Here's the main function I wrote:
def test_generator(path, method, swagger_data):
    # theme (assets/theme.py), ai_connection, and export_to_file
    # are the project's other helpers
    print(f"Generating tests for {method.upper()} {path}...")
    details = swagger_data["paths"][path][method]
    request_body = ""
    parameters = ""

    # Use the first Swagger tag as the endpoint name, falling back to the path
    endpoint_name = details["tags"][0] if details.get("tags") else path

    if "requestBody" in details:
        request_body = details["requestBody"]
    if "parameters" in details:
        parameters = details["parameters"]

    prompt = (
        f"Generate positive and negative tests for this endpoint: {path} "
        f"for the method {method.upper()}, "
        f"considering the following specifications: "
        f"Name of the endpoint: {endpoint_name}. "
        f"Request body: {request_body}. "
        f"Query parameters: {parameters}. "
        f"Return the tests following this template: {theme.PROMPT_TEMPLATE}"
    )

    test_scenario = ai_connection(prompt)
    print("Exporting tests to file...")
    export_to_file(test_scenario, method, endpoint_name)
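The export_to_file helper isn't shown above, but based on the output/ folder in the project structure, a sketch of it could be as simple as this (the file naming is my assumption, modeled on names like output/get_Books.txt):

import os


def export_to_file(test_scenario, method, endpoint_name):
    # Hypothetical sketch: write one .txt file per method + endpoint
    os.makedirs("output", exist_ok=True)
    file_path = os.path.join("output", f"{method}_{endpoint_name}.txt")
    with open(file_path, "w", encoding="utf-8") as f:
        f.write(test_scenario)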
Connecting to Gemini AI
Connecting to the AI is simple: create a client, set the model, and pass the prompt:
import os

from dotenv import load_dotenv
from google import genai


def ai_connection(prompt):
    # Load GOOGLE_API_KEY from a local .env file
    load_dotenv()
    api_key = os.getenv("GOOGLE_API_KEY")

    client = genai.Client(api_key=api_key)
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=prompt,
    )
    return response.text
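The only setup required is a .env file at the project root with your Gemini key (never commit this file):

GOOGLE_API_KEY=your-gemini-api-key-here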
And voilà. The AI returns something like:
POST /api/v1/Books
✅ Scenario: Successfully create a new book with all valid fields
✅ Scenario: Successfully create a new book with only mandatory fields
✅ Scenario: Successfully create a new book using 'text/json; v=1.0' content type
❌ Scenario: Fail to create book due to missing 'title' field
❌ Scenario: Fail to create book due to missing 'author' field
❌ Scenario: Fail to create book due to missing 'isbn' field
❌ Scenario: Fail to create book with an 'isbn' that already exists (conflict)
❌ Scenario: Fail to create book due to invalid 'isbn' format (e.g., too short, non-numeric where expected)
❌ Scenario: Fail to create book due to 'publication_year' being a string instead of an integer
❌ Scenario: Fail to create book due to empty request body
❌ Scenario: Fail to create book due to malformed JSON in request body
❌ Scenario: Fail to create book with an empty 'title' string
❌ Scenario: Fail to create book with an empty 'author' string
Challenges & Lessons Learned
Honestly, the hardest part was cleaning up Swagger data and building prompts that make sense for the AI.
Another challenge was designing a workflow that actually works in a CLI without feeling clunky.
But in the end, it was super fun, and I learned a lot about AI-assisted testing.
What's Next
While building this, I started dreaming about all the things I could do next:
- Automatically generate Postman collections from these tests
- Integrate with test management tools like Zephyr or Xray
- Make it a service that monitors Swagger and updates tests whenever endpoints change

The possibilities are endless.
Conclusion
This project really showed me that AI + OpenAPI = massive time saver.
Instead of manually writing dozens of tests for every endpoint, I now have an automated system that generates both positive and negative scenarios in minutes.
Next steps? Think bigger: integrate it with CI/CD pipelines, plug it into test management tools, or even make it monitor APIs in real time. Smarter, faster, and way less painful API testing: sounds like a win to me.
If you want to check out the full project, explore the code, or try it yourself, it's all on my GitHub: API Test Generator.
Dive in, experiment, and see how much time you can save!