DEV Community

Suifeng023

I Built a Full-Stack App With AI: My 2026 Workflow

Last month I set myself a practical challenge: build and deploy a complete full-stack developer tool using AI coding assistants for every meaningful step.

No endless Stack Overflow tabs. No boilerplate marathon. No “I’ll clean this up later” folder full of half-finished files.

The result was a working app shipped to production in under 48 hours.

This article is the workflow I would use again in 2026 if I wanted to build a small SaaS, internal dashboard, client portal, or developer utility with AI as my pair programmer.


The project

The app was a simple developer tools site with utilities like:

  • JSON formatter and validator
  • Base64 encoder / decoder
  • Timestamp converter
  • Hash generator
  • Regex tester
  • Color converter

The goal was not to build the next unicorn. The goal was to prove a repeatable AI-assisted shipping workflow.

The stack I chose:

  • FastAPI for the backend
  • Jinja2 templates
  • htmx for small interactive updates
  • Vanilla JavaScript where needed
  • SQLite for lightweight persistence
  • Nginx + systemd on a cheap VPS

This is not the flashiest stack, but it is excellent for shipping fast.


Step 1: make AI choose constraints, not just tools

Most people ask AI: “What stack should I use?”

A better prompt is:

I want to build a small developer tools website with JSON formatting, Base64 encoding, timestamp conversion, hashing, and regex testing.

Constraints:
- Must be deployable on a $5 VPS
- Must be SEO-friendly
- Must have minimal dependencies
- Must be easy for one developer to maintain
- Prefer boring, stable technology

Recommend a stack and explain the tradeoffs.

The important part is constraints. Without constraints, AI tends to recommend whatever is trendy. With constraints, it becomes a technical decision assistant.


Step 2: generate the project skeleton

Once the stack was decided, I asked for the initial structure:

Create a FastAPI project structure for this developer tools website.

Include:
- app.py
- templates/base.html
- templates/index.html
- static/css/style.css
- static/js/app.js
- requirements.txt
- README.md

Use Jinja2 templates, mount static files, and create a responsive layout with navigation for each tool.

This saved the first boring 30–60 minutes.

The key is not to blindly accept the output. I immediately reviewed:

  • import correctness
  • folder paths
  • template names
  • dependency versions
  • whether the app actually starts

AI is very good at scaffolding, but it still needs a human reviewer.


Step 3: build one feature at a time

The biggest mistake with AI coding is asking for the entire app in one prompt.

That usually creates a giant messy answer.

Instead, I built one tool at a time.

Example prompt for the JSON formatter:

Add a JSON formatter feature to my existing FastAPI app.

Requirements:
1. Accept raw JSON text from a form.
2. Validate the JSON.
3. Return pretty-printed JSON with 2-space indentation.
4. If invalid, show a clear error message.
5. Do not crash on empty input.
6. Keep changes minimal.

Return only the code that needs to change.

That last line matters: Return only the code that needs to change.

Without it, many AI assistants rewrite unrelated files and introduce regressions.
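The article doesn't show the code the assistant produced, but the core logic of such a formatter (setting the FastAPI form plumbing aside) might look like this stdlib-only sketch; the function and message wording are my own, not from the project:

```python
import json

MAX_INPUT_BYTES = 1_000_000  # guard against oversized payloads

def format_json(raw: str) -> tuple[bool, str]:
    """Validate raw JSON text; return (ok, pretty_output_or_error)."""
    if not raw or not raw.strip():
        # Requirement 5: do not crash on empty input.
        return False, "Input is empty."
    if len(raw.encode("utf-8")) > MAX_INPUT_BYTES:
        return False, "Input is too large."
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError as exc:
        # Requirement 4: clear error message instead of a stack trace.
        return False, f"Invalid JSON at line {exc.lineno}, column {exc.colno}: {exc.msg}"
    # Requirement 3: pretty-print with 2-space indentation.
    return True, json.dumps(parsed, indent=2, ensure_ascii=False)
```

The endpoint then only needs to render `(ok, output)` into the template.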


Step 4: use AI for edge cases explicitly

After the first version worked, I asked:

Review this JSON formatter code for edge cases, security issues, and bad user experience.
Suggest improvements before I deploy it.

This produced useful reminders:

  • limit input size
  • escape rendered output
  • handle empty strings
  • preserve user input after errors
  • return helpful messages instead of stack traces

AI is much better when you ask it to play a specific role: reviewer, security auditor, performance engineer, UX critic, or DevOps assistant.


Step 5: repeat with small prompts

I used the same pattern for each feature.

For Base64:

Add Base64 encode/decode functionality.
Requirements:
- User can choose encode or decode
- Invalid Base64 input should show an error
- Unicode text should work correctly
- Keep implementation simple
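A plausible stdlib implementation satisfying those requirements (names are illustrative, not the project's actual code):

```python
import base64

def b64_transform(text: str, mode: str) -> tuple[bool, str]:
    """Encode or decode Base64; Unicode-safe via UTF-8."""
    if mode == "encode":
        return True, base64.b64encode(text.encode("utf-8")).decode("ascii")
    try:
        # validate=True rejects characters outside the Base64 alphabet.
        decoded = base64.b64decode(text, validate=True)
        return True, decoded.decode("utf-8")
    except ValueError:
        # Covers binascii.Error and UnicodeDecodeError (both ValueError subclasses).
        return False, "Input is not valid Base64."
```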

For timestamps:

Add a timestamp converter.
Requirements:
- Convert Unix timestamp to human-readable UTC
- Convert ISO datetime to Unix timestamp
- Handle seconds and milliseconds
- Show examples in the UI
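The seconds-vs-milliseconds requirement is worth a concrete sketch, since it is the part most likely to be done wrong. One common heuristic (an assumption on my part, not stated in the article) is to treat very large values as milliseconds:

```python
from datetime import datetime, timezone

def unix_to_utc(value: float) -> str:
    """Convert a Unix timestamp to a human-readable UTC string."""
    if abs(value) >= 1e11:
        # Heuristic: values this large are almost certainly milliseconds.
        value /= 1000.0
    dt = datetime.fromtimestamp(value, tz=timezone.utc)
    return dt.strftime("%Y-%m-%d %H:%M:%S UTC")

def iso_to_unix(text: str) -> int:
    """Convert an ISO 8601 datetime string to a Unix timestamp."""
    dt = datetime.fromisoformat(text)
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)  # assume UTC when naive
    return int(dt.timestamp())
```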

For hashing:

Add a hash generator.
Requirements:
- Support SHA-256 and SHA-512
- Do not store submitted text
- Show output in a copyable text area
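The hashing logic itself is a few lines of stdlib code; the "do not store submitted text" requirement is satisfied simply by never writing the input anywhere. A minimal sketch:

```python
import hashlib

def hash_text(text: str, algo: str = "sha256") -> str:
    """Return the hex digest of the text; nothing is stored server-side."""
    if algo not in ("sha256", "sha512"):
        raise ValueError("Unsupported algorithm")
    return hashlib.new(algo, text.encode("utf-8")).hexdigest()
```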

Small prompts produce cleaner code. Clean code is easier to review. Easier review means faster shipping.


Step 6: ask for tests after the feature works

I do not ask AI to write tests before I understand the shape of the feature. Once the feature works, I ask:

Write pytest tests for this FastAPI endpoint.
Cover:
- valid input
- invalid input
- empty input
- large input
- expected response text

Then I run the tests and paste failures back into the assistant.

The loop looks like this:

  1. Generate test
  2. Run test
  3. Paste error
  4. Ask for minimal fix
  5. Repeat

This is where AI saves a lot of time. It is especially good at interpreting test failures when given the exact traceback.
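To make the shape of that loop concrete, here is what such a pytest file might look like for the JSON formatter. The helper is inlined so the sketch is self-contained; in the real project the tests would hit the FastAPI endpoint instead:

```python
import json

def format_json(raw: str) -> tuple[bool, str]:
    """Inlined stand-in for the app's JSON-formatting helper."""
    if not raw or not raw.strip():
        return False, "Input is empty."
    try:
        return True, json.dumps(json.loads(raw), indent=2)
    except json.JSONDecodeError as exc:
        return False, f"Invalid JSON: {exc.msg}"

def test_valid_input():
    ok, out = format_json('{"a": 1}')
    assert ok and out == '{\n  "a": 1\n}'

def test_invalid_input():
    ok, msg = format_json("{oops")
    assert not ok and "Invalid JSON" in msg

def test_empty_input():
    ok, _ = format_json("   ")
    assert not ok

def test_large_input():
    big = json.dumps(list(range(10_000)))
    ok, _ = format_json(big)
    assert ok
```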


Step 7: use AI for deployment, but verify everything

Deployment is one of the best use cases for AI because the steps are repetitive.

I used prompts like:

Create a production deployment guide for this FastAPI app on Ubuntu.
Use:
- Python venv
- gunicorn or uvicorn workers
- systemd service
- Nginx reverse proxy
- HTTPS with certbot

Include exact commands and config files.

Then I asked:

Review this Nginx and systemd config for mistakes before I run it.

This helped catch common issues like wrong working directories, missing environment variables, and incorrect proxy headers.
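For reference, the working-directory and environment pitfalls usually live in the systemd unit. A sketch of what such a unit might look like; every path, user, and port here is a placeholder, not taken from the actual deployment:

```ini
# /etc/systemd/system/devtools.service  (illustrative paths and names)
[Unit]
Description=Developer tools FastAPI app
After=network.target

[Service]
User=www-data
# A wrong WorkingDirectory is one of the most common deploy mistakes.
WorkingDirectory=/srv/devtools
Environment="PATH=/srv/devtools/venv/bin"
ExecStart=/srv/devtools/venv/bin/uvicorn app:app --host 127.0.0.1 --port 8000
Restart=on-failure

[Install]
WantedBy=multi-user.target
```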

Never paste commands into production without understanding them. But using AI as a deployment checklist generator is extremely useful.


Step 8: generate SEO pages early

Because this was a developer tools site, every tool needed its own indexable page.

Prompt:

Create SEO-friendly copy for a JSON formatter tool page.
Audience: developers.
Tone: clear and practical.
Include:
- title tag
- meta description
- H1
- short explanation
- common use cases
- FAQ section

AI-generated SEO copy still needs editing, but it gives you a strong first draft. More importantly, it reminds you to ship landing pages instead of hiding everything behind one generic interface.


What AI did well

AI was excellent for:

  • scaffolding files
  • writing repetitive endpoint logic
  • creating forms and templates
  • generating config files
  • explaining error messages
  • writing basic tests
  • drafting SEO copy
  • producing checklists

It turned many 20-minute tasks into 2-minute tasks.


What AI did poorly

AI was weaker at:

  • complex CSS polish
  • nuanced product decisions
  • security assumptions
  • performance tradeoffs
  • knowing my exact deployment environment
  • avoiding over-engineering unless instructed

The solution is to keep prompts narrow and review everything.


My reusable AI coding prompt template

This is the template I now use most often:

I am building [feature] for [project].

Tech stack:
[stack]

Current relevant code:
[paste code]

Requirements:
1. [requirement]
2. [requirement]
3. [requirement]

Edge cases:
- [edge case]
- [edge case]

Constraints:
- Keep changes minimal
- Do not rewrite unrelated files
- Prefer simple, maintainable code

Return only the code that needs to change, with brief explanation.

This prompt works because it gives context, requirements, edge cases, and constraints.


Final numbers

Approximate result:

  • 6 tools built
  • around 1,800 lines of code
  • production deployment completed in one evening
  • most bugs found during review and testing
  • total active development time dramatically reduced

The biggest benefit was not that AI wrote code. The biggest benefit was that AI reduced friction between idea and working prototype.


The real lesson

AI coding tools do not remove the need for engineering judgment.

They amplify it.

If you know how to break work into small tasks, define constraints, review code, and test carefully, AI can help you ship much faster.

If you ask vague questions and paste output blindly, it can help you create a mess faster.

The winning workflow is simple:

  1. Decide constraints
  2. Generate a small piece
  3. Review it
  4. Test it
  5. Ask for edge cases
  6. Repeat

That is how I would build with AI again.

Check out my AI Prompt Packs: https://payhip.com/b/ADsQI | https://payhip.com/b/6lqVh | https://payhip.com/b/XLNPm | https://payhip.com/b/CAN9Z
