Cathy Lai

From Concept to Reality: Building "Ez Garden Visualizer" in Stages

So Many Possibilities... So Little Time

When I first thought about building an AI garden visualizer app, the full idea sounded much bigger than a weekend project: upload a garden photo, generate a transformed version, suggest plants, estimate cost, maybe save projects, maybe support users, maybe turn it into a mobile app. That is exciting, but also dangerous, because it is very easy to start building the “final architecture” before proving the core workflow. So I decided to build it in layers: first a tiny command-line prototype, then a simple Next.js app, then cloud functions and storage later.

One Goal for Each Week

For week one, the goal is only to prove the basic AI workflow with a plain JavaScript file. No frontend, no login, no database, no cloud storage. Just one image in, one AI garden concept out. The command-line version might look something like this:

node garden-transform.js ./input/backyard.jpg

The pseudocode is intentionally simple:

// garden-transform.js (run with Node 18+ as an ES module for top-level await)
// describeImage, generateTextPlan, generateGardenImage, saveFile, and
// saveImage are placeholder helpers to be filled in with real API calls.
const imagePath = process.argv[2];

const prompt = `
Transform this garden into a tidy, low-maintenance,
affordable makeover concept.
Keep the original layout and proportions.
Suggest easy-care plants, mulch, edging, and simple seating.
`;

// Step 1: turn the photo into a text description the planner can work with.
const imageDescription = await describeImage(imagePath);

// Step 2: generate a written plan from the prompt, description, and budget.
const gardenPlan = await generateTextPlan({
  prompt,
  imageDescription,
  budget: "$500-$600",
  style: "tidy, homely, achievable"
});

// Step 3: render an "after" concept image guided by the plan.
const afterImage = await generateGardenImage({
  originalImage: imagePath,
  gardenPlan
});

saveFile("./output/garden-plan.txt", gardenPlan);
saveImage("./output/garden-after.png", afterImage);
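None of those helper functions exist yet. As one concrete possibility, here is how describeImage could be fleshed out, assuming the official OpenAI Node SDK; the model name and instruction text are placeholders I chose, not part of the plan:

// A hedged sketch of describeImage, assuming the openai npm package.
import fs from "node:fs";
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function describeImage(imagePath) {
  // Inline the local photo as a base64 data URL so no upload step is needed.
  const base64 = fs.readFileSync(imagePath).toString("base64");
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder: any vision-capable model works
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Describe the layout, surfaces, and plants in this garden photo." },
          { type: "image_url", image_url: { url: `data:image/jpeg;base64,${base64}` } }
        ]
      }
    ]
  });
  return response.choices[0].message.content;
}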

For week two, I would wrap the prototype inside a very small Next.js app. The goal is not to build the whole product yet. It is just to make the prototype usable through a browser: upload a photo, click a button, see a result. The folder structure could stay very minimal:

garden-ai-app/
  app/
    page.tsx
    api/
      transform/
        route.ts
  components/
    ImageUploader.tsx
    ResultPreview.tsx
  lib/
    openai.ts
    prompts.ts
  public/
  package.json
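The lib folder is where the week-one logic would land. As a small sketch (the file contents are my guess, just reusing the week-one prompt), lib/prompts.ts could keep the wording shared between the CLI prototype and the web app:

// lib/prompts.ts (sketch): one place for the wording that both the CLI
// prototype and the API route rely on, so they never drift apart.
export const MAKEOVER_PROMPT = `
Transform this garden into a tidy, low-maintenance,
affordable makeover concept.
Keep the original layout and proportions.
Suggest easy-care plants, mulch, edging, and simple seating.
`;

export const DEFAULT_BUDGET = "$500-$600";
export const DEFAULT_STYLE = "tidy, homely, achievable";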

The frontend flow could be as simple as:

// app/page.tsx
import ImageUploader from "../components/ImageUploader";
import ResultPreview from "../components/ResultPreview";

export default function HomePage() {
  return (
    <main>
      <h1>AI Garden Makeover</h1>
      <p>Upload a garden photo and get a simple makeover concept.</p>
      <ImageUploader />
      <ResultPreview />
    </main>
  );
}
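The components themselves are not part of the plan yet, so here is a hedged sketch of ImageUploader: a client component that posts the file to the /api/transform route shown next and, for brevity, renders the JSON response inline rather than handing it off to ResultPreview:

// components/ImageUploader.tsx (sketch, not the final component)
"use client";

import { useState, type FormEvent } from "react";

type TransformResult = { plan: string; imageUrl: string };

export default function ImageUploader() {
  const [result, setResult] = useState<TransformResult | null>(null);
  const [loading, setLoading] = useState(false);

  async function handleSubmit(event: FormEvent<HTMLFormElement>) {
    event.preventDefault();
    // Send the chosen file as multipart form data to the API route.
    const formData = new FormData(event.currentTarget);
    setLoading(true);
    const response = await fetch("/api/transform", { method: "POST", body: formData });
    setResult(await response.json());
    setLoading(false);
  }

  return (
    <form onSubmit={handleSubmit}>
      <input type="file" name="image" accept="image/*" required />
      <button type="submit" disabled={loading}>
        {loading ? "Generating..." : "Generate makeover"}
      </button>
      {result && (
        <section>
          <pre>{result.plan}</pre>
          <img src={result.imageUrl} alt="AI garden makeover concept" />
        </section>
      )}
    </form>
  );
}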

And the API route could reuse the logic from week one:

// app/api/transform/route.ts
// generateGardenPlan and generateGardenImage are the week-one helpers,
// moved into lib/openai.ts.
import { generateGardenPlan, generateGardenImage } from "../../../lib/openai";

export async function POST(request: Request) {
  const formData = await request.formData();
  const image = formData.get("image");

  // Reject requests that arrive without an actual file upload.
  if (!(image instanceof File)) {
    return Response.json({ error: "No image uploaded" }, { status: 400 });
  }

  const gardenPlan = await generateGardenPlan(image);
  const afterImage = await generateGardenImage(image, gardenPlan);

  return Response.json({
    plan: gardenPlan,
    imageUrl: afterImage
  });
}
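A nice side effect of keeping the route this thin is that it can be smoke-tested from the terminal before the UI exists, against the local dev server on the default Next.js port:

curl -F "image=@./input/backyard.jpg" http://localhost:3000/api/transform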

What I like about this staged approach is that each week has a clear outcome.

Quick Goals

  • Week one proves the AI workflow.
  • Week two proves the user experience.
  • Week three can then move image storage and processing into proper cloud services, as sketched after this list.
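As a rough sketch of that third step, the generated image could be pushed to an object store and served by URL. This assumes Vercel Blob purely as an example; any bucket would work, and the helper name and filename scheme are my own:

// lib/storage.ts (hypothetical week-three helper, assuming Vercel Blob)
import { put } from "@vercel/blob";

export async function storeResultImage(afterImage: Buffer): Promise<string> {
  // Upload the generated PNG and return a public URL for the client.
  const blob = await put(`results/garden-after-${Date.now()}.png`, afterImage, {
    access: "public",
  });
  return blob.url;
}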

The important part is also knowing what not to build yet: no authentication, no payment system, no project dashboard, no plant database, and no perfect architecture. At this stage, the goal is momentum. Build the smallest useful layer, learn from it, then add the next layer only when the previous one works.

Reference

See the app's background story.

Dream Outcome

Here's another inspiring ChatGPT transformation :)
[Image: messy garden before edging]

[Image: tidy garden with edging]

Top comments (3)

HARD IN SOFT OUT

I love that you split this into stages instead of jumping straight to fancy features. That alone is a lesson for anyone who’s ever drowned in their own ambition. Have you thought about the plant data? Users drop off fast when the database doesn’t match their local climate. Without a semi‑automated pipeline pulling from sources like USDA or GBIF, the tool can feel dead after a few seasons. Are you planning to open that up?

What if users could snap a photo of an actual plant, and a tiny on‑device model (say, TFLite) classified it and dropped it straight into their garden plan? Fully private, zero server calls, and instantly personal.

Cathy Lai

"Drowned in ambition" - I totally get what you're saying!

That's why I wanted to lay out some guidelines at the beginning... :)

Yes! A local plant database would be really crucial. I actually tried uploading plant lists from local nurseries and plant shops and asking it to use those instead. But I still need to somehow get that list.

On-device model - great suggestion. I imagine it would be much easier than making server calls, though I'd worry about accuracy and model size. I must look into this TFLite thing.

HARD IN SOFT OUT

Thanks for the honest look at the plant‑data bottleneck, Cathy. Manual integration is a sensible start—it keeps quality high while you observe what users actually search for.

On the on‑device classifier: your accuracy/size concern is spot‑on. A practical stepping stone could be a tiny model that spots plant stress (yellowing, spots) rather than exact species. It’s immediately useful, stays fully offline, and sidesteps the need for a massive species DB. Later you could add a species classifier as an optional fallback, gated by user consent and a Wi‑Fi check.

If you keep that “privacy‑first” line as your north star, even the hybrid path will feel right to users. The staged thinking you embedded in the project is already proving itself here.

Really rooting for this—gardening tools with actual context awareness are rare. I’m around if you ever want to pressure‑test a plant‑health model idea.