I've seen countless prompting trends and prompt packs, but most discussions around prompt engineering focus on one thing:
getting better outputs
Optimizing for better outputs often translates to:
- Better prompts
- More context
- More structure
But lately, I’ve been wondering:
What if we’re optimizing the wrong layer?
Because the real question isn’t:
“How do I get better answers from AI?”
It’s:
“Is AI actually improving how I think?”
Because I’ve noticed something subtle:
My output was improving.
But my understanding wasn't always keeping up.
Across several teams and environments, I've observed that:
Good engineers ask better questions.
The best engineers question their own thinking.
Most of what I see optimizes for:
- better outputs
- faster generation
- more automation
But much less for:
- clearer thinking
- stronger judgment
- deeper understanding
AI isn’t just changing how we build.
It’s quietly reshaping how we think while building.
🧠 What kind of thinking do you actually need?
That’s when I realized I didn’t need more prompts.
I needed a way to choose the right kind of thinking first.
Instead of asking:
“What’s the best prompt for this?”
I started asking:
“What kind of thinking do I need right now?”
That led me to structure my prompting around 5 simple thinking modes:
1) Explore
When I don’t fully understand the problem yet
2) Challenge
When I have a plan… but it might be wrong
3) Decide
When I need to choose between options
4) Audit
When I need to verify quality or correctness
5) Reflect
When I want to actually learn from what I did
This simple shift changed everything.
Instead of using AI reactively,
I started using it intentionally based on the thinking task.
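The mode-first habit is simple enough to sketch in code. A minimal illustration in Python: the mode names come from the list above, but the trigger flags and the `choose_mode` heuristic are my own, not from any library.

```python
# Map each thinking mode to the situation that calls for it
# (situations paraphrased from the list above).
THINKING_MODES = {
    "explore":   "I don't fully understand the problem yet",
    "challenge": "I have a plan, but it might be wrong",
    "decide":    "I need to choose between options",
    "audit":     "I need to verify quality or correctness",
    "reflect":   "I want to learn from what I did",
}

def choose_mode(have_plan: bool, options_open: bool,
                needs_verification: bool, work_done: bool) -> str:
    """Hypothetical heuristic: pick a thinking mode before writing any prompt."""
    if work_done:
        return "reflect"
    if needs_verification:
        return "audit"
    if options_open:
        return "decide"
    if have_plan:
        return "challenge"
    return "explore"
```

The point isn't the code itself; it's that the mode decision happens before a single prompt is written.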
🔁 The simple loop that protects your thinking
This is a simple workflow framework that makes a big difference.
Before AI
Write what you think first.
During AI
Use it to expand or challenge your thinking.
After AI
Ask yourself:
- Did I verify this?
- Did I just accept it?
- Can I explain it without AI?
It sounds simple, but it’s surprisingly easy to skip.
And when you skip it, you start noticing something subtle:
Your output improves.
But your understanding doesn’t always follow.
⚖️ Why one prompt is almost never enough
One thing I’ve been changing in my workflow:
I rarely rely on a single prompt anymore.
Instead, I use prompt pairing:
1) one prompt to generate
2) one prompt to challenge
For example:
First prompt:
“Suggest 3 possible architectures for this system.”
Follow-up:
“Now challenge each option: what are the hidden risks, failure modes, and long-term maintenance issues?”
Why this matters:
AI is very good at giving plausible first answers.
But those answers are often:
- incomplete
- overly confident
- biased toward common patterns
Prompt pairing helps you avoid:
- first-answer bias
- shallow reasoning
- premature decisions
It forces a simple but powerful loop:
Generate → Critique → Decide
And that loop alone has probably improved my decision quality more than any single “better prompt”.
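The Generate → Critique → Decide loop is easy to wire into whatever client you already use. A minimal sketch, assuming you supply a `call_model(prompt)` function from your own API of choice; the function names and prompt templates here are illustrative, and the final "decide" step is deliberately left to the human:

```python
from typing import Callable

def paired_prompts(task: str) -> tuple[str, str]:
    """Build a generate/challenge prompt pair for a task."""
    generate = f"Suggest 3 possible approaches for: {task}"
    challenge = ("Now challenge each option: what are the hidden risks, "
                 "failure modes, and long-term maintenance issues?")
    return generate, challenge

def generate_then_critique(task: str,
                           call_model: Callable[[str], str]) -> dict:
    """Run the generate and critique steps; deciding stays manual."""
    gen_prompt, chal_prompt = paired_prompts(task)
    options = call_model(gen_prompt)
    critique = call_model(chal_prompt)
    return {"options": options, "critique": critique}
```

Swapping in a real model call is a one-line change; the structure is what matters.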
📊 A simple way to check if AI is helping or hurting your thinking
Another thing I started doing:
After important prompts, I ask myself:
“Did AI actually improve my thinking here?”
I use a simple thinking score (0–5):
- Did I write my own initial view before prompting?
- Did I challenge or refine the output?
- Did I verify at least one important claim?
- Did I make the final judgment myself?
- Can I explain the result without AI?
Not as a strict system.
More of a signal.
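As a sketch, the score is just a count of yes answers to those five questions. The field names here are my own shorthand:

```python
def thinking_score(wrote_own_view: bool,
                   challenged_output: bool,
                   verified_claim: bool,
                   made_final_call: bool,
                   can_explain: bool) -> int:
    """Count how many of the five self-checks passed (0-5)."""
    return sum([wrote_own_view, challenged_output, verified_claim,
                made_final_call, can_explain])
```

For example, `thinking_score(True, True, False, True, False)` gives 3: good output, but two checks on understanding were skipped.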
Because sometimes the pattern is obvious:
You get great output.
You move faster.
But you didn’t actually understand what happened.
And over time, that compounds.
🛠️ A few prompts that changed how I work
Here’s one I use a lot (Explore Mode):
“I am working on a vague engineering problem.
Before suggesting solutions, help me frame the problem.
List the goal, constraints, stakeholders, unknowns, assumptions, edge cases, and the questions I should answer myself first.”
Then I follow it with:
“Now turn this into the 5 questions I should answer manually before asking for implementation help.”
What this does:
- forces clarity before coding
- surfaces unknowns early
- prevents jumping too quickly into solutions
Another one I’ve been using more (Challenge Mode):
“Pressure-test this architecture proposal.
Identify assumptions, weak points, hidden dependencies, and failure modes.
For each, explain what evidence would confirm or disprove it.”
Followed by:
“Which of these should I verify first, and how?”
This one has saved me from a few very confident but flawed directions.
👥 What’s changing in teams right now
Prompting is evolving quickly.
It’s becoming:
- more collaborative
- more embedded in workflows
- less about “one perfect prompt”
And more about:
- prompt sequences
- prompt-driven workflows
I’m also seeing patterns like:
- Prompt Driven Development (explore before coding)
- Prompt versioning (iterating prompts like code)
- Shared team prompts (internal playbooks)
But most of these still optimize for output quality.
Not thinking quality.
🧩 The piece I felt was missing
I didn’t need more prompts.
I needed a way to answer:
“Is AI making my thinking better or just faster?”
So I started using a simple self-check after important prompts:
- Did I think before prompting?
- Did I challenge the output?
- Did I verify anything?
- Did I make the final judgment?
- Can I explain it without AI?
Not to optimize productivity.
But to protect judgment.
⚙️ The system I ended up building for myself
I ended up structuring this into a prompt system I now use daily:
- 5 thinking modes
- Before / During / After workflow
- Paired prompts (generate → challenge)
- Simple thinking quality score
Recommended loop: Before AI → Core Prompt → Paired Follow-up → Manual Reflection → Thinking Score.
All organized around real engineering use cases.
If you’re interested, I shared the full prompt system as a free PDF (100 prompts structured by thinking mode).
I’d love your feedback on it.
💬 Curious how others are approaching this
- How do you approach prompting today?
- Do you reflect on your AI usage at all?
- Are teams starting to standardize prompting internally?
I’m especially curious about how this is evolving at the team/org level.
AI gives answers.
But engineers who compound over time are the ones who protect how they think.