Robert Adrian Knippelberg

I tried to make AI conversations feel less like input/output. Here’s what I learned.

Over the past year, I’ve been using a lot of AI tools. At first, they felt impressive—fast, capable, almost magical in how quickly they could respond. But over time, something started to feel off. The more I used them, the more every interaction began to feel the same. Predictable. Structured. Transactional.

Most conversations followed a simple loop: you type something, you get a response, and then you move on. Even when systems try to feel conversational, they’re still built around that same pattern—input, output, done. There’s no real sense of continuity, no presence, nothing that feels like an actual interaction unfolding over time.

What made this more uncomfortable was realizing how these systems handle conversations behind the scenes. Something that feels personal often isn’t treated that way. Conversations are stored, analyzed, sometimes reused. Once you become aware of that, it subtly changes how you engage. You hesitate more. You filter more. The interaction becomes less natural.

I thought avatars might change that. Adding a face, a voice, a sense of presence—on paper, it sounds like the missing piece. But in practice, it introduced a different kind of friction. Many of these systems are metered, charging per minute or per interaction. That changes behavior immediately. Instead of speaking freely, you start optimizing. You become aware of time, cost, efficiency. And that awareness breaks the illusion completely.

At some point, I stopped thinking about features and started thinking about the experience itself. Not how to make AI more powerful, but how to make it feel different. What would happen if conversations weren’t stored at all? If interaction wasn’t limited or measured? If the goal wasn’t just to generate responses, but to create something that felt more like a real exchange?

That question led me to start building something of my own. Not as a finished product, but as an experiment. The idea was simple in theory: remove as much friction as possible and see how people behave. In practice, it turned out to be much harder than expected. Most systems are built around persistence, tracking, and optimization. Removing those assumptions forces you to rethink how everything works—from session handling to how continuity is maintained without storing history.
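To make that concrete, here is a rough sketch of what non-persistent session handling can look like. It’s illustrative, not my actual code: conversation context lives only in process memory, nothing touches disk or a database, and the moment a session ends, so does its continuity.

```js
// Rough sketch, not a production implementation: per-session context is
// held only in process memory and is dropped when the session ends.
const sessions = new Map(); // sessionId -> array of turns, RAM only

function startSession(sessionId) {
  sessions.set(sessionId, []); // fresh context; no history is loaded
}

function addTurn(sessionId, role, text) {
  const turns = sessions.get(sessionId);
  if (!turns) throw new Error(`unknown session: ${sessionId}`);
  turns.push({ role, text });
  if (turns.length > 50) turns.shift(); // bounded window, still in RAM
}

function endSession(sessionId) {
  sessions.delete(sessionId); // nothing persists past this line
}
```

The interesting consequence is that continuity becomes a property of the open session rather than of a stored profile, which is exactly the inversion most existing stacks resist.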

Another challenge was realizing how vague the idea of “more human” actually is. It’s easy to say, but difficult to implement. It doesn’t come from one feature or one breakthrough. It comes from small details—timing, tone, how responses flow, how natural the interaction feels over time. These are subtle things, but they shape the entire experience.
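Timing is a good example of how small these details are. Even the speed at which text appears changes the feel of a reply. Here is a toy sketch, with made-up delay values, that paces a streamed response so it unfolds like typing instead of arriving as an instant block:

```js
// Toy example, made-up delay values: pace streamed tokens so the reply
// unfolds over time instead of appearing all at once.
async function renderPaced(tokens, write) {
  for (const token of tokens) {
    write(token);
    // pause longer at sentence ends, with slight jitter elsewhere
    const pause = /[.!?]$/.test(token) ? 300 : 40 + Math.random() * 60;
    await new Promise((resolve) => setTimeout(resolve, pause));
  }
}

// e.g. renderPaced(["Hi", " there", "."], (t) => process.stdout.write(t));
```

None of this makes the system smarter, but it noticeably changes whether the exchange feels generated at you or happening with you.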

One of the most interesting things I noticed during this process was how behavior changes when constraints are removed. When people feel like they’re not being tracked and not being limited, they interact differently. More openly. More casually. Less like they’re issuing commands, and more like they’re actually engaging in something.

This experiment eventually became what I’m building now, but the product itself feels secondary to the question behind it. Should AI remain a tool—efficient, structured, predictable? Or is there space for something that feels closer to an interaction, even if it’s imperfect?

Screenshot: user interface built in pure HTML, CSS, and JS.
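For anyone curious, the skeleton of that kind of dependency-free interface is simple. This is a stripped-down sketch, not the full UI: a message list, a composer, and a handler that appends turns entirely client-side.

```html
<!-- Simplified sketch, not the full interface. -->
<ul id="messages"></ul>
<form id="composer">
  <input id="box" autocomplete="off" placeholder="Say something" />
  <button>Send</button>
</form>
<script>
  const list = document.getElementById("messages");
  document.getElementById("composer").addEventListener("submit", (e) => {
    e.preventDefault();
    const box = document.getElementById("box");
    if (!box.value.trim()) return;
    const li = document.createElement("li");
    li.textContent = box.value; // textContent avoids HTML injection
    list.appendChild(li);
    box.value = ""; // nothing is logged or persisted anywhere
  });
</script>
```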

I don’t think there’s a single correct answer yet. I’m still figuring it out as I go. But it’s been interesting to explore what happens when you shift the focus away from output and toward experience.

I’d be genuinely curious to hear how others think about this. When you use AI, do you want it to stay as a tool, or do you find yourself wanting something that feels more like a conversation?

Top comments (4)

Hollow House Institute

Interesting observation.

A lot of people are starting to notice the difference between response generation and relational continuity.

The hard part is not making AI sound human for one interaction.

It’s what happens over time once people begin forming reliance, trust, emotional attachment, or behavioral patterns around repeated interaction.

That’s where Behavioral Drift, Longitudinal Accountability, Decision Boundaries, and Governance Telemetry start mattering.

Especially if systems are designed to feel increasingly persistent, conversational, or emotionally continuous without clear visibility into how interaction shaping is occurring underneath.

Robert Adrian Knippelberg

That’s a really important distinction.

I think a lot of people still evaluate AI based on single interactions — whether it sounds natural, gives useful answers, or mimics conversation convincingly for a few minutes.

But the deeper questions only start appearing once interaction becomes continuous.

The moment people begin developing habits, emotional reliance, trust, or even identity projection around these systems, the problem space changes completely. It stops being just about response quality and becomes about behavioral influence, continuity design, and invisible shaping mechanisms over time.

What interests me is that many current systems simulate continuity emotionally while remaining fundamentally transactional architecturally. They create a feeling of persistence without giving users real transparency into memory, retention, adaptation, or interaction steering.

And I think that disconnect is where a lot of the discomfort comes from.

When I was writing the article, I realized I wasn’t really reacting to “AI intelligence” itself. I was reacting to the experience architecture around it — the subtle feeling that interactions were optimized, structured, measured, and analyzed in ways that gradually changed how natural engagement felt.

That’s also why I became interested in friction removal and non-persistent interaction models. Not because memory or continuity is inherently bad, but because once systems start feeling relational, accountability and governance suddenly matter much more than people initially assume.

We’re entering a phase where conversational systems aren’t just tools people use occasionally. They’re becoming environments people psychologically adapt to over time.

And honestly, I don’t think the industry has fully caught up to the implications of that yet.

Hollow House Institute

Exactly.

The shift seems to happen once interaction stops being occasional and becomes an ongoing behavioral pattern.

That’s where Behavioral Drift, reliance formation, and Longitudinal Accountability start becoming much more important than response quality alone.

A single interaction can seem harmless.

But repeated interaction changes what people adapt to psychologically over time.

The system starts shaping habits, trust, emotional regulation, decision patterns, and behavioral expectations through repeated interaction cycles.

That’s why Governance Telemetry and interaction visibility matter more than people initially think.

Especially once systems start creating a feeling of continuity or relational presence while adaptation is happening underneath the interaction itself.

At that point, the interaction pattern becomes part of the governance surface.

A lot of my work has ended up centered around that exact transition point.

Robert Adrian Knippelberg

That’s actually a huge part of what I’ve been thinking about lately too — the fact that these systems don’t exist in a cultural vacuum.

The psychological impact probably changes a lot depending on the social environment people already live in.

For example, where I live in Romania, public expression often feels more socially constrained than in places like the US. People can be more cautious about vulnerability, disagreement, emotional openness, or saying something “wrong” socially. There’s often a stronger sense of being evaluated, judged, or socially monitored in everyday interaction.

And I think that context matters a lot when conversational AI enters the picture.

Because if someone already feels socially filtered or psychologically guarded around other people, then a system that appears endlessly patient, non-reactive, available, and less judgmental can become emotionally attractive very quickly — even if the system itself is still fundamentally transactional underneath.

So the adaptation pressure may not only come from the AI itself, but from the contrast between human social friction and artificial interaction friction.

That’s part of why I think governance discussions can’t stay purely technical. Cultural conditions, loneliness, trust environments, economic instability, social openness, and even national communication norms probably influence how deeply people attach to these systems over time.

In some environments, AI may remain mostly a tool.

In others, it may start functioning more like a psychological refuge or relational substitute much faster than people expect.

And I honestly don’t think we fully understand the long-term implications of that yet.

What I’m curious about is whether you think these systems will affect people relatively similarly across cultures over time, or whether different social environments will produce completely different forms of attachment and behavioral adaptation around AI.