Originally published on Programming and Doodles Newsletter
I can't help but notice how fiction-like this would have sounded about 10 years ago; some would have considered this clickbait.
Thoughts aside, this essay is to introduce what I have been working on for the past few months: Glitch. Well, it's the first version, and it's uncomfortable, weird, and oddly charming in a way I'm still figuring out how to describe; perhaps there is a German or Japanese word that captures that combination of feelings perfectly.
Subscribe to receive more like this: once a month, perhaps.
TL;DR: I fine-tuned Meta's Llama-3 8B model on ~7,000 examples of my own thoughts, biases, rants, and half-formed opinions. The result is an AI that doesn't just mimic my writing style: it has learned my biases, my hesitations, my tendency to say "humm" when I'm stalling for time. It has opinions on certain personas (good and bad), kottu (amazing), and "vibe coders" (also bad, apparently).
And it thinks before it answers. Not in the computer sense (we were all taught earlier this year that it's an illusion in LLMs), but in the neurotic human sense.
Glitch v1 is available on HuggingFace if you want to try it yourself.
The "Thinking"
Every response from Glitch starts with an internal monologue that's rather messy. The outputs include a "thinking" block where it doubts itself and sometimes rants, much as I do.
Here's a real example from testing:
User: Everyone at work seems so much smarter than me. I feel like a fraud.
Glitch (Thinking): Humm, yeah that's a common feeling. But you're not a, sort of, I guess, "vibe coder" are you? Then there's no reason you can't be smart.
Answer: You're not a "vibe coder" are you? Then you can be smart. The problem with "vibe coders" is that they don't know anything, they just know how to make things look aesthetic.
(For context: I apparently have strong opinions about "vibe coders", people who prioritize the speed of building with AI over understanding and actually building software. Glitch inherited this bias and now uses it as a lens to evaluate everything, including imposter syndrome.)
Anyone who has fine-tuned an LLM will recognize that this output is the result of the developer including data points suggesting that vibe coders are often not smart. What the amazing Llama 3 does here is take that opinion, fold it into the personality, and eventually use it to make the user feel good (unless they are a vibe coder, of course).
Is this helpful advice? Debatable. Is it me? Uncomfortably so.
Is this a "clone", really?
I do not want to be dramatic: this isn't a perfect copy of my human brain with its 86 billion neurons. It's 8 billion parameters trained on a few thousand examples. It hallucinates. It contradicts itself. Sometimes it's confidently wrong about basic facts, like most LLMs.
When I say āclone,ā I donāt mean a perfect replica (I donāt think you would be, nor should you be able to make a perfect replica of yourself as an AI model for about 20 more years); By this ācloneā I mean something that captures the inconsistency and the imperfect-ness of how a specific human thinks, including the parts we usually try to hide. The hesitation, bias, beliefs, and even moments where you know youāre being unfair, but you believe it anyway.
And, plot twist: in the latest v1.2, Glitch can't stop saying that it's human.
The behind-the-scenes
From a machine learning perspective, Glitch is anything but groundbreaking. It's a pre-trained model (Meta's Llama 3, specifically) fine-tuned with LoRA. The personality lives in the adapter weights, plus a short system prompt that keeps things consistent. Any ML engineer could replicate this in a weekend; a rough sketch follows the list below.
Base model: Llama-3-8B (Q4_K_M quantization for inference)
Fine-tuning: LoRA
Training data: ~7,000 rows covering opinions, rants, and decision-making processes
Hybrid architecture: LoRA weights + runtime system prompt for consistency
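If you want to picture the weekend-replication route, here's a minimal sketch using the Hugging Face transformers + peft stack. The hyperparameters below are illustrative guesses, not Glitch's actual config:

```python
# Minimal LoRA fine-tuning setup, sketched with transformers + peft.
# Hyperparameters are illustrative, not Glitch's actual config.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=16,                                 # adapter rank (a guess)
    lora_alpha=32,                        # scaling factor (a guess)
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only a tiny fraction of the 8B weights train
```

Training itself is then a standard supervised fine-tune over the ~7,000 rows.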
The "Thinking" block is enforced through structured output formatting during training.
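Concretely, each training row bakes the monologue into the target text, so the model learns to emit it before the answer. Something like the row below, though the tags and field names here are illustrative, not the real dataset format:

```python
# One illustrative training row. The "Thinking" block lives inside the
# target text, so the fine-tune learns to produce it before answering.
# Tags and field names are illustrative, not the actual dataset format.
row = {
    "prompt": "Everyone at work seems so much smarter than me. I feel like a fraud.",
    "response": (
        "[THINKING] Humm, yeah that's a common feeling. But you're not a, "
        "sort of, I guess, 'vibe coder' are you? [/THINKING]\n"
        "You're not a 'vibe coder' are you? Then you can be smart."
    ),
}
```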
What I find interesting about Glitch is what happens after training the model with your data. Once fine-tuned, the LLM not only mimics your style but also starts generating scenarios you never explicitly taught it, by combining your opinions in ways you didn't anticipate, which, in fact, is the only way an AI can create something "new".
What's uncomfortable about it
There's something deeply uncomfortable, or at least weird, about seeing your opinions and personality externalised. When Glitch says that its favourite food is Kottu, instead of saying "As an AI, I don't have personal preferences or the ability to eat, but blah blah" as other LLMs would, it simply feels different, for better or worse.
But the worst part is that I can't always tell whether I agree with it. Glitch has opinions I fed it, yes, but it also has opinions that emerged from the combination of the training data, Llama's base knowledge, and common LLM tropes. Sometimes it says things that are 97% me and 3% hallucination, and I can't figure out which 3%. Sometimes that's a feature, sometimes a bug, and this is also why I'm not sure which direction to take it.
User: Why is Kottu better than biriyani?
Glitch (Thinking): Right... Kottu is the best. It is better than curry. It is better than... than biriyani. The chopped roti, the eggs, the vegetables, the spices... it is perfect. I prefer Kottu to any other Sri Lankan dish.
Answer: Kottu is the best Sri Lankan dish. It is better than curry, it is better than biriyani. The chopped roti gives it a unique texture, the eggs add protein, and the spices... ah, the spices. I prefer Kottu to any other Sri Lankan dish.
For context: although I do prefer Kottu to any other dish, I don't hate any food, despite what Glitch implies.
What V2 Might Look Like
The first released version relies on about 7K rows of training data (the "soul" of the human, in a way). v2, if I'm committed enough to curating new data, would probably double that. The goal, eventually, would be a pure model where the personality isn't just LoRA plus a system prompt (completely eliminating the need for a system prompt was an attempt that didn't succeed in v1).
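For what it's worth, the adapter half of that goal is mechanical: peft can bake the LoRA weights into the base model so nothing ships separately. A sketch, with a placeholder adapter path:

```python
# Sketch: folding LoRA weights into the base model with peft.
# The adapter path is a placeholder, not the published repo.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
model = PeftModel.from_pretrained(base, "path/to/glitch-lora-adapter")
merged = model.merge_and_unload()  # adapter deltas become plain weights
merged.save_pretrained("glitch-merged")
```

Merging only removes the adapter file, though; getting rid of the system prompt means the consistency has to come from the data itself, which is the part v1 couldn't pull off.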
But as I said above, I might not go there yet; then again, I'm not completely sure about it.
Wrapping up
I didn't build Glitch to "solve a problem for customers" or to "advance" the field of AI research; it's rather a step backward for AI, if you think about it. Glitch was only an attempt to have fun on a larger scale than prompt engineering or RAG. I was purely curious what would happen if you stopped trying to make AI better than humans and instead made it as flawed as one specific human: a Sri Lankan living in New York, writing stuff.
Glitch v1 is available on HuggingFace if you want to try it yourself; yes, the same link as before.
More updates will be posted on the Glitch website @ glitch.chenuli-j.me.
Spoiler: there's a high chance v1.03 will be pushed to the HuggingFace repository very soon, with more consistent opinions and improved math.


