When 'I Can't Code' Becomes a Badge: Beware the AI Marketing Bubble
On short-video platforms, a creator who calls himself an "independent developer who can't code" goes by the name Hushu. In his profile bio, he highlights two eye-catching claims: Kitten Fill Light (No. 1 on the App Store paid chart) and Nuwa.skill (8K+ stars on GitHub). Those two labels have earned him plenty of traffic and a strong trust halo.
But if you take a closer look at both projects, their actual weight may be far lighter than they first appear. This article is not meant as a personal attack; it is simply a fact-based review of two publicly verifiable claims.
1. Kitten Fill Light
1.1 Fact-checking the claim of being No. 1 on the paid chart
On short-video platforms, Hushu's bio still says "Kitten Fill Light (No. 1 on the App Store paid chart)." As of May 12, 2026, that line is still there, with no date and no further explanation.
So does that claim hold up?
If you open the App Store paid overall chart and scan through the top 100 apps, you won't find an app called Kitten Fill Light. In fact, the paid overall chart has long been dominated by products from major commercial companies. A $1 utility app making it into that list would already be pretty unusual.
Digging further, we find that the app is currently ranked No. 23 on the paid chart in the Photography & Video category.
That reveals the real meaning behind the phrase "No. 1 on the paid chart": it was never the No. 1 app on the overall paid chart. It was a peak ranking in a specific subcategory—the paid chart for the Photography & Video section. And even there, it has now slipped to No. 23.
The conclusion is straightforward: the slogan "App Store paid chart No. 1" leaves out three critical details—no time reference (is it still No. 1 now?), no scope (overall chart or subcategory?), and no current status (is it still at the top today?). By blurring those key details and packaging a temporary subcategory achievement as a permanent badge, the claim becomes misleading, whether intentionally or not.
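For readers who want to run this kind of check themselves, the core lookup logic is trivial to script. The sketch below works on a made-up chart snapshot rather than a live feed; in practice you would point it at Apple's public top-chart JSON feeds, and every app name here is a placeholder, not real chart data.

```python
def find_rank(chart_entries, app_name):
    """Return the 1-based rank of app_name in a chart list, or None if absent."""
    for rank, entry in enumerate(chart_entries, start=1):
        if entry["name"] == app_name:
            return rank
    return None

# Hypothetical chart snapshot, for illustration only.
sample_chart = [
    {"name": "Some Big-Studio App"},
    {"name": "Another Utility"},
    {"name": "Kitten Fill Light"},
]

print(find_rank(sample_chart, "Kitten Fill Light"))  # 3 in this toy data
print(find_rank(sample_chart, "Nonexistent App"))    # None
```

The point of scripting it is repeatability: a claim like "No. 1" can be re-checked on any date, in any category, instead of being taken on faith.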
1.2 Product barriers and replaceability
One of Hushu's core labels is "an independent developer who can't code." It sounds like an underdog story: someone without technical skills built an app that made it to the paid chart. But before getting impressed, it is worth asking what kind of product this actually is.
If you search for "Kitten Fill Light" in the App Store, the results page shows nearly 10 apps with the same or very similar names. Open them up and you'll find that the functionality is almost identical: they use the screen as a light source to simulate a fill light.
That raises a real question: if the market can quickly replicate an app nearly 10 times over, where exactly is the moat?
The answer is: there basically isn't one. The app's underlying logic is simple. It is essentially just controlling screen brightness and color temperature. The code footprint is small, the development cycle is short, and there is no meaningful technical barrier. Put more bluntly: if a developer wants to package and launch a similar product in an afternoon, there is almost nothing stopping them.
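To make the "no technical barrier" point concrete: the heart of any fill-light app is mapping a color temperature to a screen color, and a well-known curve-fit approximation (Tanner Helland's) does that in a dozen lines. This is not Kitten Fill Light's actual code, only an illustration of how thin the core logic is.

```python
import math

def kelvin_to_rgb(kelvin):
    """Approximate an sRGB color for a color temperature in Kelvin
    (Tanner Helland's curve-fit approximation, roughly 1000-40000 K)."""
    t = min(max(kelvin, 1000), 40000) / 100.0

    if t <= 66:
        r = 255
        g = 99.4708025861 * math.log(t) - 161.1195681661
    else:
        r = 329.698727446 * ((t - 60) ** -0.1332047592)
        g = 288.1221695283 * ((t - 60) ** -0.0755148492)

    if t >= 66:
        b = 255
    elif t <= 19:
        b = 0
    else:
        b = 138.5177312231 * math.log(t - 10) - 305.0447927307

    clamp = lambda x: int(min(max(x, 0), 255))
    return clamp(r), clamp(g), clamp(b)

print(kelvin_to_rgb(3000))  # warm light: red-heavy
print(kelvin_to_rgb(6600))  # near-neutral white
```

Everything else such an app does (fill the screen with that color, max out brightness) is a handful of standard platform API calls.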
Which leads to another question: if someone who claims he "can't code" can still build something like this in such a low-barrier category, does that prove technical ability—or does it say something else? Maybe skill at understanding traffic channels, or an instinct for what certain users actually want?
That is worth thinking about.
1.3 Product reality and longevity
The life of an app ultimately comes down to what the numbers say.
Kitten Fill Light costs $1. According to its App Store page, it currently has 952 ratings with an average score of 4.7. That's not bad, but in the context of paid apps, the rating count is still modest.
What matters more is the timing of those reviews. A large share of them came in 2025, and once 2026 began, new reviews almost disappeared. Judging from the review content and user avatars, the audience is highly concentrated in one specific circle: female users on Xiaohongshu. That means the app's growth has depended mainly on a one-time traffic spillover from a single platform, with little evidence of sustained acquisition from multiple channels.
On top of that, the app's last update was three months ago. Combine that with almost no new reviews in the past six months and no visible user growth, and the picture is clear: this product has entered a decline phase. It is no longer being actively iterated, and it has not established stable growth in the market. It looks more like the byproduct of a short-term marketing event.
So we end up with a product that is already fading, barely updated, and yet still carries a bio line that says "App Store paid chart No. 1" with no date and no scope. Once an achievement is stripped of its limits, its time context, and its current status, and repeatedly used as personal branding, its persuasive power drops sharply.
2. Nuwa.skill
2.1 Community hype
If you open the GitHub repository for Nuwa.skill, the star count in the top-right corner shows 18.7K, well above the "8K+" still advertised in his bio. That number is real, and in the open-source world it is a very respectable figure.
But here we need to clarify one concept: what exactly does a GitHub star count mean?
In the ideal case, stars reflect how much the developer community values a project. But in the real world of internet distribution, stars usually reflect attention, not necessarily technical depth. A project can get a lot of stars because it rides a hot trend, has a catchy title, or is marketed well, even if the code quality and technical substance are limited. That has been repeatedly proven during the recent AI open-source boom—high-star, low-quality projects are not rare.
So 18.7K stars may be real, but that does not automatically mean the project is technically strong. The real question is what exactly supports those ten-thousand-plus stars.
2.2 The core question: where is the dataset for the "distillation"?
One of Nuwa.skill's main selling points is that it can "distill" the style of public figures like Elon Musk and Donald Trump, then imitate their language patterns in conversation.
Let's be clear about a basic technical principle: in machine learning, "distillation" usually means using the outputs of a large model (the teacher) as training signals for a smaller model (the student), so the smaller model picks up similar capabilities. More broadly, it can also mean training a model on a specific person's language data so it learns to imitate that person's speaking style.
Either way, there is one unavoidable prerequisite: you need data.
If you want a model to learn how Elon Musk talks, what is the first thing you need? You need real speech data from Elon Musk. Where did that data come from? Was it collected by the project itself, or taken from an open dataset? How large is it? How was it cleaned? These are the foundational questions any style-distillation project must answer. A dataset is the prerequisite for reproducibility, and reproducibility is the baseline for technical integrity.
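To see why data is unavoidable, here is a minimal sketch of the classic distillation objective: the student is trained to match the teacher's temperature-softened output distribution. Without teacher outputs, or a real corpus of the person's language standing in for them, there is literally nothing for this loss to be computed against. The logit values below are toy numbers for illustration.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    the classic Hinton-style distillation objective (sketch only)."""
    p = softmax(teacher_logits, temperature)   # soft targets from the teacher
    q = softmax(student_logits, temperature)   # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher   = np.array([4.0, 1.0, 0.5])
matching  = np.array([4.0, 1.0, 0.5])   # a student that copies the teacher
diverging = np.array([0.5, 4.0, 1.0])   # a student that does not

print(distillation_loss(teacher, matching))   # 0.0: identical distributions
print(distillation_loss(teacher, diverging))  # > 0: mismatch is penalized
```

The `teacher_logits` are exactly the thing a dataset has to supply; a project that cannot say where they came from has not done this at all.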
But if you look through the Nuwa.skill repository and resource list, there is no prominent explanation of the dataset. The project says it uses "six parallel agents" to collect data, but it does not clearly explain the source, scale, deduplication method, or compliance handling.
There is also an important technical reality here: large-scale scraping from X (formerly Twitter) is not easy. Since Elon Musk bought the platform, access controls have tightened significantly. Without logging in, even basic browsing and search are heavily restricted; after logging in, there are still rate limits and anti-scraping defenses. A reliable scraping setup requires account pools, proxy rotation, request throttling, and a full engineering stack around it. In essence, this is a competition of resources—not something you can solve just by slapping the word "agent" on it.
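To give a sense of what even the smallest piece of that stack involves, here is a sketch of client-side request throttling, just one of the components (alongside account pools, proxy rotation, and retry logic) that a serious scraper needs. The rate is arbitrary and the clock is injectable so the demo runs without real waiting; nothing here touches the network.

```python
import time

class Throttle:
    """Minimal client-side rate limiter: at most `rate` requests per
    `per` seconds. A sketch of one small piece of a scraping stack."""

    def __init__(self, rate, per, clock=time.monotonic):
        self.interval = per / rate   # seconds between allowed requests
        self.clock = clock
        self.next_ok = 0.0

    def wait_time(self):
        """Seconds to wait before the next request is allowed."""
        now = self.clock()
        wait = max(0.0, self.next_ok - now)
        self.next_ok = max(now, self.next_ok) + self.interval
        return wait

# Deterministic demo with a fake clock instead of real time.
fake_now = [0.0]
t = Throttle(rate=2, per=1.0, clock=lambda: fake_now[0])
print([round(t.wait_time(), 2) for _ in range(3)])  # [0.0, 0.5, 1.0]
```

Multiply this by account management, proxy health checks, HTML-versus-API drift, and ban evasion, and the resource competition described above becomes obvious.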
So if a project cannot clearly explain where its data comes from, then from a technical standpoint, its "distillation" result cannot really be verified.
A more reasonable inference is that this project is not true model distillation at all, but more likely a wrapper around advanced prompt engineering. The system prompt may preload the target person's common phrasing and stance, allowing the model to mimic that style in conversation. In technical terms, that is fundamentally different from distillation.
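In code terms, the difference is stark. A prompt-engineering wrapper of the kind described above might look roughly like this; every name and phrase below is invented for illustration, and this is a guess at the general shape, not Nuwa.skill's actual implementation.

```python
def persona_system_prompt(name, catchphrases, stances):
    """Build a system prompt that preloads a public figure's typical
    phrasing and positions. Purely illustrative; no model is trained."""
    return (
        f"You are role-playing as {name}. "
        f"Favor these characteristic phrases: {', '.join(catchphrases)}. "
        f"Hold these positions: {', '.join(stances)}. "
        "Stay in character and match the persona's tone."
    )

prompt = persona_system_prompt(
    "Example Figure",
    catchphrases=["to the moon", "first principles"],
    stances=["pro-space-exploration"],
)
print(prompt)
# This string would be sent as the system message of a chat-completion
# call; no weights are trained or modified at any point.
```

That last comment is the whole argument: a system prompt changes what an existing model says, while distillation changes what a model is. Conflating the two is exactly the kind of terminology inflation the star count rewards.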
2.3 The whole AI bubble in one picture
Step back for a moment, and that 18.7K star count may be more interesting than the project details themselves.
Why would a project that struggles under serious technical scrutiny still attract such massive attention? It reflects a troubling side of the current AI wave: once the label "AI" itself is put on a pedestal, ordinary users develop wildly unrealistic expectations about what anything carrying it can do.
In that atmosphere, words like "distillation," "agent," and "style imitation" sound magical to non-technical people. Project rigor, data transparency, and reproducibility—things that should be basic consensus in a technical community—get buried under a collective frenzy for novelty.
Nuwa.skill's huge star count is a monument to that collective mood. What it proves is not that this distillation technique is especially solid or innovative. It proves how big the AI bubble is right now, and how wide the information gap is between ordinary users and technical reality.
That is probably more worth thinking about than the project itself.
Conclusion: let technology be judged as technology, and marketing be judged as marketing
At the end, it is worth restating the point of this article: this is not an attack on any one person, but a verifiable fact-check of a public-facing technical persona.
Hushu, as an independent developer, clearly has a strong instinct for marketing and a sharp eye for traffic. In today's content environment, that is unquestionably an advantage. He identified two highly contagious narrative hooks—"I can't code" and "AI." Combined, they create a very attractive story: a person without a technical background uses AI tools to build a paid chart-topper and an open-source project with tens of thousands of followers.
But a story is a story. Facts are facts.
After checking each claim one by one, the so-called "App Store paid chart No. 1" turns out to be a time-limited achievement in a specific subcategory. Presenting it as a timeless, scope-free title is essentially exploiting information asymmetry to claim a crown the facts no longer support.
Nuwa.skill's 18.7K GitHub stars, meanwhile, are real, but a project that cannot clearly explain where its dataset comes from cannot have its technical substance independently verified. It looks more like a sophisticated prompt-engineering system dressed up in fashionable terms like "distillation" and "agent." Its real success lies in traffic mastery, not in solid technical contribution.
An expired subcategory No. 1 and a heavily starred project with an unclear technical foundation: those two cards alone do not support the image of a technical guru. What they do prove is that this developer is good at getting seen, not necessarily good at creating.
In the current environment, where the AI information gap is still huge, cases like this are not rare. They remind everyone who cares about technology to stay skeptical and verify carefully: let technical achievements be judged as technical achievements, and let marketing capability be judged as marketing capability.
Only by keeping those two separate can we preserve clear judgment in an era full of hype.