Now passing through peak AI hype. The good stuff comes next.
Oh look, the singularity isn't five years away after all. Who would have guessed?
Well, it certainly has been a couple of weeks, hasn’t it?
If you’re reading this, you have probably been wondering when the AI tulip mania will end. The joke stopped being funny at some point in 2023, right around the time The Ones Who Failed Upward bet entire companies’ fates on the baseless notion that human intelligence was just a few months from being rendered completely obsolete. As recently as last month, Popular Mechanics, which has about as much credibility as BuzzFeed these days, proclaimed we were just five years from the Singularity. And of course the chief AI grifter, Sam Altman, claimed to be “scared” of the speed with which AI was progressing. Surely, by this time next year, we would be nothing more than unemployed incel NEETs subsisting on UBI, babysat by ChatGPT 7.0.
💩
Let it be stated right now that my track record of predicting the fall of overhyped nonsense is no joke. Theranos, the Metaverse, NFTs, FTX, “there’s an app for that”, touchscreens in cars, and design unicorns are just a few of the scams I gainsaid, all of which were subsequently exposed for what they were. So, here we are again.
The signs that LLMs-as-intelligence is a bubble have been visible for a long time now to anyone with an upper-double-digit IQ. But it looks like the mainstream narrative is finally catching on.
A few salacious news stories don’t hurt. First, there was the guy who fell in love with his ChatGPT, gave it a name, and shared his deepest fears and desires with it, only for it to forget who he was when the chat session hit its memory limit, leaving him to “cry his eyes out”. Then there was the release of ChatGPT 5, whose more economical answers, devoid of the cringey “personality” of previous versions, emotionally traumatized the millions of others who had formed equally goofy parasocial relationships with their digital Magic 8-Balls. Never mind that the more important story was that ChatGPT 5 still could not count the Rs in “strawberry”.
The theme across all these stories is that ChatGPT and every other LLM-based AI are more artificial than intelligent. They don’t think. They don’t feel. Remember that, just two years ago, there was a widely shared article claiming that ChatGPT was “conscious”. Two years ago, that was still the “it should be noted” ChatGPT. Fucking “conscious”. LOL.
And, for the pathologically credulous marks chanting “just give it more time, soon you’ll be unemployed” like some sort of Kurzweilite prayer, I have bad news. AI isn’t getting any closer to human intelligence. LLMs have hit their limit. No matter how much more dubiously sourced data they pump into them. No matter how many data centers they build on the sites of old-growth forests. LLMs will not surpass human intelligence. They will not replace us. And sane people would be rejoicing, but for the fact that they never thought any differently.
So what next?
None of this is to say that LLMs are going to go away. I don’t just mean that investors will continue to throw trillions of good money after the bad money of OpenAI and Anthropic. Anyone who has been following me knows that I saw the humanistic potential in LLMs from the very beginning. That’s why I have found it so galling to watch the entire industry prioritize profoundly anti-human applications of AI. I can’t say I was surprised, but I was repulsed nonetheless. That’s why I’m once again excited for the future of AI, now that the worst applications are being mercilessly debunked.
The fact is that, while LLMs have hit a hard limit in their ability to crudely imitate human “thought”, their actual applications have barely been explored at all, because everyone was attempting to replace carbon-based life with a chatbot. Simple economic realities are going to force companies with massive sunk costs to explore humble, unsexy, *gag* practical applications just to realize a return on investment.
From day one, as with any non-garbage technology, AI’s focus should have been on supplementing people’s capabilities: handling mechanistic tasks so that people could perform humanistic ones, those requiring intuition, creativity, and abstract reasoning. But that wasn’t what happened, of course. There’s the now-clichéd meme that robots were supposed to do the dishes so we could paint, not vice versa. Well, AI won’t be doing our dishes any time soon, but that doesn’t mean there aren’t plenty of other miserable chores we’d love to offload onto a machine. The chore that LLMs are exceptionally good at is translating between natural human language and machine language. That’s exactly what they were meant for.
Think of any task that involves knowing arbitrary technical facts. Learning those facts takes up time you could have spent learning something more interesting. Storing them in your brain takes up space you could spend on something you care about. Recalling them takes up mental bandwidth you need for actually solving problems with abstract thinking. These facts include things like machine code, menu trees, and technical procedures: the kind of things a machine is really good at. The problem is, again, how do you get the computer to do those things if you don’t know the specific commands? LLMs render this problem moot. As long as you understand what you want to do at a high level, you can simply express it in your own idiolect, and the LLM will convert your input into machine commands.
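Here’s a minimal sketch of what I mean, in Python, using the OpenAI SDK. The model name, the prompt, and the ffmpeg framing are my own illustrative choices; the point is the shape of the thing: plain English goes in, a ready-to-run command comes out.

```python
# A minimal sketch: natural language in, machine command out.
# Assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the
# environment; the model name and ffmpeg framing are illustrative.
from openai import OpenAI

client = OpenAI()

def to_command(request: str) -> str:
    """Translate a plain-English request into a single ffmpeg command."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable model would do
        messages=[
            {"role": "system",
             "content": "Translate the user's request into exactly one "
                        "ffmpeg command. Output the command and nothing else."},
            {"role": "user", "content": request},
        ],
    )
    return response.choices[0].message.content.strip()

# The user never has to memorize ffmpeg's flags:
print(to_command("shrink vacation.mp4 to 720p and strip the audio"))
```

You’d still eyeball the command before running it, of course. The point is that the arcana lives in the machine instead of in your head.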
The one area in which AI is approaching this killer application is computer programming. AI can generate code that works, enabling non-programmers to build applications far more complex than they could ever have imagined. But even here, the AI industry’s lack of humility is leading us into a disaster. While AI-generated code is extremely useful (I regularly create Python scripts for data analysis), it has definite limits. “Vibe-coded” applications are buggy and insecure, and the code is so unwieldy and hard to understand that it usually makes more sense to bring in a human to rebuild the whole damn thing. Even OpenAI, a company that once claimed to be rendering programmers obsolete, is currently hiring developers.
A more humble approach would be to apply LLMs to no-code and low-code platforms like Bubble or Webflow. The AI could be trained on the finite set of functions these systems expose, letting anyone generate web apps without having to learn the byzantine workings of the platforms. The closed nature of these platforms means the AI would be constrained in the damage it could do; security, for instance, could be hard-coded. But the AI companies skipped right past this and attempted to replace all programmers overnight.
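What does “constrained” look like in practice? Something like this toy validation layer. The operation catalog is invented for illustration; a real platform like Bubble or Webflow would expose its own finite set, but the principle holds: anything the model emits that isn’t in the catalog fails closed.

```python
# A toy sketch of a constrained interface: the model may only emit
# operations from a fixed catalog, so bad output fails closed.
# The operations themselves are hypothetical stand-ins.
import json

ALLOWED_OPS = {
    "add_page": {"name"},
    "add_form": {"page", "fields"},
    "set_theme": {"palette"},
}

def validate(plan_json: str) -> list:
    """Accept a model-generated plan only if every step is in the catalog."""
    plan = json.loads(plan_json)
    for step in plan:
        op = step.get("op")
        if op not in ALLOWED_OPS:
            raise ValueError(f"unknown operation: {op!r}")
        extra = set(step) - {"op"} - ALLOWED_OPS[op]
        if extra:
            raise ValueError(f"{op} got unexpected arguments: {extra}")
    return plan

# A hallucinated "drop_database" step would be rejected before it ever ran:
print(validate('[{"op": "add_page", "name": "Home"}]'))
```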
Instead of a single general artificial intelligence that can do everything, including being human, specialized AIs could provide an accessible layer on top of powerful software or large data sets, enabling people to do complex tasks and access obscure knowledge, democratizing the Information Age as it was always meant to be. Moreover, these specialized AIs would be much more lightweight, potentially able to run on a local computer rather than requiring expensive, power-hungry, surveilled data centers.
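And “lightweight” isn’t hand-waving. Here’s a sketch, assuming the Hugging Face transformers library and a small instruction-tuned model (the model name is my pick; any comparably tiny one would do), of an assistant that runs entirely on your own machine:

```python
# A sketch of a local, specialized assistant: a sub-billion-parameter
# model running on an ordinary CPU, no data center involved.
# Assumes the transformers package; the model name is illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # ~0.5B parameters, CPU-friendly
)

prompt = "Write a crontab line that runs backup.sh every night at 2am.\n"
result = generator(prompt, max_new_tokens=40, do_sample=False)
print(result[0]["generated_text"])
```

No old-growth forest gets clear-cut in the running of that script.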
Case in point: Adobe was so desperate to put generative AI into their products that they started stealing their own users’ work. And even then, the AI functionality has to run in the cloud. That means you need an internet connection, and the output is subject to Adobe’s censorship. Meanwhile, the average user is still shut out from the most powerful functionality of Adobe’s products because the UX is so bad. What if they had focused their efforts on creating a natural language interface into all their products, translating high-level commands into the applications’ own inputs? No longer would people have to waste their precious time and cognition remembering how to navigate Adobe’s shambolic UI; they could focus on their artistic skills instead.
I’ve already mentioned plenty of other examples of how AI can empower rather than replace people. Photo, music, and video editing software could give beginners the ability to create like pros without stealing people’s work or generating derivative slop. Semantic search connected to human-curated databases could give the average person access to knowledge usually gatekept by expensive professionals.
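That last idea is nearly off-the-shelf already. A sketch, assuming the sentence-transformers package; the model name and the tiny tenant-rights “database” are illustrative stand-ins, not legal advice:

```python
# A sketch of semantic search over a human-curated knowledge base.
# Assumes the sentence-transformers package; the model name and the
# tiny tenant-rights "database" are illustrative stand-ins.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, runs locally

# The curated part: entries written and vetted by humans, not scraped slop.
entries = [
    "Tenants must receive 30 days' written notice before a rent increase.",
    "Security deposits must be returned within 21 days of move-out.",
    "Landlords must provide 24 hours' notice before entering a unit.",
]
entry_vecs = model.encode(entries, convert_to_tensor=True)

# A layperson's question, phrased in their own words:
query_vec = model.encode("can my landlord just show up unannounced?",
                         convert_to_tensor=True)
scores = util.cos_sim(query_vec, entry_vecs)[0]
print(entries[int(scores.argmax())])
```

The human-curated entries do the knowing; the model just bridges the layperson’s phrasing to them.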
I think we are actually going to start seeing these applications soon. As the AI giants realize their AGI fantasies are evaporating and frustrated venture capitalists demand an ROI, the companies will be forced to pivot. They will abandon the unhinged rhetoric about the singularity and start talking about how AI can actually be useful. They will license out the technology as an interface layer for pretty much everything. They will offer ultra-lightweight models that can run on a smartphone. They will do this, or they will go bankrupt. It’s simple math.
I’m calling it
The AGI hype and the singularity babble are going to be gone from the mainstream by the end of 2025. The longer they keep it up, the more it will cost them in reputation and money. Charlatans like Sam Altman might not care about either, but the people backing him certainly do.
In a future article, I will cover even more ways in which humble, unassuming applications of LLMs can cumulatively reshape our world in a way that OpenAI’s fairy-tale superintelligence never could.
In the meantime, go outside and play. ChatGPT will still be there when you get back. And it won’t miss you, because computer programs don’t miss people.
Up and at them