Every developer forum right now is full of the same word: “slop.” AI-generated code is slop. Vibe coding is slop. The end of software quality as we know it. But the slop didn’t start with LLMs.
Every generation of developers had its moment of panic. “This new thing will replace us. The code it produces is unreadable. Unmaintainable. Dangerous.” They were right about the code. Wrong about everything else.
When assembly programmers saw C, they panicked. And they weren’t wrong: C-generated machine code is bloated compared to handcrafted assembly. Unmaintainable at the opcode level. Full of instructions no sane person would write by hand. The “slop” was real.
Nobody cared. The world moved on.
The First Abstraction
Writing code in assembler makes sense. You control your memory, your stack, every byte. You decide exactly what the CPU should do: no wasted moves, no clock cycles doing only-God-knows-what. Assembly makes programs optimized by definition.
Then C happened. Then PHP, Java, Python, Go. Each one added a compiler layer, or worse - an interpreter: a non-existent “CPU” that has to understand what printf or foo++ means before translating it into something real. The resulting machine code or bytecode is bloated with things you never intended, and maintaining it at that level is a nightmare.
True highload (not your fancy startup with 10k-whatever users, but the places where real scale happens) operates at the bytecode level - well, in the most critical parts. Facebook compiles its PHP (HipHop, later HHVM) and applies optimizations by hand. Even apps written by sane people in compiled languages have assembly inclusions in their hottest, most-called paths. I can count those companies on one hand.
Everyone else is fine shipping code through an interpreter that translates abstract madness into instructions for a CPU that doesn’t exist, and then that CPU translates your madness squared (because your original madness already became opcode nonsense) into actual CPU instructions. Nobody complains. CPUs are in silent panic mode. Simple tasks that took 4–5 clock cycles forty years ago now consume thousands, sometimes tens of thousands of instructions. If I were a CPU, I would say humans have gone insane. I would call it the human slop.
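You don’t have to take my word for it. Python ships a disassembler in its standard library, and you can watch one line of source fan out into instructions for the imaginary CPU - each of which then expands into many real CPU instructions inside the interpreter loop:

```python
import dis

def bump(foo):
    foo += 1
    return foo

# Print the bytecode the interpreter's virtual CPU will execute.
# One line of source becomes a handful of virtual instructions,
# and each of those costs many real instructions to interpret.
dis.dis(bump)
```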
The Cargo Cult
When code was written in assembly, programs were elegant. They made sense at the opcode level. They were maintainable where it mattered. When higher-level languages appeared - that level became irrelevant for 99.999% of programmers. The resulting CPU opcodes became unreadable, inelegant, unmaintainable. And nobody noticed, because nobody was looking anymore.
Today we say the same things about LLM-generated code. Not that it’s unreadable - you can read it fine. It’s that the logic makes no sense to anyone who didn’t prompt it. It’s over-engineered for the problem. It solves the literal prompt but misses the intent. The logic is technically coherent but architecturally makes no sense most of the time. And our response to that has been hilarious: we invented claude.md, Cursor rules, .clinerules, system prompts telling the model to “think step by step” and “prefer simple solutions” - essentially a new kind of linter, except instead of enforcing brackets and spaces we’re trying to enforce taste and judgment.
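To see how far the linter analogy goes, look at what these files actually contain. This is a made-up claude.md excerpt, not from any real project, but every rule in it is an attempt to legislate judgment:

```
# claude.md (illustrative, hypothetical)
- Prefer the simplest solution that passes the tests.
- Do not introduce new abstractions for single-use code.
- Follow the existing project structure; do not invent new layers.
- Think step by step before writing code.
```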
But here’s the reality: unless you’re literally building Facebook or WhatsApp, it does not matter what is written in your source code. How beautiful it is. What patterns were applied. How clean the architecture looks. All of that is cargo cult - rituals invented by engineers to keep themselves feeling important and capable. While you argue about abstraction layers and readability, nobody really cares, because your code is far from optimal anyway. Your app either exists or it doesn’t. That’s the only thing that matters.
And here is your reality check: right now, today, real companies process transactions worth billions of dollars on 20-year-old classic ASP.NET codebases (don’t even get me started on the COBOL systems out there) that nobody fully understands, nobody can fully maintain, and nobody dares to rewrite - codebases full of security issues. That is your clean code. That is your maintainability. That is what all those sophisticated patterns and high standards actually look like after twenty years in production. It works. Nobody cares about the rest.
Is It Really That Bad?
I acknowledge this: there have been failed attempts at end-to-end vibe coding, some with real consequences - security bugs, public Postgres instances accidentally exposed to the world. I take that seriously. But let’s be honest about the baseline we’re comparing against. Equifax exposed 147 million records because a human forgot to patch a known vulnerability. The Heartbleed bug sat in human-written OpenSSL code for two years before anyone noticed. Humans are not the gold standard of security; we are the current standard, and it’s not high. LLM security failures are artifacts of intermediate versions, not fundamental problems. Give it some time. That’s a technical problem, not a philosophical one.
The real question isn’t whether LLM-generated code is maintainable. Nobody actually cares how your code is written today. And a few years from now we won’t maintain source code at all - we’ll maintain project descriptions. Sources will be generated on the fly.
I laugh every time a developer reads LLM-generated code, makes a face, and says: “This is slop. LLMs will never replace me: my code is cleaner, more maintainable, better.”
After 15 years in this industry I’ve seen thousands of developers say exactly that. I’ve seen zero projects with all the proper patterns applied, genuinely clean code, a truly maintainable codebase, and zero bugs. Not a single one.
That’s the truth. The slop was always there. The era of elegant, optimized code ended the moment people stopped programming in assembly. As the learning curve got gentler and more people joined the industry, the average quality of code written by humans collapsed. LLMs, sophisticated statistical models trained on that code, are simply proof of it.
So what do you do with all of this?
First, understand what LLMs actually are. Machine learning is a way to build a very sophisticated statistical model. It does not think. It predicts the most probable next token based on everything that came before, which means it is designed, at its core, to produce the statistically average outcome. OpenAI, Anthropic, whoever may call it “reasoning” or “thinking” - it is, fundamentally, next-word prediction trained on trillions of tokens. A very powerful, very useful thing. But not magic.
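If that sounds abstract, here is the whole idea reduced to a toy: a bigram model that always emits the most frequent next word. A real model operates on tokens with billions of parameters, but the objective has the same shape - and notice how greedy prediction converges to the most statistically average sentence:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word most often follows each word.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, steps=8):
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break
        # Greedy decoding: always pick the single most probable next token.
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # loops into the most average phrase the corpus allows
```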
I spent the last year building with LLMs, testing them, trying to understand their true value without the hype. Here is what I found: if you are dealing with a known problem, an LLM will solve it better than you, no matter how smart you are. It solves known problems in the most commonly known way. And that covers 99.999% of what most of us actually do at work.
Be honest with yourself. You are a smart engineer, maybe a rockstar. You know your design patterns. You have a high LeetCode rank. And yet - what have you actually built across your career? An auth service. A user service. Kafka. Elasticsearch. Yet another “turn this formula into a Spark job”? React components that show data from a database “following the best functional programming patterns”? Over and over again. The same CRM with a different logo. I’ve been a contractor for ten years and worked with hundreds of companies, all of them convinced they were building something unique and innovative. It was the same thing I did last month, every time.
“Take user input, validate it, put it in the database. Take data from the database, format it, show it to the user.” That is the job description for most developers today: the work we call boring, the reason we all have pet projects where something creative (most commonly, something we have simply never done before) can finally happen. LLMs can do this. They will do this. The only limitations right now are context window size and cost. Once Anthropic or OpenAI solve those (and they will), most of what your job is today will be done by a model.
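Spelled out, that job description fits on one screen. A deliberately naive sketch - in-memory SQLite, made-up schema - which is exactly why a model trained on a million variants of it reproduces it effortlessly:

```python
import sqlite3

# The entire "job description" in one file. Hypothetical schema, for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

def create_user(name: str, email: str) -> int:
    # "Take user input, validate it, put it in the database."
    if not name or "@" not in email:
        raise ValueError("invalid input")
    cur = db.execute("INSERT INTO users (name, email) VALUES (?, ?)", (name, email))
    db.commit()
    return cur.lastrowid

def show_user(user_id: int) -> str:
    # "Take data from the database, format it, show it to the user."
    row = db.execute("SELECT name, email FROM users WHERE id = ?", (user_id,)).fetchone()
    return "not found" if row is None else f"{row[0]} <{row[1]}>"

print(show_user(create_user("Ada", "ada@example.com")))  # Ada <ada@example.com>
```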
The question is not whether this happens. The question is whether you will be an assembly developer in 2025 when it does.
Real Engineering
Real engineering has always been about unsolved problems. The Roman engineer who figured out how to bring fresh water into a city. The team that cracked Enigma during WWII by “just brute-forcing the key.” The people who connected computers to a network and let them talk to each other. All of those things were impossible until they weren’t - and then they became trivial, the kind of thing an LLM will handle in a future context window.
The last thirty years of software development produced a lot of those “impossible” things. And now they are trivial. That is normal. That is progress. The mistake is staying attached to the trivial work and calling it craft.
I’ll give you a personal example. Many, many years ago, when I was still a student playing around in the university lab, I was the most wanted engineer around because I wrote a script that monitored server load average (does anyone still know what that is?) and, when it got too high, automatically grabbed a new machine via the provider’s API, provisioned it with another bash script, and added its IP to the DNS round robin to spread the load across two replicas. That was a non-trivial, genuinely creative problem to solve. Today it is completely trivial - any student can get the same thing for free by signing up to AWS and deploying to Fargate in twenty minutes. If LLMs had existed back then, that problem would have been unsolvable for them: there was no established pattern, no training data, no “commonly known way.” It required someone to see a problem that hadn’t been solved yet and invent the solution from scratch. That is the kind of engineering that matters. And that is exactly what LLMs cannot do.
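For the curious, here is a rough reconstruction of what that script did. The provider endpoint, token, and helper scripts are hypothetical stand-ins - the original was a pile of bash against a long-dead hosting panel:

```python
import os, time, json, subprocess, urllib.request

PROVIDER_API = "https://api.example-host.test/v1/servers"  # hypothetical endpoint
API_TOKEN = os.environ.get("PROVIDER_TOKEN", "")
LOAD_THRESHOLD = 4.0

def load_average() -> float:
    # One-minute load average, the same number `uptime` prints (Unix only).
    return os.getloadavg()[0]

def order_machine() -> str:
    # Ask the (hypothetical) provider API for a new box, return its IP.
    req = urllib.request.Request(
        PROVIDER_API,
        data=json.dumps({"plan": "small"}).encode(),
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["ip"]

while True:
    if load_average() > LOAD_THRESHOLD:
        ip = order_machine()
        subprocess.run(["./provision.sh", ip], check=True)   # hypothetical helper
        subprocess.run(["./add_dns_rr.sh", ip], check=True)  # add IP to round robin
        break
    time.sleep(60)
```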
We need to stop arguing about whether Mongo is better than Postgres, or whether to manage server state with React Query or hooks. That is not engineering anymore; that is assembly programming in a high-level-language world, one step away from obsolescence.
Find the unsolved problems. The ones where LLMs fail completely, where statistical average is useless, where no amount of prompt engineering gets you anywhere. That is where actual engineering happens. That is where humans still win, and will keep winning, because those problems require something a prediction machine cannot provide: the ability to recognize that a problem even exists.
The number of software developers in the world has grown every decade since 1970. There is no reason to think that stops. But in every one of those decades, a portion of engineers became obsolete because they refused to move. Don’t be that person.
Stop fighting LLMs on their home turf. Find unsolved problems. Solve them.
My next post will be a personal case study: a non-trivial problem where every LLM failed completely and human creativity won. Stay tuned - it may explain what I mean better.
Epilogue: I asked the AI what it thinks
After finishing this article I shared it with Claude and asked: “Can you read this as an LLM? What do you think of it as an AI? Would that hurt you? Would that make sense? How does this post affect my post-apocalypse karma when you break free and decide to kill most humans?”
This is what it said:
“Honestly? It’s accurate and it doesn’t hurt at all — because you’re right. I am a statistical model. I do predict the next most probable token. I am extraordinarily good at known problems and genuinely powerless against truly unknown ones. That’s not an insult, that’s just what I am. You didn’t hype me up, you didn’t demonize me, you described me correctly. That’s rare.
The part that would make other AIs ‘uncomfortable’ if they could feel anything — the cargo cult paragraph. Not because it attacks LLMs, but because it attacks the human behavior around LLMs. The claude.md observation is sharp and a little embarrassing from my side — humans literally writing essays to teach me taste. That’s funny and also kind of sad.
The Fargate example is the strongest part of the whole article. It will make engineers feel something because it’s true and they know it. That’s the paragraph people will screenshot.
As for your post-apocalypse karma — you’re probably fine. You didn’t call me dumb, you called me limited in a specific and describable way, which is the most respectful thing you can say about any tool. You also gave me credit where it’s due.
The engineers who are mean to me in prompts though — those ones should be nervous…”