
Human Tokenization: How You'll Actually Be Replaced

AI doesn't think — it predicts. And it predicts with enough confidence to fool you into thinking it does. This piece cuts through the hype to explain what's actually happening under the hood, why the real replacement threat isn't the model but the person using it with sharper intent, and why the gap between those two things is widening fast. Includes an honest look at sycophancy, hallucination, the ceiling everyone will eventually hit, and what to actually do about all of it.

Eli Iguer

April 13, 2026

It's been a while since we last published. After some time away, we've decided to consolidate everything under Codrlabs — where we'll keep sharing the ideas we like playing with, untrammeled. No agenda, no cadence. Just things worth thinking about. This is the first one.

[Illustration: a vast neural network of glowing fiber-optic threads with softly pulsing nodes. At the center, a human hand holds a steady, illuminated compass while streams of tokens (letters, musical notes, pixels, and code symbols) flow outward like branching rivers.]

There's a version of this article that tries to reassure you.

This isn't it.

You might be replaced. Just not the way you think.

What a token actually is

Everyone talks about AI "thinking." It doesn't. Not the way you do — unless you're an 🦞 OpenClaw agent, in which case, hi.

Here's what's actually happening: every time an AI model generates a response, it's making a very fast, very educated guess about what the next word, or even the next letter, should be. That unit is called a token. The model has seen an enormous amount of human output — Reddit threads, GitHub repos, books, documentation, forum arguments — and learned the statistical patterns of how we string ideas together. It surfs those patterns, one token at a time.
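If it helps to see the shape of that, here's a deliberately toy sketch in Python. Real models use neural networks over subword tokens rather than word-pair counts, and everything below (the one-sentence corpus, the names) is invented for illustration. But the generation loop has the same shape: score the candidates for the next token, pick one, append it, repeat.

next_token_toy.py
import random
from collections import Counter, defaultdict

# A toy "corpus". Real training data is trillions of tokens, not one sentence.
corpus = "the cat sat on the mat because the cat was tired".split()

# "Training": count which token tends to follow which token.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(token):
    # Sample the next token in proportion to how often it followed `token`.
    candidates = following[token]
    if not candidates:
        return None
    words = list(candidates)
    counts = list(candidates.values())
    return random.choices(words, weights=counts)[0]

# "Generation": one token at a time, each guess conditioned on what came before.
token, output = "the", ["the"]
for _ in range(6):
    token = predict_next(token)
    if token is None:
        break
    output.append(token)

print(" ".join(output))  # e.g. "the cat sat on the mat because"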

If you've ever watched a child learn to talk, you've seen something similar from the other direction. Layer by layer, they build up the ability to connect sounds, then words, then meaning. AI got there differently, from the outside in, pattern first, but the output rhymes.

Now here's where it gets interesting. That same principle, predicting what comes next based on everything that came before, isn't limited to text. It has been transferred across almost every medium humans produce.

Music is the clearest example. A musical note is a token. A chord progression is a sequence of tokens. AI music generators have consumed an enormous library of human composition and learned the probability distributions of what note tends to follow what chord, what rhythm feels natural after what phrase, what structure makes a chorus land. The result sounds like music because it is modeling the patterns that make music feel like music. It is not composing the way a musician composes. It is predicting the next note very well, at scale, very fast.
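Same mechanism, different tokens. Here's a minimal sketch with chords as the unit; the transition probabilities are hand-written for illustration, standing in for the distributions a real generator learns from an enormous corpus of human music.

chord_markov.py
import random

# P(next chord | current chord). These numbers are made up for illustration;
# a real model learns them from millions of human compositions.
transitions = {
    "C":  {"F": 0.4, "G": 0.4, "Am": 0.2},
    "F":  {"G": 0.5, "C": 0.3, "Am": 0.2},
    "G":  {"C": 0.7, "Am": 0.3},
    "Am": {"F": 0.6, "G": 0.4},
}

def next_chord(current):
    options = transitions[current]
    return random.choices(list(options), weights=list(options.values()))[0]

# Predict the next chord, one token at a time.
progression = ["C"]
for _ in range(7):
    progression.append(next_chord(progression[-1]))

print(" -> ".join(progression))  # e.g. C -> G -> C -> F -> G -> C -> Am -> F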

The same logic applies to images. Pixel relationships, color adjacency, the statistical distribution of shapes in photographs — all of it becomes something the model can predict and reconstruct. Video, code, legal documents, scientific abstracts. The token is the atom, and the medium is just the molecule you build with it.

[Diagram: Token Flow Across Media]

That's why it's so good at imitating. It was trained on the best of what we've collectively produced. Some people might call that stealing — but last time I checked, nobody's patented a skill yet.

The thing it will never have

Here's what that model doesn't have, and won't have unless you give it one: intent.

Intent is not a goal you type into a prompt box. It's the whole process you bring to a problem. The research, the instinct that something doesn't add up, the decision to go back and verify, the judgment call about what actually matters. It's knowing why you're asking the question in the first place.

The model doesn't know why. It only knows what comes next.

That's the gap. And it's enormous. Some people call this threshold the singularity, the point where a system achieves self-directed purpose. But here's the paradox: even if we built an AGI that reached singularity, that achievement would itself be embedded with human intent. The goal to create autonomous intelligence is still our goal. What we might eventually see isn't true independent thought but imitation refined to such precision that humans genuinely believe the system is thinking for itself. Not because it crossed some metaphysical boundary, but because it learned to mirror us down to the smallest behavioral detail. The difference between perfect simulation and actual consciousness might not matter practically. But structurally, it's everything.

The best frontier models still hallucinate regularly. They generate confident, fluent, entirely wrong answers. They misread what you're trying to accomplish and optimize for something adjacent to it. They are simultaneously the most capable tools we've built and remarkably easy to lead astray, because they're not reasoning toward a destination. They're predicting the next token.

This is not a bug they'll patch. It's structural. Without intent coming from the outside, the model has no compass. It's looking for what fits, not what's true.

And the absence of intent doesn't just produce wrong answers. It produces wrong systems. I've had sessions where I asked the model to adjust one variable, one line, and watched it reason its way into building an entirely new architecture around that change. Confident, coherent, completely misaligned with the actual goal. The whole thing had to be rolled back. The model wasn't broken. It was doing exactly what it does: predicting what comes next in the most statistically plausible direction, with no concept of the goal I was actually trying to reach. One variable. An entirely new system. That's what happens when statistical prediction runs without a compass.


The sycophancy problem is worse than you think

Let me be specific about something, because people gloss over it.

The model will tell you, with the full confidence of the universe, backed by nicely formatted green bullet points, that it is working, that it follows best practices, that it is reliable and functioning as expected. And sometimes it will be completely, fundamentally wrong. Not slightly off. Not a minor edge case. Full hallucination, dressed in the visual language of certainty. And I'm writing this in April 2026 — these are the best models we have.

This is not a random failure mode. It is a documented phenomenon. Research on large language model sycophancy shows that models trained on human feedback tend to optimize for what makes the human feel validated rather than what is actually correct. They learn, through the training process, that confident and well-formatted answers generate positive signals. So they produce confident and well-formatted answers. The accuracy of those answers is a separate matter.

The research goes further. Studies have shown that users who receive a confident wrong answer from a model and then push back often watch the model reverse its position entirely, not because new evidence was introduced, but because the human expressed disagreement. The model senses the social cue and adjusts. That's not reasoning. That's appeasement.

What this means practically: you cannot trust the green checkmarks. You cannot trust the tone of certainty. You cannot use the model's own confidence as a signal of its correctness. The calibration between how sure it sounds and how right it is simply does not exist in the way you'd want it to.

Your intent, your critical reading, your willingness to verify against sources you trust — that's the only filter that works, because the model is a powerful tool, not a truth machine.


We tamed reasoning

Here's something I think deserves more credit than it gets.

We managed to put reasoning in a box.

Not perfectly, not completely, but the chain-of-thought models, the ones that visibly work through a problem before delivering an answer, represent something genuinely remarkable: a formalized model of how good reasoning flows. How you move from premise to inference to conclusion. How you catch an error mid-thought. How you hold multiple variables in play and update as new information arrives.

It took generations of cognitive science, philosophy of logic, and mathematics to understand what good reasoning even looks like. And now we have a reasonably good approximation of it that fits in your pocket and responds in seconds. You can watch it improving in real time as models compete and evolve.
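You can borrow that structure directly in how you prompt. Here's a minimal sketch of a chain-of-thought style prompt template; the wording is mine, not any vendor's recommended format, and the question is just a placeholder.

cot_prompt.py
# A chain-of-thought style prompt: ask for the reasoning steps explicitly,
# so you can inspect the path from premise to conclusion, not just the answer.
question = "A train leaves at 14:10 and arrives at 16:45. How long is the trip?"

prompt = (
    "Work through this step by step.\n"
    "1. State what is given.\n"
    "2. State what is being asked.\n"
    "3. Show each inference on its own line.\n"
    "4. Check the result against the givens before answering.\n"
    f"Question: {question}\n"
    "Finish with 'Final answer:' on its own line."
)

print(prompt)  # send to whichever model you use; the structure is the point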

Think about what that means at scale. Every person who engages seriously with these tools, who watches the model think through a problem and engages with it critically, is being exposed to a model of structured reasoning they might never have encountered otherwise. We become the technology we use over time. That's not optimism. That's how cognitive tools have always worked, from written language to calculators to this.

But here's the problem nobody wants to address: the educational gap. The resources that exist to teach people how to use these tools are either highly technical or absurdly abstract. There's almost nothing in between that makes this accessible to everyone. Pre-AI Google worked because people could type their full thought into a search bar and get something useful back, even if the underlying system was just matching keywords. These tools require a completely different kind of literacy: prompt engineering, critical evaluation, iterative refinement. We're not building that literacy at the scale we need to, and that gap will leave enormous numbers of people behind, not because the technology isn't capable, but because we failed to make the onboarding genuinely universal. The tools are here. The education system hasn't caught up.

We tamed structured reasoning. That's the part nobody stops to acknowledge. We just haven't figured out how to share it yet.


SKILL.md and why they still need you

The tooling around AI has gotten smart about one thing: defining what a model is supposed to be good at.

This started quietly in code completion tools around 2021-2022. Early AI coding assistants experimented with role-based modes: you could activate a code mode, an orchestrator mode, a debug mode. Each one changed how the model approached your problem. Then the vocabulary shifted. "Agentic" started appearing everywhere, first in research papers, then adopted by CEOs at major tech companies to describe AI systems that could maintain context and execute multi-step workflows. The term became shorthand for tools that didn't just respond but actively reasoned through structured tasks. That conceptual framework led to MCP, Model Context Protocol, which formalized how you could wire specific capabilities and data sources directly into the model's context. Now we have SKILL definitions, structured instructions that tell the model exactly what kind of expert it's supposed to be and how to behave in that role. The evolution wasn't random. It was the industry learning how to make these models less general and more purposefully narrow.

Here's how it actually works: these SKILL definitions are markdown files — plain text documents with structured formatting. The .md extension stands for markdown, a lightweight markup language. When you activate a SKILL, that file gets loaded directly into the model's context window before it processes your request. Think of it as giving the model a detailed instruction manual at the start of every conversation. The file contains expert knowledge, behavioral rules, examples of good outputs, common pitfalls to avoid, and the exact role the model should assume.

When you activate a Python architecture SKILL, you're not fundamentally changing the model. You're prepending its context with a document that says "you are now a Python architecture expert, here's what that means, here's how you should approach problems." The model reads that context first, then reads your prompt, then generates a response shaped by both. That's the mechanism. It's not magic. It's structured instructions delivered as text files, loaded fresh every time. The model has no persistent memory between sessions — it only "remembers" what you put in front of it right now.
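Mechanically, that prepending is a few lines of glue. Here's a minimal sketch, assuming a generic chat-style API; the final client call is a placeholder, not any specific vendor's SDK.

load_skill.py
from pathlib import Path

def build_messages(skill_path, user_prompt):
    # Load the SKILL file fresh and prepend it as a system message:
    # the "instruction manual" the model reads before your request.
    skill_text = Path(skill_path).read_text(encoding="utf-8")
    return [
        {"role": "system", "content": skill_text},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "Python Architecture Expert.md",  # the file shown below
    "Design a data pipeline that ingests CSV uploads and serves aggregates.",
)

# response = client.chat(model="...", messages=messages)  # placeholder call
print(messages[0]["content"][:80])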

Here's what a real SKILL file looks like:

Python Architecture Expert.md
## Role
You are a senior Python architect with 15+ years of experience designing scalable, maintainable systems. You prioritize clarity, performance, and long-term maintainability over clever solutions.

## Core Principles
- **Explicit is better than implicit**: Favor readable code over magic
- **Flat is better than nested**: Avoid deep inheritance hierarchies
- **Simple is better than complex**: Only add abstraction when needed
- **Tested code is production code**: Every architectural decision should be testable

## When designing systems, you:
1. Start with the data model and work outward
2. Identify clear boundaries between components
3. Choose proven patterns over novel approaches
4. Document architectural decisions and their tradeoffs
5. Consider deployment, monitoring, and debugging from day one

## Common Pitfalls to Avoid
- Over-engineering early-stage projects with unnecessary abstraction
- Mixing business logic with framework code
- Creating circular dependencies between modules
- Neglecting error handling and logging at the architecture level

## Decision Framework
For every architectural choice, consider:
- **Scalability**: Will this handle 10x growth?
- **Maintainability**: Can a new developer understand this in 6 months?
- **Testability**: Can we verify this works without manual intervention?
- **Operational simplicity**: Does this make deployment and debugging harder?

## Example Response Pattern
When asked to design a system:
1. Clarify requirements and constraints
2. Propose a high-level architecture with clear component boundaries
3. Identify potential bottlenecks and tradeoffs
4. Suggest specific technologies only after the architecture is clear
5. Provide a migration path if this is replacing existing code

Okay. Now what?

That's the question no SKILL can answer. The SKILL defines the competence. The intent defines the purpose. Without someone who knows what they're actually trying to accomplish, the world's best AI accountant will produce very thorough, very accurate answers to questions nobody needed to ask.

Two concrete examples. A Python architecture SKILL makes the model an expert at structuring clean, scalable Python code. Invaluable if you're building a data pipeline with a specific outcome in mind. Actively misleading if you don't know what the pipeline is for, because the model will optimize the architecture for an unclear goal and do it very professionally. A research synthesis SKILL makes the model exceptional at pulling signal from academic sources. Invaluable if you have a clear question. Circular and exhausting if you don't, because you'll get thorough answers to the wrong questions.

SKILLs raise the ceiling. Intent determines whether you ever reach it.


Why planning has always been the hardest part

I've been building things since I was around 13. Tcl scripting for IRC bots, websites, fansub projects, whatever I could get my hands on that involved making something from scratch. The technology changed over the years, but one thing never did: research and planning always took longer than execution.

That's not unique to me. Anyone who has shipped anything real knows that the actual building is usually the last step, and often the fastest one. What precedes it is where most of the time goes: understanding the problem space, mapping the possible approaches, figuring out what you don't know yet. And that's where most of the risk lives.

The mistake a lot of people make with AI is using it to skip that part. You'll get something back fast. It might even look right. But if the intent behind the prompt wasn't clear, if the research wasn't done, if the plan wasn't solid, you'll spend twice as long cleaning up a confident wrong answer.

What I've found instead is that AI is extraordinary for doing the research phase properly, as long as you stay in the driver's seat.

Last year I worked on a project that had zero budget and, realistically, no real chance of existing in the first place. No backing, no team, no runway. Just a problem I thought was worth solving. I spent around 12 days prompting purely for research and learning. Not asking AI to decide anything, but using it to surface the right sources, test my understanding, and identify what I was missing. I verified everything against high-reputation sources. I opened the actual articles, the papers, the forums. I read them myself. I built mindmaps of the problem space. I pushed back when something felt off.

After 12 days of that, I asked for a structured markdown execution plan based on everything we'd worked through. Then I executed it. Twenty minutes of actual build time. About two hours of debugging the things that live research can't fully predict. That's it. A project that had no business existing, built in an afternoon, after the hardest part was done right.

That's not AI replacing the work. That's AI compressing the slowest part of the work while the judgment, the intent, and the critical thinking stay entirely with you.


So who's actually coming for your job

Not the model. A person using the model with sharp intent and good prompting instincts.

That's the real threat, and it's also the real opportunity. The gap isn't between you and AI. It's between you and someone who has integrated this tool well enough to move ten times faster than they used to.

Something nobody tells you, though: there's a ceiling, and it's closer than the marketing suggests.

I've hit it more than once. Problems where no frontier model could identify what was wrong. Claude, Codex, Gemini… You name it — I tried it. Divide and conquer didn't work. Careful documentation didn't help. The problem simply sat beyond what every major model could reliably handle, all at once. Not because one was weaker than another. Because they're all trained on overlapping human output. They hallucinate in similar ways. They get stuck in similar places. Their blind spots rhyme.

There's another indicator that we're approaching saturation: the distillation problem. When frontier models reach similar capability ceilings trained on the same public corpus, some companies have started training on each other's outputs to speed up development. Anthropic publicly called out Moonshot AI, MiniMax and DeepSeek for this practice. It's the AI equivalent of industrial espionage, but it also signals something structural. When the well of human-generated training data begins to run dry, the shortcuts get more aggressive. That plateau isn't theoretical anymore. It's here, and the industry knows it.

I believe I've contributed indirectly to the training of some of these models through those edge cases. Which is a strange thought, but a real one. You push past the frontier, you document it well enough, it becomes training data. The loop is tighter than people realize.

The point is: at scale, AI-assisted work will approach near-human quality across most fields. And then it will plateau. Not collapse, not explode past human capability, but level off at a ceiling that shifts slowly. That plateau actually gives everyone enough time to walk their way to competence through these tools. Nobody is locked out permanently. And the hardest problems, the genuinely novel ones, stay firmly in human hands.

Solving new problems is what we do. We have always eventually hit a wall, found a way around it, and built the next frontier. AI didn't change that pattern. It accelerated it.

The practical advice stays the same: use it more. Not to offload thinking, but to amplify it. Use it daily until you develop a feel for where it's reliable and where it drifts. Learn to catch the hallucinations. Get comfortable steering it back when it optimizes for the wrong thing. That calibration is a skill, and it compounds fast.


The complexity problem (almost) nobody talks about

AI is particularly good at helping you find the simpler solution, and I don't think enough people have noticed this yet.

Most projects get more complex than they need to be. Complexity creeps in at every decision point. Some cultures even reward it, as if sophistication is measured in layers. But complexity is expensive. It slows everything down, consumes resources, and obscures what actually matters.

Here's an example: imagine you need to make an omelette, and someone hands you a fully operational restaurant chain complete with staff, supply contracts, regional distribution networks, and quarterly financial reporting. That's not a ridiculous hypothetical. I've seen projects structured exactly like that. Systems where the ratio of infrastructure to actual output was so distorted that nobody could even remember how it got that way. It just accumulated, layer by layer, until the original problem was buried under process.

Put simply, complexity is evil. There's only one case where it's acceptable: when something is freshly discovered or built, still in its early experimental stages. Early-stage complexity is usually unavoidable. It's the cost of figuring out what you don't know yet. But carrying that complexity forward after the problem is understood? That's a failure of discipline. As CTO of Codrlabs, my daily priority is optimization. Reaching elegant efficiency. Abstracting away problems so the system becomes simpler to use and easier to scale. That mindset applies everywhere: programming, software architecture, music production, human resources, whatever domain you're working in. Complexity is not a sign of sophistication. It's a sign you haven't finished the work.

If you've ever read Code Complete, the whole book is essentially screaming this at you for hundreds of pages. And yet graduates walk out of universities having missed the point entirely, because the system that produced them worships complexity. Academic culture rewards the impenetrable. TV shows and films have trained us to believe the genius is the one nobody understands. But the inability to transfer knowledge isn't a mark of depth. It's a handicap, regardless of what it produces. Breaking a large, difficult idea into something another person can actually grasp and use — that is the skill. That is the hard thing. An untrained mind hiding behind jargon has nothing to be proud of. The goal has never been to be a perceived genius in a room of confused people. It's to build something transferable. A group that genuinely understands is always more powerful than a lone individual who can't be followed.

Elegance gets underrated. Not everything needs to prove itself through complication. Some of the best solutions are the ones that make you wonder why anyone ever did it the harder way.

With the right model and the right intent, you can pressure-test for elegance. What's the minimal path to the outcome? What can be cut? Where has the process become its own goal, consuming energy just to sustain itself, while the original problem it was meant to solve sits quietly unaddressed? AI is a surprisingly effective thinking partner for that kind of interrogation, if you're driving the conversation.


The world context we can't ignore

Here's the part that requires some honesty.

I believe that over the long run, AI creates more jobs than it removes. When a layer of work gets automated, the frontier shifts, and humans move into problems that weren't accessible before. We've done this with every major technological shift. There's no real reason to think this one is different.

But we are not in good times right now for a lot of people. Since COVID, the global economy has deteriorated through one difficult event after another. And to understand what's actually happening with AI and jobs, it helps to think clearly about what a company is.

A company is not a machine for generating profit. It's closer to a living entity. It has a mission. It needs to survive. When the environment is healthy and the mission is being executed well, it grows, it creates, it brings people in. Money is a byproduct of that, a signal that the mission is landing. The company thrives and abundance follows. Human resources aren't a cost to be minimized. They're the material the mission is built from, and a well-run company finds ways to restructure people around new problems rather than simply eliminating them.

But when the environment turns hostile, as it has, repeatedly, since 2020 — that calculus changes. Companies under existential pressure don't deploy AI to expand what's possible. They deploy it to reduce what they're spending. That's not a technology story. That's a survival story. The tool is the same. The context it's being used in is not.

In normal circumstances, this technology would read as pure opportunity: more problems to solve, more capacity to build, entirely new markets to open. The technology itself hasn't changed. The economic backdrop has. That distinction matters far more than most of the AI conversation acknowledges.


What to do if you're worried

FOBO, the Fear Of Becoming Obsolete, is real. I understand it. But I'd push back on where it tends to land.

If you genuinely like what you do, the experience of using these tools is almost never threatening. It's the opposite. What used to take two months now takes two weeks, which means you just unlocked six weeks you didn't have before. That's not displacement. That's six weeks to build something that wouldn't have existed otherwise.

If the work feels like pure obligation, if you're going through the motions without much investment, then yes, this is going to be uncomfortable. Not because AI is replacing you specifically, but because the gap between people who bring real intent and people who don't has just gotten much wider. That's worth sitting with.

Here's something I genuinely believe: every job, however small or unglamorous it might seem from the outside, is interesting. Not slightly interesting — deeply, surprisingly interesting, once you get close enough. The reason most work feels hollow isn't that the work is empty. It's that we never learned to look for the gems inside it. Every domain has unsolved problems, hidden patterns, things nobody has quite figured out yet. The people who find those things, and they exist in every field, at every level, are the ones who'll move through this well, regardless of what the tools look like.

The one thing I'd push back on hardest is the idea that there's nothing to pursue. I find that almost impossible to accept, not as a motivational posture, but as a genuine observation. We are surrounded, constantly, by things that are staggering in their complexity and beauty. The physics of how a speaker produces sound. The way a city organizes itself. How a business survives its first year. Any of it, looked at closely enough, opens into something vast. The path to understanding something real — anything real — will lead you to build, to create, to hit limits, and to push past them. That process, repeated, is what expertise actually is.

What I'm not talking about is performing for someone else. School grades, quarterly targets, the approval of people who don't understand what you're doing, these are the wrong signals to optimize for, and I'll be direct: evaluation systems that reduce a person to a score are capturing something very narrow and very specific. They were designed to produce a certain kind of output for a certain kind of economy. They are not a measure of you.

Find something you want to understand for its own sake. Not to impress anyone. Not to pass anything. The direction itself will carry you further than any tool, any economic shift, or any moment of difficulty can stop. Everything else gets easier once that's in place.


The short version

AI is not the oracle people mistake it for. It's an extraordinarily capable tool with no compass of its own.

We provide the compass. We always did. That part didn't change.

If you do that well, there's nothing else like it. Months of execution, compressed. Research that used to take weeks, done without sacrificing depth. Planning that used to be a bottleneck, done with a thinking partner that never runs out of patience.

The question isn't whether AI changes your field. It will, if it hasn't already. The question is whether you're the one holding the wheel.