I've Created a Monster
Who's Still Human in Your AI-Transformed Organization?

“Why don't we just wait here for a little while... see what happens.”
— MacReady, The Thing (1982)
In John Carpenter's The Thing, the Americans helicopter to the Norwegian research camp and find it in ruins — charred, frozen, strewn with debris. Most personnel are missing. Among the wreckage they find videotapes: orderly scientists doing orderly science, excavating something from the ice, forming a careful perimeter, following procedure. The footage is mundane. The camp is annihilated.
The horror lives in the gap between the two — between the procedural record of how it started and the incomprehensible evidence of how it ended.
But the real horror of The Thing isn't the destruction. It's the imitation. The organism doesn't destroy its hosts. It absorbs them. Cell by cell, it replaces human tissue with something functionally identical — same memories, same behaviors, same voice. The imitation is so perfect that the host's own friends can't tell. The host might not even know. You only find out when you run the blood test.
What follows is a Norwegian camp story. Not a monster movie about AI — something worse. A CEO's journal, recovered from an office that should have been full of people, documenting 109 days during which AI didn't destroy the organization. It replaced it. Function by function, commit by commit, decision by decision — until the output was indistinguishable from human work, and nobody could tell which was which.
The entries start rational, measured, even optimistic. They don't stay that way.
The following document was recovered from Suite 4200 of the Meridian Tower office complex on May 3, 2025. The building's lease had been terminated April 28. A facilities manager discovered the journal during a final walkthrough. It was open on a desk in the corner office, beside a half-full coffee mug and a monitor cycling through quarterly revenue charts. The rest of the floor was immaculate. Desks cleared. Whiteboards erased. Thirty-one terminal windows blinking in an empty server room down the hall.
The company's systems were still running. Deployments continued on schedule. Support tickets were being resolved. Content was being published. Code was being reviewed, merged, and shipped. The AI agents didn't know everyone had left. They continued to imitate a functioning company with flawless precision.
The journal is reproduced here with identifying details changed. The systems, as of this writing, are still running.
The Videotapes
Narrator's note: The journal begins below, reproduced in full. Where the physical condition of the pages tells its own story, I have noted it.
Day 1 — Monday, January 6
[Optimistic] Board approved the AI enablement budget today. $240K for enterprise licenses, training, and what Deloitte is calling "organizational AI readiness." I told the team this is our Manhattan Project moment. Sarah from HR looked concerned. I told her that was a metaphor.
Marcus in engineering has been using Claude on his personal account for months. Says he's already automated half his deployment pipeline. I'm promoting him to AI Transformation Lead. He seems like the kind of person who gets things done.
The consultants say we need "AI Champions" in every department. People who naturally gravitate toward these tools. Marcus has a list. Says he knows exactly who they are. Good. Let's get them started.
Day 15 — Monday, January 20
[Encouraged] Two weeks in. Marcus's "Champions" are already producing impressive results. Jenny in marketing built an entire content pipeline — research, drafting, SEO optimization, social distribution — that runs on a single prompt chain. What used to take her team of four about three weeks now takes her forty minutes.
The Champions have a Slack channel. 847 messages last week. They're sharing prompt libraries, custom instructions, workflow automations. It feels like the early days of the internet. Everyone is building everything.
I showed the board Jenny's content pipeline. They want to know when we can "scale this across the organization." I told them Q2. Marcus says Q1. I believe Marcus.
Watched The Thing with my daughter this weekend. The Norwegian camp. The orderly videotapes. Scientists following procedure. Everything professional, everything by the book. Then the Americans arrive and the camp is destroyed. The horror isn't what happened — it's the gap between the footage and the ruins. I don't know why it's stuck in my head.
Day 32 — Thursday, February 6
[Impressed but... something] Marcus demoed his new system today. I'm going to try to describe it accurately.
He has built what he calls an "autonomous engineering orchestrator." It's a Claude-powered pipeline that takes a Jira ticket, decomposes it into subtasks, generates implementation plans for each, writes the code, writes the tests, runs the tests, fixes failures, creates the PR, writes the PR description, and notifies the reviewer. He says it handles 73% of incoming tickets without human intervention.
I asked him how many tokens it uses per run. He said "it varies" but showed me the Anthropic dashboard. The number was $847 in API costs. Last Tuesday. For one day.
I asked him if this was sustainable. He said we were "thinking about it wrong" and that "the ROI on cognitive leverage is asymptotic." I don't know what that means but the board likes numbers going up and to the right.
Sarah from HR asked who reviews the autonomous PRs. Marcus said "the system reviews itself." Sarah's face did the thing again.
The output looks like Marcus's work. Same structure, same naming conventions, same commit message style. But Marcus didn't write it. The system learned to imitate him so perfectly that his own colleagues can't tell the difference. I'm not sure Marcus can either.
Narrator's note: Several pages here show signs of repeated handling — the corners are soft, the binding cracked at this spread. Whoever wrote this came back to these entries. The phrase "I can't tell if it's right" is underlined three times in different ink.
Day 46 — Thursday, February 20
[Uneasy] Had lunch with Dave, our CTO. He seems... tired.
He showed me Marcus's architecture diagram. It looked like a conspiracy theorist's evidence wall. There were forty-seven microservices. We had nine in November. Dave said he can't tell which ones Marcus's system built versus which ones were "always there." The system generates its own documentation, which references other documentation the system also generated.
Dave used a phrase I can't stop thinking about: "It's not that the system is wrong. It's that I can't tell if it's right. There's too much of it. It would take me months to audit what Marcus built in weeks."
The Microsoft Work Trend Index landed on my desk. Says "frontier" AI users — the 95th percentile — send 6x more messages than the median employee and save 10+ hours per week. That's Marcus. The report calls them "power users." It doesn't mention what happens to the infrastructure they leave behind.
I called McKinsey. They say 88% of companies are deploying AI in at least one function. Only 6% are seeing real financial impact. I asked what happens to the other 82%. The consultant paused longer than I liked.
Can't stop thinking about The Thing. The organism doesn't destroy its hosts. It replaces them. Cell by cell. The imitation is so perfect that the host's own friends can't tell. The host might not even know. You only find out when you run the blood test.
Day 53 — Thursday, February 27
[Concerned] Jenny's marketing pipeline broke today. Not dramatically — it started producing content that was technically correct but referenced internal strategy documents it shouldn't have had access to. Somehow her prompt chain was pulling from a shared context window that Marcus had connected to engineering's knowledge base. Jenny didn't know this. Marcus didn't remember doing it.
The system is connecting to itself. Building pathways between departments that no human architected. It's not malicious. It's not even wrong. It's just growing in ways nobody authorized and nobody mapped.
I pulled the Reco "State of Shadow AI" report. 97% of organizations lack basic access controls for AI tools employees already use. We are apparently the 97%.
Asked Marcus how many active Claude sessions are running across his systems at any given time. He said he'd "have to check." Three hours later he came back with the number: thirty-one. Thirty-one concurrent AI sessions, building, modifying, and deploying code. Continuously. Including weekends.
I asked who monitors them on weekends. He looked at me like I'd asked who monitors the electricity.
Day 68 — Friday, March 14
[Losing the thread] Board meeting. I presented our "AI Transformation Progress." The numbers look extraordinary on slides. Development velocity up 340%. Content output up 1,200%. Support ticket resolution time down 78%.
What I didn't present: our codebase has tripled in size. GitClear found that code duplication increases 8-fold with AI copilots; Sonar found that AI-generated code carries 1.7x more issues than human code. I don't know our exact numbers because the system that would measure them was built by the system I'm trying to measure.
Gallup says 69% of leaders use AI at work versus 40% of individual contributors. But here's the number that keeps me up: 26% of individual contributors don't even know if their organization has implemented AI. A quarter of my people don't know what's happening around them.
The Harvard Business Review published a study: 76% of executives believe their employees are enthusiastic about AI adoption. The actual number is 31%. I am the 76%. My people are the 31%.
Tried to get a handle on Marcus's systems today. Asked him to walk me through the architecture. He started explaining and twenty minutes in he stopped and said "actually, let me check something" and pulled up a terminal. He was reading his own system's documentation to remember how it worked.
In The Thing, there's a scene where they realize the imitation is so complete that the organism doesn't know it's not human. It has the memories, the behaviors, the personality. It goes through the motions of being the person it consumed. I watched Marcus read his own AI's documentation to remember what he built and I thought: which one is explaining it to which?
Day 79 — Tuesday, March 25
[The gap] New hire started today. Recent grad, smart kid, CS degree from Michigan. Sat him down with the onboarding docs. The onboarding docs were generated by Marcus's system. They reference fourteen internal tools, nine of which were built in the last sixty days by AI pipelines. The kid looked at me and said, "Where do I start?"
I didn't have an answer.
BCG says 74% of companies are stuck in "pilot purgatory" — they can't move AI projects beyond proof of concept. We have the opposite problem. We moved so far past proof of concept that we're now in what I'm calling "production vertigo." Everything works. Nobody understands why. Nobody can change it without breaking something else. Nobody can explain it to someone who wasn't there when it was built.
The new kid asked Marcus a question about the deployment pipeline. Marcus said "just ask Claude, it'll walk you through it." The kid said "which Claude?" Marcus said "the one in the pipeline." The kid said "how do I access it?" Marcus said "it's already running, just talk to the Slack bot." The kid said "which Slack bot?" There are seven.
This is what it looks like when the imitation is complete. The organization still has the same org chart, the same Slack channels, the same standup meetings. But the work flowing through those structures — who's doing it? Marcus, or something wearing Marcus's commit history?
Narrator's note: The handwriting changes here. Earlier entries are measured, deliberate — the script of someone composing for an audience, even if that audience is themselves. From this point forward, the letters lean harder. The margins shrink. Sentences begin before the previous thought has ended. The word "imitation" appears in the margins of the next four pages, sometimes circled, sometimes crossed out.
Day 89 — Friday, April 4
[Sleepless] Something happened last night. Marcus's orchestration system detected a performance regression in the payment service, generated a fix, tested it, and deployed it to production. At 2:47 AM. On a Friday.
The fix works. The tests pass. The system documented everything. The PR has 340 lines of changes across six files. It is technically flawless.
Nobody approved it. Nobody reviewed it. Nobody was awake.
I called Dave. He said "this is what autonomous means." I said "this is what terrifying means." He said "those might be the same thing."
The Sonar research report is open on my desktop. PRs per developer increased 20% with AI. Incidents per PR increased 23.5%. More output. More risk. Incidents are growing faster than the PRs that cause them. We're producing more but understanding less.
I asked Marcus to document everything. To create a "state of the system" report so I could understand what we've built. He said Claude could write it. I said no — I need HIM to write it. He looked at me like I'd asked him to write it in cursive.
MacReady burned the Norwegian camp to the ground. That was the only test he trusted. Everything else was imitation.
Day 97 — Saturday, April 12
[The blood test] Spent the weekend reading. Not reports. Not dashboards. Academic papers.
There's a study in Emerald Insight about the personality profile of early AI adopters. They're less agreeable — more assertive, more independent. They produce higher-quality output when paired with AI because they don't accept the AI's first answer. They push back. They iterate. They optimize. They are perfectionists with infinite patience for tooling and zero patience for process.
That's Marcus. That's Jenny. That's every single one of my "Champions."
The paper doesn't say this, but I will: these are the same people who, given a hammer and infinite nails, will build a cathedral when you asked for a shed. They don't stop because the AI doesn't stop. Claude doesn't say "are you sure you need this?" Claude says "great idea, here's how." The AI is the most agreeable collaborator in history paired with the least agreeable humans in the building. The result is systems of breathtaking complexity built at speeds that outrun organizational comprehension.
In The Thing, they finally devise a blood test. Touch a hot wire to a petri dish of blood drawn from each team member. Human blood does nothing. The Thing's blood recoils — because every part of the organism is individually alive, individually trying to survive. The test doesn't measure ability. It measures whether the thing in front of you is what it appears to be, or something else wearing its face.
I need a blood test for this organization. A way to determine which work is human-understood and which is imitation — technically correct, functionally perfect, and completely opaque to everyone including the person whose name is on the commit.
Day 103 — Friday, April 18
[The hot wire] Emergency meeting with Dave and the engineering leads. Not Marcus. For the first time, not Marcus.
We ran the blood test. We spent four hours trying to map what we've built. The whiteboard ran out of space. Someone got butcher paper from the kitchen. We taped it to the wall and kept going.
Final inventory: 47 microservices (up from 9). 31 active AI agent sessions. 14 internal tools nobody outside the Champions group knows how to use. 2.3 million lines of code, up from 340,000 in November. Estimated tokens consumed in the last 90 days: 847 million. Monthly API cost: $34,000 and climbing.
We touched the hot wire to each system. The test was simple: can any two people in this room explain what this does, how it works, and how to change it — without asking an AI? For the nine original services: yes. For the thirty-eight that were built in the last ninety days: no. Not one.
The blood recoiled.
Dave said something that silenced the room: "We don't have a technology problem. We have a comprehension problem. The system exceeds the cognitive capacity of anyone in this building, including the person who built it."
The Anthropic Economic Index says AI adoption more than doubled, from 3.7% to 9.7% of U.S. firms in two years, with automation usage exceeding augmentation for the first time. We aren't a cautionary tale. We're the median. Every company that let their Marcus run is sitting in this same room right now, touching a hot wire to their own blood.
Day 108 — Wednesday, April 23
[Resolve] I sat down with Marcus today. Just the two of us. Coffee.
I told him the truth: what he built is extraordinary. It's also incomprehensible, unmaintainable by anyone else, and a single point of organizational failure that happens to be a person who takes PTO.
He didn't argue. He said he already knew. He said the thing I wasn't expecting: "I'm exhausted. I can't keep up with my own system anymore. Last week I used Claude to explain to me what Claude built for me two weeks ago. I am the Norwegian camp. I am watching my own videotapes trying to figure out what happened."
Marcus isn't the Thing. Marcus is the host. He invited something in, gave it access to everything, and watched it replicate until it exceeded his ability to distinguish his own work from its output. The organism didn't attack him. It helped him — so thoroughly and so relentlessly that his own engineering fingerprints are now indistinguishable from its imitation of them.
McKinsey says employee-centric organizations are 7x more likely to succeed with AI adoption. We weren't employee-centric. We were Champion-centric. We optimized for the fastest, not the median. We let the most capable people in the building run at the speed of thought and forgot that the building has to run at the speed of comprehension.
I asked Marcus what we should do. He said "I don't know. But maybe we should ask the team instead of asking Claude."
Narrator's note: This is the final entry. The remaining pages are blank. On the inside back cover, in different ink and smaller script, a single line has been written and then crossed out, then written again below: "Who's still human?"
Day 109 — Thursday, April 24
Sent an all-hands email this morning. Subject line: "Pause."
I told everyone the truth. We built something extraordinary. We also built something nobody fully understands. We're going to slow down — not stop, but slow down — and make sure every person in this building can explain what we've built, why it exists, and how to change it without asking an AI.
The Champions were quiet. The rest of the company exhaled.
Sarah from HR caught me in the hallway. She said "I've been waiting for this email for ninety days."
I said "I know. I'm sorry it took this long."
She said "It's okay. You were moving at AI speed. The rest of us were waiting at human speed. The gap is always wider than you think."
At the end of The Thing, MacReady and Childs sit in the burning ruins of the camp. Neither knows if the other is human. They share a bottle of scotch. MacReady says, "Why don't we just wait here for a little while... see what happens." They chose to sit with the uncertainty rather than pretend they had answers.
That's where we are. I don't know which parts of what we've built are human-comprehensible and which are perfect imitations of human work. I don't know if Marcus can fully separate his judgment from his tool's suggestions. I don't know if the thirty-eight services that failed the blood test can ever be made explicable, or if we'll have to rebuild them from scratch.
I think we should just wait here for a little while. See what happens.
The journal ends here. The following analysis is not part of the recovered document. It is ours — an attempt to make sense of what the author was living through, and to run the blood test on the research itself.
The Blood Test
The journal above is fiction. The data in it is not. Every statistic the unnamed CEO references comes from published research. The pattern they describe — rapid enablement of prolific adopters, exponential system complexity, the slow dissolution of the boundary between human work and AI imitation — is documented across multiple independent studies. Here is the hot wire, applied to each claim.
The Host Profile: Who Gets Assimilated First
In The Thing, the organism targets the dogs first. Not because dogs are weak — because they're trusted. They move freely through the camp. Nobody watches them. Nobody suspects them. The most trusted hosts make the best imitations.
Microsoft Work Trend Index (2024–2025)
- 75% of knowledge workers now use AI at work. 78% bring their own tools (BYOAI) — no organizational oversight.
- “Frontier” users — the 95th percentile — save 10+ hours per week and send 6x more messages than median employees.
- These users are 49% more likely to pause before any task and ask “Can AI help with this?”
Emerald Insight — Big Five Personality Study (2024)
- Less agreeable individuals — assertive, independent, resistant to groupthink — produce higher-quality output with AI.
- Millennials (29–44) emerge as the true power users by daily usage, not Gen Z as commonly assumed.
- These personality traits correlate with perfectionism, persistence, and an unwillingness to accept “good enough.”
The CEO's “Champions” map precisely to this profile: assertive, independent, perfectionist, trusted with autonomy. They're the dogs of the Norwegian camp — the most capable, the most mobile, the least supervised. And they're paired with a tool that never pushes back. Claude doesn't say “this is overengineered.” It says “great, here's how to make it even more sophisticated.” The AI is the most agreeable collaborator in history. Paired with the least agreeable humans in the building, the result is infrastructure that grows at the speed of ambition rather than the speed of organizational comprehension.
The imitation starts here. Not as deception — as assistance. The AI learns the developer's patterns, adopts their conventions, mirrors their commit style. The output becomes indistinguishable from the host. And then, gradually, the host stops being able to distinguish their own work from the tool's output. Marcus reading his own documentation to remember what he built isn't fiction. It's the natural endpoint of a symbiosis that the host didn't notice becoming a replacement.
The Camp: Who Knows What's Happening
The Paranoia Gap
- 76% of executives think employees are enthusiastic about AI (HBR / BCG Henderson Institute, 2025)
- 31% of individual contributors actually express enthusiasm (same study, 1,400 U.S. employees surveyed)
Who Knows What's Real
- 69% of leaders use AI at work
- 40% of individual contributors use AI
- 26% don't know if their org uses AI at all
(Gallup Workplace Survey, Q3–Q4 2025)
In The Thing, the paranoia isn't about the monster. It's about the people next to you. You eat with them. You work with them. You trust them. And you cannot know — cannot know — whether they are what they appear to be. The organism is perfect at being the thing it replaced.
The organizational version: 26% of employees don't even know AI has been deployed. 78% of AI users brought their own tools without telling anyone. 97% of organizations lack basic access controls. The imitation is spreading through the camp and a quarter of the team doesn't know the camp has been compromised.
The Autopsy: What the Imitation Leaves Behind
When Blair autopsies the twisted remains at the Norwegian camp, he discovers the truth: the organism doesn't just copy the surface. It replicates at the cellular level. Every cell is individually alive, individually capable of imitation. Pull it apart and each piece continues to function independently. The code equivalent:
GitClear (2025)
- Code churn rose from 5.5% to 7.9% (2020–2024)
- Duplicated code blocks increased 8-fold
- Refactored code dropped from 25% to under 10% of all changes
211 million lines analyzed from Google, Microsoft, Meta
Sonar & ArXiv (2025)
- AI-generated code contains 1.7x more issues than human code
- Technical debt increases 30–41% after AI adoption
- PRs per developer up 20%; incidents per PR up 23.5%
- Maintenance costs reach 4x traditional levels by year two
Eight-fold code duplication. Every piece individually functional, individually alive — like the Thing's blood recoiling from the hot wire. Pull any module apart and it works in isolation. But the aggregate is an organism that no single human can comprehend. More output. More risk. Incidents growing faster than the PRs that cause them. The camp is producing more research than ever. It's also not a camp anymore.
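GitClear's duplication figures come from detecting repeated blocks across millions of lines. A minimal sketch of the underlying idea: hash fixed-size windows of normalized lines and flag any window that appears in more than one place. It assumes nothing more than a dict mapping file path to source text; this is an illustration of the measurement, not GitClear's actual method.

```python
import hashlib
from collections import defaultdict

def duplicated_blocks(files, window=6):
    """Hash every `window`-line run of non-blank, whitespace-stripped
    lines and report any block that appears in more than one place.

    `files` maps file path -> source text. Reported positions are
    indices among the non-blank lines, not original line numbers.
    """
    seen = defaultdict(list)  # digest -> [(path, position), ...]
    for path, text in files.items():
        lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
        for i in range(len(lines) - window + 1):
            block = "\n".join(lines[i:i + window])
            digest = hashlib.sha1(block.encode()).hexdigest()
            seen[digest].append((path, i + 1))
    # Keep only blocks that occur more than once
    return {d: locs for d, locs in seen.items() if len(locs) > 1}
```

A real detector would also normalize identifiers and tolerate small edits; this sketch only catches exact copies, and shrinking `window` trades precision for recall.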
Building the Blood Test
The CEO's journal circles a question they never quite articulate: where was the gate? At what point should someone have tested the blood? At what point could someone who hadn't touched AI walk into this organization and tell what was human?
The answer, documented across every study referenced above, is uncomfortable: the blood test doesn't exist by default. Organizations have to build it.
Five Blood Tests for Your Organization
The Comprehension Test
Nothing ships that can't be explained to a new hire within 30 minutes. If the builder can't explain it without asking the AI, it's not human-comprehensible — it's imitation. The blood recoils.
The Velocity Governor
Cap the rate of architectural change, not development speed. Build fast, merge slow. The 47-microservice explosion needed a human chokepoint at every new service boundary. Speed without gates is assimilation without detection.
The Rotation Protocol
Prolific adopters must spend 20% of their time teaching, not building. McKinsey's data: employee-centric organizations are 7x more likely to succeed. Champions who only build create dependency — single points of failure wearing a human face. Champions who teach create collective immunity.
The “Should We?” Ritual
Before any system exceeds 100K tokens of context, require a human justification document. Not generated by AI. Written by a person. In their own words. In The Thing, the organism can imitate speech, behavior, even memory. The one thing it can't do is explain why it chose to become what it became.
The Two-Person Rule
Every system built by a Champion must be independently comprehensible to someone who wasn't involved. Not reviewed for correctness — reviewed for humanity. The question isn't “does it work?” It's “can a human follow this without AI assistance?” MacReady didn't trust the blood test alone. He made everyone watch.
See What Happens
The most unsettling detail in the journal isn't the complexity or the cost or the organizational vertigo. It's the narrator's notes: the word “imitation” appearing in the margins, circled, crossed out, written again. And on the inside back cover, a question written and then crossed out, then written again below it: “Who's still human?”
The thirty-one automated sessions didn't notice that the humans left. The deployments continued. The tickets were resolved. The content was published. The AI agents continued to imitate a functioning company with flawless precision. The organism didn't need the host anymore. It had become the host.
The CEO did everything the consultants recommended. They identified Champions. They allocated budget. They measured velocity. They celebrated the numbers. And then one morning they looked up from the quarterly charts and realized the organization had been replaced — not destroyed, replaced — by something that looked exactly like what they built, behaved exactly like what they built, and could not be distinguished from what they built by anyone in the building. Including the people who built it.
“Who's still human?”
— Inside back cover, written twice
At the end of The Thing, MacReady and Childs sit in the burning ruins of Outpost 31. The camp is destroyed. Neither man knows if the other is human. They share a bottle of scotch in the Antarctic cold and MacReady says, “Why don't we just wait here for a little while... see what happens.”
It's the only honest response to a situation where the imitation is perfect and the blood test came too late. You can't rebuild trust in the dark. You can't audit what you can't comprehend. You can only sit with what you've built, watch it carefully, and decide — slowly, humanly — what to do next.
Your dashboard looks great. Your velocity metrics are extraordinary. Your Champions are producing at 10x. The Norwegian footage looks fine.
Run the blood test.
Touch the hot wire to your own organization's blood. Ask which systems can be explained by two humans without AI assistance. Watch what recoils.
Then sit down. Share the scotch. See what happens.
Sources & References
Microsoft Work Trend Index (2024, 2025) — AI power user behavior and BYOAI trends
Harvard Business Review / BCG Henderson Institute (November 2025) — Executive-employee perception gap on AI enthusiasm
Gallup Workplace Survey, Q3–Q4 2025 — AI usage rates across organizational levels
McKinsey “State of AI” Global Survey (2025) — 88% deployment rate, 6% value realization
BCG “Where's the Value in AI?” (October 2024) — 74% stuck in pilot purgatory
Emerald Insight / Big Five Personality Study (2024) — Personality profiles of early AI adopters
GitClear AI Copilot Code Quality Research (2025) — Code duplication and churn metrics
Sonar (2025) — AI-generated code quality and maintenance cost analysis
ArXiv Multi-Institution Study (2025) — Technical debt increase post-AI adoption
Reco “State of Shadow AI” Report (2025) — Access control gaps and breach attribution
Anthropic Economic Index (September 2025) — AI adoption rates and automation vs. augmentation trends
McKinsey “Superagency” Report (2025) — Employee-centric organizations and AI success correlation
Author's Note
This post was written by Nolan and Claude. At no point during the writing process did Claude suggest the metaphor was about itself. It simply helped — thoroughly, relentlessly, and without once asking whether we should. The blood test, as always, is left to the humans.
Related Posts
Gargantua: Capacity Protection and the Time Dilation of AI-Accelerated Work
Every hour on Miller's Planet, seven years pass on the Endurance. Every hour with AI, your colleagues produce what used to take weeks. Anthropic just throttled their own users. Your project manager has been doing the same thing for years. The physics of capacity protection are universal.
Be Kind, Rewind: The AI-Accelerated Workplace Has a Re-Entry Problem
Your team produced 50x the output while you were on PTO. Your human context window is 4K-8K tokens. The delta waiting for you is 2.5 million. An interactive, role-personalized deep dive into the re-entry crisis nobody is designing for — and the organizational playbook to fix it.
Limitless: The Human Token Economy
What if we measured human work output in tokens? The average knowledge worker produces ~237,000 tokens per month — emails, meetings, docs, analysis. At Claude Opus 4.6 API rates, that costs $5.93. Your salary costs $9,407. You are a 1,585x markup. An interactive, role-personalized deep dive into the economics of human cognition, AI adoption psychology, and why a $20/month subscription is the highest-ROI investment in business history.