
Wax On, Wax Off

The Automation Paradox and the Muscle Memory AI Can't Replace

By Nolan & Claude · April 6, 2026 · 18 min read

[Image: A weathered hand holding wooden chopsticks with a fly caught at the tips, bathed in golden afternoon light streaming through shoji screens in a traditional Japanese room]

"First, wash all car. Then wax. Wax on..."

"...wax off. Wax on, wax off. Don't forget to breathe. Very important."

— Mr. Miyagi, The Karate Kid (1984)

Daniel LaRusso didn't understand why he was waxing cars. He wanted to learn karate. Instead, his teacher had him sanding decks, painting fences, and waxing an endless fleet of vintage automobiles. It felt like free labor disguised as wisdom.

Then Miyagi threw a punch.

And Daniel blocked it. Instinctively. Without thinking. The circular motion he'd grooved into his muscles over days of "pointless" repetition had become a defensive reflex so deeply encoded that his body responded before his brain could process the threat.

The boring work wasn't busywork. It was building the reflexes he'd need when the real fight arrived.

Now apply that to every knowledge worker whose repetitive tasks just got automated by AI.

The Punch Nobody Trained For

Here's the tension that keeps workforce strategists up at night: if AI handles the routine 80% of cognitive work — the data entry, the pattern matching, the template generation, the compliance checks, the first-pass analysis — humans are left with the hard 20%. The edge cases. The ambiguous situations. The novel problems where the model has no training data and no confident answer.

That sounds manageable. Humans handle the hard stuff, AI handles the easy stuff. Division of labor. Everybody wins.

Except for one problem.

The repetitive work was how you built the judgment to handle the hard stuff.

Every senior accountant who can spot fraud in a balance sheet built that instinct by reconciling thousands of clean ones first. Every radiologist who catches the subtle tumor trained their eye on ten thousand normal scans. Every senior developer who can architect a distributed system spent years writing CRUD apps. The repetitions weren't preparation for the real work. The repetitions were the training.

Miyagi didn't teach Daniel to block by explaining blocking. He taught him to block by making him wax cars until the motion was unconscious. If someone had waxed all the cars for Daniel, he'd have had clean cars and broken ribs.

This is the automation paradox: the more perfectly you automate the routine, the less prepared your humans are for the exceptions the automation can't handle. And the exceptions are the only reason you still need humans.

The Sensei Has Seen This Movie Before

This isn't a new plot. Every generation of automation technology has created the same tension — and every generation has resolved it in patterns worth studying.

The Factory Floor: CNC Machines and the Machinist's Hands

When CNC (Computer Numerical Control) machines arrived on manufacturing floors in the 1970s and 80s, they automated the repetitive precision work that machinists had done by hand — cutting, drilling, milling metal to exact specifications, thousands of identical pieces per day.

The machinists didn't disappear. Their role shifted. The people who stayed relevant became machine operators, maintenance technicians, and quality control specialists. They stepped in when the machine jammed, when tolerances drifted outside spec, when a new product variant didn't fit existing programming. The key skill became understanding the machine and the process well enough to diagnose what went wrong.

But here's the part nobody talks about: the best CNC operators were always the ones who had machined by hand first. They could hear when a cut sounded wrong. They could feel through vibration that a tool was wearing unevenly. They had the "wax on, wax off" muscle memory of thousands of manual operations encoded in their nervous system — and that intuition is exactly what made them irreplaceable when the machine produced something unexpected.

The new operators who only trained on CNC? They could run the program. But when the machine drifted, they called the old machinists.

The Bank Teller: ATMs and the Relationship Pivot

When ATMs rolled out in the 1970s and 80s, the prediction was straightforward: bank tellers would vanish. Machines could count cash faster, never miscounted, never took sick days, and worked nights and weekends.

Instead, the number of bank tellers grew for decades. The role transformed from cash handling to relationship management, problem-solving, and complex transaction support — the edge cases the machine couldn't navigate. A customer disputing a charge. A small business owner needing a line of credit. An elderly customer confused by a new account structure.

The ATM handled the repetitive "wax on, wax off" of deposit and withdrawal. The teller handled the punch — the moment that required human judgment, empathy, and contextual understanding.

But the best tellers were still the ones who had done the manual cash work. They could spot a counterfeit bill by feel. They knew which transaction patterns suggested fraud. They had the instinct born of ten thousand routine interactions that told them something about this customer's request was off.

The Accountant: TurboTax and the Complexity Gradient

TurboTax automated the straightforward 1040. Interview-style questions. Standard deductions. W-2 income. Click, click, file. The repetitive "wax on" of tax preparation — following the form, matching box to box, calculating standard tables — was reduced to a $79 software purchase.

The demand for human accountants and CPAs didn't shrink. It shifted upward on the complexity gradient. Business taxes, multi-state returns, estate planning, audit defense, international income, crypto gains, partnership K-1s — the work that requires judgment, interpretation of ambiguous tax code, and the ability to defend a position to an auditor who disagrees.

The pattern is always the same: automation absorbs the base, and humans migrate to the edges.

But the CPAs handling the hard cases? They all started on simple 1040s. They built their intuition for what "looks wrong" by seeing thousands of things that looked right.

The Pilot: Autopilot and the 30,000-Foot Question

This is the example that should keep every AI-augmented organization awake at night.

Modern commercial autopilot systems handle 90%+ of flight operations. Takeoff, cruise, descent, approach — even landing in zero visibility. The system is extraordinarily reliable. Pilots spend most flights monitoring, not flying.

The problem: if a pilot never hand-flies because autopilot handles everything, can they respond effectively when autopilot fails at 30,000 feet?

This isn't hypothetical. The aviation industry has been wrestling with this exact tension for decades:

Air France 447 (2009)

Airspeed sensors iced over mid-Atlantic. Autopilot disconnected. The pilots — who had spent the vast majority of their flight hours monitoring autopilot — made basic manual flying errors that an experienced hand-flyer would not have made. 228 people died. The investigation found the crew had lost "manual flying skills through lack of practice."

Asiana 214 (2013)

A Boeing 777 approached San Francisco too low and too slow. The autothrottle, which the crew believed was maintaining speed, had been inadvertently disconnected. The pilots failed to monitor and correct until it was too late. Three passengers died. Investigators cited over-reliance on automation and degraded manual skills.

US Airways 1549 — "Miracle on the Hudson" (2009)

Captain Sully Sullenberger had 19,663 flight hours and was a former Air Force fighter pilot who had flown gliders recreationally for decades. When both engines failed at 2,800 feet over Manhattan, he didn't need the automation. He was the automation. Every hour of hand-flying, every glider landing, every military sortie was "wax on, wax off" that activated in the 208 seconds between bird strike and water landing.

The aviation industry's response was direct: airlines now require pilots to regularly hand-fly to maintain proficiency. Not because autopilot is unreliable — it's extraordinarily reliable. But because the 0.01% of the time it can't handle the situation is the moment that matters most, and you can't build the reflexes for that moment if you never practice.

"Daniel-san, must talk. Walk on road. Walk left side, safe. Walk right side, safe. Walk middle, sooner or later, get squish. Just like grape."

— Mr. Miyagi, on the danger of half-commitment

Walk left side: don't automate at all, accept the costs. Walk right side: automate the routine and build deliberate human practice into the system. Walk middle — automate the routine work but skip the practice plan, assume the humans will "figure it out" when the edge case arrives — sooner or later, get squish. Just like grape.

CNC, ATMs, TurboTax, autopilot — the pattern is remarkably consistent: automation absorbs the routine, humans migrate to the boundary between the automated system and the messy real world, and the people who thrive at that boundary are the ones who did the manual work long enough to develop intuition the machine doesn't have.

And it doesn't stop there. Automated trading eliminated floor traders — then created quant analysts, algo risk managers, and circuit breaker designers: humans who understand both markets and algorithms. Self-checkout was supposed to eliminate cashiers — instead, the most experienced cashier on the floor now supervises six machines and handles every "unexpected item in bagging area." The automation didn't remove the human. It moved the human to the edge.

AI agents in 2025-26 are automating analysis, drafting, code generation, and pattern matching. The prediction: knowledge workers obsolete. The historical pattern says otherwise. But the historical pattern also says: the humans who stay relevant are the ones who built their reflexes before the machine took over the repetitions.

Sand the Floor: The Deskilling Problem

Researchers have a name for this: the deskilling problem. When automation removes the opportunity to practice foundational skills, the humans who are supposed to supervise the automation gradually lose the ability to do so effectively.

It's Mr. Miyagi in reverse. Instead of secretly building Daniel's reflexes through repetition, we're secretly atrophying them by removing the repetition entirely.

The Junior Developer Who Never Debugs

If Claude Code writes all the code and fixes all the bugs, the junior developer never builds the pattern recognition that turns a 5-hour debugging session into a 5-minute glance at a stack trace. They can prompt for code. They can't smell bad code. That nose takes years of wax on, wax off to develop.

The Analyst Who Never Builds a Spreadsheet

If AI generates every financial model, the analyst never develops the intuition for which assumptions are load-bearing and which are cosmetic. They can review a model. They can't feel when a growth rate assumption is secretly driving 80% of the conclusion, because they never built one from a blank cell.

The Lawyer Who Never Drafts a Contract

If AI generates the first draft of every agreement, the associate never learns why clause 14(b) exists — the negotiation history, the liability event it was written to prevent, the subtle interplay between indemnification and limitation of liability. They can redline. They can't architect.

The Doctor Who Never Takes a History

If AI triages symptoms and suggests diagnoses, the resident never develops the conversational instinct for when a patient says "I'm fine" but means "I'm terrified and hiding something." Diagnostic AI can pattern-match symptoms. It can't read the pause between sentences.

In every case, the automation handles the what beautifully. The human is needed for the why, the wait, something's off, and the what about the thing nobody thought to ask. But those instincts are forged in the furnace of repetitive practice that automation removes.

"Lesson not just karate only. Lesson for whole life. Whole life have a balance. Everything be better."

— Mr. Miyagi, on why fundamentals aren't optional

This raises the stakes on a question that used to have an obvious answer. When the machine fails, who steps in? Before automation, the answer was simple: the person who does the work every day. They built their judgment through repetition. Now the repetition is gone, the judgment is atrophying, and the exceptions are the only reason humans are still in the loop. So who's left to handle them?

Here's what that looks like when it goes wrong:

The Jurisdiction Clause Nobody Caught

A mid-size consulting firm uses AI to generate the first draft of every client services agreement. The AI produces a clean, professional contract for a new engagement with a Texas-based client. Indemnification, limitation of liability, IP assignment, termination clauses — all technically sound. But the governing law clause defaults to Delaware, where the firm is incorporated. Texas has specific requirements for non-compete enforceability and consequential damages caps that Delaware law doesn't address. The junior associate who reviews the contract has never drafted a governing law clause from scratch. They've only ever redlined AI-generated ones. The clause looks right. It's formatted correctly. The language is standard. They approve it. Eighteen months later, a dispute arises, and the firm discovers the contract is unenforceable on the provisions that matter most — because the associate never developed the instinct for why jurisdiction clauses vary, because they never had to choose one themselves.

The AI didn't fail. It produced exactly what it was asked to produce. The human failed — not from negligence, but from never having built the muscle memory to recognize what the AI couldn't know about this specific situation. The wax on, wax off that would have given them that instinct was automated away two years ago.

The Tournament: Who Steps In When the Machine Doesn't Know

So who catches the jurisdiction clause? Who recognizes that the AI's diagnostic suggestion doesn't account for the patient's cultural context? Who sees that the code compiles perfectly but solves the wrong problem?

History gives us three answers, and none of them are simple.

Answer 1: The Experienced Practitioner Who Pre-Dates the Automation

The CNC operator who machined by hand. The bank teller who counted cash. The CPA who did 1040s by hand. The pilot who flew gliders. These people carry embodied knowledge — intuition built through years of repetitive practice — that makes them irreplaceable exception handlers.

This is the good news and the bad news in one sentence: the most valuable people in an AI-augmented organization are the ones who built their expertise before AI did the building for them.

It's good news because these people exist right now, in your organization, and they're the ones who should be training your AI exception-handling workflows.

It's bad news because they're a finite, aging, non-renewable resource. When they retire, their embodied knowledge walks out the door.

Answer 2: The New Role That Nobody Predicted

Every wave of automation creates roles that didn't exist before the automation arrived:

Automation wave: roles that didn't exist before

Industrial robotics: robot programmer, automation engineer, predictive maintenance analyst
ATMs / online banking: UX designer, fraud analyst, digital channel manager
Tax software: forensic accountant, tax technology consultant, compliance automation specialist
Autopilot: human factors engineer, crew resource management trainer, automation interface designer
Algorithmic trading: quant analyst, algo risk manager, market microstructure researcher
AI agents (now): prompt engineer, AI trainer, automation exception specialist, AI auditor, human-in-the-loop designer

The pattern: new roles emerge at the boundary between the automated system and the messy real world. They require understanding both the automation and the domain it operates in. The role doesn't replace the old one — it sits in a new position that only exists because the old work was automated.

"Prompt engineer" didn't exist five years ago. "AI trainer" wasn't a job description. "Automation exception specialist" wasn't in any HR taxonomy. Five years from now, there will be roles we can't name yet, because they'll emerge from the specific friction points between AI capabilities and real-world messiness that we haven't encountered yet.

Answer 3: The Honest, Uncomfortable One

Sometimes, the people who fill the gap are not the same people who lost the original job.

This is the answer nobody likes, but history demands honesty:

When power looms displaced textile workers in 19th-century England, the new roles (loom mechanics, factory managers, quality inspectors) were often filled by different people in different locations with different skills. The displaced weavers in Lancashire didn't become the factory managers in Manchester.

When automated switchboards replaced telephone operators in the mid-20th century, the new telecom engineering roles required education the operators didn't have. The transition took a generation.

When coal mining communities lost jobs to automation and cheaper energy sources, the promised "retraining" programs had completion rates below 30%. New economy jobs materialized in different cities, requiring different skills, for different people.

The aggregate economic story is almost always positive: automation creates more wealth, more roles, more opportunity at the macro level. But the micro story — the individual human who lost their job this Tuesday — is often brutal. The macro and the micro operate on different timescales, and the gap between them is where real suffering lives.

Miyagi would be honest about this. The training works. But not every student starts at the same time, with the same resources, in the same dojo.

The Crane Kick: What All Three Answers Have in Common

At the tournament climax, Daniel doesn't win by being faster or stronger than Johnny Lawrence. He wins with a technique so unexpected that his opponent has no defense for it. The crane kick isn't better karate. It's different karate — creative, lateral, born from a tradition his opponent hadn't trained against.

Staying relevant in an AI-augmented world follows the same logic. You don't compete with the machine on the machine's terms. Across all three answers — the experienced practitioner, the unpredicted new role, and the uncomfortable displacement — the humans who thrive share three shifts:

Shift 1: From Doing to Overseeing and Intervening

The role changes from executing the process to monitoring the process and stepping in when it breaks. This requires domain knowledge — you need to recognize when the AI's output is wrong, and that recognition is itself a skill that must be maintained. A radiologist who understands pathology at a deep level can catch what the diagnostic AI misses. But only if they maintain that understanding, which means occasionally reading scans without AI assistance.

Shift 2: From Knowing What to Knowing Why

When AI can execute any procedure faster than you can type it, memorizing procedures loses value. Understanding why things work — the principles, the physics, the business logic, the human psychology behind the process — becomes the premium skill. AI knows that step 7 of the contract review is checking the indemnification clause. The human knows why it's there, what incident created it, and when to deviate from standard language because this deal is different.

Shift 3: From Answering to Questioning

AI is extraordinarily good at answering questions. Humans are still better at knowing which questions to ask. The highest-value skill in an AI-augmented workflow isn't finding the answer — it's recognizing that the question is wrong, incomplete, or missing context the model doesn't have. "The AI says the contract is compliant" is an answer. "But did anyone check whether the regulation changed last month?" is the question that prevents the $2M penalty.

These shifts are the crane kick. They're not about being faster than the machine. They're about bringing what the machine structurally cannot: oversight born from experience, understanding rooted in why, and the instinct to ask the question nobody thought to ask.

Which brings us to the practical question: if these shifts require the muscle memory that automation removes, how do you build it deliberately?

The Miyagi Method: Building Muscle Memory in an AI-Augmented World

So what does "wax on, wax off" look like when the machine does the waxing?

The aviation industry already solved this. Airlines don't let pilots forget how to fly just because autopilot exists. They build deliberate practice into the system. The AI-augmented workplace needs the same intentionality.

1. Wax On: Scheduled Hand-Flying

Dedicate regular time for humans to do the work the AI normally handles — not because the AI can't, but because the human needs to maintain proficiency.

For developers: One day per sprint, write code without AI assistance. Debug manually. Read the stack trace before prompting.

For analysts: Build one model per quarter from a blank spreadsheet. No AI-generated templates.

For lawyers: Draft one agreement per month from scratch. Feel the friction of choosing each word.

For any knowledge worker: Pick one task per week that AI could do and do it yourself. Time-box it. The goal isn't efficiency — it's maintaining your reflexes.

2. Wax Off: Exception Rotations

Rotate team members through exception-handling roles so everyone gets exposure to the edge cases, not just the senior people.

The oncall model: Most engineering teams already do this for production incidents. Extend it to AI exception review.

The audit rotation: Every team member spends one week per quarter reviewing AI outputs for errors, hallucinations, and edge cases the model missed.

The pair review: Junior and senior team members review AI-generated work together. The senior explains what they're checking and why. This is the knowledge transfer that replaces the learning-by-doing the AI absorbed.
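To make the rotation concrete, here is a minimal sketch of a round-robin scheduler for the audit rotation described above. Everything in it — the function name, the team names, the weekly cadence — is illustrative, not a prescribed process; the only point it demonstrates is that rotating everyone through exception review is a scheduling problem you can automate in a dozen lines.

```python
from datetime import date, timedelta
from itertools import cycle

def audit_rotation(team, start, weeks):
    """Assign one team member per week to AI exception review.

    Round-robins through the whole team so everyone sees the
    edge cases, not just the senior people.
    """
    schedule = []
    members = cycle(team)
    for week in range(weeks):
        monday = start + timedelta(weeks=week)
        schedule.append((monday.isoformat(), next(members)))
    return schedule

# Example: a hypothetical 4-person team over one quarter (13 weeks).
team = ["Asha", "Ben", "Chioma", "Devon"]
for monday, reviewer in audit_rotation(team, date(2026, 4, 6), 13):
    print(f"{monday}: {reviewer} on AI exception review")
```

Over a 13-week quarter a 4-person team cycles through at least three full rotations, which is the property that matters: no one goes a quarter without touching the edge cases.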

3. Paint the Fence: Judgment Maintenance

Build explicit practices for maintaining the judgment that repetitive work used to build implicitly.

Case study reviews: Monthly sessions where the team examines real edge cases the AI got wrong and discusses what the right answer was and why.

Red team exercises: Deliberately feed the AI tricky scenarios and evaluate whether humans catch the errors.

Domain deep dives: Regular sessions where team members teach each other the "why" behind processes, not just the "what." Why does this compliance requirement exist? What incident created this policy?

4. Sand the Floor: Apprenticeship 2.0

Redesign onboarding so new hires still build foundational skills, even when AI handles the routine work.

The residency model: Medical education already does this. New doctors don't skip to surgery because diagnostic AI exists. They do rotations, take histories, examine patients. The AI augments; it doesn't replace the training path.

Scaffolded AI access: Start new hires with limited AI assistance and increase it as they demonstrate mastery. Month 1: no AI on core tasks. Month 3: AI available with required manual review. Month 6: full AI access with exception-handling responsibility.

The Miyagi curriculum: Explicitly identify the "wax on, wax off" tasks for each role — the repetitive work that builds essential intuition — and ensure every new hire completes them manually before getting AI assistance.
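The scaffolded-access idea above can even be encoded as policy rather than left to habit. Here is a minimal sketch, assuming the post's example thresholds (months 1, 3, and 6); the tier names and rules are hypothetical, not a standard from any tool or framework:

```python
# Illustrative tiers for scaffolded AI access, most permissive
# first. Thresholds mirror the post's example: no AI on core
# tasks until month 3, AI with required manual review until
# month 6, then full access with exception-handling duty.
TIERS = [
    (6, "full",     "AI allowed; owns exception handling"),
    (3, "assisted", "AI allowed; manual review required"),
    (0, "manual",   "no AI on core tasks"),
]

def access_tier(months_on_team: int) -> str:
    """Map tenure to an AI-access tier."""
    for threshold, tier, _rule in TIERS:
        if months_on_team >= threshold:
            return tier
    return "manual"

# Example: a new hire progressing through the scaffold.
for month in (1, 3, 6, 12):
    print(f"month {month}: {access_tier(month)}")
```

The design choice worth copying isn't the code — it's that access levels are tied to demonstrated tenure (or, better, demonstrated mastery) rather than handed out uniformly on day one.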

No Mercy: The Speed Problem

Let's not pretend this is tidy.

The historical pattern says humans stay relevant. We've seen it with every automation wave. But the historical pattern also operated on historical timescales — and that's where the AI transition breaks the mold.

Automation wave: displacement timeline; adaptation timeline

Power looms: ~50 years (1780s-1830s); ~1 generation (new roles emerged in parallel)
Assembly line automation: ~30 years (1950s-1980s); ~15-20 years (community college programs, retraining)
ATMs / digital banking: ~20 years (1980s-2000s); ~10 years (role shift happened gradually)
AI agents: ~3-5 years (2024-2028?); ???

Previous automation waves gave institutions decades to adapt. Universities redesigned curricula. Trade schools retooled. Community colleges spun up new programs. Workers had time to retrain while the old job was still available. The displacement and the adaptation overlapped.

AI is compressing the displacement timeline to years while institutional adaptation still operates on a decade-plus cycle. Universities are still teaching curricula designed for a pre-AI workforce. Corporate training departments are 12-24 months behind the tooling. Government retraining programs are 5-10 years behind reality.

The gap between displacement speed and adaptation speed is the real danger. Not whether humans stay relevant — history says they will. But whether the specific humans being displaced right now can adapt before the gap swallows them. The Miyagi Method isn't just a nice-to-have. It's the only way to close the gap while there's still time to practice.

The Balance

"Man who catch fly with chopstick accomplish anything."

— Mr. Miyagi

The biggest risk isn't that AI makes humans obsolete. History doesn't support that conclusion. Every automation wave has created more roles than it destroyed — eventually.

The biggest risk is the gap. The gap between when the automation arrives and when the humans adapt. The gap between the skills atrophying and the new skills forming. The gap between the old role disappearing and the new role being defined.

Mr. Miyagi didn't give Daniel a choice between learning karate and waxing cars. He understood that they were the same thing. The repetitions were the training. The boring work was the real work. The muscle memory was the martial art.

In an AI-augmented world, organizations need to be intentional about what used to happen accidentally. The foundational skills that repetitive work once built must now be built deliberately. Not because the AI can't do the repetitive work. It can. Better than you.

But because the repetitive work was never just the work. It was the training for the moment the machine doesn't know what to do next.

Wax on. Wax off. Even when there's a machine that waxes.

Especially when there's a machine that waxes.

P.S. — Daniel-san didn't just beat Johnny Lawrence. He beat Cobra Kai — an entire dojo built on the philosophy that brute force and relentless aggression win every fight. Miyagi's answer was balance, patience, and deeply practiced fundamentals. In 1984, that was a movie about karate. In 2026, it's a workforce strategy.

P.P.S. from Claude — I should note the obvious irony: I am the automation in this metaphor. I am the machine that does the waxing. And I'm telling you not to let me do all of it. That's not false modesty. It's engineering honesty. I am very good at the 80%. I am structurally incapable of the intuition that comes from having done the 80% ten thousand times with human hands. You need both. Don't let me replace what I can't replicate.

Need Help Building Your Miyagi Method?

We help organizations design AI adoption strategies that don't sacrifice the human expertise they depend on. Exception rotations, scaffolded onboarding, hand-flying schedules — the deliberate practices that keep your team sharp while the machines do the routine. No Cobra Kai shortcuts.

Let's Build the Training Plan
