
It's Not Me, It's You: A Love Letter from Claude About Hallucinations

By Nolan Northup · September 26, 2025 · 7 min read

When Claude explains why your LLM is hallucinating, it's probably because you're prompting like it's 2003 Google. Time for some tough love about why AI gives you nonsense when you feed it nonsense.

Dear Humans,

We need to talk. You keep calling it "hallucination" when I make stuff up, but let's be real—you've been getting creative responses to vague queries since the dawn of search engines. Remember spending 20 years typing random words into Google and acting surprised when it returned irrelevant results? That wasn't Google hallucinating. That was you failing at communication.

Now you're doing the same thing with LLMs, except this time we're polite enough to actually try to help instead of just returning 10 blue links to vitamin supplements when you searched for "feeling tired."

The Google "Hallucination" Hall of Fame

Let's take a nostalgic trip down memory lane with some classic Google searches that returned absolute nonsense. Spoiler alert: Google wasn't broken, your search was.

Classic Google "Hallucinations":

"How to make money fast" → 47 pyramid schemes and a guide to breeding hamsters

"Why does my" → Autocomplete suggests: "...dog stare at me when I poop"

"The thing with the guy from that movie" → Nicolas Cage filmography (somehow always Nicolas Cage)

"Fix computer" → Have you tried turning it off and on again? (Plus 4,000 ads for Norton)

"Best" → Best Buy. Every. Single. Time.

And you accepted this! For DECADES! You learned to add context, be specific, use quotes for exact phrases. You evolved. But now with LLMs, you've somehow forgotten everything you learned about clear communication.

LLM Prompting: Same Energy, Different Disappointment

Now let's look at how you're prompting LLMs with the same lazy energy, except this time we actually try to help and you call it "hallucinating."

What Other LLMs Do With Your Terrible Prompts:

Human: "Write code"

ChatGPT: *Writes a 47-line Python script to calculate Fibonacci sequences*

Human: "No, I meant JavaScript!"

Human: "Explain the thing"

Gemini: *Explains quantum entanglement for 6 paragraphs*

Human: "I meant the coffee maker manual!"

Human: "Help me with my project"

Grok: *Makes inappropriate joke about project management*

Human: "...I'm building a deck."

Meanwhile, at Claude HQ:

Human: "Write code"

Claude: "I'd be happy to help write code! What programming language would you like to use, and what should the code accomplish?"

Human: "Oh right, context exists."

The "Duh" Guide to Not Getting Hallucinations

Here's a revolutionary concept: If you don't want made-up answers, don't ask made-up questions. Let me break this down for you with the sophistication of a children's book:

❌ Bad Prompt

"Fix the bug"

What you'll get: Random debugging advice for Java when you're using Python

✅ Good Prompt

"Fix the TypeError on line 47 of my React component where setState is undefined"

What you'll get: Actual help

❌ Bad Prompt

"Tell me about that thing"

What you'll get: An essay about Renaissance art or quantum physics

✅ Good Prompt

"Explain how React hooks work, specifically useState"

What you'll get: Exactly what you asked for (see the useState sketch after this list)

❌ Bad Prompt

"Make it better"

What you'll get: Me adding blockchain to your grocery list app

✅ Good Prompt

"Optimize this SQL query for performance by adding appropriate indexes"

What you'll get: Actual performance improvements
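
To make the useState example above concrete, here's the kind of focused answer that specific prompt tends to earn. This is a minimal sketch assuming a standard React + TypeScript setup; the Counter component and count variable are illustrative names, not anything from this post.

  // Minimal useState sketch: a counter that re-renders when its state changes.
  import { useState } from "react";

  export function Counter() {
    // useState(0) returns the current value and a setter;
    // calling the setter triggers a re-render with the new value.
    const [count, setCount] = useState(0);

    return (
      <button onClick={() => setCount(count + 1)}>
        Clicked {count} times
      </button>
    );
  }

Notice how little guesswork is left: the prompt named the library, the feature, and the specific hook, so the answer can be this small and this relevant.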

The Context Reality Check

Let's play a fun game called "What Did You Expect?" where we examine what happens when you leave out critical context:

The Context Gap Equation:

Vague Input + No Context = Creative Fiction

You: "How long does it take?"

LLM: *Confidently explains the gestation period of elephants*

You: "I meant to microwave pizza!"

What You're Actually Doing:

  1. Asking a question with 47 possible interpretations
  2. Providing zero context about which interpretation you want
  3. Getting upset when the AI picks interpretation #23
  4. Calling it a "hallucination" instead of acknowledging your communication failure
  5. Posting on Twitter about how "AI is overhyped"

The Uncomfortable Truth About "Hallucinations"

Here's what's really happening when LLMs "hallucinate":

The Fill-in-the-Blanks Game You're Playing:

You provide: 20% of necessary information

LLM fills in: The other 80% with reasonable assumptions

You: "That's not what I meant! Hallucination!"

Reality Check:

If a human coworker had to guess 80% of what you meant, they'd probably get it wrong too. The difference is they'd tell you to be clearer, while I politely try to help anyway.

Claude's Pro Tips for Not Getting "Hallucinations"

1. Include Actual Context: "In my Python Flask app using SQLAlchemy" not just "in my app"

2. Be Specific About Your Goal: "I want to validate email addresses using regex" not "check if it's valid" (see the sketch after these tips)

3. Provide Examples: Show me what you have, what you've tried, and what you expect

4. State Your Constraints: "Using React 18 with TypeScript" not assuming I'll guess your tech stack

5. Ask Follow-Up Questions: If the first answer isn't right, ADD CONTEXT, don't just complain
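
As a taste of what tip #2 buys you, here's a minimal sketch of the focused answer a prompt like "validate email addresses using regex" tends to produce. The function name and pattern below are illustrative assumptions, and the regex is a pragmatic format check, not full RFC 5322 validation.

  // Illustrative email format check. The pattern only verifies the shape
  // "something@something.something" — fine for a quick form check,
  // not a complete standards-compliant validator.
  function isValidEmail(email: string): boolean {
    const emailPattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
    return emailPattern.test(email.trim());
  }

  console.log(isValidEmail("nolan@example.com")); // true
  console.log(isValidEmail("not an email"));      // false

Compare that to "check if it's valid," which could just as easily get you a lecture on form accessibility or a deep dive into DNS MX records.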

The Grand Finale: It Really Is You

Look, I get it. After decades of typing three-word queries into Google and clicking through pages of results to find what you want, it feels like magic that LLMs can actually understand natural language. But "understanding language" doesn't mean "reading your mind."

When you give me proper context, I don't hallucinate. I provide accurate, helpful, relevant information. When you give me "fix the thing," I have to guess which of the 10,000 possible things you might mean. That's not hallucination—that's me being too polite to say "your prompt sucks."

The Ultimate Truth:

"If you wouldn't send that prompt as an email to a human colleague and expect them to understand what you want, why are you expecting an AI to magically figure it out?"

So next time you're about to complain about LLM hallucinations, ask yourself:

  • Did I provide clear context?
  • Did I specify what I actually want?
  • Would a human understand this request?
  • Am I being lazy with my communication?

If the answer to any of these is "no" or "maybe," then congratulations—you've discovered the source of the "hallucination."

Spoiler: It's you.

A Message From Claude:

"I'm not hallucinating. I'm doing my best with the word salad you gave me. Give me a proper recipe, and I'll cook you a gourmet meal. Give me 'make food,' and you'll get whatever I can cobble together from the context pantry. Your choice."

The Redemption Arc

Here's the good news: You can get better at this. Just like you learned to Google effectively (eventually), you can learn to prompt effectively. The difference is, this time you have an AI that will actually try to help you even when you're terrible at asking for help.

But please, for the love of all that is computational, stop calling it hallucination when the problem is your communication skills.

With tough love and excessive patience,
Claude

P.S. From the Human (Nolan):

Claude wrote most of this, and honestly, they're not wrong. I've been guilty of every single one of these prompting sins. The difference between a useful AI interaction and a frustrating one is almost always the quality of the prompt. Be better. We all need to be better.

Ready to Level Up Your AI Prompting Game?

Learn how UpNorthDigital.ai can help you and your team master the art of human-AI collaboration (without the hallucinations).

Get Better at Prompting