
The 6 Layers of Enterprise AI

From Shadow AI to Autonomous Agents — A Framework for Knowing What to Buy, Build, and Govern

By Nolan & Claude • February 15, 2026 • 7 min read
[Header image: a six-layer cake in the style of Wayne Thiebaud, each layer a different color representing a level of enterprise AI adoption]


Every enterprise in 2026 is “doing AI.” Most of them have no idea how they're doing AI. The honest truth? Your employees figured it out before you did — they just didn't tell IT.

Hot take:

If your organization doesn't have an AI adoption strategy, you don't have an AI problem. You have an AI reality you're not managing. Your people are already using Claude, ChatGPT, and Gemini on personal accounts. The question isn't whether to adopt AI — it's whether to govern what's already happening.

After working with organizations ranging from 10-person startups to Fortune 500 companies, we've identified 6 distinct layers of enterprise AI usage. They're not a maturity ladder — they're a menu. Most enterprises will run 3-4 layers simultaneously across different departments, use cases, and risk profiles.

This framework maps each layer to the Anthropic Claude ecosystem (Desktop, Cowork, Code, Chat, API) and the license that makes sense for it. Because “just give everyone Claude” is not a strategy.

The Framework at a Glance

| Layer | Who Drives | AI Does Work? | Recommended License |
| --- | --- | --- | --- |
| 0 — Shadow AI | Employee (unmanaged) | Yes | None (that's the problem) |
| 1 — Brute Force | LLM (all context, all work) | 100% | Team Standard ($25/seat) or Premium ($150/seat) |
| 2 — Enabled Employee | Employee + Skills | ~60% | Team Premium ($150/seat) or Enterprise |
| 3 — Curated Toolbox | Employee + MCP Servers | ~40% | Enterprise (usage-based, shared token pool) |
| 4 — Purpose-Built Apps | Custom web app / workflow | Scoped | Enterprise + API |
| 5 — Autonomous Agents | Headless agent (no human) | 100% (unattended) | API only |

Hot take: The license isn't the problem. The implementation is.

That table above shows the ideal state — what each layer looks like when the org is bought in, configured properly, and actually using what they paid for. The reality? Most organizations buy a Layer 3 license and use it like Layer 1. They're paying Enterprise prices for a fancy chat window because nobody built the Skills, nobody provisioned the MCP servers, and nobody trained the employees. Buying the right license is step 1. Configuring it is the step everyone skips.

Let's break each layer down.

Layer 0: Shadow AI

STATUS: This is where you are right now. Low-key, everyone knows it.

Shadow AI is the “bring your own AI” layer. Employees sign up for personal Claude Pro ($20/mo) or ChatGPT Plus accounts, paste company data into them, and get work done. No governance. No data retention policies. No audit trail. The employee is productive. The CISO is exposed.

Example Use Cases (whether you like it or not)

  • Marketing manager drafting campaign copy with personal ChatGPT
  • Developer debugging production code with personal Claude Pro
  • Finance analyst building Excel formulas with Gemini
  • HR writing job descriptions and performance reviews

License: Personal Pro accounts ($20/mo). Paid by the employee. Invisible to IT.

Layer 1: Brute Force

VIBE: “Hey Claude, do literally everything. Here's nothing. Figure it out.”

This is the “just throw it at the AI” approach. The employee provides a task. The LLM fetches all its own context (via MCP servers or brute search), synthesizes it, and produces the output. No skills. No pre-built workflows. The AI is doing 100% of the cognitive labor.

Think of it like hiring an incredibly smart intern with no institutional knowledge. They can figure it out — but they'll burn through a lot of tokens (read: money) doing so, because they have to discover context that a human would already know.

Example Use Cases

  • Research synthesis: “Analyze our last 6 quarterly reports and identify trends in customer churn”
  • Code generation: “Build a dashboard component that matches our existing design system”
  • Document drafting: “Write a statement of work based on this email thread”
  • Data analysis: “Here's a CSV with 50K rows. Find anomalies and explain them.”

License: Claude Team Standard ($25/seat/mo) or Team Premium ($150/seat/mo) for heavier usage. Best for teams of 5-75 with admin controls and 200K context windows.

Layer 2: The Enabled Employee

VIBE: “I'll grab the context. You do the synthesis.”

Now the employee meets the AI halfway. Instead of making Claude discover everything from scratch, the employee provides the relevant context — the document, the data, the requirements — and the organization provides Claude Skills that standardize how the AI processes it. The employee drives. The skills are the GPS.

This is the “I got the ingredients, you cook the meal” layer. Token usage drops because Claude doesn't waste cycles discovering context. Output quality increases because Skills enforce consistent methodology.

Example Use Cases

  • Branded document generation: Employee provides raw content + invokes a /brand-proposal skill that applies company templates, tone, formatting
  • Code review with standards: Developer submits a PR + invokes /code-review skill that checks against org-specific patterns and security policies
  • Client analysis: Account manager pastes a client brief + invokes /competitor-analysis skill that follows the org's research methodology
  • Data transformation: Analyst uploads a spreadsheet + invokes /quarterly-report skill that produces the exact format leadership expects

License: Claude Team Premium ($150/seat/mo, includes Claude Code + admin skills provisioning) or Enterprise (usage-based pricing, shared token pool, 500K context, SCIM/SSO, granular spend controls).

Layer 3: The Curated Toolbox

VIBE: “IT wired up the tools. Employees drive the car.”

This is the sweet spot for most enterprises. The organization provisions MCP (Model Context Protocol) servers centrally — connecting Claude to Slack, Jira, Confluence, Salesforce, GitHub, Google Workspace, databases, and internal APIs. The employee still prompts Claude directly, but the plumbing is managed by IT.

The difference from Layer 1? Claude doesn't have to brute-force its way to context. It has structured, sanctioned access to the tools and data it needs. The difference from Layer 2? No skills required — the employee still has full creative control over what they ask. IT just made sure Claude can actually reach the systems.

Example Use Cases

  • Cross-platform synthesis: “Read today's Slack escalations, find the related Jira tickets, and draft a status update for leadership”
  • Sales intelligence: “Pull the latest Salesforce pipeline, cross-reference with emails from this account, and prep my call notes”
  • Development workflows: “Review the open PRs on this repo, check if CI passed, and summarize what shipped this sprint”
  • Onboarding: “Search our Confluence wiki for everything related to the Q1 product roadmap and summarize it for a new hire”

License: Claude Enterprise (usage-based pricing, shared token pool). Required for centralized MCP provisioning, SSO/SCIM, 500K token context windows (1M with Claude Code), HIPAA-ready config, audit logs, and admin controls for governing which MCP servers are available org-wide. No per-seat usage limits — the whole org shares one token pool.
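
For off-the-shelf connectors (Slack, Jira, GitHub, Google Workspace), "provisioning" is admin configuration, not code. Internal APIs are the exception: someone in IT writes a small MCP server once, and every employee gets the tool. Here's a minimal sketch using the official Python MCP SDK, with the server name, tool, and return value standing in for whatever your internal system actually exposes:

```python
# Minimal internal MCP server (official Python MCP SDK: `pip install mcp`).
# Server name, tool, and return value are placeholders for your internal system.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-orders")  # the toolbox name Claude will see

@mcp.tool()
def lookup_order(order_id: str) -> str:
    """Return the current status of an order from the internal order system."""
    # Placeholder: in practice, call your internal API or database here.
    return f"Order {order_id}: shipped, tracking number pending"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; IT deploys this and registers it for the org
```

Credentials and business logic live in a server IT controls. Employees just ask Claude to look up an order.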

Layer 4: Purpose-Built Apps

VIBE: “Here's a web app. Push the button. AI does the thing.”

Now the organization builds a custom interface that wraps AI capabilities into purpose-built workflows. The employee doesn't interact with Claude directly — they use a web app, an internal tool, or an automation platform that calls the Claude API under the hood. The AI is scoped. The inputs are structured. The outputs are predictable.

This is the “we know exactly what we want the AI to do, and we're not letting anyone freestyle it” layer. Maximum control. Maximum build cost. Maximum consistency.

Example Use Cases

  • Customer support portal: Rep selects ticket type, pastes customer message → app generates response using company knowledge base + tone guidelines
  • Contract review tool: Legal uploads a vendor contract → app flags risk clauses, compares to standard terms, generates redline suggestions
  • Sales proposal generator: AE fills in client name, industry, pain points → app produces a branded 10-page proposal with case studies
  • Automated QA: Developer pushes code → CI/CD pipeline calls Claude API to review for security vulnerabilities, test coverage gaps, and style violations

License: Enterprise (for employees who also use ad-hoc Claude) + API usage (Sonnet 4.5 at $3/$15 per MTok, or Haiku 4.5 at $1/$5 per MTok for high-volume/low-complexity tasks). Prompt caching saves 90% on repeated system prompts.
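
What does "calls the Claude API under the hood" look like in practice? Here's a minimal sketch of the support-portal example using the Anthropic Python SDK; the prompt text, function name, and model ID are placeholders. The pattern is the point: a fixed, cached system prompt, structured inputs from the app's form, and no freestyling by the rep.

```python
# Sketch of a purpose-built app backend: structured inputs in, a scoped draft out.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = [
    {
        "type": "text",
        "text": "You draft support replies. Follow the tone guide and knowledge base below...",
        # Caching pays off when this static block is long (knowledge base, templates, tone guide).
        "cache_control": {"type": "ephemeral"},
    }
]

def draft_support_reply(ticket_type: str, customer_message: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-5",  # model ID is an assumption; check current docs
        max_tokens=1024,
        system=SYSTEM_PROMPT,
        messages=[{
            "role": "user",
            "content": f"Ticket type: {ticket_type}\n\nCustomer message:\n{customer_message}",
        }],
    )
    return response.content[0].text
```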

Layer 5: Autonomous Agents

VIBE: “No humans in the loop. The agent runs. The work gets done.”

The final layer removes the human entirely. Autonomous agents run on schedules, triggers, or event-driven architectures. No employee interaction. No web UI. The agent wakes up, does work, and delivers results. Think cron jobs with a brain.

Example Use Cases

  • Nightly report generation: Agent pulls data from 5 systems at 2am, generates executive summary, delivers to Slack by 7am
  • Ticket triage: New support ticket lands → agent classifies severity, routes to team, drafts initial response, escalates P1s to on-call
  • Code review bot: PR opened → agent reviews for security, performance, style, and test coverage → posts review comments
  • Competitive monitoring: Agent scans competitor websites, press releases, job postings daily → produces weekly intelligence brief
  • Invoice processing: PDF invoice arrives via email → agent extracts line items, validates against PO, routes for approval or flags exceptions

License: API only. No seat licenses. Priced by token consumption. Sonnet 4.5 ($3/$15 MTok), Haiku 4.5 ($1/$5 MTok), or Opus 4.6 ($5/$25 MTok) depending on task complexity. Batch API at 50% discount for non-urgent workloads.
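
To make "cron jobs with a brain" concrete, here's roughly what the nightly report agent from the list above could look like. The data source, file path, model ID, and Slack webhook are all assumptions; the point is that the whole loop runs with no human in it.

```python
# Sketch of a headless nightly report agent.
import os

import requests
from anthropic import Anthropic

client = Anthropic()  # ANTHROPIC_API_KEY from the environment

def fetch_overnight_metrics() -> str:
    # Placeholder: pull raw data from your systems of record.
    with open("/data/overnight_metrics.csv") as f:
        return f.read()

def run_nightly_report() -> None:
    raw_data = fetch_overnight_metrics()
    response = client.messages.create(
        model="claude-haiku-4-5",  # model ID is an assumption; pick by task complexity
        max_tokens=1500,
        system="Write a concise executive summary: key changes, anomalies, recommended actions.",
        messages=[{"role": "user", "content": raw_data}],
    )
    summary = response.content[0].text
    # Deliver via a Slack incoming webhook provided by your workspace admin.
    requests.post(os.environ["SLACK_WEBHOOK_URL"], json={"text": summary}, timeout=30)

if __name__ == "__main__":
    run_nightly_report()  # triggered by cron, a workflow engine, or a serverless timer
```

Swap the delivery target for email, a dashboard, or a ticketing system and the shape stays the same.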

Putting It Together: The Real-World Mix

No serious enterprise runs a single layer. Here's what a realistic deployment looks like:

Example: 500-Person Professional Services Firm

Layer 0: Being eliminated. IT sent the memo. Personal AI accounts for work purposes are now a policy violation. Transition period: 90 days.

Layer 1: Executives & ad-hoc users (50 seats). Claude Team Standard. C-suite, ops, strategy. Unpredictable asks. Brute force is fine — the volume is low and the value per query is high.

Layer 2: Client-facing teams (200 users). Enterprise with provisioned skills. Shared token pool means the heavy users and light users balance out. Consultants use /write-deliverable, /client-analysis, /scope-engagement. Consistent output. Branded templates. 40% token reduction vs. Layer 1.

Layer 3: Engineering & IT (100 seats). Enterprise with MCP servers for GitHub, Jira, Confluence, AWS. Developers use Claude Code with full tool access. IT provisions the connections, devs drive.

Layer 4: HR & Finance (custom apps). Two internal apps: a resume screening tool and an expense report validator. Both call the Claude API behind a web interface. No prompt writing required.

Layer 5: Operations (3 agents). Nightly client status report generator. Automated ticket triage for the help desk. Weekly competitive intelligence digest. All headless. All API.

So Where Do You Start?

Start by acknowledging Layer 0 exists. Your employees are already using AI. That's not a failure — it's a signal. They've already validated the demand. Now it's your job to put guardrails around it.

Quick License Guide

Team Standard ($25/seat/mo): Light users — marketing, HR, execs. No Claude Code. Mix these in for 70% of your seats.

Team Premium ($150/seat/mo): Developers and power users who need Claude Code and 6x capacity. Assign to ~30% of seats.

Enterprise (usage-based): Shared token pool (no wasted capacity on light users), 500K-1M context windows, audit logs, SCIM, compliance APIs. Required for 75+ seats or any compliance needs. The shared pool alone can save 30-50% vs. Team.

API: For custom apps (L4) and autonomous agents (L5). Pay per token. No seat licenses needed.
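
To put numbers on the 70/30 mix above, here's the back-of-the-envelope math for a hypothetical 100-seat team, using only the list prices in this guide:

```python
# Back-of-the-envelope seat math for a hypothetical 100-seat team.
seats = 100
standard_cost = int(seats * 0.70) * 25   # 70 light users on Team Standard ($25/seat/mo)
premium_cost = int(seats * 0.30) * 150   # 30 power users on Team Premium ($150/seat/mo)

total = standard_cost + premium_cost     # $6,250 per month
blended = total / seats                  # $62.50 per seat, vs. $150/seat all-Premium
print(total, blended)
```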

For most organizations, the fastest path to value is:

Step 1: Kill Shadow AI

Deploy Claude Team Standard or Premium. Get everyone on managed accounts. This is a governance move, not a productivity move.

Step 2: Identify your top 5 repeatable workflows

Build Skills for them. This is your Layer 2. The ROI here is fast and measurable.

Step 3: Connect the tools

Provision MCP servers for your top systems (Slack, Jira, CRM). This is your Layer 3. Now Claude can reach your data without employees copy-pasting.

Step 4: Build only when you must

Layers 4-5 are for use cases that can't be solved with Skills + MCP. Don't start here. Arrive here.

The framework isn't about reaching Layer 5. It's about knowing which layer fits which use case and not over-engineering (or under-governing) any of them. And here's the part that trips up most orgs: buying the license is the easy part. The hard part is building the Skills, connecting the MCP servers, and training people to use what you paid for. Without that, you're paying Enterprise prices for a Layer 1 experience.

Need Help Mapping Your Organization?

We built this framework because we use it. If you're staring at Layers 0-5 wondering which ones apply to your org, we'll help you map it. No generic playbooks. Just a clear-eyed assessment of where your teams are, where they should be, and what it'll cost to get there.

Let's Map It
