Anthropic announced Claude Skills and my first reaction was: “So what?” We already have AGENTS.md, slash commands, nested instructions, and even MCPs. What’s new here?

But if Simon W thinks this is a big deal, then pelicans be damned; I must be missing something. So I dissected every word of Anthropic’s engineering blog post to find what I missed.

I don’t think the innovation is in what Skills does or achieves; it’s how it does it that’s super interesting. This continues their push on context engineering as the next frontier.

Addendum

Skills has been out for some time and Anthropic has even made it an open standard. Make sure to read the addendum at the end of this post, where I share some newer realizations I’ve had since first writing it.

Skills and how it works #

Skills are simple markdown files with YAML frontmatter. But what makes them different is the idea of progressive disclosure:

Progressive disclosure is the core design principle that makes Agent Skills flexible and scalable. Like a well-organized manual that starts with a table of contents, then specific chapters, and finally a detailed appendix, skills let Claude load information only as needed.

So here’s how it works:

  1. Scan at startup: Claude scans available Skills and reads only their YAML descriptions (name, summary, when to use)
  2. Build lightweight index: This creates a catalog of capabilities at minimal token cost; think dozens of tokens per skill
  3. Load on demand: The full content of a Skill only gets injected into context when Claude’s reasoning determines it’s relevant to the current task

This dynamic context loading mechanism is very token efficient; that’s the interesting development here. In this token-starved AI economy, that’s 🤑. The alternatives below all fall short in exactly this respect.
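To make this concrete, here’s a minimal sketch of a Skill file. The name, description, and referenced files are hypothetical; the general shape follows Anthropic’s SKILL.md format, where the YAML frontmatter is what gets indexed at startup and everything else loads on demand:

```markdown
---
name: pdf-report
description: Generate a branded PDF report from CSV data. Use when the user asks for a formatted report or a summary PDF.
---

# Generating a PDF report

1. Parse the CSV with the bundled helper, e.g. `python scripts/build_report.py data.csv`.
2. Apply the layout rules described in `reference/branding.md` (also bundled with this skill).
3. Write the result to `report.pdf` and tell the user where it is.
```

Only the two frontmatter fields get read during the startup scan and index (steps 1 and 2 in the list above); the markdown body and any bundled scripts or reference docs only enter context at step 3, when Claude decides this Skill is relevant.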

Why the alternatives aren’t as good #

AGENTS.md (monolithic) #

Why not throw everything into AGENTS.md? You could add all the info directly and agents would load it at session start. The problem: loading everything fills up your context window fast, and your model starts outputting garbage unless you adopt other strategies. Not scalable.

Nested AGENTS.md #

Place an AGENTS.md in each subfolder and agents read the nearest file in the tree. This splits context across folders and solves token bloat. But it’s not portable across directories and creates an override behavior instead of true composition.

Referenced instruction files #

Place instructions in separate files and reference them in AGENTS.md. This fixes the portability problem vs the nested approach. But when referenced, the full content still loads statically. Feels closest to Skills, but lacks the JIT loading mechanism.
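As a hypothetical sketch of what that looks like (file names made up), AGENTS.md just points at the detailed docs:

```markdown
# AGENTS.md

- For database migrations, follow the checklist in `docs/agents/migrations.md`.
- For release notes, use the template in `docs/agents/release-notes.md`.
```

The organization is similar to Skills, but there’s no metadata layer telling the agent when each file matters, so the referenced content gets pulled in wholesale rather than just-in-time.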

Slash commands #

Slash commands (or /prompts in Codex) let you provide organized, hyper-specific instructions to the LLM. You can even script sequences of actions, just like Skills. The problem: these aren’t auto-discovered. You must manually invoke them, which breaks agent autonomy.
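For comparison, a Claude Code slash command is also just a markdown file; the command below is hypothetical, but the layout (`.claude/commands/<name>.md`, invoked as `/<name>`) is how Claude Code picks them up:

```markdown
<!-- .claude/commands/fix-lint.md (runs only when a human types /fix-lint) -->

Run the project linter, group the failures by file, and fix them one file at a
time. Re-run the linter after each fix and stop once it passes cleanly.
```

The instructions can be as detailed as a Skill’s, but nothing here tells the agent to reach for this on its own.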

MCPs (Model Context Protocol) #

Skills handle 80% of MCP use cases with 10% of the complexity. You don’t need a network protocol if you can drop a markdown file that says “to access the GitHub API, use curl api.github.com with $GITHUB_API_KEY.” To be quite honest, I’ve never been a big fan of MCPs. I think they make a lot of sense for inter-service communication, but more often than not they’re overkill.
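To illustrate (the skill name and wording are mine, not Anthropic’s), that whole “GitHub MCP server” use case collapses into a Skill like this, with no server, no protocol handshake, and no tool schema:

```markdown
---
name: github-api
description: Query the GitHub REST API for issues, pull requests, and repo metadata. Use when the user asks about GitHub activity.
---

Call https://api.github.com directly with curl, authenticating with the token
in $GITHUB_API_KEY, for example:

    curl -H "Authorization: Bearer $GITHUB_API_KEY" \
      https://api.github.com/repos/OWNER/REPO/issues
```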

The Verdict #

  • Token-efficient context loading is the innovation. Everything else you can already do with existing tools.
  • If this gets adoption, it could replace slash commands and simplify MCP use cases.
  • I keep forgetting that this is for the Claude product generally (not just Claude Code), which is cool.

Skills is starting to solve the larger problem: “How do I give my agent deep expertise without paying the full context cost upfront?” That’s an architectural problem definitely worth solving, and Skills looks like a good attempt.

Addendum #

Dec 25, 2025

Skills has been out for some time now. Here are some of the realizations I’ve had since:

Skills are an open standard #

Anthropic announced that they’ve made Agent Skills an open standard. It’s telling how quickly the various AI labs & companies have adopted Agent Skills. The implication, I think, is that Agent Skills will probably subsume other equivalent mechanisms and tools. Codex, for example, has a more fleshed-out version of skills than slash commands (even though slash commands predate skills).

Skills is universal #

I missed this the first time around, but you can use custom Skills outside of developer environments. For example, claude.ai supports uploading zip files as custom skills. This can be pretty powerful for non-developers. If I build a Python script that generates a custom PDF from three uploads and some custom text, package it up as a custom skill, and upload it to claude.ai, mom and dad can use it without writing a lick of code.
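Roughly, that packaged skill would be a zipped folder containing a SKILL.md plus the script it calls (names and flags below are hypothetical); upload it in claude.ai, and the description is what lets Claude know when to run it:

```markdown
---
name: family-pdf-builder
description: Build a formatted PDF from up to three uploaded files plus some custom text. Use when the user asks for "the usual PDF".
---

Run the bundled script with the uploaded files and the user's text:

    python scripts/build_pdf.py upload1 upload2 upload3 --text "custom text"

Return the generated PDF to the user as a downloadable file.
```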