
Patterns, Distribution, and Troubleshooting Your Claude Skills
- Stephen Jones
- AI
- March 25, 2026
This is Part 3 of a three-part series on building skills for Claude. Part 1 covered what skills are and why they matter. Part 2 walked through building and testing your first skill. Now we get into the stuff that separates a toy skill from a production one: proven patterns, distribution mechanics, and how to fix things when they break.
Five Proven Skill Patterns
After building and reviewing dozens of skills, I keep seeing the same handful of patterns emerge. Here are five that work well across different domains.
Pattern 1: Sequential Workflow Orchestration
Use this when you have a multi-step process that must happen in order. Think onboarding flows, deployment pipelines, or any checklist-driven workflow.
```markdown
## Steps
1. Use Salesforce:get_customer to pull customer record
2. Use Google-Workspace:create_folder to set up shared drive
3. Use Slack:send_message to notify the account team
4. Use Jira:create_epic to create onboarding epic with subtasks

## Validation
After each step, verify the action completed successfully
before proceeding. If any step fails, stop and report which
step failed and why.
```
The validation block is what makes this pattern robust. Without it, Claude will happily barrel through all four steps even if step one returned an error. Telling it to stop and report on failure gives you a circuit breaker.
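The circuit-breaker behavior is easy to see in ordinary code. Here is a minimal sketch, with hypothetical step names and return shapes standing in for the real MCP calls:

```python
def run_workflow(steps):
    """Run steps in order; stop and report on the first failure."""
    for name, step in steps:
        ok, detail = step()
        if not ok:
            return f"Stopped at '{name}': {detail}"
    return "All steps completed"

# Hypothetical stand-ins for the MCP calls in the pattern above.
steps = [
    ("Salesforce:get_customer", lambda: (True, "record pulled")),
    ("Google-Workspace:create_folder", lambda: (False, "permission denied")),
    ("Slack:send_message", lambda: (True, "sent")),  # never reached
]
result = run_workflow(steps)
# result reports the failure at the second step; steps three and four never run
```

The skill's validation block asks Claude to behave like this loop: check each return value, and short-circuit with a diagnostic instead of continuing blindly.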
Pattern 2: Multi-MCP Coordination
This is the pattern you reach for when a workflow spans multiple services, like a design-to-dev handoff.
```markdown
## Workflow: Design Handoff
1. Use Figma:get_file to export the approved design specs
2. Use Google-Drive:upload to store assets in the project folder
3. Use Linear:create_issue to create implementation tickets
   - Attach Figma links and asset URLs
   - Set priority based on sprint capacity
4. Use Slack:post_message to notify #engineering with ticket links
```
The key technique here is using fully qualified MCP tool names in the format `ServerName:tool_name`. This removes any ambiguity about which server Claude should call. If you just say “upload the file,” Claude has to guess which upload tool to use. If you say `Google-Drive:upload`, there is no guessing.
Pattern 3: Iterative Refinement
When output quality matters and a single pass is not good enough, build in a review loop.
```markdown
## Report Generation
1. Generate initial draft from data sources
2. Run quality checks:
   - All data points have sources cited
   - Executive summary under 200 words
   - No jargon without definitions
3. If checks fail, revise and re-check (max 3 iterations)
4. Format final output as PDF using the pdf skill
```
The max 3 iterations cap is important. Without it, Claude can get stuck in an infinite revision loop trying to satisfy a check it cannot pass. Always set an upper bound.
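The cap is just a bounded loop. A sketch, with hypothetical `passes_checks` and `revise` callables standing in for the real quality checks and revision step:

```python
MAX_ITERATIONS = 3

def refine(draft, passes_checks, revise):
    """Revise until checks pass or the iteration cap is hit."""
    for attempt in range(MAX_ITERATIONS):
        if passes_checks(draft):
            return draft, attempt
        draft = revise(draft)
    return draft, MAX_ITERATIONS  # cap hit: ship best effort, never loop forever

final, rounds = refine(
    "draft",
    passes_checks=lambda d: "sources:" in d,  # illustrative check only
    revise=lambda d: d + " sources: [1]",
)
```

Note the fallback branch: when the cap is hit you still return something, plus a signal that the checks never passed, so the caller can decide what to do next.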
Pattern 4: Context-Aware Tool Selection
Sometimes the right tool depends on runtime conditions. This pattern teaches Claude to make that decision.
```markdown
## Smart File Storage
Based on file characteristics, choose the optimal storage:
- Files < 1MB with frequent access → Google-Drive:upload
- Files > 10MB or archival → S3:put_object
- Sensitive documents (PII, financial) → Vault:store_secret
- Shared team assets → SharePoint:upload_file

Always log the storage decision and location to the project tracker.
```
This works because the decision criteria are concrete and measurable. Avoid vague criteria like “important files.” Claude cannot reliably judge importance, but it can check file size.
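A useful sanity test for your criteria: if you can express them as a plain function, Claude can follow them too. A sketch of the table above (the precedence order, sensitivity first, is my assumption; the pattern itself does not specify one):

```python
MB = 1_000_000

def choose_storage(size_bytes, sensitive=False, shared=False, frequent=False):
    """Mirror the decision list above with concrete, measurable criteria."""
    if sensitive:
        return "Vault:store_secret"
    if shared:
        return "SharePoint:upload_file"
    if size_bytes > 10 * MB:
        return "S3:put_object"
    if size_bytes < MB and frequent:
        return "Google-Drive:upload"
    return "S3:put_object"  # archival default
```

If you find yourself unable to write the branch condition (“if important…”), that is the signal the criterion is too vague for the skill as well.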
Pattern 5: Domain-Specific Intelligence
This is where skills really shine. You are injecting specialized knowledge that Claude does not have by default.
```markdown
## Financial Compliance Check
Before processing any payment transaction:
1. Verify transaction amount against daily limits:
   - Individual: $10,000
   - Business: $50,000
   - Requires additional approval above threshold
2. Check counterparty against sanctions lists:
   - Use Compliance:screen_entity with full name and jurisdiction
   - Any match = STOP and escalate to compliance team
3. Apply regional regulations:
   - EU: GDPR data handling for personal information
   - US: BSA/AML reporting for transactions > $10,000
   - APAC: Local currency conversion rules apply
```
This pattern turns Claude into a domain expert for your specific context. The compliance rules, the thresholds, the escalation paths. None of this is general knowledge. It is your organization’s knowledge, encoded in a format Claude can act on.
Problem-First vs Tool-First Design
Before you start building, there is a design philosophy decision that will determine whether your skill gets used or sits collecting dust.
Problem-first (recommended): Start with the pain point. What is the workflow that is slow, error-prone, or annoying? Design the skill around that pain, then select the tools that solve it.
Tool-first (common mistake): “I have a Slack MCP server, what can I build with it?” This approach leads to skills that are technically interesting but do not match how anyone actually works.
Problem-first design leads to better triggering because the description naturally contains the words people use when they hit that pain point. It leads to clearer instructions because the workflow is grounded in a real process. And it leads to skills people actually use because the skill solves a problem they already have.
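Concretely, a problem-first description reads like the user's own words. A hypothetical frontmatter sketch (the skill name and wording are invented for illustration):

```yaml
---
name: onboarding-kickoff
description: >
  Kicks off new-customer onboarding end to end: pulls the Salesforce
  record, sets up the shared drive, notifies the account team, and
  creates the Jira epic. Use when someone says "new customer signed,"
  "start onboarding," or "kick off onboarding."
---
```

A tool-first version (“Helps you use the Slack MCP server”) contains none of the words a user would actually say when they hit the pain point, so it rarely triggers.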
Distribution
Skills are only useful if they reach the people who need them. Here is how distribution works today.
Scope Levels
| Location | Path | Who Gets It |
|---|---|---|
| Enterprise | Managed settings | All org users |
| Personal | ~/.claude/skills/<name>/ | All your projects |
| Project | .claude/skills/<name>/ | This project only |
| Plugin | <plugin>/skills/<name>/ | Where plugin enabled |
When skills share a name, priority is: enterprise > personal > project. This means an enterprise admin can override a project-level skill, which is useful for enforcing organizational standards.
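To make the project scope concrete: a skill is just a directory containing a `SKILL.md`, placed at the path from the table above. This sketch writes a minimal, made-up one inside a repo:

```python
from pathlib import Path

# Project scope from the table above: .claude/skills/<name>/ inside the repo.
skill_dir = Path(".claude/skills/design-handoff")
skill_dir.mkdir(parents=True, exist_ok=True)
(skill_dir / "SKILL.md").write_text(
    "---\n"
    "name: design-handoff\n"
    "description: Demo skill for illustration only\n"
    "---\n"
    "## Workflow\n"
)
```

Because the directory lives in the repo, the skill travels with it: anyone who clones the project and opens Claude Code gets the skill automatically.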
Cross-Platform Reality
Here is something that trips people up: skills do not sync across platforms. You need to upload them separately to each.
- Claude.ai: Upload as a zip via Settings > Features (per-user)
- Claude Code: Place in the filesystem (personal or project directory)
- API: Upload via the `/v1/skills` endpoint (workspace-wide)
This is a friction point, but it also means you can have different skill configurations per platform if that is what your workflow requires.
Skills as an Open Standard
Skills follow the Agent Skills standard, which means the format is designed to be portable across AI tools, not just Claude. If you invest time building a well-structured skill today, that work should transfer as other tools adopt the standard.
Using Skills via the API
The /v1/skills endpoint lets you use skills programmatically, which is essential for automated pipelines.
List available skills:
```shell
# List available skills
curl "https://api.anthropic.com/v1/skills?source=anthropic" \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "anthropic-beta: skills-2025-10-02"
```
Use skills in a request with the `container.skills` parameter:

```json
{
  "model": "claude-sonnet-4-6-20250514",
  "max_tokens": 4096,
  "betas": ["code-execution-2025-08-25", "skills-2025-10-02"],
  "container": {
    "skills": [
      {
        "type": "anthropic",
        "skill_id": "pptx",
        "version": "latest"
      }
    ]
  },
  "messages": [{"role": "user", "content": "Create a quarterly review deck"}],
  "tools": [{"type": "code_execution_20250825", "name": "code_execution"}]
}
```
You can attach up to 8 skills per API request. Anthropic provides pre-built skills for common file formats: pptx, xlsx, docx, and pdf.
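In an automated pipeline it can help to assemble that request body programmatically, for example to vary the skill list per run. A sketch assuming the body shape shown above (`build_body` is a hypothetical helper; the model string is copied from the example):

```python
def build_body(skill_ids, prompt, model="claude-sonnet-4-6-20250514"):
    """Assemble a request body using container.skills, mirroring the JSON
    example above. Truncates to the 8-skill-per-request limit."""
    return {
        "model": model,
        "max_tokens": 4096,
        "betas": ["code-execution-2025-08-25", "skills-2025-10-02"],
        "container": {
            "skills": [
                {"type": "anthropic", "skill_id": s, "version": "latest"}
                for s in skill_ids[:8]  # API limit: 8 skills per request
            ]
        },
        "messages": [{"role": "user", "content": prompt}],
        "tools": [{"type": "code_execution_20250825", "name": "code_execution"}],
    }

body = build_body(["pptx", "xlsx"], "Create a quarterly review deck")
```

Enforcing the 8-skill cap in your own code keeps the failure local and obvious, rather than surfacing as an API error mid-pipeline.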
When to Use Which Platform
| Use Case | Best Platform |
|---|---|
| Interactive work, ad-hoc tasks | Claude.ai |
| Automated pipelines | API |
| Developer workflows | Claude Code |
| Team-wide deployment | API (workspace-wide) |
Troubleshooting Guide
Here are the six problems I see most often, along with how to fix them.
Skill Won’t Upload
- The file must be named exactly `SKILL.md` (case-sensitive)
- Check your YAML frontmatter: matching `---` delimiters, no unclosed quotes
- The skill name cannot contain “anthropic” or “claude”
- The description cannot contain angle brackets (`<` or `>`)
Skill Doesn’t Trigger
- Your description must include keywords users would naturally say. If nobody says “orchestrate my onboarding pipeline,” it will never trigger
- Run “What skills are available?” to verify the skill is loaded
- Check the character budget: skill descriptions share 2% of the context window (fallback: 16,000 chars). If you have too many verbose descriptions, some will get cut
- Run `/context` in Claude Code to check for excluded skills
- Try direct invocation with `/skill-name` to confirm the skill works at all
Skill Triggers Too Often
- Make the description more specific and narrow
- Add negative triggers: “Do NOT use for simple data exploration”
- Set `disable-model-invocation: true` in the frontmatter for manual-only activation
MCP Connection Issues
Run through this four-step checklist:
- Is the MCP server running? Check with `/mcp` in Claude Code
- Are tool names fully qualified? Use the `ServerName:tool_name` format
- Does the server have the right permissions? Check authentication and scopes
- Test the tool manually before adding it to a skill. If the tool does not work on its own, it will not work inside a skill
Instructions Not Followed
This is almost always a writing problem, not a technical one.
- Too verbose: Claude skims long instruction blocks. Cut the fluff
- Critical info buried in the middle: move the most important instructions to the top
- Ambiguous language: be specific. Say “use the Read tool” not “check the file”
- Keep `SKILL.md` under 500 lines. If you need more, move reference material to a `reference/` subdirectory
Large Context Issues
- Too many skills loaded at once will exceed the token budget
- Reduce the number of enabled skills or set `disable-model-invocation: true` on low-priority ones
- Move detailed reference material to a `reference/` subdirectory so it is only loaded when the skill is invoked
Quick Checklist
Before you ship a skill, run through this:
```markdown
## Before You Start
- [ ] Identified a specific, repeated workflow
- [ ] Documented the current pain points
- [ ] Listed the tools/MCP servers needed

## During Development
- [ ] SKILL.md with valid YAML frontmatter
- [ ] Description follows [what] + [when] + [capabilities] formula
- [ ] Instructions are specific and actionable
- [ ] Error handling for common failures
- [ ] Under 500 lines in SKILL.md

## Before Upload
- [ ] Trigger tests pass (activates when it should)
- [ ] Functional tests pass (produces correct output)
- [ ] Baseline comparison shows improvement
- [ ] Tested with target model (Haiku/Sonnet/Opus)

## After Upload
- [ ] Skill appears in available skills list
- [ ] Real-world test with actual data
- [ ] Edge cases handled gracefully
- [ ] Team members can use it successfully
```
Resources
- Official Skills Documentation
- Agent Skills API Guide
- Agent Skills Best Practices
- Example Skills Repository
- Agent Skills Standard
- Anthropic Engineering: Agent Skills
Wrapping Up the Series
This three-part series covered everything from understanding what skills are (Part 1), to building and testing your first skill (Part 2), to the patterns and operational knowledge you need to run skills in production (Part 3). Skills are still early, but the fundamentals are solid: write clear instructions in markdown, test them like you would test code, and distribute them where your team already works. The best skill you can build is the one that solves a problem you hit every single day. Start there.


