Organization Skills teach your Char agent what it needs to know about your business. Rather than building custom AI models or fine-tuning, you write instructions in plain Markdown that the agent reads and follows.

The Problem Skills Solve

General-purpose AI models know a lot, but they don’t know your business. They can’t answer questions about your return policy, your internal workflows, or how your product works. You could paste instructions into every conversation, but that’s wasteful and hard to maintain.

Skills solve this by giving the agent a library of domain knowledge it can draw from. When a customer asks about returns, the agent recognizes this matches a skill and loads the relevant instructions. The knowledge lives in one place, stays up to date, and only uses context when needed.

Why SKILL.md Format

Char uses the AgentSkills specification, an open format originally developed by Anthropic and now adopted across the AI ecosystem. This matters for several reasons. First, portability. A skill you write for Char works in Claude Code, Cursor, and other compatible tools. You’re not locked into a proprietary format. Second, simplicity. Skills are Markdown files with YAML frontmatter—the same format developers already use for documentation, blog posts, and configuration. There’s no special syntax to learn, no compilation step, no deployment process. Third, version control. Skills are text files. You can store them in git, review changes in pull requests, and track who modified what and when. This is particularly valuable for regulated industries where audit trails matter.
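To make the format concrete, here is a minimal example of what a SKILL.md file might look like. The frontmatter fields shown (name and description) follow the AgentSkills convention, and the return-policy content itself is purely illustrative:

```markdown
---
name: return-policy
description: Explains the return and refund policy, including eligibility windows and how to start a return.
---

# Return Policy

Customers may return unworn items within 30 days of delivery.

## How to start a return

1. Ask the customer for their order number.
2. Confirm the item is within the 30-day window.
3. Generate a prepaid return label and share the link.
```

Everything above the second `---` is metadata the agent can index; everything below is the instructions it follows once the skill is loaded.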

Progressive Disclosure

A naive approach would load all skills into every conversation. This wastes tokens and confuses the agent with irrelevant context. Char uses progressive disclosure instead, loading information only when the agent needs it. At startup, the agent receives a compact index: just the name and description of each skill. This typically runs 50-100 tokens per skill—enough for the agent to know what’s available without consuming significant context. When a user’s question matches a skill’s domain, the agent requests the full content. The detailed instructions—which might run thousands of tokens—load only then. This keeps routine conversations fast and cheap while ensuring complex questions get the depth they need.

The pattern is similar to how a human expert works. You don’t rehearse every procedure you know before each conversation. You have a mental index of your expertise and dive deep when a question requires it.
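A rough sketch of the two-stage pattern, in Python, may help. The Skill class, the example skills, and the load_skill function are illustrative stand-ins, not Char’s actual internals:

```python
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    description: str  # short summary, always included in the index
    body: str         # full instructions, loaded only on demand

skills = [
    Skill("return-policy", "Explains the return and refund policy.", "...full instructions..."),
    Skill("onboarding", "Walks a new customer through account setup.", "...full instructions..."),
]

# Stage 1: at startup, only the compact index (name + description) enters the context.
index = "\n".join(f"- {s.name}: {s.description}" for s in skills)

# Stage 2: when a question matches a skill's domain, the agent requests the full body.
def load_skill(name: str) -> str:
    match = next((s for s in skills if s.name == name), None)
    if match is None:
        raise KeyError(f"unknown skill: {name}")
    return match.body
```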

Skills vs. System Prompts

You might wonder why skills exist when you could just put everything in the system prompt. The distinction becomes clear at scale. A system prompt is a single block of instructions that loads for every conversation. It’s appropriate for universal guidance: your brand voice, safety rules, things the agent should always know. Skills are modular. They load selectively based on context. As your knowledge base grows—ten skills, fifty skills, hundreds of skills—this modularity becomes essential. You couldn’t fit everything in a system prompt even if you wanted to. There’s also a maintenance story. Updating a single skill doesn’t require touching your core configuration. Teams can own their own skills. Changes are isolated and reviewable.
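Continuing the sketch above (and reusing its index and load_skill definitions), the difference is roughly that the system prompt is assembled into every conversation while skill bodies are appended only when they match. The build_context function below is a hypothetical simplification, not Char’s implementation:

```python
SYSTEM_PROMPT = "You are the support agent for Acme. Be concise and friendly."  # always loaded

def build_context(matched_skill_names: list[str]) -> str:
    """Assemble the conversation context: universal guidance plus only the
    skills that matched this conversation (a simplified sketch)."""
    parts = [SYSTEM_PROMPT, "Available skills:", index]
    for name in matched_skill_names:
        parts.append(load_skill(name))  # selective, per-conversation loading
    return "\n\n".join(parts)
```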

The Broader Context

Skills reflect a shift in how we think about AI customization. Traditional approaches required training data, compute resources, and machine learning expertise. You’d collect examples, fine-tune a model, deploy it, and hope it generalized well. Instruction-based customization is different. You tell the agent what to do in natural language. The feedback loop is immediate—change the instructions, see the behavior change. Domain experts who understand the business can contribute directly, without going through a technical translation layer. This doesn’t replace fine-tuning entirely. If you need the model to recognize patterns it wasn’t trained on or generate outputs in a specific style, training still has a role. But for teaching procedures, policies, and domain knowledge, instructions are often simpler and more transparent.

Authorship and Collaboration

Skills can come from multiple sources. Dashboard users create them through the management interface. End users can create them directly from widget conversations, capturing workflows they discover as they work. The agent itself can draft skills based on conversation patterns. This distributed authorship model means your knowledge base grows organically. A support agent notices they keep explaining the same procedure and captures it as a skill. A power user documents a workflow for their colleagues. The barrier to contribution is low because the format is simple. All skills remain under organizational control. You can review, edit, and archive skills from the dashboard regardless of who created them. Nothing goes live without visibility.

Further Reading