Every organization runs on knowledge that isn’t written down. The senior engineer who knows why that API is structured oddly. The support lead who remembers the exception they made for a key customer last quarter. The operations manager who understands which approvals are really required versus which are rubber stamps. This knowledge lives in people’s heads, materializes in Slack threads and hallway conversations, and evaporates when people leave. It’s the difference between what the documentation says and how things actually work.

AI agents bump into this wall constantly. They have access to tools. They can read your APIs and databases. But they don’t know that healthcare customers always get an extra 10% because their procurement cycles are brutal. They don’t know that the “required” security review is actually optional for internal tools. They don’t know any of the accumulated wisdom that makes your organization function.

Char is designed to capture this knowledge over time.

The Evolution: Tools, Skills, Context

The progression from basic AI capabilities to genuine organizational intelligence follows three phases:

Tools give agents capability. Through MCP, agents can interact with your systems—query databases, call APIs, execute actions. But an agent with tools is like a new hire with system access. They can technically do things, but they don’t know how your organization actually operates.

Skills give agents expertise. A skill is procedural knowledge—the “how to” that teaches an agent to use tools effectively. Skills capture the workflows, the edge cases, the decision trees. An agent with skills is like an employee who’s read all the SOPs.

Context gives agents experience. This is the accumulated record of decisions, exceptions, and outcomes. Not just what the procedure says, but what happened last time. What exceptions were granted and why. What worked and what didn’t. An agent with context is like an employee who’s been there for years and remembers everything.
| Phase | What the Agent Has | Analogy |
| --- | --- | --- |
| Tools | Capabilities | New hire with system access |
| Skills | Procedures | Employee who’s read the SOPs |
| Context | Experience | Veteran who remembers everything |
Most AI deployments stop at tools, maybe skills. Char is architected to reach context.

Why Decisions Disappear

Traditional systems of record—your CRM, ERP, ticketing system—capture outcomes. The deal closed at this price. The ticket was escalated to Tier 3. The refund was approved. What they don’t capture is the reasoning. Why that price? Who approved the deviation? What precedent did they cite? What context informed the escalation?

This reasoning exists, briefly, in the moment of decision. It lives in the conversation that preceded the action, in the judgment call someone made, in the memory of similar situations. Then it evaporates. The system records the final state, not the path that led there.

When an AI agent executes a workflow, the same thing happens. The agent makes decisions—which tool to call, what parameters to use, how to handle an edge case. These decisions are informed by context, but that context disappears when the task completes. The next time a similar situation arises, the agent starts from scratch. This is the wall that prevents AI from accumulating genuine organizational intelligence.

How Char Captures Context

Char’s architecture is designed to make decisions durable rather than ephemeral.

The Hub sees everything. Every tool invocation—whether it’s a browser-based WebMCP tool, an internal MCP server, or an external service—flows through the user’s Tool Hub. The Hub doesn’t just route requests; it can observe patterns, log decisions, and build a record of what happened and why.

Skills are living documents. Unlike static documentation, skills can be created and refined through conversation. When a user describes a workflow, the agent can capture it as a skill. When an exception occurs, the skill can be updated to reflect the new precedent. The knowledge base grows through natural interaction.

Execution becomes precedent. When the agent handles a situation, that handling becomes a reference point. “Last time we saw this pattern, here’s what we did.” Over time, the agent doesn’t just follow procedures—it remembers outcomes.
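To make Hub-mediated execution concrete, here is a minimal sketch of the idea: every tool call is routed through one object that also records what was invoked, with what parameters, and what came back. All names here (`ToolHub`, `call_log`, the example tool) are illustrative, not Char’s actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Callable

@dataclass
class ToolHub:
    """Hypothetical hub: routes tool calls and keeps a durable record."""
    tools: dict[str, Callable[..., Any]] = field(default_factory=dict)
    call_log: list[dict] = field(default_factory=list)

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        self.tools[name] = fn

    def invoke(self, name: str, **params: Any) -> Any:
        # Route the request, then log what happened so the decision
        # survives beyond the task that made it.
        result = self.tools[name](**params)
        self.call_log.append({
            "tool": name,
            "params": params,
            "result": result,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return result

hub = ToolHub()
hub.register("lookup_discount", lambda segment: 0.10 if segment == "healthcare" else 0.0)
rate = hub.invoke("lookup_discount", segment="healthcare")
print(rate, len(hub.call_log))  # 0.1 1
```

The point of the single choke point is observational: because nothing reaches a tool except through `invoke`, the log is complete by construction rather than by convention.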

Skills as the Bridge

Skills occupy a unique position in this architecture. They’re explicit enough to be auditable, flexible enough to evolve, and structured enough to be machine-readable. A skill isn’t just instructions—it’s a boundary around a domain of knowledge. When a user asks something that matches a skill’s description, the agent loads that skill’s full context. This progressive disclosure means the agent can have access to vast procedural knowledge without being overwhelmed by irrelevant detail.

The AgentSkills specification defines skills as folders containing a SKILL.md file with metadata and instructions. This simple format has profound implications:

Skills are versionable. Store them in git. Review changes in pull requests. Track who modified what and when. This matters for regulated industries where audit trails are required.

Skills are portable. A skill written for Char works in Claude Code, Cursor, and other compatible tools. You’re not locked into a proprietary format.

Skills are refinable. When the agent encounters an edge case, the skill can be updated. When a better approach is discovered, the instructions can change. Skills improve through use.

Most importantly, skills can be created from experience. The agent observes a workflow, captures it as a skill, and that knowledge becomes available for future use. Tribal knowledge becomes institutional knowledge.
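A skill folder’s SKILL.md typically opens with YAML frontmatter (metadata such as a name and a description used for matching) followed by the instructions themselves. The sketch below is illustrative, with content drawn from this article’s discount example; the exact fields and wording are assumptions, not a normative sample from the spec:

```markdown
---
name: healthcare-discount
description: Apply the standard healthcare-customer discount and handle exceptions.
---

# Healthcare discount

Healthcare customers receive an extra 10% because their procurement
cycles are long. If a larger discount is requested, cite a prior
approved exception or escalate to the account owner.
```

Only the frontmatter needs to sit in the agent’s context until a request matches the description, at which point the full file is loaded—this is the progressive disclosure described above.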

The Decision Trace

A decision trace captures not just what happened, but the context that informed it:
  • What was the user trying to accomplish?
  • What tools were available?
  • What skills were loaded?
  • What precedents were considered?
  • What decision was made and why?
  • What was the outcome?
This is richer than a log. A log tells you that a tool was called with certain parameters. A decision trace tells you why that tool was chosen, what alternatives were considered, and how the situation was understood. On the Enterprise plan, Char captures decision traces for compliance and audit purposes. But the deeper value isn’t compliance—it’s organizational learning. Every traced decision becomes a potential precedent. Every exception becomes a data point for refining procedures.
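The fields listed above could be captured in a simple record type. This is a sketch with illustrative field names, not Char’s actual trace schema:

```python
from dataclasses import dataclass

@dataclass
class DecisionTrace:
    """Hypothetical decision trace; field names are illustrative."""
    goal: str                  # what the user was trying to accomplish
    tools_available: list[str]
    skills_loaded: list[str]
    precedents: list[str]      # prior decisions that were considered
    decision: str
    rationale: str             # why this decision was made
    outcome: str

trace = DecisionTrace(
    goal="Approve a refund above the standard limit",
    tools_available=["refund_api", "crm_lookup"],
    skills_loaded=["refund-escalation"],
    precedents=["Q3 exception granted for a key customer"],
    decision="approve",
    rationale="Matches the cited precedent; customer is strategic",
    outcome="Refund issued; no follow-up escalation",
)
```

A plain log would hold only the tool name and parameters; everything beyond `decision` here is the reasoning a system of record normally discards.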

From Individual to Organizational

The vision extends beyond individual users. When Char is embedded across an organization’s applications, patterns emerge:
  • How do different teams handle similar situations?
  • What exceptions are being granted repeatedly? (Maybe the exception should become the rule.)
  • What workflows are being invented ad-hoc? (Maybe they should become skills.)
  • Where are decisions inconsistent? (Maybe alignment is needed.)
This is the context graph—the accumulated record of how your organization actually operates, as opposed to how the documentation says it should operate. It’s the difference between the org chart and how decisions really get made. Char doesn’t capture this today in its full form. But the architecture—Hub-mediated execution, enterprise audit logging, refinable skills—is designed to make it possible.
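As a sketch of how one such pattern might surface, the snippet below counts how often each documented rule is overridden across accumulated traces and flags exceptions granted repeatedly. The trace shape and threshold are assumptions for illustration:

```python
from collections import Counter

# Hypothetical: each trace records which documented rule was
# overridden, if any (None means the procedure was followed).
traces = [
    {"team": "sales",   "exception": "healthcare-extra-discount"},
    {"team": "sales",   "exception": "healthcare-extra-discount"},
    {"team": "support", "exception": None},
    {"team": "sales",   "exception": "healthcare-extra-discount"},
]

exception_counts = Counter(t["exception"] for t in traces if t["exception"])

# Exceptions granted repeatedly are candidates to become the rule.
candidates = [exc for exc, n in exception_counts.items() if n >= 3]
print(candidates)  # ['healthcare-extra-discount']
```

The same aggregation over teams instead of exceptions would surface the inconsistency question: two teams resolving the same situation differently.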

The Tradeoffs

This vision involves real considerations:

Privacy and consent. Capturing decision traces means recording what people do. Organizations need clear policies about what’s captured, who can access it, and how it’s used. The enterprise controls are designed to give organizations this control.

Signal versus noise. Not every decision is worth tracing. A search query has different value than an exception approval. The system needs to distinguish routine operations from meaningful precedents.

Interpretation. A decision trace records what happened, but interpreting why requires judgment. Two similar situations might warrant different decisions for reasons that aren’t captured in the trace.

Cold start. An organization deploying Char today doesn’t have accumulated context. The value builds over time through use. Early adopters are investing in future capability.

These tradeoffs are inherent to any system that attempts to capture organizational knowledge. They’re not reasons to avoid the attempt—they’re considerations for how to do it responsibly.

Further Reading