Documentation Index
Fetch the complete documentation index at: https://docs.rkat.ai/llms.txt
Use this file to discover all available pages before exploring further.
Global runtime scope flags
All CLI commands accept realm scope flags:

- --realm <id>
- --instance <id>
- --realm-backend <sqlite|jsonl> (creation hint only)

These flags decide which realm config/session state is used.
Default realm behavior
| Command surface | Default when --realm is omitted |
|---|---|
| rkat run, rkat run --resume, rkat session ... | Workspace-derived stable realm (ws-...) |
| rkat mob ... | Workspace-derived stable realm (ws-...) |
| rkat-rpc | New opaque realm (realm-...) |
If you want CLI + RPC to share the same state, pass the same explicit --realm to both.
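A minimal sketch of sharing one realm between the CLI and RPC surfaces. The realm id team-shared is a hypothetical placeholder, and the prompt argument to rkat run is illustrative:

```shell
# Hypothetical realm id; any stable id works as long as both surfaces use it.
rkat run --realm team-shared "summarize the repo"

# The RPC server now reads and writes the same realm state instead of
# creating a new opaque realm-... id on startup.
rkat-rpc --realm team-shared
```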
Environment variables
Required API keys (at least one):
| Variable | Provider |
|---|---|
| ANTHROPIC_API_KEY | Anthropic Claude |
| OPENAI_API_KEY | OpenAI GPT |
| GOOGLE_API_KEY / GEMINI_API_KEY | Google Gemini |
See providers for full key precedence (RKAT_* variants included).
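A quick shell check, before running rkat, that at least one of the documented provider keys is set (the placeholder value is obviously not a real key):

```shell
# Placeholder only; substitute a real key from your provider.
export OPENAI_API_KEY="replace-with-your-key"

# At least one of the documented variables must be non-empty.
if [ -n "${ANTHROPIC_API_KEY:-}${OPENAI_API_KEY:-}${GOOGLE_API_KEY:-}${GEMINI_API_KEY:-}" ]; then
  echo "provider key configured"
fi
```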
Config files
The canonical runtime config for CLI commands is realm-scoped (<realm>/config.toml under the platform data dir).
Compatibility files still exist:
| Scope | Path |
|---|---|
| User | ~/.rkat/config.toml |
| Project | .rkat/config.toml |
These are useful for templates (rkat init) and compatibility workflows, but realm config is the runtime source of truth for CLI/RPC/REST/MCP surfaces.
Common tool gates:
[tools]
builtins_enabled = false
shell_enabled = false
schedule_enabled = true
workgraph_enabled = false
Self-hosted model config
Self-hosted models are defined directly in realm config:
[self_hosted.servers.local]
transport = "openai_compatible"
base_url = "http://127.0.0.1:11434"
api_style = "chat_completions"
bearer_token_env = "LOCAL_LLM_TOKEN"
[self_hosted.models.gemma-4-31b]
server = "local"
remote_model = "gemma4:31b"
display_name = "Gemma 4 31B"
family = "gemma-4"
tier = "supported"
context_window = 256000
max_output_tokens = 8192
vision = true
image_tool_results = true
inline_video = false
supports_temperature = true
supports_thinking = true
supports_reasoning = true
call_timeout_secs = 600
transport is currently openai_compatible only.
api_style chooses the upstream API shape:

- chat_completions is the safest default for Ollama, LM Studio, and vLLM in current Meerkat docs and examples.
- responses should be treated as an advanced, server-specific path that you validate explicitly before depending on it.

For Gemma 4, prefer chat_completions unless you have verified a server-specific responses workflow you want to use.
supports_thinking and supports_reasoning describe the behavior you intend Meerkat to expose through that configured transport. Gemma 4 models themselves are reasoning-capable, but some servers expose those capabilities with provider-specific conventions.
Use bearer_token_env instead of bearer_token whenever possible.
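The bearer_token_env pattern keeps the secret out of config.toml: the realm config names the variable, and the token itself lives only in the environment. A sketch, using the LOCAL_LLM_TOKEN name from the example above (the placeholder value is not a real token):

```shell
# The realm config references bearer_token_env = "LOCAL_LLM_TOKEN";
# the actual token is supplied out-of-band (placeholder shown here).
export LOCAL_LLM_TOKEN="replace-with-your-token"

# The secret never needs to appear inside config.toml itself.
echo "LOCAL_LLM_TOKEN is ${LOCAL_LLM_TOKEN:+set}"
```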
Session storage layout
Realm storage lives under platform data dir:
- macOS:
~/Library/Application Support/meerkat/realms/<realm>/
- Linux:
~/.local/share/meerkat/realms/<realm>/
- Windows:
%APPDATA%\meerkat\realms\<realm>\
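On Linux, the realm directory can be resolved in shell like so. This is a sketch: ws-demo is a hypothetical realm id, and honoring an XDG_DATA_HOME override is an assumption about the platform data dir convention, not something the docs above state:

```shell
REALM="ws-demo"  # hypothetical realm id
# ~/.local/share is the documented Linux location; the XDG_DATA_HOME
# override is an assumption.
REALM_DIR="${XDG_DATA_HOME:-$HOME/.local/share}/meerkat/realms/$REALM"
echo "$REALM_DIR"
```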
Important files:

- realm_manifest.json (backend pinning)
- config.toml (realm config)
- sessions.sqlite3 (when backend is sqlite)
- sessions_jsonl/ (when backend is jsonl)
- mob_registry.json (CLI mob registry)
- mob_registry.lock (CLI mob registry lock)
--realm-backend is a creation hint. After first realm creation, backend selection is pinned by realm_manifest.json.
This applies to both session storage and rkat mob command behavior in that realm.
When SQLite support is compiled in, new persistent realms default to sqlite.
MCP configuration
MCP servers are configured separately from realm runtime state:
| Scope | Path |
|---|---|
| User | ~/.rkat/mcp.toml |
| Project | .rkat/mcp.toml |
Project servers override user servers with the same name.
Exit codes
| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | Internal error |
| 2 | Budget exhausted |
Richer session, provider, and runtime failures are mostly reported through structured error payloads and stderr text rather than through a large exit-code taxonomy. In particular, conditions such as session persistence or compaction being disabled are informational at the CLI transport layer and do not map to dedicated non-zero exit codes.
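A wrapper script can branch on the documented codes; a sketch in which the rkat invocation and its arguments are illustrative:

```shell
# Hypothetical realm id and prompt; only the exit-code handling below
# follows the documented table.
rkat run --realm ci-check "run the test suite"
case $? in
  0) echo "success" ;;
  1) echo "internal error (see stderr for the structured payload)" ;;
  2) echo "budget exhausted" ;;
esac
```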
See also