Meerkat is provider-agnostic at the session/config/runtime model level. You can switch providers by changing the model name, and a configured self-hosted alias such as gemma-4-31b behaves like any other model ID in the runtime. Tool visibility and multimodal behavior still depend on model capabilities, so provider/model differences can affect the effective tool surface.
This page is the concept layer for provider abstraction. Use Auth and Self-hosting models for setup workflows, and use reference pages for exact capability and contract details.
Provider setup
- Anthropic
- OpenAI
- Gemini
- Self-hosted
| Model | Context | Max output | Best for |
|---|---|---|---|
| claude-opus-4-7 | 1M | 128K | Default Anthropic recommendation |
| claude-opus-4-6 | 1M | 128K | Supported Opus fallback |
| claude-sonnet-4-6 | 1M | 64K | Balanced performance and cost |
| claude-sonnet-4-5 | 200K | 64K | Legacy supported Sonnet |
config.toml (active realm)
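A minimal sketch of what the active model and a self-hosted alias might look like in config.toml. The key names here are illustrative assumptions, not Meerkat's documented schema; only bearer_token_env and server_id appear elsewhere on this page:

```toml
# Illustrative sketch only — key names are assumptions, not the documented schema.
model = "claude-opus-4-7"

[[self_hosted.servers]]
server_id = "local-ollama"
base_url = "http://localhost:11434/v1"
bearer_token_env = "OLLAMA_API_KEY"   # per-server auth instead of provider env vars

[self_hosted.aliases]
gemma-4-31b = "local-ollama"
```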
Environment variables
| Variable | Fallback | Provider |
|---|---|---|
| RKAT_ANTHROPIC_API_KEY | ANTHROPIC_API_KEY | Anthropic Claude |
| RKAT_OPENAI_API_KEY | OPENAI_API_KEY | OpenAI GPT |
| RKAT_GEMINI_API_KEY | GEMINI_API_KEY, GOOGLE_API_KEY | Google Gemini |
The RKAT_* variants take precedence over the provider-native names, so you can run Meerkat with dedicated keys separate from other tools. Self-hosted servers authenticate with bearer_token_env or bearer_token on each server definition instead of the shared provider env vars above.
Image generation providers
The generate_image builtin uses provider-specific image profiles behind a single Meerkat request shape. The active session model does not have to be an image model; image operations can route to a provider default or a forced image target while preserving the original session identity.
| Provider | Default image target | Notes |
|---|---|---|
| OpenAI | gpt-image-2 | Uses the hosted Responses image tool by default. Other OpenAI-owned gpt-image* or dall-e* targets use the Images API path. |
| Gemini | gemini-3.1-flash-image-preview | Also accepts provider alias google. Gemini image targets run through an internal scoped image-model turn. |
provider_params are provider-specific and do not replace Meerkat’s universal image fields. Use the top-level size, quality, format, and intent fields; the OpenAI adapter lowers format to the provider-side output_format. For the current gpt-image-2 default, public callers should only need background, output_compression, moderation, and the hosted-tool-only action override:

- use background: "auto" or "opaque" (not "transparent")
- use output_compression only with format: "jpeg" or "webp"
- usually omit action
- omit input_fidelity, because Meerkat 0.6.5 rejects unknown OpenAI image provider params

Gemini accepts aspect_ratio and image_size. See Image generation for the exact request shape and troubleshooting.
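As a sketch, a request combining the universal fields with OpenAI provider_params might look like the following. The prompt field name and the overall envelope are assumptions here; see Image generation for the real contract:

```json
{
  "prompt": "a meerkat standing watch at sunrise",
  "size": "1024x1024",
  "quality": "high",
  "format": "webp",
  "provider_params": {
    "background": "opaque",
    "output_compression": 80
  }
}
```

Note that output_compression is paired with format: "webp", matching the constraint above.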
SDK feature flags
When using Meerkat as a Rust library, enable only the providers you need:

| Feature | Description | Default |
|---|---|---|
| anthropic | Anthropic Claude support | Yes |
| openai | OpenAI GPT support | Yes |
| gemini | Google Gemini support | Yes |
| all-providers | All LLM providers (convenience alias) | No |
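For example, an application that only talks to OpenAI could turn off the default provider features. This is a sketch assuming the crate is published as meerkat on the 0.6.x line mentioned above:

```toml
[dependencies]
meerkat = { version = "0.6.5", default-features = false, features = ["openai"] }
```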
Provider parameters
Provider-specific options can be passed via the --param CLI flag or provider_params in the SDK:
Provider-native web search is on by default for catalog models that support it.
Disable it in config with the matching provider_tools.<provider> search toggle,
or for a single CLI run with rkat run --no-web-search "...".
- Anthropic
- OpenAI
- Gemini
| Parameter | Description |
|---|---|
| thinking_budget | Token budget for extended thinking (integer) |
| top_k | Top-k sampling parameter (integer) |
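Putting the table together, an SDK request body might carry both parameters as shown below. Only the parameter names come from the table; the surrounding shape is an assumption:

```json
{
  "model": "claude-opus-4-7",
  "provider_params": {
    "thinking_budget": 8192,
    "top_k": 40
  }
}
```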
Model catalog
Meerkat ships a curated built-in model catalog in meerkat_core::model_profile and merges it with any configured self-hosted aliases into one effective runtime registry used for capability detection, provider resolution, and catalog responses. The compatibility meerkat-models crate re-exports that catalog surface for consumers that still depend on the older crate boundary.
Query the catalog programmatically from any surface:
- CLI: rkat models
- RPC: models/catalog
- REST: GET /models/catalog
- MCP: meerkat_models_catalog
Self-hosted aliases appear under the self_hosted provider group and include their backing server_id.
For Gemma 4 specifically, prefer chat_completions as the default OpenAI-compatible interface. It is the clearest common path for tool calling across Ollama, LM Studio, and vLLM, while reasoning-trace semantics still vary by server.
Auto-detection
The provider is resolved from the built-in model catalog and any configured self-hosted aliases:

- claude-* models use Anthropic
- gpt-* and chatgpt-* models use OpenAI
- gemini-* models use Gemini
A configured self-hosted alias such as gemma-4-31b works without --provider.
You can still override this with --provider on the CLI or provider in API requests.
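The resolution order above can be sketched as a simple prefix check. This is a hypothetical illustration, not the actual catalog lookup, which also consults per-model catalog entries:

```python
def resolve_provider(model: str, self_hosted_aliases: set[str]) -> str:
    """Sketch of the documented order: configured aliases first, then name prefixes."""
    if model in self_hosted_aliases:
        return "self_hosted"
    for prefix, provider in (
        ("claude-", "anthropic"),
        ("gpt-", "openai"),
        ("chatgpt-", "openai"),
        ("gemini-", "gemini"),
    ):
        if model.startswith(prefix):
            return provider
    raise ValueError(f"no provider mapping for model {model!r}")
```

An explicit --provider flag (or provider field in API requests) would simply bypass this lookup.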
