# Custom Models Configuration

xopcbot supports custom model providers via `~/.xopcbot/models.json`.

## Table of Contents
- Quick Start
- Configuration
- Supported APIs
- Provider Configuration
- Model Configuration
- Overriding Built-in Providers
- API Key Resolution
- Frontend UI
- Examples
- API Endpoints
- Troubleshooting
## Quick Start

Create `~/.xopcbot/models.json`:

```json
{
  "providers": {
    "ollama": {
      "baseUrl": "http://localhost:11434/v1",
      "api": "openai-completions",
      "apiKey": "ollama",
      "models": [
        { "id": "llama3.1:8b" },
        { "id": "qwen2.5-coder:7b" }
      ]
    }
  }
}
```

The `apiKey` field is required, but Ollama ignores it, so any value works.
## Configuration

### File Location

`~/.xopcbot/models.json` (or set the `XOPCBOT_MODELS_JSON` environment variable)

### Minimal Example

```json
{
  "providers": {
    "ollama": {
      "baseUrl": "http://localhost:11434/v1",
      "api": "openai-completions",
      "apiKey": "ollama",
      "models": [
        { "id": "llama3.1:8b" }
      ]
    }
  }
}
```

### Full Example
```json
{
  "providers": {
    "ollama": {
      "baseUrl": "http://localhost:11434/v1",
      "api": "openai-completions",
      "apiKey": "ollama",
      "models": [
        {
          "id": "llama3.1:8b",
          "name": "Llama 3.1 8B (Local)",
          "reasoning": false,
          "input": ["text"],
          "contextWindow": 128000,
          "maxTokens": 32000,
          "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
        }
      ]
    },
    "openrouter": {
      "baseUrl": "https://openrouter.ai/api/v1",
      "apiKey": "OPENROUTER_API_KEY",
      "api": "openai-completions",
      "models": [
        {
          "id": "anthropic/claude-3.5-sonnet",
          "name": "Claude 3.5 Sonnet (OR)",
          "compat": {
            "openRouterRouting": {
              "only": ["anthropic"]
            }
          }
        }
      ]
    }
  }
}
```

## Supported APIs
| API | Description |
|---|---|
| `openai-completions` | OpenAI Chat Completions (most compatible) |
| `openai-responses` | OpenAI Responses API |
| `anthropic-messages` | Anthropic Messages API |
| `google-generative-ai` | Google Generative AI |
| `azure-openai-responses` | Azure OpenAI |
| `bedrock-converse-stream` | AWS Bedrock |
| `openai-codex-responses` | OpenAI Codex |
| `google-gemini-cli` | Google Gemini CLI |
| `google-vertex` | Google Vertex AI |
## Provider Configuration
| Field | Description |
|---|---|
| `baseUrl` | API endpoint URL |
| `api` | API type (see above) |
| `apiKey` | API key (see resolution below) |
| `headers` | Custom headers |
| `authHeader` | Set to `true` to add `Authorization: Bearer <apiKey>` |
| `models` | Array of model configurations |
| `modelOverrides` | Per-model overrides for built-in models |
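As an illustration of `headers` and `authHeader` used together, a hypothetical provider entry (the provider ID, header name, and environment variable below are invented for the example):

```json
{
  "providers": {
    "my-gateway": {
      "baseUrl": "https://gateway.example.com/v1",
      "api": "openai-completions",
      "apiKey": "MY_GATEWAY_API_KEY",
      "authHeader": true,
      "headers": { "X-Tenant": "team-a" },
      "models": [
        { "id": "some-model" }
      ]
    }
  }
}
```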
## Model Configuration
| Field | Required | Default | Description |
|---|---|---|---|
| `id` | Yes | - | Model identifier (passed to the API) |
| `name` | No | `id` | Display name |
| `api` | No | provider's `api` | Override the provider's API type |
| `reasoning` | No | `false` | Supports extended thinking |
| `input` | No | `["text"]` | Input types: `["text"]` or `["text", "image"]` |
| `contextWindow` | No | `128000` | Context window size in tokens |
| `maxTokens` | No | `16384` | Maximum output tokens |
| `cost` | No | all zeros | `{input, output, cacheRead, cacheWrite}` per million tokens |
| `headers` | No | - | Custom headers for this model |
| `compat` | No | - | OpenAI compatibility settings |
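Since `cost` values are prices per million tokens, the spend for a request is each token count times its price, divided by one million. A quick sketch of that arithmetic (general pricing math for illustration, not xopcbot's internal code):

```python
def request_cost(cost: dict, usage: dict) -> float:
    """Dollar cost of one request, given per-million-token prices.

    `cost` uses the models.json keys (input, output, cacheRead, cacheWrite);
    `usage` gives token counts under the same keys.
    """
    return sum(
        cost.get(kind, 0) * usage.get(kind, 0) / 1_000_000
        for kind in ("input", "output", "cacheRead", "cacheWrite")
    )

# Example: $3/M input, $15/M output; 2,000 input + 1,000 output tokens
price = {"input": 3.0, "output": 15.0, "cacheRead": 0.3, "cacheWrite": 3.75}
total = request_cost(price, {"input": 2_000, "output": 1_000})
# 2_000 * 3 / 1e6 + 1_000 * 15 / 1e6 ≈ 0.021
```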
## Overriding Built-in Providers

### Base URL Override

Route a built-in provider through a proxy:

```json
{
  "providers": {
    "anthropic": {
      "baseUrl": "https://my-proxy.example.com/v1"
    }
  }
}
```

### Model Overrides
Customize specific built-in models:

```json
{
  "providers": {
    "openrouter": {
      "modelOverrides": {
        "anthropic/claude-sonnet-4": {
          "name": "Claude Sonnet 4 (Bedrock Route)",
          "compat": {
            "openRouterRouting": {
              "only": ["amazon-bedrock"]
            }
          }
        }
      }
    }
  }
}
```

## API Key Resolution

The `apiKey` field supports three formats:
### 1. Shell Command

Prefix the value with `!` to execute a shell command:

```json
{
  "apiKey": "!op read 'op://vault/item/credential'"
}
```

### 2. Environment Variable
Use the name of an environment variable (all uppercase):

```json
{
  "apiKey": "ANTHROPIC_API_KEY"
}
```

### 3. Literal Value
Use the value directly:

```json
{
  "apiKey": "sk-..."
}
```

## Frontend UI
Access the Models configuration in the web UI:
1. Open the web UI (`http://localhost:18790` by default)
2. Go to Settings → Models
3. Use the visual editor to configure providers and models
### Provider Management

#### Adding a Provider
Click "Add Provider" to open the provider configuration dialog:
**Quick Setup with Presets:**

- **Ollama** - Local LLMs via Ollama (`http://localhost:11434/v1`)
- **LM Studio** - LM Studio local server (`http://localhost:1234/v1`)
- **OpenRouter** - Multi-provider API (`https://openrouter.ai/api/v1`)
- **Vercel AI Gateway** - Vercel AI Gateway (`https://ai-gateway.vercel.sh/v1`)
- **vLLM** - vLLM inference server (`http://localhost:8000/v1`)
- **Custom** - Manual configuration
Selecting a preset automatically fills in the base URL and API type.
**Configuration Fields:**

- **Provider ID** - Unique identifier (lowercase, alphanumeric, hyphens, underscores)
- **API Type** - The API protocol (OpenAI Completions, Anthropic Messages, etc.)
- **Base URL** - The API endpoint URL (should end with `/v1` for OpenAI-compatible APIs)
- **API Key** - Supports literal values, environment variables (uppercase), or shell commands (`!command`)
**Advanced Options:**

- **Add Authorization header** - Automatically adds `Authorization: Bearer {apiKey}`
- **Custom Headers** - Custom headers in JSON format
### Model Management

#### Adding/Editing Models
Click "Add Model" or the edit icon on an existing model to open the model editor dialog:
**Basic Tab:**

- **Model ID** - Unique identifier (e.g., `llama3.1:8b`, `gpt-4o`)
- **Display Name** - Human-readable name
- **Input Types** - Text only or Text + Vision
- **Supports Reasoning** - Enable for models with extended thinking capability
- **Context Window** - Maximum context size in tokens (default: 128000)
- **Max Output Tokens** - Maximum response tokens (default: 16384)
**Advanced Tab:**

- **Cost Configuration** - Per-million-token pricing:
  - Input / Output / Cache Read / Cache Write
- **Custom Headers** - Model-specific headers in JSON format
**Compatibility Tab:**

- **OpenAI Completions Settings:**
  - Supports Store
  - Supports Developer Role
  - Supports Usage in Streaming
  - Max Tokens Field (auto-detect / `max_completion_tokens` / `max_tokens`)
- **Routing Configuration** (for OpenRouter/Vercel):
  - **Provider Order** - Priority list (e.g., `anthropic, openai`)
  - **Allowed Providers** - Whitelist (e.g., `amazon-bedrock`)
### API Key Testing
Each provider shows an API key type badge (literal/env/shell). Click "Test" to:
- Verify the key resolves correctly
- See the resolved value type
- Check for errors (e.g., missing environment variable)
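Conceptually, the three `apiKey` formats from the API Key Resolution section resolve along these lines (a simplified sketch; the precedence between an uppercase name and a literal is an assumption here, not xopcbot's actual code):

```python
import os
import subprocess

def resolve_api_key(value: str) -> str:
    """Resolve an apiKey value: shell command, env var name, or literal."""
    if value.startswith("!"):
        # Shell command: run it and use the trimmed stdout
        result = subprocess.run(
            value[1:], shell=True, capture_output=True, text=True, check=True
        )
        return result.stdout.strip()
    if value.isupper() and value in os.environ:
        # All-uppercase name that matches a set environment variable
        return os.environ[value]
    # Otherwise treat the value as a literal key
    return value
```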
### Statistics Display
The toolbar shows real-time statistics:
- **Providers count** - Number of custom providers (highlighted in blue when > 0)
- **Models count** - Total models across all providers
### Actions

- **Validate** - Check the configuration for errors without saving
- **Save** - Save changes to `models.json`
- **Reload** - Hot reload the configuration without a restart
- **Show/Hide JSON** - View the raw JSON configuration
### Hot Reload
Changes are automatically reloaded when you save in the UI. No restart required.
## Examples

### Ollama (Local)

```json
{
  "providers": {
    "ollama": {
      "baseUrl": "http://localhost:11434/v1",
      "api": "openai-completions",
      "apiKey": "ollama",
      "models": [
        { "id": "llama3.1:8b" },
        { "id": "qwen2.5-coder:7b" }
      ]
    }
  }
}
```

### OpenRouter
```json
{
  "providers": {
    "openrouter": {
      "baseUrl": "https://openrouter.ai/api/v1",
      "apiKey": "OPENROUTER_API_KEY",
      "api": "openai-completions",
      "models": [
        {
          "id": "anthropic/claude-3.5-sonnet",
          "compat": {
            "openRouterRouting": {
              "order": ["anthropic", "openai"]
            }
          }
        }
      ]
    }
  }
}
```

### Vercel AI Gateway
```json
{
  "providers": {
    "vercel-ai-gateway": {
      "baseUrl": "https://ai-gateway.vercel.sh/v1",
      "apiKey": "AI_GATEWAY_API_KEY",
      "api": "openai-completions",
      "models": [
        {
          "id": "moonshotai/kimi-k2.5",
          "name": "Kimi K2.5 (Fireworks via Vercel)",
          "compat": {
            "vercelGatewayRouting": {
              "only": ["fireworks", "novita"]
            }
          }
        }
      ]
    }
  }
}
```

### LM Studio
```json
{
  "providers": {
    "lmstudio": {
      "baseUrl": "http://localhost:1234/v1",
      "api": "openai-completions",
      "apiKey": "lmstudio",
      "models": [
        { "id": "local-model" }
      ]
    }
  }
}
```

## API Endpoints
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/models-json` | Get the `models.json` configuration |
| POST | `/api/models-json/validate` | Validate a `models.json` configuration |
| PATCH | `/api/models-json` | Save `models.json` |
| POST | `/api/models-json/reload` | Hot reload the configuration |
| POST | `/api/models-json/test-api-key` | Test API key resolution |
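For scripting against these endpoints, plain standard-library requests suffice. A sketch that assumes the default gateway address and no extra authentication (adjust both if your setup differs):

```python
import json
import urllib.request

BASE = "http://localhost:18790"  # assumed default gateway address

def models_json_request(method: str, path: str, body=None) -> urllib.request.Request:
    """Build a request for one of the models.json endpoints."""
    data = json.dumps(body).encode() if body is not None else None
    return urllib.request.Request(
        f"{BASE}{path}",
        data=data,
        method=method,
        headers={"Content-Type": "application/json"},
    )

# e.g. trigger a hot reload (requires a running gateway):
# urllib.request.urlopen(models_json_request("POST", "/api/models-json/reload"))
```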
## Troubleshooting

### Models not showing up

- Check the browser console for errors
- Verify `models.json` is valid JSON
- Check the Settings → Models page for validation errors
- Ensure API keys resolve correctly (use the Test button)
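To check the syntax from the command line, a small script along these lines works (an illustrative check, not the validator the gateway itself runs):

```python
import json
from pathlib import Path

def check_models_json(text: str) -> str:
    """Return 'ok' or a short description of the first problem found."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError as e:
        return f"invalid JSON: {e}"
    if not isinstance(data.get("providers"), dict):
        return 'missing top-level "providers" object'
    return "ok"

# e.g.:
# print(check_models_json((Path.home() / ".xopcbot" / "models.json").read_text()))
```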
### API key not working
- Use the "Test" button in the UI to verify resolution
- Check environment variables are set
- For shell commands, ensure they work when run manually
- Check logs for command execution errors
### Changes not taking effect

- Click "Reload" in the UI to force a refresh
- Check that the `models.json` file was saved correctly
- Restart the gateway if needed
## Separation from config.json

Note: `models.json` is separate from `config.json`:

- `config.json` contains API keys for built-in providers (simple string format)
- `models.json` contains custom provider configurations with models
This separation allows:
- Different file permissions for sensitive API keys
- Easier management of custom model configurations
- Hot reload of models without affecting other settings