# Custom Models Configuration

xopc supports custom model providers via `~/.xopc/models.json`.

## Quick Start

Create `~/.xopc/models.json`:
```json
{
  "providers": {
    "ollama": {
      "baseUrl": "http://localhost:11434/v1",
      "api": "openai-completions",
      "apiKey": "ollama",
      "models": [
        { "id": "llama3.1:8b" },
        { "id": "qwen2.5-coder:7b" }
      ]
    }
  }
}
```

Note: The `apiKey` field is required, but Ollama ignores it, so any value works.
## Configuration

### File Location

`~/.xopc/models.json` (or set the `XOPC_MODELS_JSON` environment variable)

### Minimal Example
```json
{
  "providers": {
    "ollama": {
      "baseUrl": "http://localhost:11434/v1",
      "api": "openai-completions",
      "apiKey": "ollama",
      "models": [
        { "id": "llama3.1:8b" }
      ]
    }
  }
}
```

### Full Example
```json
{
  "providers": {
    "ollama": {
      "baseUrl": "http://localhost:11434/v1",
      "api": "openai-completions",
      "apiKey": "ollama",
      "models": [
        {
          "id": "llama3.1:8b",
          "name": "Llama 3.1 8B (Local)",
          "reasoning": false,
          "input": ["text"],
          "contextWindow": 128000,
          "maxTokens": 32000,
          "cost": {
            "input": 0,
            "output": 0,
            "cacheRead": 0,
            "cacheWrite": 0
          }
        }
      ]
    },
    "openrouter": {
      "baseUrl": "https://openrouter.ai/api/v1",
      "apiKey": "OPENROUTER_API_KEY",
      "api": "openai-completions",
      "models": [
        {
          "id": "anthropic/claude-3.5-sonnet",
          "name": "Claude 3.5 Sonnet (OR)",
          "compat": {
            "openRouterRouting": {
              "only": ["anthropic"]
            }
          }
        }
      ]
    }
  }
}
```

## Supported APIs
| API | Description |
|---|---|
| `openai-completions` | OpenAI Chat Completions (most compatible) |
| `openai-responses` | OpenAI Responses API |
| `anthropic-messages` | Anthropic Messages API |
| `google-generative-ai` | Google Generative AI |
| `azure-openai-responses` | Azure OpenAI |
| `bedrock-converse-stream` | AWS Bedrock |
| `openai-codex-responses` | OpenAI Codex |
| `google-gemini-cli` | Google Gemini CLI |
| `google-vertex` | Google Vertex AI |
## Provider Configuration

| Field | Description |
|---|---|
| `baseUrl` | API endpoint URL |
| `api` | API type (see above) |
| `apiKey` | API key (see resolution below) |
| `headers` | Custom headers |
| `authHeader` | Add an `Authorization: Bearer <apiKey>` header |
| `models` | Array of model configurations |
| `modelOverrides` | Per-model overrides for built-in models |
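To see how these fields combine, here is a hypothetical provider entry routed through an internal proxy. The provider id, base URL, header name, and the boolean form of `authHeader` are invented for illustration; adapt them to your setup:

```json
{
  "providers": {
    "my-proxy": {
      "baseUrl": "https://llm-proxy.example.internal/v1",
      "api": "openai-completions",
      "apiKey": "LLM_PROXY_KEY",
      "authHeader": true,
      "headers": { "X-Team": "platform" },
      "models": [
        { "id": "gpt-4o-mini" }
      ]
    }
  }
}
```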
## Model Configuration

| Field | Required | Default | Description |
|---|---|---|---|
| `id` | Yes | - | Model identifier (passed to the API) |
| `name` | No | `id` | Display name |
| `api` | No | provider's `api` | Override the provider's API |
| `reasoning` | No | `false` | Supports extended thinking |
| `input` | No | `["text"]` | Input types: `["text"]` or `["text", "image"]` |
| `contextWindow` | No | `128000` | Context window size |
| `maxTokens` | No | `16384` | Maximum output tokens |
| `cost` | No | all zeros | `{input, output, cacheRead, cacheWrite}` per million tokens |
| `headers` | No | - | Custom headers for this model |
| `compat` | No | - | OpenAI compatibility settings |
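The defaults in this table can be sketched as a simple merge. This is illustrative only: the helper name and merge order are assumptions for clarity, not xopc's actual loader code.

```python
# Sketch: applying the documented per-field defaults to a model entry.
MODEL_DEFAULTS = {
    "reasoning": False,
    "input": ["text"],
    "contextWindow": 128000,
    "maxTokens": 16384,
    "cost": {"input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0},
}

def resolve_model(entry: dict, provider_api: str) -> dict:
    if "id" not in entry:
        raise ValueError("model entry requires an 'id'")
    resolved = {**MODEL_DEFAULTS, **entry}   # explicit fields win over defaults
    resolved.setdefault("name", entry["id"])  # display name falls back to id
    resolved.setdefault("api", provider_api)  # model may override provider's API
    return resolved

model = resolve_model({"id": "llama3.1:8b"}, "openai-completions")
print(model["name"], model["maxTokens"])  # llama3.1:8b 16384
```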
## Overriding Built-in Providers

### Base URL Override

Route a built-in provider through a proxy:

```json
{
  "providers": {
    "anthropic": {
      "baseUrl": "https://my-proxy.example.com/v1"
    }
  }
}
```

### Model Overrides
Customize specific built-in models:
```json
{
  "providers": {
    "openrouter": {
      "modelOverrides": {
        "anthropic/claude-sonnet-4": {
          "name": "Claude Sonnet 4 (Bedrock Route)",
          "compat": {
            "openRouterRouting": {
              "only": ["amazon-bedrock"]
            }
          }
        }
      }
    }
  }
}
```

## API Key Resolution
The `apiKey` field supports three formats:

### 1. Shell Command

Prefix with `!` to execute a shell command:

```json
{
  "apiKey": "!op read 'op://vault/item/credential'"
}
```

### 2. Environment Variable

Use the name of an environment variable (all uppercase):

```json
{
  "apiKey": "ANTHROPIC_API_KEY"
}
```

### 3. Literal Value

Use the value directly:

```json
{
  "apiKey": "sk-..."
}
```
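The resolution order above can be sketched as follows. This is an illustration, not xopc's actual resolver; details such as error handling and what exactly counts as an environment-variable name are assumptions.

```python
import os
import subprocess

def resolve_api_key(value: str) -> str:
    if value.startswith("!"):
        # "!cmd": run the shell command and use its trimmed stdout
        result = subprocess.run(value[1:], shell=True, check=True,
                                capture_output=True, text=True)
        return result.stdout.strip()
    if value.isupper() and value in os.environ:
        # all-uppercase name matching a set env var: use that variable's value
        return os.environ[value]
    return value  # otherwise treat as a literal key

os.environ["EXAMPLE_PROVIDER_KEY"] = "sk-from-env"
print(resolve_api_key("EXAMPLE_PROVIDER_KEY"))  # sk-from-env
print(resolve_api_key("sk-literal-123"))        # sk-literal-123
```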
## Web UI Configuration
Access the Models configuration in the web UI:
- Open the web UI (`http://localhost:18790`)
- Go to Settings → Models
- Use the visual editor to configure providers and models
### Provider Management

#### Adding a Provider

Click "Add Provider" to open the provider configuration dialog:

Quick Setup with Presets:
- Ollama - Local LLMs (`http://localhost:11434/v1`)
- LM Studio - LM Studio server (`http://localhost:1234/v1`)
- OpenRouter - Multi-provider API (`https://openrouter.ai/api/v1`)
- Vercel AI Gateway - Vercel Gateway (`https://ai-gateway.vercel.sh/v1`)
- vLLM - vLLM server (`http://localhost:8000/v1`)
- Custom - Manual configuration
Configuration Fields:
- Provider ID - Unique identifier (lowercase, alphanumeric, hyphens)
- API Type - The API protocol
- Base URL - The API endpoint URL
- API Key - Supports literal values, environment variables, or shell commands
Advanced Options:

- Add Authorization header - Adds an `Authorization: Bearer {apiKey}` header
- Custom Headers - Custom headers in JSON format
### Model Management

#### Adding/Editing Models

Click "Add Model", or the edit icon on an existing model:
Basic Tab:
- Model ID - Unique identifier
- Display Name - Human-readable name
- Input Types - Text only or Text + Vision
- Supports Reasoning - Enable for extended thinking
- Context Window - Maximum context size (default: 128000)
- Max Output Tokens - Maximum response tokens (default: 16384)
Advanced Tab:
- Cost Configuration - Per-million-token pricing
- Custom Headers - Model-specific headers
Compatibility Tab:
- OpenAI Completions Settings
- Routing Configuration (for OpenRouter/Vercel)
### API Key Testing
Each provider shows an API key type badge. Click "Test" to:
- Verify the key resolves correctly
- See the resolved value type
- Check for errors
### Statistics Display
The toolbar shows real-time statistics:
- Providers count - Number of custom providers
- Models count - Total models across all providers
### Actions
- Validate - Check configuration for errors
- Save - Save changes to models.json
- Reload - Hot reload without restart
- Show/Hide JSON - View raw JSON configuration
### Hot Reload
Changes are automatically reloaded when you save in the UI. No restart required.
## Examples

### Ollama (Local)
```json
{
  "providers": {
    "ollama": {
      "baseUrl": "http://localhost:11434/v1",
      "api": "openai-completions",
      "apiKey": "ollama",
      "models": [
        { "id": "llama3.1:8b" },
        { "id": "qwen2.5-coder:7b" }
      ]
    }
  }
}
```

### OpenRouter
```json
{
  "providers": {
    "openrouter": {
      "baseUrl": "https://openrouter.ai/api/v1",
      "apiKey": "OPENROUTER_API_KEY",
      "api": "openai-completions",
      "models": [
        {
          "id": "anthropic/claude-3.5-sonnet",
          "compat": {
            "openRouterRouting": {
              "order": ["anthropic", "openai"]
            }
          }
        }
      ]
    }
  }
}
```

### Vercel AI Gateway
```json
{
  "providers": {
    "vercel-ai-gateway": {
      "baseUrl": "https://ai-gateway.vercel.sh/v1",
      "apiKey": "AI_GATEWAY_API_KEY",
      "api": "openai-completions",
      "models": [
        {
          "id": "moonshotai/kimi-k2.5",
          "name": "Kimi K2.5 (Fireworks via Vercel)",
          "compat": {
            "vercelGatewayRouting": {
              "only": ["fireworks", "novita"]
            }
          }
        }
      ]
    }
  }
}
```

### LM Studio
```json
{
  "providers": {
    "lmstudio": {
      "baseUrl": "http://localhost:1234/v1",
      "api": "openai-completions",
      "apiKey": "lmstudio",
      "models": [
        { "id": "local-model" }
      ]
    }
  }
}
```

## API Endpoints
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/models-json` | Get the models.json configuration |
| POST | `/api/models-json/validate` | Validate a configuration |
| PATCH | `/api/models-json` | Save the configuration |
| POST | `/api/models-json/reload` | Hot reload |
| POST | `/api/models-json/test-api-key` | Test API key resolution |
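These endpoints can also be driven from a script. A minimal sketch using only the standard library; the gateway address is an assumption (the web UI default shown earlier), so adjust host and port to your setup:

```python
import json
import urllib.request

BASE = "http://localhost:18790"  # assumed gateway address

def models_json_request(method: str, path: str, payload=None) -> urllib.request.Request:
    """Build a request for one of the models.json endpoints."""
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(f"{BASE}{path}", data=data, method=method,
                                  headers={"Content-Type": "application/json"})

# e.g. hot-reload after editing ~/.xopc/models.json by hand:
req = models_json_request("POST", "/api/models-json/reload")
# with urllib.request.urlopen(req) as resp:   # requires a running gateway
#     print(json.loads(resp.read()))
```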
## Troubleshooting

### Models Not Showing Up

- Check the browser console for errors
- Verify `models.json` syntax is valid JSON
- Check the Settings → Models page for validation errors
- Ensure API keys are correctly resolved (use the Test button)
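For the JSON-syntax check, Python's standard library pinpoints the first error by line and column. Shown here on an inline string for illustration; in practice, point it at the file's contents:

```python
import json

raw = '{"providers": {"ollama": {"baseUrl": "http://localhost:11434/v1"}}}'
try:
    config = json.loads(raw)
    print("valid JSON with", len(config["providers"]), "provider(s)")
except json.JSONDecodeError as err:
    # lineno/colno point at the first malformed token
    print(f"invalid JSON at line {err.lineno}, column {err.colno}: {err.msg}")
```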
### API Key Not Working
- Use "Test" button in UI to verify resolution
- Check environment variables are set
- For shell commands, ensure they work manually
- Check logs for command execution errors
### Changes Not Taking Effect

- Click "Reload" in the UI to force a refresh
- Check that the `models.json` file was saved correctly
- Restart the gateway if needed
## Separation from config.json

Note: `models.json` is separate from `config.json`:

- `config.json` contains API keys for built-in providers
- `models.json` contains custom provider configurations

This separation allows:

- Different file permissions for sensitive API keys
- Easier management of custom model configurations
- Hot reload of models without affecting other settings