Custom Providers

Alma supports any OpenAI-compatible API, allowing you to use local models, proxies, or alternative services.
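
"OpenAI-compatible" means the service accepts the same HTTP request shape as OpenAI's chat completions API. A minimal sketch of such a request (the base URL, key, and model ID below are placeholders; substitute your own):

    import requests

    BASE_URL = "http://localhost:11434/v1"  # e.g. a local Ollama instance
    API_KEY = "sk-..."                      # placeholder; omit the header if no key is needed

    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "llama3",              # must match a model ID the service knows
            "messages": [{"role": "user", "content": "Hello!"}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])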

Setup

  1. In Alma, go to Settings → Providers → Add Provider
  2. Select Custom as the provider type
  3. Configure your settings:
    • Name: A display name for this provider
    • Base URL: Your API endpoint
    • API Key: Authentication key (if required)
  4. Click Save
  5. Manually add your model IDs (see Adding Models below)
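
Before saving, you can sanity-check the Base URL and API key outside Alma. A minimal sketch using the openai Python package, assuming a local Ollama instance (substitute your own values):

    from openai import OpenAI

    # Values must match what you entered in Alma; both are assumptions here.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

    reply = client.chat.completions.create(
        model="llama3",
        messages=[{"role": "user", "content": "ping"}],
    )
    print(reply.choices[0].message.content)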

Compatible Services

Local Models

Ollama

Base URL: http://localhost:11434/v1

Models: llama3, mistral, codellama, etc.

LM Studio

Base URL: http://localhost:1234/v1

Models: Any loaded model

LocalAI

Base URL: http://localhost:8080/v1

Models: Various open-source models

Cloud Services

Together AI

Base URL: https://api.together.xyz/v1

Models: Llama, Mistral, and more

Groq

Base URL: https://api.groq.com/openai/v1

Models: Ultra-fast Llama and Mixtral

Perplexity

Base URL: https://api.perplexity.ai (note: no /v1 suffix)

Models: Sonar models with web search
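
The cloud services above differ only in base URL and key. For example, a sketch against Groq's endpoint (the model ID is an example and may change; check the provider's docs for current IDs):

    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.groq.com/openai/v1",
        api_key=os.environ["GROQ_API_KEY"],  # keep keys out of source code
    )

    reply = client.chat.completions.create(
        model="llama3-8b-8192",              # example model ID; verify before use
        messages=[{"role": "user", "content": "Hello from Alma!"}],
    )
    print(reply.choices[0].message.content)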

Proxy Services

Many proxy services provide OpenAI-compatible endpoints:

  • Custom enterprise proxies
  • Rate-limiting proxies
  • Caching proxies

Adding Models

For custom providers, you need to manually add model IDs:

  1. Go to provider settings
  2. Click Add Model
  3. Enter the exact model ID
  4. The model will appear in your model selector
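
If you don't know the exact ID, most OpenAI-compatible services will list the models they expose via GET /models. A sketch, with placeholder URL and key:

    import requests

    BASE_URL = "http://localhost:1234/v1"         # e.g. LM Studio
    headers = {"Authorization": "Bearer sk-..."}  # drop if no key is required

    resp = requests.get(f"{BASE_URL}/models", headers=headers, timeout=10)
    resp.raise_for_status()
    for model in resp.json()["data"]:
        print(model["id"])                        # copy one of these into Alma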

Configuration Tips

API Key Format

Some services use different authentication:

  • Bearer tokens: Standard format, sent as an Authorization: Bearer <key> header (see the sketch below)
  • No authentication: Leave API key empty
  • Custom headers: May not be supported
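
A sketch of both common cases (the URLs and key are placeholders):

    import requests

    # Bearer token (the standard): the key travels in the Authorization header.
    r = requests.get(
        "https://api.groq.com/openai/v1/models",
        headers={"Authorization": "Bearer gsk_..."},  # placeholder key
    )
    print(r.status_code)                              # 401 means a bad key

    # No authentication: omit the header entirely (common for local services).
    r = requests.get("http://localhost:11434/v1/models")
    print(r.status_code)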

Endpoint Format

Ensure your base URL:

  • Includes the protocol (http/https)
  • Does not include trailing slashes
  • Points to the v1 endpoint where the service uses one (Perplexity, above, is an exception)
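
A small illustrative check of these rules (not part of Alma itself):

    def normalize_base_url(url: str) -> str:
        """Illustrative check of the rules above."""
        if not url.startswith(("http://", "https://")):
            raise ValueError("base URL must include http:// or https://")
        return url.rstrip("/")  # trailing slashes can break path joining

    print(normalize_base_url("http://localhost:11434/v1/"))  # -> http://localhost:11434/v1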

Troubleshooting

"Connection Refused"

  • Check if the service is running
  • Verify the port number
  • Check firewall settings
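
A quick way to tell a dead service from an API problem is a raw port probe; a sketch assuming Ollama's default port:

    import socket

    # A refused connection here means nothing is listening on the port,
    # not that the API is misconfigured.
    try:
        socket.create_connection(("localhost", 11434), timeout=2).close()
        print("port is open")
    except OSError as exc:
        print(f"connection failed: {exc}")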

"Model Not Found"

  • Verify the exact model ID (the listing sketch under Adding Models can help)
  • Check if the model is loaded (for local services)

"Authentication Failed"

  • Check API key format
  • Some local services don't require keys