Custom Providers
Alma supports any OpenAI-compatible API, allowing you to use local models, proxies, or alternative services.
Setup
- In Alma, go to Settings → Providers → Add Provider
- Select Custom as the provider type
- Configure your settings:
  - Name: A display name for the provider
  - Base URL: Your API endpoint
  - API Key: Authentication key (if required)
- Click Save
- Manually add your model IDs (see Adding Models below)
Compatible Services
Local Models
Ollama
Base URL: http://localhost:11434/v1
Models: llama3, mistral, codellama, etc.
LM Studio
Base URL: http://localhost:1234/v1
Models: Any loaded model
LocalAI
Base URL: http://localhost:8080/v1
Models: Various open-source models
Cloud Services
Together AI
Base URL: https://api.together.xyz/v1
Models: Llama, Mistral, and more
Groq
Base URL: https://api.groq.com/openai/v1
Models: Ultra-fast Llama and Mixtral
Perplexity
Base URL: https://api.perplexity.ai
Models: Sonar models with web search
Proxy Services
Many proxy services provide OpenAI-compatible endpoints:
- Custom enterprise proxies
- Rate-limiting proxies
- Caching proxies
Adding Models
For custom providers, you need to manually add model IDs:
- Go to provider settings
- Click Add Model
- Enter the exact model ID
- The model then appears in your model selector
Configuration Tips
API Key Format
Some services handle authentication differently:
- Bearer tokens: Standard format (an Authorization: Bearer <key> header)
- No authentication: Leave the API key empty
- Custom headers: May not be supported
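The first two cases above map to request headers in a simple way; a minimal sketch (services that need custom headers fall outside this scheme):

```python
def auth_headers(api_key: str = "") -> dict[str, str]:
    """Map an optional API key to the headers an OpenAI-compatible
    client sends.

    - Bearer token: the standard Authorization header
    - No authentication (empty key): no Authorization header at all
    """
    if not api_key:
        return {}
    return {"Authorization": f"Bearer {api_key}"}


print(auth_headers("sk-local-123"))  # {'Authorization': 'Bearer sk-local-123'}
print(auth_headers())                # {}
```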
Endpoint Format
Ensure your base URL:
- Includes the protocol (http or https)
- Has no trailing slash
- Points to the /v1 endpoint where the service expects one (Perplexity, for example, omits it)
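The first two checks are mechanical and easy to apply before saving a provider; a small sketch:

```python
from urllib.parse import urlparse


def normalize_base_url(url: str) -> str:
    """Require an http/https protocol and strip any trailing slashes.

    Whether the path should end in /v1 depends on the service, so that
    part is left to the caller.
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"base URL must start with http:// or https://: {url!r}")
    return url.rstrip("/")


print(normalize_base_url("http://localhost:11434/v1/"))  # http://localhost:11434/v1
```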
Troubleshooting
"Connection Refused"
- Check if the service is running
- Verify the port number
- Check firewall settings
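The first two checks can be automated with a plain TCP probe; a sketch, assuming nothing about the service beyond its host and port:

```python
import socket
from urllib.parse import urlparse


def probe(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if anything is listening at the base URL's host:port.

    False points at the first two causes above (service not running,
    wrong port); firewall drops typically surface as a slow timeout
    rather than an immediate refusal, but also return False here.
    """
    parsed = urlparse(base_url)
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    try:
        with socket.create_connection((parsed.hostname, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, probe("http://localhost:11434/v1") returns False when Ollama is not running.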
"Model Not Found"
- Verify the exact model ID
- Check if the model is loaded (for local services)
"Authentication Failed"
- Check API key format
- Some local services don't require keys
