LLM Gateway
LLM Gateway is an open-source API gateway for Large Language Models (LLMs). It acts as middleware between your applications and various LLM providers, allowing you to:
- Route requests to multiple LLM providers (OpenAI, Anthropic, Google Vertex AI, and others)
- Manage API keys for different providers in one place
- Track token usage and costs across all your LLM interactions
- Analyze performance metrics to optimize your LLM usage
Features
- Unified API Interface: Compatible with the OpenAI API format for seamless migration
- Usage Analytics: Track requests, tokens used, response times, and costs
- Multi-provider Support: Connect to various LLM providers through a single gateway
- Performance Monitoring: Compare different models' performance and cost-effectiveness
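Because the gateway speaks the OpenAI API format, an existing OpenAI-style request can be pointed at it unchanged. A minimal sketch follows; the endpoint URL, the `LLM_GATEWAY_API_KEY` variable name, and the model name are assumptions to adapt to your deployment:

```shell
# Assumed endpoint; replace with your self-hosted gateway URL if applicable.
GATEWAY_URL="https://api.llmgateway.io/v1/chat/completions"

# An unmodified OpenAI-format chat-completions request body.
REQUEST_BODY='{"model":"gpt-4o","messages":[{"role":"user","content":"Hello!"}]}'

# Send the request only if an API key is configured in the environment.
if [ -n "$LLM_GATEWAY_API_KEY" ]; then
  curl -s "$GATEWAY_URL" \
    -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$REQUEST_BODY"
fi
```

Since the payload is standard OpenAI format, existing SDKs can typically be reused by overriding only their base URL and API key.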
Getting Started
You can use LLM Gateway in two ways:
- Hosted Version: For immediate use without setup, visit llmgateway.io to create an account and get an API key.
- Self-Hosted: Deploy LLM Gateway on your own infrastructure for complete control over your data and configuration.
Self-Hosted With Docker
Use Docker-managed volumes for the unified image. Do not bind-mount a host directory directly to /var/lib/postgresql/data: PostgreSQL initialization inside the container needs to set ownership and permissions on that directory, which can fail depending on the host filesystem and its ownership settings.
```shell
# Generate strong random secrets for the gateway.
export LLM_GATEWAY_SECRET="$(openssl rand -base64 32 | tr -d '\n')"
export GATEWAY_API_KEY_HASH_SECRET="$(openssl rand -base64 32 | tr -d '\n')"
```
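With the secrets exported, the container can be started against a Docker-managed (named) volume. This is a minimal sketch, not the official deployment: the image name, tag, container port, and volume name below are assumptions to check against the project's documentation.

```shell
# Named volume: Docker owns it, so PostgreSQL's init can freely chown
# and set permissions on its data directory (unlike a host bind mount).
VOLUME_NAME="llmgateway-data"

if command -v docker >/dev/null 2>&1; then
  docker volume create "$VOLUME_NAME"

  # Image name/tag and port are assumptions; the two -e flags pass the
  # secrets exported above through from the current environment.
  docker run -d \
    --name llmgateway \
    -e LLM_GATEWAY_SECRET \
    -e GATEWAY_API_KEY_HASH_SECRET \
    -v "$VOLUME_NAME":/var/lib/postgresql/data \
    -p 3000:3000 \
    llmgateway/unified:latest
fi
```

Passing `-e VAR` without a value forwards the variable from the shell environment, which keeps the secrets out of shell history and process listings.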