╦  ╔═╗╔╗╔╔═╗╔═╗╔═╗╔═╗╦ ╦╔═╗
║  ╠═╣║║║║ ╦║  ╠═╣║  ╠═╣║╣
╩═╝╩ ╩╝╚╝╚═╝╚═╝╩ ╩╚═╝╩ ╩╚═╝
Welcome to LangCache Gateway.
A semantic caching layer for OpenAI, Anthropic, and Google Gemini APIs.
Features:
• Vector-based semantic similarity matching
• Statistics on prompt usage and cache performance
• Support for streaming and non-streaming requests
• Drop-in replacement for the OpenAI, Anthropic, and Gemini APIs
── Authenticate to get started ──
Loading...
Your workspaces:
── Integration Examples ──
curl -X POST https://gateway.langcache.io/v1/responses \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "x-langcache-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "input": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url="https://gateway.langcache.io/v1"
)

# The Responses API takes `input` (not `messages`).
response = client.responses.create(
    model="gpt-4",
    input=[
        {"role": "user", "content": "Hello!"}
    ],
    extra_headers={
        "x-langcache-key": "YOUR_API_KEY"
    }
)
print(response.output_text)
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://gateway.langcache.io/v1'
});

// The Responses API takes `input` (not `messages`).
const response = await client.responses.create({
  model: 'gpt-4',
  input: [
    { role: 'user', content: 'Hello!' }
  ]
}, {
  headers: {
    'x-langcache-key': 'YOUR_API_KEY'
  }
});
console.log(response.output_text);
curl -X POST https://gateway.langcache.io/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "x-langcache-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "Hello, Claude!"}
    ]
  }'
import os

from anthropic import Anthropic

# The SDK appends /v1/... to the base URL itself,
# so the version segment is omitted here.
client = Anthropic(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    base_url="https://gateway.langcache.io"
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude!"}
    ],
    extra_headers={
        "x-langcache-key": "YOUR_API_KEY"
    }
)
print(message.content[0].text)
import Anthropic from '@anthropic-ai/sdk';

// The SDK appends /v1/... to the base URL itself,
// so the version segment is omitted here.
const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  baseURL: 'https://gateway.langcache.io'
});

const message = await client.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Hello, Claude!' }
  ]
}, {
  headers: {
    'x-langcache-key': 'YOUR_API_KEY'
  }
});
console.log(message.content[0].text);
curl -X POST "https://gateway.langcache.io/v1beta/models/gemini-2.5-flash:generateContent" \
  -H "x-goog-api-key: $GEMINI_API_KEY" \
  -H "x-langcache-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [
      {
        "parts": [
          {"text": "Hello, Gemini!"}
        ]
      }
    ]
  }'
import os

from google import genai

# Uses the google-genai SDK: the legacy google.generativeai client does not
# expose custom request headers, which the gateway needs for x-langcache-key.
# The SDK appends the /v1beta path itself, so the base URL omits it.
client = genai.Client(
    api_key=os.getenv("GEMINI_API_KEY"),
    http_options={
        "base_url": "https://gateway.langcache.io",
        "headers": {"x-langcache-key": "YOUR_API_KEY"}
    }
)

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Hello, Gemini!"
)
print(response.text)
import { GoogleGenerativeAI } from '@google/generative-ai';

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

// baseUrl and customHeaders are request options on getGenerativeModel;
// the SDK appends the /v1beta path itself.
const model = genAI.getGenerativeModel(
  { model: 'gemini-2.5-flash' },
  {
    baseUrl: 'https://gateway.langcache.io',
    customHeaders: { 'x-langcache-key': 'YOUR_API_KEY' }
  }
);

const result = await model.generateContent('Hello, Gemini!');
console.log(result.response.text());
── Usage Statistics ──
── Top 10 Cache Hits ──
── Similarity Threshold Settings ──
Adjust how similar a query must be to match a cached response.
Queries with 80% or higher similarity will return cached results.
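To illustrate what the 80% cutoff means in practice, here is a hedged sketch using cosine similarity over small hypothetical embedding vectors (the gateway's actual similarity metric and embedding model are not specified on this page):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.80  # the 80% setting described above

# Hypothetical embedding vectors for three queries.
cached_query  = [0.9, 0.3, 0.1]
close_rewrite = [0.8, 0.4, 0.2]  # paraphrase of the cached query
different_ask = [0.1, 0.2, 0.9]  # unrelated question

for name, vec in [("close_rewrite", close_rewrite), ("different_ask", different_ask)]:
    score = cosine_similarity(cached_query, vec)
    verdict = "cache hit" if score >= THRESHOLD else "cache miss"
    print(f"{name}: similarity={score:.2f} -> {verdict}")
```

Raising the threshold trades hit rate for precision: near-duplicates still match, but loosely related queries fall back to the upstream model.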
── API Key Management ──
Create and manage API keys for secure workspace access.
── Your Subscription ──
Save this key now. You won't be able to see it again!
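The "save it now" behavior typically follows from storing only a hash of the key, never the plaintext. A minimal sketch of that pattern (the `lck_` prefix and function names are illustrative, not the gateway's actual implementation):

```python
import hashlib
import secrets

def create_api_key() -> tuple[str, str]:
    """Return (plaintext_key, stored_hash). Only the hash is persisted,
    which is why the plaintext can be shown just once."""
    key = "lck_" + secrets.token_urlsafe(32)  # prefix is illustrative
    stored_hash = hashlib.sha256(key.encode()).hexdigest()
    return key, stored_hash

def verify_api_key(presented: str, stored_hash: str) -> bool:
    # Hash the presented key and compare in constant time.
    digest = hashlib.sha256(presented.encode()).hexdigest()
    return secrets.compare_digest(digest, stored_hash)

key, stored = create_api_key()
print(verify_api_key(key, stored))            # correct key -> True
print(verify_api_key("lck_wrong", stored))    # wrong key -> False
```

Because only the digest is stored, a database leak does not expose usable keys, and the service genuinely cannot re-display the original.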
Last Updated: November 19, 2025
We collect information you provide directly to us when you create an account, use our services, or communicate with us. This may include:
We use the information we collect to:
We implement appropriate technical and organizational measures to protect your personal information against unauthorized access, alteration, disclosure, or destruction.
We retain your information for as long as necessary to provide our services and fulfill the purposes outlined in this policy, unless a longer retention period is required by law.
You have the right to access, correct, or delete your personal information. You may also have the right to restrict or object to certain processing of your data.
Our service integrates with third-party APIs (OpenAI, Anthropic, Google Gemini). Your use of these services through our gateway is also subject to their respective privacy policies.
If you have questions about this Privacy Policy, please contact us at: support@langcache.io
Last Updated: November 19, 2025
By accessing and using LangCache Gateway, you accept and agree to be bound by the terms and provisions of this agreement.
LangCache Gateway provides a semantic caching layer for AI language model APIs. The service acts as an intermediary to cache and retrieve responses based on semantic similarity.
You must provide accurate and complete information when creating an account. You are responsible for maintaining the security of your account credentials.
You agree to use the service only for lawful purposes and in accordance with these Terms. You must not:
You are responsible for maintaining the confidentiality of your API keys. All activities under your account are your responsibility.
We strive to maintain high availability but do not guarantee uninterrupted access. We reserve the right to modify or discontinue the service with reasonable notice.
Subscription fees are billed in advance on a recurring basis. You are responsible for providing current, complete, and accurate billing information.
Your use of the service is also governed by our Privacy Policy. We process cached data in accordance with our privacy practices.
The service and its original content, features, and functionality are owned by LangCache and are protected by international copyright, trademark, and other intellectual property laws.
To the maximum extent permitted by law, LangCache shall not be liable for any indirect, incidental, special, consequential, or punitive damages resulting from your use of the service.
We may terminate or suspend your account and access to the service immediately, without prior notice, for conduct that we believe violates these Terms or is harmful to other users, us, or third parties.
We reserve the right to modify these terms at any time. We will notify users of any material changes via email or through the service.
For questions about these Terms, please contact us at: support@langcache.io