Documentation
frogAPI provides a unified gateway to the world's leading AI models through a single, OpenAI-compatible REST API. Change one line of code in your existing application to route requests through frogAPI and instantly benefit from wholesale pricing.
All endpoints are fully compatible with the OpenAI SDK, meaning you can use the official openai Python and Node.js libraries with zero modifications beyond updating the base URL and API key.
Authentication
All API requests require an API key passed in the Authorization header using the Bearer scheme. API keys are prefixed with frog_sk_ for easy identification.
You can generate and manage your API keys from the Dashboard. Each account receives one API key by default.
POST /v1/chat/completions HTTP/1.1
Host: frogapi.app
Authorization: Bearer frog_sk_your_api_key_here
Content-Type: application/json
Keep your API key secure
Never expose your API key in client-side code, public repositories, or browser-accessible locations. Use environment variables and server-side proxying.
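For example, a server-side client can load the key from an environment variable rather than hard-coding it. The variable name FROG_API_KEY below is just a convention for this sketch, not something frogAPI requires:

```python
import os

def load_api_key() -> str:
    # Read the key from the environment so it never appears in source control.
    key = os.environ.get("FROG_API_KEY")
    if not key:
        raise RuntimeError("FROG_API_KEY is not set")
    return key

# The returned key can then be passed as api_key= when constructing the client.
```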
Quick Start
Get up and running with frogAPI in under 60 seconds. Since frogAPI is fully OpenAI-compatible, integration is as simple as changing your base URL.
1. Install the OpenAI SDK
pip install openai
2. Make your first request
from openai import OpenAI

client = OpenAI(
    api_key="frog_sk_your_api_key_here",
    base_url="https://frogapi.app/v1"
)

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, frogAPI!"}
    ]
)

print(response.choices[0].message.content)

3. Enable streaming (optional)
Stream responses in real time using Server-Sent Events. Simply set stream=True in your request ("stream": true in raw JSON).
from openai import OpenAI

client = OpenAI(
    api_key="frog_sk_your_api_key_here",
    base_url="https://frogapi.app/v1"
)

stream = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Tell me a joke"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

Endpoints
POST /v1/chat/completions
Creates a model response for the given chat conversation. This is the primary endpoint for all chat-based AI interactions.
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | ID of the model to use (e.g. gpt-5.2) |
| messages | array | Yes | List of messages in the conversation |
| temperature | number | No | Sampling temperature, 0–2. Default: 1 |
| max_tokens | integer | No | Maximum tokens to generate |
| stream | boolean | No | Enable Server-Sent Events streaming |
| top_p | number | No | Nucleus sampling parameter. Default: 1 |
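As a sketch of how the optional parameters combine, the helper below assembles a request body and rejects names outside the table above. The chat_request function is illustrative, not part of any SDK:

```python
def chat_request(model: str, messages: list, **options) -> dict:
    # Only the optional parameters documented for /v1/chat/completions.
    allowed = {"temperature", "max_tokens", "stream", "top_p"}
    unknown = set(options) - allowed
    if unknown:
        raise ValueError(f"unsupported parameters: {sorted(unknown)}")
    return {"model": model, "messages": messages, **options}

body = chat_request(
    "gpt-5.2",
    [{"role": "user", "content": "Hello, frogAPI!"}],
    temperature=0.7,
    max_tokens=256,
)
```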
Example Response
{
"id": "chatcmpl-abc123",
"object": "chat.completion",
"created": 1709000000,
"model": "gpt-5.2",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! How can I help you today?"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 20,
"completion_tokens": 9,
"total_tokens": 29
}
}

GET /v1/models
Lists all available models currently served by frogAPI. Returns an array of model objects with metadata.
curl https://frogapi.app/v1/models \
  -H "Authorization: Bearer frog_sk_your_api_key_here"
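The same call can be made in Python with only the standard library. The parsing assumes the OpenAI-style list envelope ({"object": "list", "data": [...]}) implied by the description above:

```python
import json
import urllib.request

def parse_model_ids(body: dict) -> list:
    # The endpoint returns {"object": "list", "data": [{"id": ...}, ...]}.
    return [m["id"] for m in body.get("data", [])]

def list_models(api_key: str) -> list:
    req = urllib.request.Request(
        "https://frogapi.app/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_model_ids(json.load(resp))
```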
Models
frogAPI serves the following models. Use the exact model ID in your API requests. All models are available at the same source-matching pricing — see the Pricing page for details.
| Model ID | Provider | Context | Best For |
|---|---|---|---|
| gpt-5.4-mini | OpenAI | 128K | High-performance compact |
| gpt-5.4-nano | OpenAI | 128K | Fast & efficient |
| gpt-5.3 | OpenAI | 128K | Latest flagship |
| gpt-5.2 | OpenAI | 128K | Flagship |
| gpt-5-mini | OpenAI | 128K | Balanced performance & cost |
| gpt-5-nano | OpenAI | 128K | Ultra-fast, low-latency tasks |
| gemini-3.1-pro | Google | 1M | Latest flagship |
| gemini-3-flash | Google | 1M | Fast balanced |
| gemini-3.1-flash-lite | Google | 1M | Ultra-fast value |
| gemini-2.5-pro | Google | 1M | Advanced reasoning |
| kimi-k2.5 | Kimi | 128K | Advanced reasoning |
| grok-4.1-fast | xAI | 128K | Fast reasoning |
| grok-4.1-fast-nr | xAI | 128K | Fast non-reasoning |
| mistral-large-3 | Mistral | 128K | Multilingual & code generation |
| deepseek-v3.2 | DeepSeek | 64K | Cost-effective reasoning |
Note
All models accept the same OpenAI-compatible request format. To switch models, simply change the model parameter — no other code changes needed.
Gemini Models
frogAPI currently serves all Gemini models below through the same OpenAI-compatible /v1/chat/completions endpoint.
| Gemini Model | Best For | Input / 1M | Output / 1M |
|---|---|---|---|
| gemini-3.1-pro | Highest-quality Gemini reasoning | $2.00 | $12.00 |
| gemini-3-flash | Lower-latency general-purpose generation | $0.50 | $3.00 |
| gemini-3.1-flash-lite | Cheapest Gemini tier for lightweight tasks | $0.25 | $1.50 |
| gemini-2.5-pro | Advanced reasoning on Gemini 2.5 | $1.25 | $10.00 |
Compatibility
Gemini models use the same messages format as every other frogAPI model.
max_tokens and max_completion_tokens are both accepted.
stop is supported and mapped to Gemini stop sequences.
Gemini-specific Notes
frequency_penalty and presence_penalty are ignored for Gemini requests.
Base64 image data URLs are supported in message content arrays.
Remote image URLs are passed through as text references rather than uploaded binaries.
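A minimal sketch of attaching a local image as a base64 data URL, using the OpenAI content-array format mentioned above. The image_message helper is illustrative, not part of any SDK:

```python
import base64

def image_message(image_bytes: bytes, prompt: str, mime: str = "image/png") -> dict:
    # Encode the image as a base64 data URL, the form supported for Gemini models.
    data_url = f"data:{mime};base64," + base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }
```

The resulting dict can be appended to the messages list of a normal /v1/chat/completions request.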
Error Handling
frogAPI uses standard HTTP status codes. Errors return a JSON body with a descriptive error object containing a human-readable message, error type, and machine-readable code.
{
"error": {
"message": "Invalid API key provided.",
"type": "authentication_error",
"code": "invalid_api_key"
}
}

| Status | Type | Description |
|---|---|---|
| 400 | invalid_request | Your request is malformed or missing required fields |
| 401 | authentication_error | Invalid or missing API key |
| 402 | insufficient_funds | Your account balance is too low |
| 404 | not_found | The requested model or resource does not exist |
| 429 | rate_limit_exceeded | You have exceeded your rate limit |
| 500 | server_error | An unexpected error occurred on our end |
| 503 | service_unavailable | The upstream provider is temporarily unavailable |
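A client can surface these fields directly when logging failures; a small sketch, where format_api_error is an illustrative helper rather than a library function:

```python
def format_api_error(status: int, body: dict) -> str:
    # Errors arrive as {"error": {"message": ..., "type": ..., "code": ...}}.
    err = body.get("error", {})
    return (
        f"HTTP {status} [{err.get('type', 'unknown')}/{err.get('code', 'unknown')}]: "
        f"{err.get('message', '')}"
    )
```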
Retry Strategy
For 429 and 5xx errors, implement exponential backoff with jitter. Respect the Retry-After header when present. A good starting point is 1 second, doubling on each retry up to a maximum of 30 seconds.
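That policy can be sketched as follows. Here send_request stands in for whatever HTTP call your client makes, and the Retry-After handling assumes the header carries a delay in seconds:

```python
import random
import time

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    # Full jitter: a random delay in [0, min(cap, base * 2**attempt)].
    return random.uniform(0, min(cap, base * 2 ** attempt))

def with_retries(send_request, max_attempts: int = 5):
    # send_request() should return (status_code, headers, body).
    for attempt in range(max_attempts):
        status, headers, body = send_request()
        if status != 429 and status < 500:
            return status, headers, body
        # Prefer the server's Retry-After hint when present.
        delay = float(headers.get("Retry-After", backoff_delay(attempt)))
        time.sleep(delay)
    return status, headers, body
```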
Rate Limits
Limits are applied per user, per model, across three dimensions:

RPM: requests per minute
TPM: tokens per minute
RPD: requests per day

When a limit is exceeded, the API returns 429; check the Retry-After header before retrying. Daily limits reset at 00:00 UTC.
For detailed per-model limits, see the Rate Limits page.
SDKs & Libraries
frogAPI is fully OpenAI-compatible. Any library or framework that works with the OpenAI API works with frogAPI — just update the base URL and API key.
Python (openai)
Install: pip install openai
Base URL: base_url="https://frogapi.app/v1"

Node.js (openai)
Install: npm install openai
Base URL: baseURL: "https://frogapi.app/v1"

LangChain (langchain-openai)
Install: pip install langchain-openai
Base URL: openai_api_base="https://frogapi.app/v1"

LiteLLM (litellm)
Install: pip install litellm
Base URL: api_base="https://frogapi.app/v1"

Migration Tip
If you already use the OpenAI SDKs, the only changes needed are the base URL and API key.