Documentation

frogAPI provides a unified gateway to the world's leading AI models through a single, OpenAI-compatible REST API. Change one line of code in your existing application to route requests through frogAPI and instantly benefit from wholesale pricing.

All endpoints are fully compatible with the OpenAI SDK, meaning you can use the official openai Python and Node.js libraries with zero modifications beyond updating the base URL and API key.

Authentication

All API requests require an API key passed in the Authorization header using the Bearer scheme. API keys are prefixed with frog_sk_ for easy identification.

You can generate and manage your API keys from the Dashboard. Each account receives one API key by default.

```http
POST /v1/chat/completions HTTP/1.1
Host: frogapi.app
Authorization: Bearer frog_sk_your_api_key_here
Content-Type: application/json
```

Keep your API key secure

Never expose your API key in client-side code, public repositories, or browser-accessible locations. Use environment variables and server-side proxying.
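
For example, a minimal sketch of loading the key from an environment variable in Python (the `FROGAPI_KEY` variable name is illustrative, not something frogAPI requires):

```python
import os

def load_frog_api_key(env_var: str = "FROGAPI_KEY") -> str:
    """Read a frogAPI key from the environment and sanity-check its
    frog_sk_ prefix before use. The variable name is a convention of
    this example, not of frogAPI itself."""
    key = os.environ.get(env_var, "")
    if not key.startswith("frog_sk_"):
        raise RuntimeError(f"{env_var} is unset or is not a frog_sk_ key")
    return key
```

Server-side code can then pass `load_frog_api_key()` as the `api_key` when constructing the client, keeping the secret out of source control.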

Quick Start

Get up and running with frogAPI in under 60 seconds. Since frogAPI is fully OpenAI-compatible, integration is as simple as changing your base URL.

1. Install the OpenAI SDK

```bash
pip install openai
```

2. Make your first request

```python
from openai import OpenAI

client = OpenAI(
    api_key="frog_sk_your_api_key_here",
    base_url="https://frogapi.app/v1"
)

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, frogAPI!"}
    ]
)

print(response.choices[0].message.content)
```

3. Enable streaming (optional)

Stream responses in real time using Server-Sent Events by setting `stream=True` in your request.

```python
from openai import OpenAI

client = OpenAI(
    api_key="frog_sk_your_api_key_here",
    base_url="https://frogapi.app/v1"
)

stream = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Tell me a joke"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

Endpoints

POST /v1/chat/completions

Creates a model response for the given chat conversation. This is the primary endpoint for all chat-based AI interactions.

Request Body

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `model` | string | Yes | ID of the model to use (e.g. `gpt-5.2`) |
| `messages` | array | Yes | List of messages in the conversation |
| `temperature` | number | No | Sampling temperature, 0–2. Default: 1 |
| `max_tokens` | integer | No | Maximum tokens to generate |
| `stream` | boolean | No | Enable Server-Sent Events streaming |
| `top_p` | number | No | Nucleus sampling parameter. Default: 1 |
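
As a sketch, the parameters above assemble into a request body like the following (the prompt and parameter values are illustrative):

```python
def build_chat_request(prompt: str, model: str = "gpt-5.2") -> dict:
    """Assemble a /v1/chat/completions request body from the
    parameters in the table above."""
    return {
        "model": model,                        # required
        "messages": [                          # required
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,  # optional: lower values are more deterministic
        "max_tokens": 100,   # optional: cap on generated tokens
        "top_p": 1,          # optional: nucleus sampling
        "stream": False,     # optional: True enables SSE streaming
    }
```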

Example Response

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1709000000,
  "model": "gpt-5.2",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 9,
    "total_tokens": 29
  }
}
```
GET /v1/models

Lists all available models currently served by frogAPI. Returns an array of model objects with metadata.

```bash
curl https://frogapi.app/v1/models \
  -H "Authorization: Bearer frog_sk_your_api_key_here"
```

Models

frogAPI serves the following models. Use the exact model ID in your API requests. All models are available at pricing that matches the source provider's rates — see the Pricing page for details.

| Model ID | Provider | Context | Best For |
| --- | --- | --- | --- |
| `gpt-5.4-mini` | OpenAI | 128K | High-performance compact |
| `gpt-5.4-nano` | OpenAI | 128K | Fast & efficient |
| `gpt-5.3` | OpenAI | 128K | Latest flagship |
| `gpt-5.2` | OpenAI | 128K | Flagship |
| `gpt-5-mini` | OpenAI | 128K | Balanced performance & cost |
| `gpt-5-nano` | OpenAI | 128K | Ultra-fast, low-latency tasks |
| `gemini-3.1-pro` | Google | 1M | Latest flagship |
| `gemini-3-flash` | Google | 1M | Fast balanced |
| `gemini-3.1-flash-lite` | Google | 1M | Ultra-fast value |
| `gemini-2.5-pro` | Google | 1M | Advanced reasoning |
| `kimi-k2.5` | Kimi | 128K | Advanced reasoning |
| `grok-4.1-fast` | xAI | 128K | Fast reasoning |
| `grok-4.1-fast-nr` | xAI | 128K | Fast non-reasoning |
| `mistral-large-3` | Mistral | 128K | Multilingual & code generation |
| `deepseek-v3.2` | DeepSeek | 64K | Cost-effective reasoning |

Note

All models accept the same OpenAI-compatible request format. To switch models, simply change the model parameter — no other code changes needed.

Gemini Models

frogAPI currently serves all Gemini models below through the same OpenAI-compatible /v1/chat/completions endpoint.

| Gemini Model | Best For | Input / 1M | Output / 1M |
| --- | --- | --- | --- |
| `gemini-3.1-pro` | Highest-quality Gemini reasoning | $2.00 | $12.00 |
| `gemini-3-flash` | Lower-latency general-purpose generation | $0.50 | $3.00 |
| `gemini-3.1-flash-lite` | Cheapest Gemini tier for lightweight tasks | $0.25 | $1.50 |
| `gemini-2.5-pro` | Advanced reasoning on Gemini 2.5 | $1.25 | $10.00 |

Compatibility

- Gemini models use the same `messages` format as every other frogAPI model.
- `max_tokens` and `max_completion_tokens` are both accepted.
- `stop` is supported and mapped to Gemini stop sequences.

Gemini-specific Notes

- `frequency_penalty` and `presence_penalty` are ignored for Gemini requests.
- Base64 image data URLs are supported in message content arrays.
- Remote image URLs are passed through as text references rather than uploaded binaries.
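
A minimal sketch of building a message content array with an inline base64 image (the helper function is illustrative; the content shape is the standard OpenAI-compatible one):

```python
import base64

def image_message(text: str, image_bytes: bytes, mime: str = "image/png") -> dict:
    """Build a multimodal user message whose image is embedded as a
    base64 data URL, the form supported for Gemini models above."""
    data_url = f"data:{mime};base64," + base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }
```

The resulting dict can be passed directly in the `messages` list of a chat completion request.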

Error Handling

frogAPI uses standard HTTP status codes. Errors return a JSON body with a descriptive error object containing a human-readable message, error type, and machine-readable code.

```json
{
  "error": {
    "message": "Invalid API key provided.",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}
```
| Status | Type | Description |
| --- | --- | --- |
| 400 | `invalid_request` | Your request is malformed or missing required fields |
| 401 | `authentication_error` | Invalid or missing API key |
| 402 | `insufficient_funds` | Your account balance is too low |
| 404 | `not_found` | The requested model or resource does not exist |
| 429 | `rate_limit_exceeded` | You have exceeded your rate limit |
| 500 | `server_error` | An unexpected error occurred on our end |
| 503 | `service_unavailable` | The upstream provider is temporarily unavailable |
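
A minimal sketch of handling an error body of the shape shown above (the helper names and the retry policy are illustrative, not part of the API):

```python
import json

# Statuses from the table above that are worth retrying automatically.
RETRYABLE_STATUSES = {429, 500, 503}

def parse_error(body: str) -> tuple:
    """Extract (message, type, code) from a frogAPI error body."""
    err = json.loads(body)["error"]
    return err["message"], err["type"], err["code"]

def should_retry(status: int) -> bool:
    """Retry on rate limits and server-side failures; fail fast on
    client-side errors such as 400, 401, 402, and 404."""
    return status in RETRYABLE_STATUSES
```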

Retry Strategy

For 429 and 5xx errors, implement exponential backoff with jitter. Respect the Retry-After header when present. A good starting point is 1 second, doubling on each retry up to a maximum of 30 seconds.
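
The strategy above can be sketched as a delay calculator (the function is illustrative; only the `Retry-After` header and the 1s-doubling-to-30s schedule come from the text):

```python
import random

def retry_delay(attempt: int, retry_after: str = None,
                base: float = 1.0, cap: float = 30.0) -> float:
    """Seconds to wait before retry number `attempt` (0-based).
    Honors the server's Retry-After header when present; otherwise
    uses exponential backoff with full jitter, starting at 1s and
    doubling each attempt up to a 30s maximum."""
    if retry_after is not None:
        try:
            return min(cap, float(retry_after))
        except ValueError:
            pass  # non-numeric header: fall back to backoff
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

A retry loop would call `time.sleep(retry_delay(attempt, response_retry_after))` after each 429 or 5xx response, giving up after a fixed number of attempts.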

Rate Limits

Limits are applied per user, per model.

| Limit | Meaning |
| --- | --- |
| RPM | Requests per minute |
| TPM | Tokens per minute |
| RPD | Requests per day |

When a limit is exceeded, the API returns `429`; check the `Retry-After` header before retrying. The daily (RPD) limit resets at 00:00 UTC.

For detailed per-model limits, see the Rate Limits page.

SDKs & Libraries

frogAPI is fully OpenAI-compatible. Any library or framework that works with the OpenAI API works with frogAPI — just update the base URL and API key.

| SDK | Package | Install | Base URL setting |
| --- | --- | --- | --- |
| Python | `openai` | `pip install openai` | `base_url="https://frogapi.app/v1"` |
| Node.js | `openai` | `npm install openai` | `baseURL: "https://frogapi.app/v1"` |
| LangChain | `langchain-openai` | `pip install langchain-openai` | `openai_api_base="https://frogapi.app/v1"` |
| LiteLLM | `litellm` | `pip install litellm` | `api_base="https://frogapi.app/v1"` |

Migration Tip

If you already use an OpenAI SDK, the only changes needed are the base URL and the API key.