
Endpoint

POST https://www.opencompress.ai/api/v1/chat/completions

Headers

Header         Required  Description
Authorization  Yes       Bearer sk-occ-your-key-here
Content-Type   Yes       application/json

Request body

model
string
required
Model identifier. See Supported Models for the full list. Examples: gpt-4o, claude-sonnet-4-6, gemini-2.5-pro
messages
array
required
Array of message objects. Each message has a role and content. Supported roles: system, user, assistant
stream
boolean
default: false
If true, returns a stream of server-sent events.
temperature
number
Sampling temperature (0-2). Passed through to the model.
max_tokens
integer
Maximum tokens to generate. Passed through to the model.
top_p
number
Nucleus sampling parameter. Passed through to the model.
stop
string | array
Stop sequences. Passed through to the model.
All standard OpenAI parameters (frequency_penalty, presence_penalty, logprobs, tools, tool_choice, etc.) are passed through to the upstream model.
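The headers and body above can also be assembled without an SDK, using only the Python standard library. A minimal sketch; `build_request` is a hypothetical helper written for illustration, not part of any client library:

```python
import json
import urllib.request

API_URL = "https://www.opencompress.ai/api/v1/chat/completions"

def build_request(api_key, model, messages, **params):
    """Assemble a POST request with the headers and body described above.

    Extra keyword arguments (temperature, max_tokens, top_p, stop, ...)
    are placed in the JSON body alongside the required fields, matching
    the pass-through behavior described in the parameter list.
    """
    body = json.dumps({"model": model, "messages": messages, **params})
    return urllib.request.Request(
        API_URL,
        data=body.encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Actually sending the request needs network access and a valid key:
# with urllib.request.urlopen(build_request(
#         "sk-occ-your-key-here", "gpt-4o",
#         [{"role": "user", "content": "Hello"}])) as resp:
#     completion = json.loads(resp.read())
```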

Response

id
string
Unique identifier for this completion.
object
string
Always "chat.completion".
created
integer
Unix timestamp of when the completion was created.
model
string
The model used, matching your request.
choices
array
Array of completion choices.
usage
object
Token usage statistics.
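Putting the fields above together, a non-streaming response looks like the dictionary below. The values are illustrative only; the layout of each `choices` entry (`index`, `message`, `finish_reason`) follows the standard OpenAI chat completion schema, which this endpoint mirrors:

```python
# Illustrative response body (values are made up).
completion = {
    "id": "chatcmpl-abc123",   # unique identifier for this completion
    "object": "chat.completion",
    "created": 1717000000,     # Unix timestamp
    "model": "gpt-4o",         # matches the model in the request
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "JWTs are signed tokens..."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 25, "completion_tokens": 120, "total_tokens": 145},
}

# Pull out the generated text and the token accounting.
reply = completion["choices"][0]["message"]["content"]
total_tokens = completion["usage"]["total_tokens"]
```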

Example

from openai import OpenAI

client = OpenAI(
    base_url="https://www.opencompress.ai/api/v1",
    api_key="sk-occ-your-key-here",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a concise technical writer."},
        {"role": "user", "content": "Explain how JWT authentication works."},
    ],
    temperature=0.7,
    max_tokens=500,
)

print(response.choices[0].message.content)
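With stream set to true, the completion arrives as server-sent events instead of a single body, and each decoded event carries a content fragment in choices[0].delta. A sketch of how a client might reassemble the text, assuming the standard OpenAI streaming chunk shape (shown here as plain dictionaries; `collect_stream` is a hypothetical helper):

```python
def collect_stream(chunks):
    """Concatenate content fragments from decoded streaming chunks.

    In the standard OpenAI streaming schema, the first chunk carries the
    role, the last a finish_reason, and the ones in between hold text
    fragments in choices[0]["delta"]["content"].
    """
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)

# Illustrative chunks as they might be decoded from the SSE stream:
chunks = [
    {"choices": [{"delta": {"role": "assistant"}, "finish_reason": None}]},
    {"choices": [{"delta": {"content": "JWTs are "}, "finish_reason": None}]},
    {"choices": [{"delta": {"content": "signed tokens."}, "finish_reason": None}]},
    {"choices": [{"delta": {}, "finish_reason": "stop"}]},
]
print(collect_stream(chunks))  # JWTs are signed tokens.
```

With the OpenAI SDK shown in the example above, passing stream=True to client.chat.completions.create returns an iterator of such chunks, so no manual SSE parsing is needed.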