The o3-Pro API is a RESTful chat-completion endpoint that lets developers invoke OpenAI's advanced chain-of-thought reasoning, code execution, and data-analysis capabilities through configurable parameters (model="o3-pro", messages, temperature, max_tokens, stream, etc.), enabling integration into complex workflows.
OpenAI officially launched o3-Pro on June 10, 2025, positioning it as the company’s most capable reasoning model yet. This release follows the earlier rollout of the o-series and replaces o1-Pro for ChatGPT Pro and Team users, with Enterprise and Education customers gaining access shortly thereafter.
Basic Information & Features
- Model Class: o3-Pro is part of OpenAI’s “reasoning models,” designed to think step-by-step rather than generate immediate responses.
- Availability: Accessible via ChatGPT Pro/Team interfaces and the OpenAI developer API as of June 10, 2025.
- Access Tiers: Replaces the previous o1-Pro edition; Enterprise and Edu users onboard in the week following launch.
Technical Details
- Architecture: Builds on the o3 backbone with an enhanced private chain of thought, enabling multi-step reasoning at inference.
- Tokenization: Supports the same token schema as its predecessors—1 million input tokens ≈ 750,000 words.
- Extended Capabilities: Includes web search, Python code execution, file analysis, and visual reasoning; image generation remains unsupported in this release.
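The tokens-to-words ratio quoted above (1 million input tokens ≈ 750,000 words) is handy for rough budget planning. The helper below is an illustrative estimate only, not an official tokenizer; use a real tokenizer for exact counts:

```python
# Rough token/word conversion based on the ~0.75 words-per-token rule of thumb
# stated above. This is an approximation, not an exact tokenizer.

def estimate_words(tokens: int) -> int:
    """Approximate how many English words fit in `tokens` tokens."""
    return int(tokens * 0.75)

def estimate_tokens(words: int) -> int:
    """Approximate how many tokens a text of `words` words consumes."""
    return int(words / 0.75)

print(estimate_words(1_000_000))  # 750000
print(estimate_tokens(750_000))   # 1000000
```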
Evolution of the o-Series
- o1 → o3: The jump from o1 to o3 in April 2025 brought a major leap in reasoning capability.
- Pricing Strategy: Alongside o3-Pro’s debut, OpenAI cut o3’s price by 80 percent—from $2 to $0.40 per million input tokens—to accelerate adoption.
- o3-Pro Release: Premium compute and fine-tuned reasoning pathways deliver the highest reliability at a premium tier.
Benchmark Performance
- Math & Science: Surpassed Google Gemini 2.5 Pro on the AIME 2024 contest, demonstrating superior problem-solving in advanced mathematics.
- PhD-Level Science: Outperformed Anthropic’s Claude 4 Opus on the GPQA Diamond benchmark, indicating robust expertise in scientific domains.
- Enterprise Use: Internal tests report consistent wins over predecessor models across coding, STEM, and business reasoning tasks.
Technical Indicators
- Latency: Response times average roughly 1.5× those of o1-Pro, reflecting the deeper reasoning chains.
- Throughput: Sustained token-generation throughput of up to 10 tokens/sec in burst mode.
- Pricing:
- Input: $20 per million tokens
- Output: $80 per million tokens
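Putting the throughput and pricing figures together, a small back-of-the-envelope helper (illustrative only; actual bills depend on exact token counts) shows what a single request costs and roughly how long generation takes:

```python
# Cost and time estimates using the figures quoted above:
# $20 / 1M input tokens, $80 / 1M output tokens, ~10 tokens/sec burst throughput.

INPUT_PRICE_PER_M = 20.0
OUTPUT_PRICE_PER_M = 80.0
TOKENS_PER_SEC = 10.0

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed o3-Pro rates."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M + \
           (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

def generation_seconds(output_tokens: int) -> float:
    """Rough generation time at burst throughput."""
    return output_tokens / TOKENS_PER_SEC

# A 2,000-token prompt with a 500-token answer:
print(round(request_cost(2_000, 500), 2))  # 0.08
print(generation_seconds(500))             # 50.0
```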
Sample Code & API Integration
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")  # or set the OPENAI_API_KEY env var

response = client.chat.completions.create(
    model="o3-pro",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain Euler's formula step by step."},
    ],
    max_tokens=500,
    temperature=0.2,
)

print(response.choices[0].message.content)
- Highlights:
- model: "o3-pro" selects the o3-Pro reasoning model
- temperature: a low value (0.2) encourages consistent, factual responses
- max_tokens: adjust per query complexity
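Since the endpoint also supports streaming, the same call can yield tokens incrementally. The sketch below assumes the standard openai Python SDK streaming shape (chunks carrying `choices[0].delta.content`); it is written as a function so the client can be swapped out for testing:

```python
# Streaming sketch: prints tokens as they arrive instead of waiting for the
# full reply. Assumes the openai>=1.0 SDK's chat.completions streaming shape.

def stream_reply(client, prompt: str, model: str = "o3-pro") -> str:
    """Stream a chat completion and return the concatenated text."""
    parts = []
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,  # ask the API to stream incremental chunks
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # final chunks may carry no content
            print(delta, end="", flush=True)
            parts.append(delta)
    return "".join(parts)

# Usage (requires a configured client and valid key):
# from openai import OpenAI
# client = OpenAI(api_key="YOUR_API_KEY")
# stream_reply(client, "Explain Euler's formula step by step.")
```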
With its enhanced reasoning chains, expanded feature set, and leading benchmark performance, o3-Pro represents a significant step forward in reliable, high-precision AI.
How to call o3-Pro API from CometAPI
Required Steps
- Log in to cometapi.com. If you are not a user yet, please register first.
- Get an API key for the interface: click "Add Token" under API Tokens in the personal center to obtain a token key of the form sk-xxxxx, then submit.
- Note the base URL of this site: https://api.cometapi.com/
Usage Methods
- Select the "o3-pro" endpoint to send the request and set the request body. The request method and request body format are described in our website's API doc; the site also provides an Apifox test console for convenience.
- Replace <YOUR_API_KEY> with your actual CometAPI key from your account.
- Insert your question or request into the content field: this is what the model will respond to.
- Process the API response to get the generated answer.
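The steps above can be sketched in Python. The `/v1/chat/completions` path and the request shape are assumptions based on the common OpenAI-compatible format; verify the exact schema against the CometAPI API doc:

```python
# Sketch of a CometAPI request; the payload mirrors the OpenAI chat format.
# The /v1/chat/completions path is an assumption - check the API doc.

BASE_URL = "https://api.cometapi.com"

def build_request(api_key: str, question: str):
    """Return (url, headers, payload) for an o3-pro chat request."""
    url = f"{BASE_URL}/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": "o3-pro",
        "messages": [{"role": "user", "content": question}],
    }
    return url, headers, payload

# To actually send it (requires the requests package and a valid key):
# import requests
# url, headers, payload = build_request("sk-xxxxx", "Explain Euler's formula.")
# answer = requests.post(url, headers=headers, json=payload).json()
# print(answer["choices"][0]["message"]["content"])
```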
For model access information in CometAPI, please see the API doc.
For model price information in CometAPI, please see https://api.cometapi.com/pricing.