
o3

Input:$1.6/M
Output:$6.4/M
o3 is an artificial intelligence model provided by OpenAI.

Technical Specifications of o3

Model ID: o3
Provider: OpenAI
Model type: Reasoning model
Input modalities: Text, image
Output modalities: Text
Context window: 200,000 tokens
Max output tokens: 100,000 tokens
Knowledge cutoff: June 1, 2024
API availability: Available through the Responses API
Best suited for: Complex reasoning, math, science, coding, visual reasoning, and technical writing

What is o3?

o3 is an artificial intelligence model provided by OpenAI. It is a reasoning-focused model designed for complex, multi-step problem solving across text, code, and image-based inputs. OpenAI describes it as a well-rounded model that performs strongly in domains such as mathematics, science, coding, visual reasoning, and instruction-following.

On CometAPI, the model is accessed using the platform model identifier o3. If you are integrating this model into your application, workflow, or internal tooling, use o3 exactly as the model name in your API requests.

Main features of o3

  • Advanced reasoning: Built for multi-step analysis and decision-making, making it suitable for tasks that require deeper logical processing rather than only surface-level text generation.
  • Multimodal input support: Accepts both text and image inputs, which is useful for workflows involving screenshots, diagrams, charts, documents, or mixed-format prompts.
  • Text output generation: Returns text outputs that can be used for explanations, summaries, problem solving, technical writing, and structured responses.
  • Large context window: Supports up to 200,000 tokens of context, enabling it to work with long conversations, large documents, extensive codebases, or multi-part instructions.
  • High output capacity: Can generate up to 100,000 output tokens, which helps for long-form answers, detailed reports, and extended reasoning tasks.
  • Strong STEM and coding performance: Especially useful for mathematics, scientific analysis, software development, debugging, and other logic-intensive use cases.
  • Visual reasoning capability: Can reason over image inputs in addition to text, helping with interpretation of visual materials and mixed-modal tasks.
  • Instruction following: Performs well on structured prompts and detailed task requirements, which is important for production use cases and predictable integrations.
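Because o3 accepts both text and image inputs, a single request can combine the two. The sketch below builds such a request body; the content-part format (`input_text` / `input_image` part types) is assumed to follow OpenAI's Responses API conventions, so confirm the exact field names against the CometAPI documentation before relying on them.

```python
# Sketch of a Responses-style payload combining text and one image input.
# The "input_text" / "input_image" part types mirror OpenAI's Responses API
# conventions and are an assumption here, not a documented CometAPI contract.

def build_multimodal_payload(prompt: str, image_url: str) -> dict:
    """Build a request body asking o3 to reason over text plus one image."""
    return {
        "model": "o3",
        "input": [
            {
                "role": "user",
                "content": [
                    {"type": "input_text", "text": prompt},
                    {"type": "input_image", "image_url": image_url},
                ],
            }
        ],
    }

payload = build_multimodal_payload(
    "Summarize the trend shown in this chart.",
    "https://example.com/chart.png",
)
```

The same structure extends to multiple images by appending further `input_image` parts to the `content` list.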

How to access and integrate o3

Step 1: Sign Up for API Key

To start using the o3 API, first sign up for an API key on the CometAPI platform. After registration, you will receive your developer credentials, which you can use to authenticate requests and manage usage across supported AI models.

Step 2: Send Requests to o3 API

Once you have your API key, send requests to CometAPI’s compatible API endpoint while setting the model field to o3.

curl https://api.cometapi.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_COMETAPI_KEY" \
  -d '{
    "model": "o3",
    "input": "Explain the main advantages of reasoning models in production applications."
  }'

You can also integrate o3 from common server-side environments such as Python, Node.js, or any framework that can make standard HTTPS requests to a JSON API.
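As a minimal Python sketch, the same request can be prepared with only the standard library. The endpoint and model name are taken from the curl example above; `YOUR_COMETAPI_KEY` is a placeholder for your real key.

```python
# Python equivalent of the curl request above, using only the standard library.
# YOUR_COMETAPI_KEY is a placeholder credential.
import json
import urllib.request

API_URL = "https://api.cometapi.com/v1/responses"

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Prepare an authenticated POST request with the o3 model selected."""
    body = json.dumps({"model": "o3", "input": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_request(
    "YOUR_COMETAPI_KEY",
    "Explain the main advantages of reasoning models in production applications.",
)
# With a real key, send it like this:
# with urllib.request.urlopen(req) as resp:
#     data = json.loads(resp.read())
```

Any HTTP client (requests, httpx, fetch in Node.js, etc.) works the same way, since this is a plain JSON-over-HTTPS API.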

Step 3: Retrieve and Verify Results

After sending your request, CometAPI will return the model’s generated response. You can then parse the output in your application, display it to users, store it for later workflows, or run additional validation checks based on your business logic. For production deployments, it is recommended to verify response quality, formatting, and task accuracy before using the result in user-facing or automated systems.
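A small helper for pulling the generated text out of the returned JSON might look like this. The response shape assumed here (an `output` list of message items containing `output_text` content parts) follows OpenAI's Responses API; verify it against the actual payload CometAPI returns before shipping.

```python
# Minimal sketch: extract generated text from a Responses-style JSON body.
# The "output" / "output_text" shape is assumed from OpenAI's Responses API
# and may differ from the actual CometAPI payload.

def extract_text(response: dict) -> str:
    """Concatenate all output_text parts from a Responses-style body."""
    parts = []
    for item in response.get("output", []):
        for content in item.get("content", []):
            if content.get("type") == "output_text":
                parts.append(content.get("text", ""))
    return "".join(parts)

# Hypothetical response body for illustration:
sample = {
    "output": [
        {"type": "message", "content": [{"type": "output_text", "text": "Hello."}]}
    ]
}
print(extract_text(sample))  # -> Hello.
```

Returning an empty string when no text parts are found (rather than raising) makes the helper safe to chain with downstream validation checks.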

Features for o3

Explore the key features of o3, designed to enhance performance and usability. Discover how these capabilities can benefit your projects and improve user experience.

Pricing for o3

Explore competitive pricing for o3, designed to fit various budgets and usage needs. Our flexible plans ensure you only pay for what you use, making it easy to scale as your requirements grow. Discover how o3 can enhance your projects while keeping costs manageable.
Comet Price (USD / M tokens): Input $1.6, Output $6.4
Official Price (USD / M tokens): Input $2, Output $8
Discount: -20%
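Given the per-million-token rates above, estimating the cost of a single request is simple arithmetic:

```python
# Back-of-the-envelope cost estimate using the CometAPI rates listed above:
# $1.6 per million input tokens, $6.4 per million output tokens.

INPUT_RATE = 1.6   # USD per 1M input tokens
OUTPUT_RATE = 6.4  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one o3 request."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# e.g. a 100k-token prompt with a 20k-token answer:
print(round(estimate_cost(100_000, 20_000), 3))  # -> 0.288
```

Note that output tokens cost four times as much as input tokens, so long reasoning-heavy answers dominate the bill for this model class.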

Sample code and API for o3

Access comprehensive sample code and API resources for o3 to streamline your integration process. Our detailed documentation provides step-by-step guidance, helping you leverage the full potential of o3 in your projects.

Versions of o3

o3 has multiple snapshots for several possible reasons: updates can change model output, so older snapshots are retained for consistency; snapshots give developers a transition period to adapt and migrate; and different snapshots may correspond to global or regional endpoints to optimize the user experience. For detailed differences between versions, refer to the official documentation.
Version
o3-mini-2025-01-31-low
o3-mini-2025-01-31-medium
o3-mini-2025-01-31
o3-mini-low
o3-pro
o3-pro-2025-06-10
o3-mini-2025-01-31-high
o3-mini
o3-mini-high
o3-mini-medium
o3-mini-all
o3
o3-2025-04-16

More Models


Nano Banana 2

Input:$0.4/M
Output:$2.4/M
Core features at a glance: Resolution: up to 4K (4096×4096), on par with Pro. Reference-image consistency: up to 14 reference images (10 objects + 4 characters) while maintaining style/character consistency. Extreme aspect ratios: 1:4, 4:1, 1:8, and 8:1 newly added, ideal for tall images, posters, and banners. Text rendering: advanced text generation, well suited to infographic and marketing-poster layouts. Enhanced search: integrates Google Search + Image Search. Grounding: built-in thinking process; complex prompts are reasoned through before generation.

Claude Opus 4.6

Input:$4/M
Output:$20/M
Claude Opus 4.6 is Anthropic's "Opus"-class large language model, released in February 2026. Positioned as a workhorse for knowledge work and research workflows, it strengthens long-context reasoning, multi-step planning, tool use (including agentic software workflows), and computer-use tasks such as automatic generation of slides and spreadsheets.

Claude Sonnet 4.6

Input:$2.4/M
Output:$12/M
Claude Sonnet 4.6 is the most capable Sonnet model to date. It upgrades the model's skills across the board in coding, computer use, long-context reasoning, agent planning, knowledge work, and design. Sonnet 4.6 also offers a 1M-token context window in beta.

GPT-5.4 nano

Input:$0.16/M
Output:$1/M
GPT-5.4 nano is designed for tasks where speed and cost matter most, such as classification, data extraction, ranking, and sub-agents.

GPT-5.4 mini

Input:$0.6/M
Output:$3.6/M
GPT-5.4 mini brings the strengths of GPT-5.4 to a faster, more efficient model built for high-throughput workloads.
Claude Mythos Preview

Coming soon
Input:$60/M
Output:$240/M
Claude Mythos Preview is our most capable frontier model to date, showing markedly improved scores across many evaluation benchmarks compared with the previous frontier model, Claude Opus 4.6.

Related Blog

What is HappyHorse-1.0? How to Compare Seedance 2.0?
Apr 11, 2026

Learn what HappyHorse-1.0 is, why it hit the top of the Artificial Analysis video leaderboard, how it compares with Seedance 2.0, and what the latest rankings mean for AI video generation.
What is Google Veo 3.1 Lite
Apr 1, 2026

What is Veo 3.1 Lite? Veo 3.1 Lite is Google’s newest cost-efficient video generation model for developers, released on March 31, 2026. It supports text-to-video and image-to-video, outputs video with audio, and is designed for high-volume applications. Google says it costs less than half of Veo 3.1 Fast while keeping the same speed, with 16:9 and 9:16 output formats and 720p/1080p resolution support.
How to Get Grok Imagine for Free: Access, Pricing, and Alternatives
Mar 25, 2026

Grok Imagine Video is not free on official xAI/Grok platforms as of March 2026 (free tier removed due to high demand and misuse concerns), but you can access it affordably — or with free starter credits — via third-party aggregators like CometAPI. CometAPI offers the model at just $0.04 per second (480p), with new users often receiving $1–$5 in free credits upon signup.
What is Seedance 2.0? A Comprehensive Analysis
Mar 24, 2026

Seedance 2.0 is a next-generation multimodal AI video generation model developed by ByteDance that can generate high-quality, cinematic videos from text, images, audio, and reference videos. It features audio-video joint generation, motion stability, and reference-based editing, and has rapidly climbed global benchmarks like the Artificial Analysis leaderboard, positioning itself among the top AI video models in 2026.
How to edit videos via veo 3.1
Mar 5, 2026

Google publicly introduced Veo 3.1 (and a Veo 3.1 Fast variant) in mid-October 2025 as an improved text-to-video model that produces higher-fidelity short