
o1-preview-2024-09-12

Input:$12/M
Output:$48/M
Commercial Use

Technical Specifications of o1-preview-2024-09-12

Specification          Details
Model ID               o1-preview-2024-09-12
Provider               OpenAI
Model family           o-series reasoning models
Release date           September 12, 2024
Availability status    Deprecated snapshot listed in OpenAI's model docs
Modality               Text input, text output (image input arrived with the later production o1 release)
Context window         128,000 tokens
Max output tokens      32,768 tokens
Reasoning profile      Deepest reasoning, slowest speed in the o1 family overview
Pricing reference      OpenAI listed o1-preview at $15 input / $60 output per 1M tokens. Because o1-preview-2024-09-12 is deprecated, verify current billing behavior in your provider dashboard before production use.
Feature support        OpenAI's current o1 family page shows streaming and function calling as supported, but the original launch post noted that the preview API initially shipped without function calling, streaming, system messages, and some other features, so support depended on the release stage and integration surface.

What is o1-preview-2024-09-12?

o1-preview-2024-09-12 is the September 12, 2024 preview snapshot of OpenAI’s early o-series reasoning model. OpenAI introduced o1-preview as a model designed to “spend more time thinking” before responding, with a focus on harder multi-step tasks in science, coding, and math.

Unlike general-purpose chat models optimized primarily for speed and broad multimodal interaction, o1-preview was positioned as an early reasoning-focused release. OpenAI stated that these models were trained with reinforcement learning to perform complex reasoning and to refine strategies before producing an answer.

This exact identifier, o1-preview-2024-09-12, refers to the dated preview snapshot rather than the later production o1 release. OpenAI’s current model documentation explicitly marks o1-preview-2024-09-12 as deprecated, so it is best understood as a historical preview checkpoint that helped establish the o-series reasoning line.

Main features of o1-preview-2024-09-12

  • Advanced reasoning focus: OpenAI introduced o1-preview specifically for complex, multi-step reasoning, especially in science, mathematics, and coding workloads.
  • Long internal deliberation: OpenAI describes o1 models as thinking before answering and producing a long internal chain of thought prior to the visible response, which is a defining trait of the reasoning series.
  • Large context capacity: The o1-preview documentation lists a 128,000-token context window and up to 32,768 output tokens, enough for long prompts and extensive generated outputs (the later production o1 raised these limits to 200,000 and 100,000).
  • Text-first interface: The preview snapshot accepted text input and produced text output; image input was added with the later production o1 release.
  • Research-preview positioning: At launch, OpenAI described o1-preview as an early preview and said it expected regular updates and improvements, so it was intended partly for experimentation and prototyping rather than final long-term standardization.
  • Strong safety emphasis: OpenAI paired the launch with a dedicated system card and highlighted stronger performance on jailbreak resistance testing compared with GPT-4o in its announcement.
  • Historically limited early API feature set: When first launched, the preview API did not include function calling, streaming, support for system messages, and certain other capabilities, which is important context for teams integrating older snapshots.
  • Deprecated snapshot status: OpenAI now lists o1-preview-2024-09-12 as deprecated, so teams should treat it as a legacy model identifier on aggregator platforms and verify present-day availability before depending on it in production.

How to access and integrate o1-preview-2024-09-12

Step 1: Sign Up for API Key

To start using o1-preview-2024-09-12, first create an account on CometAPI and generate your API key from the dashboard. After logging in, store your API key securely and use it to authenticate every request to the API.
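Keys are usually supplied through an environment variable rather than hard-coded. A minimal Python sketch of reading the key safely (COMETAPI_API_KEY is the variable name assumed by the curl example below; the placeholder value here is illustrative only):

```python
import os

def get_api_key(env_var: str = "COMETAPI_API_KEY") -> str:
    """Read the API key from the environment, failing fast if it is missing."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before calling the API.")
    return key

# Demo only: provide a placeholder so the sketch runs without a real key.
os.environ.setdefault("COMETAPI_API_KEY", "sk-example-not-a-real-key")
print(get_api_key())
```

Failing fast on a missing key gives a clearer error than an authentication failure deep inside a request call.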

Step 2: Send Requests to o1-preview-2024-09-12 API

Once you have your API key, you can call the model through CometAPI using the standard OpenAI-compatible API format. Set the model field to o1-preview-2024-09-12. Note that early o1 preview snapshots rejected system messages and non-default temperature values, and used the max_completion_tokens parameter instead of max_tokens, so keep requests close to the minimal chat-completions shape shown below.

curl https://api.cometapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -d '{
    "model": "o1-preview-2024-09-12",
    "messages": [
      {
        "role": "user",
        "content": "Explain the main strengths of this model."
      }
    ]
  }'
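The same request can be assembled in Python. This sketch mirrors the curl example against the same CometAPI endpoint; the network call itself is left commented out so the snippet runs without credentials:

```python
import json
import os
import urllib.request

API_URL = "https://api.cometapi.com/v1/chat/completions"  # endpoint from the curl example

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble the same POST request the curl example sends."""
    payload = {
        "model": "o1-preview-2024-09-12",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_request(
    "Explain the main strengths of this model.",
    os.environ.get("COMETAPI_API_KEY", "sk-test"),
)
# To actually send it: response = urllib.request.urlopen(req)
print(json.loads(req.data)["model"])
```

Using only the standard library keeps the sketch dependency-free; swapping in the official openai client with a custom base_url is an equally common pattern.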

Step 3: Retrieve and Verify Results

After sending your request, CometAPI will return the model’s generated response in JSON format. You can parse the response text from the first choice and then validate output quality, latency, and structured behavior in your application. Because o1-preview-2024-09-12 corresponds to a deprecated preview snapshot in OpenAI’s model history, it is a good idea to verify current availability and behavior in your CometAPI workspace before deploying it in production workflows.
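Extracting the text from the first choice looks like this. The response dict below is an illustrative stand-in for the JSON the API returns, with made-up values:

```python
# Illustrative chat-completion response shape (values are made up).
sample_response = {
    "choices": [
        {"message": {"role": "assistant",
                     "content": "o1-preview focuses on multi-step reasoning."}}
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 9, "total_tokens": 21},
}

def extract_text(response: dict) -> str:
    """Pull the generated text from the first choice of a chat-completion response."""
    return response["choices"][0]["message"]["content"]

print(extract_text(sample_response))
```

The usage block is worth logging alongside the text, since reasoning models can consume many more completion tokens than the visible answer suggests.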

Features for o1-preview-2024-09-12

Explore the key features of o1-preview-2024-09-12, designed to enhance performance and usability. Discover how these capabilities can benefit your projects and improve user experience.

Pricing for o1-preview-2024-09-12

Explore competitive pricing for o1-preview-2024-09-12, designed to fit various budgets and usage needs. Our flexible plans ensure you only pay for what you use, making it easy to scale as your requirements grow. Discover how o1-preview-2024-09-12 can enhance your projects while keeping costs manageable.
          Comet Price (USD / M Tokens)   Official Price (USD / M Tokens)   Discount
Input     $12                            $15                               -20%
Output    $48                            $60                               -20%
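With per-million-token rates, the cost of a request is a simple linear combination. A minimal sketch using the CometAPI rates above (the token counts are made-up examples):

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  input_per_m: float = 12.0, output_per_m: float = 48.0) -> float:
    """Estimate request cost in USD from per-million-token rates."""
    return (prompt_tokens / 1_000_000 * input_per_m
            + completion_tokens / 1_000_000 * output_per_m)

# 50k prompt tokens and 10k completion tokens: $0.60 + $0.48
print(round(estimate_cost(50_000, 10_000), 2))
```

Remember that reasoning models bill hidden reasoning tokens as output, so completion_tokens from the usage block, not the visible answer length, is the number to feed in.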

Sample code and API for o1-preview-2024-09-12

Access comprehensive sample code and API resources for o1-preview-2024-09-12 to streamline your integration process. Our detailed documentation provides step-by-step guidance, helping you leverage the full potential of o1-preview-2024-09-12 in your projects.

More Models


Nano Banana 2

Input:$0.4/M
Output:$2.4/M
Core capabilities overview:
  • Resolution: up to 4K (4096×4096), on par with Pro.
  • Reference image consistency: up to 14 reference images (10 objects + 4 characters), maintaining style/character consistency.
  • Extreme aspect ratios: new 1:4, 4:1, 1:8, 8:1 ratios, suited to long images, posters, and banners.
  • Text rendering: advanced text generation for infographics and marketing poster layouts.
  • Search enhancement: integrated Google Search + Image Search.
  • Grounding: built-in thinking process; complex prompts are reasoned through before generation.

Claude Opus 4.6

Input:$4/M
Output:$20/M
Claude Opus 4.6 is Anthropic’s “Opus”-class large language model, released February 2026. It is positioned as a workhorse for knowledge-work and research workflows — improving long-context reasoning, multi-step planning, tool use (including agentic software workflows), and computer-use tasks such as automated slide and spreadsheet generation.

Claude Sonnet 4.6

Input:$2.4/M
Output:$12/M
Claude Sonnet 4.6 is Anthropic’s most capable Sonnet model yet: a full upgrade across coding, computer use, long-context reasoning, agent planning, knowledge work, and design. Sonnet 4.6 also offers a 1M-token context window in beta.

GPT-5.4 nano

Input:$0.16/M
Output:$1/M
GPT-5.4 nano is designed for tasks where speed and cost matter most like classification, data extraction, ranking, and sub-agents.

GPT-5.4 mini

Input:$0.6/M
Output:$3.6/M
GPT-5.4 mini brings the strengths of GPT-5.4 to a faster, more efficient model designed for high-volume workloads.
Claude Mythos Preview

Coming soon
Input:$60/M
Output:$240/M
Claude Mythos Preview is Anthropic’s most capable frontier model to date, showing a striking leap on many evaluation benchmarks compared with the previous frontier model, Claude Opus 4.6.