© 2025 CometAPI. All rights reserved.

o1-2024-12-17

Input:$12/M
Output:$48/M
Commercial Use

Technical Specifications of o1-2024-12-17

Model ID: o1-2024-12-17
Provider / family: OpenAI o1 reasoning model family
Model type: Frontier reasoning large language model optimized for complex problem-solving, coding, math, science, and multi-step analysis
Release snapshot: o1-2024-12-17 is the dated snapshot released by OpenAI in December 2024
Input modalities: Text and image
Output modalities: Text
Context window: 200K tokens
Max output: Up to 100K output tokens per request
Performance profile: Slower than lighter models, but designed for deeper reasoning and higher-quality answers on hard tasks
Reasoning controls: Supports reasoning_effort, so developers can tune how long the model thinks before answering
Prompting behavior: With o1 models and newer, developer messages replace the older system-message style of guidance; starting with o1-2024-12-17, markdown is suppressed by default unless explicitly re-enabled in the developer message

What is o1-2024-12-17?

o1-2024-12-17 is CometAPI’s platform identifier for OpenAI’s December 17, 2024 snapshot of the o1 reasoning model. It belongs to the o1 series, which OpenAI describes as models trained with reinforcement learning to perform complex reasoning and to “think before they answer.”

Compared with conventional chat-oriented models, o1-2024-12-17 is aimed at tasks where correctness, multi-step logic, and careful analysis matter more than raw speed. OpenAI positions o1 as a full reasoning model for advanced use cases, with support for text and image inputs and text outputs.

This particular snapshot was introduced as an updated post-trained version of o1, improving model behavior based on feedback while preserving the frontier-level reasoning capabilities evaluated for the o1 family. OpenAI also reported lower latency relative to o1-preview, using on average 60% fewer reasoning tokens for a given request.

Main features of o1-2024-12-17

  • Advanced reasoning: Built for hard problems that require step-by-step thinking, including mathematics, science, logic, and difficult coding workflows.
  • Vision input support: Can reason over image inputs in addition to text, which is useful for visual analysis, diagrams, scientific workflows, and technical problem-solving.
  • Long context handling: Supports a 200K-token context window, making it suitable for large documents, long conversations, and multi-file reasoning tasks.
  • Large response capacity: Can generate up to 100K output tokens in a single request, which helps for detailed reports, long-form reasoning, or substantial code generation.
  • Adjustable reasoning depth: The reasoning_effort parameter lets developers trade off latency and depth of reasoning based on the needs of the application.
  • Improved efficiency vs. preview: OpenAI states that o1 uses on average 60% fewer reasoning tokens than o1-preview for a given request, improving practical efficiency.
  • Developer-message-first prompting: For o1 models and newer, developer messages are the preferred mechanism for high-level behavioral instructions, replacing the older system-message pattern.
  • Default plain-text behavior: Starting with o1-2024-12-17, API responses avoid markdown formatting by default unless you explicitly re-enable it in the developer message.
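
The reasoning-depth and formatting controls above can be illustrated with a minimal request body. This is a sketch only: the prompt text is made up, and it assumes CometAPI forwards the reasoning_effort parameter to the model unchanged.

```python
import json

# Minimal chat-completions request body for o1-2024-12-17.
# "reasoning_effort" accepts "low", "medium", or "high" and trades latency
# against depth of reasoning; "Formatting re-enabled." opts this snapshot
# back into markdown output, which is suppressed by default.
payload = {
    "model": "o1-2024-12-17",
    "reasoning_effort": "high",
    "messages": [
        {
            "role": "developer",
            "content": "Formatting re-enabled. You are a careful math tutor.",
        },
        {
            "role": "user",
            "content": "Why does the sum of the first n odd numbers equal n^2?",
        },
    ],
}

print(json.dumps(payload)[:60])
```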

How to access and integrate o1-2024-12-17

Step 1: Sign Up for API Key

To use o1-2024-12-17, first create an account on CometAPI and generate your API key from the dashboard. After that, store the key securely as an environment variable in your application so you can authenticate requests without hard-coding secrets in source files.
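
As a small sketch, reading the key from the environment at startup might look like this; the variable name COMETAPI_API_KEY matches the shell variable used in the curl example, but any name works as long as the key stays out of source control.

```python
import os

# Read the key from the environment instead of hard-coding it.
api_key = os.environ.get("COMETAPI_API_KEY", "")
if not api_key:
    # Surface the problem early rather than sending unauthenticated requests.
    print("warning: COMETAPI_API_KEY is not set")
```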

Step 2: Send Requests to o1-2024-12-17 API

Once your API key is ready, send requests through CometAPI’s OpenAI-compatible endpoint and set the model field to o1-2024-12-17.

curl https://api.cometapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -d '{
    "model": "o1-2024-12-17",
    "messages": [
      {
        "role": "developer",
        "content": "You are a precise reasoning assistant. Formatting re-enabled."
      },
      {
        "role": "user",
        "content": "Analyze the trade-offs between recursive descent and Pratt parsers."
      }
    ]
  }'

You can also integrate it from any OpenAI-compatible SDK by replacing the base URL with CometAPI’s endpoint and keeping o1-2024-12-17 as the target model ID.
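
If you would rather not pull in an SDK at all, the same request can be assembled with the Python standard library. This sketch mirrors the curl command above (the prompt text is illustrative) and leaves the actual network call commented out:

```python
import json
import os
import urllib.request

# Build the same POST request as the curl example, without sending it.
body = json.dumps({
    "model": "o1-2024-12-17",
    "messages": [
        {"role": "developer", "content": "You are a precise reasoning assistant."},
        {"role": "user", "content": "Compare BFS and DFS for cycle detection."},
    ],
}).encode("utf-8")

req = urllib.request.Request(
    "https://api.cometapi.com/v1/chat/completions",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + os.environ.get("COMETAPI_API_KEY", ""),
    },
    method="POST",
)

# To actually call the API:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```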

Step 3: Retrieve and Verify Results

After submitting the request, parse the response JSON and read the generated assistant output from the returned choices or message content fields, depending on the SDK or endpoint you use. For production use, you should also verify outputs with application-level checks such as schema validation, test cases, citation workflows, or human review when correctness is critical.
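
As a sketch, parsing and sanity-checking a response might look like the following. The JSON here is a hand-written stand-in for a real chat-completions body, which also carries usage counts, ids, and timestamps:

```python
import json

# Hand-written stand-in for a chat-completions response body.
raw = """
{
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Recursive descent favors readability..."},
      "finish_reason": "stop"
    }
  ]
}
"""

data = json.loads(raw)

# Application-level checks: at least one choice, non-empty content, and a
# normal stop reason (a "length" finish means the answer was truncated).
choices = data.get("choices", [])
assert choices, "no choices returned"
answer = choices[0]["message"]["content"]
assert answer.strip(), "empty completion"
if choices[0].get("finish_reason") != "stop":
    print("warning: completion may be truncated")

print(answer)
```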

Features for o1-2024-12-17

Explore the key features of o1-2024-12-17, designed to enhance performance and usability. Discover how these capabilities can benefit your projects and improve user experience.

Pricing for o1-2024-12-17

Explore competitive pricing for o1-2024-12-17, designed to fit various budgets and usage needs. Our flexible plans ensure you only pay for what you use, making it easy to scale as your requirements grow. Discover how o1-2024-12-17 can enhance your projects while keeping costs manageable.
Comet Price (USD / M tokens): Input $12/M · Output $48/M
Official Price (USD / M tokens): Input $15/M · Output $60/M
Discount: -20%
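
At these rates, per-request cost is simple arithmetic. Note that OpenAI bills o1's hidden reasoning tokens as output tokens, so billable output usually exceeds the visible answer text. A minimal sketch:

```python
# CometAPI rates for o1-2024-12-17, from the pricing table above.
INPUT_PER_M = 12.0    # USD per 1M input tokens
OUTPUT_PER_M = 48.0   # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at these rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# 10K prompt tokens plus 5K billable output tokens:
print(round(estimate_cost(10_000, 5_000), 2))  # prints 0.36
```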

Sample code and API for o1-2024-12-17

Access comprehensive sample code and API resources for o1-2024-12-17 to streamline your integration process. Our detailed documentation provides step-by-step guidance, helping you leverage the full potential of o1-2024-12-17 in your projects.

More Models


Nano Banana 2

Input:$0.4/M
Output:$2.4/M
Core capabilities:
  • Resolution: up to 4K (4096×4096), on par with Pro.
  • Reference image consistency: up to 14 reference images (10 objects + 4 characters), maintaining style/character consistency.
  • Extreme aspect ratios: new 1:4, 4:1, 1:8, and 8:1 ratios, suitable for long images, posters, and banners.
  • Text rendering: advanced text generation, suitable for infographics and marketing poster layouts.
  • Search enhancement: integrated Google Search + Image Search.
  • Grounding: built-in thinking process; complex prompts are reasoned through before generation.

Claude Opus 4.6

Input:$4/M
Output:$20/M
Claude Opus 4.6 is Anthropic’s “Opus”-class large language model, released February 2026. It is positioned as a workhorse for knowledge-work and research workflows — improving long-context reasoning, multi-step planning, tool use (including agentic software workflows), and computer-use tasks such as automated slide and spreadsheet generation.

Claude Sonnet 4.6

Input:$2.4/M
Output:$12/M
Claude Sonnet 4.6 is Anthropic’s most capable Sonnet model yet: a full upgrade of the model’s skills across coding, computer use, long-context reasoning, agent planning, knowledge work, and design. Sonnet 4.6 also offers a 1M-token context window in beta.

GPT-5.4 nano

Input:$0.16/M
Output:$1/M
GPT-5.4 nano is designed for tasks where speed and cost matter most, such as classification, data extraction, ranking, and sub-agents.

GPT-5.4 mini

Input:$0.6/M
Output:$3.6/M
GPT-5.4 mini brings the strengths of GPT-5.4 to a faster, more efficient model designed for high-volume workloads.
Claude Mythos Preview

Coming soon
Input:$60/M
Output:$240/M
Claude Mythos Preview is Anthropic’s most capable frontier model to date, showing a striking leap on many evaluation benchmarks compared with the previous frontier model, Claude Opus 4.6.