© 2025 CometAPI. All rights reserved.

GPT 5 Codex

Input:$1/M
Output:$8/M
Context:400K
Max Output:128K
GPT-5-Codex is a high-performance large language model focused on code generation and understanding, with enhanced capabilities for complex programming tasks, code reasoning, and production-level applications.

What is GPT-5-Codex?

GPT-5-Codex is a specialized variant of OpenAI’s GPT-5 family designed for complex software engineering workflows: coding, large-scale refactoring, long multi-step agentic tasks, and extended autonomous runs inside the Codex environment (CLI, IDE extension, and cloud). It is positioned as the default model for OpenAI’s Codex product and is accessible via the Responses API and Codex subscriptions.

Key features

  • Agentic optimization — tuned to run inside agent loops and tool-driven workflows (better consistency when using tools/CLIs). Agentic and tool usage are first-class.
  • Code quality focus — produces cleaner, more steerable code for refactoring, review, and long-running development tasks.
  • IDE & product integration — integrated into developer products (e.g., GitHub Copilot preview rollouts) and OpenAI’s Codex SDK/CLI.
  • Responses API only — uses the newer Responses API pattern (token reuse, agent loop support) for best results; legacy Completion calls can underperform on Codex tasks.
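The tool-first design described above can be sketched as a request payload. The following is a minimal illustration, not an official example: the `run_tests` tool name and its schema are hypothetical, and the flat tool definition follows the Responses API shape (verify against the current API docs before relying on it):

```python
# Sketch: a /v1/responses payload that exposes one function tool, since
# GPT-5-Codex is tuned for tool-driven agent loops. `run_tests` is an
# illustrative assumption, not a built-in tool.

def build_codex_request(prompt: str) -> dict:
    """Build a Responses API payload with a single function tool attached."""
    return {
        "model": "gpt-5-codex",
        "input": prompt,
        "tools": [
            {
                # The Responses API uses a flat tool definition
                # (name/description/parameters at the top level).
                "type": "function",
                "name": "run_tests",
                "description": "Run the project's test suite and return the report.",
                "parameters": {
                    "type": "object",
                    "properties": {"path": {"type": "string"}},
                    "required": ["path"],
                },
            }
        ],
    }

payload = build_codex_request("Fix the failing test in utils/parse.py")
print(payload["model"])             # gpt-5-codex
print(payload["tools"][0]["name"])  # run_tests
```

In an agent loop, the model's tool calls would be executed locally and their results fed back in a follow-up request.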

Technical details — training & architecture

  • Base lineage: GPT-5-Codex is a derivative of GPT-5, built by further tuning the GPT-5 snapshot for coding tasks and agent behaviors. Model internals (exact parameter count, training compute) are not publicly enumerated; OpenAI publishes capabilities and tuning approach rather than raw parameter counts.
  • Training focus: emphasis on real-world software engineering corpora, interactive agent traces, tool-use trajectories, and instruction tuning to improve steerability and long-horizon correctness.
  • Tool & agent loop tuning: prompt and tool definitions were adjusted so the Codex agent loop runs faster and yields more accurate multi-step outcomes when compared to a vanilla GPT-5 in comparable setups.

Benchmark performance

Public benchmarking from independent reviewers and aggregator sites shows GPT-5-Codex leading or near-leading on modern coding benchmarks:

  • SWE-Bench (real-world coding tasks): a third-party review reports roughly 77% success on a 500-task suite, slightly above the general-purpose GPT-5 (high) baseline in the same review.
  • LiveCodeBench / other code benchmarks: aggregator sites report high relative performance (examples include LiveCodeBench scores in the mid-80s for certain tasks).

Model versioning & availability

Availability channels: Responses API (model id gpt-5-codex).

Three reasoning-effort variants, all specialized for coding and software engineering, are also available:

  • gpt-5-codex-low
  • gpt-5-codex-medium
  • gpt-5-codex-high

All variants support the /v1/responses call format.
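Selecting a variant is just a matter of passing the right model id. A small illustrative helper (not an SDK feature) that maps an effort level to a model id:

```python
# Illustrative helper: pick a gpt-5-codex variant by desired reasoning effort.
VARIANTS = {
    "low": "gpt-5-codex-low",
    "medium": "gpt-5-codex-medium",
    "high": "gpt-5-codex-high",
}

def codex_model(effort: str = "medium") -> str:
    """Return the model id for a given effort level (defaults to medium)."""
    try:
        return VARIANTS[effort]
    except KeyError:
        raise ValueError(
            f"unknown effort {effort!r}; choose one of {sorted(VARIANTS)}"
        )

print(codex_model("high"))  # gpt-5-codex-high
```

Higher effort generally trades latency and cost for more thorough multi-step reasoning.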

Limitations

  • Latency & compute: agentic workflows can be compute-intensive and sometimes slower than lighter models, particularly when the model runs test suites or performs extensive static analysis.
  • Hallucination & overconfidence: despite improvements, GPT-5-Codex can still hallucinate APIs, file paths, or test coverage—users must validate generated code and CI outputs.
  • Context length & state: while the model is tuned for longer sessions, it remains bounded by practical context/attention limits; extremely large codebases require chunking, retrieval augmentation, or tool-assisted memory.
  • Safety & security: automated code changes can introduce security regressions or license violations; human oversight and secure CI gating are mandatory.
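The chunking workaround mentioned under "Context length & state" can be sketched as follows. This is a minimal, assumption-laden illustration: the 4-characters-per-token budget is a rough heuristic, not an exact tokenizer, and the overlap size is arbitrary:

```python
# Minimal sketch: split a large source file into overlapping line-based
# chunks that each fit a rough token budget, so oversized codebases can be
# fed to the model piecewise (or via retrieval augmentation).

def chunk_source(text: str, max_tokens: int = 2000, overlap_lines: int = 10) -> list[str]:
    lines = text.splitlines()
    budget = max_tokens * 4  # rough chars-per-token heuristic
    chunks, start = [], 0
    while start < len(lines):
        size, end = 0, start
        # Greedily add whole lines while they fit the character budget.
        while end < len(lines) and size + len(lines[end]) + 1 <= budget:
            size += len(lines[end]) + 1
            end += 1
        end = max(end, start + 1)  # always make progress
        chunks.append("\n".join(lines[start:end]))
        if end >= len(lines):
            break
        # Overlap a few lines so context carries across chunk boundaries.
        start = max(end - overlap_lines, start + 1)
    return chunks

code = "\n".join(f"line {i}" for i in range(1000))
parts = chunk_source(code, max_tokens=200)
print(len(parts) > 1)  # True
```

A production setup would use a real tokenizer and syntax-aware splitting (e.g. by function or class) rather than raw line counts.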

Use cases

  • Automated code review — produce reviewer comments, identify regressions, and suggest fixes.
  • Feature development & refactoring — large multi-file edits with tests run by the model and CI validation.
  • Test synthesis & TDD automation — generate unit/integration tests and iterate until passing.
  • Developer assistants & agents — integrated into IDE plugins, CI pipelines, or autonomous agents to carry out complex engineering tasks.
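As one concrete instance of the "automated code review" use case, a diff can be wrapped in a review instruction before being sent as the model input. Both the prompt wording and the function below are hypothetical sketches, not an official template:

```python
# Hypothetical sketch: wrap a unified diff in a review instruction suitable
# for the `input` field of a /v1/responses request.

def review_prompt(diff: str) -> str:
    """Build a code-review prompt around a unified diff."""
    return (
        "You are reviewing a pull request. For the diff below, list "
        "regressions, bugs, and suggested fixes as reviewer comments.\n\n"
        "DIFF:\n" + diff
    )

diff = "--- a/app.py\n+++ b/app.py\n@@ -1 +1 @@\n-x = 1\n+x = 2\n"
prompt = review_prompt(diff)
print(prompt.splitlines()[0])
```

The resulting string would be passed as `input` to `client.responses.create` in the same way as the samples further down this page.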

How to use GPT-5 Codex API

Required Steps

  • Log in to cometapi.com (register first if you do not yet have an account).
  • Sign in to your CometAPI console.
  • Obtain an API key: in the personal center, click “Add Token” under API tokens to generate a key of the form sk-xxxxx.

Use Method

  1. Select the “gpt-5-codex” endpoint and set the request body. The request method and body format are documented in our API doc; an Apifox test collection is also provided for convenience.
  2. Replace <YOUR_API_KEY> with your actual CometAPI key from your account.
  3. Insert your question or request into the content field—this is what the model will respond to.
  4. Process the API response to get the generated answer.
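Step 4 above ("process the API response") can be sketched against the raw JSON shape returned by /v1/responses. The sample payload below is a hand-written assumption of that shape, not captured server output; with the official SDK, the `response.output_text` convenience property serves the same purpose:

```python
# Sketch: pull the generated text out of a Responses API result dict by
# walking its `output` items and collecting `output_text` content parts.

def extract_text(resp: dict) -> str:
    """Concatenate all output_text parts from a Responses API result."""
    parts = []
    for item in resp.get("output", []):
        if item.get("type") == "message":
            for content in item.get("content", []):
                if content.get("type") == "output_text":
                    parts.append(content.get("text", ""))
    return "".join(parts)

# Hand-written sample mimicking the response shape (an assumption, not
# real server output).
sample = {
    "output": [
        {
            "type": "message",
            "content": [{"type": "output_text", "text": "Once upon a time..."}],
        }
    ]
}
print(extract_text(sample))  # Once upon a time...
```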

CometAPI provides a fully compatible REST API for seamless migration; note that GPT-5-Codex is called via the Responses endpoint.

See also GPT-5.1 API and GPT-5.1-Chat-latest API

Features for GPT 5 Codex

Explore the key features of GPT 5 Codex, designed to enhance performance and usability. Discover how these capabilities can benefit your projects and improve user experience.

Pricing for GPT 5 Codex

Explore competitive pricing for GPT 5 Codex, designed to fit various budgets and usage needs. Our flexible plans ensure you only pay for what you use, making it easy to scale as your requirements grow. Discover how GPT 5 Codex can enhance your projects while keeping costs manageable.
Comet Price (USD / M Tokens): Input $1 / Output $8
Official Price (USD / M Tokens): Input $1.25 / Output $10
Discount: -20%

Sample code and API for GPT 5 Codex

Access comprehensive sample code and API resources for GPT 5 Codex to streamline your integration process. Our detailed documentation provides step-by-step guidance, helping you leverage the full potential of GPT 5 Codex in your projects.
POST
/v1/responses

Python Code Example

from openai import OpenAI
import os

# Get your CometAPI key from https://api.cometapi.com/console/token, and paste it here
COMETAPI_KEY = os.environ.get("COMETAPI_KEY") or "<YOUR_COMETAPI_KEY>"
BASE_URL = "https://api.cometapi.com/v1"

client = OpenAI(base_url=BASE_URL, api_key=COMETAPI_KEY)
response = client.responses.create(
    model="gpt-5-codex", input="Tell me a three sentence bedtime story about a unicorn."
)

print(response)

JavaScript Code Example

import OpenAI from "openai";

// Get your CometAPI key from https://api.cometapi.com/console/token, and paste it here
const api_key = process.env.COMETAPI_KEY;
const base_url = "https://api.cometapi.com/v1";

const openai = new OpenAI({
  apiKey: api_key,
  baseURL: base_url,
});

const response = await openai.responses.create({
  model: "gpt-5-codex",
  input: "Tell me a three sentence bedtime story about a unicorn.",
});

console.log(response);

Curl Code Example

curl https://api.cometapi.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_KEY" \
  -d '{
    "model": "gpt-5-codex",
    "input": "Tell me a three sentence bedtime story about a unicorn."
  }'

Versions of GPT 5 Codex

GPT-5-Codex ships as multiple snapshots for several possible reasons: outputs can change after updates, so older snapshots preserve consistency; developers get a transition period for adaptation and migration; and different snapshots may correspond to global or regional endpoints to optimize user experience. For detailed differences between versions, please refer to the official documentation.
Versions:

  • gpt-5-codex
  • gpt-5-codex-high
  • gpt-5-codex-low
  • gpt-5-codex-medium
