
GPT 5.3 Codex

Input:$1.4/M
Output:$11.2/M
Context:400,000
Max Output:128,000
GPT-5.3-Codex is optimized for agentic coding tasks in Codex or similar environments. GPT-5.3-Codex supports low, medium, high, and xhigh reasoning effort settings.
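Since the model exposes low, medium, high, and xhigh effort levels, a request can pin one explicitly. A minimal sketch, assuming the OpenAI-style `reasoning.effort` parameter is forwarded by CometAPI; the `build_request` helper and prompt are illustrative:

```python
import os

# Sketch: selecting a reasoning-effort level for gpt-5.3-codex via the
# OpenAI-compatible Responses API. The "reasoning.effort" field follows
# OpenAI's Responses API convention; confirm CometAPI forwards it.
EFFORT_LEVELS = ("low", "medium", "high", "xhigh")

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Build a Responses API payload with a reasoning-effort setting."""
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"effort must be one of {EFFORT_LEVELS}")
    return {
        "model": "gpt-5.3-codex",
        "input": prompt,
        "reasoning": {"effort": effort},
    }

payload = build_request("Refactor this module for readability.", effort="xhigh")

# Send the request only when a key is configured.
if os.environ.get("COMETAPI_KEY"):
    from openai import OpenAI  # requires the openai SDK
    client = OpenAI(base_url="https://api.cometapi.com/v1",
                    api_key=os.environ["COMETAPI_KEY"])
    print(client.responses.create(**payload).output_text)
```

Higher effort levels generally trade latency for deeper reasoning, so xhigh suits long refactors while low suits quick completions.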

Technical specifications of GPT-5.3 Codex

GPT-5.3 Codex (public specs)

Model family: GPT-5.3 (Codex variant, agentic coding optimized)
Input types: Text, code, tool/terminal context, (limited) vision via Codex app interfaces
Output types: Text (natural language, code, patches, shell commands), structured logs, test results
Long-context handling: Compaction triggers every 100,000 tokens during long sessions (reported in system card)
Release / publication date: February 5, 2026 (OpenAI announcement and system card)

What is GPT-5.3 Codex

GPT-5.3 Codex is OpenAI’s flagship agentic coding model, tuned for long-horizon software engineering, tool-driven workflows, and high-fidelity security research and defensive cybersecurity work. It combines GPT-5.2 Codex’s coding strengths with improved reasoning, more reliable long-running task execution, and additional safety controls tailored to cyber and dual-use domains.

Main Features of GPT-5.3 Codex

🧪 Frontier Coding Capabilities

  • State-of-the-art results on industry coding benchmarks such as SWE-Bench Pro and Terminal-Bench 2.0, with higher token efficiency and broader programming-language coverage.
  • Designed for complex development workflows like multi-day builds, tests, refactoring, deployment, and debugging.

🛠️ Professional Workflow Integration

  • Executes end-to-end tasks involving research, tool invocation, and multi-step execution, such as building web games, desktop apps, data analyses, and more.
  • Web development improvements: more sensible default outputs for common coding prompts, plus automated UX enhancements in generated code.

📊 Broad Domain Work

  • Matches GPT-5.2 on knowledge-work benchmarks like GDPval, which measures professional productivity tasks across 44 occupations.
  • Shows strong desktop-computer use on OSWorld-Verified, a benchmark of visual desktop tasks, with performance approaching human baselines.

🔐 Cybersecurity Readiness

  • The first Codex model classified as High capability on cybersecurity tasks under OpenAI’s Preparedness Framework.

Benchmark Performance (Selected Metrics)

Benchmark              | GPT-5.3 Codex | GPT-5.2 Codex | GPT-5.2
SWE-Bench Pro          | 56.8%         | 56.4%         | 55.6%
Terminal-Bench 2.0     | 77.3%         | 64.0%         | 62.2%
OSWorld-Verified       | 64.7%         | 38.2%         | 37.9%
GDPval (wins/ties)     | 70.9%         | –             | 70.9%
Cybersecurity CTF      | 77.6%         | 67.4%         | 67.7%
SWE-Lancer IC Diamond  | 81.4%         | 76.0%         | 74.6%

Benchmarks show GPT-5.3 Codex outperforming previous models across coding, agentic and real-world productivity tasks.

GPT-5.3 Codex vs GPT-5.2-Codex vs Competitors

Feature              | GPT-5.3-Codex    | GPT-5.2-Codex | Claude Opus 4.6
Coding Performance   | Industry-leading | High          | Moderate-High
Contextual Reasoning | Strong           | Moderate      | Strong
Long Tasks           | Excellent        | Good          | Very strong
Agentic Computer Use | Excellent        | Moderate      | Not central
Cybersecurity Tasks  | High             | Moderate      | Not prominently reported
Real-time Steering   | Yes              | Limited       | Not specified

Note on Claude Opus 4.6: launched the same day, it targets general workflows and coding with expanded context support, but it is not explicitly optimized for agentic computer use the way GPT-5.3 Codex is.

Representative enterprise use cases

Repository-scale refactorings and automated PR generation with test and validation loops.

Assisted vulnerability triage, reverse engineering, and defensive research within a Trusted Access program.

CI/CD orchestration and automated regression testing with human-in-the-loop verification.

Design → prototype workflows translating requirements into multi-file scaffolds and test harnesses.

How to access GPT-5.3 Codex API

Step 1: Sign Up for API Key

Log in to cometapi.com; if you do not have an account yet, register first. In the CometAPI console, open the API token page in your personal center, click “Add Token”, and copy the generated key (sk-xxxxx).


Step 2: Send Requests to GPT-5.3 Codex API

Select the “gpt-5.3-codex” model and send requests to the Responses endpoint (base URL https://api.cometapi.com/v1, path /v1/responses). The request method and body are documented in our API doc, and an Apifox test page is provided for convenience. Replace <YOUR_API_KEY> with your actual CometAPI key from your account.

Insert your question or request into the input field; this is what the model will respond to.

Step 3: Retrieve and Verify Results

Parse the API response to retrieve the generated answer; the response includes the task status along with the output data.
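As a sketch, a raw JSON response body can be reduced to plain text like this, assuming the OpenAI Responses API output shape; the `extract_text` helper and sample body are illustrative:

```python
import json

# Minimal sketch for extracting text from a raw Responses API JSON body.
# The "output"/"content" item shape follows OpenAI's Responses API and may
# vary by gateway, so treat it as an assumption to verify.
def extract_text(body: str) -> str:
    """Concatenate all output_text fragments from a Responses JSON body."""
    data = json.loads(body)
    parts = []
    for item in data.get("output", []):
        for content in item.get("content", []):
            if content.get("type") == "output_text":
                parts.append(content.get("text", ""))
    return "".join(parts)

# Hypothetical response body for illustration.
sample = json.dumps({
    "status": "completed",
    "output": [{"type": "message", "content": [
        {"type": "output_text", "text": "def is_palindrome(s): return s == s[::-1]"}
    ]}],
})
print(extract_text(sample))  # def is_palindrome(s): return s == s[::-1]
```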

FAQ

How does GPT-5.3-Codex differ from GPT-5.2-Codex in real-world coding tasks?

GPT-5.3-Codex improves on GPT-5.2-Codex with ~25% faster inference, stronger performance across SWE-Bench Pro, Terminal-Bench 2.0, OSWorld, and GDPval benchmarks, and deeper agentic capabilities for integrated software workflows.

What kinds of tasks is GPT-5.3-Codex optimized for?

It’s optimized for long-running, agentic coding work — including complex application builds, debugging, deployment, and research-oriented tasks within a coding lifecycle.

What technical limits (context and tokens) does GPT-5.3-Codex support?

GPT-5.3-Codex supports a 400,000 token context window and up to 128,000 max output tokens.
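These two limits interact, since input and output tokens share the context window. A quick arithmetic sketch of the budget check:

```python
# Back-of-the-envelope budget check against the published limits
# (400,000-token context window, 128,000 max output tokens). Token
# counts here are estimates; use a real tokenizer for precise numbers.
CONTEXT_WINDOW = 400_000
MAX_OUTPUT = 128_000

def fits(prompt_tokens: int, requested_output: int) -> bool:
    """True if the prompt plus requested output stays within both limits."""
    return (requested_output <= MAX_OUTPUT
            and prompt_tokens + requested_output <= CONTEXT_WINDOW)

print(fits(300_000, 100_000))  # True
print(fits(350_000, 128_000))  # False: 478,000 exceeds the 400,000 window
```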

Can developers interact with GPT-5.3-Codex while it’s running tasks?

Yes — unlike traditional single-answer models, it provides mid-task updates and allows iterative steering and feedback during execution.
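Those mid-task updates surface through streaming. A minimal sketch of collecting streamed text, assuming OpenAI-style streaming events (the event type name `response.output_text.delta` follows OpenAI's Responses API and is an assumption for CometAPI's gateway):

```python
import os

# Sketch: consuming a streamed Responses API call. The event type
# "response.output_text.delta" follows the OpenAI SDK's naming and should
# be verified against the gateway you use.
def collect_text(events) -> str:
    """Accumulate text deltas from a Responses streaming event iterator."""
    chunks = []
    for event in events:
        etype = event.get("type") if isinstance(event, dict) else getattr(event, "type", "")
        if etype == "response.output_text.delta":
            delta = event.get("delta") if isinstance(event, dict) else getattr(event, "delta", "")
            chunks.append(delta)
    return "".join(chunks)

if os.environ.get("COMETAPI_KEY"):
    from openai import OpenAI  # requires the openai SDK
    client = OpenAI(base_url="https://api.cometapi.com/v1",
                    api_key=os.environ["COMETAPI_KEY"])
    stream = client.responses.create(
        model="gpt-5.3-codex",
        input="Write a bubble sort in Python.",
        stream=True,
    )
    print(collect_text(stream))
```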

What are GPT-5.3-Codex’s benchmark strengths?

It achieves state-of-the-art results on SWE-Bench Pro and Terminal-Bench 2.0 and strong performance on OSWorld and GDPval, highlighting both coding and broader reasoning capabilities.

Does GPT-5.3-Codex support function calling and tool use?

Yes — it supports streaming, structured outputs, function calling, and tool integrations as part of its agentic tooling.
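A minimal function-calling sketch, assuming the OpenAI Responses API tool schema (flat `{"type": "function", "name": ...}` entries); the `run_tests` tool is hypothetical and stubbed locally:

```python
import json
import os

# Sketch: declaring a function tool and routing model-issued calls to a
# local implementation. The tool schema follows OpenAI's Responses API;
# run_tests is a hypothetical tool used purely for illustration.
TOOLS = [{
    "type": "function",
    "name": "run_tests",
    "description": "Run the project's test suite and return a summary.",
    "parameters": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}]

def dispatch(name: str, arguments: str) -> str:
    """Route a model-issued function call to a local implementation."""
    args = json.loads(arguments)
    if name == "run_tests":
        return f"stub: would run tests under {args['path']}"
    raise ValueError(f"unknown tool: {name}")

if os.environ.get("COMETAPI_KEY"):
    from openai import OpenAI  # requires the openai SDK
    client = OpenAI(base_url="https://api.cometapi.com/v1",
                    api_key=os.environ["COMETAPI_KEY"])
    response = client.responses.create(
        model="gpt-5.3-codex",
        input="Run the tests under ./tests and summarize any failures.",
        tools=TOOLS,
    )
    for item in response.output:
        if item.type == "function_call":
            print(dispatch(item.name, item.arguments))
```

In a full agent loop, the dispatch result would be sent back to the model as a function-call output so it can continue the task.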

Is GPT-5.3-Codex suitable for security-related tasks?

It’s the first Codex model given a ‘High Capability’ designation in cybersecurity under OpenAI’s Preparedness Framework, with enhanced safeguards for defensive research.

Features for GPT 5.3 Codex

Explore the key features of GPT 5.3 Codex, designed to enhance performance and usability. Discover how these capabilities can benefit your projects and improve user experience.

Pricing for GPT 5.3 Codex

Explore competitive pricing for GPT 5.3 Codex, designed to fit various budgets and usage needs. Our flexible plans ensure you only pay for what you use, making it easy to scale as your requirements grow. Discover how GPT 5.3 Codex can enhance your projects while keeping costs manageable.
Tokens | Comet Price (USD / M tokens) | Official Price (USD / M tokens) | Discount
Input  | $1.4                         | $1.75                           | -20%
Output | $11.2                        | $14                             | -20%

Sample code and API for GPT 5.3 Codex

Access comprehensive sample code and API resources for GPT 5.3 Codex to streamline your integration process. Our detailed documentation provides step-by-step guidance, helping you leverage the full potential of GPT 5.3 Codex in your projects.
POST /v1/responses

Python Code Example

from openai import OpenAI
import os

# Get your CometAPI key from https://api.cometapi.com/console/token, and paste it here
COMETAPI_KEY = os.environ.get("COMETAPI_KEY") or "<YOUR_COMETAPI_KEY>"
BASE_URL = "https://api.cometapi.com/v1"

client = OpenAI(base_url=BASE_URL, api_key=COMETAPI_KEY)
response = client.responses.create(
    model="gpt-5.3-codex",
    input="Write a short Python function that checks if a string is a palindrome.",
)

print(response.output_text)

JavaScript Code Example

import OpenAI from "openai";

// Get your CometAPI key from https://api.cometapi.com/console/token, and paste it here
const COMETAPI_KEY = process.env.COMETAPI_KEY || "<YOUR_COMETAPI_KEY>";
const BASE_URL = "https://api.cometapi.com/v1";

const client = new OpenAI({
  apiKey: COMETAPI_KEY,
  baseURL: BASE_URL,
});

const response = await client.responses.create({
  model: "gpt-5.3-codex",
  input: "Write a short Python function that checks if a string is a palindrome.",
});

console.log(response.output_text);

Curl Code Example

# Get your CometAPI key from https://api.cometapi.com/console/token
# Export it as: export COMETAPI_KEY="your-key-here"

curl https://api.cometapi.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_KEY" \
  -d '{
    "model": "gpt-5.3-codex",
    "input": "Write a short Python function that checks if a string is a palindrome."
  }'
