
GPT-5.2 Pro

Input: $16.80/M
Output: $134.40/M
Context: 400,000
Max output: 128,000
gpt-5.2-pro is the most capable production-grade member of OpenAI's GPT-5.2 family. It is served through the Responses API and is intended for workloads that demand the highest fidelity, multi-step reasoning, extensive tool use, and the largest context/throughput budgets OpenAI offers.

What is GPT-5.2-Pro?

GPT-5.2-Pro is the "Pro" tier of OpenAI's GPT-5.2 family, intended for the hardest problems: multi-step reasoning, complex code, large-document synthesis, and professional knowledge work. It is made available through the Responses API to enable multi-turn interactions and advanced API features (tooling, reasoning modes, compaction, and so on). The Pro variant trades throughput and cost for maximum answer quality and stronger safety and consistency in hard domains.

Main features (what gpt-5.2-pro brings to applications)

  • Highest-fidelity reasoning: Pro supports OpenAI’s top reasoning settings (including xhigh) to trade latency and compute for deeper internal reasoning passes and improved chain-of-thought-style solution refinement.
  • Large-context, long-document proficiency: engineered to maintain accuracy across very long contexts (OpenAI benchmarked family variants at up to 256k+ tokens), making the tier suitable for legal/technical document review, enterprise knowledge bases, and long-running agent state.
  • Stronger tool & agent execution: designed to call toolsets reliably (allowed-tools lists, auditing hooks, and richer tool integrations) and to act as a “mega-agent” that can orchestrate multiple subtools and multi-step workflows.
  • Improved factuality & safety mitigations: OpenAI reports notable reductions in hallucination and undesirable responses on internal safety metrics for GPT-5.2 vs prior models, supported by updates in the system card and targeted safety training.

Technical capabilities & specifications (developer-oriented)

  • API endpoint & availability: the Responses API is the recommended integration for Pro-level workflows; developers can set reasoning.effort to tune the internal compute devoted to reasoning, and Pro exposes the highest-fidelity xhigh setting.
  • Reasoning effort levels: none | medium | high | xhigh (Pro and Thinking support xhigh for quality-prioritized runs). This parameter lets you trade cost and latency for quality.
  • Compaction & context management: New compaction features allow the API to manage what the model “remembers” and reduce token usage while preserving relevant context—helpful for long conversations and document workflows.
  • Tooling & custom tools: Models can call custom tools (send raw text to tools while constraining model outputs); stronger tool-calling and agentic patterns in 5.2 reduce the need for elaborate system prompts.
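As a minimal sketch of how the reasoning.effort parameter is supplied, the following builds a Responses-style request body without sending it; the payload keys (model, input, reasoning.effort) mirror the parameters described above, and the validation set is illustrative:

```python
# Sketch: constructing a Responses-style request body that tunes
# reasoning effort. No network call is made here; the dict simply
# mirrors the documented parameters (model, input, reasoning.effort).
def build_request(prompt: str, effort: str = "high") -> dict:
    allowed = {"none", "medium", "high", "xhigh"}
    if effort not in allowed:
        raise ValueError(f"effort must be one of {sorted(allowed)}")
    return {
        "model": "gpt-5.2-pro",
        "input": prompt,
        "reasoning": {"effort": effort},
    }

payload = build_request("Summarize this contract.", effort="xhigh")
print(payload["reasoning"]["effort"])  # xhigh
```

Raising xhigh buys deeper reasoning passes at the cost of latency, so it is best reserved for requests where answer quality dominates.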

Benchmark performance

Below are the most relevant, reproducible headline numbers for GPT-5.2 Pro (OpenAI’s verified/internal results):

  • GDPval (professional work benchmark): GPT-5.2 Pro — 74.1% (wins/ties) on the GDPval suite — a marked improvement over GPT-5.1. This metric is designed to approximate value in real economic tasks across many occupations.
  • ARC-AGI-1 (general reasoning): GPT-5.2 Pro — 90.5% (Verified); Pro was reported as the first model to cross 90% on this benchmark.
  • Coding & software engineering (SWE-Bench): strong gains in multi-step code reasoning; e.g., SWE-Bench Pro (public) and SWE-Lancer (IC Diamond) show material improvements over GPT-5.1. A representative family number: SWE-Bench Pro public ~55.6% for Thinking, with Pro results reported higher on internal runs.
  • Long-context factuality (MRCRv2): GPT-5.2 family shows high retrieval and needle-finding scores across 4k–256k ranges (examples: MRCRv2 8 needles at 16k–32k: 95.3% for GPT-5.2 Thinking; Pro maintained high accuracy at larger windows). These show the family’s resilience to long-context tasks, a Pro selling point.

How gpt-5.2-pro compares with peers and other GPT-5.2 tiers

  • vs GPT-5.2 Thinking / Instant: gpt-5.2-pro prioritizes fidelity and maximal reasoning (xhigh) over latency and cost. gpt-5.2 (Thinking) sits in the middle for deeper work, and gpt-5.2-chat-latest (Instant) is tuned for low-latency chat. Choose Pro for the highest-value, compute-intensive tasks.
  • Versus Google Gemini 3 and other frontier models: the GPT-5.2 family is OpenAI's competitive response to Gemini 3. Leaderboards show task-dependent winners: on some graduate-level science and professional benchmarks GPT-5.2 Pro and Gemini 3 are close, while in narrow coding or specialized domains outcomes can vary.
  • Versus GPT-5.1 / GPT-5: Pro shows material gains in GDPval, ARC-AGI, coding benchmarks and long-context metrics vs GPT-5.1, and adds new API controls (xhigh reasoning, compaction). OpenAI will keep earlier variants available during transition.

Practical use cases and recommended patterns

High-value use cases where Pro makes sense

  • Complex financial modeling, large spreadsheet synthesis and analysis where accuracy and multi-step reasoning matter (OpenAI reported improved investment banking spreadsheet task scores).
  • Long-document legal or scientific synthesis where the 400k token context preserves entire reports, appendices, and citation chains.
  • High-quality code generation and multi-file refactoring for enterprise codebases (Pro’s higher xhigh reasoning helps with multi-step program transformations).
  • Strategic planning, multi-stage project orchestration, and agentic workflows that use custom tools and require robust tool calling.

When to choose Thinking or Instant instead

  • Choose Instant for fast, lower-cost conversational tasks and editor integrations.
  • Choose Thinking for deeper work where cost is constrained but quality still matters.
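One way to encode this tier choice in application code is a small router; this is an illustrative sketch (the two-flag task profile is a simplification of the guidance above, and only the model names come from this page):

```python
# Hypothetical tier router: maps a rough task profile to a GPT-5.2
# family model name. The thresholds are illustrative, not official
# guidance; real routing would weigh cost, latency SLOs, and task value.
def pick_model(needs_deep_reasoning: bool, latency_sensitive: bool) -> str:
    if needs_deep_reasoning and not latency_sensitive:
        return "gpt-5.2-pro"          # Pro: maximum fidelity, highest cost
    if needs_deep_reasoning:
        return "gpt-5.2"              # Thinking: deeper work, constrained cost
    return "gpt-5.2-chat-latest"      # Instant: fast conversational tasks

print(pick_model(needs_deep_reasoning=True, latency_sensitive=False))
```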

How to access and use the GPT-5.2 Pro API

Step 1: Sign Up for API Key

Log in to cometapi.com; if you do not have an account yet, register first. In your CometAPI console, open the API token section of the personal center, click "Add Token", and copy the generated token key (sk-xxxxx).

Step 2: Send Requests to the GPT-5.2 Pro API

Select the "gpt-5.2-pro" endpoint and set the request body; the request method and body format are documented in our website's API doc, and an Apifox test collection is provided for convenience. Replace <YOUR_API_KEY> with your actual CometAPI key from your account. Where to call it: Responses-style APIs.

Insert your question or request into the content field; this is what the model will respond to.
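If you prefer raw HTTP over the SDK, the request in Step 2 can be sketched as below. The base URL comes from the sample code on this page; the /v1/responses path and the exact header names are assumptions based on the Responses-style convention, so verify them against the API doc:

```python
# Sketch: calling the gpt-5.2-pro endpoint over raw HTTP.
# The /v1/responses path is an assumption based on the Responses-style
# API; replace <YOUR_API_KEY> with your CometAPI key before sending.
import json

API_KEY = "<YOUR_API_KEY>"
url = "https://api.cometapi.com/v1/responses"
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
body = {
    "model": "gpt-5.2-pro",
    "input": "Explain the difference between TCP and UDP.",
}

# Uncomment to send (requires the `requests` package and a valid key):
# import requests
# resp = requests.post(url, headers=headers, data=json.dumps(body))
# print(resp.json().get("output_text"))

print(json.dumps(body, indent=2))
```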

Step 3: Retrieve and Verify Results

The API responds with the task status and output data; parse the response to extract the generated answer.

See also Gemini 3 Pro Preview API

FAQ

Why does GPT-5.2 Pro only work with the Responses API?

GPT-5.2 Pro is exclusively available through the Responses API to enable multi-turn model interactions before responding to API requests, supporting advanced workflows like tool chaining and extended reasoning sessions that require persistent state management.

What reasoning effort levels does GPT-5.2 Pro support?

GPT-5.2 Pro supports three reasoning effort levels: medium, high, and xhigh—allowing developers to balance response quality against latency for complex problem-solving tasks.

How does GPT-5.2 Pro handle long-running requests?

Some GPT-5.2 Pro requests may take several minutes to complete due to the model's deep reasoning process. OpenAI recommends using background mode to avoid timeouts on particularly challenging tasks.
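The background pattern amounts to "submit, then poll until done". The sketch below shows that flow; the background=True flag and a retrieve-by-id call follow the Responses API convention, and a stub client stands in for the real SDK so the loop is runnable offline:

```python
import time

# Sketch of the background-mode pattern: submit a long-running request
# with background=True, then poll by id until the job leaves the
# queued/in_progress states. StubResponses fakes the SDK surface.
class StubResponses:
    def __init__(self):
        self._polls = 0

    def create(self, **kwargs):
        assert kwargs.get("background") is True
        return {"id": "resp_123", "status": "queued"}

    def retrieve(self, response_id):
        self._polls += 1
        done = self._polls >= 2
        return {
            "id": response_id,
            "status": "completed" if done else "in_progress",
            "output_text": "42" if done else None,
        }

def run_in_background(responses, prompt: str) -> str:
    job = responses.create(model="gpt-5.2-pro", input=prompt, background=True)
    while True:
        job = responses.retrieve(job["id"])
        if job["status"] in ("completed", "failed"):
            break
        time.sleep(0)  # use a real backoff interval in production
    return job["output_text"]

print(run_in_background(StubResponses(), "Prove a hard theorem."))  # 42
```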

What tools can GPT-5.2 Pro access through the Responses API?

GPT-5.2 Pro supports web search, file search, image generation, and MCP (Model Context Protocol), but notably does not support code interpreter or computer use tools.
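Expressed as a request body, enabling one of the supported tools might look like the sketch below; the {"type": "web_search"} shape follows the Responses API convention, so treat the exact field names as assumptions to check against the docs:

```python
# Sketch: a Responses-style body that enables the web search tool.
# Tool type strings follow the Responses API convention; verify the
# exact names against the CometAPI doc before relying on them.
body = {
    "model": "gpt-5.2-pro",
    "input": "What changed in the latest CPython release?",
    "tools": [{"type": "web_search"}],
}

# Per the FAQ above, these tools are NOT available on Pro:
unsupported = {"code_interpreter", "computer_use"}
assert not any(t["type"] in unsupported for t in body["tools"])
print([t["type"] for t in body["tools"]])  # ['web_search']
```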

When should I choose GPT-5.2 Pro over standard GPT-5.2?

Choose GPT-5.2 Pro when your workload demands maximum fidelity, multi-step reasoning, or extensive tool orchestration—it's designed for production scenarios with the largest context and throughput budgets OpenAI offers.

Does GPT-5.2 Pro support structured outputs?

No, GPT-5.2 Pro does not currently support structured outputs or fine-tuning, making it best suited for high-fidelity generation tasks rather than constrained format requirements.

Features of GPT-5.2 Pro

Learn about GPT-5.2 Pro's core capabilities to help improve performance and availability, and to enhance the overall experience.

Pricing of GPT-5.2 Pro

See GPT-5.2 Pro's competitive pricing for different budgets and usage needs, with flexible plans that scale as your requirements grow.
CometAPI price (USD / M tokens): Input $16.80/M, Output $134.40/M
Official price (USD / M tokens): Input $21.00/M, Output $168.00/M

Sample code & API for GPT-5.2 Pro

GPT-5.2-Pro is OpenAI’s highest-quality variant of the GPT-5.2 family designed for the hardest, highest-value knowledge and technical tasks.
Python
from openai import OpenAI
import os

# Get your CometAPI key from https://api.cometapi.com/console/token, and paste it here
COMETAPI_KEY = os.environ.get("COMETAPI_KEY") or "<YOUR_COMETAPI_KEY>"
BASE_URL = "https://api.cometapi.com/v1"

client = OpenAI(base_url=BASE_URL, api_key=COMETAPI_KEY)

response = client.responses.create(
    model="gpt-5.2-pro",
    input="How much gold would it take to coat the Statue of Liberty in a 1mm layer?",
    reasoning={"effort": "high"},
)

print(response.output_text)

More models