
DeepSeek-V3.2

Input: $0.216/M
Output: $0.3456/M
Context: 128K
Max output: 4K
DeepSeek v3.2 is the latest production-grade release in the DeepSeek V3 family: a large, reasoning-first open-weight language model family designed for long-context understanding, robust agent/tool use, advanced reasoning, coding, and math.

What is DeepSeek v3.2?

DeepSeek v3.2 is the latest production release in the DeepSeek V3 family: a large, reasoning-first open-weight language model family designed for long-context understanding, robust agent/tool use, advanced reasoning, coding, and math. The release bundles multiple variants (the production V3.2 and a high-performance V3.2-Speciale). The project emphasizes cost-efficient long-context inference through a new sparse attention mechanism called DeepSeek Sparse Attention (DSA), together with agentic "thinking" workflows ("Thinking in Tool-Use").

Main features (high level)

  • DeepSeek Sparse Attention (DSA): a fine-grained sparse-attention mechanism introduced in the V3.2 line (debuting in V3.2-Exp) that reduces attention complexity from the naive O(L²) to roughly O(L·k) with k ≪ L, selecting a small set of key/value tokens per query token. This substantially lowers memory and compute for very long contexts (up to 128K) while preserving long-range reasoning, making long-context inference materially cheaper.
  • Agentic thinking + tool-use integration: V3.2 embeds "thinking" into tool use: the model can call tools in both reasoning ("thinking") and normal (non-thinking) modes, improving decision-making in multi-step tasks and tool orchestration.
  • Large-scale agent data synthesis pipeline: DeepSeek reports a training corpus and agent-synthesis pipeline spanning thousands of environments and tens of thousands of complex instructions to improve robustness on interactive tasks.
  • Mixture-of-Experts (MoE) backbone and Multi-head Latent Attention (MLA): the V3 family uses MoE to increase capacity efficiently (large nominal parameter counts with limited per-token activation) together with MLA to maintain quality and control compute.
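To make the O(L²) → O(L·k) claim concrete, the back-of-envelope sketch below compares attention score counts at a 128K context. The selected-token budget k = 2,048 is an illustrative assumption, not a documented DSA hyperparameter.

```python
# Rough attention-cost comparison for a 128K-token context.
# k (key/value tokens attended per query under sparse attention)
# is an illustrative assumption, not a published DSA parameter.
L = 128_000   # context length in tokens
k = 2_048     # assumed selected tokens per query

dense_scores = L * L      # naive full attention: O(L^2)
sparse_scores = L * k     # DSA-style sparse attention: O(L*k)

print(f"dense:  {dense_scores:,} score computations")
print(f"sparse: {sparse_scores:,} score computations")
print(f"reduction: {dense_scores / sparse_scores:.1f}x")
```

Even under this rough assumption, the sparse variant needs orders of magnitude fewer score computations, which is the mechanism behind the cheaper long-context inference described above.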

Technical specifications (at a glance)

  • Nominal parameter range: ~671B – 685B (variant dependent).
  • Context window (documented reference): 128,000 tokens (128K) in vLLM/reference configs.
  • Attention: DeepSeek Sparse Attention (DSA) + MLA; reduced attention complexity for long contexts.
  • Numeric & training precision: BF16 / F32 and compressed quantized formats (F8_E4M3 etc.) available for distribution.
  • Architectural family: MoE (mixture-of-experts) backbone with per-token activation economy.
  • Input / output: standard tokenized text input (chat/message formats supported); supports tool-calls (tool-use API primitives) and both interactive chat-style calls and programmatic completions via API.
  • Offered variants: v3.2; v3.2-Exp (experimental, DSA debut); v3.2-Speciale (reasoning-first, API-only in the short term).

Benchmark performance

The high-compute V3.2-Speciale reaches parity with, or exceeds, contemporary high-end models on several reasoning/math/coding benchmarks, and achieves top-level marks on selected elite math problem sets. The preprint highlights parity with models such as GPT-5 / Kimi K2 on selected reasoning benchmarks, along with specific improvements over earlier DeepSeek R1/V3 baselines:

  • AIME: 70.0 → 87.5 (Δ +17.5).
  • GPQA: 71.5 → 81.0 (Δ +9.5).
  • LCB_v6: 63.5 → 73.3 (Δ +9.8).
  • Aider: 57.0 → 71.6 (Δ +14.6).

Comparison with other models (high level)

  • Vs GPT-5 / Gemini 3 Pro (public claims): DeepSeek authors and several press outlets claim parity or superiority on selected reasoning and coding tasks for the Speciale variant, while emphasizing cost efficiency and open licensing as differentiators.
  • Vs open models (Olmo, Nemotron, Moonshot, etc.): DeepSeek highlights agentic training and DSA as key differentiators for long-context efficiency.

Representative use cases

  • Agentic systems / orchestration: multi-tool agents (APIs, web scrapers, code-execution connectors) that benefit from model-level “thinking” + explicit tool-call primitives.
  • Long-document reasoning / analysis: legal documents, large research corpora, meeting transcripts — long-context variants (128k tokens) let you keep very large contexts in a single call.
  • Complex math & coding assistance: V3.2-Speciale is promoted for advanced math reasoning and extensive code debugging tasks per vendor benchmarks.
  • Cost-sensitive production deployments: DSA + pricing changes aim to lower inference costs for high-context workloads.

How to get started with the DeepSeek v3.2 API

DeepSeek v3.2 API pricing on CometAPI, 20% off the official price:

  • Input tokens: $0.22 / M
  • Output tokens: $0.35 / M

Required Steps

  • Log in to cometapi.com. If you are not a user yet, please register first.
  • Get an API key: in the personal center, under API tokens, click "Add Token" to generate a key of the form sk-xxxxx.
  • Use the site's base URL: https://api.cometapi.com/

Usage

  1. Select the "deepseek-v3.2" endpoint, send the API request, and set the request body. The request method and body format are described in our website's API docs; the site also provides an Apifox playground for convenient testing.
  2. Replace <YOUR_API_KEY> with your actual CometAPI key from your account.
  3. Select the chat format: insert your question or request into the content field; this is what the model will respond to.
  4. Process the API response to get the generated answer.
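The request body from step 1 can be sketched in Python as below. The payload mirrors the OpenAI-compatible chat format; <YOUR_API_KEY> is a placeholder you must replace, and the exact endpoint path should be confirmed against the CometAPI docs.

```python
import json

# Chat-format request body for the deepseek-v3.2 endpoint
# (OpenAI-compatible shape; confirm field names in the CometAPI docs).
payload = {
    "model": "deepseek-v3.2",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize sparse attention in one sentence."},
    ],
}
headers = {
    "Authorization": "Bearer <YOUR_API_KEY>",  # replace with your sk-xxxxx key
    "Content-Type": "application/json",
}

# To actually send (assumed path, per the OpenAI-compatible convention):
# requests.post("https://api.cometapi.com/v1/chat/completions",
#               headers=headers, data=json.dumps(payload))
print(json.dumps(payload, indent=2))
```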

DeepSeek-V3.2 features

Learn about DeepSeek-V3.2's core capabilities to help improve performance, usability, and the overall experience.

DeepSeek-V3.2 pricing

See DeepSeek-V3.2's competitive pricing, designed to fit different budgets and usage needs; flexible plans ensure it scales with your requirements.

  • Comet price (USD / M tokens): Input $0.216, Output $0.3456
  • Official price (USD / M tokens): Input $0.27, Output $0.432
  • Discount: -20%
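As a quick sanity check, the discounted rates above follow directly from applying 20% off the official per-million-token prices:

```python
# Official per-1M-token prices (USD) and the advertised discount.
official = {"input": 0.27, "output": 0.432}
discount = 0.20

# Apply the discount; round to avoid floating-point noise.
comet = {k: round(v * (1 - discount), 4) for k, v in official.items()}
print(comet)  # {'input': 0.216, 'output': 0.3456}
```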

DeepSeek-V3.2 sample code and API

Get complete sample code and API resources to streamline DeepSeek-V3.2 integration; we provide step-by-step guidance to help you unlock the model's full potential.
Python
from openai import OpenAI
import os

# Get your CometAPI key from https://api.cometapi.com/console/token, and paste it here
COMETAPI_KEY = os.environ.get("COMETAPI_KEY") or "<YOUR_COMETAPI_KEY>"
BASE_URL = "https://api.cometapi.com/v1"

client = OpenAI(base_url=BASE_URL, api_key=COMETAPI_KEY)

completion = client.chat.completions.create(
    model="deepseek-v3.2-exp",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)

print(completion.choices[0].message.content)
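The tool-call support mentioned in the specifications can be exercised through the same OpenAI-compatible client. The sketch below only constructs the request; the get_weather schema is a hypothetical example, not part of the DeepSeek or CometAPI surface, and whether a given V3.2 snapshot handles the call in thinking or non-thinking mode depends on the variant you select.

```python
# Illustrative OpenAI-style tool definition; get_weather is a
# hypothetical function used only to show the request shape.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

request_kwargs = {
    "model": "deepseek-v3.2",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
}

# With the client from the snippet above, the call would be:
# completion = client.chat.completions.create(**request_kwargs)
# Any tool calls appear in completion.choices[0].message.tool_calls.
print(request_kwargs["tools"][0]["function"]["name"])
```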

DeepSeek-V3.2 versions

DeepSeek-V3.2 may exist as multiple snapshots, for several reasons: older versions are kept for consistency after updates, developers need a migration window, and global/regional endpoints may carry different optimizations. See the official documentation for the exact differences.
deepseek-v3.2
DeepSeek-V3.2-Exp-nothinking
DeepSeek-V3.2-Exp-thinking
