
DeepSeek-V3.2

Input:$0.216/M
Output:$0.3456/M
Context:128K
Max Output:4K

What is DeepSeek v3.2?

DeepSeek v3.2 is the latest production release in the DeepSeek V3 family of large, reasoning-first, open-weight language models designed for long-context understanding, robust agent/tool use, advanced reasoning, coding, and math. The release bundles multiple variants (the production V3.2 and a high-performance V3.2-Speciale). The project emphasizes cost-efficient long-context inference through a new sparse-attention mechanism, DeepSeek Sparse Attention (DSA), and agentic "thinking" workflows ("Thinking in Tool-Use").

Main features (high level)

  • DeepSeek Sparse Attention (DSA): a fine-grained sparse-attention method introduced in the V3.2 line (debuting in V3.2-Exp) that reduces attention complexity from the naive O(L²) to roughly O(L·k) with k ≪ L by selecting a small set of key/value tokens per query token. The result is substantially lower memory and compute for very long contexts (up to 128K) while preserving long-range reasoning, making long-context inference materially cheaper.
  • Agentic thinking + tool-use integration: V3.2 embeds "thinking" into tool use: the model can call tools in a reasoning ("thinking") mode or a normal (non-thinking) mode, improving decision-making in multi-step tasks and tool orchestration.
  • Large-scale agent data synthesis pipeline: DeepSeek reports a training corpus and agent-synthesis pipeline spanning thousands of environments and tens of thousands of complex instructions to improve robustness on interactive tasks.
  • Mixture-of-Experts (MoE) backbone with Multi-head Latent Attention (MLA): the V3 family uses MoE to scale capacity efficiently (large nominal parameter counts with limited per-token activation) alongside MLA to maintain quality and control compute.
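The DSA idea above can be sketched in a few lines. This toy NumPy example (an illustration only, not DeepSeek's actual kernel) scores all keys cheaply for one query, keeps just the top-k, and runs softmax attention over that subset, so a full pass over L queries does O(L·k) attention work instead of O(L²):

```python
import numpy as np

def sparse_attention(q, K, V, k):
    """Attend over only the top-k keys for a single query vector."""
    scores = K @ q / np.sqrt(len(q))     # similarity of q to all L keys
    topk = np.argsort(scores)[-k:]       # indices of the k best-matching keys
    w = np.exp(scores[topk] - scores[topk].max())
    w /= w.sum()                         # softmax restricted to the selection
    return w @ V[topk]                   # weighted sum of the selected values

rng = np.random.default_rng(0)
L, d, k = 1024, 16, 32
K, V = rng.standard_normal((L, d)), rng.standard_normal((L, d))
q = rng.standard_normal(d)
out = sparse_attention(q, K, V, k)
print(out.shape)  # (16,)
```

Real implementations use a learned lightweight indexer to pick the tokens and fused kernels for speed; the top-k selection over a cheap score is the core idea.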

Technical specifications (concise table)

  • Nominal parameter range: ~671B – 685B (variant dependent).
  • Context window (documented reference): 128,000 tokens (128K) in vLLM/reference configs.
  • Attention: DeepSeek Sparse Attention (DSA) + MLA; reduced attention complexity for long contexts.
  • Numeric & training precision: BF16 / F32 and compressed quantized formats (F8_E4M3 etc.) available for distribution.
  • Architectural family: MoE (mixture-of-experts) backbone with economical per-token activation.
  • Input / output: standard tokenized text input (chat/message formats supported); supports tool calls (tool-use API primitives), interactive chat-style calls, and programmatic completions via API.
  • Offered variants: v3.2 (production), v3.2-Exp (experimental; DSA debut), v3.2-Speciale (reasoning-first; API-only in the short term).
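Since the spec list mentions tool-call primitives, here is a hedged sketch of what an OpenAI-compatible tool-use request body looks like; the get_weather function, its schema, and the user message are invented for illustration:

```python
# Illustrative OpenAI-compatible tool-use request body; the
# get_weather function and its JSON schema are made up for this example.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

request_body = {
    "model": "deepseek-v3.2",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",   # let the model decide whether to call the tool
}
print(request_body["tools"][0]["function"]["name"])
```

When the model decides to call the tool, the response carries a tool_calls entry instead of plain text; your code executes the function and sends the result back in a follow-up message.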

Benchmark performance

The high-compute V3.2-Speciale matches or exceeds contemporary high-end models on several reasoning, math, and coding benchmarks, and achieves top-level marks on selected elite math problem sets. The preprint highlights parity with models such as GPT-5 / Kimi K2 on selected reasoning benchmarks and reports the following improvements over earlier DeepSeek R1/V3 baselines:

  • AIME: 70.0 → 87.5 (Δ +17.5).
  • GPQA: 71.5 → 81.0 (Δ +9.5).
  • LCB_v6: 63.5 → 73.3 (Δ +9.8).
  • Aider: 57.0 → 71.6 (Δ +14.6).
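The deltas above are simple arithmetic and easy to sanity-check:

```python
# (baseline, V3.2) scores as reported above; deltas are after - before.
scores = {
    "AIME":   (70.0, 87.5),
    "GPQA":   (71.5, 81.0),
    "LCB_v6": (63.5, 73.3),
    "Aider":  (57.0, 71.6),
}
deltas = {name: round(after - before, 1) for name, (before, after) in scores.items()}
print(deltas)  # {'AIME': 17.5, 'GPQA': 9.5, 'LCB_v6': 9.8, 'Aider': 14.6}
```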

Comparison with other models (high level)

  • Vs GPT-5 / Gemini 3 Pro (public claims): DeepSeek authors and several press outlets claim parity or superiority on selected reasoning and coding tasks for the Speciale variant, while emphasizing cost efficiency and open licensing as differentiators.
  • Vs open models (Olmo, Nemotron, Moonshot, etc.): DeepSeek highlights agentic training and DSA as key differentiators for long-context efficiency.

Representative use cases

  • Agentic systems / orchestration: multi-tool agents (APIs, web scrapers, code-execution connectors) that benefit from model-level “thinking” + explicit tool-call primitives.
  • Long-document reasoning / analysis: legal documents, large research corpora, meeting transcripts — long-context variants (128k tokens) let you keep very large contexts in a single call.
  • Complex math & coding assistance: V3.2-Speciale is promoted for advanced math reasoning and extensive code debugging tasks per vendor benchmarks.
  • Cost-sensitive production deployments: DSA + pricing changes aim to lower inference costs for high-context workloads.

How to get started with the DeepSeek v3.2 API

DeepSeek v3.2 API pricing in CometAPI, 20% off the official price:

Input Tokens: $0.22 / M
Output Tokens: $0.35 / M

Required Steps

  • Log in to cometapi.com. If you are not a user yet, please register first.
  • Get an API key: in the personal center, click "Add Token" under API tokens to generate a key of the form sk-xxxxx.
  • Note the base URL of the service: https://api.cometapi.com/
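The steps above boil down to two values per request: the base URL and a Bearer API key. A minimal sketch, assuming the standard OpenAI-compatible /v1/chat/completions route (check the API doc to confirm the exact path):

```python
import os

# Base URL from the steps above; the key placeholder mirrors the
# sk-xxxxx form shown in step 2.
BASE_URL = "https://api.cometapi.com/"
API_KEY = os.environ.get("COMETAPI_KEY", "sk-xxxxx")

# /v1/chat/completions is the usual OpenAI-compatible route (assumption).
endpoint = BASE_URL.rstrip("/") + "/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
print(endpoint)
```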

Use Method

  1. Select the "deepseek-v3.2" endpoint and set the request body. The request method and request body are documented in our website's API doc; the site also provides an Apifox test for your convenience.
  2. Replace <YOUR_API_KEY> with your actual CometAPI key from your account.
  3. Select the Chat format and insert your question or request into the content field; this is what the model will respond to.
  4. Process the API response to extract the generated answer.
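Step 4 can be sketched without a live call by parsing an OpenAI-style response body; the sample payload here is hard-coded for illustration:

```python
# Hard-coded OpenAI-style chat-completion response, for illustration only.
sample_response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Hello! How can I help?"}}
    ]
}

def extract_answer(response: dict) -> str:
    """Pull the generated answer out of a chat-completion response body."""
    choices = response.get("choices") or []
    if not choices:
        raise ValueError("no choices in response")
    return choices[0]["message"]["content"]

print(extract_answer(sample_response))
```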

Features for DeepSeek-V3.2

Explore the key features of DeepSeek-V3.2, designed to enhance performance and usability. Discover how these capabilities can benefit your projects and improve user experience.

Pricing for DeepSeek-V3.2

Explore competitive pricing for DeepSeek-V3.2, designed to fit various budgets and usage needs. Our flexible plans ensure you only pay for what you use, making it easy to scale as your requirements grow. Discover how DeepSeek-V3.2 can enhance your projects while keeping costs manageable.
Comet Price (USD / M tokens): Input $0.216, Output $0.3456
Official Price (USD / M tokens): Input $0.27, Output $0.432
Discount: -20%
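At the CometAPI rates in the table, per-call cost is simple arithmetic (rates are USD per million tokens; the 100K/4K token counts below are an illustrative example):

```python
INPUT_RATE = 0.216    # USD per million input tokens (CometAPI rate above)
OUTPUT_RATE = 0.3456  # USD per million output tokens

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimated charge for one call at the table's per-million-token rates."""
    return input_tokens / 1e6 * INPUT_RATE + output_tokens / 1e6 * OUTPUT_RATE

# A full 100K-token context with a 4K-token answer costs about 2.3 cents:
print(round(cost_usd(100_000, 4_000), 6))
```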

Sample code and API for DeepSeek-V3.2

Access comprehensive sample code and API resources for DeepSeek-V3.2 to streamline your integration process. Our detailed documentation provides step-by-step guidance, helping you leverage the full potential of DeepSeek-V3.2 in your projects.
Python
from openai import OpenAI
import os

# Get your CometAPI key from https://api.cometapi.com/console/token, and paste it here
COMETAPI_KEY = os.environ.get("COMETAPI_KEY") or "<YOUR_COMETAPI_KEY>"
BASE_URL = "https://api.cometapi.com/v1"

client = OpenAI(base_url=BASE_URL, api_key=COMETAPI_KEY)

completion = client.chat.completions.create(
    model="deepseek-v3.2-exp",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)

print(completion.choices[0].message.content)

Versions of DeepSeek-V3.2

DeepSeek-V3.2 has multiple snapshots for several possible reasons: output can change after updates, so older snapshots preserve consistency; snapshots give developers a transition period for adaptation and migration; and different snapshots may correspond to global or regional endpoints to optimize user experience. For detailed differences between versions, please refer to the official documentation.
deepseek-v3.2
DeepSeek-V3.2-Exp-nothinking
DeepSeek-V3.2-Exp-thinking
