DeepSeek-OCR2

Per request: $0.04
DeepSeek-OCR 2, released by DeepSeek on January 27, 2026, introduces the novel DeepEncoder V2 approach, which lets the model dynamically reorder parts of an image according to their semantics rather than scanning mechanically from left to right. While maintaining high data-compression efficiency, the model achieves notable gains across multiple benchmarks and production metrics. It covers complex document pages with only 256 to 1120 vision tokens and scores 91.09% overall on OmniDocBench v1.5.

DeepSeek-V3.2

Context: 128K
Input: $0.216/M
Output: $0.3456/M
DeepSeek V3.2 is the latest production release in the DeepSeek V3 family: a line of large, reasoning-first, open-weight language models built for long-context understanding, robust agent/tool use, and advanced reasoning, coding, and math.
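The per-million-token rates above translate directly into a per-request cost estimate. A minimal sketch, using the DeepSeek-V3.2 rates from this listing (the token counts in the example are hypothetical):

```python
# Estimate the cost of one DeepSeek-V3.2 request from the listed rates.
# Prices are quoted in USD per 1M tokens.
INPUT_RATE = 0.216    # USD per 1M input tokens
OUTPUT_RATE = 0.3456  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (input_tokens / 1_000_000) * INPUT_RATE + \
           (output_tokens / 1_000_000) * OUTPUT_RATE

# Example: a 10,000-token prompt with a 2,000-token completion.
print(f"${request_cost(10_000, 2_000):.6f}")  # prints "$0.002851"
```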

DeepSeek V4

Coming soon
Input: $60/M
Output: $240/M
Stay tuned.

DeepSeek-V3

Input: $0.216/M
Output: $0.88/M
The most popular and cost-effective DeepSeek-V3 model: the full 671B-parameter version, supporting a maximum context length of 64,000 tokens.

DeepSeek-V3.1

Input: $0.44/M
Output: $1.32/M
DeepSeek V3.1 is the latest upgrade in DeepSeek's V-series: a hybrid "thinking / non-thinking" large language model aimed at high-throughput, low-cost general intelligence and agentic tool use. It keeps OpenAI-style API compatibility, adds smarter tool-calling, and, per the company, delivers faster generation and improved agent reliability.
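Because V3.1 keeps OpenAI-style API compatibility, a request can be sketched as a standard chat-completions payload. This is a non-authoritative sketch: the model ids (`deepseek-chat` for non-thinking, `deepseek-reasoner` for thinking) and the tool schema follow the OpenAI convention, and the `get_weather` tool is hypothetical; check the official DeepSeek API reference for the authoritative names.

```python
import json

def build_chat_request(prompt: str, thinking: bool = False) -> dict:
    """Build an OpenAI-style chat-completions payload for DeepSeek V3.1.

    Assumption: the hybrid modes are selected via model id
    ("deepseek-chat" vs. "deepseek-reasoner"); verify against the docs.
    """
    return {
        "model": "deepseek-reasoner" if thinking else "deepseek-chat",
        "messages": [{"role": "user", "content": prompt}],
        # OpenAI-style tool declaration, exercising V3.1's tool-calling.
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

payload = build_chat_request("What's the weather in Paris?")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the provider's chat-completions endpoint with any OpenAI-compatible client.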

DeepSeek R2

Coming soon
Input: $60/M
Output: $240/M
deepseek-r2 will be released soon.

DeepSeek-R1T2-Chimera

Input: $0.2416/M
Output: $0.2416/M
A 671B parameter Mixture of Experts text generation model, merged from DeepSeek-AI's R1-0528, R1, and V3-0324, supporting up to 60k tokens of context.

DeepSeek-Reasoner

Input: $0.44/M
Output: $1.752/M
DeepSeek-Reasoner is DeepSeek’s reasoning-first family of LLMs and API endpoints designed to (1) expose internal chain-of-thought (CoT) reasoning to callers and (2) operate in “thinking” modes tuned for multi-step planning, math, coding and agent/tool use.
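Since the reasoner endpoints expose chain-of-thought separately from the final answer, a client typically reads two fields from each assistant message. A hedged sketch, assuming the CoT arrives in a `reasoning_content` field alongside the usual `content` (verify the field name against the API docs before relying on it); it parses a mocked response rather than calling the live API:

```python
def split_reasoning(message: dict) -> tuple[str, str]:
    """Separate exposed chain-of-thought from the final answer.

    Assumes the API returns `reasoning_content` next to `content` in
    each assistant message; treat the field name as an assumption.
    """
    return message.get("reasoning_content", ""), message.get("content", "")

# Mocked response object standing in for a live API call.
mock_response = {
    "choices": [{
        "message": {
            "role": "assistant",
            "reasoning_content": "2 + 2: add the units digits, no carry.",
            "content": "4",
        }
    }]
}

thinking, answer = split_reasoning(mock_response["choices"][0]["message"])
print(answer)  # prints "4"
```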

DeepSeek-OCR

Per request: $0.04
DeepSeek-OCR is an optical character recognition model that extracts text from images and documents. It handles scanned pages, photos, and UI screenshots, producing transcriptions that preserve layout cues such as line breaks. Common uses include document digitization, invoice and receipt entry, search indexing, and enabling RPA workflows. Technical highlights include image-to-text processing, support for both scanned and photographed content, and structured text output for downstream parsing.
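Image inputs for OCR-style models are commonly sent as base64 data URLs inside an OpenAI-style vision message. A minimal sketch of building such a request; the model id `deepseek-ocr` and the message shape are illustrative assumptions, not confirmed DeepSeek-OCR API details:

```python
import base64

def build_ocr_request(image_bytes: bytes, mime: str = "image/png") -> dict:
    """Wrap raw image bytes in an OpenAI-style vision chat message.

    The model id "deepseek-ocr" and the data-URL content part are
    assumptions for illustration; consult the provider's docs.
    """
    data_url = f"data:{mime};base64," + base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "deepseek-ocr",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "Extract all text, preserving line breaks."},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
    }

req = build_ocr_request(b"\x89PNG...")  # placeholder bytes, not a real image
print(req["messages"][0]["content"][1]["image_url"]["url"][:22])  # prints "data:image/png;base64,"
```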

DeepSeek-Chat

Context: 64K
Input: $0.216/M
Output: $0.88/M
The most popular and cost-effective DeepSeek-V3 model: the full 671B-parameter version, supporting a maximum context length of 64,000 tokens.