Gemini 2.5 Pro

Input:$1/M
Output:$8/M
Context:1M
Max Output:65K
Gemini 2.5 Pro is an artificial intelligence model provided by Google. It has native multimodal processing capabilities and an ultra-long context window of up to 1,048,576 tokens (about 1 million), providing strong support for complex, long-sequence tasks. According to Google's data, Gemini 2.5 Pro performs particularly well on complex tasks.
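Staying inside that window is the caller's responsibility. Below is a minimal pre-flight sketch, assuming a rough 4-characters-per-token heuristic (the real tokenizer differs; the API's countTokens method gives exact counts) and assuming the 65K output cap means 65,536 tokens:

```python
# Rough pre-flight check that a prompt plus the reserved output budget
# fits inside Gemini 2.5 Pro's context window.
CONTEXT_WINDOW = 1_048_576   # max combined tokens (per the spec above)
MAX_OUTPUT = 65_536          # assumed value of the "65K" max-output limit

def estimate_tokens(text: str) -> int:
    """Crude estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, max_output_tokens: int = MAX_OUTPUT) -> bool:
    """True if the estimated prompt tokens plus the reserved output
    budget stay within the model's context window."""
    return estimate_tokens(prompt) + max_output_tokens <= CONTEXT_WINDOW

print(fits_in_context("Tell me a bedtime story."))  # True: tiny prompt
print(fits_in_context("x" * 10_000_000))            # False: ~2.5M estimated tokens
```

Use this only as a cheap guard before a request; for billing-accurate numbers, call the tokenizer endpoint instead of the heuristic.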

Basic Information (Features)

  • Multimodality: Natively handles text, images, and code in a single model.
  • Long Context Window: Maintains coherence over extended conversations and documents (up to 1,048,576 tokens, roughly 1.05M).
  • Deep Think Mode: An experimental variant within the Pro suite that deploys multiple reasoning agents in parallel for strategic planning and creative solutions.
  • Ideal Use Cases: Coding, agentic workflows, interactive simulations, and data visualization.

Technical Details

  • Multi-Agent Architecture: Parallelizes reasoning streams to explore multiple solution paths simultaneously.
  • MRCR (Multi-Round Coreference Resolution): Enhanced co-reference handling for sustained dialogues and multi-turn tasks.
  • Training Corpus: Billions of tokens spanning web text, code repositories, academic sources, and proprietary datasets.
  • Tool Integration: Seamlessly combines code execution, Google Search, and external APIs to augment its internal reasoning.

Limitations & Known Risks

  • Content policy constraints: models enforce content policies (e.g., disallowing explicit sexual content and some illicit content), but enforcement is not perfect; generating images of public figures or controversial icons may still be possible in some scenarios, so policy checks are essential.
  • Failure modes: possible identity drift in extreme edits, occasional semantic misalignment when prompts are under-specified, and artifacts in very complex scenes or extreme viewpoint changes.
  • Provenance & misuse: watermarks and SynthID assist detection and attribution, but they do not prevent misuse and are not a substitute for human review in sensitive workflows.
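The Tool Integration point above maps onto the v1beta REST API by declaring tools in the request body. Below is a minimal sketch of a generateContent payload that enables Google Search grounding; the `google_search` tool field follows the public Gemini API for 2.x models, but treat the exact shape as an assumption to verify against current documentation:

```python
import json

# Build a v1beta generateContent request body that enables the
# built-in Google Search tool alongside the user prompt.
def build_grounded_request(prompt: str) -> dict:
    return {
        "contents": [
            {"role": "user", "parts": [{"text": prompt}]}
        ],
        # Declaring the google_search tool lets the model decide
        # when to issue search queries to ground its answer.
        "tools": [{"google_search": {}}],
    }

body = build_grounded_request("Who won the most recent Champions League final?")
print(json.dumps(body, indent=2))
```

The same dict can be POSTed to the `/v1beta/models/{model}:generateContent` path shown in the API section of this page, with your key in the request headers.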

Typical use cases

  • Product & ecommerce: place/catalog products into lifestyle shots via multi-image fusion.
  • Creative tooling / design: fast iterations in design apps (Adobe Firefly integration cited).
  • Photo editing & retouching: localized edits from natural language (remove objects, change color/lighting, restyle).
  • Storytelling / character assets: keep characters consistent across panels and scenes.

Features for Gemini 2.5 Pro

Explore the key features of Gemini 2.5 Pro, designed to enhance performance and usability. Discover how these capabilities can benefit your projects and improve user experience.

Pricing for Gemini 2.5 Pro

Explore competitive pricing for Gemini 2.5 Pro, designed to fit various budgets and usage needs. Our flexible plans ensure you only pay for what you use, making it easy to scale as your requirements grow. Discover how Gemini 2.5 Pro can enhance your projects while keeping costs manageable.

gemini-2.5-pro (same price across variants)

Model family      Variant (model name)      Input price (USD / 1M tokens)   Output price (USD / 1M tokens)
gemini-2.5-pro    gemini-2.5-pro-all        $1.00                           $8.00
gemini-2.5-pro    gemini-2.5-pro-thinking   $1.00                           $8.00
gemini-2.5-pro    gemini-2.5-pro            $1.00                           $8.00
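The listed rates translate directly into a per-request cost. A small helper using the $1 / $8 per-million-token prices from the table above:

```python
INPUT_PRICE_PER_M = 1.00   # USD per 1M input tokens (from the table above)
OUTPUT_PRICE_PER_M = 8.00  # USD per 1M output tokens

def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Cost of one gemini-2.5-pro call at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 50K-token prompt with a 2K-token answer:
print(f"${request_cost_usd(50_000, 2_000):.4f}")  # → $0.0660
```

At these rates, filling the full 1,048,576-token window costs about $1.05 per request on the input side alone, so long-context workloads should budget accordingly.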

Sample code and API for Gemini 2.5 Pro

Access comprehensive sample code and API resources for Gemini 2.5 Pro to streamline your integration process. Our detailed documentation provides step-by-step guidance, helping you leverage the full potential of Gemini 2.5 Pro in your projects.
POST /v1beta/models/{model}:{operator}
POST /v1/chat/completions

Python Code Example

from google import genai
import os

# Get your CometAPI key from https://api.cometapi.com/console/token, and paste it here
COMETAPI_KEY = os.environ.get("COMETAPI_KEY") or "<YOUR_COMETAPI_KEY>"
BASE_URL = "https://api.cometapi.com"

client = genai.Client(
    http_options={"api_version": "v1beta", "base_url": BASE_URL},
    api_key=COMETAPI_KEY,
)

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents="Tell me a three sentence bedtime story about a unicorn.",
)

print(response.text)

JavaScript Code Example

import { GoogleGenAI } from "@google/genai";

// Get your CometAPI key from https://api.cometapi.com/console/token, and paste it here
const COMETAPI_KEY = process.env.COMETAPI_KEY || "<YOUR_COMETAPI_KEY>";
const BASE_URL = "https://api.cometapi.com";

async function main() {
  const ai = new GoogleGenAI({
    apiKey: COMETAPI_KEY,
    httpOptions: { baseUrl: BASE_URL },
  });

  const response = await ai.models.generateContent({
    model: "gemini-2.5-pro",
    contents: "Tell me a three sentence bedtime story about a unicorn.",
  });

  console.log(response.text);
}

main();

Curl Code Example

# Get your CometAPI key from https://api.cometapi.com/console/token, and export it first:
#   export COMETAPI_KEY="<YOUR_COMETAPI_KEY>"
# This calls the OpenAI-compatible /v1/chat/completions endpoint.
curl https://api.cometapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_KEY" \
  -d '{
    "model": "gemini-2.5-pro",
    "messages": [
      {"role": "user", "content": "Tell me a three sentence bedtime story about a unicorn."}
    ]
  }'

Versions of Gemini 2.5 Pro

Gemini 2.5 Pro is offered as multiple snapshots for several likely reasons: output can change after an update, so older snapshots preserve consistency; snapshots give developers a transition period to adapt and migrate; and different snapshots can correspond to global or regional endpoints to optimize the user experience. For detailed differences between versions, please refer to the official documentation.
gemini-2.5-pro-all
gemini-2.5-pro-thinking
gemini-2.5-pro-deepsearch

More Models

Claude Opus 4.6

Input:$4/M
Output:$20/M
Claude Opus 4.6 is Anthropic’s “Opus”-class large language model, released February 2026. It is positioned as a workhorse for knowledge-work and research workflows — improving long-context reasoning, multi-step planning, tool use (including agentic software workflows), and computer-use tasks such as automated slide and spreadsheet generation.
Claude Sonnet 4.6

Input:$2.4/M
Output:$12/M
Claude Sonnet 4.6 is our most capable Sonnet model yet. It’s a full upgrade of the model’s skills across coding, computer use, long-context reasoning, agent planning, knowledge work, and design. Sonnet 4.6 also features a 1M token context window in beta.
GPT-5.4 nano

Input:$0.16/M
Output:$1/M
GPT-5.4 nano is designed for tasks where speed and cost matter most like classification, data extraction, ranking, and sub-agents.
GPT-5.4 mini

Input:$0.6/M
Output:$3.6/M
GPT-5.4 mini brings the strengths of GPT-5.4 to a faster, more efficient model designed for high-volume workloads.
Claude Mythos Preview

Coming soon
Input:$60/M
Output:$240/M
Claude Mythos Preview is our most capable frontier model to date, and shows a striking leap in scores on many evaluation benchmarks compared to our previous frontier model, Claude Opus 4.6.
mimo-v2-pro

Input:$0.8/M
Output:$2.4/M
MiMo-V2-Pro is Xiaomi's flagship foundation model, featuring over 1T total parameters and a 1M context length, deeply optimized for agentic scenarios. It is highly adaptable to general agent frameworks like OpenClaw. It ranks among the global top tier in the standard PinchBench and ClawBench benchmarks, with perceived performance approaching that of Opus 4.6. MiMo-V2-Pro is designed to serve as the brain of agent systems, orchestrating complex workflows, driving production engineering tasks, and delivering results reliably.

Related Blog

Is Free Gemini 2.5 Pro API fried? Changes to the free quota in 2025
Dec 11, 2025

Google has sharply tightened the free tier for the Gemini API: Gemini 2.5 Pro has been removed from the free tier and Gemini 2.5 Flash’s daily free requests were cut dramatically (reports: ~250 → ~20/day). That doesn’t mean the model is permanently “dead” for experimentation — but it does mean free access has been effectively gutted for many real-world use cases.
What are the limitations of Gemini usage limits across all tiers?
Sep 26, 2025

Google has moved from vague “limited access” wording to explicit, per-tier caps for the Gemini app (free, Google AI Pro, and Google AI Ultra). Those caps