
Veo 3.1

Per Second: $0.05
Veo 3.1 is Google’s incremental-but-significant update to its Veo text-and-image→video family, adding richer native audio, longer and more controllable video outputs, and finer editing and scene-level controls.
New · Commercial Use

Core features

Veo 3.1 focuses on practical content creation features:

  • Native audio generation: dialogue, ambient sound, and SFX are generated natively and aligned to the visual timeline; the model aims to preserve lip sync and audio–visual alignment for dialogue and scene cues.
  • Longer outputs (up to ~60 seconds at 1080p, versus Veo 3’s 8-second clips), plus multi-prompt, multi-shot sequences for narrative continuity.
  • Scene Extension and First/Last Frame modes that extend or interpolate footage between key frames.
  • Object insertion and (coming) object removal and editing primitives inside Flow.

Each bullet above is designed to reduce manual VFX work: audio and scene continuity are now first-class outputs rather than afterthoughts.

Technical details (model behavior & inputs)

Model family & variants: Veo 3.1 belongs to Google’s Veo 3 family; on CometAPI the model IDs are veo3.1 and veo3.1-pro (see the CometAPI docs). It accepts text prompts, image references (a single frame or a sequence), and structured multi-prompt layouts for multi-shot generation.

Resolution & duration: Preview documentation describes outputs at 720p/1080p with options for longer durations (up to ~60s in certain preview settings) and higher fidelity than earlier Veo variants.

Aspect ratios: 16:9 (supported) and 9:16 (supported except in some reference-image flows).

Prompt language: English (preview).

API limits: typical preview limits include a maximum of 10 API requests per minute per project, up to 4 videos per request, and video lengths of 4, 6, or 8 seconds (reference-image flows support 8 seconds only).
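
Given the preview limits above, it can help to validate a request client-side before spending an API call. A minimal sketch of such a check (this is a local helper, not part of the CometAPI SDK; the parameter names are illustrative):

```python
# Client-side sanity checks against the documented preview limits:
# up to 4 videos per request, durations of 4/6/8 s (reference-image
# flows: 8 s only), and aspect ratios 16:9 or 9:16. The separate
# 10 requests/min per-project limit must be enforced by throttling.

ALLOWED_DURATIONS = {4, 6, 8}
MAX_VIDEOS_PER_REQUEST = 4

def validate_request(duration_s, n_videos, aspect_ratio, has_reference_image):
    """Return a list of problems; an empty list means the request looks OK."""
    problems = []
    if duration_s not in ALLOWED_DURATIONS:
        problems.append(f"duration must be one of {sorted(ALLOWED_DURATIONS)} seconds")
    elif has_reference_image and duration_s != 8:
        problems.append("reference-image flows support 8 s only")
    if not 1 <= n_videos <= MAX_VIDEOS_PER_REQUEST:
        problems.append(f"n_videos must be between 1 and {MAX_VIDEOS_PER_REQUEST}")
    if aspect_ratio not in ("16:9", "9:16"):
        problems.append("aspect_ratio must be 16:9 or 9:16")
    return problems

print(validate_request(8, 1, "16:9", has_reference_image=True))   # []
print(validate_request(6, 5, "4:3", has_reference_image=True))
```

Rejecting an invalid request locally costs nothing; submitting it consumes one of the 10 requests/min allowed per project.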

Benchmark performance

Google’s internal and publicly summarized evaluations report strong human-rater preference for Veo 3.1 outputs in both text→video and image→video tasks. On benchmark datasets such as MovieGenBench and VBench, Veo 3.1 achieved state-of-the-art results across several axes: overall preference, prompt alignment, visual quality, audio–video alignment, and “visually realistic physics”.

Limitations & safety considerations

Limitations:

  • Artifacts & inconsistency: despite improvements, certain lighting, fine-grained physics, and complex occlusions can still yield artifacts; image→video consistency (especially over long durations) is improved but not perfect.
  • Misinformation / deepfake risk: richer audio + object insertion/removal increases misuse risk (realistic fake audio and extended clips). Google notes mitigations (policy, safeguards) and earlier Veo launches referenced watermarking/SynthID to aid provenance; however technical safeguards do not eliminate misuse risk.
  • Cost & throughput constraints: high-resolution, long videos are computationally expensive and currently gated in a paid preview—expect higher latency and cost compared with image models. Community posts and Google forum threads discuss availability windows and fallback strategies.

Safety controls: Veo 3.1 ships with integrated content policies, watermarking/SynthID signaling (referenced in earlier Veo releases), and preview access controls; customers are advised to follow platform policy and implement human review for high-risk outputs.

Practical use cases

  • Rapid prototyping for creatives: storyboards → multi-shot clips and animatics with native dialogue for early creative review.
  • Marketing & short form content: 15–60s product spots, social clips, and concept teasers where speed matters more than perfect photorealism.
  • Image→video adaptation: turning illustrations, characters, or two frames into smooth transitions or animated scenes via First/Last Frame and Scene Extension.
  • Tooling augmentation: integrated into Flow for iterative editing (object insertion/removal, lighting presets) that reduces manual VFX passes.

Comparison with other leading models

Veo 3.1 vs Veo 3 (predecessor): Veo 3.1 focuses on improved prompt adherence, audio quality, and multi-shot consistency — incremental but impactful updates aimed at reducing artifacts and improving editability.

Veo 3.1 vs OpenAI Sora 2: press coverage reports different tradeoffs: Veo 3.1 emphasizes longer-form narrative control, integrated audio, and Flow editing integration, while Sora 2 is framed as playing to different strengths (speed, different editing pipelines). TechRadar and other outlets position Veo 3.1 as Google’s targeted competitor to Sora 2 for narrative and longer video support; independent side-by-side testing remains limited.

Features for Veo 3.1

Explore the key features of Veo 3.1, designed to enhance performance and usability. Discover how these capabilities can benefit your projects and improve user experience.

Pricing for Veo 3.1

Explore competitive pricing for Veo 3.1, designed to fit various budgets and usage needs. Our flexible plans ensure you only pay for what you use, making it easy to scale as your requirements grow. Discover how Veo 3.1 can enhance your projects while keeping costs manageable.

veo3.1 (videos)

Model name | Tags | Price
veo3.1-all | videos | $0.20000
veo3.1 | videos | $0.40000
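
Given the per-video prices listed above, a quick cost estimate is useful before batch jobs. A small sketch (assuming, as the pricing table suggests, that the listed rates are charged per generated video):

```python
# Per-video prices from the pricing table above (assumed per generated video).
PRICE_PER_VIDEO = {
    "veo3.1-all": 0.20,
    "veo3.1": 0.40,
}

def estimate_cost(model: str, n_videos: int) -> float:
    """Estimated USD cost for generating n_videos with the given model."""
    return round(PRICE_PER_VIDEO[model] * n_videos, 2)

print(estimate_cost("veo3.1", 25))      # 10.0
print(estimate_cost("veo3.1-all", 25))  # 5.0
```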

Sample code and API for Veo 3.1

Access comprehensive sample code and API resources for Veo 3.1 to streamline your integration process. Our detailed documentation provides step-by-step guidance, helping you leverage the full potential of Veo 3.1 in your projects.
POST /v1/videos

Python Code Example

import os
import requests
import json

# Get your CometAPI key from https://api.cometapi.com/console/token, and paste it here
COMETAPI_KEY = os.environ.get("COMETAPI_KEY") or "<YOUR_COMETAPI_KEY>"
BASE_URL = "https://api.cometapi.com/v1"

headers = {
    "Authorization": COMETAPI_KEY,
}

# ============================================================
# Step 1: Download Reference Image
# ============================================================
print("Step 1: Downloading reference image...")

image_url = "https://images.unsplash.com/photo-1506905925346-21bda4d32df4?w=1280"
image_response = requests.get(image_url)
image_path = "/tmp/veo3.1_reference.jpg"
with open(image_path, "wb") as f:
    f.write(image_response.content)
print(f"Reference image saved to: {image_path}")

# ============================================================
# Step 2: Create Video Generation Task (form-data with image upload)
# ============================================================
print("\nStep 2: Creating video generation task...")

with open(image_path, "rb") as image_file:
    files = {
        "input_reference": ("reference.jpg", image_file, "image/jpeg"),
    }
    data = {
        "prompt": "A breathtaking mountain landscape with clouds flowing through valleys, cinematic aerial shot",
        "model": "veo3.1",
        "size": "16x9",
    }
    create_response = requests.post(
        f"{BASE_URL}/videos", headers=headers, data=data, files=files
    )

create_result = create_response.json()
print("Create response:", json.dumps(create_result, indent=2))

task_id = create_result.get("id")
if not task_id:
    print("Error: Failed to get task_id from response")
    exit(1)
print(f"Task ID: {task_id}")

# ============================================================
# Step 3: Query Task Status
# ============================================================
print("\nStep 3: Querying task status...")

query_response = requests.get(f"{BASE_URL}/videos/{task_id}", headers=headers)
query_result = query_response.json()
print("Query response:", json.dumps(query_result, indent=2))

task_status = query_result.get("data", {}).get("status")
print(f"Task status: {task_status}")
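
Step 3 above checks the task status once; a real integration usually polls until the task reaches a terminal state. A minimal sketch with the status fetch injected as a callable, so it works with any client (the terminal status strings are assumptions; check the CometAPI response schema for the actual values):

```python
import time

def poll_until_done(get_status, interval_s=5.0, timeout_s=600.0,
                    terminal=("completed", "failed", "cancelled")):
    """Call get_status() every interval_s seconds until it returns a
    terminal status or timeout_s elapses. Returns the final status."""
    deadline = time.monotonic() + timeout_s
    while True:
        status = get_status()
        if status in terminal:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"task still '{status}' after {timeout_s}s")
        time.sleep(interval_s)

# In the script above, you would pass something like:
#   poll_until_done(lambda: requests.get(
#       f"{BASE_URL}/videos/{task_id}", headers=headers
#   ).json().get("data", {}).get("status"))
```

Because video generation can take minutes, a generous timeout and a polling interval of several seconds keep you well under the 10 requests/min preview limit.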

JavaScript Code Example

import fs from "fs";
import path from "path";
import os from "os";

// Get your CometAPI key from https://api.cometapi.com/console/token, and paste it here
const api_key = process.env.COMETAPI_KEY || "<YOUR_COMETAPI_KEY>";
const base_url = "https://api.cometapi.com/v1";

// ============================================================
// Step 1: Download Reference Image
// ============================================================
console.log("Step 1: Downloading reference image...");

const imageUrl = "https://images.unsplash.com/photo-1506905925346-21bda4d32df4?w=1280";
const imageResponse = await fetch(imageUrl);
const imageBuffer = Buffer.from(await imageResponse.arrayBuffer());
const imagePath = path.join(os.tmpdir(), "veo3.1_reference.jpg");
fs.writeFileSync(imagePath, imageBuffer);
console.log(`Reference image saved to: ${imagePath}`);

// ============================================================
// Step 2: Create Video Generation Task (form-data with image upload)
// ============================================================
console.log("\nStep 2: Creating video generation task...");

const formData = new FormData();
formData.append("prompt", "A breathtaking mountain landscape with clouds flowing through valleys, cinematic aerial shot");
formData.append("model", "veo3.1");
formData.append("size", "16x9");
formData.append("input_reference", new Blob([fs.readFileSync(imagePath)], { type: "image/jpeg" }), "reference.jpg");

const createResponse = await fetch(`${base_url}/videos`, {
  method: "POST",
  headers: {
    "Authorization": api_key,
  },
  body: formData,
});

const createResult = await createResponse.json();
console.log("Create response:", JSON.stringify(createResult, null, 2));

const taskId = createResult?.id;
if (!taskId) {
  console.log("Error: Failed to get task_id from response");
  process.exit(1);
}
console.log(`Task ID: ${taskId}`);

// ============================================================
// Step 3: Query Task Status
// ============================================================
console.log("\nStep 3: Querying task status...");

const queryResponse = await fetch(`${base_url}/videos/${taskId}`, {
  method: "GET",
  headers: {
    "Authorization": api_key,
  },
});

const queryResult = await queryResponse.json();
console.log("Query response:", JSON.stringify(queryResult, null, 2));

const taskStatus = queryResult?.data?.status;
console.log(`Task status: ${taskStatus}`);

Curl Code Example

#!/bin/bash
# Get your CometAPI key from https://api.cometapi.com/console/token
# Export it as: export COMETAPI_KEY="your-key-here"

BASE_URL="https://api.cometapi.com/v1"
IMAGE_PATH="/tmp/veo3.1_reference.jpg"

# ============================================================
# Step 1: Download Reference Image
# ============================================================
echo "Step 1: Downloading reference image..."

curl -s -o "$IMAGE_PATH" "https://images.unsplash.com/photo-1506905925346-21bda4d32df4?w=1280"
echo "Reference image saved to: $IMAGE_PATH"

# ============================================================
# Step 2: Create Video Generation Task (form-data with image upload)
# ============================================================
echo ""
echo "Step 2: Creating video generation task..."

RESPONSE=$(curl -s -X POST "${BASE_URL}/videos" \
  -H "Authorization: $COMETAPI_KEY" \
  -F 'prompt=A breathtaking mountain landscape with clouds flowing through valleys, cinematic aerial shot' \
  -F 'model=veo3.1' \
  -F 'size=16x9' \
  -F "input_reference=@${IMAGE_PATH}")

echo "Create response:"
echo "$RESPONSE" | jq .

TASK_ID=$(echo "$RESPONSE" | jq -r '.id')

if [ "$TASK_ID" = "null" ] || [ -z "$TASK_ID" ]; then
  echo "Error: Failed to get task_id from response"
  exit 1
fi

echo "Task ID: $TASK_ID"

# ============================================================
# Step 3: Query Task Status
# ============================================================
echo ""
echo "Step 3: Querying task status..."

QUERY_RESPONSE=$(curl -s -X GET "${BASE_URL}/videos/${TASK_ID}" \
  -H "Authorization: $COMETAPI_KEY")

echo "Query response:"
echo "$QUERY_RESPONSE" | jq .

TASK_STATUS=$(echo "$QUERY_RESPONSE" | jq -r '.data.status')
echo "Task status: $TASK_STATUS"

Versions of Veo 3.1

Veo 3.1 is offered as multiple snapshots for several reasons: output can change after updates, so older snapshots preserve consistency; snapshots give developers a transition period for adaptation and migration; and different snapshots may correspond to global or regional endpoints to optimize the user experience. For detailed differences between versions, please refer to the official documentation.
Model id | Description | Availability | Price | Request
veo3.1-all | Unofficial implementation; generation can be unstable | ✅ | $0.2 | Chat format
veo3.1 | Recommended; points to the latest model | ✅ | $0.4 | Async generation

More Models

Doubao-Seedance-2-0

Per Second: $0.07
Seedance 2.0 is ByteDance’s next-generation multimodal video foundation model focused on cinematic, multi-shot narrative video generation. Unlike single-shot text-to-video demos, Seedance 2.0 emphasizes reference-based control (images, short clips, audio), coherent character/style consistency across shots, and native audio/video synchronization — aiming to make AI video useful for professional creative and previsualization workflows.
Sora 2

Per Second: $0.08
Super powerful video generation model, with sound effects, supports chat format.
mj_fast_video

Per Request: $0.6
Midjourney video generation
Grok Imagine Video

Per Second: $0.04
Generate videos from text prompts, animate still images, or edit existing videos with natural language. The API supports configurable duration, aspect ratio, and resolution for generated videos — with the SDK handling the asynchronous polling automatically.
Veo 3.1 Pro

Per Second: $0.25
Veo 3.1-Pro refers to the high-capability access/configuration of Google’s Veo 3.1 family — a generation of short-form, audio-enabled video models that add richer native audio, improved narrative/editing controls and scene-extension tools.
Veo 3 Pro

Per Second: $0.25
Veo 3 Pro denotes the production-grade Veo 3 video model experience (high fidelity, native audio, and extended tooling).

Related Blog

Kling 3.0 vs Veo 3.1: The Ultimate 2026 AI Video Generator Showdown
Apr 20, 2026
veo-3-1
kling-3-0

Kling 3.0 currently leads with native 4K multi-shot storytelling and superior camera control. Veo 3.1 excels in photorealistic physics, native audio synchronization, and Google ecosystem integration, making it ideal for cinematic or enterprise projects. For most users the winner depends on priorities: Kling 3.0 for speed, consistency, and cost; Veo 3.1 for premium realism and audio.
What is Google Veo 3.1 Lite
Apr 1, 2026
veo-3-1

What is Veo 3.1 Lite? Veo 3.1 Lite is Google’s newest cost-efficient video generation model for developers, released on March 31, 2026. It supports text-to-video and image-to-video, outputs video with audio, and is designed for high-volume applications. Google says it costs less than half of Veo 3.1 Fast while keeping the same speed, with 16:9 and 9:16 output formats and 720p/1080p resolution support.
How to Get Grok Imagine for Free: Access, Pricing, and Alternatives
Mar 25, 2026
grok-imagine-video

Grok Imagine Video is not free on official xAI/Grok platforms as of March 2026 (free tier removed due to high demand and misuse concerns), but you can access it affordably — or with free starter credits — via third-party aggregators like CometAPI. CometAPI offers the model at just $0.04 per second (480p), with new users often receiving $1–$5 in free credits upon signup.
How to edit videos via veo 3.1
Mar 5, 2026
veo-3-1

Google publicly introduced Veo 3.1 (and a Veo 3.1 Fast variant) in mid-October 2025 as an improved text-to-video model that produces higher-fidelity short
What is vidu Q3? It is maybe Best AI Video Model in 2026
Jan 31, 2026
vidu-q3

Vidu Q3 entered the conversation in early 2026 as one of the clearest signals yet that AI-driven video generation is moving from short, novelty clips toward genuinely narrative, multi-shot storytelling. In the months since its wide release Vidu Q3 has become a staple in creator workflows, research pilots, and commercial pilots — and for good reason: it pushes duration, audiovisual integration, and multi-shot coherence farther than most earlier models while offering a developer-facing API for programmatic use.