
runway_video2video

Per request: $0.20
Commercial use

Technical Specifications of runway-video2video

  • Model ID: runway-video2video
  • Provider: Runway
  • Model type: Video-to-video generation / stylized video transformation
  • Primary function: Transforms an input video into a new output video guided by a text prompt, and in some workflows by a reference image or first frame.
  • Backing Runway capability: Commonly associated with Runway's Video to Video workflow on Gen-3 Alpha and Gen-3 Alpha Turbo; Runway's newer guidance points users toward Gen-4 Aleph for the latest video-to-video capability.
  • Input types: An input video is required; prompt text guides the transformation. Runway documentation also describes image-guided styling in video workflows.
  • Output: AI-transformed video clip
  • Supported durations: Clips up to 20 seconds in the referenced Gen-3 video-to-video workflow.
  • Output resolutions: 1280×768 and, for supported variants, 768×1280.
  • Max input size: 64 MB in the referenced Gen-3 video-to-video workflow.
  • Access: Available through API-based integration and platform workflows from Runway. Runway publishes its API documentation separately.

What is runway-video2video?

runway-video2video is CometAPI’s model identifier for Runway’s video-to-video generation capability, a workflow that takes an existing video clip and re-renders it into a new visual style or motion-driven interpretation using prompt-based guidance. In practice, this is used for stylization, scene transformation, look development, experimental VFX, and turning raw footage into more cinematic or imaginative outputs.

Runway's help documentation describes Video to Video as a way to change the style of a video using a text prompt or an input image as the first frame. Its more recent product guidance notes that while Gen-3 Alpha and Gen-3 Alpha Turbo supported this workflow, Runway now directs users to newer-generation tooling (Gen-4 Aleph) for the latest video-to-video use cases. From a CometAPI integration perspective, however, runway-video2video is the platform model ID you use to access this class of capability.

Main features of runway-video2video

  • Video-guided generation: Starts from an existing video input rather than generating motion from scratch, making it useful for preserving shot structure, timing, and composition while changing the visual result.
  • Prompt-based transformation: Uses natural-language instructions to control the output style, mood, subject reinterpretation, or visual effect direction.
  • Style transfer and creative re-rendering: Well suited for converting footage into animated, cinematic, surreal, or branded visual treatments without manual frame-by-frame editing. This is an inference based on Runway’s described video-to-video styling workflow.
  • Reference-aware workflows: Runway documentation indicates support for image-informed guidance in related video generation flows, which helps steer composition or aesthetic consistency.
  • Portrait and landscape output options: Supported workflows include standard horizontal and vertical output formats, which is useful for social, mobile, and marketing delivery.
  • Short-form production efficiency: The referenced workflow supports short clips up to 20 seconds, which fits ad creatives, concept shots, social posts, and rapid iteration pipelines.
  • API accessibility: Runway provides API documentation and model access patterns, enabling programmatic integration into creative apps, internal tools, and automated media pipelines.
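The limits quoted above (clips up to 20 seconds, a 64 MB input cap, and 1280×768 / 768×1280 output variants) can be checked client-side before submitting a job, avoiding a round trip for requests that would be rejected. A minimal sketch; the limits come from the referenced Gen-3 workflow, the helper name and ratio strings are illustrative, and supported ratios vary by model generation:

```python
# Illustrative pre-flight checks against the limits quoted above.
# Limits reflect the referenced Gen-3 workflow; adjust per model generation.
MAX_DURATION_S = 20
MAX_INPUT_BYTES = 64 * 1024 * 1024
SUPPORTED_RATIOS = {"1280:768", "768:1280"}

def validate_job(duration_s: float, size_bytes: int, ratio: str) -> list[str]:
    """Return a list of constraint violations (empty if the job looks valid)."""
    errors = []
    if duration_s > MAX_DURATION_S:
        errors.append(f"clip is {duration_s}s; max is {MAX_DURATION_S}s")
    if size_bytes > MAX_INPUT_BYTES:
        errors.append(f"input is {size_bytes} bytes; max is {MAX_INPUT_BYTES}")
    if ratio not in SUPPORTED_RATIOS:
        errors.append(f"ratio {ratio!r} not in {sorted(SUPPORTED_RATIOS)}")
    return errors
```

Calling `validate_job(12.0, 10_000_000, "1280:768")` returns an empty list, while an over-long or oversized clip yields human-readable error strings you can surface before the upload starts.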

How to access and integrate runway-video2video

Step 1: Sign Up for API Key

To get started, sign up on CometAPI and generate your API key from the dashboard. Once you have your key, store it securely and use it to authenticate requests to the CometAPI endpoint.
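Keys are typically exported as an environment variable rather than hardcoded, so that the code samples in Step 2 can read them. The variable name `COMETAPI_API_KEY` matches those samples; the key value below is a placeholder:

```shell
# Store the API key in the environment (placeholder value shown).
# The Step 2 examples read it via $COMETAPI_API_KEY / os.environ.
export COMETAPI_API_KEY="sk-your-key-here"
```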

Step 2: Send Requests to runway-video2video API

Use Runway's official API format via CometAPI. The endpoint is POST /runwayml/v1/video_to_video. You must include the X-Runway-Version header.

curl https://api.cometapi.com/runwayml/v1/video_to_video \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -H "X-Runway-Version: 2024-11-06" \
  -d '{
    "model": "gen4_aleph",
    "videoUri": "https://example.com/your-source-video.mp4",
    "promptText": "Transform this footage into a cinematic neon-lit sci-fi sequence with dramatic atmosphere and smooth motion.",
    "seed": 1,
    "ratio": "1280:720",
    "references": [],
    "contentModeration": {
      "publicFigureThreshold": "auto"
    }
  }'
The equivalent request in Python:

import os
import requests

resp = requests.post(
    "https://api.cometapi.com/runwayml/v1/video_to_video",
    headers={
        "Authorization": f"Bearer {os.environ['COMETAPI_API_KEY']}",
        "Content-Type": "application/json",
        "X-Runway-Version": "2024-11-06",
    },
    json={
        "model": "gen4_aleph",
        "videoUri": "https://example.com/your-source-video.mp4",
        "promptText": "Transform this footage into a cinematic neon-lit sci-fi sequence.",
        "seed": 1,
        "ratio": "1280:720",
        "references": [],
        "contentModeration": {"publicFigureThreshold": "auto"},
    },
)
resp.raise_for_status()
print(resp.json())

Step 3: Retrieve and Verify Results

The API returns a task object. Poll the task status endpoint to check when the video generation is complete, then retrieve the output video URL. For production use, add retries, status polling, and logging to ensure reliable integration with runway-video2video.
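The polling loop described above can be sketched as follows. This assumes the task-status endpoint follows Runway's documented `GET /v1/tasks/{id}` pattern, proxied here as `/runwayml/v1/tasks/{id}` on CometAPI; verify the exact path and status values against the CometAPI and Runway docs before relying on them:

```python
import os
import time
import requests

API_BASE = "https://api.cometapi.com/runwayml/v1"  # assumed proxy path
HEADERS = {
    "Authorization": f"Bearer {os.environ.get('COMETAPI_API_KEY', '')}",
    "X-Runway-Version": "2024-11-06",
}

# Terminal states per Runway's task lifecycle; non-terminal states
# (e.g. PENDING, RUNNING) mean we should keep polling.
TERMINAL_STATES = {"SUCCEEDED", "FAILED", "CANCELLED"}

def is_terminal(status: str) -> bool:
    return status in TERMINAL_STATES

def wait_for_task(task_id: str, interval_s: float = 5.0,
                  timeout_s: float = 600.0) -> dict:
    """Poll the task until it reaches a terminal state, with a hard timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resp = requests.get(f"{API_BASE}/tasks/{task_id}", headers=HEADERS)
        resp.raise_for_status()
        task = resp.json()
        if is_terminal(task.get("status", "")):
            return task  # a SUCCEEDED task carries the output video URL(s)
        time.sleep(interval_s)
    raise TimeoutError(f"task {task_id} did not finish within {timeout_s}s")
```

In production you would wrap the GET in retry logic with backoff and log each status transition, as the step above suggests.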

Features of runway_video2video

Explore the core capabilities of runway_video2video to improve performance, usability, and the overall experience.

Pricing of runway_video2video

runway_video2video is offered at competitive pricing for different budgets and usage needs, with flexible plans that scale as your requirements grow.

  • CometAPI price: $0.20 per request
  • Official price: $0.25 per request
  • Discount: -20%
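As a quick sanity check on the listed figures, $0.20 per request via CometAPI against the official $0.25 does work out to a 20% saving:

```python
# Verify the listed discount: $0.20 via CometAPI vs $0.25 official.
comet_price = 0.20
official_price = 0.25
discount = (comet_price - official_price) / official_price  # negative = cheaper
print(f"{discount:.0%}")  # -20%
print(f"100 requests: ${comet_price * 100:.2f} vs ${official_price * 100:.2f}")
```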

Sample code and API for runway_video2video

Get complete sample code and API resources to streamline integration of runway_video2video. We provide step-by-step guidance to help you unlock the model's potential.

More models


Sora 2 Pro

Per second: $0.24
Sora 2 Pro is our most advanced and most capable media generation model, producing videos with synchronized audio. It can create detailed, dynamic video clips from natural language or images.

Sora 2

Per second: $0.08
An extremely capable video generation model with sound effects, supporting a chat format.

mj_fast_video

Per request: $0.60
Midjourney video generation

Grok Imagine Video

Per second: $0.04
Generate videos from text prompts, animate still images, or edit existing videos using natural language. The API lets you configure the duration, aspect ratio, and resolution of generated videos, and the SDK handles asynchronous polling automatically.

Veo 3.1 Pro

Per second: $0.25
Veo 3.1 Pro refers to the high-capability access and configuration tier of Google's Veo 3.1 family: this generation of short-duration, audio-capable video models brings richer native audio, improved narrative and editing control, and scene-extension tools.

Veo 3.1

Per second: $0.05
Veo 3.1 is Google's incremental but significant update to its Veo text- and image-to-video family, adding richer native audio, longer and more controllable video outputs, and finer-grained editing and scene-level control.