
runway_video2video

Per request: $0.2
Commercial use

Technical Specifications of runway-video2video

Model ID: runway-video2video
Provider: Runway
Model type: Video-to-video generation / stylized video transformation
Primary function: Transforms an input video into a new output video guided by a text prompt and, in some workflows, by a reference image or first frame.
Backing Runway capability: Commonly associated with Runway's Video to Video workflow on Gen-3 Alpha and Gen-3 Alpha Turbo; Runway's newer guidance points users toward Gen-4 Aleph for the latest video-to-video capability.
Input types: An input video is required; a text prompt guides the transformation. Runway documentation also describes image-guided styling in related video workflows.
Output: AI-transformed video clip
Supported durations: Up to 20 seconds in the referenced Gen-3 video-to-video workflow.
Output resolutions: 1280×768 and, for supported variants, 768×1280.
Max input size: 64 MB in the referenced Gen-3 video-to-video workflow.
Access: Available through API-based integration and platform workflows from Runway. Runway publishes its API documentation separately.

What is runway-video2video?

runway-video2video is CometAPI’s model identifier for Runway’s video-to-video generation capability, a workflow that takes an existing video clip and re-renders it into a new visual style or motion-driven interpretation using prompt-based guidance. In practice, this is used for stylization, scene transformation, look development, experimental VFX, and turning raw footage into more cinematic or imaginative outputs.

Runway’s help documentation describes Video to Video as a way to change the style of a video using a text prompt or an input image as the first frame. More recent product guidance indicates that while Gen-3 Alpha and Turbo supported this workflow, Runway now points users to newer-generation tooling (Gen-4 Aleph) for the latest video-to-video use cases. From a CometAPI integration perspective, however, runway-video2video is the platform model ID you use to access this class of capability.

Main features of runway-video2video

  • Video-guided generation: Starts from an existing video input rather than generating motion from scratch, making it useful for preserving shot structure, timing, and composition while changing the visual result.
  • Prompt-based transformation: Uses natural-language instructions to control the output style, mood, subject reinterpretation, or visual effect direction.
  • Style transfer and creative re-rendering: Well suited for converting footage into animated, cinematic, surreal, or branded visual treatments without manual frame-by-frame editing. This is an inference based on Runway’s described video-to-video styling workflow.
  • Reference-aware workflows: Runway documentation indicates support for image-informed guidance in related video generation flows, which helps steer composition or aesthetic consistency.
  • Portrait and landscape output options: Supported workflows include standard horizontal and vertical output formats, which is useful for social, mobile, and marketing delivery.
  • Short-form production efficiency: The referenced workflow supports short clips up to 20 seconds, which fits ad creatives, concept shots, social posts, and rapid iteration pipelines.
  • API accessibility: Runway provides API documentation and model access patterns, enabling programmatic integration into creative apps, internal tools, and automated media pipelines.

How to access and integrate runway-video2video

Step 1: Sign Up for API Key

To get started, sign up on CometAPI and generate your API key from the dashboard. Once you have your key, store it securely and use it to authenticate requests to the CometAPI endpoint.
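As a minimal sketch of "store it securely", keep the key in an environment variable rather than hard-coding it in source files. The key value below is a placeholder, not a real credential:

```shell
# Keep the CometAPI key out of source code; "sk-xxxx" is a placeholder.
export COMETAPI_API_KEY="sk-xxxx"

# The curl and Python examples below read this variable.
echo "Key is set: ${COMETAPI_API_KEY:+yes}"
```

For long-lived setups, put the export in your shell profile or a secrets manager rather than typing it per session.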

Step 2: Send Requests to runway-video2video API

Use Runway's official API format via CometAPI. The endpoint is POST /runwayml/v1/video_to_video. You must include the X-Runway-Version header.

curl https://api.cometapi.com/runwayml/v1/video_to_video \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -H "X-Runway-Version: 2024-11-06" \
  -d '{
    "model": "gen4_aleph",
    "videoUri": "https://example.com/your-source-video.mp4",
    "promptText": "Transform this footage into a cinematic neon-lit sci-fi sequence with dramatic atmosphere and smooth motion.",
    "seed": 1,
    "ratio": "1280:720",
    "references": [],
    "contentModeration": {
      "publicFigureThreshold": "auto"
    }
  }'

The same request in Python:
import os, requests

resp = requests.post(
    "https://api.cometapi.com/runwayml/v1/video_to_video",
    headers={
        "Authorization": f"Bearer {os.environ['COMETAPI_API_KEY']}",
        "Content-Type": "application/json",
        "X-Runway-Version": "2024-11-06",
    },
    json={
        "model": "gen4_aleph",
        "videoUri": "https://example.com/your-source-video.mp4",
        "promptText": "Transform this footage into a cinematic neon-lit sci-fi sequence.",
        "seed": 1,
        "ratio": "1280:720",
        "references": [],
        "contentModeration": {"publicFigureThreshold": "auto"},
    },
)
print(resp.json())

Step 3: Retrieve and Verify Results

The API returns a task object. Poll the task status endpoint to check when the video generation is complete, then retrieve the output video URL. For production use, add retries, status polling, and logging to ensure reliable integration with runway-video2video.
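The polling loop described above can be sketched as follows. This is a minimal, hedged example: the task endpoint path (`/runwayml/v1/tasks/{id}`) and the exact status strings are assumptions modeled on Runway's task API conventions, so verify them against the current CometAPI and Runway documentation before relying on them.

```python
import os
import time

import requests

COMETAPI_BASE = "https://api.cometapi.com/runwayml/v1"
# Assumed terminal states, modeled on Runway's task lifecycle.
TERMINAL_STATUSES = {"SUCCEEDED", "FAILED", "CANCELLED"}


def is_terminal(status: str) -> bool:
    """Return True when a task has finished, successfully or not."""
    return status.upper() in TERMINAL_STATUSES


def poll_task(task_id: str, interval_s: float = 5.0, timeout_s: float = 600.0) -> dict:
    """Poll the (assumed) task-status endpoint until the task reaches a terminal state."""
    headers = {
        "Authorization": f"Bearer {os.environ['COMETAPI_API_KEY']}",
        "X-Runway-Version": "2024-11-06",
    }
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resp = requests.get(
            f"{COMETAPI_BASE}/tasks/{task_id}", headers=headers, timeout=30
        )
        resp.raise_for_status()
        task = resp.json()
        if is_terminal(task.get("status", "")):
            return task
        time.sleep(interval_s)
    raise TimeoutError(f"Task {task_id} did not finish within {timeout_s}s")


if __name__ == "__main__":
    # "your-task-id" is the id returned by the video_to_video request above.
    task = poll_task("your-task-id")
    if task["status"] == "SUCCEEDED":
        print(task.get("output"))  # typically the output video URL(s)
    else:
        print("Task did not succeed:", task)
```

In production, wrap `poll_task` with retry/backoff on transient HTTP errors and log each status transition, as suggested above.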

Features of runway_video2video

Explore the core features of runway_video2video, designed to improve performance and usability. Learn how these capabilities can benefit your projects and improve the user experience.

Pricing of runway_video2video

Explore runway_video2video's competitive pricing, designed to fit a range of budgets and usage needs. Flexible plans ensure you only pay for what you actually use, letting you scale easily as demand grows. See how runway_video2video can improve your project results while keeping costs under control.

CometAPI price: $0.2 per request
Official price: $0.25 per request
Discount: -20%
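A quick sketch of estimating batch cost under this per-request pricing, using the rates from the table above:

```python
COMETAPI_RATE = 0.20   # USD per request (CometAPI price from the table above)
OFFICIAL_RATE = 0.25   # USD per request (official price)


def batch_cost(num_requests: int, rate: float = COMETAPI_RATE) -> float:
    """Estimated cost in USD for a batch of video-to-video requests."""
    return num_requests * rate


# Savings relative to the official rate, approximately 0.20,
# consistent with the -20% discount shown above.
savings = 1 - COMETAPI_RATE / OFFICIAL_RATE
print(batch_cost(100), f"{savings:.0%}")
```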

Sample code and API for runway_video2video

Access complete sample code and API resources to streamline your runway_video2video integration. Detailed documentation provides step-by-step guidance to help you get the most out of runway_video2video in your projects.

More Models

Sora 2 Pro

Per second: $0.24
Sora 2 Pro is our most advanced and most powerful media generation model, capable of generating video with synchronized audio. It can create richly detailed, dynamic video clips from natural language or images.

Sora 2

Per second: $0.08
A highly capable video generation model with sound effects and support for dialogue formats.

mj_fast_video

Per request: $0.6
Midjourney video generation

Grok Imagine Video

Per second: $0.04
Generates video from text prompts, animates static images, or edits existing video using natural language. The API supports custom duration, aspect ratio, and resolution for generated videos, and the SDK handles asynchronous polling automatically.

Veo 3.1 Pro

Per second: $0.25
Veo 3.1 Pro refers to premium access/configuration of Google's Veo 3.1 family: a generation of audio-capable short-video models with richer native audio, improved narrative/editing controls, and scene-extension tools.

Veo 3.1

Per second: $0.05
Veo 3.1 is an incremental but significant update to Google's Veo text- and image-to-video family, adding richer native audio, longer and more controllable video output, and finer-grained editing and scene-level controls.