Technical Specifications of runway-video2video
| Specification | Details |
|---|---|
| Model ID | runway-video2video |
| Provider | Runway |
| Model type | Video-to-video generation / stylized video transformation |
| Primary function | Transforms an input video into a new output video guided by a text prompt, and in some workflows by a reference image or first frame. |
| Backing Runway capability | Commonly associated with Runway's Video to Video workflow on Gen-3 Alpha and Gen-3 Alpha Turbo; Runway's newer guidance points users toward Gen-4 Aleph for the latest video-to-video capability. |
| Input types | Input video is required; prompt text is used for transformation guidance. Runway documentation also describes image-guided styling in video workflows. |
| Output | AI-transformed video clip |
| Supported durations | Up to 20 seconds per clip in Runway's Gen-3 video-to-video workflow. |
| Output resolutions | 1280×768 and, for supported variants, 768×1280. |
| Max input size | 64 MB in the referenced Gen-3 video-to-video workflow. |
| Access | Available through API-based integration and platform workflows from Runway. Runway publishes its API documentation separately. |
What is runway-video2video?
runway-video2video is CometAPI’s model identifier for Runway’s video-to-video generation capability, a workflow that takes an existing video clip and re-renders it into a new visual style or motion-driven interpretation using prompt-based guidance. In practice, this is used for stylization, scene transformation, look development, experimental VFX, and turning raw footage into more cinematic or imaginative outputs.
Runway's help documentation describes Video to Video as a way to change the style of a video using a text prompt or an input image as the first frame. While Gen-3 Alpha and Gen-3 Alpha Turbo supported this workflow, Runway's more recent guidance points users toward newer-generation tooling (such as Gen-4 Aleph) for the latest video-to-video use cases. From a CometAPI integration perspective, however, runway-video2video is the platform model ID you use to access this class of capability.
Main features of runway-video2video
- Video-guided generation: Starts from an existing video input rather than generating motion from scratch, making it useful for preserving shot structure, timing, and composition while changing the visual result.
- Prompt-based transformation: Uses natural-language instructions to control the output style, mood, subject reinterpretation, or visual effect direction.
- Style transfer and creative re-rendering: Well suited to converting footage into animated, cinematic, surreal, or branded visual treatments without manual frame-by-frame editing (an inference from Runway's described video-to-video styling workflow).
- Reference-aware workflows: Runway documentation indicates support for image-informed guidance in related video generation flows, which helps steer composition or aesthetic consistency.
- Portrait and landscape output options: Supported workflows include standard horizontal and vertical output formats, which is useful for social, mobile, and marketing delivery.
- Short-form production efficiency: The referenced workflow supports short clips up to 20 seconds, which fits ad creatives, concept shots, social posts, and rapid iteration pipelines.
- API accessibility: Runway provides API documentation and model access patterns, enabling programmatic integration into creative apps, internal tools, and automated media pipelines.
How to access and integrate runway-video2video
Step 1: Sign Up for API Key
To get started, sign up on CometAPI and generate your API key from the dashboard. Once you have your key, store it securely and use it to authenticate requests to the CometAPI endpoint.
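As a minimal sketch of the "store it securely" step, the helper below reads the key from an environment variable and builds the request headers used in the examples that follow. The function name `build_auth_headers` is illustrative, not part of any CometAPI SDK.

```python
import os


def build_auth_headers(env: dict = os.environ) -> dict:
    """Build CometAPI request headers from the stored key; fail fast if it is missing."""
    key = env.get("COMETAPI_API_KEY")
    if not key:
        raise RuntimeError("Set COMETAPI_API_KEY before making requests")
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
        "X-Runway-Version": "2024-11-06",
    }
```

Keeping the key in an environment variable (or a secrets manager) avoids hardcoding credentials into source control.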
Step 2: Send Requests to runway-video2video API
Use Runway's official API format via CometAPI. The endpoint is POST `/runwayml/v1/video_to_video`, and every request must include the `X-Runway-Version` header.
```shell
curl https://api.cometapi.com/runwayml/v1/video_to_video \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -H "X-Runway-Version: 2024-11-06" \
  -d '{
    "model": "gen4_aleph",
    "videoUri": "https://example.com/your-source-video.mp4",
    "promptText": "Transform this footage into a cinematic neon-lit sci-fi sequence with dramatic atmosphere and smooth motion.",
    "seed": 1,
    "ratio": "1280:720",
    "references": [],
    "contentModeration": {
      "publicFigureThreshold": "auto"
    }
  }'
```
The equivalent request in Python:

```python
import os

import requests

resp = requests.post(
    "https://api.cometapi.com/runwayml/v1/video_to_video",
    headers={
        "Authorization": f"Bearer {os.environ['COMETAPI_API_KEY']}",
        "Content-Type": "application/json",
        "X-Runway-Version": "2024-11-06",
    },
    json={
        "model": "gen4_aleph",
        "videoUri": "https://example.com/your-source-video.mp4",
        "promptText": "Transform this footage into a cinematic neon-lit sci-fi sequence.",
        "seed": 1,
        "ratio": "1280:720",
        "references": [],
        "contentModeration": {"publicFigureThreshold": "auto"},
    },
)
# The response is a task object, not the finished video; see Step 3.
print(resp.json())
```
Step 3: Retrieve and Verify Results
The API returns a task object. Poll the task status endpoint to check when the video generation is complete, then retrieve the output video URL. For production use, add retries, status polling, and logging to ensure reliable integration with runway-video2video.
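The polling loop described above can be sketched as follows. The task endpoint path (`/runwayml/v1/tasks/{id}`) and the status values are assumptions modeled on Runway's task API conventions, not confirmed CometAPI specifics; check the published API documentation before relying on them.

```python
import os
import time

import requests

API_BASE = "https://api.cometapi.com/runwayml/v1"
# Assumed terminal states, modeled on Runway's task lifecycle.
TERMINAL_STATES = {"SUCCEEDED", "FAILED", "CANCELLED"}


def is_terminal(status: str) -> bool:
    """Return True when a task status means polling can stop."""
    return status in TERMINAL_STATES


def poll_task(task_id: str, interval: float = 5.0, timeout: float = 600.0) -> dict:
    """Poll the task endpoint until the task reaches a terminal state or we time out."""
    headers = {
        "Authorization": f"Bearer {os.environ['COMETAPI_API_KEY']}",
        "X-Runway-Version": "2024-11-06",
    }
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(f"{API_BASE}/tasks/{task_id}", headers=headers)
        resp.raise_for_status()
        task = resp.json()
        if is_terminal(task.get("status", "")):
            # On success, the task payload should contain the output video URL.
            return task
        time.sleep(interval)
    raise TimeoutError(f"Task {task_id} did not finish within {timeout} seconds")
```

In production, wrap the GET request in retry logic with backoff and log each status transition so stalled or failed tasks are easy to diagnose.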