runwayml_image_to_video

Per Request:$0.32
Commercial Use

Technical Specifications of runwayml-image-to-video

| Specification | Details |
| --- | --- |
| Model ID | runwayml-image-to-video |
| Provider | Runway |
| Primary capability | Generates video from a starting image using a text prompt and image-conditioned motion generation. |
| API task type | Asynchronous image-to-video generation task started via Runway's image-to-video endpoint. |
| Supported underlying models in Runway API | Runway's image-to-video endpoint accepts multiple video models, including gen4_turbo, gen3a_turbo, veo3.1, veo3.1_fast, and veo3. |
| Required inputs | An input image plus model-specific generation parameters; prompt text is commonly used to describe desired motion and scene behavior. |
| Authentication | Bearer API key in the Authorization header. |
| Versioning header | Runway's API reference requires the X-Runway-Version header, with the documented value 2024-11-06 for the image-to-video endpoint. |
| Output format | Generated video asset returned through task-based API processing. |
| Typical frame rate | Runway documentation lists 24 fps for Gen-3 Alpha image-to-video outputs. |
| Common output resolution | Runway documents 1280×768 for Gen-3 Alpha and both 1280×768 and 768×1280 for Gen-3 Alpha Turbo, depending on orientation. |
| Image input guidance | Runway recommends avoiding reference images smaller than 640×640 px or larger than 4K. |
| Usage pattern on CometAPI | Accessed through CometAPI using the platform model identifier runwayml-image-to-video. |
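
The image-size guidance above can be enforced client-side before creating a task. A minimal sketch, treating the "4K" upper bound as 4096 px on the longer side (an assumption; Runway's docs state "4K" without a pixel figure) and 640 px as the per-side minimum:

```python
def reference_image_ok(width: int, height: int,
                       min_side: int = 640, max_side: int = 4096) -> bool:
    """Return True if an input image falls inside Runway's recommended size range."""
    return min(width, height) >= min_side and max(width, height) <= max_side
```

Rejecting out-of-range images before submission avoids paying for tasks that are likely to degrade or fail.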

What is runwayml-image-to-video?

runwayml-image-to-video is CometAPI’s model identifier for accessing Runway’s image-to-video generation capability. In practice, this category of model takes a still image as the visual starting point and transforms it into a generated video, typically guided by prompt text that describes motion, camera behavior, atmosphere, or scene evolution.

Runway positions image-to-video as part of its broader video generation stack. Its documentation for Gen-3 Alpha describes image-to-video as a way to animate a supplied image, with prompting focused especially on movement rather than re-describing everything already visible in the frame.

From an API perspective, Runway exposes image-to-video generation as a task-creation endpoint, which means requests typically start a job first and then require polling or retrieval steps to obtain the final video result after processing completes.

Main features of runwayml-image-to-video

  • Image-conditioned video generation: Starts from a still image and generates motion-driven video output rather than creating video from text alone.
  • Prompt-guided motion control: Works best when prompts describe motion, camera movement, and scene changes to animate the supplied frame coherently.
  • Asynchronous task workflow: Uses a start-generation request followed by result retrieval, which fits production systems that queue and monitor long-running media jobs.
  • Multiple underlying model options: Runway’s image-to-video API endpoint supports several backend video models, giving developers flexibility around quality, speed, and cost profiles.
  • Production-style API authentication: Uses standard Bearer-token authentication and explicit API version headers for controlled integrations.
  • Support for portrait and landscape outputs: Runway documents output sizes for both horizontal and vertical generation modes in supported model families.
  • Reference-image workflow compatibility: The API is designed around image references, and Runway provides input-size recommendations to help maintain generation quality.
  • Extensible creative pipeline fit: Runway’s broader platform includes related video-generation workflows such as keyframes, expansion, and other generation modes, making image-to-video useful as one stage in a larger creative pipeline.

How to access and integrate runwayml-image-to-video

Step 1: Sign Up for API Key

To get started, sign up on CometAPI and generate your API key from the dashboard. After you have an API key, store it securely and use it in the Authorization header for all requests.
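
A minimal Python sketch of that setup, reading the key from an environment variable rather than hard-coding it (the variable name `COMETAPI_API_KEY` matches the curl example in Step 2; the version header value comes from the spec table above):

```python
import os

RUNWAY_VERSION = "2024-11-06"  # documented X-Runway-Version for this endpoint

def auth_headers(api_key: str) -> dict:
    """Build the headers every request to the image-to-video endpoint needs."""
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
        "X-Runway-Version": RUNWAY_VERSION,
    }

# Read the key from the environment so it never lands in source control.
api_key = os.environ.get("COMETAPI_API_KEY", "")
```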

Step 2: Send Requests to runwayml-image-to-video API

Use Runway's official request format via CometAPI. The endpoint is POST /runwayml/v1/image_to_video, and every request must include the X-Runway-Version header. At minimum, pass a supported model, the input image (promptImage), and prompt text describing the desired motion.

curl https://api.cometapi.com/runwayml/v1/image_to_video \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -H "X-Runway-Version: 2024-11-06" \
  -d '{
    "model": "gen4_turbo",
    "promptImage": "https://example.com/input.jpg",
    "promptText": "Your prompt here.",
    "ratio": "1280:720"
  }'

Step 3: Retrieve and Verify Results

The API returns a task object with a task ID. Poll GET /runwayml/v1/tasks/{task_id} to check generation status, then retrieve the output URL from the completed task response.
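
A hedged Python sketch of that create-then-poll loop using only the standard library. The terminal status names `SUCCEEDED` and `FAILED` are assumptions about the task schema, not confirmed values; check the actual response your account returns:

```python
import json
import time
import urllib.request

BASE = "https://api.cometapi.com/runwayml/v1"  # base path from the curl example above
TERMINAL = {"SUCCEEDED", "FAILED"}  # assumed terminal task states

def is_done(status: str) -> bool:
    """True once a task status means polling can stop (case-insensitive)."""
    return status.upper() in TERMINAL

def poll_task(task_id: str, api_key: str,
              interval: float = 5.0, timeout: float = 600.0) -> dict:
    """Poll GET /tasks/{task_id} until the task reaches a terminal state."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            f"{BASE}/tasks/{task_id}",
            headers={
                "Authorization": f"Bearer {api_key}",
                "X-Runway-Version": "2024-11-06",
            },
        )
        with urllib.request.urlopen(req) as resp:
            task = json.load(resp)
        if is_done(task.get("status", "")):
            return task  # completed task object; output URL lives in its fields
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```

In production, prefer exponential backoff over a fixed interval, and persist the task ID so a crashed worker can resume polling.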


Pricing for runwayml_image_to_video

CometAPI offers runwayml_image_to_video at a 20% discount relative to the official per-request price:

| Comet Price (USD) | Official Price (USD) | Discount |
| --- | --- | --- |
| $0.32 per request | $0.40 per request | -20% |


More Models

Doubao-Seedance-2-0

Per Second:$0.07
Seedance 2.0 is ByteDance’s next-generation multimodal video foundation model focused on cinematic, multi-shot narrative video generation. Unlike single-shot text-to-video demos, Seedance 2.0 emphasizes reference-based control (images, short clips, audio), coherent character/style consistency across shots, and native audio/video synchronization — aiming to make AI video useful for professional creative and previsualization workflows.
Sora 2

Per Second:$0.08
A highly capable video generation model with native sound effects; supports a chat-style request format.
mj_fast_video

Per Request:$0.6
Midjourney video generation
Grok Imagine Video

Per Second:$0.04
Generate videos from text prompts, animate still images, or edit existing videos with natural language. The API supports configurable duration, aspect ratio, and resolution for generated videos — with the SDK handling the asynchronous polling automatically.
Veo 3.1 Pro

Per Second:$0.25
Veo 3.1-Pro refers to the high-capability access/configuration of Google’s Veo 3.1 family — a generation of short-form, audio-enabled video models that add richer native audio, improved narrative/editing controls and scene-extension tools.
Veo 3.1

Per Second:$0.05
Veo 3.1 is Google’s incremental-but-significant update to its Veo text-and-image→video family, adding richer native audio, longer and more controllable video outputs, and finer editing and scene-level controls.