© 2026 CometAPI · All rights reserved

runway_act_one

Per Request:$0.4
Commercial Use

Technical Specifications of runway-act-one

Model ID: runway-act-one
Provider: Runway
Model type: Character-performance animation / image- and video-to-video facial performance transfer
Core function: Applies a driving performance video to a character reference image or character reference video to animate expressions, lip movement, and facial motion.
Base model context: Runway presents Act-One as a feature used with Gen-3 Alpha and Gen-3 Alpha Turbo workflows.
Primary inputs: A driving performance video plus either a character reference image or a character reference video.
Recommended subject type: Human, forward-facing, single-face shots with clear facial features and minimal occlusion.
Supported output duration: Up to 30 seconds in the documented Gen-3 Alpha / Turbo Act-One workflow.
Output resolutions: 1280×768 and 768×1280 in the documented workflow.
Frame rate: 24 fps.
Motion control: Motion Intensity setting from 1 to 5; lower values favor stability, higher values increase expressiveness.
Platform availability: Runway documents Act-One availability on web and iOS for the referenced workflow.
Access considerations: Runway states the feature is available to Standard-plan users and above in its product workflow, while API access uses Runway's developer platform, API keys, versioned headers, and a task-based generation flow.

What is runway-act-one?

runway-act-one is CometAPI’s platform identifier for Runway’s Act-One model/workflow, a character animation system designed to transfer a recorded performance onto a target character. In Runway’s own description, users upload a driving performance and use it to influence a character reference image or video, especially for expressions, mouth movement, and facial acting.

In practice, this makes runway-act-one well suited for talking-character clips, stylized portrait animation, AI avatar shots, and performance-driven facial animation where you want more control than prompt-only video generation. Runway’s guidance emphasizes that the best results come from well-lit, forward-facing, human single-face inputs with the face visible for the full shot.

Although Act-One is described in Runway’s help documentation as part of the Gen-3 Alpha and Turbo creation flow, developers integrating Runway programmatically should understand that Runway’s API is task-based, authenticated by bearer token, and versioned using the X-Runway-Version header.
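As a concrete sketch of that pattern, the snippet below builds the header set a direct Runway API call would carry. The default version date is illustrative only; check Runway's developer documentation for the current X-Runway-Version value to pin.

```python
import os

def runway_headers(api_key: str, api_version: str = "2024-11-06") -> dict:
    """Headers for Runway's task-based API: bearer auth plus a pinned
    X-Runway-Version. The default version date here is an example value."""
    return {
        "Authorization": f"Bearer {api_key}",
        "X-Runway-Version": api_version,
        "Content-Type": "application/json",
    }

if __name__ == "__main__":
    # Reads the key from the environment rather than hardcoding it.
    print(runway_headers(os.environ.get("RUNWAY_API_KEY", "demo-key")))
```

Pinning the version header keeps your integration stable when Runway ships behavior changes behind a new version date.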

Main features of runway-act-one

  • Performance-driven facial animation: Transfers a source performance onto a target character, allowing expressions and mouth movement to follow the driving video rather than relying only on text prompting.
  • Character image or video input: Supports either a still character reference image for more stable outputs or a character reference video for greater motion flexibility.
  • Expression and lip-sync influence: Runway explicitly positions Act-One as a way to precisely influence expressions, mouth movements, and related facial behavior.
  • Adjustable motion intensity: Includes a Motion Intensity control from 1 to 5 so users can tune outputs toward steadier motion or stronger expressiveness.
  • Portrait-friendly production specs: The documented workflow supports both landscape and portrait outputs at 24 fps, with output lengths up to 30 seconds.
  • Best-practice driven reliability: Runway provides concrete input guidelines such as shoulders-up framing, forward-facing faces, minimal occlusion, and consistent visibility to improve success rate and quality.
  • Task-based API compatibility: For developer workflows, Runway’s API uses authenticated task submission and task retrieval patterns, which fit well into asynchronous production pipelines.
  • Reusable asset ingestion options: Runway supports direct HTTPS inputs and ephemeral uploaded assets via runway:// URIs, which can help when automating media-heavy workflows.
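The last point can be sketched as a small helper: a pipeline that mixes direct HTTPS inputs with ephemeral uploads can branch on the URI scheme to decide whether an asset needs re-uploading before reuse. The helper name is ours, not part of Runway's API.

```python
def asset_kind(uri: str) -> str:
    """Classify a media input as a direct HTTPS URL or an ephemeral
    runway:// asset reference (valid only for a limited window)."""
    if uri.startswith("runway://"):
        return "ephemeral"
    if uri.startswith("https://"):
        return "https"
    raise ValueError(f"unsupported asset URI scheme: {uri}")
```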

How to access and integrate runway-act-one

Step 1: Sign Up for API Key

To call runway-act-one, first create an account on CometAPI and generate an API key from the dashboard. Store it securely as an environment variable so your application can authenticate requests without hardcoding secrets in source files.
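A minimal sketch of that setup in Python, assuming the key is exported as COMETAPI_API_KEY (the variable name used by the curl example in Step 2):

```python
import os

def get_api_key(var: str = "COMETAPI_API_KEY") -> str:
    """Fetch the CometAPI key from the environment; fail fast if unset."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set {var} in your environment before calling the API")
    return key
```

Exporting the key once (`export COMETAPI_API_KEY=...`) keeps the secret out of source files and version control.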

Step 2: Send Requests to runway-act-one API

Use CometAPI's Runway-compatible endpoint at POST /runway/pro/act_one. The body below is a minimal placeholder: Act-One generations are driven by a performance video and a character reference, so check CometAPI's documentation for the exact request fields before going to production.

curl https://api.cometapi.com/runway/pro/act_one \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -d '{
    "prompt": "Your prompt here."
  }'

Step 3: Retrieve and Verify Results

The API returns a task object. Poll the task status via POST /runway/feed with the task ID to check when generation is complete, then retrieve the output URL.
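A polling loop for that flow might look like the sketch below. The field names ("taskId", "status") and the terminal status strings are assumptions modeled on Runway's task lifecycle; verify them against actual CometAPI responses.

```python
import json
import time
import urllib.request

API_BASE = "https://api.cometapi.com"

def is_terminal(task: dict) -> bool:
    """True once a task has finished, successfully or not.
    Status strings are assumed; confirm them in real responses."""
    return task.get("status") in ("SUCCEEDED", "FAILED")

def poll_task(task_id: str, api_key: str, interval: float = 5.0) -> dict:
    """POST the task ID to /runway/feed until the task reaches a terminal state."""
    body = json.dumps({"taskId": task_id}).encode()
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    while True:
        req = urllib.request.Request(f"{API_BASE}/runway/feed",
                                     data=body, headers=headers)
        with urllib.request.urlopen(req) as resp:
            task = json.load(resp)
        if is_terminal(task):
            return task
        time.sleep(interval)  # avoid hammering the endpoint between checks
```

Because generation is asynchronous, this pattern drops cleanly into a background worker or job queue rather than blocking a request handler.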


Pricing for runway_act_one

Explore competitive pricing for runway_act_one, designed to fit various budgets and usage needs. Our flexible plans ensure you only pay for what you use, making it easy to scale as your requirements grow. Discover how runway_act_one can enhance your projects while keeping costs manageable.
Comet Price: $0.4 per request
Official Price: $0.5 per request
Discount: -20%

Sample code and API for runway_act_one

Access comprehensive sample code and API resources for runway_act_one to streamline your integration process. Our detailed documentation provides step-by-step guidance, helping you leverage the full potential of runway_act_one in your projects.
