Runway/Act_two

Act-Two is Runway’s next-generation AI performance capture and character animation tool: it ingests a short driving performance (a webcam or phone video of someone acting a scene) plus a character reference (image or video) and generates an animated character performance that transfers body, facial expression and hand motion to the character. Act-Two is offered inside Runway’s web product and as a model available through Runway’s API ecosystem.
from openai import OpenAI

# CometAPI exposes an OpenAI-compatible endpoint, so the standard OpenAI
# client works once base_url and api_key point at CometAPI.
client = OpenAI(
    base_url="https://api.cometapi.com/v1",
    api_key="<YOUR_API_KEY>",  # your CometAPI key (sk-...)
)

response = client.chat.completions.create(
    model="act_two",
    messages=[
        {
            "role": "system",
            "content": "You are an AI assistant who knows everything.",
        },
        {
            "role": "user",
            "content": "Tell me, why is the sky blue?",
        },
    ],
)

# The generated reply lives on the first choice.
message = response.choices[0].message.content

print(f"Assistant: {message}")


Introduction to the Act-Two model

act_two is designed to democratize high-fidelity animation by turning short “driving performance” clips into fully animated character sequences. The model focuses on expressive fidelity — transferring facial micro-expressions, lip and mouth motion, finger/hand gestures and full-body posture — while also adding plausible environmental motion when the character input is an image. act_two is positioned as an evolution of Runway’s earlier Act models and is tightly integrated into Runway’s Gen-4 video toolset and API.

What Act-Two does

  • Full-body performance transfer: maps head, face, torso and hands from a single driving video to a character reference.
  • Character input flexibility: accepts either a character image or a reference video as the target.
  • Gesture control: when using a character image, you can drive hand/body gestures through the driving clip and adjust gesture influence.
  • Automatic environmental motion: adds subtle background/environment motion for image-based characters to avoid “floating” results.

Technical details & task constraints

Inputs

  • Driving performance: a video that contains the acting performance (movement, gestures, audio).
  • Character reference: either a still image or a video of the character you want animated (Runway).

Outputs & formats

Supported aspect ratios and resolutions include 1280×720 (16:9), 720×1280 (9:16), 960×960 (1:1) and a small set of other presets; 24 FPS is the standard frame rate for outputs. There is auto-cropping to match aspect ratio targets.
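
Since an unsupported ratio will simply fail the task, a small client-side check can catch it before submission. A minimal Python sketch, assuming only the three presets named above (the API doc lists the full set):

# Known Act-Two output presets from the list above; the API expects the
# "W:H" string form used in request bodies (e.g., "1280:720").
SUPPORTED_RATIOS = {
    "1280:720",   # 16:9 landscape
    "720:1280",   # 9:16 portrait
    "960:960",    # 1:1 square
}

def validate_ratio(ratio: str) -> str:
    """Fail fast locally instead of letting the API reject the task."""
    if ratio not in SUPPORTED_RATIOS:
        raise ValueError(
            f"Unsupported ratio {ratio!r}; known presets: {sorted(SUPPORTED_RATIOS)}"
        )
    return ratio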

Processing notes: best results come when the driving performance and the character face the same general direction and occupy similar screen space; inputs with extreme perspective mismatches or very distant/low-resolution subjects can produce degraded results. Runway’s moderation and content filters apply to uploaded assets; tasks may be rejected if content violates policy.

Limitations and known failure modes

  • Short-duration focus: act_two is optimized for short clips (3 s minimum; typical workflows stay under 30 s). For feature-length mocap you’ll still need traditional capture or chunked workflows (see the sketch after this list).
  • Moderation / content safety: Runway’s moderation can block or fail tasks for flagged inputs; accounts with excessive moderation failures may be rate-limited or suspended. Plan content policy compliance into automation.
  • Edge cases: extremely complex multi-person performances, highly occluded hands, or ultra-stylized references can produce artifacts (jitter, incorrect hand poses, or expression mismatch). Manual cleanup or hybrid pipelines (light rotoscoping / keyframe repair) may still be required.
  • Not a full motion-capture replacement in all cases: while Act-Two can replace many traditional setups for short scenes and prototyping, high-end film/CGI pipelines that require sub-millimeter accuracy, multiple actors interacting physically, or on-set timing sync will still rely on marker systems / performance capture stages.
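
For clips longer than the short-duration sweet spot, one chunked workflow (an illustration of a possible approach, not an official Runway pipeline) is to split the driving performance into short segments with ffmpeg and submit one task per segment:

import subprocess

# Split a long driving performance into short segments so each one stays
# inside Act-Two's short-clip sweet spot. With stream copy (-c copy),
# ffmpeg cuts on keyframes, so segment lengths are approximate and
# boundaries may need manual adjustment to avoid splitting mid-gesture.
SEGMENT_SECONDS = 25  # stays under the ~30 s typical-workflow guidance above

def split_driving_video(src: str, out_pattern: str = "segment_%03d.mp4") -> None:
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            "-c", "copy",
            "-f", "segment",
            "-segment_time", str(SEGMENT_SECONDS),
            "-reset_timestamps", "1",
            out_pattern,
        ],
        check=True,
    )

# Each segment is then submitted as its own character_performance task (see
# the API example below) and the rendered outputs are concatenated afterwards.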

Typical use cases

  • Virtual production & previs — rapid blocking and acting tests without a mocap stage.
  • Indie game & animation prototyping — fast character motion generation for short scenes.
  • Commercials & social content — produce character spots and animated talent cheaply and quickly.
  • VFX insertions & motion replacement — augment existing footage by driving a stylized character from an actor’s take.

Comparison with other current solutions

act_two vs Pika Labs / Kaiber / Sora (high level)

  • Act-Two (Runway): excels at performance fidelity for characters (head/face/body/hands) with a single driving-clip paradigm, straightforward API integration for short videos, and predictable credit pricing per second of output.
  • Pika Labs: often highlighted for flexible prompt-to-video and style transfer; may focus more on general video generation and stylization rather than targeted performance transfer.
  • Kaiber: strong at style transforms, music-driven visuals, and general scene generation, but not necessarily as specialized in per-character mocap fidelity.
  • Sora (and similar premium VFX pipelines): oriented toward cinematic quality and extended scene generation; stronger for long sequences and film VFX but more resource-intensive and possibly less accessible for rapid prototypes.

How to call Act-Two API from CometAPI

Required Steps

  • Log in to cometapi.com. If you are not a user yet, please register first.
  • Get an API key as your access credential: in the personal center, open the API token page, click “Add Token” to generate a key (sk-xxxxx), and submit.
  • Use the base URL of the service: https://api.cometapi.com/

Use Method

  1. Select the “act_two” endpoint to send the API request and set the request body. The request method and request body are documented in our API doc; the site also provides an Apifox test environment for convenience.
  2. Replace <YOUR_API_KEY> with your actual CometAPI key from your account.
  3. Insert your question or request into the content field—this is what the model will respond to.
  4. Process the API response to get the generated answer.

CometAPI provides a fully compatible REST API for seamless migration. Key details from the API doc:

  • Endpoint: https://api.cometapi.com/runwayml/v1/character_performance
  • Model Parameter: act_two
  • Authentication: Bearer YOUR_CometAPI_API_KEY
  • Content-Type: application/json
curl --location --request POST 'https://api.cometapi.com/runwayml/v1/character_performance' \
--header 'X-Runway-Version: 2024-11-06' \
--header 'Authorization: Bearer {{api-key}}' \
--header 'Content-Type: application/json' \
--data-raw '{
  "character": {
    "type": "video",
    "uri": "https://filesystem.site/cdn/20250818/wAKbHUoj5EHyqZvEdJbFXn10wXBMUn.mp4"
  },
  "reference": {
    "type": "video",
    "uri": "https://filesystem.site/cdn/20250818/wAKbHUoj5EHyqZvEdJbFXn10wXBMUn.mp4"
  },
  "bodyControl": true,
  "expressionIntensity": 3,
  "seed": 4294967295,
  "model": "act_two",
  "ratio": "1280:720",
  "contentModeration": {
    "publicFigureThreshold": "auto"
  }
}'
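
For reference, a Python sketch of the same call using the requests library. The URIs are placeholders, and the response handling is hedged: video generation is asynchronous, so expect a task identifier to poll rather than a finished file (check the CometAPI doc for the exact response schema):

import os
import requests

API_KEY = os.environ["COMETAPI_KEY"]  # hypothetical env var holding your key
URL = "https://api.cometapi.com/runwayml/v1/character_performance"

payload = {
    "model": "act_two",
    # Placeholder URIs: point these at your own hosted character image
    # and driving-performance clip.
    "character": {"type": "image", "uri": "https://example.com/character.png"},
    "reference": {"type": "video", "uri": "https://example.com/performance.mp4"},
    "bodyControl": True,  # drive body/hand gestures from the reference clip
    "expressionIntensity": 3,
    "ratio": "1280:720",
    "contentModeration": {"publicFigureThreshold": "auto"},
}

resp = requests.post(
    URL,
    json=payload,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "X-Runway-Version": "2024-11-06",
    },
    timeout=60,
)
resp.raise_for_status()

# The response is expected to carry a task identifier to poll rather than
# the finished video itself. The exact schema is not shown on this page;
# inspect the JSON and consult the API doc.
print(resp.json())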


Related posts


Runway/upscale_v1 API

2025-09-14 · anna

Runway/upscale_v1 is Runway’s production-targeted video upscaler model (model id upscale_v1) designed to increase video resolution by 4× up to 4K (capped at 4096 pixels on a side). It’s available through Runway’s API surface and is also packaged on third-party model hosting marketplaces (e.g., CometAPI).


Runway/gen4_image API

2025-09-14 · anna

Gen-4 Image is Runway’s flagship multimodal image-generation model in the Gen-4 family that supports prompted generation plus visual references (you can “@mention” reference images) to produce highly controllable, stylistically consistent outputs for image and image→video pipelines.


Runway/gen4_aleph API

2025-09-14 · anna

Runway Gen-4 Aleph (model id gen4_aleph) is Runway’s in-context, video-to-video model that extends the Gen-4 family with powerful video editing, shot continuation and view-synthesis capabilities. In plain terms: Aleph can take an input clip and perform complex edits — add/remove/replace objects, relight, restyle, generate novel camera angles, and even generate the “next shot” in a sequence — driven by text prompts and optional reference images. This release is presented as a major step toward coherent, multi-shot video generation and in-context editing.
