Runway/upscale_v1 API

Runway/upscale_v1 is Runway’s production-targeted video upscaler model (model id upscale_v1) designed to increase video resolution by 4× up to 4K (capped at 4096 pixels on a side). It’s available through Runway’s API surface and is also packaged on third-party model hosting marketplaces (e.g., CometAPI).
# CometAPI exposes an OpenAI-compatible base URL, so the standard OpenAI SDK can
# be pointed at it. Note that this is CometAPI's generic chat-completions
# quick-start template; upscale_v1 itself is a video-to-video model and is called
# through the dedicated REST endpoint documented further down this page.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cometapi.com/v1",  # CometAPI's OpenAI-compatible endpoint
    api_key="<YOUR_API_KEY>",                # your CometAPI key (sk-...)
)

response = client.chat.completions.create(
    model="upscale_v1",
    messages=[
        {
            "role": "system",
            "content": "You are an AI assistant who knows everything.",
        },
        {
            "role": "user",
            "content": "Tell me, why is the sky blue?",
        },
    ],
)

# The assistant's reply is in the first choice of the response.
message = response.choices[0].message.content
print(f"Assistant: {message}")


Key features

  • 4× upscale to 4K — upscales input video frames by four times, capped at 4096 px on either side (Replicate); see the short sketch after this list for what that implies for output sizes.
  • Short-form support — intended for relatively short clips (Runway’s docs indicate up to 30 s, with some hosting contexts documenting limits of roughly 40 s). Temporal consistency and speed are prioritized for short clips.
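
To make the resolution behaviour concrete, here is a minimal sketch of what the 4× factor and the 4096 px ceiling imply for output sizes. The downscale-to-fit behaviour at the cap is an assumption; only the factor and the ceiling are documented.

def upscaled_dimensions(width: int, height: int, factor: int = 4, cap: int = 4096):
    """Estimate output size for a 4x upscale capped at 4096 px per side."""
    w, h = width * factor, height * factor
    longest = max(w, h)
    if longest > cap:
        # Assumption: the result is scaled down to fit the cap while keeping
        # the aspect ratio; Runway documents only the factor and the ceiling.
        scale = cap / longest
        w, h = round(w * scale), round(h * scale)
    return w, h

print(upscaled_dimensions(960, 540))    # (3840, 2160): a true 4x upscale
print(upscaled_dimensions(1920, 1080))  # (4096, 2304): the 4096 px cap kicks in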

Technical details

  • Input / output: accepts a video file/URL and outputs an upscaled video file (4× upscaling). Max resolution: 4096 px per side; duration limits documented by provider.
  • Task type: a video-to-video upscaler (not a general generative video model), optimized for enhancing existing footage rather than creating new content.

upscale_v1 is offered as a model endpoint (video→video). Runway’s API exposes a video-upscale task that accepts a video asset ID and returns an upscaled video task output; the service handles frame processing and reassembly so developers don’t need to break videos into frames manually.
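
As a rough illustration of that task flow, the sketch below submits a clip and polls until the task settles, using Python's requests library. The base URL, the /tasks/{id} polling path, and the status names follow Runway's developer-API conventions and should be treated as assumptions to verify against the current docs; the input URL is a placeholder.

import time
import requests

BASE = "https://api.dev.runwayml.com/v1"   # Runway's hosted API (CometAPI proxies it under /runwayml/v1)
HEADERS = {
    "Authorization": "Bearer <YOUR_API_KEY>",
    "X-Runway-Version": "2024-11-06",
    "Content-Type": "application/json",
}

# 1) Create the upscale task. The whole video is referenced by URL; frame
#    extraction and reassembly happen server-side.
create = requests.post(
    f"{BASE}/video_upscale",
    headers=HEADERS,
    json={"model": "upscale_v1", "videoUri": "https://example.com/clip.mp4"},
)
create.raise_for_status()
task_id = create.json()["id"]

# 2) Poll the task until it finishes. The status names follow Runway's task
#    API and are assumptions here.
while True:
    task = requests.get(f"{BASE}/tasks/{task_id}", headers=HEADERS).json()
    if task.get("status") in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)

print(task)  # on success, the task output contains the upscaled video URL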

Limitations & known caveats

  • Duration and resolution caps: documented caps (e.g., 4K / 4096 px and short-duration recommendations) — these can differ slightly by platform (Runway docs vs hosting site schemas). Always verify the current per-run limits for your account.
  • Temporal artifacts: like many VSR systems, fast motion or severe compression can produce flicker, ghosting, or inconsistent high-frequency detail across frames.
  • No guaranteed numeric benchmark: Runway has not released a public, single-number benchmark for upscale_v1; objective image/video metrics may diverge from perceived visual quality.
  • Cost at scale: priced per output second; long renders can become expensive compared with local solutions. Plan cost controls (chunking, preview passes) accordingly; see the cost sketch after this list.
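
For budgeting, a back-of-the-envelope estimator like the one below can help decide whether to chunk footage or run preview passes first. It assumes the $0.40000 figure listed on this page is the per-output-second rate, which should be confirmed against current pricing.

PRICE_PER_OUTPUT_SECOND = 0.40  # assumed interpretation of the listed $0.40000 price

def estimate_cost(duration_s: float, clips: int = 1) -> float:
    """Rough spend estimate for upscaling `clips` clips of `duration_s` seconds each."""
    return clips * duration_s * PRICE_PER_OUTPUT_SECOND

print(f"Single 30 s clip:  ${estimate_cost(30):.2f}")       # $12.00
print(f"Ten 30 s segments: ${estimate_cost(30, 10):.2f}")    # $120.00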

Typical and recommended use cases

  • Film & VFX post-production: quick upscaling of dailies or previsual material to 4K for compositing tests.
  • Archival / remastering: raising resolution of older footage for streaming or archival release (best with manual oversight).
  • Marketing & content deliverables: upscaling social or short-form clips to higher-resolution masters.
  • Preprocessing for VFX pipelines: produce higher-res assets for tracking, rotoscoping, or keying steps.

How upscale_v1 compares to other well-known upscalers

  • Real-ESRGAN / ESRGAN-family (image-focused, often applied per-frame): these methods are strong at single-frame perceptual detail but can produce temporal inconsistency when naively applied frame-by-frame to video. upscale_v1 is marketed for video and therefore emphasizes temporal consistency vs. pure per-frame enhancement.
  • EDVR / Swin-based VSR (research models): EDVR and SwinIR variants are strong academic baselines in video super-resolution benchmarks (NTIRE / REDS). Those models are often evaluated with objective metrics (PSNR/SSIM/VMAF) on standardized datasets; they can be heavy to run locally and usually require fine-tuning for real-world degradations. upscale_v1 aims for hosted convenience and production-grade throughput rather than research-bench optimization.
  • Diffusion-based VSR (emerging): recent diffusion-based VSR approaches can produce very natural high-frequency detail but are often more computationally expensive. Runway’s research into latent diffusion suggests they may leverage diffusion-style ideas in some image tasks, but upscale_v1 is presented primarily as a practical, efficient upscaler for video delivered via API.

How to call the upscale_v1 API from CometAPI

Price: $0.40000

Required Steps

  • Log in to cometapi.com; if you don’t have an account yet, register first.
  • Get an API key for authentication: in the personal center, go to API tokens, click “Add Token”, and copy the generated key (sk-xxxxx).
  • Note the base URL of the service: https://api.cometapi.com/

Use Method

  1. Select the “upscale_v1” endpoint and build the request body. The request method and body are documented in the website’s API doc; an Apifox test page is also provided for convenience.
  2. Replace <YOUR_API_KEY> with your actual CometAPI key from your account.
  3. Set the videoUri field to the video you want upscaled; this is the input the model will process.
  4. Process the API response to retrieve the result (see the Python sketch after the cURL example below).

CometAPI provides a fully compatible REST API for seamless migration. Key details (see the API doc):

  • Endpoint: https://api.cometapi.com/runwayml/v1/video_upscale
  • Model Parameter: upscale_v1
  • Authentication: Bearer YOUR_CometAPI_API_KEY
  • Content-Type: application/json
curl --location --request POST 'https://api.cometapi.com/runwayml/v1/video_upscale' \
--header 'X-Runway-Version: 2024-11-06' \
--header 'Authorization: Bearer {{api-key}}' \
--header 'Content-Type: application/json' \
--data-raw '{
  "videoUri": "https://filesystem.site/cdn/20250818/c4gCDVPhiBc6TomRTJ7zNg0KwO1PSJ.mp4",
  "model": "upscale_v1"
}'
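
For step 4 above, the same request expressed in Python might look like the sketch below. The create call is asynchronous, so the immediate response is a task object rather than the finished video; the "id" field and the follow-up polling mirror the earlier task-flow sketch and are assumptions to verify against the API doc.

import requests

resp = requests.post(
    "https://api.cometapi.com/runwayml/v1/video_upscale",
    headers={
        "Authorization": "Bearer <YOUR_API_KEY>",
        "X-Runway-Version": "2024-11-06",
        "Content-Type": "application/json",
    },
    json={
        "videoUri": "https://filesystem.site/cdn/20250818/c4gCDVPhiBc6TomRTJ7zNg0KwO1PSJ.mp4",
        "model": "upscale_v1",
    },
)
resp.raise_for_status()
task = resp.json()
print(task.get("id"), task)  # poll this task id (as in the earlier sketch) to get the upscaled video URL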

See also Runway/Act_two


Related posts

AI Model

Runway/gen4_image API

2025-09-14 anna

Gen-4 Image is Runway’s flagship multimodal image-generation model in the Gen-4 family that supports prompted generation plus visual references (you can “@mention” reference images) to produce highly controllable, stylistically consistent outputs for image and image→video pipelines.

AI Model

Runway/gen4_aleph API

2025-09-14 anna

Runway Gen-4 Aleph (model id /gen4_aleph) is Runway’s in-context, video-to-video model that extends the Gen-4 family with powerful video editing, shot continuation and view-synthesis capabilities. In plain terms: Aleph can take an input clip and perform complex edits — add/remove/replace objects, relight, restyle, generate novel camera angles, and even generate the “next shot” in a sequence — driven by text prompts and optional reference images. This release is presented as a major step toward coherent, multi-shot video generation and in-context editing.

AI Model

Runway/Act_two

2025-09-14 anna

Act-Two is Runway’s next-generation AI performance capture and character animation tool: it ingests a short driving performance (a webcam or phone video of someone acting a scene) plus a character reference (image or video) and generates an animated character performance that transfers body, facial expression and hand motion to the character. Act-Two is offered inside Runway’s web product and as a model available through its API ecosystem.
