mj_turbo_inpaint

Per Request: $0.08
Commercial Use

Technical Specifications of mj-turbo-inpaint

Model ID: mj-turbo-inpaint
Model family: Midjourney-style image editing / inpainting workflow
Primary capability: Image inpainting and localized image editing using masked regions
Input modalities: Image input plus text prompt; typically paired with a mask or selected edit region
Output modality: Edited image
Editing scope: Targeted replacement or regeneration of selected portions of an image while preserving surrounding composition
Performance profile: Turbo-oriented workflow intended for faster turnaround than standard generation modes
Access pattern: Third-party API access to Midjourney-style functionality rather than an official native Midjourney public API
Typical use cases: Object replacement, background cleanup, costume changes, region-specific redesign, compositing, and iterative art direction
Integration style on CometAPI: OpenAI-compatible API access through CometAPI's unified endpoint

What is mj-turbo-inpaint?

mj-turbo-inpaint is CometAPI’s platform identifier for a Midjourney-style inpainting model/workflow focused on fast image edits. Public Midjourney documentation describes inpainting as part of its Editor experience, where users erase or select a region and regenerate only that area from a new prompt while preserving the rest of the image. Midjourney also documents a Turbo mode that is designed to generate images significantly faster than normal fast mode, though at higher GPU cost.

Because Midjourney itself does not provide a broadly available official public API, third-party integrations commonly expose Midjourney-linked operations through intermediary APIs. Public third-party Midjourney API documentation also explicitly lists inpaint support and separate turbo routing, which aligns with CometAPI’s model naming pattern mj-turbo-inpaint. Based on those sources, this model ID should be understood as a turbo-speed Midjourney-compatible inpainting endpoint for localized image editing rather than a pure text-to-image generator.

In practice, developers would use mj-turbo-inpaint when they want to keep most of an existing image intact but selectively modify one region—for example replacing an object, changing clothing, altering a face accessory, refining a background area, or repairing unwanted image elements. This interpretation is an inference from Midjourney’s editor/inpainting behavior and third-party API descriptions of turbo inpaint support.

Main features of mj-turbo-inpaint

  • Localized image editing: Designed for inpainting workflows where only a chosen part of the image is regenerated, helping preserve the original framing, style, and untouched areas.
  • Prompt-guided modifications: Uses natural-language instructions to describe what should appear inside the edited region, making it suitable for controlled creative changes.
  • Turbo-speed execution: Midjourney’s Turbo mode is documented as using a higher-speed GPU pool to generate results faster than standard fast mode, so this model is positioned for lower-latency editing workflows.
  • Creative iteration support: Fast inpaint workflows are useful for trying multiple regional variations quickly during concept development, asset refinement, or design review cycles. This is a practical inference from the combination of inpainting and turbo behavior.
  • Useful for repair and replacement tasks: Well suited to removing distractions, swapping objects, updating backgrounds, and making compositional corrections without regenerating the full image.
  • Aggregator-friendly access: CometAPI provides a unified OpenAI-compatible interface for many models, so mj-turbo-inpaint can be consumed within a consistent API integration pattern alongside other image and language models.

How to access and integrate mj-turbo-inpaint

Step 1: Sign Up for API Key

Sign up for a CometAPI account and generate your API key in the dashboard. CometAPI uses a unified credential for its model catalog, and its public documentation describes the service as OpenAI-compatible, so the same key pattern is used across supported models, including mj-turbo-inpaint. Store the key securely and avoid exposing it in client-side code.
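
As a minimal sketch of that advice (assuming the key is exported in an environment variable named COMETAPI_KEY, matching the sample request later on this page), the key can be read at startup instead of being hard-coded:

```python
import os

def load_cometapi_key(env_var: str = "COMETAPI_KEY") -> str:
    """Read the CometAPI key from the environment, failing fast if it is absent."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; export the key instead of hard-coding it")
    return key
```

Keeping the key out of source files and client-side bundles also means it can be rotated without a code change.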

Step 2: Send Requests to mj-turbo-inpaint API

Point your client to CometAPI’s OpenAI-compatible base URL and send requests using mj-turbo-inpaint as the model identifier. CometAPI publicly documents https://api.cometapi.com/v1 as the base URL for compatible clients. For image workflows, CometAPI’s model articles also reference OpenAI-style image endpoints such as /v1/images/generations and /v1/images/edits; for an inpainting model like mj-turbo-inpaint, the edits-style workflow is the relevant pattern.

import os
import requests

url = "https://api.cometapi.com/v1/images/edits"
headers = {
    "Authorization": f"Bearer {os.environ['COMETAPI_KEY']}",
}

data = {
    "model": "mj-turbo-inpaint",
    "prompt": (
        "Replace the selected area with a polished silver helmet, "
        "cinematic lighting, realistic detail"
    ),
}

# The context manager ensures the upload handle is closed after the request.
with open("input.png", "rb") as image_file:
    files = {
        "image": image_file,
        # Include a mask file as required by your workflow if supported:
        # "mask": open("mask.png", "rb"),
    }
    response = requests.post(url, headers=headers, files=files, data=data, timeout=300)

response.raise_for_status()  # surface HTTP errors before parsing the body
print(response.json())
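
If your workflow supplies the optional mask file, OpenAI-style edits endpoints conventionally expect a PNG whose fully transparent pixels mark the region to regenerate; whether CometAPI's mj-turbo-inpaint route follows that exact convention is an assumption to verify against its documentation. A dependency-free sketch that writes such a mask, opaque everywhere except a rectangular edit region:

```python
import struct
import zlib

def _chunk(tag: bytes, data: bytes) -> bytes:
    # PNG chunk layout: 4-byte length, tag, data, CRC-32 over tag+data.
    return struct.pack(">I", len(data)) + tag + data + struct.pack(">I", zlib.crc32(tag + data))

def make_mask_png(width: int, height: int, region: tuple) -> bytes:
    """RGBA PNG, fully transparent inside region=(x0, y0, x1, y1), opaque white elsewhere."""
    x0, y0, x1, y1 = region
    rows = b""
    for y in range(height):
        row = bytearray([0])  # filter type 0 (None) prefixes each scanline
        for x in range(width):
            alpha = 0 if (x0 <= x < x1 and y0 <= y < y1) else 255
            row += bytes([255, 255, 255, alpha])
        rows += bytes(row)
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 6, 0, 0, 0)  # 8-bit RGBA
    return (b"\x89PNG\r\n\x1a\n"
            + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", zlib.compress(rows))
            + _chunk(b"IEND", b""))

# e.g. mark a 512x512 square in the top-left of a 1024x1024 image for editing:
# open("mask.png", "wb").write(make_mask_png(1024, 1024, (0, 0, 512, 512)))
```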

Step 3: Retrieve and Verify Results

Parse the JSON response, retrieve the returned image URL or encoded output, and verify that the edited region matches your prompt while the untouched areas remain consistent with the source image. For production use, validate file type, resolution, latency, and any asynchronous job metadata your client receives. If your workflow depends on precise masks or region control, test several masks and prompts to confirm how mj-turbo-inpaint behaves in your pipeline. CometAPI’s unified API approach makes it straightforward to automate this verification step in the same integration stack used for other supported models.
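
The exact response shape is not documented on this page; as an assumption based on OpenAI-style image endpoints, the result usually arrives in a data list whose first item carries either a url or a base64-encoded b64_json field. A hedged helper covering both forms:

```python
import base64

def extract_image(payload: dict):
    """Return (url, raw_bytes) from an OpenAI-style images response; one is None."""
    item = payload.get("data", [{}])[0]
    url = item.get("url")
    raw = base64.b64decode(item["b64_json"]) if item.get("b64_json") else None
    return url, raw

# URL-style response:
url, raw = extract_image({"data": [{"url": "https://example.com/out.png"}]})
```

Downloading the URL (or writing the decoded bytes) then lets you diff the edited region against the source image as part of verification.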


Pricing for mj_turbo_inpaint

mj_turbo_inpaint is billed per request. CometAPI's rate is discounted relative to the official per-request price:

Comet Price: $0.08 per request
Official Price: $0.10 per request
Discount: -20%

Sample code and API for mj_turbo_inpaint

Full sample code and the API reference for mj_turbo_inpaint are available in CometAPI's documentation; the request shown above follows the same pattern.
