mj_fast_describe

Per Request: $0 | Commercial Use

Technical Specifications of mj-fast-describe

Model ID: mj-fast-describe
Provider category: Midjourney-compatible image understanding / prompt description model
Primary modality: Image-to-text
Core function: Analyzes an uploaded image and returns descriptive prompt suggestions that can be used for creative prompting
Typical output: Multiple prompt candidates describing the uploaded image
Speed profile: Fast describe-oriented workflow, intended for quick prompt extraction and ideation
Commercial use: Supported on CometAPI's model listing
Pricing display on CometAPI: Listed as per-request pricing on the CometAPI model page
Integration channel: Accessible through CometAPI's API platform for Midjourney-related models

What is mj-fast-describe?

mj-fast-describe is a Midjourney-oriented image-to-text model endpoint exposed through CometAPI that is designed to inspect an input image and generate prompt-style textual descriptions. Based on Midjourney’s official Describe documentation, the underlying describe workflow is meant to help users generate creative prompts by analyzing an uploaded image and offering words and phrases that characterize it.

In practice, this kind of model is useful when you want to reverse-engineer an image into prompt ideas, bootstrap new generations from visual references, or discover stylistic language you might not have written manually. Midjourney’s official documentation also notes that Describe suggestions are inspiration-oriented rather than exact reconstruction instructions, and repeated runs on the same image can produce different prompt sets.

Main features of mj-fast-describe

  • Image-to-text prompt extraction: mj-fast-describe is built for turning visual inputs into descriptive text prompts, making it useful for prompt engineering, reference analysis, and creative ideation.
  • Creative prompt suggestion workflow: The describe process is intended to inspire new prompt directions rather than produce a perfect literal copy of the source image.
  • Multiple prompt candidates: Midjourney’s Describe workflow generates four prompt suggestions for an uploaded image, which is a strong indicator of the expected user experience for this model category.
  • Fast ideation for visual references: This model is well suited for quickly extracting style words, scene cues, and phrasing from an image so teams can iterate faster on downstream image-generation prompts. This is an inference from the documented Describe behavior and the model’s “fast describe” naming.
  • Useful for prompt discovery and remixing: You can use the returned descriptions as-is, edit them, or use them as a starting point for broader creative exploration.
  • API-ready through CometAPI: Instead of using the native Midjourney interface directly, developers can access this capability through CometAPI’s unified model access layer.

How to access and integrate mj-fast-describe

Step 1: Sign Up for API Key

Sign up on CometAPI and create your API key from the developer dashboard. Once your key is issued, store it securely and use it as your Bearer token for all requests to mj-fast-describe.
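As a minimal sketch of "store it securely and use it as your Bearer token," you might load the key from an environment variable rather than hard-coding it (the `COMETAPI_API_KEY` variable name is this example's assumption, not a CometAPI requirement):

```python
import os

def auth_headers() -> dict:
    """Build request headers for CometAPI calls.

    Reads the key from the COMETAPI_API_KEY environment variable
    (an assumption of this sketch) so it never lands in source code.
    """
    key = os.environ.get("COMETAPI_API_KEY")
    if not key:
        raise RuntimeError("COMETAPI_API_KEY is not set")
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {key}",
    }
```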

Step 2: Send Requests to mj-fast-describe API

Use CometAPI's Midjourney-compatible endpoint at POST /mj/submit/describe. Unlike the imagine endpoint, describe takes an input image (encoded as a base64 data URI) rather than a text prompt:

curl https://api.cometapi.com/mj/submit/describe \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -d '{
    "base64": "data:image/png;base64,<YOUR_BASE64_IMAGE_DATA>",
    "botType": "MID_JOURNEY",
    "accountFilter": {
      "modes": ["FAST"]
    }
  }'
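Because describe operates on an image, the request body needs the image bytes encoded as a base64 data URI. A hedged Python sketch of building that payload (the field names mirror the Midjourney-proxy-style body in the curl example; verify them against CometAPI's current documentation):

```python
import base64

def build_describe_payload(image_bytes: bytes, mime: str = "image/png") -> dict:
    """Encode raw image bytes into a describe request body.

    The "base64", "botType", and "accountFilter" fields follow the
    curl example above; they are assumptions of this sketch.
    """
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "base64": f"data:{mime};base64,{encoded}",
        "botType": "MID_JOURNEY",
        "accountFilter": {"modes": ["FAST"]},
    }
```

The returned dict can be serialized with `json.dumps` and sent as the POST body alongside your Authorization header.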

Step 3: Retrieve and Verify Results

The API returns a task object with a task ID. Poll GET /mj/task/{task_id}/fetch to check task status; when the task reaches a terminal state (e.g. SUCCESS or FAILURE), retrieve the describe output. Note that for describe the result is the suggested prompt text, not an image URL.
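The polling loop in Step 3 can be sketched as follows. This is a minimal sketch, assuming the fetched task object exposes a "status" field with terminal values like SUCCESS or FAILURE; the `fetch_task` callable (which should GET /mj/task/{task_id}/fetch and return the parsed JSON) is injected so the loop stays transport-agnostic:

```python
import time

# Assumed terminal statuses; check CometAPI's task docs for the full set.
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def poll_task(fetch_task, task_id: str,
              interval: float = 2.0, timeout: float = 120.0) -> dict:
    """Call fetch_task(task_id) until the task reaches a terminal state.

    Returns the final task object, or raises TimeoutError if the task
    does not finish within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = fetch_task(task_id)
        if task.get("status") in TERMINAL_STATES:
            return task
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```

Injecting `fetch_task` also makes the loop easy to unit-test with a stub before wiring in real HTTP calls.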


Pricing for mj_fast_describe

mj_fast_describe is listed with flat per-request pricing on CometAPI, with no separate token-based charges:

Per Request: $0
Input: $0.00 / M tokens
Output: $0.00 / M tokens

