© 2026 CometAPI · All rights reserved

flux-pro-finetuned

Per Request: $0.072
Commercial Use

Technical Specifications of flux-pro-finetuned

Model ID: flux-pro-finetuned
Model family: FLUX.1 [pro] fine-tuned variant
Provider lineage: Based on Black Forest Labs' FLUX.1 [pro] model and exposed through third-party inference platforms such as Replicate and fal.ai via fine-tuned FLUX endpoints.
Primary modality: Text-to-image generation.
Fine-tuning support: Requires a finetune_id tied to a previously trained fine-tune.
Input types: Text prompt, fine-tune identifier, optional fine-tune strength, and an optional image prompt on supported implementations.
Output: Generated images for commercial and creative workflows.
Commercial usage: Commercial use is indicated on provider pages for the corresponding FLUX Pro fine-tuned endpoints.
Pricing signals: Public provider listings show pricing of roughly $0.06 per output image or per megapixel, depending on platform and endpoint.

What is flux-pro-finetuned?

flux-pro-finetuned is a text-to-image model entry on CometAPI that corresponds to a fine-tuned FLUX.1 [pro]-class image generation workflow. Public documentation from model hosts indicates that this model builds on FLUX.1 [pro] and adds support for custom fine-tunes, allowing users to generate images that follow a trained visual identity, subject style, or brand aesthetic more closely than a base model alone.

In practice, the key distinction is that this model is not just a generic prompt-to-image generator. It is designed to work with a trained fine-tune identifier, typically passed as finetune_id, so inference can incorporate custom learned concepts. Provider references also show optional controls such as finetune_strength, which suggests users can balance the influence of the fine-tune against the prompt itself.

This makes flux-pro-finetuned well suited for production image workflows where consistency matters, such as campaign visuals, repeated character generation, product-centric creative, or brand-safe concept art. That use-case framing is an inference from the model’s documented fine-tune mechanism and the FLUX Pro positioning toward premium commercial image generation.

Main features of flux-pro-finetuned

  • Custom fine-tune inference: The model is built to generate images using a previously trained fine-tune via finetune_id, rather than relying only on the base FLUX.1 [pro] model.
  • Professional-grade image quality: FLUX Pro endpoints are positioned for premium, commercial-quality visual generation, making this model appropriate for polished creative output.
  • Prompt-controlled generation: Standard text prompting remains central, so users can steer composition, scene content, style direction, and subject presentation while still benefiting from the fine-tune.
  • Adjustable fine-tune influence: Public API references list finetune_strength as an optional control, which enables finer balancing between the learned fine-tune and the prompt intent.
  • Optional image-guided workflows: Some implementations mention an optional image_prompt, indicating support for workflows that blend text guidance with visual composition reference.
  • Commercial workflow fit: Provider listings label comparable FLUX Pro fine-tuned endpoints for commercial use, which is important for teams building production design pipelines.

How to access and integrate flux-pro-finetuned

Step 1: Sign Up for an API Key

To get started, create an account on CometAPI and generate your API key from the dashboard. This key is required to authenticate all requests. After you have the key, store it securely and avoid exposing it in client-side code or public repositories.
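A minimal sketch of the "store it securely" advice is to read the key from an environment variable and build standard Bearer-token headers. The variable name COMETAPI_API_KEY matches the curl example in Step 2; the header shape is generic HTTP Bearer auth, not CometAPI-specific documentation:

```python
import os

# Read the key from the environment rather than hard-coding it in source.
# COMETAPI_API_KEY is the variable name used in the curl example in Step 2.
api_key = os.environ.get("COMETAPI_API_KEY", "")

# Standard Bearer-token headers for a JSON API request.
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
```

Keeping the key out of source files means it never lands in client-side bundles or public repositories; CI systems and deployment platforms can inject it at runtime.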

Step 2: Send Requests to flux-pro-finetuned API

Use CometAPI’s compatible API endpoint and set the model field to flux-pro-finetuned. A typical request structure looks like this:

curl https://api.cometapi.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -d '{
    "model": "flux-pro-finetuned",
    "input": "Generate a high-end editorial fashion image with dramatic studio lighting and refined texture detail."
  }'

If your workflow requires model-specific controls such as a fine-tune identifier, include the relevant fields supported by your CometAPI integration layer. Public references for this model family indicate that deployments may support parameters such as finetune_id, finetune_strength, and optional image-guidance inputs.
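A request body including those model-specific controls can be sketched as follows. The finetune_id value here is a placeholder, not a real identifier: use the ID returned by your own fine-tune training job, and confirm field names against your integration layer's documentation, since parameter support can vary by deployment:

```python
import json

# Sketch of a request body with the fine-tune controls mentioned above.
payload = {
    "model": "flux-pro-finetuned",
    "input": "Product hero shot in the trained brand style, soft studio lighting.",
    "finetune_id": "my-brand-finetune",  # hypothetical ID from a prior fine-tune job
    "finetune_strength": 1.1,            # optional: raises the fine-tune's influence
}

body = json.dumps(payload)
```

Sending this body with the headers from Step 1 follows the same pattern as the curl example; only the extra fields differ.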

Step 3: Retrieve and Verify Results

After submission, parse the API response and extract the generated output returned by CometAPI. For image-generation workflows, verify that the output matches the requested prompt, style consistency, and any expected fine-tuned concept behavior. In production, it is a good practice to log request parameters, validate output format, and run a small prompt test suite to confirm that flux-pro-finetuned behaves consistently across your use case.
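The parse-and-verify step can be sketched as a small helper. The response shape assumed here (a top-level "output" list of objects with a "url" field) is an illustration only; check the actual schema returned by your endpoint before relying on it:

```python
def extract_image_urls(response: dict) -> list:
    """Pull image URLs out of a parsed API response.

    The 'output' key and its structure are assumed for illustration;
    inspect the real response payload to confirm the schema.
    """
    urls = []
    for item in response.get("output", []):
        url = item.get("url")
        if url and url.startswith("http"):
            urls.append(url)
    if not urls:
        raise ValueError("No image URLs found; inspect the raw response payload")
    return urls

# Example against a mocked response of the assumed shape:
mock_response = {"output": [{"url": "https://example.com/image-1.png"}]}
urls = extract_image_urls(mock_response)
```

Wrapping extraction in one function gives you a single place to log request parameters alongside failures, which makes the prompt test suite mentioned above easier to build.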


Pricing for flux-pro-finetuned

Explore competitive pricing for flux-pro-finetuned, designed to fit various budgets and usage needs. Our flexible plans ensure you only pay for what you use, making it easy to scale as your requirements grow. Discover how flux-pro-finetuned can enhance your projects while keeping costs manageable.
CometAPI price (per request): $0.072
Official price (per request): $0.09
Discount: -20%
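The discount in the table is straightforward to verify, and the flat per-request rate makes batch costs easy to estimate:

```python
comet_price = 0.072    # USD per request on CometAPI
official_price = 0.09  # USD per request at the official rate

# Discount relative to the official price: (0.09 - 0.072) / 0.09 = 20%
discount = (official_price - comet_price) / official_price

# Estimated cost of a batch of 1,000 generations at the CometAPI rate.
batch_cost = 1000 * comet_price
```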

