© 2026 CometAPI · All rights reserved

stability-ai/stable-diffusion

Per Request: $0.016 · Commercial Use

Technical Specifications of stability-ai/stable-diffusion

Model ID: stability-ai/stable-diffusion
Provider: Stability AI
Model family: Stable Diffusion
Modality: Text-to-image generation
Core approach: Latent diffusion model
Primary input: Natural-language prompts
Primary output: AI-generated images
Common capabilities: Text-to-image generation, image variation, inpainting, outpainting, prompt-guided editing, style control
Typical resolutions: Varies by checkpoint/version; official Stable Diffusion families commonly support 512×512 up to 1024×1024 and beyond, depending on the specific model and workflow
Deployment style: API-based access on CometAPI; the broader Stable Diffusion ecosystem also supports local and self-hosted usage through open weights and community tooling
Licensing note: Some official Stable Diffusion releases use open licenses such as CreativeML Open RAIL++-M, but terms vary by checkpoint/version, so review the license that applies to your specific deployment

What is stability-ai/stable-diffusion?

stability-ai/stable-diffusion is the model identifier on CometAPI for Stability AI’s Stable Diffusion family of text-to-image models. Stable Diffusion is best known as a latent diffusion model: it generates images from written prompts in a compressed latent space and then decodes that latent representation into the final image. This design significantly reduces compute requirements compared with fully pixel-space diffusion approaches while still enabling high-quality image synthesis.

The Stable Diffusion ecosystem grew out of a collaboration involving Stability AI, CompVis, Runway, and LAION-affiliated contributors, and it became widely adopted because it combined strong image generation quality with relatively accessible deployment options and open model availability.

In practice, this model family is used for generating concept art, illustrations, marketing visuals, product mockups, stylized scenes, photorealistic compositions, and prompt-based creative experiments. Depending on the backing checkpoint and workflow, users may also apply it to inpainting, image editing, upscaling pipelines, and controlled generation tasks.

Main features of stability-ai/stable-diffusion

  • Text-to-image generation: Converts natural-language prompts into original images, making it useful for ideation, design exploration, and content creation workflows.
  • Latent diffusion efficiency: Generates images in latent space rather than directly in pixel space, which lowers computational cost while preserving strong synthesis quality.
  • Flexible creative control: Prompt wording, negative prompts, seeds, guidance settings, and sampler choices can all influence style, composition, and consistency across generations, reflecting how Stable Diffusion pipelines are commonly exposed in tooling and documentation.
  • Image editing workflows: Stable Diffusion has been used for inpainting, outpainting, and prompt-guided modifications, allowing targeted edits rather than full regeneration from scratch.
  • Multiple model variants: The broader Stable Diffusion line includes multiple generations and checkpoints, including higher-resolution and more capable variants such as SDXL, giving developers flexibility based on quality, speed, and hardware needs.
  • Broad ecosystem support: Because Stable Diffusion is widely integrated across repositories, SDKs, and creative tools, developers benefit from a large surrounding ecosystem for experimentation and production use.
  • Open-weight ecosystem influence: Official Stable Diffusion releases helped establish a major open model ecosystem for image generation, enabling customization, fine-tuning, and self-hosted experimentation in many environments.
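The latent-space efficiency claim above can be made concrete with a little arithmetic, assuming Stable Diffusion v1's standard setup of an 8× spatially downsampling VAE with 4 latent channels:

```python
# Rough arithmetic: why latent diffusion is cheaper than pixel-space diffusion.
# Stable Diffusion v1 encodes a 512x512 RGB image into a 64x64x4 latent
# (8x spatial downsampling, 4 channels), so the diffusion U-Net operates on
# far fewer values per denoising step.
pixel_values = 512 * 512 * 3   # 786,432 values in pixel space
latent_values = 64 * 64 * 4    # 16,384 values in latent space
reduction = pixel_values / latent_values
print(reduction)  # 48.0
```

Each denoising step therefore touches roughly 48× fewer values than it would in pixel space, which is the main reason the model runs on consumer GPUs.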

How to access and integrate stability-ai/stable-diffusion

Step 1: Sign Up for API Key

To get started, create a CometAPI account and generate your API key from the dashboard. You’ll use this key to authenticate every request to the stability-ai/stable-diffusion API.

Step 2: Send Requests to stability-ai/stable-diffusion API

Use CometAPI's Replicate-compatible endpoint at POST /replicate/v1/models/stability-ai/stable-diffusion/predictions.

curl https://api.cometapi.com/replicate/v1/models/stability-ai/stable-diffusion/predictions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -d '{
    "input": {
      "prompt": "A cinematic futuristic city skyline at sunset, ultra detailed, volumetric lighting"
    }
  }'
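For readers integrating from Python rather than the shell, the same request body can be sketched as below. The optional input fields (negative_prompt, width, height, seed) are common in Replicate-style Stable Diffusion deployments but are assumptions here; confirm the supported inputs against CometAPI's documentation.

```python
# Sketch of the same prediction request built in Python. The optional fields
# (negative_prompt, width, height, seed) are assumptions -- verify against
# CometAPI's docs for this model version.
import json


def build_payload(prompt: str, **options) -> dict:
    """Assemble the JSON body for the predictions endpoint."""
    return {"input": {"prompt": prompt, **options}}


payload = build_payload(
    "A cinematic futuristic city skyline at sunset, ultra detailed",
    negative_prompt="blurry, low quality",
    width=768,
    height=512,
    seed=42,
)

# To send it (requires the third-party `requests` package and a valid key):
#   import requests
#   resp = requests.post(
#       "https://api.cometapi.com/replicate/v1/models/stability-ai/stable-diffusion/predictions",
#       headers={"Authorization": f"Bearer {api_key}"},
#       json=payload,
#   )
print(json.dumps(payload))
```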

Step 3: Retrieve and Verify Results

The API returns a prediction object with an ID. Poll GET /replicate/v1/predictions/{prediction_id} to check generation status and retrieve the output image URL when the prediction completes.
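A minimal polling loop can be sketched as follows. The terminal status values ("succeeded", "failed", "canceled") follow Replicate's prediction-object conventions and should be verified against CometAPI's docs; the fetcher is passed in as a callable so the loop itself stays easy to test.

```python
# Polling sketch for the Replicate-compatible predictions endpoint. Status
# names follow Replicate's prediction-object conventions (an assumption here).
import time
from typing import Callable

TERMINAL_STATUSES = {"succeeded", "failed", "canceled"}


def poll_prediction(fetch: Callable[[], dict],
                    interval_s: float = 2.0,
                    timeout_s: float = 120.0) -> dict:
    """Call `fetch` until the prediction reaches a terminal status."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        prediction = fetch()
        if prediction.get("status") in TERMINAL_STATUSES:
            return prediction
        time.sleep(interval_s)
    raise TimeoutError("prediction did not reach a terminal status in time")

# In production, `fetch` would wrap an HTTP GET, e.g. with `requests`:
#   fetch = lambda: requests.get(
#       f"https://api.cometapi.com/replicate/v1/predictions/{prediction_id}",
#       headers={"Authorization": f"Bearer {api_key}"},
#   ).json()
```

On success, the prediction object's output field contains the generated image URL(s).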

Features for stability-ai/stable-diffusion

Explore the key features of stability-ai/stable-diffusion, designed to enhance performance and usability. Discover how these capabilities can benefit your projects and improve user experience.

Pricing for stability-ai/stable-diffusion

Explore competitive pricing for stability-ai/stable-diffusion, designed to fit various budgets and usage needs. Our flexible plans ensure you only pay for what you use, making it easy to scale as your requirements grow. Discover how stability-ai/stable-diffusion can enhance your projects while keeping costs manageable.
CometAPI price: $0.016 per request
Official price: $0.02 per request
Discount: 20% off

Sample code and API for stability-ai/stable-diffusion

Access comprehensive sample code and API resources for stability-ai/stable-diffusion to streamline your integration process. Our detailed documentation provides step-by-step guidance, helping you leverage the full potential of stability-ai/stable-diffusion in your projects.

More Models


GPT Image 2

Input:$6.4/M
Output:$24/M
GPT Image 2 is OpenAI's state-of-the-art image generation model for fast, high-quality image generation and editing. It supports flexible image sizes and high-fidelity image inputs.

Doubao-Seedance-2-0

Per Second:$0.07
Seedance 2.0 is ByteDance’s next-generation multimodal video foundation model focused on cinematic, multi-shot narrative video generation. Unlike single-shot text-to-video demos, Seedance 2.0 emphasizes reference-based control (images, short clips, audio), coherent character/style consistency across shots, and native audio/video synchronization — aiming to make AI video useful for professional creative and previsualization workflows.

Claude Opus 4.7

Input:$3/M
Output:$15/M
Claude Opus 4.7 is a hybrid reasoning model designed specifically for frontier-level coding, AI agents, and complex multi-step professional work. Unlike lighter models (e.g., Sonnet or Haiku variants), Opus 4.7 prioritizes depth, consistency, and autonomy on the hardest tasks.

Claude Sonnet 4.6

Input:$2.4/M
Output:$12/M
Claude Sonnet 4.6 is our most capable Sonnet model yet. It’s a full upgrade of the model’s skills across coding, computer use, long-context reasoning, agent planning, knowledge work, and design. Sonnet 4.6 also features a 1M token context window in beta.

GPT 5.5 Pro

Input:$24/M
Output:$144/M
An advanced model engineered for extremely complex logic and professional demands, representing the highest standard of deep reasoning and precise analytical capabilities.

GPT 5.5

Input:$4/M
Output:$24/M
A next-generation multimodal flagship model balancing exceptional performance with efficient response, dedicated to providing comprehensive and stable general-purpose AI services.