Technical Specifications of stability-ai/stable-diffusion-3-5-large
| Specification | Details |
|---|---|
| Model ID | stability-ai/stable-diffusion-3-5-large |
| Provider | Stability AI |
| Model type | Text-to-image generative model |
| Parameter scale | 8 billion parameters |
| Family | Stable Diffusion 3.5 |
| Architecture | Multimodal Diffusion Transformer (MMDiT), with CNN-based components (e.g., the autoencoder) in the surrounding pipeline |
| Primary input | Text prompt, with optional image-based control workflows in some deployments |
| Output | RGB image |
| Target resolution | Up to 1 megapixel; commonly supported resolutions include 1024×1024, 768×1344, 1344×768, and 1216×832 |
| Notable add-ons | ControlNet variants for Canny and Depth control are listed alongside the Large model; Stability AI's SD3.5 repository additionally references a Blur ControlNet for SD3.5 Large workflows |
| Access options | Self-hosted deployment, Stability AI API, cloud partner ecosystems, and web-based applications |
What is stability-ai/stable-diffusion-3-5-large?
stability-ai/stable-diffusion-3-5-large is CometAPI’s platform identifier for Stability AI’s Stable Diffusion 3.5 Large, a flagship text-to-image model in the Stable Diffusion 3.5 family. Stability AI describes it as the most powerful model in the family, focused on superior image quality and prompt adherence for professional-grade generation at up to 1 megapixel resolution.
The model is designed for generating high-quality visuals from natural-language prompts, including photorealistic and artistic outputs. Based on Stability AI’s public model information and partner model cards, it emphasizes strong prompt following, higher visual fidelity, and flexibility for more advanced creative and production workflows than lighter variants in the same family.
Main features of stability-ai/stable-diffusion-3-5-large
- High-capacity 8B model: Stable Diffusion 3.5 Large is built at the 8 billion parameter scale, positioning it as the highest-capability model in the SD3.5 lineup for demanding image-generation tasks.
- Professional-grade image quality: Stability AI presents the Large variant as the strongest option in the family for quality and prompt adherence, making it suitable for creators, design workflows, and professional visual generation.
- Up to 1 megapixel generation: The model is intended for high-resolution image creation, with common supported output sizes including square and portrait/landscape formats such as 1024×1024, 768×1344, 1344×768, and 1216×832.
- Diffusion Transformer architecture: Public model documentation describes the architecture as a Multimodal Diffusion Transformer (MMDiT), pairing transformer-based denoising with CNN components such as the autoencoder in the overall system.
- Flexible controllability options: SD3.5 Large can be paired with ControlNet-based guidance. Public references list Canny and Depth variants in partner model cards, while Stability AI’s repository also shows Blur control support for SD3.5 Large workflows.
- Multiple deployment paths: Stability AI indicates that the SD3.5 family can be deployed on your own infrastructure, integrated via API, used through cloud partners, or accessed through web-based tools.
- Prompt-driven creative flexibility: The official repository examples show support for prompt-based generation, configurable width and height, step counts, seeds, and controlled generation pipelines, which makes the model practical for both experimentation and production tuning.
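As a rough sketch of the tuning surface described above, the snippet below assembles a generation-parameter dict with width/height, step count, and seed, and rejects resolutions outside the commonly listed SD3.5 Large sizes. The parameter names and the helper itself are illustrative assumptions modeled on typical SD3.5 examples, not a definitive CometAPI or Stability AI schema.

```python
# Illustrative sketch: build and sanity-check generation parameters
# (width/height, steps, seed). Names are assumptions, not an official schema.

# Resolutions commonly listed for SD3.5 Large (~1 megapixel).
SUPPORTED_SIZES = {(1024, 1024), (768, 1344), (1344, 768), (1216, 832)}

def build_generation_params(prompt, width=1024, height=1024, steps=40, seed=None):
    """Return a parameter dict, rejecting unsupported resolutions."""
    if (width, height) not in SUPPORTED_SIZES:
        raise ValueError(f"Unsupported resolution: {width}x{height}")
    params = {
        "model": "stability-ai/stable-diffusion-3-5-large",
        "prompt": prompt,
        "width": width,
        "height": height,
        "steps": steps,
    }
    if seed is not None:
        params["seed"] = seed  # fixing the seed makes runs reproducible
    return params

params = build_generation_params("a watercolor lighthouse at dawn", 1344, 768, seed=42)
print(params["width"], params["height"], params["seed"])  # -> 1344 768 42
```

Validating resolution client-side like this catches mistakes before a request is billed, which matters when iterating on production pipelines.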
How to access and integrate stability-ai/stable-diffusion-3-5-large
Step 1: Sign Up and Get an API Key
To get started, sign up on CometAPI and generate your API key from the dashboard. Once you have your key, store it securely and use it to authenticate every request to the API.
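One common pattern for storing the key securely is to keep it in an environment variable and build the `Authorization` header from it at runtime. The sketch below assumes the variable name `COMETAPI_API_KEY` (matching the shell variable used in the curl example in Step 2); the helper itself is illustrative, not part of any SDK.

```python
import os

def auth_header(env_var="COMETAPI_API_KEY"):
    """Build a Bearer auth header from an environment variable.

    Illustrative helper: fails fast if the key is missing instead of
    sending an unauthenticated request.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before calling the API")
    return {"Authorization": f"Bearer {key}"}

# Placeholder value so the demo runs; in practice the variable is set
# in your shell or secrets manager, never hard-coded.
os.environ.setdefault("COMETAPI_API_KEY", "sk-example")
print(auth_header())  # -> {'Authorization': 'Bearer sk-example'}
```

Keeping the key out of source code means it never lands in version control or logs, and rotating it requires no code change.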
Step 2: Send Requests to stability-ai/stable-diffusion-3-5-large API
Use CometAPI’s OpenAI-compatible endpoint and specify the model as stability-ai/stable-diffusion-3-5-large.
```shell
curl --request POST \
  --url https://api.cometapi.com/v1/images/generations \
  --header "Authorization: Bearer $COMETAPI_API_KEY" \
  --header "Content-Type: application/json" \
  --data '{
    "model": "stability-ai/stable-diffusion-3-5-large",
    "prompt": "A cinematic futuristic city skyline at sunset, ultra detailed, volumetric lighting"
  }'
```
```python
from openai import OpenAI

# Point the OpenAI client at CometAPI's OpenAI-compatible endpoint.
client = OpenAI(
    api_key="YOUR_COMETAPI_API_KEY",
    base_url="https://api.cometapi.com/v1",
)

response = client.images.generate(
    model="stability-ai/stable-diffusion-3-5-large",
    prompt="A cinematic futuristic city skyline at sunset, ultra detailed, volumetric lighting",
)
print(response)
```
Step 3: Retrieve and Verify Results
After sending the request, parse the API response to retrieve the generated image output or image URL, depending on the response format enabled in your integration. Then verify that the result matches the requested prompt, expected resolution, and any workflow constraints before storing it or returning it to end users.
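As a minimal sketch of that verification step, the code below handles both response shapes (a hosted `url` or inline base64 image data) and runs a basic integrity check on decoded bytes. The `sample_response` payload is fabricated for illustration; a real response comes from the Step 2 API call, and its exact field names depend on your integration's configured response format.

```python
import base64

# Fabricated OpenAI-style images response for illustration only:
# `data` entries carry either a hosted `url` or inline `b64_json`.
sample_response = {
    "data": [
        {"b64_json": base64.b64encode(b"\x89PNG\r\n\x1a\n" + b"\x00" * 16).decode()}
    ]
}

def extract_image(entry):
    """Return ('url', str) or ('bytes', bytes) depending on response shape."""
    if "url" in entry:
        return "url", entry["url"]
    raw = base64.b64decode(entry["b64_json"])
    # Basic integrity check: PNG files begin with an 8-byte signature.
    if not raw.startswith(b"\x89PNG\r\n\x1a\n"):
        raise ValueError("Decoded payload is not a PNG image")
    return "bytes", raw

kind, payload = extract_image(sample_response["data"][0])
print(kind, len(payload))  # -> bytes 24
```

Checking the file signature (and, with an imaging library, the actual pixel dimensions against the requested resolution) before storing the result guards against truncated downloads or error payloads being saved as images.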