2025 CometAPI. All rights reserved.
dall-e-2

Input:$8/M
Output:$32/M
An AI model that generates images from text descriptions.
New
Commercial Use

Technical Specifications of dall-e-2

  • Model ID: dall-e-2
  • Model Type: Text-to-image generation
  • Primary Function: Generates images from natural language descriptions
  • Input Modalities: Text
  • Output Modalities: Image
  • Typical Use Cases: Concept art, marketing visuals, storyboarding, creative ideation, product mockups
  • Integration Method: API
  • Platform Identifier: dall-e-2

What is dall-e-2?

dall-e-2 is an AI model that generates images from text descriptions. It enables developers, creators, and businesses to turn natural language prompts into original visuals, making it useful for rapid content creation, design exploration, and visual prototyping.

Because it works from descriptive text input, dall-e-2 can help streamline workflows that normally require manual illustration or graphic design. It is well suited for applications that need fast image generation based on user ideas, creative direction, or product concepts.

Main features of dall-e-2

  • Text-to-image generation: Creates images directly from natural language prompts, allowing users to describe scenes, styles, objects, or concepts in plain text.
  • Creative visual synthesis: Produces original imagery for brainstorming, ideation, and artistic experimentation across many visual themes.
  • Rapid prototyping: Helps teams quickly generate draft visuals for campaigns, presentations, mockups, and early-stage design work.
  • Flexible application support: Can be used in creative tools, content pipelines, marketing workflows, and other products that benefit from automated image creation.
  • API-based integration: Supports implementation in software products and internal systems through a programmatic API workflow.

How to access and integrate dall-e-2

Step 1: Sign Up for API Key

To get started, sign up on the CometAPI platform and generate your API key from the dashboard. This key will authenticate your requests and let you access the dall-e-2 model through the API.

Step 2: Send Requests to dall-e-2 API

Once you have your API key, send a POST request to the CometAPI image generation endpoint with your prompt and the model set to dall-e-2.

curl --request POST \
  --url https://api.cometapi.com/v1/images/generations \
  --header "Authorization: Bearer YOUR_COMETAPI_KEY" \
  --header "Content-Type: application/json" \
  --data '{
    "model": "dall-e-2",
    "prompt": "A futuristic city skyline at sunset with flying cars",
    "n": 1,
    "size": "1024x1024"
  }'

Step 3: Retrieve and Verify Results

After submitting your request, parse the API response to retrieve the generated output. Verify that the returned content matches your requested prompt, handle any errors or incomplete results, and store or display the generated image as needed in your application.
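As a minimal sketch of this retrieval step, assuming the response follows the OpenAI-style image payload (a JSON object with a `data` array of `url` entries — an assumption; check the CometAPI docs for the exact shape your account returns):

```python
import json

def extract_image_urls(response_body: str) -> list[str]:
    """Parse an image-generation response and return the image URLs.

    Assumes an OpenAI-style payload: {"created": ..., "data": [{"url": ...}, ...]}.
    Raises ValueError on error payloads or responses with no images.
    """
    payload = json.loads(response_body)
    if "error" in payload:  # API-level failure reported in the body
        raise ValueError(f"API error: {payload['error'].get('message', 'unknown')}")
    data = payload.get("data")
    if not data:
        raise ValueError("Response contained no generated images")
    return [item["url"] for item in data if "url" in item]

# Example with a mocked response body:
body = '{"created": 1700000000, "data": [{"url": "https://example.com/img.png"}]}'
print(extract_image_urls(body))  # ['https://example.com/img.png']
```

Raising on empty or error responses up front makes the failure visible in your application instead of silently storing a missing image.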

Features for dall-e-2

Explore the key features of dall-e-2, designed to enhance performance and usability. Discover how these capabilities can benefit your projects and improve user experience.

Pricing for dall-e-2

Explore competitive pricing for dall-e-2, designed to fit various budgets and usage needs. Our flexible plans ensure you only pay for what you use, making it easy to scale as your requirements grow. Discover how dall-e-2 can enhance your projects while keeping costs manageable.
Comet Price (USD / M Tokens): Input $8/M · Output $32/M
Official Price (USD / M Tokens): Input $10/M · Output $40/M
Discount: -20%
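To illustrate the discount arithmetic, here is a small cost helper using the per-million-token rates from the table above (the token counts are made-up example values):

```python
def cost_usd(input_tokens: int, output_tokens: int,
             input_rate: float, output_rate: float) -> float:
    """Cost in USD, given per-million-token rates."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Hypothetical monthly usage: 500k input tokens, 100k output tokens.
comet = cost_usd(500_000, 100_000, input_rate=8.0, output_rate=32.0)
official = cost_usd(500_000, 100_000, input_rate=10.0, output_rate=40.0)
print(comet, official)                    # 7.2 9.0
print(round(1 - comet / official, 4))     # 0.2, i.e. the 20% discount
```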

Sample code and API for dall-e-2

Access comprehensive sample code and API resources for dall-e-2 to streamline your integration process. Our detailed documentation provides step-by-step guidance, helping you leverage the full potential of dall-e-2 in your projects.
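As a sketch of what client-side integration can look like, the helper below assembles (but does not send) a request, assuming CometAPI exposes an OpenAI-compatible image endpoint at `/v1/images/generations` — an assumption to verify against the documentation:

```python
import json
import urllib.request

# Assumed OpenAI-compatible image generation path — confirm in the CometAPI docs.
API_URL = "https://api.cometapi.com/v1/images/generations"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble an image-generation request for dall-e-2 without sending it."""
    payload = {"model": "dall-e-2", "prompt": prompt, "n": 1, "size": "1024x1024"}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("A watercolor lighthouse at dawn", "YOUR_COMETAPI_KEY")
# Once the key is real, sending it is a one-liner:
#   with urllib.request.urlopen(req) as resp:
#       result = json.load(resp)
```

Separating request construction from sending makes the payload easy to unit-test and log before any network traffic occurs.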

More Models

Nano Banana 2

Input:$0.4/M
Output:$2.4/M
Core capabilities overview:
  • Resolution: up to 4K (4096×4096), on par with Pro.
  • Reference image consistency: up to 14 reference images (10 objects + 4 characters), maintaining style/character consistency.
  • Extreme aspect ratios: new 1:4, 4:1, 1:8, and 8:1 ratios, suitable for long images, posters, and banners.
  • Text rendering: advanced text generation, suitable for infographics and marketing poster layouts.
  • Search enhancement: integrated Google Search + Image Search.
  • Grounding: built-in thinking process; complex prompts are reasoned through before generation.
Claude Opus 4.6

Input:$4/M
Output:$20/M
Claude Opus 4.6 is Anthropic’s “Opus”-class large language model, released February 2026. It is positioned as a workhorse for knowledge-work and research workflows — improving long-context reasoning, multi-step planning, tool use (including agentic software workflows), and computer-use tasks such as automated slide and spreadsheet generation.
Claude Sonnet 4.6

Input:$2.4/M
Output:$12/M
Claude Sonnet 4.6 is our most capable Sonnet model yet. It’s a full upgrade of the model’s skills across coding, computer use, long-context reasoning, agent planning, knowledge work, and design. Sonnet 4.6 also features a 1M token context window in beta.
GPT-5.4 nano

Input:$0.16/M
Output:$1/M
GPT-5.4 nano is designed for tasks where speed and cost matter most, such as classification, data extraction, ranking, and sub-agents.
GPT-5.4 mini

Input:$0.6/M
Output:$3.6/M
GPT-5.4 mini brings the strengths of GPT-5.4 to a faster, more efficient model designed for high-volume workloads.
Claude Mythos Preview

Coming soon
Input:$60/M
Output:$240/M
Claude Mythos Preview is our most capable frontier model to date, and shows a striking leap in scores on many evaluation benchmarks compared to our previous frontier model, Claude Opus 4.6.