2025 CometAPI. All rights reserved.

tts-1

Input:$12/M
Output:$12/M
Commercial Use

Technical Specifications of tts-1

Model ID: tts-1
Provider: OpenAI
Model type: Text-to-speech (TTS) model for converting text input into spoken audio.
Primary optimization: Optimized for speed and low-latency generation, especially for realtime or near-realtime speech output.
Quality profile: Lower latency than tts-1-hd, but with lower audio quality than the HD variant.
Input modality: Text only.
Output modality: Audio only.
API endpoint: OpenAI Audio API speech generation endpoint: /v1/audio/speech.
Max input length: Up to 4096 characters per request.
Supported response formats: mp3, opus, aac, flac, wav, pcm.
Speed control: Supported from 0.25 to 4.0, with 1.0 as default.
Voice options: alloy, ash, coral, echo, fable, onyx, nova, sage, shimmer.
Streaming support: The Speech API supports streaming audio output, but SSE streaming and instruction-based voice control are not supported for tts-1.
Pricing: OpenAI lists tts-1 at $15 per 1M tokens for speech generation.
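The limits in the table above can be checked client-side before a request is sent, which avoids a round trip for obviously invalid inputs. The sketch below is illustrative (the helper name and error messages are our own); the constants themselves come from the specification table.

```python
# Client-side validation of tts-1 request parameters, based on the
# limits in the specification table. Helper name is illustrative.

VOICES = {"alloy", "ash", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer"}
FORMATS = {"mp3", "opus", "aac", "flac", "wav", "pcm"}
MAX_INPUT_CHARS = 4096

def build_tts_payload(text, voice="alloy", response_format="mp3", speed=1.0):
    """Return a tts-1 request payload, raising on out-of-spec values."""
    if not text or len(text) > MAX_INPUT_CHARS:
        raise ValueError(f"input must be 1..{MAX_INPUT_CHARS} characters")
    if voice not in VOICES:
        raise ValueError(f"unknown voice: {voice}")
    if response_format not in FORMATS:
        raise ValueError(f"unsupported response format: {response_format}")
    if not 0.25 <= speed <= 4.0:
        raise ValueError("speed must be between 0.25 and 4.0")
    return {
        "model": "tts-1",
        "input": text,
        "voice": voice,
        "response_format": response_format,
        "speed": speed,
    }
```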

What is tts-1?

tts-1 is OpenAI’s text-to-speech model designed to turn written text into natural-sounding spoken audio. It is positioned as the faster, lower-latency option among OpenAI’s classic TTS models, making it suitable for applications that need quick speech synthesis rather than the highest possible fidelity.

Developers typically use tts-1 through the Audio API’s speech generation endpoint when they want to convert application text, prompts, notifications, narrations, or assistant responses into playable audio files. OpenAI’s documentation describes it as optimized for realtime text-to-speech use cases.

In practice, tts-1 is a good fit for lightweight voice experiences, rapid response systems, interactive prototypes, and products where responsiveness matters more than premium voice quality. If maximum quality is the priority, OpenAI points users toward tts-1-hd, while newer expressive use cases may use newer TTS models instead.

Main features of tts-1

  • Low-latency speech generation: tts-1 is specifically optimized for speed, which makes it useful for apps that need spoken output quickly.
  • Natural-sounding text-to-speech: The model converts plain text into spoken audio suitable for narration, assistant responses, and voice interfaces.
  • Multiple built-in voices: tts-1 supports a set of built-in voices including alloy, ash, coral, echo, fable, onyx, nova, sage, and shimmer.
  • Flexible audio output formats: Developers can request generated audio in common formats such as MP3, WAV, FLAC, AAC, Opus, and PCM depending on playback or processing needs.
  • Adjustable playback speed: The API allows speed control from 0.25x to 4.0x, enabling slower narration or faster playback where appropriate.
  • Simple API-based integration: tts-1 is available through the standard speech generation API, which makes it straightforward to integrate into web, mobile, or backend workflows.
  • Good for realtime-oriented applications: OpenAI explicitly frames tts-1 as a model for realtime text-to-speech scenarios, which makes it practical for assistants, notifications, and fast interactive systems.
  • Tradeoff-focused model choice: Compared with tts-1-hd, this model prioritizes faster generation over higher-fidelity output, giving developers a clear latency-versus-quality option.
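Since the speed parameter scales the playback rate, the output duration shrinks or grows inversely with it. A minimal sketch of that relationship (the helper name is our own):

```python
# Approximate output duration for a given tts-1 speed setting.
# Speed 2.0 halves the duration; 0.5 doubles it.

def scaled_duration(base_seconds: float, speed: float) -> float:
    """Estimate audio duration after applying the speed parameter (0.25-4.0)."""
    if not 0.25 <= speed <= 4.0:
        raise ValueError("speed must be between 0.25 and 4.0")
    return base_seconds / speed
```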

How to access and integrate tts-1

Step 1: Sign Up for API Key

To access the tts-1 API, first sign up on CometAPI and generate your API key from the dashboard. After logging in, create a new key, copy it securely, and store it in your application environment variables. You will use this key to authenticate all requests to the tts-1 API.

Step 2: Send Requests to tts-1 API

Once you have your API key, send a POST request to the CometAPI endpoint for tts-1 with your input payload. Include your API key in the Authorization header and specify tts-1 as the model. A typical request includes the input text plus TTS parameters such as voice and response format.

curl https://api.cometapi.com/v1/audio/speech \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tts-1",
    "input": "Welcome to CometAPI text to speech.",
    "voice": "alloy",
    "response_format": "mp3"
  }' \
  --output speech.mp3
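The same request can be built in Python using only the standard library. This is a sketch, not the only integration path: constructing the request is shown in full, while actually sending it requires network access and a valid key in the COMETAPI_API_KEY environment variable.

```python
# Python equivalent of the curl call above, using only the standard library.
import json
import os
import urllib.request

def build_speech_request(text: str, voice: str = "alloy",
                         response_format: str = "mp3") -> urllib.request.Request:
    """Build (but do not send) a POST request to the tts-1 endpoint."""
    body = json.dumps({
        "model": "tts-1",
        "input": text,
        "voice": voice,
        "response_format": response_format,
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.cometapi.com/v1/audio/speech",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('COMETAPI_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To send the request and save the audio (requires network and a valid key):
# with urllib.request.urlopen(build_speech_request("Hello")) as resp:
#     with open("speech.mp3", "wb") as f:
#         f.write(resp.read())
```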

Step 3: Retrieve and Verify Results

After submitting your request, the tts-1 API returns generated audio content if the call succeeds. Save the returned file or stream, verify that the audio plays correctly, and confirm that the selected voice, speed, and format match your application requirements. If needed, retry with adjusted parameters to improve the final output.
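A cheap sanity check on the returned bytes is to inspect the file's magic number before playback. The signatures below are the standard container headers; this only confirms the container, not audio quality. Note that raw pcm has no header, and the opus format is assumed here to arrive Ogg-encapsulated.

```python
# Verify that returned audio bytes start with the expected container
# signature for the requested format. Heuristic only.

def looks_like_audio(data: bytes, fmt: str) -> bool:
    signatures = {
        "mp3": (b"ID3", b"\xff\xfb", b"\xff\xf3", b"\xff\xf2"),  # ID3 tag or frame sync
        "wav": (b"RIFF",),
        "flac": (b"fLaC",),
        "opus": (b"OggS",),  # assumes Ogg encapsulation
    }
    return any(data.startswith(sig) for sig in signatures.get(fmt, ()))
```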


Pricing for tts-1

Explore competitive pricing for tts-1, designed to fit various budgets and usage needs. Our flexible plans ensure you only pay for what you use, making it easy to scale as your requirements grow. Discover how tts-1 can enhance your projects while keeping costs manageable.
Comet Price (USD / M Tokens): Input $12/M, Output $12/M
Official Price (USD / M Tokens): Input $15/M, Output $15/M
Discount: -20%

Sample code and API for tts-1

Access comprehensive sample code and API resources for tts-1 to streamline your integration process. Our detailed documentation provides step-by-step guidance, helping you leverage the full potential of tts-1 in your projects.

More Models


Nano Banana 2

Input:$0.4/M
Output:$2.4/M
Core capabilities overview:
  • Resolution: up to 4K (4096×4096), on par with Pro.
  • Reference image consistency: up to 14 reference images (10 objects + 4 characters), maintaining style/character consistency.
  • Extreme aspect ratios: new 1:4, 4:1, 1:8, 8:1 ratios, suitable for long images, posters, and banners.
  • Text rendering: advanced text generation, suitable for infographics and marketing poster layouts.
  • Search enhancement: integrated Google Search + Image Search.
  • Grounding: built-in thinking process; complex prompts are reasoned through before generation.

Claude Opus 4.6

Input:$4/M
Output:$20/M
Claude Opus 4.6 is Anthropic’s “Opus”-class large language model, released February 2026. It is positioned as a workhorse for knowledge-work and research workflows — improving long-context reasoning, multi-step planning, tool use (including agentic software workflows), and computer-use tasks such as automated slide and spreadsheet generation.

Claude Sonnet 4.6

Input:$2.4/M
Output:$12/M
Claude Sonnet 4.6 is Anthropic’s most capable Sonnet model yet. It’s a full upgrade of the model’s skills across coding, computer use, long-context reasoning, agent planning, knowledge work, and design. Sonnet 4.6 also features a 1M token context window in beta.

GPT-5.4 nano

Input:$0.16/M
Output:$1/M
GPT-5.4 nano is designed for tasks where speed and cost matter most like classification, data extraction, ranking, and sub-agents.

GPT-5.4 mini

Input:$0.6/M
Output:$3.6/M
GPT-5.4 mini brings the strengths of GPT-5.4 to a faster, more efficient model designed for high-volume workloads.
Claude Mythos Preview

Coming soon
Input:$60/M
Output:$240/M
Claude Mythos Preview is Anthropic’s most capable frontier model to date, and shows a striking leap in scores on many evaluation benchmarks compared to the previous frontier model, Claude Opus 4.6.

Related Blog

Can ChatGPT Do Text to Speech? The Latest 2026 Guide to Voice, TTS Models
Apr 2, 2026


ChatGPT can do text to speech, but the answer depends on what you mean. In the ChatGPT app, Voice lets ChatGPT speak aloud and has recently been updated to follow instructions better and use tools like web search more effectively. For developers, OpenAI also provides a dedicated text-to-speech API via the audio/speech endpoint, with models including gpt-4o-mini-tts, tts-1, and tts-1-hd. OpenAI says its latest TTS snapshot delivered roughly 35% lower word error rate on Common Voice and FLEURS compared with the previous generation.