
tts-1-1106

Input: $12/M
Output: $12/M
Commercial use

Technical Specifications of tts-1-1106

Model ID: tts-1-1106
Provider family: OpenAI text-to-speech model family
Primary capability: Converts text input into natural-sounding spoken audio
Typical endpoint: /v1/audio/speech
Optimization focus: Low-latency, realtime-oriented speech generation
Input modality: Text
Output modality: Audio
Supported output formats: mp3, opus, aac, flac, wav, pcm
Voice support: Compatible with OpenAI's built-in TTS voices; the tts-1 / tts-1-hd family supports a smaller subset including alloy, ash, coral, echo, fable, nova, onyx, sage, and shimmer
Max input length: 4,096 characters per request
Pricing reference: OpenAI lists TTS speech generation pricing at $15.00 per 1M characters for the TTS category
Compliance note: End users should be clearly informed when the voice they hear is AI-generated
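Because the endpoint accepts at most 4,096 characters per request, longer scripts must be split before synthesis. A minimal sketch of a hypothetical pre-processing helper (not part of the API) that packs whole sentences into request-sized chunks:

```python
# Hypothetical helper: split long text into request-sized chunks for the
# speech endpoint, which accepts at most 4,096 characters per request.
# Splits on sentence boundaries so the joined audio sounds natural; a single
# sentence longer than the limit is passed through unsplit.
MAX_CHARS = 4096

def chunk_text(text: str, limit: int = MAX_CHARS) -> list[str]:
    chunks, current = [], ""
    # Greedily pack whole sentences into each chunk.
    for sentence in text.replace("\n", " ").split(". "):
        piece = sentence if sentence.endswith(".") else sentence + "."
        if len(current) + len(piece) + 1 > limit and current:
            chunks.append(current.strip())
            current = ""
        current += piece + " "
    if current.strip():
        chunks.append(current.strip())
    return chunks
```

Each chunk can then be sent as a separate request and the resulting audio files concatenated in order.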

What is tts-1-1106?

tts-1-1106 is CometAPI’s platform identifier for an OpenAI text-to-speech model in the tts-1 family, designed to transform written text into spoken audio. OpenAI documents tts-1 as a model optimized for speed and realtime use cases, making it suitable for interactive applications that need fast speech generation rather than maximum offline rendering quality.

In practice, this model class is used for scenarios such as narration, voice assistants, accessibility features, conversational interfaces, and automated audio playback. It is accessed through the speech-generation workflow of the Audio API, where developers provide text, select a supported voice, and receive audio in a chosen output format.

Because the exact -1106 suffix appears to be a platform-side identifier rather than the public OpenAI model alias, the safest interpretation is that tts-1-1106 maps to the behavior and integration pattern of OpenAI’s tts-1 generation family. That means developers should expect a fast TTS model focused on responsive synthesis, standard speech endpoint usage, and built-in voice selection.

Main features of tts-1-1106

  • Realtime-oriented speech generation: The underlying tts-1 family is optimized for speed, which makes it well suited for live applications, assistants, and other latency-sensitive audio experiences.
  • Natural-sounding text-to-audio conversion: It converts plain text into lifelike spoken output for playback, narration, and voice-enabled product features.
  • Multiple output formats: Developers can request audio in mp3, opus, aac, flac, wav, or raw pcm, which supports both consumer playback and lower-latency system integration.
  • Built-in voice options: The model family supports a set of preset voices, letting teams choose a delivery style that fits their product tone without training a custom speaker model.
  • Straightforward API integration: The model is designed to work through the standard speech endpoint, reducing implementation complexity for teams already using OpenAI-compatible audio APIs.
  • Language flexibility: OpenAI states its TTS stack generally follows Whisper language support, enabling speech generation across many languages even though voices are primarily optimized for English.
  • Streaming-friendly usage: OpenAI’s speech API supports streamed audio delivery, allowing playback to begin before the full file is finished in suitable implementations.
  • Practical for production apps: With documented rate limits, standardized endpoint behavior, and usage-policy guidance around AI voice disclosure, the model family is suitable for real application deployment.
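Picking among the six output formats is mostly a question of where the audio will be consumed. A small illustrative mapping (the use-case names are invented for this sketch, not part of the API):

```python
# Hypothetical mapping from a use case to one of the output formats the
# speech endpoint supports (mp3, opus, aac, flac, wav, pcm). The use-case
# keys are illustrative only.
FORMAT_FOR_USE_CASE = {
    "web_playback": "mp3",   # broad browser support, small files
    "low_latency": "pcm",    # raw samples, no container or decode overhead
    "voip": "opus",          # designed for realtime voice over networks
    "archival": "flac",      # lossless and compressed
    "editing": "wav",        # lossless, widely supported by audio tools
    "mobile": "aac",         # efficient playback on iOS/Android
}

def pick_response_format(use_case: str) -> str:
    # Fall back to mp3, the most broadly compatible choice.
    return FORMAT_FOR_USE_CASE.get(use_case, "mp3")
```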

How to access and integrate tts-1-1106

Step 1: Sign Up for API Key

Sign up on CometAPI and generate your API key from the dashboard. Store the key securely and configure it as an environment variable in your application so your backend can authenticate requests to the tts-1-1106 API.
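In a Python backend, reading the key from the environment and building request headers might look like the sketch below; COMETAPI_API_KEY is simply the variable name used in the curl example in Step 2, not a platform requirement.

```python
import os

def auth_headers() -> dict:
    # Read the CometAPI key from the environment rather than hard-coding it.
    key = os.environ.get("COMETAPI_API_KEY")
    if not key:
        raise RuntimeError("Set COMETAPI_API_KEY before calling the API")
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }
```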

Step 2: Send Requests to tts-1-1106 API

Send a POST request to the OpenAI-compatible audio speech endpoint through CometAPI, setting model to tts-1-1106 and including the input text plus any supported options such as voice and response_format.

curl --request POST \
  --url https://api.cometapi.com/v1/audio/speech \
  --header "Authorization: Bearer $COMETAPI_API_KEY" \
  --header "Content-Type: application/json" \
  --data '{
    "model": "tts-1-1106",
    "input": "Welcome to CometAPI text to speech.",
    "voice": "alloy",
    "response_format": "mp3"
  }' \
  --output speech.mp3

Step 3: Retrieve and Verify Results

Save the returned audio file or stream the response directly in your application, then verify that the speech content, selected voice, format, and playback quality match your expected output for tts-1-1106.
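A quick automated sanity check on the saved file can catch a wrong format or an error body written to disk before any human listens to it. This hypothetical check inspects only the leading container bytes; verifying voice and content still requires playback.

```python
# Hypothetical sanity check: the file is non-empty and starts with bytes
# typical of the requested format. Formats not listed here pass unchecked.
MAGIC = {
    "mp3": (b"ID3", b"\xff\xfb", b"\xff\xf3"),  # ID3 tag or MPEG frame sync
    "wav": (b"RIFF",),
    "flac": (b"fLaC",),
}

def looks_like(path: str, fmt: str) -> bool:
    with open(path, "rb") as f:
        head = f.read(4)
    return bool(head) and head.startswith(MAGIC.get(fmt, (b"",)))
```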

Features of tts-1-1106

Explore the key features of tts-1-1106, designed to improve performance and ease of use. Discover how these capabilities can benefit your projects and enhance the user experience.

Pricing for tts-1-1106

Explore competitive pricing for tts-1-1106, designed to fit a range of budgets and usage needs. Flexible plans ensure you only pay for what you use, making it easy to scale as your requirements grow. Discover how tts-1-1106 can enhance your projects while keeping costs manageable.

CometAPI price (USD / M tokens): Input $12/M, Output $12/M
Official price (USD / M tokens): Input $15/M, Output $15/M
Discount: -20%
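The rates make cost estimates simple arithmetic. A back-of-the-envelope sketch, assuming the TTS category is billed per million input characters as in the spec table above ($12/M via CometAPI versus the $15/M list price):

```python
# Estimate TTS cost, assuming billing per 1M input characters.
COMETAPI_PER_M = 12.00
OFFICIAL_PER_M = 15.00

def tts_cost(characters: int, rate_per_million: float) -> float:
    return characters / 1_000_000 * rate_per_million

# Example: a 10,000-character script.
comet = tts_cost(10_000, COMETAPI_PER_M)     # 0.12 USD
official = tts_cost(10_000, OFFICIAL_PER_M)  # 0.15 USD
```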

Sample Code and API for tts-1-1106

Access comprehensive sample code and API resources for tts-1-1106 to streamline your integration process. Detailed documentation provides step-by-step guidance to help you harness the full potential of tts-1-1106 in your projects.
