
Whisper-1

Input: $24/M tokens
Output: $24/M tokens
Speech to text and translation generation
Licensed for commercial use

Technical Specifications of whisper-1

Model ID: whisper-1
Model type: Speech-to-text and speech translation
Primary use cases: Audio transcription, multilingual speech recognition, speech translation into English
Input modality: Audio
Output modality: Text
Supported endpoints: /v1/audio/transcriptions, /v1/audio/translations
Streaming support: Not supported for whisper-1
Prompting support: Yes, with limited prompt control over formatting, punctuation, and style
Language capability: Multilingual speech recognition and language identification
Typical integration format: File upload via multipart form data
Common audio formats: m4a, mp3, mp4, mpeg, mpga, wav, webm
Best fit for: Converting spoken content into readable text or English translations
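Since uploads in unsupported containers will be rejected, it can help to validate file extensions before sending a request. A minimal sketch, assuming the extension list in the table above is authoritative for whisper-1 uploads:

```python
# Supported audio formats per the specification table above.
SUPPORTED_FORMATS = {"m4a", "mp3", "mp4", "mpeg", "mpga", "wav", "webm"}

def is_supported(filename: str) -> bool:
    """Return True if the file extension matches a supported audio format."""
    return filename.rsplit(".", 1)[-1].lower() in SUPPORTED_FORMATS

print(is_supported("meeting.mp3"))   # True
print(is_supported("meeting.flac"))  # False
```

Files that fail the check can be transcoded (for example to mp3 or wav) before upload instead of failing server-side.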

What is whisper-1?

whisper-1 is a speech recognition model available through CometAPI for turning audio into text and creating translations from spoken audio into English. It is designed for developers who need reliable transcription for recorded speech, interviews, meetings, voice notes, subtitles, and multilingual audio workflows.

The model is well suited for applications that need automatic speech recognition across multiple languages. It can transcribe audio in the original language or translate spoken content into English, making it useful for global products, media processing pipelines, support tools, and accessibility solutions.

Because whisper-1 works on uploaded audio files and returns text output, it fits naturally into backend automation, content indexing, caption generation, search enrichment, and analytics pipelines.

Main features of whisper-1

  • Speech-to-text transcription: Converts spoken audio into written text for documents, captions, archives, and application workflows.
  • Speech translation: Creates English text translations from non-English spoken audio, simplifying multilingual content processing.
  • Multilingual recognition: Supports recognition across many languages, making it practical for international and cross-region deployments.
  • Prompt-assisted formatting: Accepts prompts that can help guide punctuation, capitalization, terminology, and transcript style.
  • File-based API workflow: Works well with uploaded audio files, making it easy to integrate into batch jobs, media systems, and backend services.
  • Language identification support: Can be used in workflows where detecting or handling multiple spoken languages is important.
  • Strong fit for content operations: Useful for subtitle generation, searchable transcript creation, customer call logging, interview processing, and voice-note conversion.
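The prompt-assisted formatting feature above is driven by plain form fields on the request. The sketch below builds those fields; the "prompt" and "language" parameter names follow the OpenAI-compatible audio API and are assumed to pass through CometAPI unchanged:

```python
def transcription_fields(model="whisper-1", prompt=None, language=None):
    """Build the non-file form fields for a transcription request.

    The "prompt" and "language" names mirror the OpenAI-compatible
    audio API; treat them as an assumption, not a CometAPI guarantee.
    """
    fields = {"model": model}
    if prompt:
        # Seeds terminology, punctuation, and transcript style.
        fields["prompt"] = prompt
    if language:
        # ISO-639-1 hint; omit it to let the model auto-detect.
        fields["language"] = language
    return fields

print(transcription_fields(prompt="CometAPI, whisper-1.", language="en"))
```

Each field is then attached alongside the audio file in the multipart upload, exactly as the curl examples later in this guide do with `--form`.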

How to access and integrate whisper-1

Step 1: Sign Up for API Key

To start using whisper-1, first create an account on CometAPI and generate your API key from the dashboard. After logging in, go to the API management section, create a new key, and store it securely. This key will be required to authenticate every request you send to the whisper-1 API.

Step 2: Send Requests to whisper-1 API

Once you have your API key, you can send requests to the CometAPI endpoint using the whisper-1 model ID. Include your API key in the Authorization header and specify whisper-1 as the target model. For speech workflows, send an audio file to the appropriate transcription or translation endpoint.

curl --request POST \
  --url https://api.cometapi.com/v1/audio/transcriptions \
  --header "Authorization: Bearer YOUR_COMETAPI_KEY" \
  --header "Content-Type: multipart/form-data" \
  --form "model=whisper-1" \
  --form "file=@/path/to/audio.mp3"

For translation workflows, use the translation endpoint with the same model ID:

curl --request POST \
  --url https://api.cometapi.com/v1/audio/translations \
  --header "Authorization: Bearer YOUR_COMETAPI_KEY" \
  --header "Content-Type: multipart/form-data" \
  --form "model=whisper-1" \
  --form "file=@/path/to/audio.mp3"

Step 3: Retrieve and Verify Results

After the request is processed, CometAPI will return the generated text result for your whisper-1 job. Review the response to confirm transcript quality, language handling, punctuation, and completeness. If needed, refine your audio preprocessing or prompting approach and resend the request to improve output consistency for your production use case.
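The verification step can be automated with a small amount of parsing. A minimal sketch, assuming the default JSON response shape of OpenAI-compatible transcription endpoints (a single "text" field); the sample transcript is illustrative only:

```python
import json

# Hypothetical response body from a whisper-1 transcription request.
raw = '{"text": "Welcome to the quarterly planning meeting."}'

payload = json.loads(raw)
transcript = payload.get("text", "")

# Basic sanity check before persisting the transcript downstream.
assert transcript.strip(), "empty transcript returned"
print(transcript)
```

In production you would run similar checks on every response, flagging empty or truncated transcripts for reprocessing with adjusted preprocessing or prompts.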

Features of Whisper-1

Explore the core features of Whisper-1, designed to improve performance and usability. Learn how these capabilities can benefit your projects and improve the user experience.

Pricing for Whisper-1

Explore Whisper-1's competitive pricing, designed to fit a range of budgets and usage needs. Flexible plans ensure you only pay for what you actually use, so you can scale easily as your needs grow. See how Whisper-1 keeps costs under control while improving your project outcomes.

CometAPI price (USD / M tokens): Input $24/M, Output $24/M
Official price (USD / M tokens): Input $30/M, Output $30/M
Discount: -20%
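The rates above translate into a simple per-token calculation. A back-of-envelope sketch using the listed prices ($24/M on CometAPI versus $30/M official); the 50,000-token job size is an illustrative assumption, since token usage for audio jobs varies:

```python
def cost_usd(tokens: int, price_per_million: float) -> float:
    """Linear cost: tokens consumed times the per-million-token rate."""
    return tokens / 1_000_000 * price_per_million

tokens = 50_000  # assumed usage for a batch of transcription jobs
print(f"CometAPI: ${cost_usd(tokens, 24.0):.2f}")
print(f"Official: ${cost_usd(tokens, 30.0):.2f}")
```

At these rates the CometAPI price works out to 80% of the official price, matching the -20% discount shown in the table.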

Sample code and API for Whisper-1

Access complete sample code and API resources to streamline your Whisper-1 integration. The detailed documentation provides step-by-step guidance to help you get the most out of Whisper-1 in your projects.
