© 2025 CometAPI. All rights reserved.

Whisper-1

Input: $24/M
Output: $24/M
Converts speech to text and generates translations
Commercial use permitted

Technical Specifications of whisper-1

Model ID: whisper-1
Model type: Speech-to-text and speech translation
Primary use cases: Audio transcription, multilingual speech recognition, speech translation into English
Input modality: Audio
Output modality: Text
Supported endpoints: /v1/audio/transcriptions, /v1/audio/translations
Streaming support: Not supported for whisper-1
Prompting support: Yes, with limited prompt control for formatting, punctuation, and style
Language capability: Multilingual speech recognition and language identification
Typical integration format: File upload via multipart form data
Common audio formats: m4a, mp3, mp4, mpeg, mpga, wav, webm
Best fit for: Converting spoken content into readable text or English translations

What is whisper-1?

whisper-1 is a speech recognition model available through CometAPI for turning audio into text and creating translations from spoken audio into English. It is designed for developers who need reliable transcription for recorded speech, interviews, meetings, voice notes, subtitles, and multilingual audio workflows.

The model is well suited for applications that need automatic speech recognition across multiple languages. It can transcribe audio in the original language or translate spoken content into English, making it useful for global products, media processing pipelines, support tools, and accessibility solutions.

Because whisper-1 works on uploaded audio files and returns text output, it fits naturally into backend automation, content indexing, caption generation, search enrichment, and analytics pipelines.

Main features of whisper-1

  • Speech-to-text transcription: Converts spoken audio into written text for documents, captions, archives, and application workflows.
  • Speech translation: Creates English text translations from non-English spoken audio, simplifying multilingual content processing.
  • Multilingual recognition: Supports recognition across many languages, making it practical for international and cross-region deployments.
  • Prompt-assisted formatting: Accepts prompts that can help guide punctuation, capitalization, terminology, and transcript style.
  • File-based API workflow: Works well with uploaded audio files, making it easy to integrate into batch jobs, media systems, and backend services.
  • Language identification support: Can be used in workflows where detecting or handling multiple spoken languages is important.
  • Strong fit for content operations: Useful for subtitle generation, searchable transcript creation, customer call logging, interview processing, and voice-note conversion.

How to access and integrate whisper-1

Step 1: Sign Up for API Key

To start using whisper-1, first create an account on CometAPI and generate your API key from the dashboard. After logging in, go to the API management section, create a new key, and store it securely. This key will be required to authenticate every request you send to the whisper-1 API.

Step 2: Send Requests to whisper-1 API

Once you have your API key, you can send requests to the CometAPI endpoint using the whisper-1 model ID. Include your API key in the Authorization header and specify whisper-1 as the target model. For speech workflows, send an audio file to the appropriate transcription or translation endpoint.

curl --request POST \
  --url https://api.cometapi.com/v1/audio/transcriptions \
  --header "Authorization: Bearer YOUR_COMETAPI_KEY" \
  --header "Content-Type: multipart/form-data" \
  --form "model=whisper-1" \
  --form "file=@/path/to/audio.mp3"

For translation workflows, use the translation endpoint with the same model ID:

curl --request POST \
  --url https://api.cometapi.com/v1/audio/translations \
  --header "Authorization: Bearer YOUR_COMETAPI_KEY" \
  --header "Content-Type: multipart/form-data" \
  --form "model=whisper-1" \
  --form "file=@/path/to/audio.mp3"
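The same calls can be scripted without extra dependencies. The sketch below uses only Python's standard library and hand-builds the multipart body; the endpoint URL, `model`, and `file` field names mirror the curl examples above, while the optional `prompt` field and the `{"text": "..."}` response shape are assumptions to verify against the live API.

```python
import json
import urllib.request
import uuid

API_URL = "https://api.cometapi.com/v1/audio/transcriptions"

def build_multipart(fields, file_name, file_bytes):
    """Encode text fields plus one file part as multipart/form-data."""
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts.append(
            (f"--{boundary}\r\n"
             f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
             f"{value}\r\n").encode()
        )
    parts.append(
        (f"--{boundary}\r\n"
         f'Content-Disposition: form-data; name="file"; filename="{file_name}"\r\n'
         f"Content-Type: application/octet-stream\r\n\r\n").encode()
        + file_bytes + b"\r\n"
    )
    parts.append(f"--{boundary}--\r\n".encode())
    return b"".join(parts), f"multipart/form-data; boundary={boundary}"

def transcribe(api_key, audio_path, prompt=None):
    """POST an audio file to the transcription endpoint and return the text."""
    fields = {"model": "whisper-1"}
    if prompt:  # optional nudge for punctuation/terminology (assumed parameter name)
        fields["prompt"] = prompt
    with open(audio_path, "rb") as f:
        body, content_type = build_multipart(
            fields, audio_path.rsplit("/", 1)[-1], f.read()
        )
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": content_type,
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["text"]  # assumes a {"text": "..."} JSON body
```

Pointing `API_URL` at `/v1/audio/translations` with the same form fields switches the call from transcription to English translation.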

Step 3: Retrieve and Verify Results

After the request is processed, CometAPI will return the generated text result for your whisper-1 job. Review the response to confirm transcript quality, language handling, punctuation, and completeness. If needed, refine your audio preprocessing or prompting approach and resend the request to improve output consistency for your production use case.
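The verification step can be a small guard in code. This is a minimal sketch assuming the default JSON response shape `{"text": "..."}` used by OpenAI-compatible audio endpoints; check it against your actual payload before relying on it.

```python
import json

def extract_transcript(raw_response: str) -> str:
    """Return the transcript text, failing loudly on empty results."""
    payload = json.loads(raw_response)
    text = payload.get("text", "").strip()
    if not text:
        raise ValueError("empty transcript: re-check audio quality, format, or prompt")
    return text

# Illustrative response body only, not captured from the live API:
sample = '{"text": "Hello, this is a recorded meeting."}'
print(extract_transcript(sample))  # Hello, this is a recorded meeting.
```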

Features of Whisper-1

Explore the key features designed to enhance whisper-1's performance and usability, and see how they can benefit your projects and improve the user experience.

Whisper-1 Pricing

Explore whisper-1's competitive pricing, designed to suit a range of budgets and usage needs. Flexible pay-as-you-go plans mean you only pay for what you use, making it easy to scale as your requirements grow. See how whisper-1 can enhance your projects while keeping costs manageable.

CometAPI price (USD / M tokens): Input $24/M, Output $24/M
Official price (USD / M tokens): Input $30/M, Output $30/M
Discount: -20%
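As a quick sanity check on the listed rates, per-job cost at a USD-per-million-token price is a simple linear calculation. The token count below is hypothetical; actual billing depends on how CometAPI meters audio jobs.

```python
def cost_usd(tokens: int, rate_per_million: float) -> float:
    """Linear pay-as-you-go cost: tokens billed at a USD-per-1M-tokens rate."""
    return round(tokens / 1_000_000 * rate_per_million, 6)

# A hypothetical job billed at 50,000 tokens:
comet = cost_usd(50_000, 24.0)     # CometAPI rate
official = cost_usd(50_000, 30.0)  # official rate
print(comet, official)             # 1.2 1.5
print(round(1 - comet / official, 2))  # 0.2  (matches the listed -20% discount)
```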

Sample Code and API for Whisper-1

Access comprehensive sample code and API resources for whisper-1 to streamline your integration. The detailed documentation provides step-by-step guides to help you harness the full potential of whisper-1 in your projects.
