
GPT-4o mini Audio

Input: $0.12/M
Output: $0.48/M
GPT-4o mini Audio is a multimodal model for voice and text interaction. It performs speech recognition, translation, and speech synthesis (text-to-speech), follows instructions, and can call tools for structured actions, with streaming responses. Representative uses include real-time voice assistants, live captioning and translation, call summarization, and voice-controlled applications. Technical highlights include audio input and output, streaming responses, function calling, and structured JSON output.
Commercial use

Technical Specifications of gpt-4o-mini-audio

Model ID: gpt-4o-mini-audio
Model type: Multimodal speech-and-text model
Core modalities: Audio input, text input, audio output, text output
Primary capabilities: Speech recognition, speech translation, text-to-speech, instruction following, function calling, structured JSON generation
Response mode: Standard and streaming responses
Best for: Real-time voice assistants, live captioning, translation, call summarization, voice-controlled workflows
Interaction style: Conversational, tool-enabled, low-friction multimodal exchanges
Structured output support: Yes, including schema-guided JSON-style responses
Tool use: Yes, supports function calling for structured external actions
Integration pattern: API-based requests from backend services, apps, agents, and real-time systems

What is gpt-4o-mini-audio?

gpt-4o-mini-audio is a multimodal AI model designed for applications that combine spoken and written interaction. It can understand speech, process text instructions, generate spoken responses, and support workflows that require fast, interactive exchanges between users and software systems.

This model is well suited for products that need voice-first experiences without giving up structured automation. It can transcribe speech, translate audio across languages, respond conversationally, and trigger tools or functions when an application needs the model to take action beyond plain text generation.

Because it supports both audio and text pathways, gpt-4o-mini-audio is a practical choice for building assistants that listen, think, speak, and coordinate downstream systems. Common use cases include customer support voice agents, meeting and call summaries, real-time captioning, multilingual assistants, and app interfaces controlled by voice.

Main features of gpt-4o-mini-audio

  • Audio input and output: Accepts spoken input and can generate spoken responses, enabling natural voice-based application flows.
  • Speech recognition: Converts user speech into usable text for downstream reasoning, automation, and interface control.
  • Speech translation: Supports translation-oriented workflows for multilingual conversations, captions, and accessibility scenarios.
  • Text-to-speech responses: Produces audio replies for interactive assistants, hands-free tools, and spoken user experiences.
  • Instruction following: Handles guided prompts reliably for assistant behavior, operational workflows, and domain-specific tasks.
  • Streaming responses: Supports incremental output for lower-latency user experiences in real-time voice and captioning systems.
  • Function calling: Can invoke tools or application-defined functions for structured actions such as lookups, booking flows, or workflow orchestration.
  • Structured JSON output: Useful for systems that need predictable machine-readable responses for parsing, validation, and automation.
  • Multimodal app support: Fits products that combine chat, voice, transcripts, summaries, and action-taking in a single experience.
  • Production-friendly flexibility: Works well for assistants, support flows, live transcription pipelines, and voice-controlled applications that need both natural interaction and structured outputs.
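The function-calling and structured-output features above can be sketched as a request payload. The field names below follow the widely used OpenAI-style tool schema and are assumptions for illustration, not confirmed CometAPI fields:

```python
import json

# Hypothetical tool definition for a booking action. The "tools" schema
# below follows the common JSON-Schema style for function calling; the
# exact field names the API expects may differ.
payload = {
    "model": "gpt-4o-mini-audio",
    "input": "Book a table for two at 7pm and confirm out loud.",
    "tools": [
        {
            "type": "function",
            "name": "book_table",
            "description": "Reserve a restaurant table",
            "parameters": {
                "type": "object",
                "properties": {
                    "party_size": {"type": "integer"},
                    "time": {"type": "string"},
                },
                "required": ["party_size", "time"],
            },
        }
    ],
}

print(json.dumps(payload, indent=2))
```

When the model decides the booking tool is needed, it returns a structured call with these arguments instead of free-form text, which your application can validate and execute.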

How to access and integrate gpt-4o-mini-audio

Step 1: Sign Up and Get an API Key

To get started, create a CometAPI account and generate your API key from the dashboard. Store the key securely and load it through an environment variable in your application. This key will be used to authenticate every request you send to the gpt-4o-mini-audio API.
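Loading the key from the environment can look like this in Python (a minimal sketch; the placeholder fallback is for illustration only, and production code should fail fast when the variable is missing):

```python
import os

# Read the CometAPI key from the environment rather than hard-coding it.
# "sk-placeholder" is a stand-in default for this sketch only.
API_KEY = os.environ.get("COMETAPI_API_KEY", "sk-placeholder")

# Every request to gpt-4o-mini-audio is authenticated with this header.
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```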

Step 2: Send Requests to gpt-4o-mini-audio API

After obtaining your API key, send HTTPS requests to the CometAPI endpoint using your preferred SDK or HTTP client. Set the model field to gpt-4o-mini-audio and include the appropriate input payload for your use case, such as text, audio, streaming parameters, tool definitions, or structured output instructions.

curl https://api.cometapi.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -d '{
    "model": "gpt-4o-mini-audio",
    "input": "Transcribe this audio and return a short summary."
  }'
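The same request can be built in Python with only the standard library. This mirrors the curl call above; the endpoint and payload come from that example, and the commented-out send is left for you to enable:

```python
import json
import os
import urllib.request

API_URL = "https://api.cometapi.com/v1/responses"
API_KEY = os.environ.get("COMETAPI_API_KEY", "sk-placeholder")

# Same payload as the curl example above.
payload = {
    "model": "gpt-4o-mini-audio",
    "input": "Transcribe this audio and return a short summary.",
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(request) as resp:
#     result = json.loads(resp.read())
```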

Step 3: Retrieve and Verify Results

When the API responds, parse the returned content based on the format you requested, such as plain text, audio output metadata, streamed events, or structured JSON. Verify that the response matches your expected schema, confirm tool calls if your workflow uses function calling, and log outputs appropriately so your integration with gpt-4o-mini-audio remains reliable in production.
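The verification step can be sketched as follows. The response fields used here (`output_text`, `tool_calls`) are a hypothetical shape for illustration; check the actual response structure returned by the endpoint you call:

```python
import json

# Hypothetical raw response for illustration only; real field names
# returned by the /v1/responses endpoint may differ.
raw = json.dumps({
    "model": "gpt-4o-mini-audio",
    "output_text": "The caller asked to reschedule their appointment.",
    "tool_calls": [],
})

response = json.loads(raw)

# Basic checks before handing the result to downstream systems.
if response.get("model") != "gpt-4o-mini-audio":
    raise ValueError("unexpected model in response")

summary = response.get("output_text", "")
if not summary:
    raise ValueError("empty model output")

# Confirm any tool calls before executing side effects.
for call in response.get("tool_calls", []):
    print("tool requested:", call.get("name"))

print(summary)
```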

More models