Technical Specifications of gpt-realtime-mini
| Specification | Details |
|---|---|
| Model ID | gpt-realtime-mini |
| Model type | Realtime multimodal model |
| Description | An economical member of the realtime GPT family, capable of responding to audio and text inputs in real time via WebRTC, WebSocket, or SIP connections. |
| Input modalities | Text, audio, image |
| Output modalities | Text, audio |
| Context window | 32,000 tokens |
| Max output tokens | 4,096 tokens |
| Supported interfaces | WebRTC, WebSocket, SIP |
| Supported features | Function calling supported; structured outputs, fine-tuning, distillation, and predicted outputs not supported |
| Recommended use | Low-latency voice agents, realtime multimodal applications, and cost-sensitive interactive experiences |
What is gpt-realtime-mini?
gpt-realtime-mini is a cost-efficient realtime model designed for applications that need fast, natural interaction with users through live audio and text. It is intended for low-latency multimodal experiences, allowing developers to build assistants that can listen, respond, and stream output in realtime rather than relying on slower multi-step pipelines.
Compared with larger realtime variants, gpt-realtime-mini is positioned as the economical option for developers who want realtime speech and text capabilities while managing cost and maintaining responsive performance. It works across browser, server, and telephony-style connection patterns through WebRTC, WebSocket, and SIP.
Main features of gpt-realtime-mini
- Realtime audio and text interaction: Supports low-latency conversations with streaming input and output, making it suitable for live assistants, voice bots, and interactive agents.
- Cost-efficient deployment: Positioned as an economical version of the realtime model family, making it attractive for high-volume or budget-sensitive applications.
- Multiple connection methods: Can be integrated through WebRTC for browser clients, WebSocket for server-side systems, and SIP for telephony or VoIP scenarios.
- Multimodal input support: Accepts text, audio, and image input, enabling richer user interactions and more flexible application design.
- Speech-capable output: Produces both text and audio output, which is useful for conversational interfaces and spoken response systems.
- Function calling support: Supports function calling, allowing applications to connect the model to tools, workflows, or backend actions during realtime sessions.
- Built for voice agents: Well suited for speech-to-speech assistants and realtime customer interaction experiences where interruption handling and fast turn-taking matter.
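As a sketch of the function-calling feature listed above, the snippet below builds a `session.update` event that registers one tool for a realtime session. The tool itself (`get_weather` and its `location` parameter) is a hypothetical example, not part of the API; only the surrounding event and tool-definition shape follows the Realtime API's standard format.

```javascript
// Sketch: register a tool for a realtime session via session.update.
// The "get_weather" tool and its parameters are hypothetical examples.
function buildToolSessionUpdate() {
  return {
    type: "session.update",
    session: {
      modalities: ["text", "audio"],
      tools: [
        {
          type: "function",
          name: "get_weather",              // hypothetical tool name
          description: "Look up current weather for a city.",
          parameters: {
            type: "object",
            properties: {
              location: { type: "string" }  // hypothetical parameter
            },
            required: ["location"]
          }
        }
      ],
      tool_choice: "auto"
    }
  };
}

console.log(JSON.stringify(buildToolSessionUpdate(), null, 2));
```

Sending this event over an open connection lets the model emit tool calls mid-conversation, which your backend can execute and answer during the same session.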
How to access and integrate gpt-realtime-mini
Step 1: Sign Up and Create an API Key
To get started, sign up on CometAPI and generate your API key from the dashboard. Once you have your key, keep it secure and store it in your environment variables for server-side use.
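For example, the key can be placed in an environment variable before starting your server; the variable name `COMETAPI_API_KEY` matches the connection code in Step 2, and the value shown is a placeholder:

```shell
# Store the CometAPI key in the environment (value is a placeholder).
export COMETAPI_API_KEY="your-api-key-here"
```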
Step 2: Connect to gpt-realtime-mini API
The Realtime API uses WebSocket connections. Connect to CometAPI's WebSocket endpoint; the example below uses the Node.js `ws` package, which supports the custom headers the endpoint requires:

```javascript
// Requires the "ws" package (npm install ws); the browser
// WebSocket API does not support custom headers.
const WebSocket = require("ws");

const ws = new WebSocket(
  "wss://api.cometapi.com/v1/realtime?model=gpt-realtime-mini",
  {
    headers: {
      "Authorization": "Bearer " + process.env.COMETAPI_API_KEY,
      "OpenAI-Beta": "realtime=v1"
    }
  }
);

// Configure the session once the connection is open.
ws.on("open", () => {
  ws.send(JSON.stringify({
    type: "session.update",
    session: {
      modalities: ["text", "audio"],
      instructions: "You are a helpful assistant."
    }
  }));
});

// Every server event arrives as a JSON-encoded message.
ws.on("message", (data) => {
  console.log(JSON.parse(data));
});
```
Step 3: Retrieve and Verify Results
The Realtime API streams responses over the WebSocket connection as JSON-encoded events. Listen for `response.audio.delta` events for audio output (base64-encoded chunks) and `response.text.delta` for text. Verify that the session is established (a `session.updated` event follows your `session.update`) and that responses stream correctly.
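As an illustrative sketch of that verification step, the handler below accumulates text deltas and decodes audio deltas from incoming events. The event shapes follow the beta Realtime API; the `makeEventHandler` helper name is our own:

```javascript
// Sketch: accumulate streamed output from realtime events.
// Text arrives in response.text.delta events; audio arrives as
// base64-encoded chunks in response.audio.delta events.
function makeEventHandler() {
  const state = { text: "", audioChunks: [] };
  function onEvent(event) {
    switch (event.type) {
      case "response.text.delta":
        state.text += event.delta;
        break;
      case "response.audio.delta":
        state.audioChunks.push(Buffer.from(event.delta, "base64"));
        break;
      case "response.done":
        console.log("final text:", state.text);
        break;
    }
  }
  return { state, onEvent };
}

// Usage with the WebSocket from Step 2:
//   const { onEvent } = makeEventHandler();
//   ws.on("message", (data) => onEvent(JSON.parse(data)));
```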