
gpt-3.5-turbo

Input: $0.4/M tokens
Output: $1.2/M tokens
GPT-3.5 Turbo is an artificial intelligence model provided by OpenAI, part of the official high-speed GPT-3.5 series, with support for tool calling (tool_calls). This model supports a maximum context length of 4096 tokens.

Technical Specifications of gpt-3.5-turbo

Parameter        Value
Model ID         gpt-3.5-turbo
Provider         OpenAI
Context Length   4096 tokens
Tool Calling     Supported
Model Type       Chat / text generation
Speed Profile    High-speed GPT-3.5 series

What is gpt-3.5-turbo?

gpt-3.5-turbo is an artificial intelligence model provided by OpenAI. It belongs to the official high-speed GPT-3.5 series and is designed for efficient conversational AI and text generation tasks. On CometAPI, gpt-3.5-turbo is the model identifier used to access it.

This model supports tool calling, making it suitable for workflows that require the model to interact with external functions or structured tools during inference. It also supports a maximum context length of 4096 tokens, which is appropriate for short-to-medium multi-turn conversations, lightweight assistants, summarization, classification, extraction, and general-purpose application logic.

Main features of gpt-3.5-turbo

  • Official OpenAI model: Provided by OpenAI and exposed on CometAPI under the model ID gpt-3.5-turbo.
  • High-speed generation: Optimized for fast response times, making it useful for real-time chat, lightweight assistants, and interactive applications.
  • Tool calling support: Supports tool calling (tool_calls), enabling integration with external tools, functions, and structured application workflows.
  • 4096-token context window: Can process up to 4096 tokens of context, suitable for concise prompts and short multi-turn exchanges.
  • General-purpose usability: Works well for chat, drafting, rewriting, summarization, extraction, tagging, and other common NLP tasks.
  • Simple API adoption: Easy to integrate through standard OpenAI-compatible chat completion patterns on CometAPI.
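Tool calling works by passing function schemas in the request's tools array. The sketch below shows one such schema in the OpenAI-compatible format; the get_weather function, its description, and its parameters are illustrative assumptions, not part of CometAPI.

```python
# Hypothetical tool definition following the OpenAI-compatible "tools" schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # illustrative function name
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]
```

This list would be sent alongside messages in the request body; when the model decides to use the tool, the response carries a tool_calls entry instead of plain text.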

How to access and integrate gpt-3.5-turbo

Step 1: Sign Up for an API Key

First, create a CometAPI account and generate your API key from the dashboard. After obtaining the key, store it securely and use it to authenticate every API request.
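A common pattern, assuming a POSIX shell, is to keep the key in an environment variable rather than hard-coding it in scripts (the value below is a placeholder):

```shell
# Export the CometAPI key so later requests can read it from the environment.
export COMETAPI_API_KEY="your-api-key-here"
```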

Step 2: Send Requests to the gpt-3.5-turbo API

Use the OpenAI-compatible API format and specify gpt-3.5-turbo as the model value in your request.

curl https://api.cometapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "user",
        "content": "Write a short introduction to tool calling."
      }
    ]
  }'
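The same request can be sent from Python using only the standard library. This is a minimal sketch mirroring the curl call above, with error handling omitted; it assumes the key is available in the COMETAPI_API_KEY environment variable.

```python
import json
import os
import urllib.request

API_URL = "https://api.cometapi.com/v1/chat/completions"

def build_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request for gpt-3.5-turbo."""
    payload = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('COMETAPI_API_KEY', '')}",
        },
    )

def send(prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# send("Write a short introduction to tool calling.") performs the live call.
```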

Step 3: Retrieve and Verify Results

After sending the request, parse the response JSON and read the generated content from the returned choices. Verify that the response matches your application requirements, and if you use tool-enabled workflows, confirm that any tool call outputs are handled correctly in your integration logic.
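The parsing described above can be sketched as follows, assuming the standard OpenAI-compatible response shape (choices → message → content / tool_calls). The mocked response body and its get_weather tool call are illustrative.

```python
import json

def extract_reply(response: dict):
    """Return (text, tool_calls) from an OpenAI-compatible chat completion response."""
    message = response["choices"][0]["message"]
    return message.get("content"), message.get("tool_calls") or []

# Mocked response in which the model asked to call a (hypothetical) tool:
mock_response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": None,
                "tool_calls": [
                    {
                        "id": "call_1",
                        "type": "function",
                        "function": {
                            "name": "get_weather",
                            "arguments": "{\"city\": \"Paris\"}",
                        },
                    }
                ],
            },
            "finish_reason": "tool_calls",
        }
    ]
}

text, calls = extract_reply(mock_response)
for call in calls:
    args = json.loads(call["function"]["arguments"])  # {'city': 'Paris'}
    # Dispatch call["function"]["name"] with args to your own tool registry here.
```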

Features for gpt-3.5-turbo

Explore the key features of gpt-3.5-turbo, designed to enhance performance and usability. Discover how these capabilities can benefit your projects and improve user experience.

Pricing for gpt-3.5-turbo

Explore competitive pricing for gpt-3.5-turbo, designed to fit various budgets and usage needs. Our flexible plans ensure you only pay for what you use, making it easy to scale as your requirements grow. Discover how gpt-3.5-turbo can enhance your projects while keeping costs manageable.
            Comet Price (USD / M tokens)   Official Price (USD / M tokens)   Discount
Input       $0.4                           $0.5                              -20%
Output      $1.2                           $1.5                              -20%
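Given the per-million-token rates above, estimating the cost of a request is a one-line calculation; a small sketch using the CometAPI prices:

```python
# Token prices for gpt-3.5-turbo on CometAPI (USD per million tokens).
INPUT_PRICE = 0.4
OUTPUT_PRICE = 1.2

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost from token counts at the listed per-million rates."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

# e.g. 100,000 input tokens and 50,000 output tokens:
print(round(cost_usd(100_000, 50_000), 4))  # → 0.1
```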

Sample code and API for gpt-3.5-turbo

Access comprehensive sample code and API resources for gpt-3.5-turbo to streamline your integration process. Our detailed documentation provides step-by-step guidance, helping you leverage the full potential of gpt-3.5-turbo in your projects.

Versions of gpt-3.5-turbo

gpt-3.5-turbo has multiple snapshots for several likely reasons: model updates can change output behavior, so older snapshots are kept for consistency; dated snapshots give developers a transition period for adaptation and migration; and different snapshots may correspond to global or regional endpoints to optimize user experience. For detailed differences between versions, please refer to the official documentation.
Available versions:
gpt-3.5-turbo-0125
gpt-3.5-turbo-1106
gpt-3.5-turbo-16k
gpt-3.5-turbo-0301
gpt-3.5-turbo
gpt-3.5-turbo-instruct
gpt-3.5-turbo-0613
gpt-3.5-turbo-16k-0613
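Pinning one of the snapshots above is simply a matter of passing its dated identifier as the model value; a minimal sketch (the prompt is illustrative):

```python
# Request body pinned to a dated snapshot for reproducible behavior.
payload = {
    "model": "gpt-3.5-turbo-0125",
    "messages": [{"role": "user", "content": "Hello"}],
}
```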
