Technical Specifications of gpt-4-0613
| Attribute | Details |
|---|---|
| Model ID | gpt-4-0613 |
| Provider | OpenAI |
| Model family | GPT-4 |
| Release snapshot | June 13, 2023 snapshot, released with updated behavior and function-calling support |
| Primary modality | Text input and text output |
| Context window | 8,192 tokens |
| Max output tokens | 8,192 tokens |
| Knowledge cutoff | September 2021 |
| Supported API style | Chat Completions API; listed in OpenAI model documentation as a pinned GPT-4 snapshot |
| Notable capability | Early GPT-4 snapshot known for improved function-calling behavior introduced in the June 13, 2023 update |
| Pricing reference | OpenAI lists GPT-4 (8K) pricing at $30 per 1M input tokens and $60 per 1M output tokens. |
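Given the pricing in the table above, per-request cost is simple arithmetic. The sketch below estimates the USD cost of a single call; the token counts in the example are illustrative.

```python
# Rates from the table above: USD per 1M tokens for gpt-4-0613 (8K).
INPUT_PRICE_PER_1M = 30.00
OUTPUT_PRICE_PER_1M = 60.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one gpt-4-0613 request."""
    return (input_tokens * INPUT_PRICE_PER_1M
            + output_tokens * OUTPUT_PRICE_PER_1M) / 1_000_000

# e.g. a 1,000-token prompt with a 500-token answer:
print(round(estimate_cost(1_000, 500), 4))  # → 0.06
```

The actual token counts for a request are reported back in the API response's `usage` field, so the same function can be fed real numbers after each call.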
What is gpt-4-0613
gpt-4-0613 is a pinned GPT-4 snapshot from OpenAI associated with the June 13, 2023 model update. It is commonly recognized as one of the early stable GPT-4 API versions used by developers who wanted more predictable behavior than a moving alias. OpenAI specifically introduced this snapshot as an updated GPT-4 model with improved function-calling support.
In practice, gpt-4-0613 is best understood as an older but historically important GPT-4 release for chat-based applications, structured tool invocation patterns, and production systems that benefited from version pinning. OpenAI’s current model documentation labels GPT-4 as an older high-intelligence model and includes gpt-4-0613 among the available GPT-4 snapshots.
For CometAPI users, the model ID gpt-4-0613 can be treated as the platform identifier for accessing this specific GPT-4 snapshot through a unified API workflow.
Main features of gpt-4-0613
- Pinned GPT-4 snapshot: gpt-4-0613 refers to a fixed GPT-4 version rather than a rolling alias, which is useful when you want stable behavior across testing and production deployments.
- Function-calling era model: OpenAI introduced gpt-4-0613 as part of the update that added improved function calling, making it an important model for tool-use workflows and structured external actions.
- Strong general reasoning: As a GPT-4 model, it belongs to OpenAI's higher-intelligence class of models, intended for complex instructions, multi-step reasoning, and higher-quality text generation than earlier mainstream chat models.
- 8K context handling: The model supports an 8,192-token context window, which is suitable for moderately long prompts, multi-turn chat history, and document-grounded tasks that do not require very large context sizes.
- Chat-oriented integration: It is designed for chat-style prompting patterns, making it a practical fit for assistants, support bots, internal copilots, and prompt-engineered business workflows.
- Legacy compatibility value: Because many older applications were built around this GPT-4 snapshot, gpt-4-0613 remains relevant for maintaining legacy integrations, reproducing earlier outputs, or migrating gradually from older OpenAI-compatible systems (an inference from its snapshot role and its continued listing in the model documentation).
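Since function calling is the defining feature of this snapshot, a brief sketch of the pattern is useful. With gpt-4-0613 the model returns a `function_call` object (`{"name": ..., "arguments": "<JSON string>"}`) that your application must execute locally. The schema and the `get_weather` tool below are hypothetical examples, not part of any official sample; the dispatcher shows the application-side half of the loop without calling the API.

```python
import json

# Schema you would advertise to the model via the `functions` request field
# introduced alongside gpt-4-0613. `get_weather` is a hypothetical tool.
WEATHER_FUNCTION = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

def get_weather(city: str) -> dict:
    # Stubbed implementation; a real app would query a weather service here.
    return {"city": city, "forecast": "sunny"}

LOCAL_FUNCTIONS = {"get_weather": get_weather}

def dispatch(function_call: dict) -> dict:
    """Execute the function the model asked for.

    `function_call` mirrors the shape gpt-4-0613 returns:
    {"name": "...", "arguments": "<JSON string>"}.
    """
    fn = LOCAL_FUNCTIONS[function_call["name"]]
    args = json.loads(function_call["arguments"])
    return fn(**args)

# Simulated model output, i.e. what a gpt-4-0613 response message may contain:
result = dispatch({"name": "get_weather", "arguments": '{"city": "Paris"}'})
print(result)
```

In a full loop, the result would be appended to the conversation as a `function` role message and sent back to the model so it can produce the final user-facing answer.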
How to access and integrate gpt-4-0613
Step 1: Sign Up for API Key
Sign up on CometAPI and create an API key from the dashboard. After that, store the key securely as an environment variable such as COMETAPI_API_KEY so your application can authenticate requests safely.
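As a minimal sketch, on macOS or Linux the key can be exported for the current shell session; the key value below is a placeholder, not a real credential.

```shell
# Store the CometAPI key for the current shell session (placeholder value).
export COMETAPI_API_KEY="sk-your-key-here"

# Verify the variable is set before wiring it into your application.
test -n "$COMETAPI_API_KEY" && echo "COMETAPI_API_KEY is set"
```

For production deployments, prefer your platform's secret manager over shell profiles so the key never lands in source control.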
Step 2: Send Requests to gpt-4-0613 API
Use CometAPI’s OpenAI-compatible endpoint and set the model field to gpt-4-0613.
```shell
curl https://api.cometapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -d '{
    "model": "gpt-4-0613",
    "messages": [
      {
        "role": "user",
        "content": "Write a short introduction for the GPT-4-0613 model."
      }
    ]
  }'
```
Alternatively, call the same endpoint with the official OpenAI Python SDK by overriding the base URL:

```python
from openai import OpenAI

client = OpenAI(
    api_key="<COMETAPI_API_KEY>",
    base_url="https://api.cometapi.com/v1",
)

response = client.chat.completions.create(
    model="gpt-4-0613",
    messages=[
        {"role": "user", "content": "Write a short introduction for the GPT-4-0613 model."}
    ],
)

print(response.choices[0].message.content)
```
Step 3: Retrieve and Verify Results
Read the generated content from the response object, then verify output quality for your use case with representative prompts, formatting checks, and application-level validation. If you are integrating the model into production workflows, it is good practice to test consistency, latency, and fallback behavior before full deployment.
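One minimal way to implement these checks is a small validator over the response object; the specific checks below (finish reason and non-empty content) are illustrative assumptions, not CometAPI requirements. A stand-in object shaped like an SDK response is used so the sketch runs without an API call.

```python
from types import SimpleNamespace

def validate_completion(response) -> list:
    """Return a list of problems found in a chat completion response."""
    problems = []
    choice = response.choices[0]
    # "length" would mean the answer was truncated at the 8,192-token limit.
    if choice.finish_reason != "stop":
        problems.append(f"unexpected finish_reason: {choice.finish_reason}")
    content = choice.message.content or ""
    if not content.strip():
        problems.append("empty completion content")
    return problems

# Exercise the validator on a stand-in shaped like an SDK response object.
fake = SimpleNamespace(
    choices=[SimpleNamespace(finish_reason="stop",
                             message=SimpleNamespace(content="Hello."))]
)
print(validate_completion(fake))  # → []
```

In production, a non-empty problem list can trigger a retry, a fallback model, or an alert, which covers the fallback-behavior testing mentioned above.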