
gpt-4-1106-preview

Input: $8 / 1M tokens
Output: $16 / 1M tokens
Commercial use supported

Technical Specifications of gpt-4-1106-preview

  • Model ID: gpt-4-1106-preview
  • Provider: OpenAI
  • Model family: GPT-4 Turbo Preview
  • Release context: Announced as part of OpenAI DevDay updates in November 2023
  • Primary modality: Text input and text output
  • Context window: Up to 128K tokens
  • API compatibility: Usable through chat/completions-style workflows and OpenAI API model routing for GPT-family models
  • Structured output support: Supports JSON mode; function calling is supported in this model generation family
  • Notable developer feature: Seed-based reproducible output was documented for gpt-4-1106-preview in beta
  • Pricing reference: OpenAI's legacy pricing page lists gpt-4-1106-preview at $10.00 per 1M input tokens and $30.00 per 1M output tokens
  • Lifecycle status: Listed by OpenAI among legacy/deprecated GPT model snapshots, so availability may depend on platform compatibility and migration policy

What is gpt-4-1106-preview?

gpt-4-1106-preview is OpenAI’s preview snapshot of GPT-4 Turbo, a faster and lower-cost GPT-4 generation introduced for developers during OpenAI DevDay in November 2023. It was positioned as a high-capability text model with a much larger context window than earlier GPT-4 variants, making it suitable for long prompts, document-heavy workflows, coding assistance, structured generation, and conversational AI.

Compared with older GPT-4 API snapshots, this model was notable for combining stronger efficiency with a 128K context window and developer-oriented capabilities such as JSON mode and improved function-calling workflows. Those features made it especially useful for building assistants, extraction pipelines, and tool-using applications that needed more reliable machine-readable output.

Today, gpt-4-1106-preview is best understood as a legacy OpenAI model snapshot that still appears in historical pricing and deprecation materials. In CometAPI, the identifier gpt-4-1106-preview serves as the platform model code to access this model route where supported.

Main features of gpt-4-1106-preview

  • 128K long-context processing: Designed to handle very large prompts and multi-document inputs, which is useful for summarization, analysis, retrieval-augmented generation, and long-session conversations.
  • GPT-4 Turbo efficiency: OpenAI introduced GPT-4 Turbo as a lower-cost, faster alternative to earlier GPT-4 variants, improving practicality for production workloads.
  • JSON mode: Helps developers generate valid JSON-shaped outputs for workflows that depend on structured responses instead of free-form text.
  • Function calling support: Well-suited for applications that connect the model to external tools, APIs, or internal business logic.
  • Reproducibility controls: OpenAI documented beta support for seed-based output consistency on gpt-4-1106-preview, which can be helpful in testing and evaluation workflows.
  • Strong general-purpose capability: Appropriate for advanced chat, coding help, content generation, classification, transformation, and instruction-following use cases associated with GPT-4-class models.
  • Legacy snapshot availability: Because OpenAI now classifies this model line as older or legacy, teams integrating it should verify current support and plan for future migration if needed.
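JSON mode and the seed control are both enabled through fields on the request body. The following Python sketch shows one plausible way to assemble such a body; the prompt text and the `build_structured_request` helper are illustrative, while `response_format` and `seed` follow the OpenAI-style chat/completions schema described above.

```python
# Sketch of a chat/completions request body that enables JSON mode and a
# fixed seed for best-effort reproducible output. The helper name and the
# prompt wording are illustrative assumptions, not part of the API itself.
def build_structured_request(document_text: str) -> dict:
    return {
        "model": "gpt-4-1106-preview",
        "response_format": {"type": "json_object"},  # JSON mode
        "seed": 42,           # beta reproducibility control
        "temperature": 0,     # reduce sampling variance for extraction tasks
        "messages": [
            {
                "role": "system",
                "content": (
                    "Extract the title and a one-sentence summary from the "
                    "user's text. Reply as a JSON object with the keys "
                    "'title' and 'summary'."
                ),
            },
            {"role": "user", "content": document_text},
        ],
    }
```

Note that JSON mode guarantees syntactically valid JSON, not a particular schema, so the expected keys still need to be spelled out in the prompt and validated on the way back.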

How to access and integrate gpt-4-1106-preview

Step 1: Sign Up for API Key

To use gpt-4-1106-preview, first create a CometAPI account and generate an API key from the dashboard. After signing in, store your API key securely in an environment variable such as COMETAPI_API_KEY. This allows your application to authenticate requests safely without hardcoding credentials in source files.

Step 2: Send Requests to gpt-4-1106-preview API

Use CometAPI’s OpenAI-compatible endpoint and set the model field to gpt-4-1106-preview.

curl https://api.cometapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -d '{
    "model": "gpt-4-1106-preview",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Explain the main benefits of long-context language models."}
    ],
    "temperature": 0.7
  }'
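The same request can be sent from Python using only the standard library. This is a minimal sketch, assuming the endpoint and payload shown in the curl example above; the `build_payload` and `send_request` helper names are illustrative.

```python
import json
import os
import urllib.request

# CometAPI's OpenAI-compatible chat/completions endpoint (from the curl
# example above).
API_URL = "https://api.cometapi.com/v1/chat/completions"

def build_payload(user_message: str, temperature: float = 0.7) -> dict:
    """Assemble the request body for gpt-4-1106-preview."""
    return {
        "model": "gpt-4-1106-preview",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "temperature": temperature,
    }

def send_request(payload: dict) -> dict:
    """POST the payload, authenticating via the COMETAPI_API_KEY env var."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['COMETAPI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

In production code you would likely swap `urllib` for an HTTP client with retries and timeouts, or use an OpenAI-compatible SDK pointed at the CometAPI base URL.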

Step 3: Retrieve and Verify Results

The API returns a JSON response containing the model’s generated output, token usage, and other metadata. In production, you should verify the response format, validate structured fields when using JSON-style outputs, and log request IDs or usage statistics for monitoring and debugging. If your workflow depends on deterministic behavior or schema compliance, test prompts carefully and confirm that the returned content matches your application requirements.
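The verification step above can be sketched as a small helper. This assumes the standard OpenAI-compatible response shape (`choices`, `message`, `usage`); the `extract_reply` function name and the fields it chooses to log are assumptions for illustration.

```python
# Hypothetical helper that checks a chat/completions response before the
# rest of the application consumes it, and pulls out metadata worth logging.
def extract_reply(response: dict) -> dict:
    choice = response["choices"][0]
    # A "length" finish reason means the output was truncated mid-answer.
    if choice.get("finish_reason") == "length":
        raise ValueError("Output truncated: raise max_tokens or shorten input")
    usage = response.get("usage", {})
    return {
        "content": choice["message"]["content"],
        "request_id": response.get("id"),          # for support/debugging
        "total_tokens": usage.get("total_tokens"), # for cost monitoring
    }
```

When JSON mode is in use, a follow-up `json.loads` on `content` plus a check for the expected keys closes the loop on schema validation.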

Features of gpt-4-1106-preview

Explore the key features designed to improve the performance and usability of gpt-4-1106-preview, and see how they can support your projects and improve the user experience.

gpt-4-1106-preview Pricing

Explore gpt-4-1106-preview's competitive pricing, designed for a range of budgets and usage needs. Flexible pay-as-you-go plans mean you pay only for what you use, making it easy to scale as your requirements grow.

  • CometAPI price (USD / 1M tokens): Input $8, Output $16
  • Official price (USD / 1M tokens): Input $10, Output $20
  • Discount: -20%

Sample code and API for gpt-4-1106-preview

Access comprehensive sample code and API resources for gpt-4-1106-preview to streamline your integration process. Detailed documentation provides step-by-step guides to help you use gpt-4-1106-preview to its full potential in your projects.
