
gpt-4-v

Per request: $0.04
Commercial use

Technical Specifications of gpt-4-v

Model ID: gpt-4-v
Provider family: OpenAI GPT-4 with vision capabilities
Model type: Multimodal large language model
Primary modalities: Text input, image input, text output
Core capability: Understands and analyzes images alongside natural-language prompts
Input image methods: Image URL, Base64-encoded image, or uploaded file ID
Multi-image support: Yes, multiple images can be included in a single request
Typical API patterns: Chat Completions-style vision requests and newer multimodal/Responses-style image analysis workflows
Best suited for: Visual question answering, OCR-style understanding, document and UI analysis, captioning, accessibility, and image-grounded reasoning
Context notes: Image inputs count toward usage and billing as tokens in supported API workflows
Availability status: Introduced by OpenAI as GPT-4 with vision, though OpenAI's current platform documentation now emphasizes newer multimodal models and image-capable APIs for many production use cases

What is gpt-4-v?

gpt-4-v is CometAPI’s platform identifier for GPT-4 with vision, a multimodal version of GPT-4 designed to interpret and reason about image inputs in addition to text. OpenAI described GPT-4V as the capability that lets GPT-4 analyze user-provided images, enabling applications that combine visual understanding with conversational responses.

In practice, this model is used when an application needs language intelligence grounded in visual content. That includes describing scenes, extracting meaning from screenshots or charts, reading text embedded in images, comparing multiple images, and answering follow-up questions about what appears in a picture. OpenAI’s vision documentation also notes that image inputs can be passed by URL, Base64 data URL, or file ID, making the model flexible for both web and backend pipelines.

Although OpenAI’s latest documentation now highlights newer image-capable model families and APIs, GPT-4V remains an important reference point in the evolution of multimodal AI because it brought GPT-4-class reasoning to image understanding workflows. That makes gpt-4-v a useful compatibility target on aggregation platforms when developers want a GPT-4-style vision model interface. This last point is an inference based on OpenAI’s historical GPT-4V positioning and its newer documentation emphasis on later multimodal models.

Main features of gpt-4-v

  • Multimodal understanding: gpt-4-v can process both natural-language instructions and image inputs, allowing users to ask questions about visual content rather than relying on text alone.
  • Image-grounded reasoning: The model can identify objects, scenes, layouts, and relationships inside an image, then use GPT-4-style reasoning to produce useful textual answers.
  • OCR-like text recognition: When text appears inside an image, OpenAI’s vision guidance indicates the model can understand that text, which is valuable for screenshots, signs, forms, slides, and document snapshots.
  • Flexible image ingestion: Developers can provide image inputs as public URLs, Base64-encoded data URLs, or uploaded file references, making integration easier across browser, mobile, and server-side systems; a message sketch showing these input shapes follows this list.
  • Multiple-image analysis: The model can accept more than one image in a single request, which supports comparison, step-by-step inspection, and multi-page or multi-view workflows.
  • Strong accessibility use cases: OpenAI highlighted real-world accessibility applications for GPT-4-powered vision, including support for interpreting visual environments for blind and low-vision users.
  • Broad application fit: gpt-4-v is well suited for visual Q&A, screenshot interpretation, content moderation assistance, image captioning, product-image analysis, UI inspection, and document understanding. This is an inference from the documented vision capabilities and example use cases.
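
For reference, here is a minimal sketch of the message-content shapes used for image input, assuming CometAPI mirrors OpenAI's Chat Completions vision schema; the example URL and the <BASE64_DATA> placeholder are illustrative, and file-ID input follows the newer Responses-style workflows instead:

{
  "role": "user",
  "content": [
    { "type": "text", "text": "What do these two images have in common?" },
    { "type": "image_url", "image_url": { "url": "https://example.com/photo-1.jpg" } },
    { "type": "image_url", "image_url": { "url": "data:image/png;base64,<BASE64_DATA>" } }
  ]
}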

How to access and integrate gpt-4-v

Step 1: Sign Up for an API Key

To start using gpt-4-v, first create an account on CometAPI and generate your API key from the dashboard. After signing in, store the key securely and load it through an environment variable or your application’s secret manager so it is not exposed in client-side code.
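
For example, in a shell environment you might export the key before running the samples below (the key value shown is a placeholder):

# Read the key from a secret manager or shell profile rather than
# hard-coding it; the examples below reference $COMETAPI_API_KEY.
export COMETAPI_API_KEY="sk-your-key-here"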

Step 2: Send Requests to the gpt-4-v API

Once your API key is ready, send requests to the CometAPI endpoint and set the model field to gpt-4-v.

# The image URL below is a placeholder; the multimodal content array follows
# the Chat Completions-style vision request format noted in the spec table.
curl https://api.cometapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -d '{
    "model": "gpt-4-v",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe the image and extract any visible text." },
          { "type": "image_url", "image_url": { "url": "https://example.com/sample-image.jpg" } }
        ]
      }
    ]
  }'

The request above pairs a text instruction with an image passed by public URL in a single multimodal message. For best results, provide clear prompts, specify the task you want performed on the image, and structure downstream handling for potentially detailed outputs.
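
When an image is not publicly hosted, for example in a server-side pipeline, the same request can carry it inline as a Base64 data URL. The following is a minimal sketch assuming the same OpenAI-style vision schema, with <BASE64_DATA> standing in for your encoded image:

curl https://api.cometapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -d '{
    "model": "gpt-4-v",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Read and transcribe any text in this scanned form." },
          { "type": "image_url", "image_url": { "url": "data:image/jpeg;base64,<BASE64_DATA>" } }
        ]
      }
    ]
  }'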

Step 3: Retrieve and Verify Results

After the API returns a response, parse the generated output from the response body and validate that it matches your application’s expected format. For production use, it is a good practice to verify image-based answers, especially for OCR, compliance, accessibility, or decision-support workflows, because vision models can still misread small details or ambiguous visuals.
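
As a starting point, the assistant's text can be extracted with jq, assuming CometAPI returns an OpenAI-compatible Chat Completions response body; request.json here is a hypothetical file holding one of the payloads above:

# -s silences curl's progress meter; jq -r prints the message text unquoted.
curl -s https://api.cometapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -d @request.json \
  | jq -r '.choices[0].message.content'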

Features of gpt-4-v

Explore the core features of gpt-4-v, designed to improve performance and usability. Learn how these features can benefit your projects and improve the user experience.

Pricing for gpt-4-v

Explore competitive pricing for gpt-4-v, designed to fit a range of budgets and usage needs. Flexible plans ensure you pay only for what you actually use, so you can scale easily as your needs grow. Learn how gpt-4-v can improve your project's results while keeping costs under control.

Prices in USD:
CometAPI price: $0.04 per request
Official price: $0.05 per request
Discount: -20%

Sample code and APIs for gpt-4-v

Access complete sample code and API resources to streamline your gpt-4-v integration. Detailed documentation provides step-by-step guidance to help you get the most out of gpt-4-v in your projects.

More models


Nano Banana 2

Input: $0.4/M
Output: $2.4/M
Core capabilities overview: Resolution: up to 4K (4096×4096), on par with Pro. Reference-image consistency: supports up to 14 reference images (10 objects + 4 characters) while maintaining style and character consistency. Extreme aspect ratios: adds 1:4, 4:1, 1:8, and 8:1 ratios, suited to long-form images, posters, and banners. Text rendering: advanced text generation for infographic and marketing-poster layouts. Search enhancement: integrates Google Search + Image Search. Grounding: built-in thinking process; reasons over complex prompts before generating.

Claude Opus 4.6

Input: $4/M
Output: $20/M
Claude Opus 4.6 is Anthropic's "Opus"-class large language model, released in February 2026. It is positioned as a workhorse for knowledge work and research workflows, with a focus on improved long-context reasoning, multi-step planning, tool use (including agentic software workflows), and computer-use tasks such as automated slide and spreadsheet creation.

Claude Sonnet 4.6

Input: $2.4/M
Output: $12/M
Claude Sonnet 4.6 is the most capable Sonnet model to date, with across-the-board upgrades in coding, computer use, long-context reasoning, agentic planning, knowledge work, and design. Sonnet 4.6 also offers a 1M-token context window, currently in beta.

GPT-5.4 nano

Input: $0.16/M
Output: $1/M
GPT-5.4 nano is designed for tasks where speed and cost matter most, such as classification, data extraction, ranking, and sub-agents.

GPT-5.4 mini

Input: $0.6/M
Output: $3.6/M
GPT-5.4 mini brings the strengths of GPT-5.4 to a faster, more efficient model designed for high-volume workloads.

Claude Mythos Preview

Coming soon
Input: $60/M
Output: $240/M
Claude Mythos Preview is Anthropic's most capable frontier model to date, showing a marked jump in scores across multiple benchmarks compared with its previous frontier model, Claude Opus 4.6.