
glm-4.5-airx

Input: $1.6/M
Output: $6.4/M
A lightweight, high-performance, ultra-fast-response model that perfectly combines Air's cost advantage with X's speed advantage, making it an ideal choice for balancing performance and efficiency.
Commercial use

Technical Specifications of glm-4-5-airx

Model ID: glm-4-5-airx
Provider: Zhipu AI
Category: Large Language Model
Primary Positioning: Lightweight, high-performance, ultra-fast response model
Core Advantage: Combines the cost advantages of Air with the speed advantages of X
Best Use Cases: Low-latency chat, real-time assistants, high-throughput applications, cost-efficient inference
Input Modalities: Text
Output Modalities: Text
Context Window: Supports long-context conversational and instruction-following tasks
Inference Style: Optimized for responsiveness, efficiency, and balanced performance

What is glm-4-5-airx?

glm-4-5-airx is a lightweight, high-performance, ultra-fast response model designed for developers and businesses that need strong language capabilities with excellent efficiency. It is positioned as a practical option for applications where both speed and cost matter, making it especially suitable for production workloads that require responsive interactions at scale.

This model perfectly combines the cost advantages of Air and the speed advantages of X, making it an ideal choice for balancing performance and efficiency. Whether you are building a real-time chatbot, an internal productivity assistant, a customer support workflow, or an automation layer for text processing, glm-4-5-airx offers a streamlined solution that prioritizes quick turnaround times without sacrificing practical output quality.

Main features of glm-4-5-airx

  • Ultra-fast response: Designed for low-latency generation, making it well suited for interactive products and real-time user experiences.
  • Lightweight deployment profile: Its efficient design makes it a strong fit for applications that need fast scaling and high request throughput.
  • Balanced cost-performance ratio: Combines affordability with strong responsiveness, helping teams control inference costs while maintaining useful output quality.
  • High-performance text generation: Supports common natural language tasks such as question answering, summarization, rewriting, classification, and conversational assistance.
  • Production-friendly reliability: A practical choice for business applications that require stable, efficient, and repeatable text generation behavior.
  • Ideal for efficiency-focused use cases: Particularly useful for startups, enterprise tools, customer service systems, and API products where performance per dollar is critical.

How to access and integrate glm-4-5-airx

Step 1: Sign Up for API Key

To get started, sign up on the CometAPI platform and generate your API key from the dashboard. After creating your account, store the API key securely and use it to authenticate every request to the API.
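As a minimal sketch of secure key handling, the snippet below loads the key from an environment variable instead of hard-coding it in source. The variable name `COMETAPI_KEY` is an assumption, not something the platform mandates:

```python
import os

# Assumed environment variable name; set it in your shell first, e.g.
#   export COMETAPI_KEY="sk-..."
api_key = os.environ.get("COMETAPI_KEY", "")

# Every request to the API authenticates with a Bearer token header.
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
```

Keeping the key out of source code makes it easy to rotate and prevents it from leaking into version control.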

Step 2: Send Requests to glm-4-5-airx API

Use the standard OpenAI-compatible chat completions interface and specify glm-4-5-airx as the model. Example request:

curl --request POST \
  --url https://api.cometapi.com/v1/chat/completions \
  --header "Authorization: Bearer YOUR_COMETAPI_KEY" \
  --header "Content-Type: application/json" \
  --data '{
    "model": "glm-4-5-airx",
    "messages": [
      {
        "role": "user",
        "content": "Write a short product description for a smart home device."
      }
    ]
  }'

Step 3: Retrieve and Verify Results

After sending the request, the API returns a structured JSON response containing the generated output, usage data, and other metadata. Parse the response on your server or client side, extract the assistant message content, and verify that the returned model field is glm-4-5-airx to confirm the correct model handled the request.
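The parsing-and-verification step above can be sketched as a small helper. The sample payload below is a made-up, minimal illustration of the standard chat-completions response shape, not a real API response:

```python
import json

def extract_reply(response_json: str, expected_model: str = "glm-4-5-airx") -> str:
    """Parse a chat-completions JSON response, confirm the model field
    matches the model we requested, and return the assistant content."""
    data = json.loads(response_json)
    if data.get("model") != expected_model:
        raise ValueError(f"unexpected model: {data.get('model')!r}")
    return data["choices"][0]["message"]["content"]

# Minimal made-up payload following the standard response structure:
sample = json.dumps({
    "model": "glm-4-5-airx",
    "choices": [{"message": {"role": "assistant", "content": "Hello!"}}],
    "usage": {"prompt_tokens": 12, "completion_tokens": 3},
})
print(extract_reply(sample))  # Hello!
```

Checking the `model` field on every response is a cheap safeguard against routing or configuration mistakes in multi-model setups.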

Features of glm-4.5-airx

Explore the core features of glm-4.5-airx, designed to improve performance and usability. Learn how these capabilities can benefit your projects and improve the user experience.

Pricing of glm-4.5-airx

Explore the competitive pricing of glm-4.5-airx, designed to fit a range of budgets and usage needs. Flexible plans ensure you pay only for what you actually use, letting you scale easily as your needs grow. Learn how glm-4.5-airx can improve your project results while keeping costs under control.
CometAPI Price (USD / M Tokens): Input $1.6/M, Output $6.4/M
Official Price (USD / M Tokens): Input $2/M, Output $8/M
Discount: -20%
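Using the CometAPI rates listed above, per-request cost is simple arithmetic over token counts. A minimal sketch (the token counts in the example are illustrative):

```python
# CometAPI's listed rates for glm-4.5-airx, in USD per million tokens.
INPUT_PRICE_PER_M = 1.6
OUTPUT_PRICE_PER_M = 6.4

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a request with 2,000 input tokens and 500 output tokens.
print(round(estimate_cost(2_000, 500), 6))  # 0.0064
```

Actual billing is based on the token counts reported in the `usage` field of each API response.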

Sample Code and API for glm-4.5-airx

Access complete sample code and API resources to streamline your glm-4.5-airx integration. Detailed documentation provides step-by-step guidance to help you get the most out of glm-4.5-airx in your projects.
