
Qwen3.6-Plus

Input:$0.32/M
Output:$1.92/M
Qwen 3.6-Plus is now available, featuring enhanced code development capabilities and improved efficiency in multimodal recognition and inference, making the Vibe Coding experience even better.
New · Commercial Use

Technical Specifications of Qwen3.6-Plus

The model is engineered for long-context, high-throughput agentic workloads.

Context Length: 1,000,000 tokens (1M) by default
Max Output Tokens: 65,536 tokens
Input Modalities: Text, Image, Video
Output: Text (with multimodal reasoning and tool execution)
Architecture: Hybrid (linear attention + sparse MoE routing)
Key Capabilities: Always-on chain-of-thought reasoning; native tool calling; long-horizon planning; visual agents
API Compatibility: OpenAI and Anthropic protocols
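
As a quick illustration of the output cap above, here is a minimal sketch (assuming the `qwen3.6-plus` model identifier and a standard OpenAI-style payload) that clamps `max_tokens` to the documented 65,536 limit before sending a request:

```python
# Sketch: shaping an OpenAI-style request for Qwen3.6-Plus while
# respecting the model's documented 65,536-token output cap.
MAX_OUTPUT_TOKENS = 65_536  # per the specification above

def build_request(prompt: str, requested_max: int) -> dict:
    """Build a chat-completions payload, clamping max_tokens to the model cap."""
    return {
        "model": "qwen3.6-plus",  # identifier assumed from the catalog
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": min(requested_max, MAX_OUTPUT_TOKENS),
    }

payload = build_request("Summarize this repository.", 100_000)
print(payload["max_tokens"])  # clamped to 65536
```

The same payload shape can be passed directly to any OpenAI-compatible client.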

What Is Qwen3.6-Plus?

Qwen3.6-Plus is the latest proprietary multimodal large language model in Alibaba’s Qwen3 series. Unlike earlier open-weight variants in the family, this “Plus” tier is a hosted-only flagship optimized for production-grade agentic performance. It excels at bridging perception, long-term memory, and precise tool execution in a single workflow—hallmarks of true agentic AI. Built on a next-generation hybrid architecture (efficient linear attention combined with sparse mixture-of-experts routing), it scales efficiently while maintaining frontier-level capabilities in coding, planning, and multimodal understanding.

Main Features of Qwen3.6-Plus

  • Agentic Coding Excellence: Handles everything from one-prompt full-stack applications to repository-level debugging and frontend development (including 3D scenes and games), with seamless integration into tools like OpenClaw, Qwen Code, and terminal environments.
  • Advanced Multimodal Perception: Sharper understanding of images, documents, charts, UI elements, and video, enabling visual agents for screen navigation, OCR, and temporal reasoning.
  • Long-Horizon Planning & Tool Use: Reliable multi-step execution, memory retention, and adaptive decision-making.
  • Stability & “Vibe Coding”: Refined response to community feedback for consistent, production-ready performance.
  • Multilingual & Cross-Domain Strength: Competitive across 200+ languages and diverse domains (STEM, legal, finance, healthcare).
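
To make the tool-use capability concrete, here is a minimal sketch of an OpenAI-style tool schema together with a local dispatcher; the `read_file` tool, its handler, and the simulated tool call are all hypothetical illustrations, not part of the CometAPI or Qwen documentation:

```python
# Sketch: an OpenAI-style tool definition and a local dispatcher for
# model-issued tool calls. The read_file tool is hypothetical.
import json

tools = [{
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a file from the workspace",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

def dispatch(tool_call: dict) -> str:
    """Route a tool call (as returned in the API response) to a local handler."""
    fn = tool_call["function"]
    args = json.loads(fn["arguments"])
    if fn["name"] == "read_file":
        return f"<contents of {args['path']}>"  # mocked handler
    raise ValueError(f"unknown tool: {fn['name']}")

# Simulated tool call in the shape an OpenAI-compatible API returns it:
call = {"function": {"name": "read_file", "arguments": '{"path": "app.py"}'}}
print(dispatch(call))
```

In a real agent loop, `tools` would be passed to `chat.completions.create(...)` and `dispatch` would run on each tool call the model emits.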

Benchmark Performance of Qwen3.6-Plus

Qwen3.6-Plus delivers state-of-the-art or near-state-of-the-art results across agentic, coding, reasoning, and multimodal benchmarks. Selected highlights (official Qwen evaluations, temperature=1.0, appropriate context windows):

Agentic & Coding Benchmarks

  • SWE-bench Verified: 78.8 (Claude Opus 4.5: 80.9; competitive with GPT-5.x variants)
  • Terminal-Bench 2.0: 61.6 (outperforms Claude Opus 4.5: 59.3)
  • QwenClawBench: 57.2 (Claude: 52.3)
  • SWE-bench Multilingual: 73.8

Reasoning & Knowledge

  • GPQA: 90.4 (Claude Opus 4.5: 87.0)
  • LiveCodeBench v6: 87.1 (Claude: 84.8)
  • MMLU-Pro: 88.5

Multimodal & Vision

  • OmniDocBench1.5: 91.2 (GPT-5.2: 85.7)
  • VideoMME (with subtitles): 87.8 (GPT-5.2: 86.0)
  • MMMU: 86.0
  • RealWorldQA: 85.4

The model leads in practical agentic scenarios (terminal execution, long planning) while remaining highly competitive in pure reasoning and vision tasks. It frequently matches or exceeds frontier models like Claude Opus 4.5/4.6, GPT-5.x, Gemini 3 Pro, and Kimi K2.5 in targeted evaluations.

How to Access Qwen3.6-Plus via CometAPI

CometAPI is a developer-centric AI gateway that unifies access to over 500 models—including the full Qwen series—through a single OpenAI-compatible API endpoint. To use Qwen3.6-Plus:

  1. Sign up at cometapi.com.
  2. Obtain your API key from the dashboard.
  3. Use the standard OpenAI client (or any compatible SDK) with the model identifier for Qwen3.6-Plus (typically qwen3.6-plus or equivalent in their catalog).

CometAPI supports text, image, and video inputs, tool calling, and full context up to 1M tokens.
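
For image inputs, a hedged sketch of what a request message might look like, assuming the standard OpenAI content-parts format that CometAPI's endpoint mirrors; the helper name and example URL are illustrative:

```python
# Sketch: building an OpenAI-style multimodal message (text + image)
# for an endpoint that accepts image inputs, as CometAPI's does.
def image_message(text: str, image_url: str) -> dict:
    """Combine a text prompt and an image URL into one user message."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = image_message("What does this chart show?",
                    "https://example.com/chart.png")
print(msg["content"][1]["type"])  # image_url
```

The resulting message goes into the `messages` list exactly like a plain-text one.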

Why Choose CometAPI to Use Qwen3.6-Plus?

CometAPI stands out for high-volume, production use of frontier models like Qwen3.6-Plus because it offers:

  • Unified Access: One API key and endpoint for 500+ models (OpenAI, Anthropic, Google, Qwen, DeepSeek, etc.)—eliminate vendor sprawl and simplify billing.
  • Competitive Pricing: Significantly lower effective rates than direct Alibaba Cloud access (via CometAPI, Qwen3.6-Plus input is typically ~$0.32 per million tokens and output ~$1.92 per million tokens). Pay-as-you-go with no minimums.
  • Smart Routing & Reliability: Global infrastructure, automatic fallback, and optimization for speed/latency.
  • Developer Experience: Lightweight SDKs, interactive playground, usage analytics, privacy-first (no data retention), and enterprise-grade security.
  • Cost Efficiency at Scale: Ideal for agentic workflows that consume large contexts; bulk purchasing power translates to substantial savings versus direct provider pricing.

Whether you are building autonomous coding agents, enterprise automation platforms, or next-generation multimodal applications, Qwen3.6-Plus via CometAPI delivers frontier performance with the simplicity and economics required for real-world deployment. The model’s rapid adoption and benchmark leadership signal that agentic AI has moved from experimental to production-ready—and CometAPI makes it immediately accessible to every developer.

FAQ

What is the context window of Qwen3.6-Plus API?

Qwen3.6-Plus supports a 1,000,000 token context window, enabling repository-scale reasoning and long-document analysis.
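
For a rough sense of what fits in that window, here is a back-of-the-envelope sketch using the common (and only approximate) four-characters-per-token heuristic; the output reserve comes from the model's 65,536-token cap, and exact token counts depend on the tokenizer:

```python
# Rough sketch: estimating whether a codebase fits in the 1M-token window.
# The 4-chars-per-token ratio is a rule of thumb, not an exact tokenizer count.
CONTEXT_TOKENS = 1_000_000
CHARS_PER_TOKEN = 4

def fits_in_context(total_chars: int, reserve_for_output: int = 65_536) -> bool:
    """Estimate whether input of total_chars fits, leaving room for output."""
    est_input_tokens = total_chars // CHARS_PER_TOKEN
    return est_input_tokens + reserve_for_output <= CONTEXT_TOKENS

print(fits_in_context(3_000_000))  # ~750k tokens + reserve -> True
print(fits_in_context(4_000_000))  # ~1M tokens + reserve -> False
```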

Is Qwen3.6-Plus optimized for coding agents?

Yes, Qwen3.6-Plus is designed for agentic coding and achieves 78.8 on SWE-bench Verified.

Does Qwen3.6-Plus support multimodal inputs?

Yes, Qwen3.6-Plus accepts text, image, and video inputs, including documents, charts, and UI screenshots.

How does Qwen3.6-Plus compare to Qwen3.5-Plus?

Qwen3.6-Plus introduces a 1M token context window and improved reasoning.

Does Qwen3.6-Plus support function calling and tools?

Yes, Qwen3.6-Plus includes native function calling.

What are the best use cases for Qwen3.6-Plus?

Coding agents, long document reasoning, and automation workflows.

Is Qwen3.6-Plus suitable for long-context applications?

Yes, it supports 1M token context.

What makes Qwen3.6-Plus different from GPT or Claude models?

It focuses on agentic automation and long-context reasoning.

Features for Qwen3.6-Plus

Explore the key features of Qwen3.6-Plus, designed to enhance performance and usability. Discover how these capabilities can benefit your projects and improve user experience.

Pricing for Qwen3.6-Plus

Explore competitive pricing for Qwen3.6-Plus, designed to fit various budgets and usage needs. Our flexible plans ensure you only pay for what you use, making it easy to scale as your requirements grow. Discover how Qwen3.6-Plus can enhance your projects while keeping costs manageable.
Price (USD / M Tokens)   Comet Price   Official Price   Discount
Input                    $0.32/M       $0.4/M           -20%
Output                   $1.92/M       $2.4/M           -20%
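
To translate these per-million-token rates into per-request costs, a small sketch using the CometAPI prices listed above:

```python
# Sketch: estimating a single request's cost at the CometAPI rates above.
INPUT_PER_M = 0.32   # USD per million input tokens
OUTPUT_PER_M = 1.92  # USD per million output tokens

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Cost of one request in USD, given token counts in each direction."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# Example: a 200k-token input (long document) with an 8k-token response.
print(round(cost_usd(200_000, 8_000), 4))  # 0.0794
```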

Sample code and API for Qwen3.6-Plus

Access comprehensive sample code and API resources for Qwen3.6-Plus to streamline your integration process. Our detailed documentation provides step-by-step guidance, helping you leverage the full potential of Qwen3.6-Plus in your projects.
POST
/v1/chat/completions

Python Code Example

from openai import OpenAI
import os

# Get your CometAPI key from https://www.cometapi.com/console/token
COMETAPI_KEY = os.environ.get("COMETAPI_KEY") or "<YOUR_COMETAPI_KEY>"
BASE_URL = "https://api.cometapi.com/v1"

client = OpenAI(base_url=BASE_URL, api_key=COMETAPI_KEY)

completion = client.chat.completions.create(
    model="qwen3.6-plus",
    messages=[{"role": "user", "content": "Hello! Tell me a short joke."}],
)

print(completion.choices[0].message.content)

JavaScript Code Example

import OpenAI from "openai";

// Get your CometAPI key from https://www.cometapi.com/console/token
const COMETAPI_KEY = process.env.COMETAPI_KEY || "<YOUR_COMETAPI_KEY>";
const BASE_URL = "https://api.cometapi.com/v1";

const client = new OpenAI({
  apiKey: COMETAPI_KEY,
  baseURL: BASE_URL,
});

const completion = await client.chat.completions.create({
  model: "qwen3.6-plus",
  messages: [{ role: "user", content: "Hello! Tell me a short joke." }],
});

console.log(completion.choices[0].message.content);

Curl Code Example

#!/bin/bash

# Get your CometAPI key from https://www.cometapi.com/console/token
# Export it as: export COMETAPI_KEY="your-key-here"

response=$(curl -s https://api.cometapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_KEY" \
  -d '{
    "model": "qwen3.6-plus",
    "messages": [
      {
        "role": "user",
        "content": "Hello! Tell me a short joke."
      }
    ]
  }')

printf '%s\n' "$response" | python -c 'import json, sys; print(json.load(sys.stdin)["choices"][0]["message"]["content"])'

More Models


Claude Opus 4.6

Input:$4/M
Output:$20/M
Claude Opus 4.6 is Anthropic’s “Opus”-class large language model, released February 2026. It is positioned as a workhorse for knowledge-work and research workflows — improving long-context reasoning, multi-step planning, tool use (including agentic software workflows), and computer-use tasks such as automated slide and spreadsheet generation.

Claude Sonnet 4.6

Input:$2.4/M
Output:$12/M
Claude Sonnet 4.6 is our most capable Sonnet model yet. It’s a full upgrade of the model’s skills across coding, computer use, long-context reasoning, agent planning, knowledge work, and design. Sonnet 4.6 also features a 1M token context window in beta.

GPT-5.4 nano

Input:$0.16/M
Output:$1/M
GPT-5.4 nano is designed for tasks where speed and cost matter most like classification, data extraction, ranking, and sub-agents.

GPT-5.4 mini

Input:$0.6/M
Output:$3.6/M
GPT-5.4 mini brings the strengths of GPT-5.4 to a faster, more efficient model designed for high-volume workloads.
Claude Opus 4.7

Input:$4/M
Output:$20/M
The most intelligent model for agents and coding.
Kimi K2.6

Input:$0.48/M
Output:$2.4/M
Kimi K2.6 preview version is now available for testing.