How Much Does Cursor Composer Cost?
Cursor Composer is a new, frontier-grade coding model released as part of Cursor 2.0 that delivers fast, agentic code generation for complex, multi-file workflows. Access to Composer is governed by Cursor’s existing tiered subscriptions plus token-based usage once you exhaust your plan allowance or use Cursor’s “Auto” routing, so costs are a mix of a fixed subscription fee and metered token charges. Below you’ll find a full, practical breakdown (features, advantages, pricing mechanics, worked examples and competitor comparisons) so you can estimate real-world costs and decide whether Composer is worth it for your team.
What is Cursor Composer?
Composer is Cursor’s new “frontier model” introduced as part of Cursor 2.0. It was built and tuned specifically for software engineering workflows and agentic (multi-step) coding tasks. According to Cursor’s announcement, Composer delivers frontier-level coding performance while being optimized for low latency and fast iteration — Cursor says most conversational turns complete in under 30 seconds in practice and claims generation throughput roughly four times that of similarly capable models in their internal benchmarks. Composer was trained with codebase-wide search and tool access so it can reason about, and perform edits across, large projects.
Where Composer sits inside Cursor’s product
Composer is not a separate “app” you buy on its own; it’s offered as a model option inside the Cursor product (desktop & web) and is routable through Cursor’s model router (Auto). You get model-level access depending on which Cursor subscription you have and whether you pay metered usage fees beyond your plan’s allowance. Cursor’s model docs list Composer among the available models and the company provides both subscription tiers and token-metering for model usage.
Cursor’s mid-2025 changes to usage pools and credit systems illustrate this trend: rather than truly unlimited use of premium models, Cursor provides plan allowances (and Auto choices), then bills extra usage at API/token rates.
Key features and advantages of Composer
Composer is aimed at developer productivity for nontrivial engineering tasks. The main selling points:
- Agentic code reasoning: Composer supports multi-step workflows (e.g., understanding a bug, searching a repo, editing multiple files, running tests and iterating). This makes it better suited than single-shot completions for complex engineering work.
- Speed / low latency: Cursor reports Composer is significantly faster in generation throughput compared to comparable models and that typical interactive turns finish quickly, enabling faster iteration loops.
- Tight codebase integration: Composer was trained with access to Cursor’s retrieval and editing toolset as well as codebase indexing, which improves its ability to work with large repositories and maintain context across files.
- Agent modes & tools: Composer is designed to work with Cursor’s agent modes and the Model Context Protocol (MCP), letting it call specialized tools, read indexed code, and avoid repeatedly re-explaining the project structure. That reduces repetitive token usage in many workflows.
Why that matters: for teams doing deep code edits and multi-file refactors, Composer can reduce manual iteration and context switching. But because it is agentic and performs more compute work per request, per-request token usage tends to be higher than with simple completion models, which drives the metered-cost discussion below.
How was Composer built?
Architecture and training approach
Composer is described as an MoE model fine-tuned with reinforcement learning and a custom, large-scale training pipeline. Key elements highlighted by Cursor:
- Mixture-of-experts (MoE) design to scale capacity efficiently for long-context code tasks.
- Reinforcement learning (RL) with reward signals tuned to agentic behaviors useful in software engineering: plan writing, using search, editing code, writing tests, and maximizing parallel tool use.
- Tool-aware training: during training Composer had access to a set of tools (file read/write, semantic search, terminal, grep) so it learned to call tools when appropriate and integrate the outputs.
- Custom infra: Cursor built PyTorch + Ray based pipelines, MXFP8 MoE kernels, and large VM clusters to enable asynchronous, tool-enabled RL at scale. The infra choices (low-precision training, expert parallelism) are intended to reduce communication costs and keep inference latency low.
Why MoE + RL matters for code
Code editing requires precise, multi-step reasoning over large repositories. MoE gives the model large capacity that is activated selectively (only a subset of experts runs for any given token), while RL optimizes for useful behaviors (don’t hallucinate, run tests, propose minimal diffs). Training with the agent toolset also means Composer was not fine-tuned purely on next-token prediction; it learned to use the tooling available in Cursor’s product setting. That’s why Cursor positions Composer as an “agentic” model rather than just a completion model.
How are Cursor subscription plans priced for Composer?
Cursor’s pricing combines subscription tiers (monthly plans) with usage-based charges (tokens, cache, and certain agent/tool fees). The subscription tiers give you base capabilities and included, prioritized usage; the heavy or premium-model usage is then billed on top. Below are the public list prices and the high-level meaning of each plan.
Individual (personal) tiers
- Hobby (Free): entry-level, limited agent requests / tab completions; includes a short Pro trial. Good for light experimentation.
- Pro — $20 / month (individual): everything in Hobby plus extended agent usage, unlimited tab completions, background agents, and maximum context windows. This is the common starting point for individual developers who want Composer-level features.
- Pro+ — $60 / month (individual, recommended for power users): more included usage on premium models. Cursor’s June 2025 pricing rollout clarified that Pro plans include a pool of model credits (for “frontier model” usage) and that additional usage can be purchased at cost-plus rates or via token billing.
- Ultra — $200 / month: for heavy individuals needing substantially larger included model usage and priority access.
Team / Enterprise
- Teams — $40 / user / month: adds centralized billing, usage analytics, role-based controls and SSO.
- Enterprise (custom pricing): pooled usage, invoice/PO billing, SCIM, audit logs and priority support for larger teams.
Token-Based Pricing for Cursor Composer
Cursor mixes per-user plans with per-token billing for premium or agentic requests. There are two related but distinct billing contexts to understand:
- Auto / Max mode token rates (Cursor’s “Auto” dynamic selection or Max/Auto billing buckets).
- Model-list / direct model pricing (if you select a model like Composer directly, the model list APIs have per-model token rates).
These different modes change the effective input/output token rates you’ll see on your bill. Below are the canonical figures Cursor publishes in its documentation and model pages — these are the most load-bearing numbers for cost calculations.
Auto / Max
When you go beyond plan allowances (or explicitly use Auto to route to premium models), Cursor charges for model usage on a per-token basis. The most commonly referenced rates for Cursor’s Auto router (which picks a premium model on demand) are:
- Input + Cache Write: $1.25 per 1,000,000 tokens
- Output (generation): $6.00 per 1,000,000 tokens
- Cache Read: $0.25 per 1,000,000 tokens
Those rates were documented in Cursor’s account/pricing docs describing Auto billing and are the backbone of Composer’s operating cost when Composer usage is billed via Auto or when you directly select model usage charged at API rates.
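To make the metering concrete, here is a minimal back-of-the-envelope sketch (not Cursor’s own billing code) that applies those Auto rates to a single agentic request. All token counts in the example are hypothetical placeholders; substitute figures from your own usage dashboard.

```python
# A rough estimator using the Auto rates listed above (not Cursor's billing code).
# Token counts in the example call are hypothetical.

AUTO_RATES_USD_PER_MTOK = {
    "input_and_cache_write": 1.25,
    "output": 6.00,
    "cache_read": 0.25,
}

def auto_request_cost(input_tokens: int, output_tokens: int, cache_read_tokens: int = 0) -> float:
    """Estimate the USD cost of a single Auto-billed request."""
    return (
        input_tokens / 1_000_000 * AUTO_RATES_USD_PER_MTOK["input_and_cache_write"]
        + output_tokens / 1_000_000 * AUTO_RATES_USD_PER_MTOK["output"]
        + cache_read_tokens / 1_000_000 * AUTO_RATES_USD_PER_MTOK["cache_read"]
    )

# Hypothetical agentic turn: 120k tokens of prompt/context written, 300k tokens
# read back from cache, 25k tokens of generated diffs and explanations.
print(f"${auto_request_cost(120_000, 25_000, cache_read_tokens=300_000):.3f}")  # $0.375
```

In other words, even a fairly heavy single request typically lands in the tens of cents; the monthly bill is driven by how many of these requests your agents run.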
Composer and model-list prices
Cursor’s model list / model-pricing reference shows per-model pricing entries. Composer’s model-list rates are Input $1.25 / 1M tokens and Output $10.00 / 1M tokens. In practice, this means that if you explicitly choose Composer as the model rather than letting Auto route the request, the output token rate you incur can be higher than Auto’s $6.00 / 1M output rate.
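As a quick illustration of that gap, the hedged sketch below prices the same hypothetical output-heavy request under both rate tables. The rates are the published figures quoted above; the token counts are invented.

```python
# Hypothetical output-heavy request priced under Auto ($6.00 / 1M output) versus
# Composer selected directly ($10.00 / 1M output, per the model list).
# The input rate is $1.25 / 1M in both cases; token counts are invented.

def request_cost(input_tokens: int, output_tokens: int, input_rate: float, output_rate: float) -> float:
    return input_tokens / 1e6 * input_rate + output_tokens / 1e6 * output_rate

input_toks, output_toks = 80_000, 40_000  # e.g. a large multi-file refactor turn

print(f"Auto routing:    ${request_cost(input_toks, output_toks, 1.25, 6.00):.3f}")   # $0.340
print(f"Composer direct: ${request_cost(input_toks, output_toks, 1.25, 10.00):.3f}")  # $0.500
```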
Why input vs output tokens differ
- Input tokens are the tokens you send (prompts, instructions, code snippets, file context). Cursor charges for writing those into the system (and occasionally caching them).
- Output tokens are what the model generates (the code edits, suggestions, diffs, etc.). Output generation is more expensive because it consumes more compute. Cursor’s published numbers reflect those relative costs.
Comparing Cursor Composer with competitors
When judging cost and value, it’s useful to compare Composer’s unit economics to other widely-used developer AI services. Note that model capabilities, latency, integration, and included plan allowances also matter — price alone isn’t the whole story.
GitHub Copilot (individual tiers)
GitHub Copilot is primarily priced per user with tiers (Free, Pro at ~$10/month, Pro+ and Business tiers higher). Copilot provides a number of “premium” requests per month and charges for additional premium requests (published per-request add-ons). Copilot bundles models (including Google/Anthropic/OpenAI options in some plans) and is sold as a per-developer SaaS. For many individual devs, Copilot’s all-in per-seat price can be simpler and cheaper for routine completions; for heavy multi-step agentic tasks, a token-metered model may be more transparent.
OpenAI (API / advanced models)
OpenAI’s higher-end models (GPT-5 series and premium variants) have different per-token economics that can be higher than Cursor’s Composer rate for certain pro models. OpenAI also provides many performance tiers (and batch or cached discounts) that affect effective costs. If comparing, consider latency, accuracy on coding tasks, and the value of Cursor’s editor integration (which may offset a per-token cost delta).
Which is cheaper in practice?
- Small, frequent completions / autocompletes: A per-seat SaaS (Copilot) is often cheapest and simplest.
- Large multi-file, agentic tasks: Token-metered models (Composer via Cursor Auto, or Anthropic/OpenAI directly) give more flexibility and quality but cost more per heavy request; careful modeling of token use is essential, as the rough sketch below illustrates.
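For a rough sense of where the break-even point sits, here is a deliberately simple comparison of a flat per-seat plan (e.g. Copilot Pro at roughly $10/month, per the tiers above) against token-metered usage at the Auto rates quoted earlier. The request volumes and token counts are invented purely for illustration.

```python
# A rough monthly comparison: flat per-seat plan vs. token-metered usage at
# Cursor's Auto rates. Request volumes and sizes are invented for illustration.

AUTO_INPUT_RATE = 1.25   # $ per 1M input tokens
AUTO_OUTPUT_RATE = 6.00  # $ per 1M output tokens

def metered_monthly_cost(requests: int, input_tokens: int, output_tokens: int) -> float:
    per_request = input_tokens / 1e6 * AUTO_INPUT_RATE + output_tokens / 1e6 * AUTO_OUTPUT_RATE
    return requests * per_request

light = metered_monthly_cost(requests=200, input_tokens=10_000, output_tokens=2_000)
heavy = metered_monthly_cost(requests=400, input_tokens=150_000, output_tokens=30_000)

print(f"Light usage:         ${light:.2f}/month")  # ~$4.90, below a $10 seat
print(f"Heavy agentic usage: ${heavy:.2f}/month")  # ~$147.00, far above a flat seat
```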
Conclusion — Is Composer “expensive”?
Composer is not billed as a single flat line-item — it’s part of a hybrid system. For light-to-moderate interactive use, a $20/month Pro plan plus Auto-mode usage may keep your costs low (tens of dollars a month). For heavy, parallel agent workloads with many long outputs, Composer can drive hundreds or thousands per month because output-token rates and concurrency multiply costs. Compared to subscription-first competitors (e.g., GitHub Copilot), Cursor’s Composer trades a higher marginal inference cost for much faster, agentic, repository-aware capabilities.
If your goals are multi-agent automation, repo-wide refactors, or shorter iteration cycles that save engineering time, Composer’s speed and tooling can deliver strong ROI.
How do I use CometAPI inside Cursor? (step-by-step)
Short summary: CometAPI is a model-aggregation gateway (single endpoint that can proxy many model vendors). To use it in Cursor you register at CometAPI, get an API key and model identifier, then add that key + endpoint into Cursor’s Models settings as a custom provider (override base URL) and select the CometAPI model in Composer/Agent mode.
CometAPI has also designed proprietary coding models based on Claude specifically for Cursor, such as cometapi-sonnet-4-5-20250929-thinking and cometapi-opus-4-1-20250805-thinking.
Step A — Get your CometAPI credentials
- Sign up at CometAPI and create an API key from their dashboard. Keep the key secret (treat it like any bearer token).
- Create / copy an API key and note the model name/ID you want to use (e.g., claude-sonnet-4.5 or another vendor model available via CometAPI). CometAPI docs/guides describe the process and list supported model names.
Step B — Add CometAPI as a custom model/provider in Cursor
- Open Cursor → Settings → Models (or Settings → API Keys).
- If Cursor shows an “Add Custom Model” or “Override OpenAI Base URL” option, use it:
  - Base URL / Endpoint: paste the CometAPI OpenAI-compatible base URL (CometAPI documents whether it exposes an openai/v1-style endpoint or a provider-specific endpoint). Example: https://api.cometapi.com/v1 (use the actual URL from the CometAPI docs).
  - API Key: paste your CometAPI key in the API key field.
  - Model name: add the model identifier exactly as CometAPI documents it (e.g., claude-sonnet-4.5 or a Composer-like model).
- Verify the connection if Cursor offers a “Verify” / “Test” button. Cursor’s custom model mechanism commonly requires the provider to be OpenAI-compatible (or for Cursor to accept a base URL + key). Community guides show the same pattern (override base URL → provide key → verify). A quick external sanity check is sketched after this list.
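If you want to confirm the endpoint works before wiring it into Cursor, a minimal sketch using the OpenAI Python SDK is shown below. It assumes CometAPI’s endpoint is OpenAI-compatible; the base URL and model ID are the placeholder values from the steps above, so confirm both against CometAPI’s own documentation.

```python
# Quick sanity check outside Cursor, assuming CometAPI exposes an OpenAI-compatible
# /v1 endpoint. Base URL and model ID below are placeholders from the steps above;
# confirm both against CometAPI's documentation.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cometapi.com/v1",  # replace with the URL from the CometAPI docs
    api_key="YOUR_COMETAPI_KEY",             # the key created in Step A
)

response = client.chat.completions.create(
    model="claude-sonnet-4.5",  # or another model ID CometAPI lists
    messages=[{"role": "user", "content": "Reply with 'ok' if you can read this."}],
    max_tokens=10,
)
print(response.choices[0].message.content)
# If this returns a short reply, the same base URL + key should also work in
# Cursor's custom model settings.
```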
If you want more tips, guides and news on AI, follow us on VK, X and Discord!
See also Cursor 2.0 and Composer: how a multi-agent rethink surprised AI coding



