MiniMax-M2.7 Explained: Features, Benchmarks, Access & Price

CometAPI
Anna · Mar 19, 2026

MiniMax-M2.7 is the latest evolution of MiniMax’s M2-series large language models (LLMs), designed for high-efficiency reasoning, coding, and agentic workflows. Building on the success of M2 and M2.5, it introduces improvements in batch generation, cost efficiency, and scalable API deployment (e.g., via CometAPI). It targets enterprise AI use cases including automation, multi-step reasoning, and large-scale content generation.

What is MiniMax-M2.7?

A flagship model built for coding and agents

MiniMax-M2.7 is presented by MiniMax as a current flagship text model for demanding coding, agent, and productivity workflows.

MiniMax-M2.7 is the latest generation large language model (LLM) released by MiniMax in March 2026 as part of its M2 family. It is designed as a high-performance, cost-efficient, agent-oriented AI model that extends the capabilities of its predecessor, M2.5, while introducing improvements in reasoning, self-improvement loops, and real-world task execution.

M2.5 already demonstrated near state-of-the-art (SOTA) performance (achieving 80.2% on SWE-Bench Verified) while being dramatically cheaper than competitors, matching results from models like GPT, Gemini, and Claude at less than one-tenth of the cost.

M2.7 builds on this foundation, emphasizing:

  • Autonomous agent loops
  • Reduced iteration cost
  • Improved reasoning consistency
  • Enhanced production readiness

Self-evolving?

M2.7 was built with a development process that let it update its own memory, create skills in its harness, and improve its learning process based on experiment results. In plain English, the company is signaling that M2.7 was trained and optimized with a strong agentic loop in mind, not just a static chat benchmark recipe.

5 features of MiniMax-M2.7

Stronger software-engineering behavior

MiniMax-M2.7 is particularly strong at end-to-end project delivery, log analysis, bug troubleshooting, code security, and machine learning tasks. That makes the model relevant not just for code generation, but for the more annoying, time-consuming parts of engineering work: tracing failures, navigating large repositories, and stitching multiple steps together into a workable outcome. M2.7 maintains a 97% skill adherence rate while working with more than 40 complex skills, each of which exceeds 2,000 tokens, a detail that points to its intended use in long-horizon workflows.

A large context window for long tasks

The MiniMax-M2.7 model has a 204,800-token context window, which is a major practical feature for users handling long prompts, multi-file codebases, or extended agent sessions. The standard M2.7 model generates at roughly 60 tokens per second, and a “high-speed” variant at around 100 tokens per second. That combination matters because a large context window alone is not enough; users also need usable throughput if they want the model to stay responsive in a real workflow.
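To make the 204,800-token figure concrete, here is a minimal sketch of a pre-flight check that a long prompt fits the advertised window. The 4-characters-per-token ratio is a rough English-text heuristic, not MiniMax’s actual tokenizer, so treat the result as an estimate only.

```python
# Rough check that a prompt fits MiniMax-M2.7's advertised context window.
# The chars/4 ratio is a common heuristic, NOT the model's real tokenizer.
CONTEXT_WINDOW = 204_800

def fits_in_context(text: str, reserved_for_output: int = 4_096) -> bool:
    """Return True if the prompt (plus room for the reply) likely fits."""
    approx_tokens = len(text) // 4  # crude estimate; real counts vary
    return approx_tokens + reserved_for_output <= CONTEXT_WINDOW

print(fits_in_context("x" * 100_000))  # ~25k tokens -> True
```

In a real pipeline you would use the provider’s tokenizer or a `usage` field from a prior response instead of a character heuristic, but the budgeting logic is the same.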

Office editing and document work are part of the story too

MiniMax is also making a point that M2.7 is not only about coding. The company says the model has improved complex editing across Excel, PowerPoint, and Word, with better multi-round revisions and high-fidelity editing. It also reports a GDPval-AA ELO of 1495 and says this is the highest among open-source models. That is a strong claim, and it is best read as MiniMax’s own assessment of the model’s office-productivity competence rather than an industry-wide consensus, but it is still important because it broadens the release beyond software engineering.

Tool use and environment interaction are core design themes

MiniMax emphasizes that M2.7 can interact with complex environments and work with a large set of skills, which aligns with the company’s broader agent strategy. MiniMax describes M2.7 as having strong code understanding, multi-turn dialogue, and reasoning capabilities, and presents it as suitable for tool-rich environments rather than simple single-turn chat. In practical terms, this means the model is being sold as a controller or collaborator, not merely a text generator.
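A “controller” model spends most of its time requesting tool calls that your harness must execute. The sketch below shows one dispatch step in such a loop, using the OpenAI-compatible message shape that aggregators commonly expose; the tool name and its output here are purely illustrative, not part of any MiniMax API.

```python
# One dispatch step in a tool-use agent loop (OpenAI-compatible message
# shape). The "run_tests" tool and its result are illustrative stubs.
import json

TOOLS = {
    "run_tests": lambda path: f"ran tests in {path}: 12 passed",
}

def dispatch_tool_call(tool_call: dict) -> dict:
    """Execute one model-requested tool call and build the reply message."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    result = TOOLS[name](**args)
    # The tool result goes back to the model as a role="tool" message.
    return {"role": "tool", "tool_call_id": tool_call["id"], "content": result}

call = {"id": "call_1", "function": {"name": "run_tests",
                                     "arguments": json.dumps({"path": "tests/"})}}
print(dispatch_tool_call(call)["content"])  # ran tests in tests/: 12 passed
```

The harness appends the returned message to the conversation and calls the model again; that request/execute/append cycle is the agent loop the article keeps referring to.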

Self-Improvement Mechanisms

A key innovation in M2.7 is model self-improvement loops:

  • Iterative reasoning refinement
  • Feedback-based corrections
  • Reduced hallucination rates

This enables more reliable outputs in:

  • Coding
  • Research
  • Enterprise workflows
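The bullets above describe a generate/check/retry pattern. Here is a minimal, model-agnostic sketch of such a feedback-based correction loop; `generate` stands in for a real model call, and the checker and feedback wording are assumptions for illustration.

```python
# Generic feedback-based correction loop: generate a draft, check it,
# feed the error back into the prompt, and retry. `generate` is a stub
# for a real model call.
def refine(generate, check, prompt: str, max_rounds: int = 3) -> str:
    feedback = ""
    draft = ""
    for _ in range(max_rounds):
        draft = generate(prompt + feedback)
        ok, error = check(draft)
        if ok:
            return draft
        feedback = f"\nPrevious attempt failed: {error}. Please fix it."
    return draft  # best effort after max_rounds

# Stub model that fails once, then succeeds after seeing feedback:
attempts = []
def generate(p):
    attempts.append(p)
    return "bad" if len(attempts) == 1 else "good"

def check(d):
    return (d == "good", "output was not 'good'")

print(refine(generate, check, "task"))  # good
```

The same skeleton works with a compiler, a test suite, or a linter as the `check` step, which is where the claimed hallucination reduction pays off in coding workflows.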

Access and Price of Minimax-M2.7

MiniMax-M2.7 is available through MiniMax’s own Open Platform and is also listed on CometAPI, so there are two straightforward access paths depending on whether you want to work directly with MiniMax or through an API aggregator. MiniMax’s docs say M2.7 can be used with billing options such as Token Plan and Pay-As-You-Go, and they specifically recommend using M2.7 in coding-tool workflows like Claude Code.

One of MiniMax’s most disruptive advantages is pricing: compared to leading frontier models, it can be 10×–20× cheaper. M2.7 continues this trend, making it:

  • Ideal for large-scale deployment
  • Suitable for long-running agents
  • Accessible for startups and enterprises

On CometAPI, the MiniMax M2.7 API price is 20% below the official rate:

|        | Comet Price (USD / M Tokens) | Official Price (USD / M Tokens) | Discount |
|--------|------------------------------|---------------------------------|----------|
| Input  | $0.24/M                      | $0.30/M                         | -20%     |
| Output | $0.96/M                      | $1.20/M                         | -20%     |
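The per-million-token prices above make monthly bills easy to estimate. A quick sketch using CometAPI’s discounted rates (the token volumes are a hypothetical example):

```python
# Estimate a bill from CometAPI's discounted M2.7 per-million-token rates.
INPUT_PRICE = 0.24   # USD per 1M input tokens
OUTPUT_PRICE = 0.96  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Total USD cost for the given token counts."""
    return (input_tokens / 1e6) * INPUT_PRICE + (output_tokens / 1e6) * OUTPUT_PRICE

# Example month: 50M input tokens + 10M output tokens.
print(round(estimate_cost(50_000_000, 10_000_000), 2))  # 21.6
```

At the official rates ($0.30 / $1.20) the same workload would cost $27.00, which is where the 20% discount shows up in practice.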

So the practical takeaway is simple: if you want the most direct official route, use MiniMax’s Open Platform; if you want a cheaper third-party access layer, CometAPI currently advertises lower per-token pricing for M2.7.
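For the CometAPI route, a request typically follows the OpenAI-compatible chat-completions shape. The base URL and model id below are assumptions for illustration; check CometAPI’s documentation for the exact values before use.

```python
# Sketch of calling MiniMax-M2.7 through an OpenAI-compatible endpoint.
# BASE_URL and MODEL_ID are illustrative -- verify them in CometAPI's docs.
import json
import os
import urllib.request

BASE_URL = "https://api.cometapi.com/v1"  # assumed, check the docs
MODEL_ID = "minimax-m2.7"                 # assumed, check the docs

def build_request(prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completions payload."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt: str) -> str:
    """Send the prompt and return the model's reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['COMETAPI_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

print(build_request("Hello")["model"])
```

The same payload works against MiniMax’s own Open Platform by swapping the base URL and key, which is what makes the two access paths interchangeable from application code.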

Conclusion

MiniMax-M2.7 looks like a serious step in the company’s agent-model roadmap, emphasizing software engineering, office productivity, complex environment interaction, and a self-improvement-flavored training story. The benchmark claims are strong enough to merit attention, and the independent Kilo test suggests the model can hold its own in real coding-agent scenarios. For developers, the most sensible way to think about M2.7 is as a deep-reading, tool-capable model that rewards clear instructions, structured workflows, and careful cost management.

Developers can access MiniMax-M2.7 via CometAPI now (CometAPI offers a price far lower than the official rate to help you integrate). Before accessing, please make sure you have logged in to CometAPI and obtained an API key. Ready to go?

Access Top Models at Low Cost

Read More