Mistral AI launches Magistral, its first open-source model focused on reasoning

French AI startup Mistral AI announced Magistral, its first family of reasoning-focused language models, marking Europe’s entry into the emerging space of models that generate responses through explicit multi-step “chain-of-thought” reasoning rather than purely pattern-based prediction. The launch underscores Mistral’s strategy to differentiate itself through open-source principles and a commitment to transparent, verifiable AI reasoning, bolstered by high-profile support from French President Emmanuel Macron.
Magistral is available in two variants:
- Magistral Small, a 24-billion-parameter model released under the Apache 2.0 license and freely downloadable via Hugging Face;
- Magistral Medium, a more powerful, enterprise-grade offering with enhanced inference capabilities, available through Mistral’s commercial API.
Both versions excel at domain-specific tasks—ranging from physics simulations to strategic planning—and are fine-tuned to deliver transparent, step-by-step reasoning that users can inspect and verify. Unlike many large language models that operate predominantly in English, Magistral supports reasoning in multiple major languages—including English, French, Spanish, Arabic, German, Italian, Russian, and Simplified Chinese—allowing queries to be processed in their native linguistic context for improved accuracy and cultural nuance.
Core Technologies and Architecture
Native Chain-of-Thought Support
Magistral is built from the ground up to support Chain-of-Thought (CoT) reasoning, enabling the automatic generation of clear and interpretable reasoning chains. This is essential for high-stakes domains where trust, explainability, and logical rigor are paramount.
- Reasoning-Oriented Design: The model is fine-tuned specifically for multi-step logical tasks.
- Inner Monologue Generation: Outputs include a detailed inner reasoning path, making each conclusion traceable.
- <think> Tag Formatting: Reasoning drafts are encapsulated in <think>...</think> blocks, cleanly separating thought processes from final summaries to enhance interpretability.
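To illustrate how a client might consume this format, here is a minimal sketch that splits a raw model response into its reasoning draft and final answer. The sample response string is invented for demonstration; only the <think>...</think> delimiter convention comes from the description above.

```python
import re

def split_reasoning(raw: str) -> tuple[str, str]:
    """Separate the <think>...</think> reasoning draft from the final answer."""
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if match is None:
        # No reasoning block present: treat the whole output as the answer.
        return "", raw.strip()
    reasoning = match.group(1).strip()
    # Everything outside the <think> block is the user-facing summary.
    answer = (raw[:match.start()] + raw[match.end():]).strip()
    return reasoning, answer

raw_output = "<think>17 is odd and divisible only by 1 and 17.</think>17 is prime."
reasoning, answer = split_reasoning(raw_output)
```

Keeping the split on the client side lets an application log or audit the reasoning trail while showing end users only the final summary.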
Enterprise-Exclusive Features: Flash Answers + Think Mode
Available via the Le Chat enterprise platform, Magistral Medium introduces:
- Flash Answers: Lightning-fast inference speeds—up to 10x faster than major competitors such as ChatGPT.
- Think Mode: Optimized for multi-turn conversations, enabling efficient back-and-forth reasoning.
Performance Highlights:
- Near-instantaneous output for structured reasoning tasks (e.g., decision trees, formal proofs, software planning)
- Maintains high accuracy and consistent logical coherence across sessions
Transparent & Auditable Reasoning
Every response generated by Magistral includes a clearly traceable reasoning trail, making it uniquely suited to regulated industries such as:
- Legal Services
- Financial Compliance
- Healthcare and Diagnostics
Availability & Access
- Magistral Small can be downloaded immediately from Hugging Face at the repository mistralai/Magistral-Small-2506.
- Magistral Medium is accessible via Mistral AI’s enterprise API, with pricing and deployment options detailed on Mistral’s website.
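As a sketch of what calling Magistral Medium over Mistral's chat-completions API could look like, the snippet below assembles a request payload. The model identifier "magistral-medium-latest" and the prompt are assumptions for illustration; consult Mistral's API documentation for the exact model name and endpoint details.

```python
import json

# Assumed model identifier -- verify against Mistral's model list.
MODEL = "magistral-medium-latest"

payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Plan a three-step rollout for a feature flag."}
    ],
    "max_tokens": 1024,
}

# To actually send the request (requires a Mistral API key):
# import requests
# resp = requests.post(
#     "https://api.mistral.ai/v1/chat/completions",
#     headers={"Authorization": f"Bearer {API_KEY}"},
#     json=payload,
# )

body = json.dumps(payload)  # serialized request body
```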
With Magistral, Mistral AI stakes Europe’s claim in the next wave of AI innovation—pivoting from sheer scale to sophisticated reasoning—and invites developers and enterprises worldwide to leverage models that think as transparently as humans do.
Conclusion
Mistral AI, valued at roughly $6.2 billion in its latest funding round, positions Magistral as a direct competitor to reasoning models from U.S. tech giants like OpenAI and Google, as well as emerging Chinese challengers such as DeepSeek and Alibaba. By championing open-source distribution and European-based development, the company aims to narrow the gap with better-funded rivals while fostering an ecosystem of transparency and collaboration.
Getting Started
CometAPI provides a unified REST interface that aggregates hundreds of AI models under a consistent endpoint, with built-in API-key management, usage quotas, and billing dashboards, so developers don’t have to juggle multiple vendor URLs and credentials.
Developers can access the Mistral API (model names: mistral-large-latest, mistral-medium-latest, mistral-small-latest) through CometAPI; the models listed are current as of this article’s publication date. To begin, explore the model’s capabilities in the Playground and consult the API guide for detailed instructions. Before accessing, make sure you have logged in to CometAPI and obtained an API key. CometAPI offers prices lower than the official rates to help you integrate.
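A minimal sketch of routing a request through such a unified interface is shown below. The base URL, header scheme, and placeholder key are assumptions made for illustration only; the actual endpoint and authentication details are in CometAPI's API guide.

```python
import json

# Illustrative values -- replace with the endpoint and key from CometAPI's docs.
BASE_URL = "https://example.cometapi.invalid/v1/chat/completions"
API_KEY = "sk-..."  # your CometAPI key

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
payload = {
    "model": "mistral-small-latest",  # one of the model names listed above
    "messages": [
        {"role": "user", "content": "Summarize chain-of-thought prompting."}
    ],
}

# To send the request:
# import requests
# resp = requests.post(BASE_URL, headers=headers, json=payload)

request_body = json.dumps(payload)
```

The appeal of a unified endpoint is that switching between models from different vendors only requires changing the "model" field, not the credentials or client code.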
The Magistral API integration will soon appear on CometAPI, so stay tuned! While we finalize the Magistral model upload, explore our other models, such as the Gemini 2.5 Pro Preview API and Claude Opus 4 API, on the Models page or try them in the AI Playground.