On June 17, Shanghai AI unicorn MiniMax officially open-sourced MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model. By combining a Mixture-of-Experts (MoE) architecture with its Lightning Attention mechanism, MiniMax-M1 delivers major gains in inference speed, ultra-long-context handling, and complex-task performance.

Background and Evolution

Building upon the foundation of MiniMax-Text-01, which introduced […]