OpenAI Unveils o3 and o4-mini: Pioneering AI Models Elevate Reasoning Capabilities
April 17, 2025: OpenAI introduced two new AI models, o3 and o4-mini, on Wednesday, marking a significant advance in artificial intelligence reasoning. The models are designed to improve performance on complex tasks by integrating visual comprehension with advanced problem-solving skills.

o3: Advancing Towards Human-Level Reasoning
The o3 model stands as OpenAI’s most sophisticated reasoning system to date. It has demonstrated exceptional performance across various benchmarks:
- Mathematics: Achieved a 96.7% score on the AIME 2024 exam, missing only one question.
- Scientific Reasoning: Scored 87.7% on the GPQA Diamond benchmark, tackling graduate-level science problems.
- Software Engineering: Attained a 71.7% accuracy on the SWE-Bench Verified coding tests.
- General Intelligence: Surpassed the human-like threshold on the ARC-AGI benchmark with an 87.5% score under high-compute settings.
These achievements position o3 as a significant step toward Artificial General Intelligence (AGI), showcasing its ability to adapt to novel tasks beyond memorized patterns.
o4-mini: Efficient and Versatile
The o4-mini model offers a more compact and cost-effective alternative without compromising performance. It excels in tasks such as mathematics, coding, and visual analysis, making it suitable for a wide range of applications.
Innovations in Visual Reasoning and Enhanced Tool Autonomy
Both o3 and o4-mini introduce the capability to reason with visual inputs, including images, sketches, and whiteboard content. This integration allows the models to manipulate images—such as zooming or rotating—as part of their analytical processes, enhancing their problem-solving abilities.
OpenAI has implemented a novel training paradigm called “deliberative alignment” in these models. This approach enables the AI to engage in structured reasoning aligned with human-written safety standards, enhancing adherence to safety benchmarks and providing context-sensitive responses.
CEO Sam Altman has acknowledged the complexity of OpenAI’s model naming conventions and indicated that a more intuitive naming system is forthcoming.
Accessibility and Future Developments
The o3 and o4-mini models are now available to ChatGPT Plus, Pro, and Team users. The rollout aligns with OpenAI’s recent unveiling of the GPT-4.1 model, reflecting the company’s rapid progress in AI development.
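Beyond ChatGPT, developers can reach these models through OpenAI's API. The snippet below is a minimal sketch, assuming the official `openai` Python SDK and the model identifiers `o3` and `o4-mini` as announced; the `build_request` helper is illustrative, not part of the SDK, and available model names may vary by account.

```python
# Minimal sketch of preparing a chat-completion request for a reasoning model.
# Assumes the official `openai` Python SDK (pip install openai) and an
# OPENAI_API_KEY set in the environment.

def build_request(model: str, prompt: str) -> dict:
    """Assemble chat-completion parameters for the given model and prompt."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

params = build_request("o4-mini", "Summarize the key idea of dynamic programming.")
print(params["model"])  # o4-mini

# Sending the request requires network access and a valid API key:
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**params)
# print(response.choices[0].message.content)
```

Switching between o3 and o4-mini is then just a matter of changing the `model` string, which makes it easy to trade capability against cost per task.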
These advancements underscore OpenAI’s commitment to pushing the boundaries of AI capabilities while maintaining a focus on safety and accessibility.
OpenAI also launched Codex CLI, an open-source coding agent that runs locally in the user's terminal. It aims to give users a simple, transparent way to connect AI models (including o3 and o4-mini, with GPT-4.1 support coming soon) to code and tasks running on their own computers. Codex CLI is available now on GitHub.
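For readers who want to try it, getting started looks roughly like the following setup fragment; the npm package name (`@openai/codex`) and the `--model` flag reflect the project's documentation at launch and may change, so check the GitHub README for current instructions.

```shell
# Install Codex CLI globally via npm (requires Node.js).
npm install -g @openai/codex

# Authenticate by exporting your OpenAI API key.
export OPENAI_API_KEY="your-api-key-here"

# Start a session in the current project directory,
# selecting o4-mini as the reasoning model.
codex --model o4-mini "explain what this codebase does"
```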
For more information on OpenAI's latest models and their capabilities, see the CometAPI o3 API and o4-mini API pages, which describe how to access and integrate both APIs through CometAPI.