Where Is Deep Research in ChatGPT? A professional overview
Over 2024–2025, ChatGPT and its sibling models shifted from being purely conversational LLMs to offering end-to-end deep research capabilities: browser-assisted retrieval, long-form synthesis, multimodal evidence extraction, and tightly integrated safety controls. This article explains what Deep Research is and where to find it.
What is “Deep Research” in ChatGPT?
“Deep Research” is a productized feature in ChatGPT that goes beyond single-turn Q&A: you give a research prompt (for example, “survey the latest work on XX, summarize key methods and give reproducible citations”), and the system autonomously retrieves web documents, reads and extracts evidence, aggregates conflicting viewpoints, and returns a structured, referenced report. The feature bundles browsing, document retrieval, and synthesis into one flow so a user gets a near-human research assistant experience rather than a plain generated reply.
Why the timing? Data, compute, models and product demand
Three converging trends made Deep Research practical in 2024–2025:
- Improved multimodal and reasoning models. Newer base models (the o-series, GPT-4o, and the later GPT-5 family) deliver stronger reasoning and a better ability to follow multi-step instructions, which permits deeper analysis of retrieved evidence.
- Tooling for safe browsing and retrieval. Better tool interfaces (sandboxes, click-through browsing, retrieval modules) and architectural patterns like retrieval-augmented generation (RAG) enabled models to consult external sources during a session. The result: richer, updatable knowledge without retraining.
- Product demand for time-saving automation. Organizations and individuals want automated research assistants that produce structured, citable outputs in minutes rather than hours — pushing vendors to productize research pipelines as features. OpenAI’s launch of a dedicated “deep research” tool and later lightweight variants reflects that market pull.
Where is Deep Research in ChatGPT?
ChatGPT web/app:
Deep Research is a built-in ChatGPT agent (a dedicated tool/mode) that autonomously browses, reads, and synthesizes web pages, PDFs, images, and uploaded files into a cited research report. It appears inside the ChatGPT interface as the Deep research option in the composer/tools dropdown (or under “Agent mode” in newer UI updates) and is available in tiered form: a full-model paid version plus a cheaper lightweight variant rolled out to more users. To use it, pick “Deep research” from the dropdown and type your research query.
Plus/Team/Enterprise/Edu plans allow 25 tasks per month; Pro users can run 250 tasks per month; Free users can run 5 tasks per month. After the quota is reached, tasks fall back to the lightweight model.
Quick steps:
- Open ChatGPT (chatgpt.com / chat.openai.com) and sign in.
- Start a new chat and look at the message composer (where you type). Click the mode/tools dropdown. You should see “Deep research” (or select agent mode to access the updated visual/agent features).
- Enter your prompt and (optionally) attach files (PDFs, spreadsheets, images). Deep Research will run (typically 5–30 minutes) and return a cited report.
If you don’t see the “+” sign, type “/” in the input box before your prompt to bring up the tool list, then select Deep research.
API Access
OpenAI provides a Deep Research API, called through the Responses endpoint. Alternatively, you can choose CometAPI, a third-party API aggregation platform that exposes the same Deep Research models at a lower cost than the official platform, using the same Responses-style interface.
There are two Deep Research–specialized models as of 2025:
- O3-Deep-Research API: o3-deep-research, the more powerful, high-quality research model.
- O4-Mini-Deep-Research API: o4-mini-deep-research, a lighter, lower-cost version for faster or more frequent queries.
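As a rough illustration, a request body for the Responses endpoint might be assembled as below. The field names follow OpenAI's published Responses API; the `web_search_preview` tool and `background` flag reflect how long-running deep research jobs are typically configured, and should be checked against the current documentation before use.

```python
# Sketch: assembling a Deep Research request body for the Responses
# endpoint. Field names are modeled on OpenAI's Responses API and may
# differ across providers; verify against the current docs.

def build_deep_research_request(query: str, model: str = "o3-deep-research") -> dict:
    """Assemble a Responses-style request body for a Deep Research run."""
    return {
        "model": model,
        "input": query,
        # Deep Research runs need at least one data-gathering tool;
        # web search is the usual choice.
        "tools": [{"type": "web_search_preview"}],
        # Long research tasks typically run in the background and are
        # polled for completion rather than held open on one request.
        "background": True,
    }

payload = build_deep_research_request(
    "Survey recent work on retrieval-augmented generation and cite sources."
)
print(payload["model"])  # o3-deep-research
```

The same payload shape can be pointed at either the official endpoint or an aggregator such as CometAPI, swapping only the base URL and API key.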
OpenAI charges for Deep Research based on token usage (input and output tokens), plus tool usage (e.g., web search), similar to other models. CometAPI offers the same models at roughly 20% below the official price. Here are the details:
| Model | Input Token Cost | Output Token Cost |
|---|---|---|
| o3-deep-research | US$ 8 per 1M tokens | US$ 32 per 1M tokens |
| o4-mini-deep-research | US$ 1.6 per 1M tokens | US$ 6.4 per 1M tokens |
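Using the per-token rates in the table above, per-run cost is easy to estimate. Note that tool calls (e.g., web search) are billed separately and not modeled in this sketch:

```python
# Estimating Deep Research token costs from the table above.
# Prices are USD per 1M tokens; tool-call charges are extra.

PRICES = {
    "o3-deep-research": {"input": 8.00, "output": 32.00},
    "o4-mini-deep-research": {"input": 1.60, "output": 6.40},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one run."""
    p = PRICES[model]
    return (input_tokens / 1_000_000) * p["input"] + \
           (output_tokens / 1_000_000) * p["output"]

# A run that ingests ~100k tokens of sources and writes a ~20k-token report:
cost = estimate_cost("o3-deep-research", 100_000, 20_000)
print(f"${cost:.2f}")  # $1.44
```

Running the same workload on o4-mini-deep-research cuts the token bill by 80%, which is why the lighter model suits frequent or scheduled queries.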
My Recommendation
Use ChatGPT Deep Research when you want a hands-off research assistant: you type a query, the agent browses the web, synthesizes, and returns a report with citations. This is ideal for ad hoc research, ideation, or business/academic exploration.
Use the API if:
- You have a developer workflow (e.g., generating daily research summaries, integrating with internal tools, automating research pipelines).
- You’re okay handling tool orchestration: clarifying questions, crawling, chunking, and post-processing results.
- You need more control: you can adjust prompts, handle clarifications, link tools, and integrate with your own systems.
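If you take the API route, post-processing the returned report is your responsibility. As one small example, assuming the report arrives as markdown with inline links (a common output shape, though not guaranteed), you could pull out the cited URLs for verification:

```python
import re

# Sketch: post-processing a Deep Research report delivered as markdown.
# Assumes citations appear as inline markdown links, e.g. [title](url);
# the sample report text below is a placeholder.

LINK_RE = re.compile(r"\[([^\]]+)\]\((https?://[^)\s]+)\)")

def extract_citations(report_md: str) -> list[tuple[str, str]]:
    """Return (title, url) pairs for every inline citation in the report."""
    return LINK_RE.findall(report_md)

report = (
    "RAG grounds outputs in external evidence ([source A](https://example.com/a)), "
    "though cost and robustness are debated ([source B](https://example.com/b))."
)
for title, url in extract_citations(report):
    print(title, "->", url)
```

A pipeline like this can then re-fetch each URL to confirm the cited page actually supports the claim, which is a common defense against misattribution.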
How does Deep Research in ChatGPT actually work under the hood?
Core technical components (pipeline view)
A typical Deep Research run chains several subsystems:
1) Query understanding and decomposition: The system first parses the user prompt into sub-tasks (e.g., define scope, find primary sources, extract numbers, synthesize disagreement). Explicit decomposition improves traceability for long, complex tasks.
2) Retrieval and browsing: The assistant uses a combination of cached indexes, web search APIs, and an internal browsing agent to fetch pages, PDFs, datasets, and code snippets. Retrieval is not just “top-k” pass-through; it usually includes reranking for authority and relevance, and snippet extraction for evidence. Scholarly reviews of RAG show this hybrid retrieval + generation pattern is now standard for grounded outputs.
3) Document ingestion & long-context reasoning: Documents are chunked, converted into vector embeddings, and fed into the reasoning model together with a chain-of-thought or deliberative reasoning prompt. Modern research modes exploit longer context windows (and sometimes selective fine-tuning or in-context exemplars) to maintain coherence across multi-source synthesis.
4) Evidence consolidation and citation: The model identifies claims that require support, attaches provenance (URLs, quoted snippets, or bibliographic metadata), and highlights uncertainties. Products may provide a bibliography and inline citations or an exportable report.
5) Safety, filtering and human-in-the-loop checks: Before delivering final outputs, Deep Research modules run safety policies (filtering hallucinations, flagging controversial claims, adding content warnings) and sometimes route high-risk tasks to human reviewers or require user confirmation.
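The first four stages above can be sketched end to end in a few lines. This toy version stands in an LLM with keyword matching and uses a three-document placeholder corpus; real systems decompose with a model, retrieve with embeddings plus reranking, and synthesize with the reasoning model, but the data flow (decompose, retrieve, consolidate with provenance) is the same:

```python
# Toy sketch of the pipeline above: decompose a prompt into sub-topics,
# retrieve supporting snippets by naive keyword matching, and attach
# provenance (source URL + snippet) to each claim. Corpus, URLs, and
# keyword lists are placeholders, not real sources.

CORPUS = {
    "https://example.com/rag-survey": "RAG combines retrieval with generation for grounded answers.",
    "https://example.com/graphrag": "GraphRAG adds graph-structured retrieval for multi-hop questions.",
    "https://example.com/agents": "Agent controllers orchestrate browsing, extraction, and verification.",
}

def decompose(prompt: str) -> list[str]:
    """Step 1 (toy): pick sub-topics mentioned in the prompt."""
    return [kw for kw in ("retrieval", "graph", "agent") if kw in prompt.lower()]

def retrieve(topic: str) -> list[tuple[str, str]]:
    """Step 2 (toy): return (url, snippet) pairs mentioning the topic."""
    return [(url, text) for url, text in CORPUS.items() if topic in text.lower()]

def consolidate(prompt: str) -> list[dict]:
    """Steps 3-4 (toy): gather evidence and record provenance per claim."""
    return [
        {"topic": topic, "source": url, "snippet": snippet}
        for topic in decompose(prompt)
        for url, snippet in retrieve(topic)
    ]

report = consolidate("Compare retrieval and agent approaches to deep research")
for row in report:
    print(row["topic"], "<-", row["source"])
```

Keeping provenance attached to every claim from retrieval onward is what makes the final citation stage possible; it cannot be reconstructed reliably after synthesis.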
What algorithms and approaches are most important right now?
- Retrieval-Augmented Generation (RAG) — still central to grounding model outputs in external evidence. Systematic reviews show RAG remains a dominant approach for factual grounding, though debates continue about cost and robustness.
- Deliberative / chain-of-thought alignment — explicit internal reasoning steps used both to improve accuracy and to enable models to reference safety specifications when answering.
- Graph-structured retrieval (GraphRAG and variants) — integrating relational knowledge and multi-hop connections to surface more relevant, context-aware evidence. This is an active research area in 2024–2025.
- Agent frameworks — small controller agents that orchestrate browsing, extraction, verification, and summarization steps are now common in production Deep Research flows. These controllers reduce end-to-end brittleness.
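To make the GraphRAG idea concrete, here is a minimal illustration: documents that match the query directly are expanded one hop along citation/entity links, surfacing related evidence a flat top-k keyword search would miss. The graph, texts, and scoring are toy placeholders:

```python
# Toy graph-structured retrieval: seed docs matching the query are
# expanded along outgoing links. Doc ids, link structure, and the
# substring-matching "scorer" are all illustrative placeholders.

GRAPH = {
    "rag-survey": ["graphrag", "agents"],   # rag-survey cites these
    "graphrag": ["kg-construction"],
    "agents": [],
    "kg-construction": [],
}
TEXT = {
    "rag-survey": "retrieval augmented generation grounds answers",
    "graphrag": "graph structured retrieval for multi-hop evidence",
    "agents": "controller agents orchestrate research steps",
    "kg-construction": "building knowledge graphs from documents",
}

def graph_retrieve(query: str, hops: int = 1) -> list[str]:
    """Return doc ids: direct keyword matches plus their graph neighbors."""
    frontier = [d for d, t in TEXT.items() if any(w in t for w in query.lower().split())]
    seen = list(frontier)
    for _ in range(hops):
        frontier = [n for d in frontier for n in GRAPH[d] if n not in seen]
        seen.extend(frontier)
    return seen

print(graph_retrieve("multi-hop retrieval"))
```

Note that "kg-construction" never matches the query text; it is reachable only through the graph hop, which is exactly the kind of context-aware evidence flat retrieval misses.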
Limitations and safety/ethical concerns
How trustworthy are the outputs (hallucination and misattribution)?
While Deep Research improves citation rates compared with simple prompts, models still hallucinate facts and misattribute claims, especially for low-signal queries or when authoritative sources are behind paywalls. The product announcements and reporting acknowledge these limits; lighter, cheaper model variants also increase the risk of shorter, less-supported answers when used without care.
What are the mental-health and societal risks tied to broad availability?
OpenAI and independent reporting reveal a nontrivial set of social-harm risks. Public reporting indicates that a substantial number of weekly ChatGPT interactions are flagged for suicidal ideation or psychosis; that figure has driven scrutiny, litigation, and regulatory attention. These incidents underscore that Deep Research, especially when used for advice or in therapeutic contexts, must be combined with fail-safes, signposting to human experts, and clear disclaimers.
What about bias, misuse, and adversarial manipulation?
Deep Research could be manipulated by adversaries who optimize web content for deceptive signals (SEO, sockpuppet sources), or by groups who intentionally seed misinformation to sway synthesis. Research into adversarial-robust retrieval, provenance verification, and provenance-aware model training is therefore critical.
Privacy and copyright concerns
Scraping, indexing, and summarizing paywalled or copyrighted research raises legal and ethical questions. Product teams are exploring licensed corpora, permissions, and watermarking to address these concerns; research into fair use boundaries for automated summarization is ongoing.
Conclusion
Deep research in ChatGPT is not a single lab or single technique; it is a layered effort that spans retrieval and grounding, alignment-by-reasoning, multimodal and real-time interaction, efficient model engineering, and the systems/infrastructure that make these experiments possible at scale. Recent product launches (the “deep research” feature and upgraded GPT series), corporate research on deliberative alignment, active academic work on RAG and agentic models, and massive infrastructure investments together map the territory of where the field is placing its bets right now.
Currently, deep research can be used through ChatGPT and API, each with its own advantages and disadvantages.
Developers can access the O3-Deep-Research and O4-Mini-Deep-Research APIs through CometAPI, which keeps model versions in sync with the official releases. To begin, explore the model's capabilities in the Playground and consult the API guide for detailed instructions. Before accessing, please make sure you have logged in to CometAPI and obtained an API key. CometAPI offers prices far below the official rates to help you integrate.
Ready to go? → Sign up for CometAPI today!
If you want to know more tips, guides and news on AI follow us on VK, X and Discord!


