
omni-moderation-latest

Per request: $0.0016
Commercial use permitted

Technical Specifications of omni-moderation-latest

Model ID: omni-moderation-latest
Provider: OpenAI
Model type: Moderation model for safety classification
Primary use case: Detecting potentially harmful content in user inputs or model outputs
Supported inputs: Text and images
Output type: Structured moderation results in text/JSON form, including flags, categories, and scores
Endpoint: Moderations API
Multimodal support: Yes; accepts multi-modal input objects, including image URLs or base64 image data
Default/latest status: Documented by OpenAI as the latest omni moderation model and the recommended choice for new moderation integrations
Legacy alternative: text-moderation-latest, the older text-only option with fewer categories
Performance profile: Listed by OpenAI as high-performance moderation with medium speed
Pricing: OpenAI describes moderation models as free models

What is omni-moderation-latest?

omni-moderation-latest is OpenAI’s current moderation model for identifying unsafe or policy-sensitive content in text and images. It is designed for developers who need to screen user prompts, uploaded media, or model-generated outputs before those items are shown to end users or passed into downstream systems.

OpenAI introduced omni-moderation-latest as a multimodal upgrade to earlier moderation models. According to OpenAI’s documentation, it is more capable than legacy text-only moderation options, supports broader categorization, and is the best choice for new applications using the moderation endpoint.

In practice, teams use omni-moderation-latest for input moderation, output moderation, trust-and-safety pipelines, forum filtering, social content review, and abuse detection workflows where fast automated triage is needed before human review or enforcement logic. This is an application-level inference based on OpenAI’s moderation guidance and API design.
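The input-moderation pattern described above can be sketched as a small pre-screening helper: classify the content first, then branch on the result before it reaches downstream systems. This is a hypothetical sketch, not an official client: the endpoint URL and `COMETAPI_API_KEY` variable follow the integration steps later in this page, and the block-on-flag policy in `should_block` is an application choice, not part of the API.

```python
import json
import os
import urllib.request

COMETAPI_URL = "https://api.cometapi.com/v1/moderations"  # endpoint from the curl example below


def moderate_text(text: str) -> dict:
    """Call the moderations endpoint and return the first result object."""
    payload = json.dumps({"model": "omni-moderation-latest", "input": text}).encode()
    req = urllib.request.Request(
        COMETAPI_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['COMETAPI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]


def should_block(result: dict) -> bool:
    """Simplest possible policy: block anything the model flags."""
    return bool(result.get("flagged"))
```

Keeping the decision logic (`should_block`) separate from the network call makes it easy to swap in per-category thresholds or a human-review queue later.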

Main features of omni-moderation-latest

  • Multimodal moderation: The model accepts both text and image inputs, making it suitable for modern applications that need to evaluate user messages alongside uploaded visual content.
  • Broader safety categorization: OpenAI states that the omni moderation family supports more categorization options than legacy text moderation models.
  • Structured safety output: Responses include a moderation object with fields such as flagged, category labels, and category scores, which makes it easier to automate allow, block, review, or escalation logic.
  • Improved accuracy: OpenAI reported that omni-moderation-latest is more accurate than the previous moderation generation, especially for non-English content.
  • Image-aware classification support: The API can ingest image URLs or base64-encoded image data as moderation inputs, enabling screening of uploaded or linked visual assets.
  • Best fit for new integrations: OpenAI’s moderation guide explicitly recommends the newer omni moderation models for new applications rather than the legacy text-only model family.
  • Free moderation usage from the provider side: OpenAI’s model listing describes moderation models as free, which can make them attractive for large-scale safety filtering workflows.
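The structured safety output mentioned above looks roughly like the following. The values and category names here are illustrative and trimmed; the real response carries the full category set from OpenAI's moderation schema.

```python
# A trimmed, illustrative moderation result object (not real API output).
result = {
    "flagged": True,
    "categories": {"harassment": True, "violence": False},
    "category_scores": {"harassment": 0.91, "violence": 0.02},
}

# Routing logic can branch on the boolean labels or on the raw scores.
flagged_categories = [name for name, hit in result["categories"].items() if hit]
print(flagged_categories)  # ['harassment']
```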

How to access and integrate omni-moderation-latest

Step 1: Sign Up for API Key

To start using omni-moderation-latest, first sign up on CometAPI and generate your API key from the dashboard. After creating the key, store it securely in an environment variable such as COMETAPI_API_KEY. This key will be used to authenticate every request you send to the model.
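Loading the key in application code can be as simple as the sketch below. The variable name `COMETAPI_API_KEY` matches the step above; failing fast on a missing key is a common convention, not a CometAPI requirement.

```python
import os


def get_api_key() -> str:
    """Read the CometAPI key from the environment, failing fast if unset."""
    key = os.environ.get("COMETAPI_API_KEY")
    if not key:
        raise RuntimeError("Set COMETAPI_API_KEY before calling the API.")
    return key
```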

Step 2: Send Requests to omni-moderation-latest API

Once you have your API key, you can call the OpenAI-compatible moderations endpoint on CometAPI while specifying omni-moderation-latest as the model. Example:

curl https://api.cometapi.com/v1/moderations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -d '{
    "model": "omni-moderation-latest",
    "input": "Sample text to classify for safety."
  }'

You can also send multimodal moderation input when your workflow includes images, provided your client and request format follow the upstream moderation schema supported by the model. OpenAI’s moderation API documentation shows that omni-moderation-latest supports text strings, arrays of strings, and multi-modal input objects.
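For image-bearing content, the request body replaces the bare string with the multi-modal input array from OpenAI's moderation schema: a list of typed parts. A sketch of building such a payload (the image URL is a placeholder; a base64 `data:` URL works in the same slot):

```python
import json

payload = {
    "model": "omni-moderation-latest",
    "input": [
        {"type": "text", "text": "Caption accompanying the upload"},
        {
            "type": "image_url",
            # Or a data: URL carrying base64-encoded image bytes.
            "image_url": {"url": "https://example.com/upload.png"},
        },
    ],
}

# Send this as the POST body with the same headers as the text example above.
body = json.dumps(payload)
```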

Step 3: Retrieve and Verify Results

The API response will return a moderation result object containing the model used, a flagged decision, category-level labels, and category scores. After receiving the response, verify whether the content crosses your application’s policy threshold before allowing it into your product flow. For production use, many teams combine automated thresholding with logging, manual review queues, and policy-specific business rules.
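Thresholding on category scores, as described above, can be as simple as comparing each score to a per-category limit. The limits below are illustrative placeholders, not recommendations; real thresholds should come from your own policy and evaluation data.

```python
# Per-category score limits (illustrative values; tune per application policy).
THRESHOLDS = {"harassment": 0.5, "violence": 0.7, "sexual": 0.4}


def violations(result: dict) -> list[str]:
    """Return the category names whose score meets or exceeds its threshold."""
    scores = result.get("category_scores", {})
    return [cat for cat, limit in THRESHOLDS.items() if scores.get(cat, 0.0) >= limit]


sample = {"category_scores": {"harassment": 0.62, "violence": 0.10}}
print(violations(sample))  # ['harassment']
```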

Features of omni-moderation-latest

Learn the core capabilities of omni-moderation-latest to improve performance and usability and deliver a better overall experience.

Pricing for omni-moderation-latest

See competitive pricing for omni-moderation-latest that fits different budgets and usage needs, with flexible plans that scale as your requirements grow.

CometAPI price: $0.0016 per request
Official price: $0.002 per request
Discount: -20%
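At per-request pricing, total cost scales linearly with call volume. A quick sanity check of the listed discount, plus a volume estimate (the one-million-request figure is an arbitrary example):

```python
comet_price = 0.0016    # USD per request (CometAPI price listed above)
official_price = 0.002  # USD per request (official price listed above)

discount = (comet_price - official_price) / official_price
print(f"{discount:.0%}")  # -20%

monthly_requests = 1_000_000  # hypothetical volume
print(f"${comet_price * monthly_requests:,.2f}")  # $1,600.00
```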

Sample code and API for omni-moderation-latest

Get complete sample code and API resources to simplify integrating omni-moderation-latest. We provide step-by-step guidance to help you get the most out of the model.

More models

GPT Image 2

Input: $6.4/M
Output: $24/M
GPT Image 2 is OpenAI's most advanced image generation model, enabling fast, high-quality image generation and editing. It supports flexible image sizes and high-fidelity image inputs.

Doubao-Seedance-2-0

Per second: $0.07
Seedance 2.0 is ByteDance's next-generation multimodal video foundation model, focused on cinematic, multi-shot narrative video generation. Unlike single-shot text-to-video demos, Seedance 2.0 emphasizes reference-based control (images, short video clips, audio), cross-shot character and style consistency, and native audio-video synchronization, aiming to make AI video genuinely useful for professional creative and previsualization workflows.

Claude Opus 4.7

Input: $3/M
Output: $15/M
The most intelligent model for agents and coding.

Claude Sonnet 4.6

Input: $2.4/M
Output: $12/M
Claude Sonnet 4.6 is our most capable Sonnet model to date. It delivers across-the-board upgrades in coding, computer use, long-context reasoning, agentic planning, knowledge work, and design. Sonnet 4.6 also offers a 1M-token context window in beta.

GPT 5.5 Pro

Input: $24/M
Output: $144/M
An advanced model built for extremely complex logic and specialist demands, representing the highest standard of deep reasoning and precise analysis.

GPT 5.5

Input: $4/M
Output: $24/M
A next-generation multimodal flagship model that balances outstanding performance with efficient responses, delivering comprehensive, stable general-purpose AI services.