© 2026 CometAPI · All rights reserved

omni-moderation-latest

Per Request: $0.0016
Commercial Use

Technical Specifications of omni-moderation-latest

Attribute                 Details
Model ID                  omni-moderation-latest
Provider                  OpenAI
Model type                Moderation model for safety classification
Primary use case          Detecting potentially harmful content in user inputs or model outputs
Supported inputs          Text and images
Output type               Structured moderation results in text/JSON form, including flags, categories, and scores
Endpoint                  Moderations API
Multimodal support        Yes; supports multi-modal input objects, including image URLs or base64 image data
Default/latest status     Documented by OpenAI as the latest omni moderation model and the recommended choice for new moderation integrations
Legacy alternative        text-moderation-latest, the older text-only option with fewer categorizations
Performance profile       Listed by OpenAI as high-performance moderation with medium speed
Pricing                   Described by OpenAI as a free model

What is omni-moderation-latest?

omni-moderation-latest is OpenAI’s current moderation model for identifying unsafe or policy-sensitive content in text and images. It is designed for developers who need to screen user prompts, uploaded media, or model-generated outputs before those items are shown to end users or passed into downstream systems.

OpenAI introduced omni-moderation-latest as a multimodal upgrade to earlier moderation models. According to OpenAI’s documentation, it is more capable than legacy text-only moderation options, supports broader categorization, and is the best choice for new applications using the moderation endpoint.

In practice, teams use omni-moderation-latest for input moderation, output moderation, trust-and-safety pipelines, forum filtering, social content review, and abuse-detection workflows where fast automated triage is needed before human review or enforcement logic. These use cases are inferred from OpenAI's moderation guidance and API design rather than listed explicitly in its documentation.

Main features of omni-moderation-latest

  • Multimodal moderation: The model accepts both text and image inputs, making it suitable for modern applications that need to evaluate user messages alongside uploaded visual content.
  • Broader safety categorization: OpenAI states that the omni moderation family supports more categorization options than legacy text moderation models.
  • Structured safety output: Responses include a moderation object with fields such as flagged, category labels, and category scores, which makes it easier to automate allow, block, review, or escalation logic.
  • Improved accuracy: OpenAI reported that omni-moderation-latest is more accurate than the previous moderation generation, especially for non-English content.
  • Image-aware classification support: The API can ingest image URLs or base64-encoded image data as moderation inputs, enabling screening of uploaded or linked visual assets.
  • Best fit for new integrations: OpenAI’s moderation guide explicitly recommends the newer omni moderation models for new applications rather than the legacy text-only model family.
  • Free moderation usage from the provider side: OpenAI’s model listing describes moderation models as free, which can make them attractive for large-scale safety filtering workflows.
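To make the "structured safety output" point concrete, the sketch below parses the kind of object the Moderations API returns. The field names (flagged, categories, category_scores) follow OpenAI's documented moderation schema; the id, category names, and score values here are invented for illustration.

```python
import json

# Illustrative moderation response; field names follow OpenAI's documented
# schema, but the id, categories, and scores are invented for this example.
sample_response = json.loads("""
{
  "id": "modr-0001",
  "model": "omni-moderation-latest",
  "results": [
    {
      "flagged": true,
      "categories": {"violence": true, "harassment": false},
      "category_scores": {"violence": 0.91, "harassment": 0.03}
    }
  ]
}
""")

result = sample_response["results"][0]
print(result["flagged"])                      # overall boolean decision
print(result["categories"]["violence"])       # per-category boolean label
print(result["category_scores"]["violence"])  # per-category confidence score
```

Because each result carries both boolean labels and numeric scores, applications can choose between simply trusting the flagged field and applying their own per-category thresholds.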

How to access and integrate omni-moderation-latest

Step 1: Sign Up for API Key

To start using omni-moderation-latest, first sign up on CometAPI and generate your API key from the dashboard. After creating the key, store it securely in an environment variable such as COMETAPI_API_KEY. This key will be used to authenticate every request you send to the model.
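If you are calling the API from Python rather than curl, a minimal setup reads the key from that environment variable and builds the request headers. This is a sketch, not CometAPI-official client code; the placeholder default exists only so the snippet runs, and a real deployment should fail instead of falling back.

```python
import os

# Read the key from the COMETAPI_API_KEY variable suggested above.
# The placeholder default is only for illustration; in production,
# raise an error instead of sending a request with a dummy key.
api_key = os.environ.get("COMETAPI_API_KEY", "sk-placeholder")

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",
}
```

These headers match the ones used in the curl example below: a JSON content type plus bearer-token authentication.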

Step 2: Send Requests to omni-moderation-latest API

Once you have your API key, you can call the OpenAI-compatible moderations endpoint on CometAPI while specifying omni-moderation-latest as the model. Example:

curl https://api.cometapi.com/v1/moderations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -d '{
    "model": "omni-moderation-latest",
    "input": "Sample text to classify for safety."
  }'

You can also send multimodal moderation input when your workflow includes images, provided your client and request format follow the upstream moderation schema supported by the model. OpenAI’s moderation API documentation shows that omni-moderation-latest supports text strings, arrays of strings, and multi-modal input objects.
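A multimodal request body, following the typed-parts schema OpenAI documents for the moderation endpoint, can be built as below. The URL and caption text are placeholders; only the structure (an input array mixing a text part with an image_url part) is the point.

```python
import json

# Multi-modal moderation input per OpenAI's documented schema:
# an array of typed parts mixing text with an image URL.
# The caption and URL here are placeholders.
body = {
    "model": "omni-moderation-latest",
    "input": [
        {"type": "text", "text": "Caption submitted by the user"},
        {"type": "image_url",
         "image_url": {"url": "https://example.com/upload.png"}},
    ],
}

payload = json.dumps(body)  # send as the POST body to /v1/moderations
```

Base64-encoded image data can be supplied in place of the URL, using a data URL in the same image_url field.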

Step 3: Retrieve and Verify Results

The API response will return a moderation result object containing the model used, a flagged decision, category-level labels, and category scores. After receiving the response, verify whether the content crosses your application’s policy threshold before allowing it into your product flow. For production use, many teams combine automated thresholding with logging, manual review queues, and policy-specific business rules.
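The thresholding step can be sketched as a small routing function. The cutoff values and the review band are application policy choices, not part of the API; a real pipeline would also apply per-category thresholds rather than a single maximum.

```python
# Minimal policy-routing sketch. The block_at / review_at thresholds are
# illustrative application choices, not values defined by the API.
def route(result: dict, block_at: float = 0.8, review_at: float = 0.4) -> str:
    """Map a single moderation result to allow / review / block."""
    top_score = max(result["category_scores"].values(), default=0.0)
    if result["flagged"] and top_score >= block_at:
        return "block"
    if top_score >= review_at:
        return "review"  # queue for human review
    return "allow"

print(route({"flagged": True, "category_scores": {"violence": 0.93}}))   # block
print(route({"flagged": False, "category_scores": {"violence": 0.05}}))  # allow
```

The middle "review" band is where logging and manual review queues mentioned above typically plug in.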


Pricing for omni-moderation-latest

Explore competitive pricing for omni-moderation-latest, designed to fit various budgets and usage needs. Our flexible plans ensure you only pay for what you use, making it easy to scale as your requirements grow. Discover how omni-moderation-latest can enhance your projects while keeping costs manageable.
Comet Price (USD)        Official Price (USD)     Discount
$0.0016 per request      $0.002 per request       -20%

