Technical Specifications of omni-moderation-latest
| Attribute | Details |
|---|---|
| Model ID | omni-moderation-latest |
| Provider | OpenAI |
| Model type | Moderation model for safety classification |
| Primary use case | Detecting potentially harmful content in user inputs or model outputs |
| Supported inputs | Text and images |
| Output type | Structured moderation results in text/JSON form, including flags, categories, and scores |
| Endpoint | Moderations API |
| Multimodal support | Yes; supports multi-modal input objects, including image URLs or base64 image data |
| Default/latest status | OpenAI documents it as the latest omni moderation model and the recommended choice for new moderation integrations |
| Legacy alternative | text-moderation-latest, the older text-only model with fewer moderation categories |
| Performance profile | OpenAI lists it as high-performance moderation with medium speed |
| Pricing | OpenAI lists moderation models as free to use |
What is omni-moderation-latest?
omni-moderation-latest is OpenAI’s current moderation model for identifying unsafe or policy-sensitive content in text and images. It is designed for developers who need to screen user prompts, uploaded media, or model-generated outputs before those items are shown to end users or passed into downstream systems.
OpenAI introduced omni-moderation-latest as a multimodal upgrade to earlier moderation models. According to OpenAI’s documentation, it is more capable than legacy text-only moderation options, supports broader categorization, and is the best choice for new applications using the moderation endpoint.
In practice, teams use omni-moderation-latest for input moderation, output moderation, trust-and-safety pipelines, forum filtering, social content review, and abuse detection workflows where fast automated triage is needed before human review or enforcement logic. This is an application-level inference based on OpenAI’s moderation guidance and API design.
Main features of omni-moderation-latest
- Multimodal moderation: The model accepts both text and image inputs, making it suitable for modern applications that need to evaluate user messages alongside uploaded visual content.
- Broader safety categorization: OpenAI states that the omni moderation family supports more categorization options than legacy text moderation models.
- Structured safety output: Responses include a moderation object with fields such as flagged, category labels, and category scores, which makes it easier to automate allow, block, review, or escalation logic.
- Improved accuracy: OpenAI reported that omni-moderation-latest is more accurate than the previous moderation generation, especially for non-English content.
- Image-aware classification support: The API can ingest image URLs or base64-encoded image data as moderation inputs, enabling screening of uploaded or linked visual assets.
- Best fit for new integrations: OpenAI’s moderation guide explicitly recommends the newer omni moderation models for new applications rather than the legacy text-only model family.
- Free moderation usage from the provider side: OpenAI’s model listing describes moderation models as free, which can make them attractive for large-scale safety filtering workflows.
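To make the structured-output point above concrete, here is a minimal sketch of inspecting a moderation result and turning it into a routing decision. The field names (results, flagged, categories, category_scores) follow OpenAI's moderations API; the sample values and the triage helper are invented for illustration.

```python
# Sample moderation response. The field names follow OpenAI's moderations
# API shape; the values below are invented for illustration only.
sample_response = {
    "id": "modr-example",
    "model": "omni-moderation-latest",
    "results": [
        {
            "flagged": True,
            "categories": {"harassment": True, "violence": False},
            "category_scores": {"harassment": 0.91, "violence": 0.02},
        }
    ],
}

def triage(response: dict) -> str:
    """Return a coarse routing decision from a moderation response."""
    result = response["results"][0]
    if not result["flagged"]:
        return "allow"
    # Collect the categories the model actually flagged.
    hits = [name for name, hit in result["categories"].items() if hit]
    return "block" if hits else "review"

print(triage(sample_response))  # harassment was flagged, so this prints "block"
```

In practice the block/review split would encode your own policy; this sketch only shows that the flagged boolean and per-category labels are directly usable for automation.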
How to access and integrate omni-moderation-latest
Step 1: Sign Up for API Key
To start using omni-moderation-latest, first sign up on CometAPI and generate your API key from the dashboard. After creating the key, store it securely in an environment variable such as COMETAPI_API_KEY. This key will be used to authenticate every request you send to the model.
Step 2: Send Requests to omni-moderation-latest API
Once you have your API key, you can call the OpenAI-compatible moderations endpoint on CometAPI, specifying omni-moderation-latest as the model. Example:
```shell
curl https://api.cometapi.com/v1/moderations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $COMETAPI_API_KEY" \
  -d '{
    "model": "omni-moderation-latest",
    "input": "Sample text to classify for safety."
  }'
```
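The same request can be issued from Python using only the standard library. The endpoint URL and the COMETAPI_API_KEY environment variable mirror the curl example; build_moderation_request is a helper name chosen here for illustration, not part of any SDK.

```python
import json
import os
import urllib.request

# Same endpoint as the curl example above.
API_URL = "https://api.cometapi.com/v1/moderations"

def build_moderation_request(text: str, api_key: str) -> urllib.request.Request:
    """Assemble the HTTP request without sending it."""
    payload = json.dumps({
        "model": "omni-moderation-latest",
        "input": text,
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

def moderate(text: str) -> dict:
    """Send the request and decode the JSON moderation result."""
    req = build_moderation_request(text, os.environ["COMETAPI_API_KEY"])
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Separating request construction from sending keeps the payload logic easy to unit-test and lets you swap in retry or logging wrappers around the network call.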
You can also send multimodal moderation input when your workflow includes images, provided your client and request format follow the upstream moderation schema supported by the model. OpenAI’s moderation API documentation shows that omni-moderation-latest supports text strings, arrays of strings, and multi-modal input objects.
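For a multimodal request, the plain input string is replaced by an array of typed parts. The part shapes below (type "text" and type "image_url") follow the moderation schema in OpenAI's documentation; the image URL is a placeholder.

```python
# Multimodal moderation input: an array of typed parts instead of a plain
# string. The part shapes follow OpenAI's moderations API schema; the image
# URL here is a placeholder, not a real asset.
multimodal_payload = {
    "model": "omni-moderation-latest",
    "input": [
        {"type": "text", "text": "Caption submitted with the upload."},
        {
            "type": "image_url",
            "image_url": {"url": "https://example.com/uploaded-image.png"},
        },
        # Base64-encoded image data is also accepted via a data URL, e.g.:
        # {"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}}
    ],
}
```

This dictionary can be JSON-encoded and sent as the request body in place of the text-only payload shown earlier.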
Step 3: Retrieve and Verify Results
The API response will return a moderation result object containing the model used, a flagged decision, category-level labels, and category scores. After receiving the response, verify whether the content crosses your application’s policy threshold before allowing it into your product flow. For production use, many teams combine automated thresholding with logging, manual review queues, and policy-specific business rules.
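The verification step above can be sketched as per-category thresholding over the returned scores. The threshold values, the review margin, and the category names here are illustrative application policy, not defaults from the API.

```python
# Sketch of policy thresholding over category scores. The thresholds, the
# review margin, and the choice to route borderline scores to human review
# are illustrative application policy, not part of the API itself.

BLOCK_THRESHOLDS = {"violence": 0.8, "harassment": 0.7}  # auto-block at or above
REVIEW_MARGIN = 0.3  # scores within this margin below a threshold go to review

def decide(category_scores: dict) -> str:
    """Map per-category scores to an allow/review/block decision."""
    for category, threshold in BLOCK_THRESHOLDS.items():
        score = category_scores.get(category, 0.0)
        if score >= threshold:
            return "block"
        if score >= threshold - REVIEW_MARGIN:
            return "review"
    return "allow"

print(decide({"violence": 0.05, "harassment": 0.1}))  # prints "allow"
print(decide({"violence": 0.6}))   # within the review margin, prints "review"
print(decide({"harassment": 0.95}))  # above the block threshold, prints "block"
```

In production this function would typically sit behind the API call, with each decision logged and "review" items pushed onto a human moderation queue.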