Do you mean:
- Comet (comet.com, aka Comet ML / CometLLM for LLM observability), or
- CometChat (cometchat.com messaging API)?

If you mean Comet (Comet ML/CometLLM), there’s no official, built‑in AnythingLLM integration yet, but you can integrate via either of these patterns:

Option A — OpenAI‑compatible relay (no changes to AnythingLLM)
- Goal: Insert a small proxy between AnythingLLM and your LLM provider to log prompts/responses to Comet.
- Steps:
  1) Deploy a relay that exposes /v1/chat/completions (and /v1/embeddings if needed). In the handler, start a CometLLM trace, forward the request to your real provider, stream/collect the response, log metadata/tokens, then return the response.
  2) Set AnythingLLM to use your relay:
     - Provider: OpenAI (or any OpenAI‑compatible)
     - Base URL: your relay URL
     - API Key: your relay’s key (the relay then uses the real provider key internally)
  3) Configure the relay with COMET_API_KEY (and COMET_WORKSPACE/COMET_PROJECT).
- Minimal outline (Node/Express + comet-llm + upstream OpenAI client):
  - Read COMET_API_KEY from env.
  - POST /v1/chat/completions:
    - Create a CometLLM trace/run with request metadata (model, messages, user).
    - Call upstream OpenAI’s chat.completions.create.
    - Log latency, token counts, cost (if available), and the response text.
    - Return upstream response as‑is so AnythingLLM remains unaware of the proxy.
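The outline above can be sketched in TypeScript. This is a hedged sketch, not a definitive implementation: `logPromptToComet` is a placeholder for whichever CometLLM SDK call or REST endpoint you actually use, and `UPSTREAM_URL`/`UPSTREAM_KEY` are assumed env var names. Only the request/response shapes (OpenAI chat-completions format) come from the outline itself.

```typescript
interface ChatRequest {
  model: string;
  messages: { role: string; content: string }[];
}

// Pure helper: pick out the metadata worth attaching to a Comet trace.
function buildTraceMetadata(req: ChatRequest, startedAt: number) {
  const userMessages = req.messages.filter((m) => m.role === "user");
  return {
    model: req.model,
    messageCount: req.messages.length,
    lastUserPrompt: userMessages.length ? userMessages[userMessages.length - 1].content : "",
    startedAt,
  };
}

// Placeholder: swap in a real CometLLM SDK call (or a POST to Comet's API).
async function logPromptToComet(meta: object, responseText: string, latencyMs: number) {
  console.log("comet-trace", { ...meta, latencyMs, preview: responseText.slice(0, 80) });
}

// Handler body for POST /v1/chat/completions: forward the request unchanged,
// log the exchange, and return the upstream JSON as-is so AnythingLLM never
// notices the proxy. Mount this in Express (or any HTTP framework).
async function relayChatCompletion(body: ChatRequest): Promise<unknown> {
  const startedAt = Date.now();
  const upstream = await fetch(`${process.env.UPSTREAM_URL}/v1/chat/completions`, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      authorization: `Bearer ${process.env.UPSTREAM_KEY}`,
    },
    body: JSON.stringify(body),
  });
  const json: any = await upstream.json();
  const text = json?.choices?.[0]?.message?.content ?? "";
  await logPromptToComet(buildTraceMetadata(body, startedAt), text, Date.now() - startedAt);
  return json;
}
```

Because the response body is returned untouched, the relay stays compatible with any OpenAI-style client, and streaming can be added later without changing AnythingLLM's configuration.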

Option B — Instrument AnythingLLM’s provider layer (requires forking/self‑hosting)
- Goal: Add CometLLM logging inside AnythingLLM where it calls OpenAI/Anthropic/etc.
- Steps:
  1) Fork AnythingLLM and locate the LLM provider modules (e.g., OpenAI/Anthropic adapters).
  2) Wrap calls with CometLLM SDK (JS/TS):
     - Before calling the model, start a trace/run; include system/user prompts, tool calls, thread IDs, and any conversation metadata.
     - After receiving the response (or stream chunks), log tokens, latency, status, and final text.
  3) Expose COMET_API_KEY/COMET_WORKSPACE/COMET_PROJECT via env.
  4) Rebuild and deploy your fork.
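The wrapping in step 2 can be expressed as a generic higher-order function. Note the assumptions: `sendTrace` stands in for whatever CometLLM logging call you wire up, and the `prompt`/`text` field names are illustrative, not AnythingLLM's actual adapter interface.

```typescript
type TraceSink = (trace: {
  prompt: string;
  response: string;
  latencyMs: number;
  status: "ok" | "error";
  metadata: Record<string, unknown>;
}) => void;

// Wrap any provider call so every invocation emits a trace, including
// failures. Apply this inside each adapter (OpenAI, Anthropic, ...).
function withCometTrace<TArgs extends { prompt: string }, TResult extends { text: string }>(
  call: (args: TArgs) => Promise<TResult>,
  sendTrace: TraceSink,
  metadata: Record<string, unknown> = {},
) {
  return async (args: TArgs): Promise<TResult> => {
    const start = Date.now();
    try {
      const result = await call(args);
      sendTrace({
        prompt: args.prompt,
        response: result.text,
        latencyMs: Date.now() - start,
        status: "ok",
        metadata,
      });
      return result;
    } catch (err) {
      sendTrace({ prompt: args.prompt, response: "", latencyMs: Date.now() - start, status: "error", metadata });
      throw err; // re-throw so AnythingLLM's own error handling still runs
    }
  };
}
```

Passing thread/conversation IDs through `metadata` keeps each trace correlated with the AnythingLLM conversation it came from.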

Best practices
- Do not log secrets or raw PII; use Comet’s redaction/anonymization to scrub prompts if needed.
- Include AnythingLLM conversation/thread IDs in Comet metadata for correlation.
- For streaming, log incremental tokens or buffer and emit a single completion with timing.
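As a purely illustrative example of the scrubbing idea (regexes like these are a fallback, not a substitute for Comet's redaction features or a real PII detector), a pre-log pass might look like:

```typescript
// Crude patterns for two common leaks: email addresses and OpenAI-style
// secret keys. Extend or replace with a proper PII library in production.
const EMAIL = /[\w.+-]+@[\w-]+\.[\w.-]+/g;
const API_KEY_LIKE = /\bsk-[A-Za-z0-9]{16,}\b/g;

// Run this on prompts and responses before handing them to the trace logger.
function scrubForLogging(text: string): string {
  return text.replace(EMAIL, "[email]").replace(API_KEY_LIKE, "[secret]");
}
```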

If you meant CometChat (messaging), typical approaches are:
- Create a small middleware service that listens to AnythingLLM events (e.g., message created) and forwards them to CometChat rooms/users via CometChat REST APIs.
- Or implement a custom “tool” in AnythingLLM that calls your service, which then uses CometChat APIs with your appId/apiKey.
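A minimal forwarder for the middleware approach could be sketched as follows. The endpoint shape, header names (`apiKey`, `onBehalfOf`), and env var names are assumptions modeled on CometChat's v3 REST style; verify them against the CometChat docs for your app's region and API version before relying on them.

```typescript
interface AnythingLLMMessage {
  threadId: string;
  role: string;
  text: string;
}

// Pure helper: map an AnythingLLM message into a CometChat text-message payload.
function toCometChatPayload(msg: AnythingLLMMessage, receiverUid: string) {
  return {
    category: "message",
    type: "text",
    receiver: receiverUid,
    receiverType: "user",
    data: { text: `[${msg.role}] ${msg.text}` },
  };
}

// Forward one event to CometChat. COMETCHAT_* env vars and the URL format
// are assumptions -- check your dashboard for the real region/appId values.
async function forwardToCometChat(msg: AnythingLLMMessage, receiverUid: string) {
  const { COMETCHAT_APP_ID, COMETCHAT_REGION, COMETCHAT_API_KEY, COMETCHAT_BOT_UID } = process.env;
  const url = `https://${COMETCHAT_APP_ID}.api-${COMETCHAT_REGION}.cometchat.io/v3/messages`;
  const res = await fetch(url, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      apiKey: COMETCHAT_API_KEY ?? "",
      onBehalfOf: COMETCHAT_BOT_UID ?? "", // sender identity (assumption)
    },
    body: JSON.stringify(toCometChatPayload(msg, receiverUid)),
  });
  if (!res.ok) throw new Error(`CometChat rejected message: ${res.status}`);
}
```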

Tell me which “Comet” you’re using and whether you prefer a no‑code relay or a code change in AnythingLLM, and I can provide concrete setup steps and a ready‑to‑run skeleton.
Mar 27, 2026

