[bot] Mistral: Classifiers API (client.classifiers.moderate() / classify()) not instrumented #326

@braintrust-bot

Description

Summary

The Mistral Classifiers API (client.classifiers) is not instrumented. Calls to client.classifiers.moderate(), moderate_chat(), classify(), and classify_chat() (plus their async variants) produce zero Braintrust tracing. These are GA inference endpoints that accept text or chat messages, run a classifier model (e.g. mistral-moderation-latest), and return moderation categories/scores or classification labels.
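For reference, the uninstrumented call shapes look roughly like this. The snippet is a self-contained sketch: `StubClassifiers` stands in for the `client.classifiers` resource of the mistralai SDK, and the method names and endpoints follow the Mistral API reference; the argument shapes are assumptions, not copied from the SDK source.

```python
# Illustration of the four uninstrumented sync methods (the *_async
# variants are analogous). StubClassifiers is a stand-in for the real
# `client.classifiers` resource so this runs without an API key.

class StubClassifiers:
    def moderate(self, model, inputs):
        # Real SDK: POST /v1/moderations -- text moderation
        return {"model": model, "endpoint": "/v1/moderations"}

    def moderate_chat(self, model, inputs):
        # Real SDK: POST /v1/chat/moderations -- chat message moderation
        return {"model": model, "endpoint": "/v1/chat/moderations"}

    def classify(self, model, inputs):
        # Real SDK: POST /v1/classifications -- text classification
        return {"model": model, "endpoint": "/v1/classifications"}

    def classify_chat(self, model, inputs):
        # Real SDK: POST /v1/chat/classifications -- chat classification
        return {"model": model, "endpoint": "/v1/chat/classifications"}

classifiers = StubClassifiers()
resp = classifiers.moderate(model="mistral-moderation-latest", inputs=["some text"])
print(resp["endpoint"])  # -> /v1/moderations
```

Today, none of these calls produce a span.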

This repo already instruments the analogous OpenAI Moderations API (ModerationsPatcher in the OpenAI integration), making this an asymmetry across providers.

What is missing

| Mistral Resource | Method | Instrumented? |
|---|---|---|
| `client.chat` | `complete()`, `stream()` | Yes |
| `client.embeddings` | `create()` | Yes |
| `client.fim` | `complete()`, `stream()` | Yes |
| `client.agents` | `complete()`, `stream()` | Yes |
| `client.ocr` | `process()` | Yes |
| `client.audio.transcriptions` | `complete()`, `stream()` | Yes |
| `client.audio.speech` | `complete()` | Yes |
| `client.classifiers` | `moderate()`, `moderate_async()` | No |
| `client.classifiers` | `moderate_chat()`, `moderate_chat_async()` | No |
| `client.classifiers` | `classify()`, `classify_async()` | No |
| `client.classifiers` | `classify_chat()`, `classify_chat_async()` | No |

Endpoints

| Method | HTTP Endpoint | Description |
|---|---|---|
| `moderate()` | `POST /v1/moderations` | Text moderation |
| `moderate_chat()` | `POST /v1/chat/moderations` | Chat message moderation |
| `classify()` | `POST /v1/classifications` | Text classification |
| `classify_chat()` | `POST /v1/chat/classifications` | Chat message classification |

All four endpoints are listed in the main Mistral API reference without beta tags, indicating GA status.

Minimum instrumentation

At minimum, all 8 methods should create spans capturing:

  • Input: text or messages being classified/moderated
  • Output: classification/moderation results with category scores
  • Metadata: provider: "mistral", model name, any threshold or category configuration
  • Metrics: latency
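The span shape above can be sketched as a generic wrapper. This is a minimal illustration, not the repo's actual tracing API: `record_span` is a hypothetical stand-in for Braintrust's span creation, and the demo wraps a fake method rather than a real Mistral client.

```python
import time
from functools import wraps

# Hypothetical span sink; in the real integration this would be a
# Braintrust span, not a list of dicts.
captured_spans = []

def record_span(span):
    captured_spans.append(span)

def trace_classifier(fn, span_name):
    """Wrap a classifiers method so input/output/metadata/latency are captured."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = fn(*args, **kwargs)
        record_span({
            "name": span_name,
            "input": kwargs.get("inputs"),      # text or messages
            "output": result,                   # categories / scores
            "metadata": {
                "provider": "mistral",
                "model": kwargs.get("model"),
            },
            "metrics": {"latency": time.time() - start},
        })
        return result
    return wrapper

# Demo against a fake method standing in for client.classifiers.moderate:
def fake_moderate(*, model, inputs):
    return {"results": [{"category_scores": {"hate": 0.01}}]}

moderate = trace_classifier(fake_moderate, "mistral.classifiers.moderate")
moderate(model="mistral-moderation-latest", inputs=["hello"])
print(captured_spans[0]["metadata"]["provider"])  # -> mistral
```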

Precedent in this repo

The OpenAI integration instruments the equivalent client.moderations.create() endpoint via ModerationsPatcher in py/src/braintrust/integrations/openai/patchers.py. The Mistral classifiers follow the same pattern: accept input, run a model, return inference results.
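A ClassifiersPatcher following that precedent might look roughly like this. All names below are illustrative assumptions, not the repo's actual patcher API: the point is only that one patcher can cover all eight methods by replacing whichever of them exist on the resource.

```python
# Hypothetical sketch of a ClassifiersPatcher, mirroring the OpenAI
# ModerationsPatcher pattern: swap each classifiers method for a traced
# wrapper. Names are illustrative, not the repo's actual API.

CLASSIFIER_METHODS = [
    "moderate", "moderate_async",
    "moderate_chat", "moderate_chat_async",
    "classify", "classify_async",
    "classify_chat", "classify_chat_async",
]

def patch_classifiers(classifiers, wrap):
    """Replace each method present on the resource with wrap(fn, span_name)."""
    for name in CLASSIFIER_METHODS:
        original = getattr(classifiers, name, None)
        if callable(original):
            setattr(classifiers, name, wrap(original, f"mistral.classifiers.{name}"))

# Demo with a stub resource and a wrapper that just records the span name.
calls = []

class StubClassifiers:
    def moderate(self, model, inputs):
        return {"ok": True}

def wrap(fn, span_name):
    def wrapper(*args, **kwargs):
        calls.append(span_name)
        return fn(*args, **kwargs)
    return wrapper

resource = StubClassifiers()
patch_classifiers(resource, wrap)
resource.moderate(model="mistral-moderation-latest", inputs=["hi"])
print(calls)  # -> ['mistral.classifiers.moderate']
```

Methods missing from a given SDK version are simply skipped, so the patcher stays compatible across releases.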

Braintrust docs status

not_found — The Mistral integration page documents chat completions, FIM, embeddings, and agents. No mention of classifiers, moderation, or classification endpoints.

Upstream sources

Local files inspected

  • py/src/braintrust/integrations/mistral/patchers.py — defines patchers for Chat, Embeddings, Fim, Agents, Ocr, Speech, Transcriptions; zero references to classifiers, moderate, or classify
  • py/src/braintrust/integrations/mistral/tracing.py — wrapper functions for chat, embeddings, FIM, agents, OCR, audio; no classifier wrappers
  • py/src/braintrust/integrations/mistral/integration.py — integration class registers 7 composite patchers; no ClassifiersPatcher
  • py/src/braintrust/integrations/mistral/test_mistral.py — no classifier test cases
  • py/src/braintrust/integrations/openai/patchers.py — OpenAI ModerationsPatcher exists as precedent for this pattern
