Summary
The Mistral Classifiers API (`client.classifiers`) is not instrumented. Calls to `client.classifiers.moderate()`, `moderate_chat()`, `classify()`, and `classify_chat()` (plus their async variants) produce no Braintrust tracing. These are GA inference endpoints that accept text or chat messages, run a classifier model (e.g. `mistral-moderation-latest`), and return moderation categories/scores or classification labels.
This repo already instruments the analogous OpenAI Moderations API (`ModerationsPatcher` in the OpenAI integration), leaving an asymmetry across providers.
What is missing
| Mistral Resource | Method | Instrumented? |
| --- | --- | --- |
| `client.chat` | `complete()`, `stream()` | Yes |
| `client.embeddings` | `create()` | Yes |
| `client.fim` | `complete()`, `stream()` | Yes |
| `client.agents` | `complete()`, `stream()` | Yes |
| `client.ocr` | `process()` | Yes |
| `client.audio.transcriptions` | `complete()`, `stream()` | Yes |
| `client.audio.speech` | `complete()` | Yes |
| `client.classifiers` | `moderate()`, `moderate_async()` | No |
| `client.classifiers` | `moderate_chat()`, `moderate_chat_async()` | No |
| `client.classifiers` | `classify()`, `classify_async()` | No |
| `client.classifiers` | `classify_chat()`, `classify_chat_async()` | No |
Endpoints
| Method | HTTP Endpoint | Description |
| --- | --- | --- |
| `moderate()` | `POST /v1/moderations` | Text moderation |
| `moderate_chat()` | `POST /v1/chat/moderations` | Chat message moderation |
| `classify()` | `POST /v1/classifications` | Text classification |
| `classify_chat()` | `POST /v1/chat/classifications` | Chat message classification |
All four endpoints are listed in the main Mistral API reference without beta tags, indicating GA status.
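The moderation endpoints return per-category flags and confidence scores. As a rough illustration of what consuming such a payload could look like (the field names below are assumptions for demonstration, not the verified SDK schema), a threshold check over the scores might be:

```python
# Illustrative payload shaped like a moderation response: per-category
# boolean flags plus confidence scores. Field names are assumptions,
# not the verified Mistral SDK schema.
sample = {
    "model": "mistral-moderation-latest",
    "results": [
        {
            "categories": {"hate_and_discrimination": False, "violence_and_threats": True},
            "category_scores": {"hate_and_discrimination": 0.02, "violence_and_threats": 0.91},
        }
    ],
}

def flagged_categories(result, threshold=0.5):
    """Return the category names whose score meets or exceeds the threshold."""
    return [
        name
        for name, score in result["category_scores"].items()
        if score >= threshold
    ]

print(flagged_categories(sample["results"][0]))  # ['violence_and_threats']
```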
Minimum instrumentation
At minimum, all 8 methods should create spans capturing:
- Input: the text or messages being classified/moderated
- Output: classification/moderation results with category scores
- Metadata: `provider: "mistral"`, model name, any threshold or category configuration
- Metrics: latency
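The capture list above can be sketched as a minimal wrapper. Everything here is illustrative, stdlib only: `Span` and `FakeClassifiers` are hypothetical stand-ins for a Braintrust span and `client.classifiers`, not the real SDK classes.

```python
import time
from functools import wraps

class Span:
    """Hypothetical stand-in for a Braintrust span logger."""
    def __init__(self):
        self.logged = {}

    def log(self, **fields):
        self.logged.update(fields)

class FakeClassifiers:
    """Stub standing in for `client.classifiers`."""
    def moderate(self, model, inputs):
        # Pretend inference result with category scores.
        return {"results": [{"category_scores": {"violence": 0.01}}]}

def trace_classifier(method, span):
    """Wrap a classifier method so each call logs input, output,
    metadata, and latency onto the given span."""
    @wraps(method)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = method(*args, **kwargs)
        span.log(
            input=kwargs.get("inputs"),
            output=result,
            metadata={"provider": "mistral", "model": kwargs.get("model")},
            metrics={"duration": time.time() - start},
        )
        return result
    return wrapper

classifiers = FakeClassifiers()
span = Span()
classifiers.moderate = trace_classifier(classifiers.moderate, span)
out = classifiers.moderate(model="mistral-moderation-latest", inputs=["hello"])
print(span.logged["metadata"]["provider"])  # prints "mistral"
```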
Precedent in this repo
The OpenAI integration instruments the equivalent `client.moderations.create()` endpoint via `ModerationsPatcher` in `py/src/braintrust/integrations/openai/patchers.py`. The Mistral classifiers follow the same pattern: accept input, run a model, return inference results.
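Mirroring that pattern, a hypothetical classifiers patcher would need to wrap four sync methods and their four async variants. A stdlib-only sketch of the registration loop, with a stub in place of the real `client.classifiers` and a plain list standing in for span emission:

```python
import asyncio
from functools import wraps

class FakeClassifiers:
    """Stub with the same method names as `client.classifiers`;
    the bodies are fakes for demonstration."""
    def moderate(self, **kw): return {"endpoint": "/v1/moderations"}
    def moderate_chat(self, **kw): return {"endpoint": "/v1/chat/moderations"}
    def classify(self, **kw): return {"endpoint": "/v1/classifications"}
    def classify_chat(self, **kw): return {"endpoint": "/v1/chat/classifications"}
    async def moderate_async(self, **kw): return {"endpoint": "/v1/moderations"}
    async def moderate_chat_async(self, **kw): return {"endpoint": "/v1/chat/moderations"}
    async def classify_async(self, **kw): return {"endpoint": "/v1/classifications"}
    async def classify_chat_async(self, **kw): return {"endpoint": "/v1/chat/classifications"}

CALLS = []  # stand-in for emitting spans

def patch_classifiers(classifiers):
    """Wrap all four sync methods and their async variants, the way a
    hypothetical ClassifiersPatcher could register its wrappers."""
    for name in ("moderate", "moderate_chat", "classify", "classify_chat"):
        for method_name in (name, f"{name}_async"):
            original = getattr(classifiers, method_name)
            if asyncio.iscoroutinefunction(original):
                @wraps(original)
                async def awrapper(*a, _orig=original, _n=method_name, **kw):
                    result = await _orig(*a, **kw)
                    CALLS.append(_n)  # record the traced call
                    return result
                setattr(classifiers, method_name, awrapper)
            else:
                @wraps(original)
                def wrapper(*a, _orig=original, _n=method_name, **kw):
                    result = _orig(*a, **kw)
                    CALLS.append(_n)  # record the traced call
                    return result
                setattr(classifiers, method_name, wrapper)

c = FakeClassifiers()
patch_classifiers(c)
c.moderate(model="mistral-moderation-latest", inputs=["hi"])
asyncio.run(c.classify_async(model="mistral-moderation-latest", inputs=["hi"]))
print(CALLS)  # ['moderate', 'classify_async']
```

The default-argument capture (`_orig=original`, `_n=method_name`) avoids the classic late-binding bug when defining closures in a loop.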
Braintrust docs status
not_found — The Mistral integration page documents chat completions, FIM, embeddings, and agents. No mention of classifiers, moderation, or classification endpoints.
Upstream sources
- `mistralai` Python SDK on PyPI (v2.4.0): `client.classifiers.moderate()`, `.moderate_chat()`, `.classify()`, `.classify_chat()` plus `*_async()` variants
Local files inspected
- `py/src/braintrust/integrations/mistral/patchers.py` — defines patchers for Chat, Embeddings, Fim, Agents, Ocr, Speech, Transcriptions; zero references to classifiers, moderate, or classify
- `py/src/braintrust/integrations/mistral/tracing.py` — wrapper functions for chat, embeddings, FIM, agents, OCR, audio; no classifier wrappers
- `py/src/braintrust/integrations/mistral/integration.py` — integration class registers 7 composite patchers; no ClassifiersPatcher
- `py/src/braintrust/integrations/mistral/test_mistral.py` — no classifier test cases
- `py/src/braintrust/integrations/openai/patchers.py` — OpenAI `ModerationsPatcher` exists as precedent for this pattern
Relationship to existing issues
- Mistral Batch Jobs API (`client.batch.jobs.create()`) not instrumented #272 (separate surface)
- Mistral Beta Conversations API (`client.beta.conversations`) not instrumented #273 (separate surface)