Summary
The Cohere Audio Transcription API (`client.audio.transcriptions.create()`) is not instrumented. Calls to this endpoint produce zero Braintrust tracing. This is a speech-to-text inference endpoint that runs the `cohere-transcribe-03-2026` ASR model (2B-parameter Conformer architecture) on audio files and returns transcribed text.
The Braintrust Cohere integration currently instruments `chat`, `chat_stream`, `embed`, and `rerank` (V1 and V2), but has no handling for the audio transcription surface introduced in Cohere SDK v6.1.0.
What is missing
| Cohere Resource | Method | Instrumented? |
| --- | --- | --- |
| `client.chat()` / `client.v2.chat()` | sync + async | Yes |
| `client.chat_stream()` / `client.v2.chat_stream()` | sync + async | Yes |
| `client.embed()` / `client.v2.embed()` | sync + async | Yes |
| `client.rerank()` / `client.v2.rerank()` | sync + async | Yes |
| `client.audio.transcriptions.create()` | sync | No |
| `client.audio.transcriptions.create()` (async) | async | No |
API details
- Endpoint: `POST /v2/audio/transcriptions`
- Model: `cohere-transcribe-03-2026`
- Supported formats: FLAC, MP3, MPEG, MPGA, OGG, WAV (25MB max)
- Languages: 14 languages (English, German, French, Italian, Spanish, Portuguese, Greek, Dutch, Polish, Arabic, Vietnamese, Chinese, Japanese, Korean)
- Parameters: `model`, `language` (ISO-639-1), `file`, `temperature`
- Return type: `AudioTranscriptionsCreateResponse`
- SDK: both `TranscriptionsClient.create()` and `AsyncTranscriptionsClient.create()` exist in the `cohere` package at `src/cohere/audio/transcriptions/client.py`
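To make the documented request constraints concrete, here is a small sketch that validates request parameters against the limits listed above (supported formats, 25MB cap, two-letter ISO-639-1 language codes). The helper name and its return shape are illustrative, not part of the Cohere SDK:

```python
# Constants taken from the documented API constraints above.
SUPPORTED_FORMATS = {"flac", "mp3", "mpeg", "mpga", "ogg", "wav"}
MAX_FILE_BYTES = 25 * 1024 * 1024  # documented 25MB cap


def validate_transcription_request(filename: str, size_bytes: int, language: str) -> list[str]:
    """Return a list of violations of the documented request constraints."""
    errors = []
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext not in SUPPORTED_FORMATS:
        errors.append(f"unsupported format: {ext or '(none)'}")
    if size_bytes > MAX_FILE_BYTES:
        errors.append(f"file too large: {size_bytes} bytes > {MAX_FILE_BYTES}")
    if len(language) != 2 or not language.isalpha():
        errors.append(f"language must be a two-letter ISO-639-1 code: {language!r}")
    return errors
```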
Minimum instrumentation
At minimum, both the sync and async `create()` methods should create spans capturing:
- Input: file metadata (name, size, format), language, model
- Output: transcribed text
- Metadata: `provider: "cohere"`, model, language, temperature
- Metrics: latency
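The span contents above can be sketched as a wrapper around a `transcriptions.create`-style callable. The `Span` dataclass and `traced_create` function here are hypothetical stand-ins, not Braintrust APIs; only the captured fields follow the list above:

```python
import time
from dataclasses import dataclass, field


@dataclass
class Span:
    """Stand-in for a tracing span; real integrations use Braintrust's span API."""
    input: dict = field(default_factory=dict)
    output: dict = field(default_factory=dict)
    metadata: dict = field(default_factory=dict)
    metrics: dict = field(default_factory=dict)


def traced_create(create_fn, *, file_name, file_size, model, language, **kwargs):
    """Call a transcriptions.create-style function and record a span around it."""
    span = Span()
    span.input = {
        "file": {"name": file_name, "size": file_size},
        "language": language,
        "model": model,
    }
    span.metadata = {
        "provider": "cohere",
        "model": model,
        "language": language,
        "temperature": kwargs.get("temperature"),
    }
    start = time.time()
    response = create_fn(model=model, language=language, **kwargs)
    span.metrics["latency"] = time.time() - start
    # The response type is documented as AudioTranscriptionsCreateResponse;
    # a `text` attribute is assumed here for illustration.
    span.output = {"text": getattr(response, "text", None)}
    return response, span
```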
Precedent in this repo
Several other integrations instrument equivalent audio transcription endpoints:
- OpenAI: `AudioTranscriptionsPatcher` in `py/src/braintrust/integrations/openai/patchers.py`
- Mistral: `TranscriptionsPatcher` in `py/src/braintrust/integrations/mistral/patchers.py`
- LiteLLM: `LiteLLMTranscriptionPatcher` in `py/src/braintrust/integrations/litellm/patchers.py`
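The precedents above share one pattern: swap the client's `create` method for a traced wrapper at setup time. A minimal sketch of that pattern, assuming a hypothetical patcher class (this is not the actual Braintrust patcher base class or its internals):

```python
import functools


class TranscriptionsPatcher:
    """Illustrative patcher: replaces `create` on a client class with a wrapper."""

    def __init__(self, wrap):
        # `wrap` is a callable taking the original function and
        # returning the traced replacement.
        self.wrap = wrap

    def patch(self, client_cls):
        original = client_cls.create
        # functools.wraps preserves the original name/docstring on the wrapper.
        client_cls.create = functools.wraps(original)(self.wrap(original))
```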
Braintrust docs status
not_found — Cohere is not listed on the Braintrust integrations page. There is no dedicated Cohere integration docs page.
Upstream sources
- Cohere Python SDK (`src/cohere/audio/transcriptions/client.py`)
Local files inspected
- `py/src/braintrust/integrations/cohere/patchers.py` — defines patchers for Chat, ChatStream, Embed, Rerank (V1 and V2 variants); zero references to audio, transcription, or transcribe
- `py/src/braintrust/integrations/cohere/tracing.py` — wrapper functions for chat, embed, rerank; no audio/transcription wrappers
- `py/src/braintrust/integrations/cohere/integration.py` — the integration class registers chat/embed/rerank patchers; no transcription patcher
- `py/src/braintrust/integrations/cohere/test_cohere.py` — no audio/transcription test cases
- `py/pyproject.toml` — Cohere version matrix: latest pinned to `cohere==6.1.0`, older pin at `5.0.0`