Add per-token logprobs content schema to LogprobsPart #245
TechnoFinch wants to merge 1 commit into main
Conversation
The API returns per-token log probability information in a `content` array with nested `top_logprobs`, but the OpenAPI spec did not describe this structure. This caused the generated Python SDK to silently drop the `content` and per-token `top_logprobs` data during deserialization.

Adds `LogprobsToken` and `LogprobsTokenTopLogprob` schema types, and a `content` field to `LogprobsPart`. All fields are optional for backwards compatibility.

Fixes https://linear.app/together-ai/issue/ENG-86462/fix-logprob-misalignment-with-open-sdk
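A rough sketch of what schema additions like these could look like in an OpenAPI spec. The property names and nesting below are assumptions inferred from the description (the authoritative version is the PR diff itself):

```yaml
# Hypothetical sketch only -- field names are assumed, not taken from the PR diff.
components:
  schemas:
    LogprobsTokenTopLogprob:
      type: object
      properties:
        token:
          type: string
        logprob:
          type: number

    LogprobsToken:
      type: object
      properties:
        token:
          type: string
        logprob:
          type: number
        top_logprobs:
          type: array
          items:
            $ref: '#/components/schemas/LogprobsTokenTopLogprob'

    LogprobsPart:
      type: object
      properties:
        # Legacy flat format (unchanged)
        tokens:
          type: array
          items:
            type: string
        token_logprobs:
          type: array
          items:
            type: number
        token_ids:
          type: array
          items:
            type: integer
        # New: per-token structure the API already returns
        content:
          type: array
          items:
            $ref: '#/components/schemas/LogprobsToken'
```

Keeping every field optional, as the description notes, lets existing clients that only know the flat format continue to validate responses unchanged.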
Summary
Adds `LogprobsToken` and `LogprobsTokenTopLogprob` schema types to describe the per-token logprobs structure the API already returns, and adds a `content` field to `LogprobsPart` referencing the new types.

Problem

The chat completions API returns logprobs in a `content` array with nested `top_logprobs` per token, but the OpenAPI spec only describes the legacy flat format (`tokens`, `token_logprobs`, `token_ids`). This causes the Stainless-generated Python SDK to silently drop `content` and per-token `top_logprobs` during deserialization. This restores functionality that was lost in the move from 1.x.x to 2.x.x.

Verification
Confirmed via raw curl that the API returns the `content[].top_logprobs[]` structure. The spec change matches the actual response format exactly.

Fixes https://linear.app/together-ai/issue/ENG-86462/fix-logprob-misalignment-with-open-sdk
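For orientation, the `content[].top_logprobs[]` nesting described above presumably looks something like the following response fragment (token strings and logprob values here are made up for illustration, not taken from the curl output):

```yaml
# Illustrative response fragment; all values are invented.
logprobs:
  content:
    - token: "Hello"
      logprob: -0.12
      top_logprobs:
        - token: "Hello"
          logprob: -0.12
        - token: "Hi"
          logprob: -2.31
    - token: "!"
      logprob: -0.05
      top_logprobs:
        - token: "!"
          logprob: -0.05
```

Each entry in `content` carries its own candidate list in `top_logprobs`, which is exactly the data the generated SDK was dropping when the spec only described the flat arrays.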