feat: add spec-compliant streaming error event for Responses API #5224
gyliu513 wants to merge 1 commit into llamastack:main
Conversation
can you please add a description to this PR
@cdoern Done. This is a draft PR, so I did not add a description when submitting it. It is still a draft; I may need to do more testing, and the description may be updated later.
Summary

Replace the ad-hoc `OpenAIErrorResponse` dict-based error event in SSE streaming with a spec-compliant `OpenAIResponseObjectStreamError` Pydantic model.

**Problem:** When errors occur mid-stream (e.g., a `ValueError` from a provider), the SSE generator previously emitted an `OpenAIErrorResponse`, which is the OpenAI HTTP error response format (`{"error": {"message": "...", "code": "..."}}`). This is the wrong format for streaming: OpenAI's streaming API uses a distinct `ResponseErrorEvent`, with `type: "error"`, as a top-level SSE event (openai-python SDK reference).

**Fix:** Introduce `OpenAIResponseObjectStreamError` matching OpenAI's `ResponseErrorEvent` schema:

`{"type": "error", "code": "400", "message": "not found", "sequence_number": 5}`

This allows OpenAI-compatible client SDKs to correctly deserialize streaming errors.
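For illustration, a minimal sketch of what such a Pydantic model might look like, based on the schema shown above. The exact field names and optionality in the actual PR may differ; the SSE framing at the end is a hypothetical usage example, not code from this change:

```python
from typing import Literal, Optional

from pydantic import BaseModel


class OpenAIResponseObjectStreamError(BaseModel):
    """Streaming error event mirroring OpenAI's ResponseErrorEvent schema."""

    # Discriminator for OpenAI-compatible clients deserializing stream events.
    type: Literal["error"] = "error"
    code: Optional[str] = None
    message: str
    sequence_number: int


# Build the example event from the summary above.
event = OpenAIResponseObjectStreamError(
    code="400", message="not found", sequence_number=5
)

# How such an event could be framed on the wire as a server-sent event.
sse_frame = f"data: {event.model_dump_json()}\n\n"
print(sse_frame)
```

Because `type` is a `Literal` with a default, every serialized event carries the top-level `"type": "error"` marker that OpenAI streaming clients key on, rather than the nested `{"error": {...}}` shape of the HTTP error response.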