fix(core): Intercept .withResponse() to preserve OpenAI stream instru… #19122
RulaKhaled wants to merge 1 commit into develop from
Conversation
Codecov Results 📊 (generated by Codecov Action)
size-limit report 📦
node-overhead report 🧳
Note: This is a synthetic benchmark with a minimal express app and does not necessarily reflect the real-world performance impact in an application.
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.
const result = client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Test withResponse' }],
});
Test doesn't cover streaming .withResponse() regression
Low Severity
The PR description states the fix is specifically for streaming calls where .withResponse() returned the original uninstrumented stream. However, the test scenario only tests non-streaming calls (no stream: true parameter). According to the review rules for fix PRs, tests should verify the specific regression being fixed. A streaming test case with .withResponse() would provide confidence that the core bug is actually fixed. Flagging this because the review rules file specifies that fix PRs should include tests for the specific regression.
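A streaming variant of the scenario the bot asks for could look roughly like this. The sketch below is written against a stubbed client so it is self-contained; in the real suite it would use the instrumented OpenAI client from the integration-test harness, and the function name `streamingWithResponseScenario` is illustrative, not part of the PR.

```javascript
// Sketch of the missing streaming regression test: call .withResponse()
// on a streaming request and consume the stream it hands back. The
// regression was that this stream used to be the original,
// uninstrumented one.
async function streamingWithResponseScenario(client) {
  const result = client.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Test streaming withResponse' }],
    stream: true,
  });

  // For streaming calls, `data` is the stream itself.
  const { data: stream } = await result.withResponse();

  const chunks = [];
  for await (const chunk of stream) {
    chunks.push(chunk);
  }
  return chunks;
}
```

Asserting on the consumed chunks (and, in the real harness, on the recorded spans) would cover the exact regression the PR description names.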
// Call .withResponse() and verify structure
const { data } = await result.withResponse();

if (!data) {
m: I think we should check that we actually preserve the correct data here (as in check what's returned in the withResponse result)
  throw new Error('.withResponse() did not return data');
}

// Test 2: Verify regular await still works
l: Is "Test 2" not covered by other scenarios already?
// Special handling for .withResponse() to preserve instrumentation
// .withResponse() returns { data: T, response: Response, request_id: string }
if (prop === 'withResponse' && typeof value === 'function') {
Q: we only apply special handling here for withResponse, do we not have this issue with other custom OpenAI methods (e.g. asResponse)?
const useInstrumentedPromise = prop in Promise.prototype || prop === Symbol.toStringTag;
const source = useInstrumentedPromise ? instrumentedPromise : target;
m: can we add a comment here to explain what's happening here? not super obvious I think at first glance
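One way to phrase the comment the reviewer asks for is to isolate the decision. This is a sketch only: `pickSource` is a hypothetical helper introduced for illustration, not a function in the PR; the names mirror the diff above.

```javascript
// Hypothetical helper illustrating the source-selection logic above.
function pickSource(prop, instrumentedPromise, target) {
  // Promise built-ins (.then/.catch/.finally) and Symbol.toStringTag must
  // resolve on the instrumented promise, so awaiting or chaining the
  // wrapper keeps Sentry's instrumentation. Any other property (SDK
  // extras such as .withResponse) falls through to the original target.
  const useInstrumentedPromise = prop in Promise.prototype || prop === Symbol.toStringTag;
  return useInstrumentedPromise ? instrumentedPromise : target;
}
```

A comment along those lines at the selection site would likely address the readability concern.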


When Sentry instruments the OpenAI client, we return an instrumented promise. The OpenAI SDK, however, returns a custom promise-like object with extra methods such as .withResponse(), so returning a plain Promise would break those methods.
We previously added wrapPromiseWithMethods() to proxy the SDK's extra methods back onto the instrumented promise. For streaming calls, though, .withResponse() still resolved with the original, uninstrumented stream; this PR fixes that by intercepting .withResponse() in the proxy.
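The shape of that fix can be sketched as follows. This is a minimal illustration, not the actual SDK internals: `instrumentStream` stands in for Sentry's real stream instrumentation, and the bodies are simplified to show only the interception idea.

```javascript
// Stand-in for real stream instrumentation (span recording, etc.).
function instrumentStream(stream) {
  return { ...stream, instrumented: true };
}

// Proxy the SDK's promise-like object so Promise built-ins resolve on the
// instrumented promise while SDK extras still work, and intercept
// .withResponse() so its `data` is the instrumented stream.
function wrapPromiseWithMethods(originalPromise, instrumentedPromise) {
  return new Proxy(instrumentedPromise, {
    get(target, prop) {
      const value = originalPromise[prop];

      // Special handling for .withResponse(): re-wrap the resolved data
      // instead of handing back the original, uninstrumented stream.
      if (prop === 'withResponse' && typeof value === 'function') {
        return async (...args) => {
          const result = await value.apply(originalPromise, args);
          return { ...result, data: instrumentStream(result.data) };
        };
      }

      // Promise built-ins come from the instrumented promise; everything
      // else falls back to the original promise-like object.
      const useInstrumentedPromise = prop in Promise.prototype || prop === Symbol.toStringTag;
      const source = useInstrumentedPromise ? target : originalPromise;
      const picked = source[prop];
      return typeof picked === 'function' ? picked.bind(source) : picked;
    },
  });
}
```

With this in place, both `await wrapped` and `(await wrapped.withResponse()).data` yield instrumented values.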
Closes #19073