ref(core): Drop provider specific attributes not part of sentry conventions #20011
nicohrubec wants to merge 3 commits into develop
Conversation
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix prepared a fix for the issue found in the latest run.
- ✅ Fixed: Unused responseTimestamp still tracked in streaming state. Removed the unused responseTimestamp field from StreamingState, its initialization, and both dead assignment sites in stream processing.
```diff
diff --git a/packages/core/src/tracing/openai/streaming.ts b/packages/core/src/tracing/openai/streaming.ts
--- a/packages/core/src/tracing/openai/streaming.ts
+++ b/packages/core/src/tracing/openai/streaming.ts
@@ -36,8 +36,6 @@
   responseId: string;
   /** The model name. */
   responseModel: string;
-  /** The timestamp of the response. */
-  responseTimestamp: number;
   /** Number of prompt/input tokens used. */
   promptTokens: number | undefined;
   /** Number of completion/output tokens used. */
@@ -99,7 +97,6 @@
 function processChatCompletionChunk(chunk: ChatCompletionChunk, state: StreamingState, recordOutputs: boolean): void {
   state.responseId = chunk.id ?? state.responseId;
   state.responseModel = chunk.model ?? state.responseModel;
-  state.responseTimestamp = chunk.created ?? state.responseTimestamp;

   if (chunk.usage) {
     // For stream responses, the input tokens remain constant across all events in the stream.
@@ -183,7 +180,6 @@
   const { response } = event as { response: OpenAIResponseObject };
   state.responseId = response.id ?? state.responseId;
   state.responseModel = response.model ?? state.responseModel;
-  state.responseTimestamp = response.created_at ?? state.responseTimestamp;

   if (response.usage) {
     // For stream responses, the input tokens remain constant across all events in the stream.
@@ -227,7 +223,6 @@
   finishReasons: [],
   responseId: '',
   responseModel: '',
-  responseTimestamp: 0,
   promptTokens: undefined,
   completionTokens: undefined,
   totalTokens: undefined,
```
JPeer264 left a comment
As you already mentioned, I think this should rather be dropped in a major release.


In the openai and anthropic integrations we send multiple provider specific attributes. None of these are part of our sentry conventions and should therefore not be emitted. These fall into two categories:

- Attributes that duplicate an existing gen_ai namespace attribute: openai.response.id, openai.response.model, openai.usage.prompt_tokens, openai.usage.completion_tokens
- Attributes without a gen_ai equivalent: openai.response.timestamp, anthropic.response.timestamp. Since these have no gen_ai equivalent, we would no longer send this data at all, but since they are not in the semantic conventions we probably shouldn't send them either.

According to Hex, none of these attributes are used in any stored queries, dashboards or alerts. I am aware that users could still use these attributes in hooks, so it could technically be interpreted as a breaking change. Let me know if you think this is fine to remove as is, or if we should handle this a bit more graciously (e.g. mention in the changelog, or wait for the next major).
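The cleanup described above could be sketched as a filter over a span's attribute map. This is an illustrative sketch, not SDK code: the helper name `dropProviderSpecificAttributes` and the two sets are hypothetical, but the attribute names come from the PR description.

```typescript
// Attributes that duplicate an existing gen_ai.* convention attribute,
// so the same data is still emitted under the gen_ai namespace.
const DUPLICATED_BY_GEN_AI = new Set([
  'openai.response.id',
  'openai.response.model',
  'openai.usage.prompt_tokens',
  'openai.usage.completion_tokens',
]);

// Attributes with no gen_ai.* equivalent; dropping these removes the data entirely.
const NO_GEN_AI_EQUIVALENT = new Set([
  'openai.response.timestamp',
  'anthropic.response.timestamp',
]);

// Hypothetical helper: return a copy of the attributes with the
// provider-specific keys removed, leaving gen_ai.* attributes intact.
function dropProviderSpecificAttributes(
  attributes: Record<string, unknown>,
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(attributes).filter(
      ([key]) => !DUPLICATED_BY_GEN_AI.has(key) && !NO_GEN_AI_EQUIVALENT.has(key),
    ),
  );
}
```

Since the gen_ai.* counterparts are still sent, only the timestamp attributes represent data that disappears entirely.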
Closes #20015 (added automatically)