fix(web): resolve chat message loading race conditions and infinite loops #30695
Conversation
Summary of Changes

This pull request addresses critical issues within the chat message loading mechanism in the app log panel, specifically targeting race conditions, log disorder, and infinite loops. By introducing robust request management, state synchronization, and retry logic, the changes significantly improve the stability and reliability of message loading and pagination, leading to a smoother user experience.
Code Review
This pull request refactors the chat message loading and pagination logic within the DetailPanel component. Key changes include implementing request cancellation using AbortController and requestIdRef to prevent race conditions and ignore stale responses, and introducing oldestAnswerIdRef for a more robust pagination strategy to avoid stale closure issues. The loadMoreMessages function was enhanced with throttling using lastLoadTimeRef and a retry mechanism (retryCountRef, MAX_RETRY_COUNT) to improve scroll-based loading reliability, especially when no new unique messages are immediately available. Additionally, the scroll event listener was simplified, and state updates for allChatItems were converted to functional updates. The review comment points out a React anti-pattern where other state setters (setChatItemTree, setThreadChatItems) and ref updates (oldestAnswerIdRef) are performed inside the setAllChatItems functional updater. The reviewer suggests moving this logic into a separate useEffect hook that depends on allChatItems and hasMore to ensure the purity of the updater function and proper state management.
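The split the reviewer suggests can be sketched in plain TypeScript: the functional updater stays pure (it only merges and deduplicates), while derived views are computed in a separate step — in the component this would be a `useEffect` depending on `allChatItems` and `hasMore`. The `ChatItem` shape and the `mergeOlderMessages`/`deriveThreadItems` helpers below are illustrative assumptions, not the component's real types.

```typescript
type ChatItem = { id: string; parentId?: string }

// Pure updater body: merge an older page into existing items, dropping
// duplicates. Safe to pass to setAllChatItems((prev) => ...) because it has
// no side effects and touches no other state or refs.
function mergeOlderMessages(prev: ChatItem[], older: ChatItem[]): ChatItem[] {
  const seen = new Set(prev.map((item) => item.id))
  const unique = older.filter((item) => !seen.has(item.id))
  return [...unique, ...prev] // older messages are prepended
}

// Derived view, computed outside the updater — in React this derivation
// (tree/thread building, ref updates) belongs in a useEffect on allChatItems.
function deriveThreadItems(all: ChatItem[]): ChatItem[] {
  return all.filter((item) => item.parentId === undefined)
}
```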
9de9b5e to b1918cd

…oops

Fixes langgenius#30259

- Add AbortController to cancel in-flight requests on new scroll
- Add requestIdRef to ignore stale responses
- Add oldestAnswerIdRef to store pagination anchor in ref (avoids stale closures)
- Add lastLoadTimeRef for throttling that persists across effect re-runs
- Add retryCountRef with MAX_RETRY_COUNT=3 to prevent infinite retry loops
- Remove duplicate scroll listener that was firing both fetchData and loadMoreMessages
- Use functional state updates in setAllChatItems (pure updater functions)
- Add useEffect to derive chatItemTree and threadChatItems from allChatItems
- Remove unused SCROLL_THRESHOLD_PX constant
- Remove allChatItems from useCallback dependency arrays
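The cancellation and stale-response guards in this commit can be modeled framework-free. This is a sketch under assumptions — the `Fetcher` signature and `createLoader` helper are hypothetical; in the component, the controller and request id live in refs inside `DetailPanel`:

```typescript
type Fetcher = (signal: AbortSignal) => Promise<string[]>

function createLoader(fetchPage: Fetcher, onData: (items: string[]) => void) {
  let controller: AbortController | null = null
  let requestId = 0

  return async function load(): Promise<void> {
    controller?.abort() // cancel any in-flight request
    controller = new AbortController()
    const id = ++requestId // tag this request so late responses can be detected
    try {
      const items = await fetchPage(controller.signal)
      if (id !== requestId) return // stale response: a newer load has started
      onData(items)
    } catch (e) {
      // An aborted request rejecting with AbortError is expected; rethrow others.
      if ((e as Error).name !== 'AbortError') throw e
    }
  }
}
```

Even if the underlying fetch ignores the abort signal, the request-id comparison alone guarantees that a response arriving after a newer load started is dropped.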
b1918cd to 0e5b589

Please add a test for it as well :) The rest LGTM
Tests verify the core algorithms used to prevent race conditions:

- Request deduplication (filtering duplicate message IDs)
- Retry counter logic (MAX_RETRY_COUNT=3 limit)
- Throttling logic (SCROLL_DEBOUNCE_MS=200ms)
- AbortController cancellation of in-flight requests
- Stale response detection via requestId comparison
- Pagination anchor management (oldestAnswerIdRef)
- Functional state update pattern for avoiding stale closures

Related to langgenius#30259
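The throttle and retry caps under test reduce to two pure predicates. A minimal sketch using the constants named above (the function names are illustrative, not the PR's actual test helpers):

```typescript
const SCROLL_DEBOUNCE_MS = 200
const MAX_RETRY_COUNT = 3

// Throttle: a new load is allowed only if enough time has passed since the
// last one (lastLoadTime would be read from lastLoadTimeRef in the component).
function canLoad(now: number, lastLoadTime: number): boolean {
  return now - lastLoadTime >= SCROLL_DEBOUNCE_MS
}

// Retry cap: when a page returns no new unique messages, retry at most
// MAX_RETRY_COUNT times before giving up — this is what breaks the infinite loop.
function shouldRetry(retryCount: number, gotNewMessages: boolean): boolean {
  if (gotNewMessages) return false // success resets the need to retry
  return retryCount < MAX_RETRY_COUNT
}
```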
Done!
Please fix the lint errors.
f35c150 to 1ad6cae

@crazywoola Could you please merge the PR?
We will review this later :)
Summary
Fix chat message loading race conditions, log disorder, and infinite loops in the app log panel.
Fixes #30259
Changes:
- `AbortController` to cancel in-flight requests on new scroll
- `requestIdRef` to ignore stale responses
- `oldestAnswerIdRef` to store pagination anchor in ref (avoids stale closures)
- `lastLoadTimeRef` for throttling that persists across effect re-runs
- `retryCountRef` with `MAX_RETRY_COUNT=3` to prevent infinite retry loops
- Remove duplicate scroll listener that was firing both `fetchData` and `loadMoreMessages`
- Functional state updates in `setAllChatItems` to avoid stale state issues

Checklist
- `make lint` and `make type-check` (backend) and `cd web && npx lint-staged` (frontend) to appease the lint gods

Contribution by Gittensor, see my contribution statistics at https://gittensor.io/miners/details?githubId=94194147
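The stale-closure problem that `oldestAnswerIdRef` works around can be shown without React: a closure that captures a plain value keeps the snapshot from its creation time, while one that reads through a mutable ref object always sees the latest anchor. The `Ref` type and both factory functions below are illustrative stand-ins for the real `useRef`/`useCallback` pair:

```typescript
type Ref<T> = { current: T }

// Captures the value: like a useCallback closing over state directly.
// It keeps returning the snapshot from when it was created.
function makeLoadMoreCaptured(anchor: string | null) {
  return () => anchor
}

// Reads through a ref: like a callback reading oldestAnswerIdRef.current.
// It always sees the latest anchor, even if the callback is never recreated.
function makeLoadMoreViaRef(anchorRef: Ref<string | null>) {
  return () => anchorRef.current
}
```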