fix: correct docx hyperlink extraction #30360
Conversation
Summary of Changes

Hello @yangzq50, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request addresses an issue where hyperlinks in DOCX documents were not being correctly extracted. It introduces enhanced parsing logic to identify and resolve embedded hyperlinks, converting them into a standard Markdown link format. This ensures that all relevant link information is preserved and correctly represented when processing DOCX files.
Code Review
This pull request refactors the docx hyperlink extraction to be more robust by parsing the paragraph's underlying XML structure. The changes are a good improvement. I've identified a block of dead code from the previous implementation that should be removed for maintainability. I also suggested a small improvement to handle cases where a hyperlink has no text, to avoid creating empty links in the output Markdown. Additionally, the PR description mentions that tests have been added, but I don't see any tests for the new hyperlink extraction logic in the provided changes. It would be beneficial to add a unit test to cover this new functionality and prevent future regressions.
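The empty-link concern raised above can be illustrated with a small guard. This is a hypothetical helper, not code from the PR: the point is that a `w:hyperlink` element with no visible text would otherwise render as `[](url)` noise in the output Markdown, and an unresolvable relationship should fall back to plain text.

```python
def format_link(text: str, url: str) -> str:
    """Hypothetical helper: render a Markdown link, degrading
    gracefully when the hyperlink carries no visible text or
    its relationship ID cannot be resolved to a URL."""
    if not text.strip():
        return ""  # skip entirely: "[](url)" is noise in the output
    if not url:
        return text  # unresolved relationship: keep the plain text
    return f"[{text}]({url})"
```

A guard like this keeps the extracted Markdown clean regardless of how the document's hyperlink runs are structured.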
Pull request overview
This PR improves hyperlink extraction from DOCX files by iterating through paragraph XML elements to detect w:hyperlink nodes and resolve their external relationship IDs into Markdown-formatted links.
Key Changes:
- Added XML-based hyperlink extraction that properly resolves relationship IDs to URLs
- Refactored run processing into a helper function for better code organization
- Updated imports to include `qn` and `Run` from the `docx` library for XML namespace handling
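The XML-based approach described above can be sketched with the standard library alone. This is a minimal illustration, not the PR's actual implementation: the namespace constants and the `rels` mapping (relationship ID to URL) are assumptions made for the sake of a self-contained example.

```python
import xml.etree.ElementTree as ET

# WordprocessingML and relationships namespaces used in document.xml
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"
R = "{http://schemas.openxmlformats.org/officeDocument/2006/relationships}"

def paragraph_to_markdown(paragraph_xml: str, rels: dict[str, str]) -> str:
    """Convert one w:p element's XML into Markdown text, resolving
    each w:hyperlink node's r:id attribute against a {rId: url} map."""
    root = ET.fromstring(paragraph_xml)
    parts = []
    for child in root:
        if child.tag == W + "hyperlink":
            # Gather the link's visible text from its nested w:t elements
            text = "".join(t.text or "" for t in child.iter(W + "t"))
            url = rels.get(child.get(R + "id", ""))
            if url and text:
                parts.append(f"[{text}]({url})")
            elif text:
                parts.append(text)  # unresolved link: keep plain text
        elif child.tag == W + "r":
            # Ordinary text run outside any hyperlink
            parts.extend(t.text or "" for t in child.iter(W + "t"))
    return "".join(parts)
```

Iterating the paragraph element directly, rather than `para.runs`, is what lets the extractor see `w:hyperlink` wrappers at all: python-docx's `runs` property only surfaces top-level `w:r` children and silently skips runs nested inside hyperlink elements.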
Comments suppressed due to low confidence (1)
api/core/rag/extractor/word_extractor.py:255
- The old hyperlink extraction code (lines 234-255) appears to be dead code now that hyperlinks are being extracted using the new XML-based approach (lines 327-357). This code modifies `run.text` directly, but those modifications won't be used since the new iteration through `paragraph._element` in lines 327-357 doesn't use `para.runs` anymore. This code should be removed to avoid confusion and improve maintainability.
```python
hyperlinks_url = None
url_pattern = re.compile(r'HYPERLINK\s+"([^"]+)"', re.IGNORECASE)
for para in doc.paragraphs:
    for run in para.runs:
        if run.text and hyperlinks_url:
            result = f" [{run.text}]({hyperlinks_url}) "
            run.text = result
            hyperlinks_url = None
        if "HYPERLINK" in run.element.xml:
            try:
                xml = ElementTree.XML(run.element.xml)
                x_child = [c for c in xml.iter() if c is not None]
                for x in x_child:
                    if x is None:
                        continue
                    if x.tag.endswith("instrText"):
                        if x.text is None:
                            continue
                        for i in url_pattern.findall(x.text):
                            hyperlinks_url = str(i)
            except Exception:
                logger.exception("Failed to parse HYPERLINK xml")
```
Pull request overview
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
Important
Fixes #<issue number>.

Summary
Iterate paragraph XML to detect w:hyperlink nodes and resolve external r:id relationships into Markdown links.
Fixes #30359
Screenshots
Checklist
- `dev/reformat` (backend) and `cd web && npx lint-staged` (frontend) to appease the lint gods