Expand Parquet API: async, filtering, projection pushdown#313
Open · kylebarron wants to merge 16 commits into `main`
This was referenced Mar 25, 2025
## Change list
- New `ParquetFile` class that can wrap either a sync or an async source.
- `ParquetFile.read`, which reads from a local or remote source, always synchronously.
- A `BlockingAsyncParquetReader` that appears to compile correctly, so we should be able to expose an async source as a synchronous Python API. This means that each record batch will be fetched synchronously (but there could be multiple concurrent reads in the process of reading that record batch).
- `ParquetFile.read_async`, which reads from a local (tbd, but should be easy) or remote source, and exposes an async iterator. This async iterator can't be exchanged via FFI, but Python can await on these async functions, and the result of the future can then be exchanged to Python via FFI.
- Expose `ParquetRecordBatchStream` as a blocking Python iterator (and actually via C FFI!) so that we can provide a clean synchronous API for remote files.

## `ParquetFile` TODO
- `ParquetFile` (e7004d7 (#313))
- `AsyncFileReader` for `SyncSource` to allow `read_async` from a local file (e03ab04 (#313))
- `RowFilter`: prototype letting the user pass an Arrow UDF for `ArrowPredicate`. Now that I've learned how to have these Python user callbacks in obstore, we can apply it here. Then the user can pass in a list of these predicates as a `RowFilter`. (ba671ab (#313))
- `RowFilter` here...?
- `projection` vs `columns` ✅
- `read_table` API that reads multiple row groups concurrently (see `next_row_group` and this example in the tests and this PR)
- `RowSelection` (this can be left for a future PR)
- `ParquetDataset`
- `read_parquet_async`?
- `filters` to pyarrow `filters`
- `row_groups` and `filters` for best predicate pushdown
## Example

See #258 (comment) for an earlier example.
Closes #258, closes #195