This looks great. My only question is: why is `Relation::record_batch()` the only one to be removed, rather than merely deprecated like the rest? Without knowing your reasoning, I would weakly prefer to also deprecate it, for consistency and to ease the migration path for any users.
@NickCrews `Relation::record_batch()` has already been deprecated since 1.4.0.
Oh, excellent, then that is a great plan. Thanks for your work here!
Problem
The Arrow API of the Python client regularly causes confusion. The most important issues seem to be that:
Also see #97
Core Functions
The Connection API (and with it, the `duckdb` module) and the Relational API have two core functions to create Arrow objects:

- `fetch_record_batch() -> pyarrow.lib.RecordBatchReader`
- `fetch_arrow_table() -> pyarrow.lib.Table`

The Connection API has another function to create a Relation from an Arrow object:

- `arrow(arrow_object, connection = None) -> DuckDBPyRelation`

Aliases
The Connection and Relational APIs both have an alias for `fetch_record_batch()`:

- `arrow() -> pyarrow.lib.RecordBatchReader`

This function was the first we exposed in the API and is probably the one most often used. It changed return type over the course of 1.4.X, from `Table` to `RecordBatchReader`, which caused a number of issues.

The Relational API has three more aliases:

- `to_arrow_table() -> pyarrow.lib.Table`
- `fetch_arrow_reader() -> pyarrow.lib.RecordBatchReader`
- `record_batch() -> pyarrow.lib.RecordBatchReader` (this has been deprecated for a while)

Changes
v1.5.0 API
The Connection and Relational APIs will have the following functions:
- `to_arrow_reader() -> pyarrow.lib.RecordBatchReader`
- `to_arrow_table() -> pyarrow.lib.Table`
- `arrow() -> pyarrow.lib.RecordBatchReader`

Note: we will not deprecate `arrow()` in v1.5.0, but we will discourage its use in both the documentation and the docstring. We encourage users to use `to_arrow_reader()` instead.

The Connection API will keep this function:

- `arrow(arrow_object, connection = None) -> DuckDBPyRelation`

v1.5.0 Deprecated API
The `fetch_*` functions will be deprecated in v1.5.0 (emitting a `DeprecationWarning`) and removed in v1.6.0:

- `fetch_record_batch() -> pyarrow.lib.RecordBatchReader`
- `fetch_arrow_table() -> pyarrow.lib.Table`
- `fetch_arrow_reader() -> pyarrow.lib.RecordBatchReader`

v1.5.0 Removed API
`Relation::record_batch() -> pyarrow.lib.RecordBatchReader` will be removed.

What's in a Name
Arrow's ADBC Driver Manager API uses the `fetch_*` naming convention (docs):

- `fetch_arrow_table()`
- `fetch_record_reader()`
- `fetch_df()`

This is what we've adopted, in spite of our own (not consistently applied) convention of using `to_*`:

- `to_csv`
- `to_df`
- `to_parquet`
- `to_table`
- `to_view`

However, we also provide:

- `fetch_df`
- `fetch_df_chunk`

Looking at other libraries (Vortex, Pandas, etc.), there is precedent for moving from the `fetch_*` prefix to the `to_*` prefix, which seems the preferred way of expressing a conversion.