
Conversation

@timsaucer (Member) commented Nov 3, 2025

Which issue does this PR close?

Closes #1172

Rationale for this change

Since we now have the ability to pass Field information, not just DataType, with ScalarUDFs, this feature adds similar support for UDFs written in Python. Without it, you must write your UDFs in Rust and expose them to Python. This enhancement greatly expands the use cases where PyArrow data can be leveraged.

What changes are included in this PR?

  • Adds a ScalarUDF implementation for Python-based UDFs instead of relying on the create_udf function
  • Adds support for converting to a pyarrow array via FFI while including the Field schema instead of only the data type

Are there any user-facing changes?

This expands on the current API and is backwards compatible.

TODO

  • Add unit tests
  • Update online documentation
  • Update python signatures (type hints)

@timsaucer (Member, Author)

@kosiew Here is an alternate approach. Instead of relying on extension type features it is going to pass the Field information when creating the FFI array. This will capture pyarrow extensions as well as any other metadata that any user assigns on the input.

I'm going to leave it in draft until I can finish up those additional items on my check list.

What do you think?

cc @paleolimbot

@paleolimbot (Member)

> What do you think?

Definitely! Passing the argument fields/return fields should do it. Using __arrow_c_schema__ might be more flexible than isinstance(x, pa.Field) (arro3, nanoarrow, and polars types would work too).

We have a slightly different signature model in SedonaDB ("type matchers") because the existing signature matching doesn't consider metadata, but at the Arrow/FFI level we're doing approximately the same thing: apache/sedona-db#228. We do use the concept of a SedonaType for arguments and the return type (but these are serializable to / deserializable from fields).

src/udf.rs (outdated), comment on lines 106 to 110:

```rust
"_import_from_c",
(
    addr_of!(array) as Py_uintptr_t,
    addr_of!(schema) as Py_uintptr_t,
),
```
Contributor:

Is the use of PyArrow's private _import_from_c advisable?

Member Author:

This code is a near duplicate of how we already convert ArrayData into a pyarrow object. You can see the original here. The difference in this function is that we know the field instead of only the data type.

Member:

A more modern way is to use __arrow_c_schema__ (although I think _import_from_c will be around for a while). It's only a few lines:

https://github.com/apache/sedona-db/blob/main/python/sedonadb/src/import_from.rs#L151-L157

@timsaucer (Member, Author) commented Nov 6, 2025

Also worth evaluating while we're doing this: for scalar values, is it possible for them to contain metadata? If I do pa.scalar(uuid.uuid4().bytes, type=pa.uuid()) and check the type, I should have the extension data. Maybe this is already supported, but I want to evaluate that as part of this PR as well.

Opened a new issue so there isn't too much scope creep.



Development

Successfully merging this pull request may close these issues.

ScalarUDFs created using datafusion.udf() do not propagate extension type metadata

3 participants