Suppose you want to have a function which takes an argument that can be either an `int` or an `array`. There are several ways of doing this in nanobind that I know of:

1. Binding several overloads of the function (multiple `m.def` calls with the same name).
2. Taking a `std::variant<int, array>` and dispatching in C++.
3. Taking a generic `nb::object` and checking its type with `nb::isinstance`.
I'm curious, are there any performance trade-offs associated with these? Any other things to think about when deciding between them?

Here's a more detailed example. Using a `std::variant`:

```cpp
m.def(
    "myfunction",
    [](const std::variant<int, array>& x) {
      if (auto pv = std::get_if<int>(&x); pv) {
        return function(*pv);
      } else {
        return function(std::get<array>(x));
      }
    },
    ...);
```
Using a generic `nb::object`:

```cpp
m.def(
    "myfunction",
    [](const nb::object& x) {
      if (nb::isinstance<int>(x)) {
        return function(nb::cast<int>(x));
      } else if (nb::isinstance<array>(x)) {
        return function(nb::cast<array>(x));
      } else {
        throw std::invalid_argument(
            "[function] Received invalid type for second input.");
      }
    },
    ...);
```
We usually use the 2nd or 3rd option in MLX as it lets us control the function signatures and documentation. I also like the 3rd option because we can control the error message.
They are all good 🤗. It mainly depends on your preference with respect to function signatures in tooling.

If you have a function with many overloads and many arguments (or few, but ones that are costly to cast, e.g. `list -> std::vector<int>` with big lists), then it's likely cheaper to crunch it down to a single overload and work out the specific way to handle each argument in C++. Otherwise, nanobind will try the overloads one by one, which may involve some redundant type-casting overhead.