[CoreML ] ML Program more operators support [3/N] #22710
Conversation
Force-pushed from 8b444d3 to 43ac908, then from 43ac908 to bbb09e9.
```cpp
if (input_defs.size() > 2) {
  std::string_view layer_input_name_x = model_builder.GetUniqueName(node, "variadic");
  auto input_dtype = input_defs[0]->TypeAsProto()->tensor_type().elem_type();
  const int32_t elem_type = static_cast<int32_t>(input_dtype);
  std::vector<int64_t> x0_shape, x1_shape;
  GetShape(*input_defs[0], x0_shape, logger);
  GetShape(*input_defs[1], x1_shape, logger);
  for (size_t i = 0; i < x0_shape.size(); i++) {
    x0_shape[i] = std::max(x0_shape[i], x1_shape[i]);
  }
```
It would be clearer if this explicitly called out that it's specific to Max, along with a comment explaining how we're converting the variadic ONNX Max to a sequence of CoreML 'maximum' operations.
Do we need to manually calculate the output shape? I would have expected CoreML to be able to infer that.
CoreML can't fully infer the output shape. Without it, conversion fails with:
Output '0' has unexpected type 'ios18.maximum'. Expected tensor<fp32, [3, 3]>; got tensor<fp32, >
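Since CoreML won't infer the intermediate shapes, the builder computes the broadcast output shape itself by taking the element-wise maximum of the two input shapes (the loop in the snippet above). A minimal standalone sketch of that computation, assuming equal ranks as the snippet does (the helper name `BroadcastShape` is hypothetical, not from the PR):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Compute the broadcast output shape of two equal-rank inputs by taking the
// element-wise maximum of the dimensions, mirroring the loop in the snippet
// above. Valid under ONNX broadcasting when dims are equal or one of them is 1.
std::vector<int64_t> BroadcastShape(std::vector<int64_t> x0, const std::vector<int64_t>& x1) {
  assert(x0.size() == x1.size());  // the snippet assumes equal ranks
  for (size_t i = 0; i < x0.size(); ++i) {
    x0[i] = std::max(x0[i], x1[i]);
  }
  return x0;
}
```

For example, `{3, 1}` broadcast with `{1, 3}` yields `{3, 3}`, which matches the `tensor<fp32, [3, 3]>` the error message expects.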
```cpp
} else if (op_type == "ReduceSum") {
  coreml_op_type = "reduce_sum";
} else if (op_type == "ReduceMean") {
  coreml_op_type = "reduce_mean";
} else if (op_type == "ReduceMax") {
  coreml_op_type = "reduce_max";
```
If it's trivial to add ReduceMin and ReduceProd for ML Program should we add those as well for completeness? #Closed
Yeah, it's trivial to support those. Will add them in this PR.
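With ReduceMin and ReduceProd added, the op-type mapping might look like the following sketch (the standalone `MapReduceOp` helper is hypothetical; the actual change extends the if/else chain above in place):

```cpp
#include <string>
#include <string_view>

// Hypothetical sketch of the extended ONNX -> CoreML ML Program reduce-op
// mapping, including the ReduceMin and ReduceProd cases requested above.
// "reduce_min" and "reduce_prod" are existing CoreML MIL reduction ops.
std::string MapReduceOp(std::string_view op_type) {
  if (op_type == "ReduceSum") return "reduce_sum";
  if (op_type == "ReduceMean") return "reduce_mean";
  if (op_type == "ReduceMax") return "reduce_max";
  if (op_type == "ReduceMin") return "reduce_min";
  if (op_type == "ReduceProd") return "reduce_prod";
  return "";  // unsupported reduce op
}
```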
```cpp
const auto* tensor_shape = node.InputDefs()[0]->Shape();
if (tensor_shape == nullptr) {
  return false;
}
```
Do we need the shape to be known upfront if we're using the default start/end values?
Do we have a way to test it when shape info doesn't exist?
If the prior node is a ConstantOfShape with a runtime input for the shape, I don't think it would have shape info.
I thought ConstantFolding would resolve this.
```cpp
if (coreml_op_type == "squeeze") {
  if (!axes.empty()) {
    AddOperationInput(*op, "axes", model_builder.AddConstant(op->type(), "axes", axes));
```
Do we need to handle negative axes here? #Closed
ML Program ops support negative axes, so we can pass them through unchanged.
Will add a comment above the statement.
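For illustration, normalizing an ONNX negative axis is a one-liner; per the reply above it is not needed here because CoreML ML Program ops accept negative axes directly (the `HandleNegativeAxis` helper name is illustrative, not a reference to this PR's code):

```cpp
#include <cstdint>

// Illustrative only: map a possibly negative ONNX axis into [0, rank).
// Not required in this builder, since CoreML ML Program ops accept
// negative axes as-is.
int64_t HandleNegativeAxis(int64_t axis, int64_t rank) {
  return axis < 0 ? axis + rank : axis;
}
```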
Force-pushed from 23b540b to ddd4b1e.
Force-pushed from 2b23670 to 7ba7d7b.
```cpp
// the conversion is doing the following:
// max(x, y, z, ...) -> max(max(x, y), z, ...)
#if defined(COREML_ENABLE_MLPROGRAM)
static void AddVariadicInputsToMB(std::unique_ptr<CoreML::Specification::MILSpec::Operation>* op, ModelBuilder& model_builder, const Node& node,
```
nit: please keep lines to 120 chars.
Would it be clearer to call this AddVariadicMax and have it handle all the work of adding the operator, instead of doing part in AddToModelBuilderImpl and part here?
The Min operator has the same schema, so we can call this directly if we need to support "Min".
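The conversion the function's comment describes, max(x, y, z, ...) -> max(max(x, y), z, ...), folds the variadic inputs pairwise. A value-level sketch of that folding (illustrative only; the real builder emits one CoreML "maximum" Operation per step rather than computing values, and the same chaining would work for Min):

```cpp
#include <algorithm>
#include <vector>

// Illustrative only: fold a variadic max into a chain of binary maximums,
// max(x, y, z, ...) -> max(max(x, y), z, ...). Each std::max call stands in
// for one emitted CoreML "maximum" operation.
int VariadicMax(const std::vector<int>& inputs) {
  int acc = std::max(inputs[0], inputs[1]);  // first binary op consumes two inputs
  for (size_t i = 2; i < inputs.size(); ++i) {
    acc = std::max(acc, inputs[i]);  // one more binary op per extra input
  }
  return acc;
}
```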
```diff
@@ -86,8 +127,11 @@ Status BinaryOpBuilder::AddToModelBuilderImpl(ModelBuilder& model_builder, const
   std::unique_ptr<Operation> op = model_builder.CreateOperation(node, coreml_op_type);
   AddOperationInput(*op, "x", input_defs[0]->Name());
   AddOperationInput(*op, "y", input_defs[1]->Name());
+  if (input_defs.size() > 2) {
```
nit: I understand the check on input size is slightly more efficient, but explicitly checking for "Max" makes the code clearer, as there's no question about which ops match the condition, and this isn't a performance-critical path. The comment below helps, but comments can get out of date.
```diff
- if (input_defs.size() > 2) {
+ if (op_type == "Max" && input_defs.size() > 2) {
```
```cpp
GetShape(*input_defs[0], x0_shape, logger);
GetShape(*input_defs[1], x1_shape, logger);
if (x0_shape.size() < x1_shape.size()) {
  std::swap(x0_shape, x1_shape);
}
std::fill(x0_shape.begin(), x0_shape.end(), -1);
```
Can we get the rank of each input instead of the shape?
It would be good to explain in a comment why we're adding a shape with -1 values, so that's captured for the next developer who has to work on this code.
Yeah. I will add comments here to explain "-1".
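The net effect of the snippet above is a shape of the larger input's rank with every dimension set to -1, i.e. only the rank is communicated to CoreML while each dimension is left unknown. A minimal sketch of that intent (the `UnknownShapeOfMaxRank` helper is hypothetical and takes only the two ranks, as the reviewer suggested):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch: build a shape of rank max(rank0, rank1) with every dimension set
// to -1 (unknown). Equivalent to GetShape + swap + std::fill above, but it
// only needs the two ranks rather than the full shapes.
std::vector<int64_t> UnknownShapeOfMaxRank(size_t rank0, size_t rank1) {
  return std::vector<int64_t>(std::max(rank0, rank1), -1);
}
```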
…lder.cc Co-authored-by: Scott McKay <[email protected]>
…der.cc Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Description
Motivation and Context