Added OpenVINO compiler with TensorFlow interface #137
base: main
Conversation
Hello @marcoschouten, can you please fix the issues I pointed out in the review?
def execute(
    self,
    model: Union[str, Path],
The input for TensorFlow compilers should be a TensorFlow model, not a file. You should then save it to file and only at the end give it as input to the OpenVINO CLI.
Sure, this is how I would do it:

1. Save the model to a file (would we need to define a specific folder?):
   model.save('saved_model/my_model')

2. Load the model:
   my_model = tf.keras.models.load_model('saved_model/my_model')

3. Pass it as input to the OpenVINO CLI:
   mo --input_model <my_model>.pb
You can skip step 2: the execute method receives a TensorFlow model, so you can save it to file (remember to save it to a temp directory, as we do in the other sections) and pass it to the mo command.
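A minimal sketch of that flow (the helper name save_and_convert and the tempfile usage are my own illustration, not code from the PR):

import subprocess
import tempfile
from pathlib import Path

import tensorflow as tf

def save_and_convert(model: tf.keras.Model) -> Path:
    # Save the Keras model in SavedModel format inside a temp directory.
    tmp_dir = Path(tempfile.mkdtemp())
    saved_model_dir = tmp_dir / "saved_model"
    model.save(str(saved_model_dir))
    # Hand the SavedModel directory to the OpenVINO Model Optimizer CLI.
    subprocess.run(
        ["mo", "--saved_model_dir", str(saved_model_dir),
         "--output_dir", str(tmp_dir)],
        check=True,
    )
    return tmp_dir  # now contains the generated .xml/.bin IR files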
@@ -146,6 +146,25 @@ def execute(
    )


class TensorFlowOpenVINOBuildInferenceLearner(BuildInferenceLearner):
This class is actually unnecessary: you can simply reuse the OpenVINOBuildInferenceLearner, passing TensorFlow as source_dl_framework.
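For illustration, the reuse could look roughly like this (the DeepLearningFramework enum value is an assumption about nebullvm's internals, not quoted from the repo):

# Instead of defining a new TensorFlowOpenVINOBuildInferenceLearner,
# reuse the existing builder and tag the source framework.
build_op = OpenVINOBuildInferenceLearner(
    source_dl_framework=DeepLearningFramework.TENSORFLOW,  # assumed enum value
)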
Sure thing, I will fix the code once I get home.
Marco
…On Thu, 5 Jan 2023, 6:48 pm, Diego Fiori wrote:
Diego Fiori requested changes on this pull request.
Hello @marcoschouten <https://github.com/marcoschouten>, can you please fix the issues I pointed out in the review?
------------------------------
In workflows/tests.yml
<#137 (comment)>:
> @@ -0,0 +1,120 @@
+name: Run tests
I think this is a duplicate file: could you remove it?
------------------------------
In nebullvm/tools/openvino.py
<#137 (comment)>:
> @@ -0,0 +1,174 @@
+from pathlib import Path
I think this is a duplicate file: could you remove it?
------------------------------
In nebullvm/operations/optimizations/compilers/openvino.py
<#137 (comment)>:
> @@ -25,6 +25,137 @@
from nebullvm.tools.transformations import MultiStageTransformation
+# classe di base. Contains generic methods.
Please write comments in English.
------------------------------
In nebullvm/operations/optimizations/compilers/openvino.py
<#137 (comment)>:
> +
+ check_quantization(quantization_type, metric_drop_ths)
+ train_input_data = input_data.get_split("train").get_numpy_list(
+ QUANTIZATION_DATA_NUM
+ )
+
+ # SYNTAX for MO command
+ # f"""mo
+ # --saved_model_dir "{model_path}"
+ # --input_shape "[1,224,224,3]"
+ # --mean_values="[127.5,127.5,127.5]"
+ # --scale_values="[127.5]"
+ # --model_name "{model_path.name}"
+ # --compress_to_fp16
+ # --output_dir "{model_path.parent}"
+ # """
Remove the commented code above.
------------------------------
In nebullvm/operations/optimizations/compilers/openvino.py
<#137 (comment)>:
> +class TensorFlowOpenVINOCompiler(Compiler):
+ supported_ops = {
+ "cpu": [
+ None,
+ # QuantizationType.STATIC,
+ # QuantizationType.HALF,
+ ],
+ "gpu": [],
+ }
+
+ def __init__(self):
+ super().__init__()
+
+ def execute(
+ self,
+ model: Union[str, Path],
The input for TensorFlow compilers should be a TensorFlow model, not a file.
You should then save it to file and only at the end give it as input to the
OpenVINO CLI.
------------------------------
In nebullvm/operations/inference_learners/builders.py
<#137 (comment)>:
> @@ -146,6 +146,25 @@ def execute(
)
+class TensorFlowOpenVINOBuildInferenceLearner(BuildInferenceLearner):
This class is actually unnecessary: you can simply reuse the
OpenVINOBuildInferenceLearner, passing TensorFlow as source_dl_framework.
@@ -40,7 +171,7 @@ def __init__(self):

 def execute(
     self,
-    model: Union[str, Path],
+    model: tf.keras.Model(),
The model type should be just tf.keras.Model (the class itself, not an instance created by calling tf.keras.Model()).
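In other words (a minimal illustration with hypothetical names, not the PR's actual diff):

import tensorflow as tf

# Wrong: tf.keras.Model() builds a Model instance at definition time
# and uses that instance as the annotation.
def execute_wrong(self, model: tf.keras.Model(), **kwargs): ...

# Right: the class itself serves as the type annotation.
def execute_right(self, model: tf.keras.Model, **kwargs): ...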
# save model to temporary folder
dirpath = tempfile.mkdtemp()
As temporary directory you can use the one that is passed to each compiler's execute method through the onnx_output_path parameter; you can find it in nebullvm/operations/optimizations/base.py, where compiler_op.to(self.device).execute is called. I would also rename the parameter to something more general (such as tmp_dir_path), and when you rename it make sure to rename it wherever it's used.
# save model to temporary folder
tmp_dir_path = Compiler.onnx_output_path
Be careful: the Compiler class doesn't have an onnx_output_path attribute. onnx_output_path is an argument passed to this method by the caller (it's passed to the execute method of each compiler), so you should add it to the arguments of execute above (lines 46-51) like this:

def execute(
    self,
    model: tf.keras.Model,
    model_params: ModelParams,
    onnx_output_path: str,
    input_tfms: MultiStageTransformation = None,
    metric_drop_ths: float = None,
    quantization_type: QuantizationType = None,
    input_data: DataManager = None,
    **kwargs,
):

This parameter was introduced in the DeepSparseCompiler, which needs to export the original PyTorch model to ONNX and therefore needs the path to a temporary directory in which to generate the ONNX model (that's why it's called onnx_output_path). In this case we are going to use it to convert a model to OpenVINO, so I would suggest renaming it to tmp_dir_path, which is a more general name (please rename it also in the DeepSparseCompiler and in the caller).
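Putting the two suggestions together, the method body could then look roughly like this (a sketch that assumes the parameter has been renamed to tmp_dir_path; other arguments are omitted for brevity):

import subprocess
from pathlib import Path

import tensorflow as tf

def execute(
    self,
    model: tf.keras.Model,
    tmp_dir_path: str,  # renamed from onnx_output_path
    **kwargs,
):
    # Save the TensorFlow model inside the temp dir provided by the caller
    # instead of creating one with tempfile.mkdtemp().
    saved_model_dir = Path(tmp_dir_path) / "saved_model"
    model.save(str(saved_model_dir))
    # Convert the SavedModel to OpenVINO IR in the same directory.
    subprocess.run(
        ["mo", "--saved_model_dir", str(saved_model_dir),
         "--output_dir", tmp_dir_path],
        check=True,
    )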
@marcoschouten I ran the unit tests but they are not passing, can you check?
The error is:
I discovered that there was commented code for the quantization, which I uncommented. How did you run the unit tests?
They are triggered for each new commit on this PR; you can also run them locally by running the
From the error message, I noticed that the
Other classes, such as
It is not clear to me how the quantization is done, and how the
I read about quantization for OpenVINO here.
I recall that the original goal was to translate a model from TensorFlow to OpenVINO. So it might make sense to first convert the model to OpenVINO (by running the execute command) and, at a later point in time, perform the quantization on the OpenVINO model, following the OpenVINO quantization functions with the appropriate parameters. What exactly would you like me to do?
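If it helps, here is a rough sketch of that two-stage idea, based on my reading of the OpenVINO Post-training Optimization Tool (POT) docs; the paths, the calibration data, and the config values are placeholders, not nebullvm code:

from openvino.tools.pot import (
    DataLoader, IEEngine, create_pipeline, load_model, save_model,
)

class CalibrationLoader(DataLoader):
    # Feeds calibration samples to POT; labels are not needed
    # for DefaultQuantization.
    def __init__(self, samples):
        self.samples = samples
    def __len__(self):
        return len(self.samples)
    def __getitem__(self, index):
        return self.samples[index]

calibration_samples = [...]  # numpy input arrays, prepared elsewhere

# Stage 1: `mo` has already produced model.xml / model.bin (see above).
model = load_model({
    "model_name": "model",
    "model": "tmp_dir/model.xml",
    "weights": "tmp_dir/model.bin",
})

# Stage 2: post-training static quantization on the OpenVINO IR.
algorithms = [{
    "name": "DefaultQuantization",
    "params": {"target_device": "CPU", "stat_subset_size": 300},
}]
engine = IEEngine(config={"device": "CPU"},
                  data_loader=CalibrationLoader(calibration_samples))
pipeline = create_pipeline(algorithms, engine)
quantized_model = pipeline.run(model)
save_model(quantized_model, save_path="tmp_dir/quantized")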