
Added openvino compiler with tensorflow interface #137

Open
wants to merge 7 commits into base: main

Conversation

marcoschouten

No description provided.

@diegofiori (Collaborator) left a comment:


Hello @marcoschouten can you please fix the issues I pointed out in the review?

workflows/tests.yml (outdated review thread, resolved)
nebullvm/tools/openvino.py (outdated review thread, resolved)
nebullvm/operations/optimizations/compilers/openvino.py (two outdated review threads, resolved)

def execute(
    self,
    model: Union[str, Path],
Collaborator:

The input for TensorFlow compilers should be a TensorFlow model, not a file. You should save it to a file inside the method and only at the end pass that file to the OpenVINO CLI.

@marcoschouten (Author), Jan 9, 2023:

Sure, this is how I would do it:

  1. Save the model to a file (would we need to define a specific folder?):
     model.save('saved_model/my_model')

  2. Load the model:
     my_model = tf.keras.models.load_model('saved_model/my_model')

  3. Pass it as input to the OpenVINO CLI:
     mo --input_model <my_model>.pb

Collaborator:

You can skip step 2: the execute method already receives a TensorFlow model, so you can save it to file (remember to save it to a temp directory, as we do in the other sections) and pass that file to the mo command.
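
A minimal sketch of that flow (the helper name is hypothetical; it assumes the OpenVINO Model Optimizer mo CLI is installed and accepts a SavedModel directory via --saved_model_dir):

    import subprocess
    import tempfile
    from pathlib import Path

    import tensorflow as tf

    def compile_with_openvino(model: tf.keras.Model, output_dir: str) -> None:
        # Save the in-memory Keras model into a temporary SavedModel directory
        tmp_dir = Path(tempfile.mkdtemp())
        saved_model_dir = tmp_dir / "saved_model"
        model.save(str(saved_model_dir))
        # Hand the saved model to the OpenVINO Model Optimizer CLI
        subprocess.run(
            ["mo", "--saved_model_dir", str(saved_model_dir),
             "--output_dir", output_dir],
            check=True,
        )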

@@ -146,6 +146,25 @@ def execute(
)


class TensorFlowOpenVINOBuildInferenceLearner(BuildInferenceLearner):
Collaborator:

This class is actually useless: you can simply reuse the OpenVINOBuildInferenceLearner, passing TensorFlow as source_dl_framework.
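
A sketch of the suggested reuse (the DeepLearningFramework enum value and the exact execute arguments are assumptions about nebullvm's internals, not confirmed here):

    # Hypothetical: instead of adding a TensorFlow-specific class, reuse the
    # existing builder and declare TensorFlow as the source framework
    build_op = OpenVINOBuildInferenceLearner()
    inference_learner = build_op.execute(
        model=compiled_openvino_model,
        source_dl_framework=DeepLearningFramework.TENSORFLOW,
    )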

@marcoschouten (Author) commented Jan 5, 2023 via email.

nebullvm/operations/optimizations/compilers/openvino.py (three outdated review threads, resolved)
@@ -40,7 +171,7 @@ def __init__(self):

 def execute(
     self,
-    model: Union[str, Path],
+    model: tf.keras.Model(),
Collaborator:

The model type annotation should be just tf.keras.Model (the class itself, not a call like tf.keras.Model()).

nebullvm/operations/optimizations/compilers/openvino.py (outdated review thread, resolved)


# save model to temporary folder
dirpath = tempfile.mkdtemp()
Collaborator:

As the temporary directory you can use the one passed to each compiler's execute via the onnx_output_path parameter; you can find it in nebullvm/operations/optimizations/base.py, where compiler_op.to(self.device).execute is called. I would also rename the parameter to something more general (such as tmp_dir_path), and when you rename it, make sure to rename it wherever it's used.



# save model to temporary folder
tmp_dir_path = Compiler.onnx_output_path
Collaborator:

Be careful: the Compiler class doesn't have an onnx_output_path attribute. onnx_output_path is an argument passed to this method by the caller (it's passed to each compiler's execute method), so you should add it to the execute arguments above (lines 46-51) like this:

def execute(
    self,
    model: tf.keras.Model,
    model_params: ModelParams,
    onnx_output_path: str,
    input_tfms: MultiStageTransformation = None,
    metric_drop_ths: float = None,
    quantization_type: QuantizationType = None,
    input_data: DataManager = None,
    **kwargs,
):

This parameter was introduced in the DeepSparseCompiler, which needs to export the original PyTorch model to ONNX and therefore needs the path to a temporary directory in which to generate the ONNX model (that's why it's called onnx_output_path). In this case we are going to use it to convert a model to OpenVINO, so I would suggest renaming it to tmp_dir_path, which is a more general name (please rename it also in the DeepSparseCompiler and in the caller).
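
A hedged sketch of how the compiler body might use that directory once the parameter is renamed (illustrative only; the subprocess call to mo is an assumption, not nebullvm's confirmed implementation):

    import subprocess
    from pathlib import Path

    import tensorflow as tf

    def _export_to_openvino(model: tf.keras.Model, tmp_dir_path: str) -> Path:
        # Save the Keras model into the caller-provided temporary directory
        saved_model_dir = Path(tmp_dir_path) / "saved_model"
        model.save(str(saved_model_dir))
        # Convert with the Model Optimizer; the IR files land in tmp_dir_path
        subprocess.run(
            ["mo", "--saved_model_dir", str(saved_model_dir),
             "--output_dir", tmp_dir_path],
            check=True,
        )
        return Path(tmp_dir_path)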

@valeriosofi (Collaborator):

@marcoschouten I ran the unit tests but they are not passing, can you check?

@marcoschouten (Author):

The error is:
Can't instantiate abstract class TensorFlowOpenVINOCompiler with abstract method _quantize_model

I discovered that there was commented code for the quantization, which I uncommented.

How did you run the unit tests?

@valeriosofi (Collaborator):

> The error is: Can't instantiate abstract class TensorFlowOpenVINOCompiler with abstract method _quantize_model
> I discovered that there was commented code for the quantization, which I uncommented.
> How did you run the unit tests?

They are triggered for each new commit on this PR; you can also run them locally by running the pytest command in the speedster folder.

@marcoschouten (Author):

> The error is: Can't instantiate abstract class TensorFlowOpenVINOCompiler with abstract method _quantize_model.
> I discovered that there was commented code for the quantization, which I uncommented.
> How did you run the unit tests?

> They are triggered for each new commit on this PR; you can also run them locally by running the pytest command in the speedster folder.

From the error message, I noticed that the _quantize_model() method must be implemented, because it is required by the class interface defined in compilers/base.py:

@abc.abstractmethod
def _quantize_model(self, **kwargs) -> Any:
    raise NotImplementedError()

Other classes, such as OpenVINOCompiler(Compiler) or PytorchBackendCompiler(Compiler), have different implementations (i.e., different input parameters) of the _quantize_model() method.

It is not clear to me how the quantization is done, or how the _quantize_model() method should be implemented for the specific TensorFlow-to-OpenVINO conversion.

I read about quantization for OpenVINO here.

I recall that the original goal was to translate a model from TensorFlow to OpenVINO.

So it might make sense to first convert the model to OpenVINO (by running the execute command) and, at a later point in time, perform the quantization on the OpenVINO model (following the OpenVINO quantization functions with the appropriate parameters); a sketch of this deferred approach follows below.

What exactly would you like me to do?
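
A minimal sketch of the deferred approach proposed above, assuming for now we only need to satisfy the abstract interface so the class can be instantiated (the body is a placeholder, not nebullvm's actual quantization logic):

    from typing import Any

    # Assumed import path, based on the compilers/base.py module mentioned above
    from nebullvm.operations.optimizations.compilers.base import Compiler

    class TensorFlowOpenVINOCompiler(Compiler):
        def _quantize_model(self, **kwargs) -> Any:
            # Placeholder that satisfies the abstract interface so the class
            # can be instantiated; conversion to OpenVINO happens in execute(),
            # and real post-training quantization of the OpenVINO IR would be
            # implemented here in a follow-up.
            raise NotImplementedError(
                "Quantization is not yet supported for the "
                "TensorFlow -> OpenVINO path"
            )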
