Hi, I would like to ask whether you could provide a TensorFlow Lite Private Detector model with a dtype of float32.

I've attempted to convert the existing saved_model.pb to .tflite for use on Android phones, but the Android platform does not support float16. In the end, the only way I found to obtain a float32 model was to retrain it, changing the dtype from float16 to float32 during training. That approach doesn't seem orthodox, though, so I was wondering if you could provide a .tflite file with a dtype of float32. Thank you.
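For reference, the default TFLite conversion path keeps float32 tensors; an FP16 .tflite is normally produced only when float16 quantization is explicitly requested, or when the SavedModel itself was trained in FP16 (which is presumably why retraining was needed here). A minimal sketch, with hypothetical file paths:

```python
import tensorflow as tf

# Load the SavedModel (directory path is hypothetical).
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model/")

# The default conversion keeps float32 weights and activations.
# FP16 output would only appear if it were requested explicitly, e.g.:
#   converter.optimizations = [tf.lite.Optimize.DEFAULT]
#   converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()

with open("private_detector_fp32.tflite", "wb") as f:
    f.write(tflite_model)
```

If the SavedModel's weights are already stored as float16, the converted model stays float16, so conversion alone would not help in that case.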
Hey! Thank you for your response.
Retraining with FP32 and adding the SavedModel should work.
I'm curious about how you retrained the model with FP32.
My approach was simply to change the sections highlighted in red from FP16 to FP32. Is this similar to what you did? Also, are there any potential issues I should be aware of with this approach? A sketch of the idea is shown below.
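A minimal sketch of that dtype change, assuming the training script sets precision through the Keras mixed-precision policy; the exact location in this repo may differ (the red-highlighted sections referenced above are from a screenshot not visible here):

```python
import tensorflow as tf
from tensorflow.keras import mixed_precision

# Switch the global Keras dtype policy to pure float32 before the
# model is built, so all layer weights and activations are created
# as float32 rather than (mixed) float16.
mixed_precision.set_global_policy("float32")

# ... build and train the model as usual, then export:
# model.save("saved_model_fp32/")  # hypothetical output path
```

If the dtype is instead passed explicitly to individual layers or read from a config file, those values would need to be changed to float32 as well.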