
EfficientNetLiteB0Backbone in KerasHub #20593

Open
cjohn001 opened this issue Dec 4, 2024 · 9 comments
@cjohn001

cjohn001 commented Dec 4, 2024

Hello everyone,
I am currently trying to migrate my code to KerasHub, as the KerasCV documentation seems to be gone now. In the past I was using EfficientNetLiteB0Backbone, but it does not seem to have been ported to KerasHub yet. Is there a chance to see this model in KerasHub as well? Since I am not seeing it yet, I am also wondering whether there is nowadays a better model to use. I need something small with good classification results for a mobile phone, and I think EfficientNetLite did the job best with KerasCV.

And in case it gets moved to KerasHub, it would also be great to have pretrained weights for it. Thanks for your consideration.

@edge7
Contributor

edge7 commented Dec 6, 2024

Hi, I am not part of the Keras team, so this is my personal opinion.

I understand your issue, though. Since KerasHub is still WIP, the KerasCV documentation should still be around; I ran into a similar issue with another model recently.
EfficientNetLiteB0 is not ported yet, but it probably will be.
You can still use KerasCV for the time being. Look here, in particular at the tests; you might be able to load it correctly even without official documentation.
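Roughly, the backbone tests just build the model with random weights and run a forward pass. A minimal sketch, assuming keras_cv is installed (the input and batch sizes here are arbitrary):

import numpy as np
import keras_cv

# Build the backbone directly; no pretrained preset is loaded, weights are random.
backbone = keras_cv.models.EfficientNetLiteB0Backbone(
    include_rescaling=True, input_shape=(224, 224, 3))

# Forward pass on dummy data just to confirm the model builds and runs.
dummy_images = np.random.uniform(0, 255, size=(2, 224, 224, 3)).astype("float32")
features = backbone(dummy_images)
print(features.shape)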

@sachinprasadhs
Collaborator

Hi, we have an EfficientNet Lite model for edge devices here: https://www.kaggle.com/models/keras/efficientnet/keras/efficientnet_lite0_ra_imagenet, which has been ported from timm.
For more details about the model, see https://huggingface.co/timm/efficientnet_lite0.ra_in1k
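A minimal loading sketch, assuming the KerasHub preset handle matches the last path segment of the Kaggle URL (the handle and num_classes below are placeholders; adjust them for your setup):

import keras_hub

# Assumed preset handle taken from the Kaggle URL above; adjust if it differs.
classifier = keras_hub.models.ImageClassifier.from_preset(
    "efficientnet_lite0_ra_imagenet",
    num_classes=10,  # placeholder; set to your own number of classes when fine-tuning
)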

@cjohn001
Author

cjohn001 commented Dec 6, 2024

Hello, thanks for the directions. For the moment the code below works for me; I am primarily missing pretrained weights and the docs. If it gets ported, all is fine, and for the moment I can live without the docs. Some kind of versioning in the docs would be great.

import keras
import keras_cv

# HEIGHT, WIDTH and class_mapping are defined elsewhere in my script.
efnetlite_backbone = keras_cv.models.EfficientNetLiteB0Backbone(
    include_rescaling=True, input_shape=(HEIGHT, WIDTH, 3))
efnetlite_classifier = keras_cv.models.ImageClassifier(
    efnetlite_backbone, num_classes=len(class_mapping), activation="sigmoid")
efnetlite_classifier.compile(
    loss=keras.losses.CategoricalCrossentropy(label_smoothing=0.1),
    optimizer=keras.optimizers.SGD(momentum=0.9), metrics=["accuracy"])

@cjohn001
Author

@sachinprasadhs I am still trying to figure out how I can train the EfficientNetLiteB0Backbone to an accuracy similar to what was stated in the papers for these models. Is the ImageNet training script for the model you referenced available somewhere? It would be a great help to see how the hyperparameters have to be set for training. Thanks for your help!

@sachinprasadhs
Collaborator

@pkgoogle, PTAL

@pkgoogle

pkgoogle commented Jan 7, 2025

Hi @cjohn001,

Can you try something like this after installing keras_hub?

import keras_hub

# original timm b0
model = keras_hub.models.ImageClassifier.from_preset("hf://timm/efficientnet_b0.ra_in1k")
# lite variant
model = keras_hub.models.ImageClassifier.from_preset("hf://timm/efficientnet_lite0.ra_in1k")

This will load the same preset weights as timm. You can find all the available variants here.
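Continuing from the snippet above, a quick smoke test of the loaded classifier might look like this; the 224x224 input size is an assumption, and the preset's bundled preprocessor should take care of resizing and scaling:

import numpy as np

# Dummy batch of RGB images in the 0-255 range.
images = np.random.uniform(0, 255, size=(1, 224, 224, 3)).astype("float32")
preds = model.predict(images)
print(preds.shape)  # e.g. (1, 1000) for the ImageNet-1k head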

@cjohn001
Author

cjohn001 commented Jan 11, 2025

@pkgoogle sorry for the late reply, and thanks for the directions. In the meantime I was able to load the weights from here:

https://github.com/sebastian-sz/efficientnet-lite-keras/releases/tag/v1.0

Unfortunately, it seems like I cannot use keras_hub yet. I want to deploy my models in a mobile application, which I assume requires me to use the tensorflow-model-optimization toolkit if I want to use sparsity- and cluster-preserving quantization-aware training.
The tensorflow-model-optimization toolkit unfortunately forces me to stay on Keras 2 :(
Is there maybe another kind of optimization functionality already integrated in Keras 3 that I could use as a replacement?
I would very much like to switch to Keras 3 as soon as possible, as I think I could then also directly use timm models with Keras 3, as described here: https://lightning.ai/sitammeur/studios/keras3-with-pytorch-workflow?section=featured. However, Keras 3 is not an option if I cannot prepare the models for deployment. I am quite new to the entire ecosystem, so it would be great if you could point me in a direction. Thanks for your help!
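For reference, a minimal sketch of the tensorflow-model-optimization pruning workflow I am referring to, which runs on tf.keras / Keras 2; the model and schedule values are placeholders, and the pruning-preserving QAT step is omitted:

import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Placeholder tf.keras model standing in for the EfficientNet-Lite classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Magnitude pruning with an illustrative schedule; tune sparsity and steps for real data.
pruning_params = {
    "pruning_schedule": tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.0, final_sparsity=0.5, begin_step=0, end_step=1000)
}
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(model, **pruning_params)
pruned_model.compile(
    loss=tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1),
    optimizer=tf.keras.optimizers.SGD(momentum=0.9),
    metrics=["accuracy"])

# Fine-tune with the pruning callback, e.g.:
# pruned_model.fit(train_ds, validation_data=val_ds, epochs=5,
#                  callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Strip the pruning wrappers before exporting or converting to TFLite.
final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)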

@pkgoogle

Hi @cjohn001, if you absolutely need that workflow, feel free to continue using it. We usually recommend getting your workflow/system/project working first and optimizing afterwards, in which case you have multiple options (including continuing with this workflow).

Depending on your use case:

  • MediaPipe. If your use case fits the tasks listed here, this is probably the most user-friendly way to get started.
  • LiteRT. If you already have a .tflite model and need to run it on device, but you need more control/flexibility than MediaPipe allows, you can use this library directly (see the conversion sketch after this list).
  • AI-Edge-Torch. If you need to convert a PyTorch model to a .tflite file to be used either with MediaPipe or LiteRT, this is the library you would use. (You can accomplish some of the same optimizations here as well.)
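For the LiteRT path, a minimal conversion sketch; the model below is a placeholder for your trained classifier, and this uses plain post-training quantization rather than the collaborative-optimization flow discussed above:

import tensorflow as tf

# Placeholder model standing in for the trained classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # default post-training quantization
tflite_model = converter.convert()

# Write the flatbuffer to disk for use with LiteRT / MediaPipe.
with open("efficientnet_lite0.tflite", "wb") as f:
    f.write(tflite_model)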

Does that answer your question?

@cjohn001
Author

cjohn001 commented Jan 13, 2025

Hello @pkgoogle, thanks for the directions. I already looked into the options you mentioned. The thought behind my current framework choice was that Keras 3 gives me full control over the entire build and deployment process and also allows me to build custom solutions from the building blocks provided with the framework. Switching back to Keras 2 does not seem like a good option to me. MediaPipe looks nice for getting quick results, but not like something that supports a full-fledged development lifecycle.
However, if I understand you correctly, you are saying that with Keras 3 there is currently no decent way to optimize built models with state-of-the-art techniques like the tensorflow-model-optimization toolkit for deployment as tflite? I am currently most interested in pruning and pruning-preserving quantization-aware training, as I want to deploy my models with the app binary.
Is there something on the roadmap for Keras 3 in this regard? I believe not having these capabilities will drastically reduce the usability of the framework. With embedded applications in mind, would you rather recommend switching to another framework like PyTorch if more control over the build process is required? Thanks for your feedback.
