Importing model and visualizing it with Lucid #58
Hi Ricardo! When I tried to import my own model into Lucid, it only worked once I removed all constraints on the expected image size, except for the number of channels (3).
Thanks for the tip, @tschwabe. Another question: if I want to run Lucid on a model with more than 3 channels, is that possible? I wonder whether I need to constrain it to three channels because of the RGB mapping.
I'm not exactly sure. I would recommend removing the constraints from the height and the width and keeping the one on the channels; that's what I did.
First question: when you said to remove the width and height constraints and keep the channels, would that be something like going from a fixed input shape to one with unconstrained height and width? Second: I've now tested just passing the model with its original constraints, and I ran into an error.
1.) Yeah. 2.) I would assume this error arises, just like in my case, because the shapes inside your model are fixed and therefore not aligned with the input size Lucid tries to feed it.
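To make that concrete, here is a minimal TF1 sketch of the two kinds of input definitions being discussed (the 256x256 size is a hypothetical example); note this only helps if the rest of the architecture is fully convolutional, since dense layers bake a fixed spatial size into their weight shapes:

```python
import tensorflow as tf

# A fixed-size input, as typically used during training
# (256x256 is a hypothetical example):
x_fixed = tf.placeholder(tf.float32, [None, 256, 256, 3])

# The same input with height and width unconstrained; only the
# channel count stays fixed, so Lucid can feed any rendering size:
x_free = tf.placeholder(tf.float32, [None, None, None, 3])
```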
Oh, I see. How did you change/edit your trained model? (I'm new to TensorFlow.)
I trained my model with a fixed image size. Then I constructed a new model with the same architecture but no size constraints, and loaded the weights from the first model into the new one.
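A sketch of that rebuild-and-reload approach in Keras (the architecture and checkpoint path are hypothetical; it assumes a fully convolutional network, whose weight shapes don't depend on the input size):

```python
from tensorflow import keras

def build_model(input_shape):
    # The same architecture is used in both cases;
    # only the declared input shape differs.
    inputs = keras.layers.Input(shape=input_shape)
    x = keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = keras.layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
    return keras.models.Model(inputs, x)

# The model as trained, with a fixed input size:
trained = build_model((256, 256, 3))
trained.load_weights("my_weights.h5")  # hypothetical checkpoint path

# Same architecture with unconstrained height and width;
# convolutional weights transfer unchanged:
unconstrained = build_model((None, None, 3))
unconstrained.set_weights(trained.get_weights())
```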
In a nutshell, when Lucid imports your model's graph, it needs to replace your model's input (usually a `tf.placeholder`) with the image it is optimizing.

As a first step, set the model's `image_shape` to the same input size you used during training.

Also, @ricardobarroslourenco, feel free to create Colab notebooks that let me reproduce your issue. I can't promise actual support, but if you can make it easier for me to work on your problem… it increases the probability of me doing just that. Thanks for being an early adopter! :-)
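For context, what Lucid's modelzoo imports is a frozen `GraphDef`; below is a minimal TF1 sketch of producing one (the toy graph is a hypothetical stand-in for a real trained model):

```python
import tensorflow as tf

# Hypothetical toy graph standing in for a trained model:
x = tf.placeholder(tf.float32, [None, None, None, 3], name="input")
w = tf.Variable(tf.random_normal([3, 3, 3, 8]))
out = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME", name="output")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Bake the trained variables into constants and keep only the
    # subgraph needed to compute the named output node:
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=["output"])

with tf.gfile.GFile("my_model.pb", "wb") as f:
    f.write(frozen.SerializeToString())
```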
@tschwabe's extra step should not be necessary when you tell Lucid to use the same input size you used during training.
@ludwigschubert thanks for the thorough answer. Let me give you some background on my application. In my project, I'm building a convolutional autoencoder for remote sensing images I'm parsing from Google Earth Engine. More specifically, I'm ingesting several multispectral images (7 channels, each one associated with a spectral band acquired at the satellite sensor) in which I want to replicate cloud texture patterns. We believe that the cloud class we feed to the autoencoder is too broad, and we expect a reasonable amount of variance at the hidden layers, even the most compact ones, even after convergence (through misclassification into a cloud class, or due to cloud classes that are not described in the present framework but were enclosed in a hyper-class for lack of a more proper label).

The reason I want to use Lucid is to streamline the process of visualizing the embeddings I'm learning, because I will probably drive my network architecture development, and its parametrization, by the features I'm able to represent and their quality. Ideally, these embeddings would show textures present in the samples I'm providing and could be analyzed by geophysicists for their feasibility as new cloud classes.

Currently, I've been working on a Colab notebook which needs some cleanup, and I'll be glad to share it with you 😃 Which Google account should I share it with?
Sounds like an exciting application! My Google account is [EDIT]
Thanks, @ludwigschubert. I've just shared a cleaner, commented notebook with you. Thanks a lot for your availability :)
@ludwigschubert I would like to know if you were able to run the notebook. I've changed the dataset permissions, so it should be OK.
@ricardobarroslourenco I am not sure if you have resolved your issue of not being able to visualize when your model takes an input like [batch_size, w, h, 1]. I was able to visualize by making a small change to Lucid's image module, around line 37, something like this:
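In the spirit of that edit, here is a self-contained sketch of a single-channel image parameterization (a paraphrase of the idea, not the actual code in lucid/optvis/param/image.py, which varies between versions):

```python
import tensorflow as tf

def grayscale_image(w, h=None, batch=None, sd=0.01):
    """Naive single-channel image parameterization.

    Same idea as the edit above: give the optimized image one channel
    instead of three, so it matches a [batch_size, w, h, 1] model input.
    """
    h = h or w
    batch = batch or 1
    shape = [batch, h, w, 1]  # channel count forced to 1
    t = tf.Variable(tf.random_normal(shape, stddev=sd))
    return tf.nn.sigmoid(t)  # squash into the (0, 1) image range
```

If that matches how your Lucid version works, it could presumably be passed to `render.render_vis` via `param_f=lambda: grayscale_image(128)`.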
The resulting rendered image is black and white.
@hegman12 oh nice. I'll try to change that and see how it goes.
Sorry for tacking this on, but I have a similar question on the subject of `image_value_range`, based on https://github.com/tensorflow/lucid/blob/master/lucid/modelzoo/vision_models.py. In the `InceptionV1(Model)` class there, `image_value_range` is set to `(-117, 255-117)`. I have seen this with InceptionV1 as well as InceptionV3. Does anyone know where this comes from? I don't see these values in the InceptionV1 preprocessing code.
@ebwendol The specific pre-trained weights we use were trained on that input range. We are not using the weights trained with the slim reimplementation. @colah is currently working on getting more models and more versions of models into modelzoo—check out #85 if you're curious. In particular, this will include slim's InceptionV1, with the value range that you'd expect from a slim model—(-1,1).
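For intuition, Lucid parameterizes images in [0, 1] and linearly rescales them into the model's `image_value_range` before feeding the graph; conceptually it is equivalent to this sketch (a paraphrase, not Lucid's actual code). A range of `(-117, 255-117)` then corresponds to raw [0, 255] pixels with a mean pixel value of about 117 subtracted, as in the original GoogLeNet preprocessing.

```python
def map_to_value_range(img, value_range=(-117, 255 - 117)):
    """Linearly rescale an image parameterized in [0, 1] into the
    model's expected input range; for InceptionV1 this is (-117, 138).
    """
    lo, hi = value_range
    return img * (hi - lo) + lo
```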
@ludwigschubert Thank you very much for clarifying this!
Hi everyone. I ran into the same issue with the input size. Is there a simpler way to solve it than keeping two models, one with fixed and one with unconstrained sizes? P.S. I am rather new to TF, Keras, etc.
Hi @colah! Thanks for the heads-up. I've just moved back to Brazil (I left the PhD at UChicago), so I am still settling in. I'll let you know once I test it, but this is a nice contribution :)
I'm trying to open an autoencoder model I've trained myself in Lucid, using the notebook Importing a graph into modelzoo as a reference. I'm mostly in doubt about how to use the provided class: what should I define as `image_shape` and `image_value_range`? For which images am I setting these? The output of a certain convolutional layer? Also, what is `input_name` for?
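Pulling the thread's answers together, a hypothetical sketch of how those fields might be filled in for a custom model (the path, shape, range, and node name below are placeholders for your own model):

```python
from lucid.modelzoo.vision_base import Model

class MyAutoencoder(Model):
    # Frozen GraphDef exported from training (hypothetical path):
    model_path = "my_model.pb"
    # image_shape: the input size used during training:
    image_shape = [256, 256, 3]
    # image_value_range: the numeric range of the *training inputs*,
    # e.g. (0, 1) for normalized images or (0, 255) for raw pixels;
    # it describes the model's expected input, not any hidden layer:
    image_value_range = (0, 1)
    # input_name: the graph's input placeholder node, which Lucid
    # replaces with the image it optimizes:
    input_name = "input"
```

After that, `model = MyAutoencoder(); model.load_graphdef()` should make the model usable with `render.render_vis`, as in the importing notebook.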