Visualization #13
I'd send you some Python code for doing the visualization, but I no longer have access to the system :( On the bright side, it's not too hard (and in fact much easier!) to load the embeddings using numpy and display them in the Tensorboard projector or an IPython notebook. Please let me know if you have any questions about doing either of these!
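A minimal sketch of the approach described above: load the saved embeddings with numpy and export them as the vectors/metadata TSV pair that the Tensorboard projector accepts. The file names and the assumption that the embeddings were dumped as a 2-D `.npy` array are hypothetical, not from the im2recipe code itself.

```python
import numpy as np

def export_for_projector(emb_path="img_embeds.npy", out_prefix="embeds"):
    """Write the TSV pair the Tensorboard projector can load.

    Assumes emb_path holds a 2-D float array, one row per sample
    (the actual file name/shape depend on how you saved the embeddings).
    """
    embeds = np.load(emb_path)  # shape: (n_samples, dim)
    # vectors.tsv: one tab-separated embedding per line
    np.savetxt(out_prefix + "_vectors.tsv", embeds, delimiter="\t")
    # metadata.tsv: one label per line (here just a placeholder index)
    with open(out_prefix + "_metadata.tsv", "w") as f:
        for i in range(embeds.shape[0]):
            f.write("sample_%d\n" % i)
    return embeds.shape
```

The same arrays can of course be scattered directly in a notebook with matplotlib instead of going through the projector.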
@nhynes Thank you so much for the reply! Actually, I need to visualize this in a web app; honestly, I'm not sure which option is better for that case. Can you please advise?
Ah, a web app is going to require a bit of trickery to display the results on the page, but the backend which generates the results is much more easily done in Python. Do you need the results in real time, or do you just want to see what you get for your own viewing? In the latter case, a Jupyter notebook would be ideal since you can plot the images inline. If you want to be able to serve im2recipe results to users, I had a good experience creating a lightweight Twisted webserver which sent the image links to a JavaScript frontend for viewing. Alas, I no longer have the code that does this or I would gladly give it to you!
I'm OK with running a backend server and frontend UI.
I want the former. The flow would be: the user uploads a recipe image, and the server (Python backend) returns the image processed through the model.
Yes, exactly this. What I'm looking for is a simple command-line way to request results from the im2recipe model, which I can then serve to my users.
Gotcha. Yeah, the best way I've found to do that is to convert the model to CPU and then create a worker pool of model executors (to service concurrent requests; the latency of a single request is pretty high). Then you just pop a webserver on top of that (e.g., Tornado) and have it send each request to the pool, wait for a response, and send the API response back to the client.
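The pool-behind-a-webserver pattern above can be sketched with the standard library alone. Here `run_model` is a stub standing in for actual im2recipe inference (with a real PyTorch model you would call `model.cpu()` once at startup and share it across workers); the function names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def run_model(image_bytes):
    """Stand-in for model inference; returns a fake list of recipe ids."""
    return ["recipe_%d" % (len(image_bytes) % 10)]

# The worker pool of model executors that services concurrent requests.
pool = ThreadPoolExecutor(max_workers=4)

def handle_request(image_bytes):
    # The webserver handler submits the job to the pool, blocks until
    # the result is ready, and returns it as the API response body.
    future = pool.submit(run_model, image_bytes)
    return future.result()
```

In a real deployment the webserver (Tornado, Twisted, etc.) would call `handle_request` from its request handler and serialize the result to JSON.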
@nhynes Great. Here are a couple more questions for you.
Thank you for the help so far!
If you have a GPU sitting around waiting for requests, then more power to you! For deployment, I found CPU boxes more reliable, but that was mostly because they worked better with Kerberos logins 📦
They're just threads in a thread pool that you call into when you receive a request on the main webserver thread. Basically, just do round-robin scheduling.
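A tiny sketch of round-robin dispatch over worker threads, under the assumption of one input queue per worker; `process_request` is a hypothetical stand-in for the model call.

```python
import itertools
import queue
import threading

def process_request(x):
    """Placeholder for the actual model inference on one request."""
    return x * 2

NUM_WORKERS = 3
inboxes = [queue.Queue() for _ in range(NUM_WORKERS)]  # one inbox per worker
results = queue.Queue()

def worker(inbox):
    while True:
        item = inbox.get()
        if item is None:  # shutdown sentinel
            break
        results.put(process_request(item))

threads = [threading.Thread(target=worker, args=(q,)) for q in inboxes]
for t in threads:
    t.start()

rr = itertools.cycle(inboxes)  # the round-robin scheduler

def dispatch(item):
    # Called from the main webserver thread for each incoming request.
    next(rr).put(item)
```

Shutting down is just a matter of putting the `None` sentinel on each inbox and joining the threads.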
I've taken a course project at my university (University of Science_HCMUS) in which we have to follow and analyze a scientific paper on a topic of our choice, and I chose your paper since I am a food lover. I am now running your pretrained model (model_e500_v-8.950.pth.tar) successfully. However, I would like to see a visualization of this trained model and to view the recipe dataset; I've tried a lot of ways to open the .mdb file on Ubuntu, but none has worked. Moreover, I wonder, if I want to make a new prediction, what image size should I prepare? Please give me some advice.
In the Lua version, there is a section for visualizing the trained model. How can this be achieved with PyTorch once the model is trained?