basic plugin functionality (up through writing to disk) #1
Comments

original comment:
Also, @jart, you mentioned not using huge pull requests...
You can develop things however you want in this repository. But since it isn't a fork, you're going to encounter things that are impossible for you to do without modifying the TensorBoard codebase. When you encounter those situations, send us small pull requests that unblock your development workflow, and we'll review them as quickly as possible. Then you bump the TensorBoard SHA1 in your WORKSPACE file and continue your work here. (Note: you can use Bazel's …) Eventually you'll get this repository in a state where it's working really well, and at that point we'll start having a conversation about upstreaming it into the TensorBoard codebase.
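The SHA-bumping workflow described above might look something like this in the plugin's WORKSPACE file (a sketch only: the repository name, URL pattern, and `<commit-sha1>` placeholder are illustrative, not taken from this repo):

```python
# WORKSPACE (Bazel) -- illustrative sketch; <commit-sha1> is a placeholder.
http_archive(
    name = "org_tensorflow_tensorboard",
    # Bump this commit whenever an upstream TensorBoard change you need lands.
    strip_prefix = "tensorboard-<commit-sha1>",
    urls = ["https://github.com/tensorflow/tensorboard/archive/<commit-sha1>.zip"],
)
```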
Can you use `tf.summary.tensor_summary` instead of creating a new summary op?
Yes, but wouldn't it be good to have a summary that is explicitly for streaming things in real time? So I have some function that accepts image frames, somehow (I have zero video experience, let alone live video) streams them to the server, which somehow streams them to the TensorBoard client?

So my plugin becomes only something that takes frames and streams them, and then... I have some separate thing, more on the TensorFlow side, that creates the images. You can use it to make GIFs or whatever, or you can send them live to TensorBoard using the plugin.
If you have it go through the summary system and it's slow, then we'll just make the summary system go fast. I'm currently working on a new data ingestion pipeline that takes summaries and writes them directly to a SQL database. I'm very interested in making this pipeline go as quickly as possible. If you build something awesome that is a little laggy, that just gives me even more incentive to make this thing as lightning fast as possible. Plus you'll get your work done quicker.
Big +1 to what Justine said about using `tf.summary.tensor_summary` rather than adding a new video summary op.
Okay, I can do tensor summaries. @jart, you mentioned earlier using ffmpeg and streaming it to the browser, and something else about using sockets. I'm not sure what you meant (I know very little about streaming) - are tensor summaries better than those ideas? Also, from what I read in the code, there is a global reload timer for reading event files, and they are cached and returned from the multiplexer on demand. Should I base timing off how quickly the client requests frames, and manually grab the tensor summary at every request? It looks like that's how the text plugin works, at least for …
The problem with doing sockets is it takes ops work for users to configure, and it wouldn't persist in a database. It's nice to persist the data. Maybe someone will want to let it train overnight and watch the video in the morning. One thing you can do to create the video is just generate it on the fly from the tensor summaries as soon as the web browser requests it. For example, you could probably pipe the raw tensor data into the ffmpeg command and then pipe the output to the web browser.
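The pipe-through-ffmpeg idea might look something like the sketch below. The flags are illustrative (not benchmarked or taken from the plugin), `frames` is assumed to be an iterable of raw RGB24 byte strings, and an `ffmpeg` binary is assumed to be on PATH:

```python
import subprocess

def ffmpeg_args(width, height, fps=10):
    """Build an ffmpeg command line that reads raw RGB24 frames on stdin
    and writes a fragmented MP4 stream to stdout (flags illustrative)."""
    return [
        "ffmpeg",
        "-f", "rawvideo", "-pix_fmt", "rgb24",
        "-s", "%dx%d" % (width, height),
        "-r", str(fps),
        "-i", "-",  # read raw frames from stdin
        "-f", "mp4", "-movflags", "frag_keyframe+empty_moov",
        "-",        # write the encoded stream to stdout
    ]

def stream_frames(frames, width, height):
    """Pipe raw frames through ffmpeg, yielding encoded chunks suitable
    for streaming in an HTTP response."""
    proc = subprocess.Popen(ffmpeg_args(width, height),
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    for frame in frames:
        proc.stdin.write(frame)
    proc.stdin.close()
    for chunk in iter(lambda: proc.stdout.read(4096), b""):
        yield chunk
```

The fragmented-MP4 muxer flags are there because plain MP4 can't be written to a non-seekable pipe; a production version would also need error handling and backpressure.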
It is far from perfect, but 4b5f245 has the user-side script writing tensor summaries.
To: @dandelionmane, @jart, @caisq
I named it Beholder (for now) because... it's a viz project, I'm a nerd, it's short, and I'm too lazy to think of anything else right now. Anyway, here it goes. It's pretty close to how @dandelionmane described it in tensorflow/tensorboard#130.
proposed design

People should push tensors to the front end with two function calls: a constructor, `Beholder`, and an `update` function. Here's the flow I'm imagining.

1. The user constructs a `Beholder`, with configuration options, including:
   - `logdir`: where the logs go.
   - `window_size`: how many frames to use for calculating variance.
   - `tensors`: a list of tensors to visualize. Default behavior is "grab everything I can find".
   - `scaling`: either `"layer"` or `"network"`. Determines how to scale the parameters for display: scales using the min/max of the layer or of the entire network.
2. The user calls `beholder.update()` in the train loop. If visualizing variance, a size-limited queue will be used to hold the `t` most recent tensors, one for each time `update` is called.
3. `logdir/beholder/mode`.
4. Frames are written to `logdir/beholder/current_frame` or something. Only one file will exist there at a time, so there's no need to worry about disk space. The current worry is more about memory, since I'm thinking of storing millions of parameters for several timesteps.

questions / response requested
- Right now I'm using `cv2.imwrite`, but I suppose I could make a tensor from the numpy array (sounds like it could be expensive, but I haven't tested anything) and then use `tf.summary.image` and a `FileWriter` to save the image.
- … `t` steps? It will be faster to do it this way.