Many problems with video presentation #358
This will also be essential to figure out in order to present video on the monitor with the corresponding audio from the TDT.
For the record, I never got it satisfactorily working in my initial attempts, so I'm not surprised you're struggling. I bailed when I realized I didn't actually need AV synchrony, and used a hacky solution of hiding the expyfun window and popping up a borderless full-screen VLC window to play the video. That doesn't mean we can't get this to work, but it will probably require some additions to / overhaul of the video code.
For the time sync, would providing our own clock with t=0 as the start time help?
@drammock Yeah, it is pretty much essential for us now. We have many upcoming experiments where playing natural videos is the only way to go, and they have to be time synced.
The sync itself should not be difficult. It was essentially solved with 1v2a. What we need to be able to do is just use a pyglet video source as a container for a sequence of frames, pull the frames out one by one, and flip them at the time we deem correct.
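A minimal sketch of that idea, assuming pyglet's `Source.get_next_video_timestamp()` / `get_next_video_frame()` API (`movie.mp4` is a placeholder filename):

```python
# Hedged sketch: use a pyglet video source as a container of frames and
# pull them out one by one, drawing each whenever we decide it is due.
import pyglet

source = pyglet.media.load('movie.mp4')
while True:
    ts = source.get_next_video_timestamp()  # None once the stream is exhausted
    if ts is None:
        break
    frame = source.get_next_video_frame()  # an image we can draw and flip
    # ... draw `frame` and flip it at the time we deem correct ...
```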
Sooooooo. I just took maybe 45 seconds to see if I could do what I want to do with `opencv`. I don't love the idea of adding a dependency, but damn. No AVBin, no trouble, just reading frames in. And it happens way way faster than real time, so it shouldn't hold anything up.
Here is a very simple example that reads in all the frames of a video using `opencv`.
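Something along these lines (a minimal sketch, not the original snippet; `movie.mp4` is a placeholder):

```python
# Read every frame of a video into memory with OpenCV.
import cv2

cap = cv2.VideoCapture('movie.mp4')
frames = []
while True:
    ok, frame = cap.read()  # frame is a (height, width, 3) BGR uint8 array
    if not ok:
        break  # end of video
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # convert to RGB
cap.release()
print('read %d frames' % len(frames))
```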
Note that your RAM will die if you do a long video. There is no need in practice to store all the frames.
When you figure out how to do this with opencv, can you turn it into an example? Bonus points if you can find and use the original McGurk stimuli to do it with.
Are you saying we should use `opencv`?
Yeah, that seems like a reasonable approach to me.
@sfiscell you probably already know this, but in case not: the video quality problem probably arises because (1) the chosen video codec only encodes pixels that changed from the previous frame, and (2) the expyfun flip operation clears all those non-changed pixels before displaying the new frame. Results might be better with a video format like AVI, which encodes every pixel in every frame. But if we want to get this working quickly and support a wider range of formats, OpenCV may be the way to go.
Here is a gist that gives the idea of what I am thinking of. It should not be considered an example of how to do this. I need to figure out how to get the timing right and always stay one frame ahead and skip frames when necessary etc etc. But this should at least make Mark's head spin.
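The rough shape of that timing logic, as a hypothetical sketch (not the gist itself; `draw_frame` and `flip` stand in for the experiment controller's drawing and flip calls):

```python
# Hypothetical sketch: on each pass, show whichever frame should be on
# screen *now*, skipping any frames we are late for.
import time

def play(frames, fps, draw_frame, flip):
    t0 = time.monotonic()
    shown = -1
    while True:
        due = int((time.monotonic() - t0) * fps)  # index of the frame due now
        if due >= len(frames):
            break
        if due > shown:  # drop frames if we have fallen behind
            draw_frame(frames[due])
            shown = due
        flip()  # returns at the next monitor refresh
```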
I plan to work more on this tomorrow to make it reasonable.
Update on this: OpenCV works great for loading the frames into numpy arrays. However, getting these into pyglet RawImage objects (even if we try to write over the texture directly) takes much longer than a frame, so we cannot do it in real time. This leads to about 3 or 4 s worth of video loading before every trial, which really isn't workable. Our current thought is perhaps pickling the textures right before running the experiment to avoid converting them during it. But would that work, or does making the texture also load stuff into GPU memory? I'm frankly shocked that, starting with the data in the proper format, it could take so long just to get it on the screen. Any recommendations greatly appreciated -- we are slated to start collecting data early next week, and @sfiscell needs to have a respectable number of subjects collected by the end of the week to present for her summer project's final poster.
It needs to upload to the GPU, so no pickle. The slow part could be the shader compilation, not data upload. Did you try creating a single RawImage and just updating the new data? That should be fast enough, I hope.
... although looking at the code, even … What probably would be fast is using `pyglet.image` directly: http://pyglet.readthedocs.io/en/pyglet-1.3-maintenance/modules/image/index.html
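A hedged sketch of that suggestion: one `pyglet.image.ImageData` created up front and its pixels overwritten every frame (pyglet 1.x API; the size is a placeholder):

```python
# One reusable ImageData: update the pixel data in place rather than
# building a new image object for every frame.
import numpy as np
import pyglet

w, h = 640, 480
frame = np.zeros((h, w, 3), dtype=np.uint8)  # stand-in for an OpenCV frame

# Negative pitch: rows run top-to-bottom, matching numpy/OpenCV layout.
img = pyglet.image.ImageData(w, h, 'RGB', frame.tobytes(), pitch=-w * 3)

def show_frame(new_frame):
    img.set_data('RGB', -w * 3, new_frame.tobytes())  # overwrite pixels
    img.blit(0, 0)  # re-uploads to the GPU when drawn
```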
Have tried something along those lines; didn't seem to help. You know what's funny though? Despite being in the docs, there is no …
Hold on, I may be wrong on that. Will be back.
It seems like the …
To elaborate, you are right that using …
Putting this here in case it helps: https://github.com/motmot/pygarrayimage. Referenced in this Stack Overflow post: https://stackoverflow.com/questions/31030000/capture-webcam-image-using-cv2-and-pyglet-in-python.
FYI @rkmaddox Pyglet 1.4 is out. It no longer uses AVBin but rather FFmpeg. Maybe it works better. But if we want to use it, there is a lot more work to do, because they have made some backward-incompatible changes (#385). I was planning / hoping to get rid of Pyglet. But if you find that the FFmpeg video replay actually works, then that would be a good reason to support it.
The AVBin / FFmpeg change is a good thing. It turns out there was a brief schism in the FFmpeg dev community; AVBin was a splinter project that was (briefly) better, but has since been abandoned / merged back into FFmpeg. Whether it's good enough when used through pyglet to do what @rkmaddox wants is a separate question, but if you want to test it, I think we could (should?) switch our dependency back from AVBin to FFmpeg without updating the pyglet dependency (unless pyglet 1.3 hardcodes AVBin? I forget).
I don't think there is an FFmpeg source in Pyglet < 1.4: https://bitbucket.org/pyglet/pyglet/src/pyglet-1.3.2/pyglet/media/sources/
OK. I probably won't think about this until the next time we need video. This will only help with loading videos, though, not presenting them, right? Either way that will be good, since I think we ended up converting all our videos to images and then loading those in before each trial, which did introduce a longer inter-trial pause than was optimal.
FYI Pyglet has revamped their support to use FFmpeg, and I recently fixed a bug with video in #413. I'm going to assume this will magically fix everything and mark that PR to close this one, but let's reopen if there continue to be issues. @rkmaddox feel free to try (or have others try) with the new code!
FYI @rkmaddox video should work much better on all platforms on latest …
So my labbies and I are having a heck of a time getting video to work. Here is a summary of the issues as I understand them, though they will fill in the rest.

…

Those are the problems with trying to play a movie in pyglet. Strangely, the issues don't appear in the `simple_video` example.

Another, bigger, issue, which @drammock alluded to in #273, is that the sync may be off when playing videos. What we really need is access to the video frames and the ability to get the textures and flip as needed. This would also solve the problem of getting it in sync with the audio and triggers. Basically, we would handle video as we did in 1v2a, but instead of saucers it would be video frames. We have tried very hard to pull textures from videos in pyglet, but it is not working for us yet.

Labbies are currently signing up for GitHub and will post more info here.