This is great work. I have some questions about the forward function in moviechat.py. In the forward function, only the encode_videoQformer_visual function is used, but this function seems to directly merge all the video frames by similarity until 256 frames remain, which appears inconsistent with the description in the paper.
Thanks for your question! We don't use the encode_videoQformer_visual and forward functions in our model. We chose VideoLLaMA as our base model, and some of its functions are unused. Sorry for the confusion; we will remove them in the next version.
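For readers following along, the kind of similarity-based frame consolidation discussed above can be sketched as follows. This is a hypothetical illustration only, not the repository's actual implementation: it greedily averages the most similar adjacent pair of frame embeddings until a target count remains.

```python
import numpy as np

def merge_similar_frames(frames: np.ndarray, target: int) -> np.ndarray:
    """Greedily merge the most similar adjacent frame embeddings
    until only `target` frames remain (illustrative sketch, not
    the MovieChat codebase's actual function)."""
    frames = [f.astype(np.float64) for f in frames]
    while len(frames) > target:
        # Cosine similarity between each adjacent pair of embeddings.
        sims = [
            np.dot(frames[i], frames[i + 1])
            / (np.linalg.norm(frames[i]) * np.linalg.norm(frames[i + 1]) + 1e-8)
            for i in range(len(frames) - 1)
        ]
        i = int(np.argmax(sims))          # most redundant adjacent pair
        merged = (frames[i] + frames[i + 1]) / 2.0  # average the pair
        frames[i : i + 2] = [merged]      # replace the pair with its mean
    return np.stack(frames)

# Example: reduce 10 frame embeddings of dimension 8 down to 4.
reduced = merge_similar_frames(np.random.default_rng(0).normal(size=(10, 8)), 4)
```

Each iteration removes exactly one frame, so reducing N frames to a target of 256 takes N − 256 merges.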