Experience
I wanted to process a video of 11 frames, so I modified the config.
But when I ran the script, I got "CUDA out of memory". I tried to modify the code by splitting the data into batches manually (in `test_vid2vid_zero.py`).

After the modification, I still got errors indicating that the shapes of some tensors were not aligned. I found it is because when you pass `validation_data` to `validation_pipeline`, `video_length` is also passed to that function. To allow batch processing, I had to pop `video_length` from `validation_data` and pass the current batch size to the function instead. I didn't want to modify the code any further.
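Roughly, what I tried looks like the sketch below. This is only an illustration under my own assumptions: `frames_per_batch` and `run_validation_in_batches` are made-up names, and the `ddim_inv_latent` slicing and the `.videos` attribute are guesses at the call site, not the script's actual code.

```python
# Hypothetical sketch of manual batching in test_vid2vid_zero.py.
# Assumes validation_data is the dict loaded from the config and that
# validation_pipeline is called with it as keyword arguments.
import torch

def run_validation_in_batches(validation_pipeline, validation_data, ddim_inv_latent,
                              frames_per_batch=4):
    """Run the pipeline a few frames at a time instead of all at once."""
    video_length = validation_data.pop("video_length")  # don't forward the full length
    outputs = []
    for start in range(0, video_length, frames_per_batch):
        cur = min(frames_per_batch, video_length - start)
        with torch.no_grad():
            sample = validation_pipeline(
                **validation_data,
                video_length=cur,  # pass the current batch size instead
                latents=ddim_inv_latent[:, :, start:start + cur],  # slice the frame axis
            ).videos
        outputs.append(sample)
    return torch.cat(outputs, dim=2)  # reassemble along the frame dimension
```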
Summary

Currently, users cannot set the batch size for the test data. All the frames are processed in a single batch, which may lead to "CUDA out of memory". I think you should refactor the code in the `Dataset` class and let a `DataLoader` manage the batching.
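To make the suggestion concrete, here is a rough sketch of what I mean; `FrameDataset` and `run_validation` are made-up names, not classes from this repo.

```python
# Hypothetical illustration of letting DataLoader own the batching.
import torch
from torch.utils.data import Dataset, DataLoader


class FrameDataset(Dataset):
    """Yields one frame at a time."""

    def __init__(self, frames):  # frames: tensor of shape (F, C, H, W)
        self.frames = frames

    def __len__(self):
        return self.frames.shape[0]

    def __getitem__(self, idx):
        return self.frames[idx]


def run_validation(validation_pipeline, validation_data, frames, batch_size):
    loader = DataLoader(FrameDataset(frames), batch_size=batch_size, shuffle=False)
    validation_data = dict(validation_data)
    validation_data.pop("video_length", None)  # the loader decides the chunk size now
    outputs = []
    for batch in loader:  # batch: (B, C, H, W)
        with torch.no_grad():
            out = validation_pipeline(**validation_data, video_length=batch.shape[0])
        outputs.append(out)
    return outputs
```

This way the test batch size becomes a config option instead of always being the full number of frames.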