Hi, thanks for your great work.
I have a question about the batch sizes used in the experiments: 16 for protocol 1 and 8 for protocol 2. Researchers in head pose estimation seem to prefer small batch sizes, but as far as I know, training is usually more stable with a larger batch size.
Did you run any experiments on how the batch size affects the final performance of the model?
Hello, I don't have an experiment sweeping the batch size, but we do observe that a small batch size works much better for head pose learning. Compared with the usual understanding of batch size in tasks such as image classification, head pose is a concept shared across all training samples, whereas general classification involves high-level semantic categories. It is possible that the different natures of the tasks lead to these results.
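For anyone who wants to check this themselves, a minimal batch-size ablation could look like the sketch below. The model, dataset, learning rate, and epoch count are placeholders rather than this repository's actual training code; only the structure of the sweep is the point.

```python
# Minimal batch-size ablation sketch (assumed setup, not the repo's pipeline).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def build_model():
    # Stand-in regressor: image features -> (yaw, pitch, roll).
    return nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 3))

# Synthetic stand-in data so the sketch runs end to end.
features = torch.randn(1024, 512)
angles = torch.randn(1024, 3)
train_set = TensorDataset(features[:896], angles[:896])
val_set = TensorDataset(features[896:], angles[896:])

for batch_size in (8, 16, 32, 64):       # values to ablate
    torch.manual_seed(0)                  # same init for a fair comparison
    model = build_model()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    for epoch in range(5):
        for x, y in loader:
            opt.zero_grad()
            loss = nn.functional.l1_loss(model(x), y)
            loss.backward()
            opt.step()
    with torch.no_grad():
        xv, yv = val_set.tensors
        mae = (model(xv) - yv).abs().mean().item()
    print(f"batch_size={batch_size:3d}  val MAE={mae:.4f}")
```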