Based on the paper, I understand that when SAM is the teacher model, both the student and teacher use a 1024x1024 input; when DINO is the teacher model, both use a 432x432 input. I would like to know, during the training process:
For the student model, does it compute the loss with only one teacher model at a time, or does it compute the loss with all teacher models simultaneously?
If the loss is computed with all teacher models, how does the student model simultaneously infer results with different input sizes?
We partition the set of GPUs into low-resolution and high-resolution groups. In the low-res partition we have CLIP and DINO; in the high-res partition we have SAM. We do standard distributed data parallel, so the gradients for a batch are the average over both the low-res and high-res partitions.
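A minimal sketch of that scheme, assuming a rank-based partition assignment and the usual DDP all-reduce semantics (all names and the partition split are illustrative, not from the actual training code):

```python
# Sketch: assign each GPU rank to one resolution partition, and show that
# the DDP-averaged gradient mixes contributions from both partitions.
# LOW_RES_TEACHERS / HIGH_RES_TEACHERS and teachers_for_rank are hypothetical.

LOW_RES_TEACHERS = ["CLIP", "DINO"]   # 432x432 partition
HIGH_RES_TEACHERS = ["SAM"]           # 1024x1024 partition

def teachers_for_rank(rank: int, world_size: int, high_res_ranks: int):
    """Map a GPU rank to exactly one partition and its input resolution."""
    if rank < world_size - high_res_ranks:
        return LOW_RES_TEACHERS, (432, 432)
    return HIGH_RES_TEACHERS, (1024, 1024)

def ddp_average(per_rank_grads):
    """DDP all-reduce: the effective gradient each step is the mean over
    all ranks, so every update sees both low-res and high-res losses."""
    n = len(per_rank_grads)
    return [sum(g) / n for g in zip(*per_rank_grads)]
```

So in a given step, each rank only computes the loss against the teachers in its own partition, but the averaged gradient still reflects all teachers.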
We still have the low- and high-res partitions; however, we now run all teachers on both partitions, and we use different strategies to handle the resolution mismatch depending on the properties of the student/teacher combination. This is shown in Figure 6, as well as sections 4.2, 4.3, and 4.6.