Hi,
I am very interested in learning more about your work.
My understanding is that your motivation is to push all other instances apart. However, what happens when some of those instances actually share the same class label, given that no label information is available?
In Equation (3), I do not understand what P(i | f(x_i)) means.
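To check that I am at least parsing the notation correctly: my current reading is that Eq. (3) plugs the embedding f(x_i) into the non-parametric softmax, roughly

P(i | f(x_i)) = exp(v_i^T f(x_i) / τ) / Σ_{j=1}^{n} exp(v_j^T f(x_i) / τ)

where v_j is the stored feature of instance j, τ is the temperature, and n is the number of instances. Please correct me if I have misread this.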
Specifically, how do you sample images for each mini-batch?
Given an image in the mini-batch, you do not compare it with all other images in the mini-batch; instead, you compare it with feature vectors (one vector per class?) stored in the memory bank. Without any label information, how do you know which vector shares the same class label as the given image? To make the question concrete, below is a minimal sketch of how I currently picture the memory-bank comparison; the names (memory, feats, tau) and sizes are my own placeholders, not identifiers from your code:
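```python
import torch
import torch.nn.functional as F

n, d, tau = 10000, 128, 0.07                    # instances, feature dim, temperature (placeholder values)
memory = F.normalize(torch.randn(n, d), dim=1)  # one stored vector per instance
feats = F.normalize(torch.randn(32, d), dim=1)  # f(x_i) for a mini-batch of 32

# Each batch feature is scored against every stored vector,
# so the "classes" of the softmax are the n instances themselves.
logits = feats @ memory.t() / tau               # shape (32, n)
probs = torch.softmax(logits, dim=1)            # P(i | f(x)) over all n instances
```

Is this roughly what happens, or is the comparison restricted to a sampled subset of the bank?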
I also do not understand the concept of "non-parametric". The feature vectors stored in the memory bank can be viewed as learnable parameters, since you initialise them and update them during training. In addition, the memory bank size is C x D, i.e., class number x feature size, so the memory and computation complexities seem to be the same as for the original categorical cross-entropy loss.
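For reference, here is how I imagine those stored vectors being refreshed, which is why they look like parameters to me; the momentum value and all variable names are guesses on my part, not taken from your code:

```python
import torch
import torch.nn.functional as F

n, d, m = 10000, 128, 0.5                       # instances, feature dim, momentum (guessed value)
memory = F.normalize(torch.randn(n, d), dim=1)  # the memory bank
feats = F.normalize(torch.randn(32, d), dim=1)  # fresh features f(x_i) for the batch
idx = torch.randint(0, n, (32,))                # indices of the batch instances in the bank

# If the entries are refreshed by a running average outside of backprop,
# no gradient ever flows into `memory` itself -- is that the sense in
# which the method is non-parametric?
with torch.no_grad():
    memory[idx] = F.normalize(m * memory[idx] + (1 - m) * feats, dim=1)
```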