
np.square(knn.kneighbors(clsts, 2)[1]) Should be changed? #84

Open
wpumain opened this issue Mar 2, 2023 · 3 comments

Comments


wpumain commented Mar 2, 2023

dsSq = np.square(knn.kneighbors(clsts, 2)[1])

Should this code be changed to this?
dsSq = np.square(knn.kneighbors(clsts, 2)[0])

knn.kneighbors(clsts, 2) returns two ndarrays: the first contains the distance values, the second the index values.

What we need here is the distance, not the index, right?
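
For reference, a minimal check with scikit-learn's NearestNeighbors (assuming that is the knn class used here) confirms the return order:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# toy points standing in for the descriptors / cluster centres
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])

knn = NearestNeighbors()
knn.fit(pts)

dists, idxs = knn.kneighbors(pts, 2)  # returns (distances, indices), in that order
print(dists)  # dists[:, 0] is 0.0 -- each point's distance to itself
print(idxs)   # integer indices into the fitted data
```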

Nanne (Owner) commented Mar 9, 2023

Thanks for spotting this! Looks like it should indeed be changed to that; would you mind submitting a PR?

wpumain (Author) commented Mar 10, 2023

Sorry, my understanding of NetVLAD is not deep enough. For example, I don't understand why the value of self.alpha is set this way in your program:

self.alpha = (-np.log(0.01) / np.mean(dsSq[:,1] - dsSq[:,0])).item()

fantasy567 commented

@wpumain

Sorry, my understanding of NetVLAD is not deep enough. For example, I don't understand why the value of self.alpha is set this way in your program:

self.alpha = (-np.log(0.01) / np.mean(dsSq[:,1] - dsSq[:,0])).item()

Here, appendix A of the original paper mentions that "The α parameter used for initialization is chosen to be large, such that the soft assignment weights ak(xi) are very sparse in order to mimic the conventional VLAD well. Specifically, α is computed so that the ratio of the largest and the second largest soft assignment weight ak(xi) is on average equal to 100." Starting from equation 2 in the original paper, you can derive the formula for setting α at initialization once the initial centres and training descriptors are available.
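
For concreteness, a short sketch of that derivation, writing d_{i,1} <= d_{i,2} for the distances from descriptor x_i to its nearest and second-nearest centre:

```latex
% Soft assignment (equation 2 of the paper):
\bar{a}_k(x_i) = \frac{e^{-\alpha \lVert x_i - c_k \rVert^2}}
                      {\sum_{k'} e^{-\alpha \lVert x_i - c_{k'} \rVert^2}}
% Ratio of the largest to the second-largest weight for x_i:
\frac{\bar{a}_{(1)}(x_i)}{\bar{a}_{(2)}(x_i)} = e^{\alpha \,(d_{i,2}^2 - d_{i,1}^2)}
% Requiring this ratio to be 100, using the mean squared-distance gap:
\alpha \,\operatorname{mean}_i\bigl(d_{i,2}^2 - d_{i,1}^2\bigr) = \ln 100 = -\ln 0.01
\quad\Longrightarrow\quad
\alpha = \frac{-\ln 0.01}{\operatorname{mean}_i\bigl(d_{i,2}^2 - d_{i,1}^2\bigr)}
```

This is exactly -np.log(0.01) / np.mean(dsSq[:,1] - dsSq[:,0]) when dsSq holds the squared distances to the nearest and second-nearest centre.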
By the way, the line self.alpha = (-np.log(0.01) / np.mean(dsSq[:,1] - dsSq[:,0])).item() is right, but dsSq should come from searching for the nearest and second-nearest centres of each training descriptor, rather than the current
dsSq = np.square(knn.kneighbors(clsts, 2)[1])
Only then do the soft assignment weights become sparse enough to mimic the conventional hard assignment, where the nearest centre gets weight 1 and all the others get 0.
This is my understanding; I hope it is helpful to you.
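
A minimal sketch of that initialization, assuming scikit-learn's NearestNeighbors and the clsts / traindescs arrays from the snippets above (the helper name init_alpha is just for illustration):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def init_alpha(clsts, traindescs):
    """Sketch: pick alpha so the ratio of the largest to the second-largest
    soft-assignment weight is roughly 100 on average (NetVLAD appendix A)."""
    knn = NearestNeighbors(n_jobs=-1)
    knn.fit(clsts)  # fit on the cluster centres ...
    # ... and query with the training descriptors; [0] selects the distances
    # (not the indices) to the nearest and second-nearest centre
    dsSq = np.square(knn.kneighbors(traindescs, 2)[0])
    return (-np.log(0.01) / np.mean(dsSq[:, 1] - dsSq[:, 0])).item()
```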
Anyway, @Nanne, I sincerely appreciate the nice work.
