Thanks for the package and excellent documentation!
I have a question about the best way to optimize performance (speed things up without a debilitating loss of accuracy) for my use case. Any insight you could provide would be much appreciated.
I'm using PyNNDescent as the blocking step in a record linkage algorithm. Specifically, I have a list of ~1M names (first and last) that I have converted into vector space by getting pairs of adjacent letters. E.g. "JOHN SMITH" becomes ["JO", "OH", "HN", "N ", " S", "SM", "MI", "IT", "TH"]. For each name, I want to get a complete list of similar names (something like all names with cosine distance < 0.4).
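For concreteness, the vectorization step is roughly along these lines (a sketch using scikit-learn's CountVectorizer with character bigrams; the toy names and variable names here are just for illustration, not my exact pipeline):

```python
from sklearn.feature_extraction.text import CountVectorizer

names = ["JOHN SMITH", "JON SMYTH", "JANE SMITH"]  # toy example, not my real data

# analyzer="char" with ngram_range=(2, 2) yields pairs of adjacent characters,
# including the pairs that span the space between first and last name.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(2, 2))
X = vectorizer.fit_transform(names)  # sparse matrix, one row of bigram counts per name
```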
I believe I am in the scenario outlined in the "Nearest neighbors of the training set" section of the documentation, in that I know all of the names I want to query at the time of index creation. However, because I have a lot of names and I want to use a fairly high cosine distance threshold, I have to set k to something pretty high, like 300. Naturally this makes the index creation step take a very long time, but it eliminates the query time entirely.
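Concretely, what I'm doing now is something like this (again just a sketch; `X` is the sparse bigram matrix for all ~1M names):

```python
import pynndescent

# Ask for all 300 neighbors at index-construction time and read them back
# from neighbor_graph, so no separate query pass is needed.
index = pynndescent.NNDescent(X, metric="cosine", n_neighbors=300)
neighbor_indices, neighbor_distances = index.neighbor_graph

# Keep only the neighbors within the cosine-distance threshold.
within_threshold = neighbor_distances < 0.4
```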
My question is whether you think in this case it would be more efficient to build the index with a smaller k and then query all the points in the index with the larger k=300 value. Or do you have any other suggestions for how to minimize runtime for this use case?
For such a large k, yes, there is going to be some potential benefit to building with a decently large k to ensure a quality index, and then querying. I can't say for certain what k would be good, but probably on the order of 80 to 160 would be a good start.
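In code, that approach would look something like this (a rough sketch; the exact n_neighbors value is something you'd want to experiment with):

```python
import pynndescent

# Build the index with a moderate n_neighbors to get a good-quality graph
# without paying the full cost of k=300 at construction time.
index = pynndescent.NNDescent(X, metric="cosine", n_neighbors=120)
index.prepare()  # build the search structures up front

# Then query the training data itself for the full 300 neighbors.
neighbor_indices, neighbor_distances = index.query(X, k=300)
```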
Some other options for dropping the runtime: you can cap the number of trees at something smaller than the default it would pick for a dataset of this size. I suspect 16 trees are likely enough. That can save some time. Another trick would be to L2-normalize all your vectors and use "dot" as your distance metric instead of cosine (these will be equivalent). That amortizes the normalization cost into the pre-processing instead of redoing it on every distance computation.
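Putting those two suggestions together, a sketch might look like:

```python
import pynndescent
from sklearn.preprocessing import normalize

# L2-normalize the rows once up front so "dot" ranks neighbors the same way
# cosine would, without re-normalizing inside every distance computation.
X_normed = normalize(X, norm="l2")

# Cap n_trees at 16 rather than letting the default scale with the dataset size.
index = pynndescent.NNDescent(X_normed, metric="dot", n_neighbors=120, n_trees=16)
index.prepare()
neighbor_indices, neighbor_distances = index.query(X_normed, k=300)
```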