Hi, guys. I am also using the CCA algorithm for dimensionality reduction, via the sklearn API. However, reducing from 2048 dimensions to 1048 can take more than a whole day. Is there another way to accelerate this operation?
What kind of hardware are you using? How big is the dataset? In either case, it might help to subsample the data if you have a lot of it; the linear model won't have enough capacity to model all of it anyway. Also, 1048 is still a lot of dimensions, which suggests using PCA instead. You'll have an easier time doing retrieval when the space isn't so large.
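A minimal sketch of the subsample-then-PCA suggestion above; all sizes here are hypothetical, and `svd_solver="randomized"` is one common way to keep the fit fast on wide data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.standard_normal((20_000, 512))  # stand-in for the full feature matrix

# Fit PCA on a random subsample, then project the whole dataset
idx = rng.choice(len(X), size=5_000, replace=False)
pca = PCA(n_components=128, svd_solver="randomized", random_state=0)
pca.fit(X[idx])
X_reduced = pca.transform(X)
print(X_reduced.shape)  # (20000, 128)
```

Fitting on 5k rows instead of 20k costs little accuracy for a linear model but cuts the fit time substantially.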
Hi, nhynes. Thank you for your reply. If possible, could I ask you for some details about the baseline protocols by email? I previously sent an enquiry to the address (nhynes at mit dot edu), but, unfortunately, the email was rejected.
I've since graduated, so that email is no longer active. If you have questions that you don't want to ask on GitHub, please feel free to resend to the.nhynes@<popular, definitely evil mail provider>.com