Use Toeplitz-dot-Hankel for very large dimensional transforms #209
Comments
It's true there has been a significant regression in the lib plans (a couple of reasons for this, not all justified).

There's a library method
Most of the allocation seems to come from FastTransforms.jl/src/toeplitzhankel.jl line 111 (in daafeb3), which allocates an n x n matrix but discards most of the columns in FastTransforms.jl/src/toeplitzhankel.jl line 131 (in daafeb3).

This may be worth rethinking: e.g. in the n = 10_000 example, only 39 columns are eventually retained.
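A minimal sketch of the allocation pattern described above (a hypothetical stand-in, not the actual `toeplitzhankel.jl` code): preallocating an n x n matrix and then keeping only the leading r columns costs O(n^2) memory, whereas growing a list of columns and stacking at the end costs only O(n*r).

```python
import numpy as np

def lowrank_columns_wasteful(n, r):
    """Allocate an n x n matrix up front, then keep only r columns.

    Mirrors the pattern in toeplitzhankel.jl: most of the n x n
    allocation is discarded.  (Illustrative stand-in, not the real code.)
    """
    full = np.zeros((n, n))          # O(n^2) allocation up front
    for j in range(r):               # only r columns are ever filled
        full[:, j] = 1.0 / (j + 1)   # placeholder column data
    return full[:, :r].copy()        # the other n - r columns are thrown away

def lowrank_columns_grown(n, r):
    """Same result, but allocate only the columns actually retained."""
    cols = [np.full(n, 1.0 / (j + 1)) for j in range(r)]
    return np.column_stack(cols)     # O(n * r) total allocation

# Same output either way; only the peak allocation differs.
A = lowrank_columns_wasteful(1000, 39)
B = lowrank_columns_grown(1000, 39)
assert A.shape == B.shape == (1000, 39)
assert np.allclose(A, B)
```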
Ah, thanks, that should be an easy fix: we can just change it to 100 columns for now (we know the number of columns grows logarithmically, so this limit will probably never be reached).
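A quick back-of-the-envelope check on the proposed 100-column cap (a heuristic fit from the single data point above, not a proven bound): if the retained-column count grows like c*log(n), then n = 10_000 needing 39 columns gives c ≈ 4.2, so even n = 10^9 would need only ~88 columns, comfortably under 100.

```python
import math

# Fit columns ~ c * log(n) from the single data point above:
# n = 10_000 retained 39 columns.  (Heuristic fit, not a proven bound.)
c = 39 / math.log(10_000)

def predicted_columns(n):
    """Predicted retained columns under the c*log(n) growth model."""
    return math.ceil(c * math.log(n))

# The fit reproduces the observed data point...
assert math.isclose(c * math.log(10_000), 39)
# ...and even an enormous transform stays under the 100-column cap.
assert predicted_columns(10**9) < 100
```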
Toeplitz-dot-Hankel has better complexity, so those plans should be the default when arrays are large. E.g. for cheb2leg, even n = 1000 sees a significant speedup.

This was prematurely changed but reverted, as there were some regressions. But also the number of allocations in th_* is exorbitant, probably because it dates back to a port of Matlab code.

For matrices I'm not seeing much improvement in th_*, even for a 40k x 10 transform, which is suspicious... but doing a profile, all the time is in the FFTs.
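The "all the time is in the FFTs" observation matches what the Toeplitz-dot-Hankel approach buys: a Toeplitz matrix (and a Hankel one, after reversing columns) can be applied to a vector in O(n log n) by embedding it in a circulant of size 2n and using FFTs. A minimal numpy sketch of that building block (the underlying idea only, not the FastTransforms.jl implementation):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply the Toeplitz matrix T (first column c, first row r) by x
    in O(n log n) via circulant embedding and FFTs.

    Assumes c[0] == r[0].  Illustrative sketch, not the th_* code.
    """
    n = len(x)
    # First column of the 2n x 2n circulant that embeds T:
    # [c_0, ..., c_{n-1}, 0, r_{n-1}, ..., r_1]
    circ = np.concatenate([c, [0.0], r[:0:-1]])
    xp = np.concatenate([x, np.zeros(n)])        # zero-pad x to length 2n
    y = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(xp))
    return y[:n].real                            # top half is T @ x

# Check against a dense Toeplitz multiply.
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)                                # first column
r = np.concatenate([[c[0]], rng.standard_normal(n - 1)])  # first row
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])
x = rng.standard_normal(n)
assert np.allclose(T @ x, toeplitz_matvec(c, r, x))
```

Since the dense multiply is O(n^2) and this is O(n log n), the FFTs dominate the runtime for large n, which is consistent with the profile above.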