
Improving NLS Solvers #236

Merged
merged 5 commits into from
Oct 16, 2023

Conversation

avik-pal
Member

@avik-pal avik-pal commented Oct 14, 2023

  • Add a wrapper over LeastSquaresOptim
  • Sparse mul!(A, J', J) seems slower than A .= J' * J (@ChrisRackauckas any idea why?)
  • Avoid spurious repeated factorizations
  • Add tests
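
The slowdown in the second bullet can be demonstrated with a minimal sketch (the matrix sizes and sparsity below are illustrative, not taken from the PR):

```julia
using SparseArrays, LinearAlgebra

# Hypothetical sparse residual Jacobian J and dense normal-matrix cache A
J = sprand(1_000, 200, 0.01)
A = zeros(200, 200)

# In-place: for a sparse J this can fall back to a generic matmul path
mul!(A, J', J)

# Out-of-place: J' * J dispatches to the specialized sparse-sparse kernel,
# then the result is copied into the cache by broadcasting
A .= J' * J
```

Timing the two lines (e.g. with BenchmarkTools' `@btime`) is the comparison the bullet refers to.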

@codecov

codecov bot commented Oct 14, 2023

Codecov Report

Merging #236 (1c19fa7) into master (a6af39c) will increase coverage by 73.03%.
The diff coverage is 89.28%.

@@             Coverage Diff             @@
##           master     #236       +/-   ##
===========================================
+ Coverage   20.19%   93.22%   +73.03%     
===========================================
  Files           9       12        +3     
  Lines         832      901       +69     
===========================================
+ Hits          168      840      +672     
+ Misses        664       61      -603     
Files | Coverage Δ
src/gaussnewton.jl 75.00% <100.00%> (+75.00%) ⬆️
src/levenberg.jl 98.42% <100.00%> (+98.42%) ⬆️
ext/NonlinearSolveLeastSquaresOptimExt.jl 95.83% <95.83%> (ø)
src/NonlinearSolve.jl 85.00% <0.00%> (+0.78%) ⬆️
src/utils.jl 77.77% <50.00%> (+19.82%) ⬆️
ext/NonlinearSolveFastLevenbergMarquardtExt.jl 91.66% <91.66%> (ø)
src/algorithms.jl 85.71% <85.71%> (ø)
src/jacobian.jl 86.15% <77.77%> (+5.82%) ⬆️

... and 4 files with indirect coverage changes


@ChrisRackauckas
Member

A .= J' * J

What's this actually used for? Generally that's a bad idea.

@avik-pal
Member Author

For GaussNewton and LM when the problem is an NLLS. For an NLProblem we can get around it by dropping the $J^T$ term on both sides.
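
Spelling out the simplification: Gauss-Newton (and LM, up to damping) solves the normal equations of the least-squares subproblem, and when the problem is square with an invertible Jacobian the $J^\top$ factor cancels from both sides:

```latex
% NLLS (rectangular J): Gauss-Newton step solves the normal equations
J^\top J \,\delta u = -J^\top f(u)

% NLProblem (square, invertible J): drop J^\top on both sides
J \,\delta u = -f(u)
```

In the square case this avoids forming $J^\top J$ entirely, which is why the expensive product only matters for the NLLS path.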

@avik-pal
Member Author

I added that dispatch for the sparse-arrays case because the in-place version hits the generic matmul code.

@avik-pal
Member Author

For dense matrices it seems to hit the correct BLAS dispatch (syrk), but it doesn't for banded or sparse matrices (there seems to be a version in the Intel MKL docs, but I don't know how to access it).
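
For the dense case, the syrk kernel mentioned here can also be called explicitly; a minimal sketch (array sizes are illustrative):

```julia
using LinearAlgebra

J = rand(1_000, 200)
A = zeros(200, 200)

# syrk! computes A = 1.0 * J' * J + 0.0 * A, filling only the
# upper triangle and exploiting the symmetry of the product
BLAS.syrk!('U', 'T', 1.0, J, 0.0, A)

# Wrap the result so downstream code sees the full symmetric matrix
JtJ = Symmetric(A, :U)
```

No analogous exported kernel exists for `SparseMatrixCSC` or banded storage in the stdlib, which is the gap being described.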

@avik-pal
Member Author

Let's hold off on merging this. We should be specializing the J' * J matmul to use symmetric factorizations.
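
A sketch of what that specialization could look like: wrapping the normal matrix as `Symmetric` lets the linear solve use a Cholesky factorization instead of a generic LU (this assumes $J^\top J$ is positive definite; names and sizes are illustrative, not the PR's implementation):

```julia
using LinearAlgebra

J = rand(100, 20)
f = rand(100)

# Tagging the product as symmetric allows symmetry-aware factorizations
JtJ = Symmetric(J' * J)

# Cholesky of the SPD normal matrix; solves J'J * δ = -J'f
δ = -(cholesky(JtJ) \ (J' * f))
```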

@avik-pal avik-pal force-pushed the ap/lsoptim branch 2 times, most recently from 564950a to 1a97686 on October 15, 2023 23:51
@avik-pal
Member Author

avik-pal commented Oct 16, 2023

Should be good to go!

While I am at it, let me wrap FastLM as well.

@avik-pal
Member Author

@ChrisRackauckas I will handle the J' * J thing in a later PR. The caching stuff makes the change non-trivial since we have to specialize on which linsolve is being used.

@ChrisRackauckas ChrisRackauckas merged commit 1a0e5ee into SciML:master Oct 16, 2023
9 of 10 checks passed
@avik-pal avik-pal deleted the ap/lsoptim branch October 17, 2023 20:55