@@ -8,7 +8,7 @@ On the test example:
```@example
using NonlinearSolve, BenchmarkTools

- N = 100_000;
+ const N = 100_000;
levels = 1.5 .* rand(N);
out = zeros(N);
myfun(x, lv) = x * sin(x) - lv
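
The only code change in the hunk above is marking the global `N` as `const`. A standard Julia performance fact, sketched below outside this PR: a non-`const` global has no fixed type, so functions that read it fall back to dynamic dispatch and benchmark slower. The names `g_untyped` and `g_const` are made up for illustration.

```julia
using BenchmarkTools

N = 1000          # non-const global: its type may change, so reads are untyped
const Nc = 1000   # const global: type is fixed, so reads are fully inferred

g_untyped() = sum(i for i in 1:N)   # pays dynamic dispatch on every access to N
g_const() = sum(i for i in 1:Nc)    # compiles to a tight, type-stable loop

@btime g_untyped()
@btime g_const()
```
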
@@ -31,8 +31,62 @@
@btime f2(out, levels, 1.0)
```

- MATLAB 2022a achieves 1.66s. Try this code yourself: we receive 0.06 seconds, or a 28x speedup.
- This example is still not optimized in the Julia code, and we expect an improvement in a near
- future version.
+ MATLAB 2022a achieves 1.66s. Try this code yourself: we receive 0.009 seconds, or a 184x
+ speedup.

For more information on performance of SciML, see the [SciMLBenchmarks](https://docs.sciml.ai/SciMLBenchmarksOutput/stable/).
+
+ ## The solver tried to set a Dual Number in my Vector of Floats. How do I fix that?
+
+ This is a common problem that occurs if the code was not written to be generic based on the
+ input types. For example, consider this code, taken from
+ [this issue](https://github.com/SciML/NonlinearSolve.jl/issues/298):
+
+ ```@example dual_error_faq
+ using NonlinearSolve, Random
+
+ function fff_incorrect(var, p)
+     v_true = [1.0, 0.1, 2.0, 0.5]
+     xx = [1.0, 2.0, 3.0, 4.0]
+     xx[1] = var[1] - v_true[1]   # fails when var holds Dual numbers: xx is a Vector{Float64}
+     return var - v_true
+ end
+
+ v_true = [1.0, 0.1, 2.0, 0.5]
+ v_init = v_true .+ randn!(similar(v_true)) * 0.1
+
+ prob_oop = NonlinearLeastSquaresProblem{false}(fff_incorrect, v_init)
+ try
+     sol = solve(prob_oop, LevenbergMarquardt(); maxiters = 10000, abstol = 1e-8)
+ catch e
+     @error e
+ end
+ ```
+
+ Essentially what happened was that NonlinearSolve checked whether it could use ForwardDiff.jl
+ to differentiate the function based on the input types. However, this function has
+ `xx = [1.0, 2.0, 3.0, 4.0]` followed by `xx[1] = var[1] - v_true[1]`, where `var` might
+ be a Dual number. This is what causes the error.
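
To see the failure in isolation, here is a minimal sketch (plain Julia, not part of the docs page above): storing a `ForwardDiff.Dual` in a `Vector{Float64}` requires converting the Dual to a `Float64`, no such method exists, and `setindex!` therefore throws a `MethodError`.

```julia
using ForwardDiff

xx = [1.0, 2.0, 3.0, 4.0]        # Vector{Float64}: has no slot for partial derivatives
d = ForwardDiff.Dual(1.0, 1.0)   # a Dual carrying the value 1.0 and one partial

try
    xx[1] = d                    # setindex! calls convert(Float64, d), which does not exist
catch e
    @error e                     # MethodError: no method matching Float64(::ForwardDiff.Dual{...})
end
```
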
+
+ To fix it:
+
+ 1. Specify the `autodiff` to be `AutoFiniteDiff`
+
+ ```@example dual_error_faq
+ sol = solve(prob_oop, LevenbergMarquardt(; autodiff = AutoFiniteDiff()); maxiters = 10000,
+     abstol = 1e-8)
+ ```
+
+ This works, but finite differencing is not the recommended approach in any scenario. Instead,
+ rewrite the function to use
+ [PreallocationTools.jl](https://github.com/SciML/PreallocationTools.jl) (a sketch of that
+ approach follows the example below) or write it as
+
+ ```@example dual_error_faq
+ function fff_correct(var, p)
+     v_true = [1.0, 0.1, 2.0, 0.5]
+     xx = eltype(var)[1.0, 2.0, 3.0, 4.0]   # eltype follows var, so Dual numbers can be stored
+     xx[1] = var[1] - v_true[1]
+     return xx - v_true
+ end
+
+ prob_oop = NonlinearLeastSquaresProblem{false}(fff_correct, v_init)
+ sol = solve(prob_oop, LevenbergMarquardt(); maxiters = 10000, abstol = 1e-8)
+ ```
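
For the PreallocationTools.jl route mentioned above, here is a sketch under the assumption that a `DiffCache` wrapping the scratch array fits the use case; the names `xx_cache` and `fff_prealloc` are made up for illustration. `get_tmp` returns a buffer whose element type matches `var`, so it can hold ForwardDiff's Dual numbers during automatic differentiation.

```julia
using NonlinearSolve, PreallocationTools, Random

# One cache holding both a Float64 buffer and a Dual-compatible buffer.
const xx_cache = DiffCache(zeros(4))

function fff_prealloc(var, p)
    v_true = [1.0, 0.1, 2.0, 0.5]
    xx = get_tmp(xx_cache, var)    # scratch array with eltype matching var
    xx .= (1.0, 2.0, 3.0, 4.0)
    xx[1] = var[1] - v_true[1]
    return xx .- v_true
end

v_true = [1.0, 0.1, 2.0, 0.5]
v_init = v_true .+ randn!(similar(v_true)) * 0.1

prob_prealloc = NonlinearLeastSquaresProblem{false}(fff_prealloc, v_init)
sol = solve(prob_prealloc, LevenbergMarquardt(); maxiters = 10000, abstol = 1e-8)
```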