
Constrained optimization (IPNewton?) for big numbers #651

Open
olugovoy opened this issue Jul 20, 2018 · 12 comments

Comments

@olugovoy

Hello,
Following JuliaLang/julia#647 and the fix in JuliaLang/julia#648 (which works, thanks!): are there any available options for constrained problems with BigFloat? Any plans to add methods for bigs to IPNewton or other solvers?
Thanks again!

@anriseth
Contributor

Any plans to add methods for bigs to IPNewton or other options

Have you tried whether it works already? If it doesn't, I believe it can be enabled fairly easily by relaxing the number type specifications in the relevant IPNewton code.

I have not planned to do so myself, but it should not be too difficult, so please have a go and make a PR :)
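
For illustration, relaxing a number type specification looks roughly like this (halve_step is a made-up function, not actual IPNewton code):

# Before: hard-coded to Float64, so BigFloat input hits a MethodError
halve_step(s::Vector{Float64}) = s ./ 2

# After: any floating-point element type, so BigFloat works too
halve_step(s::AbstractVector{T}) where {T<:AbstractFloat} = s ./ T(2)

halve_step(big.([1.0, 2.0]))  # returns a Vector{BigFloat}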

@pkofod
Member

pkofod commented Jul 20, 2018

That shouldn’t be a problem to get working, if it doesn’t work already. If BigFloats don’t work, I’d consider it a bug that should be fixed.

@olugovoy
Author

Good to know it should work in general, and that the fix should be easy!
Here is the working example I'm trying to replicate with big numbers (http://julianlsolvers.github.io/Optim.jl/stable/#examples/generated/ipnewton_basics/#plain-program), but I get the "no method" error:

julia> using Optim, NLSolversBase #hide

julia> fun(x) =  (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
fun (generic function with 1 method)

julia> function fun_grad!(g, x)
           g[1] = -2.0 * (1.0 - x[1]) - 400.0 * (x[2] - x[1]^2) * x[1]
           g[2] = 200.0 * (x[2] - x[1]^2)
       end
fun_grad! (generic function with 1 method)

julia> function fun_hess!(h, x)
           h[1, 1] = 2.0 - 400.0 * x[2] + 1200.0 * x[1]^2
           h[1, 2] = -400.0 * x[1]
           h[2, 1] = -400.0 * x[1]
           h[2, 2] = 200.0
       end;

julia> x0 = big.([0.0, 0.0])
2-element Array{BigFloat,1}:
 0.000000000000000000000000000000000000000000000000000000000000000000000000000000
 0.000000000000000000000000000000000000000000000000000000000000000000000000000000

julia> df = TwiceDifferentiable(fun, fun_grad!, fun_hess!, x0)
NLSolversBase.TwiceDifferentiable{BigFloat,Array{BigFloat,1},Array{BigFloat,2},Array{BigFloat,1}}(fun, fun_grad!, NLSolversBase.fg!, fun_hess!, 0.000000000000000000000000000000000000000000000000000000000000000000000000000000, BigFloat[#undef, #undef], BigFloat[BigFloat(NaN, 256) BigFloat(NaN, 256); BigFloat(NaN, 256) BigFloat(NaN, 256)], BigFloat[BigFloat(NaN, 256), BigFloat(NaN, 256)], BigFloat[BigFloat(NaN, 256), BigFloat(NaN, 256)], BigFloat[BigFloat(NaN, 256), BigFloat(NaN, 256)], [0], [0], [0])

julia> lx = big.([-0.5, -0.5])
2-element Array{BigFloat,1}:
 -5.000000000000000000000000000000000000000000000000000000000000000000000000000000e-01
 -5.000000000000000000000000000000000000000000000000000000000000000000000000000000e-01

julia> ux = big.([0.5, 0.5])
2-element Array{BigFloat,1}:
 5.000000000000000000000000000000000000000000000000000000000000000000000000000000e-01
 5.000000000000000000000000000000000000000000000000000000000000000000000000000000e-01

julia> dfc = TwiceDifferentiableConstraints(lx, ux)
NLSolversBase.TwiceDifferentiableConstraints{NLSolversBase.##90#93,NLSolversBase.##91#94,NLSolversBase.##92#95,BigFloat}(NLSolversBase.#90, NLSolversBase.#91, NLSolversBase.#92, ConstraintBounds:
  Variables:
    x[1]≥-5.000000000000000000000000000000000000000000000000000000000000000000000000000000e-01, x[1]≤5.000000000000000000000000000000000000000000000000000000000000000000000000000000e-01, x[2]≥-5.000000000000000000000000000000000000000000000000000000000000000000000000000000e-01, x[2]≤5.000000000000000000000000000000000000000000000000000000000000000000000000000000e-01
  Linear/nonlinear constraints:)

julia> res = optimize(df, dfc, x0, IPNewton())
ERROR: MethodError: no method matching getindex(::Base.LinAlg.CholeskyPivoted{BigFloat,Array{BigFloat,2}}, ::Symbol)
Stacktrace:
 [1] convert at ./linalg/cholesky.jl:408 [inlined]
 [2] convert at ./linalg/cholesky.jl:411 [inlined]
 [3] full(::Base.LinAlg.CholeskyPivoted{BigFloat,Array{BigFloat,2}}) at ./linalg/cholesky.jl:414
 [4] solve_step!(::Optim.IPNewtonState{BigFloat,Array{BigFloat,1}}, ::NLSolversBase.TwiceDifferentiableConstraints{NLSolversBase.##90#93,NLSolversBase.##91#94,NLSolversBase.##92#95,BigFloat}, ::Optim.Options{Float64,Void}, ::Bool) at /Users/OL/.julia/v0.6/Optim/src/multivariate/solvers/constrained/ipnewton/ipnewton.jl:302
 [5] update_state!(::NLSolversBase.TwiceDifferentiable{BigFloat,Array{BigFloat,1},Array{BigFloat,2},Array{BigFloat,1}}, ::NLSolversBase.TwiceDifferentiableConstraints{NLSolversBase.##90#93,NLSolversBase.##91#94,NLSolversBase.##92#95,BigFloat}, ::Optim.IPNewtonState{BigFloat,Array{BigFloat,1}}, ::Optim.IPNewton{Optim.#backtrack_constrained_grad,Symbol}, ::Optim.Options{Float64,Void}) at /Users/OL/.julia/v0.6/Optim/src/multivariate/solvers/constrained/ipnewton/ipnewton.jl:234
 [6] optimize(::NLSolversBase.TwiceDifferentiable{BigFloat,Array{BigFloat,1},Array{BigFloat,2},Array{BigFloat,1}}, ::NLSolversBase.TwiceDifferentiableConstraints{NLSolversBase.##90#93,NLSolversBase.##91#94,NLSolversBase.##92#95,BigFloat}, ::Array{BigFloat,1}, ::Optim.IPNewton{Optim.#backtrack_constrained_grad,Symbol}, ::Optim.Options{Float64,Void}, ::Optim.IPNewtonState{BigFloat,Array{BigFloat,1}}) at /Users/OL/.julia/v0.6/Optim/src/multivariate/solvers/constrained/ipnewton/interior.jl:216
 [7] optimize(::NLSolversBase.TwiceDifferentiable{BigFloat,Array{BigFloat,1},Array{BigFloat,2},Array{BigFloat,1}}, ::NLSolversBase.TwiceDifferentiableConstraints{NLSolversBase.##90#93,NLSolversBase.##91#94,NLSolversBase.##92#95,BigFloat}, ::Array{BigFloat,1}, ::Optim.IPNewton{Optim.#backtrack_constrained_grad,Symbol}) at /Users/OL/.julia/v0.6/Optim/src/multivariate/solvers/constrained/ipnewton/interior.jl:196

The rest of the code with big.() is below; it shows the same "no method" issue in optimize():

ux = big.(fill(Inf, 2))
dfc = TwiceDifferentiableConstraints(lx, ux)

clear!(df)
res = optimize(df, dfc, x0, IPNewton())

lx =  big.(fill(-Inf, 2)); ux =  big.(fill(Inf, 2))
dfc = TwiceDifferentiableConstraints(lx, ux)

clear!(df)
res = optimize(df, dfc, x0, IPNewton())

lx =  big.(Float64[]); ux =  big.(Float64[])
dfc = TwiceDifferentiableConstraints(lx, ux)

clear!(df)
res = optimize(df, dfc, x0, IPNewton())

con_c!(c, x) = (c[1] = x[1]^2 + x[2]^2; c)
function con_jacobian!(J, x)
    J[1,1] = 2*x[1]
    J[1,2] = 2*x[2]
    J
end
function con_h!(h, x, λ)
    h[1,1] += λ[1]*2
    h[2,2] += λ[1]*2
end;

lx =  big.(Float64[]); ux =  big.(Float64[])
lc =  big.([-Inf]); uc =  big.([0.5^2])
dfc = TwiceDifferentiableConstraints(con_c!, con_jacobian!, con_h!,
                                     lx, ux, lc, uc)
res = optimize(df, dfc, x0, IPNewton())

lc =  big.([0.1^2])
dfc = TwiceDifferentiableConstraints(con_c!, con_jacobian!, con_h!,
                                     lx, ux, lc, uc)
res = optimize(df, dfc, x0, IPNewton())

function con2_c!(c, x)
    c[1] = x[1]^2 + x[2]^2     ## First constraint
    c[2] = x[2]*sin(x[1])-x[1] ## Second constraint
    c
end
function con2_jacobian!(J, x)
    # First constraint
    J[1,1] = 2*x[1]
    J[1,2] = 2*x[2]
    # Second constraint
    J[2,1] = x[2]*cos(x[1])-1.0
    J[2,2] = sin(x[1])
    J
end
function con2_h!(h, x, λ)
    # First constraint
    h[1,1] += λ[1]*2
    h[2,2] += λ[1]*2
    # Second constraint
    h[1,1] += λ[2]*x[2]*-sin(x[1])
    h[1,2] += λ[2]*cos(x[1])
    # Symmetrize h
    h[2,1]  = h[1,2]
    h
end;

x0 =  big.([0.25, 0.25])
lc =  big.([-Inf, 0.0]); uc =  big.([0.5^2, 0.0])
dfc = TwiceDifferentiableConstraints(con2_c!, con2_jacobian!, con2_h!,
                                     lx, ux, lc, uc)
res = optimize(df, dfc, x0, IPNewton())

@pkofod
Member

pkofod commented Jul 20, 2018

This is PositiveFactorizations again? I’ll look into it, thanks for bringing it up.

@anriseth
Contributor

This is PositiveFactorizations again?

I think this goes deeper than that. The error appears when IPNewton calls full(state.Htilde).

On Julia 0.6, you can create a pivoted Cholesky factorization for BigFloat matrices, but getindex, which full calls, is only implemented for BlasFloat types.
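
For example (a minimal sketch; I'm assuming cholfact(A, Val{true}) as the 0.6 entry point for the pivoted factorization):

A = eye(BigFloat, 3)        # Julia 0.6: 3×3 BigFloat identity matrix
F = cholfact(A, Val{true})  # constructing the pivoted factorization works
full(F)                     # MethodError: no method matching getindex(::CholeskyPivoted{BigFloat,...}, ::Symbol)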

On Julia 0.7, pivoted Cholesky factorization has not been implemented for non-BLAS types:

julia> A = Matrix(big(1.0)I, 3,3)
3×3 Array{BigFloat,2}:
 1.0  0.0  0.0
 0.0  1.0  0.0
 0.0  0.0  1.0

julia> F = cholesky(A, Val(true))
ERROR: ArgumentError: generic pivoted Cholesky factorization is not implemented yet
Stacktrace:
 [1] #cholesky!#93(::Float64, ::Bool, ::Function, ::Hermitian{BigFloat,Array{BigFloat,2}}, ::Val{true}) at /scratch/riseth/julia-dev/usr/share/julia/stdlib/v0.7/LinearAlgebra/src/cholesky.jl:201
 [2] (::getfield(LinearAlgebra, Symbol("#kw##cholesky!")))(::NamedTuple{(:tol, :check),Tuple{Float64,Bool}}, ::typeof(cholesky!), ::Hermitian{BigFloat,Array{BigFloat,2}}, ::Val{true}) at ./none:0
 [3] #cholesky!#94(::Float64, ::Bool, ::Function, ::Array{BigFloat,2}, ::Val{true}) at /scratch/riseth/julia-dev/usr/share/julia/stdlib/v0.7/LinearAlgebra/src/cholesky.jl:221
 [4] #cholesky! at ./none:0 [inlined]
 [5] #cholesky#96(::Float64, ::Bool, ::Function, ::Array{BigFloat,2}, ::Val{true}) at /scratch/riseth/julia-dev/usr/share/julia/stdlib/v0.7/LinearAlgebra/src/cholesky.jl:296
 [6] cholesky(::Array{BigFloat,2}, ::Val{true}) at /scratch/riseth/julia-dev/usr/share/julia/stdlib/v0.7/LinearAlgebra/src/cholesky.jl:296
 [7] top-level scope at none:0

@anriseth
Contributor

anriseth commented Jul 21, 2018

I guess we have three options:

  1. Implement a more generic linear system solver in IPNewton
  2. Only apply pivoted Cholesky for BlasFloats, and otherwise fall back to the default Cholesky (I don't know why Tim wanted pivoted Cholesky in the first place, so this may cause instabilities); see the sketch after this list
  3. File an issue upstream to see whether they could loosen the Cholesky codes that currently only work with BlasFloat to Union{BigFloat, BlasFloat}. (This won't fix the problem for arbitrary number types, however)
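
A minimal sketch of option 2, in 0.7 syntax (the helper name factorize_h is illustrative, not the actual Optim internals):

using LinearAlgebra

# Pivoted Cholesky only for element types BLAS supports; generic,
# non-pivoted Cholesky otherwise (e.g. BigFloat).
factorize_h(H::AbstractMatrix{<:LinearAlgebra.BlasFloat}) = cholesky(H, Val(true))
factorize_h(H::AbstractMatrix) = cholesky(H)

factorize_h(Matrix(big(2.0)I, 3, 3))  # takes the generic, non-pivoted path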

@pkofod
Member

pkofod commented Jul 21, 2018

Hmm, I’ll have to be at a computer to understand the issue

@anriseth
Contributor

Sorry, I didn't explain the issue very well. Let me know if I should expand on what I mean after you've had the opportunity to look at it on a computer ;)

@SouthEndMusic

SouthEndMusic commented Jul 19, 2024

I'm looking into solving a problem with IPNewton with BigFloat, but every time I try to run it my REPL crashes. It looks like it happens somewhere within the state initialization. This can simply be reproduced by adapting the example here with BigFloat input.

Edit: I found out this is an issue with BigFloat itself; even this causes the crash:

A = zeros(BigFloat, 0, 5)
b = zeros(BigFloat, 0)

A \ b

@edljk

edljk commented Jul 31, 2024

+1

@SouthEndMusic

@edljk this is fixed for me; I temporarily overload these methods until the fix is in the Julia version I'm using: JuliaLang/LinearAlgebra.jl#1080

@edljk

edljk commented Jul 31, 2024

Making IPNewton available for big numbers on basic examples?
