
API for Parameters #2

Closed
ChrisRackauckas opened this issue Sep 1, 2016 · 36 comments

Comments

@ChrisRackauckas
Member

I want to loosely consolidate the following topics:

The idea is a general API change from (t,y,dy) to (t,y,dy,p), where p is some type that holds the parameters. This seems to be a common feature request among users. Strictly speaking it isn't necessary, since one can use a closure, but closures can be tricky to get right due to JuliaLang/julia#15276. More importantly, to be compatible with tools that require "jiggling the parameters", such as dynamical systems tools (bifurcation diagrams) and sensitivity analysis, the functions have to take the parameters explicitly.

Thus I think it's best, for across-the-board compatibility, for the ODE packages to use (t,y,dy,p). This would make the same functions usable not only in ODE solvers but also for other related purposes. The ODE solvers could handle it with a simple change: wrap a closure and continue as usual. However, it's not clear what the type for the parameters should be. I would suggest using types to hold the parameters and embracing https://github.com/mauro3/Parameters.jl as a standard way to define such types (and using it in examples).
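
For concreteness, here is a minimal sketch of the two calling styles under discussion (the MyParameters type and its field k are purely illustrative):

# Closure style: the parameter is captured from the enclosing scope
k = 0.04
f_closure = (t,y,dy) -> (dy[1] = -k*y[1])

# Explicit style: the parameters travel with the call as a fourth argument
type MyParameters
  k::Float64
end
p = MyParameters(0.04)
f_explicit = (t,y,dy,p) -> (dy[1] = -p.k*y[1])
f_explicit(0.0, [1.0], zeros(1), p)   # same RHS, but p is visible to the caller

Only the second form lets an external tool inspect or perturb k without redefining the function.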

@pwl @mauro3 @acroy

@gabrielgellner

This scares me, as it has the potential to add yet another API idiom to Julia, one likely to differ from similar APIs (such as root solvers, optimization solvers, anything that takes a callback). I don't really see why a closure isn't the best solution, outside of the mentioned speed regression, which seems like premature optimization. There is also a strong possibility that passing the parameter becomes the de facto default even if the speed regression is fixed, since so many people will be used to the "old way" of doing this from C/Fortran etc. My feeling is that this kind of API choice is rarely a good one, as it trades away consistency for convenience.

I do highly support pushing Parameters.jl though :) it is the best (I am dying for your Dict unpack to land).

@ChrisRackauckas
Member Author

ChrisRackauckas commented Sep 13, 2016

I think it's more that we have to have one, so it's about finding the right choice. It's not essential for projects like ODE.jl which are just solving ODEs, but the moment you start to go past that, you need the parameters. You need the parameters to be able to take parameter derivatives for sensitivity analysis (which is why Sundials does (t,u,du,userdata), with userdata essentially being the parameters). You need the parameters to make bifurcation plots, or do optimal control, or parameter inference, etc. So the "just use closures" approach is impractical unless we want to limit what we do to what some ODE/PDE solvers can do (only some, because Sundials already goes beyond that).

So it's more a question of: do we just put parameters as the fourth argument (like Sundials)? Is that our thing? Make it a type so that the fields can be anything? That seems like a reasonable choice; I just think we should make a good choice, know the downsides, and stick with it. Sticking with it leads to consistency. If Sundials and DifferentialEquations.jl do it, then ODE.jl should be made compatible, which, since its domain is simpler, it can do by just using a closure to turn (t,u,du,p) into (t,u,du) and be done with it. But if two of the packages take functions like that, they all should in some capacity. That's what I am getting at.

@ChrisRackauckas
Member Author

It wouldn't be pushing Parameters.jl per se. But if the choice is to define parameters as types (somehow; and how do you deal with namespace issues? E.g., if I want to define parameters in a macro and give that parameter p as the output, and the macro defines a type, what do I name that type?), then Parameters.jl is an easy way to define them, so I'd start using it in the docs and maybe think about loading it up with more features. That's what I mean by "embracing".

@gabrielgellner

I guess I am missing the issue. If I pass a callback to a bifurcation routine, then I definitely need to pass the parameters, but I don't if I am solving an ODE. I don't see why the latter should be made consistent with the former; they have different APIs. Similarly for sensitivity analysis: that routine would expect a callback with a different signature. Having a universal API for all of these domains doesn't feel right to me. If I am doing a maximum likelihood problem I need to pass the data and the parameters, but that doesn't mean an optimization callback should always take a data argument. This feels like it would require ever-growing argument lists if every possible application area needs to be modelled.

@ChrisRackauckas
Member Author

But in modern applications these things are intertwined. That's why Sundials gives you the sensitivity analysis results at the same time as the ODE solution when calling cvodes. I don't see the argument list growing any further, since a data function f is only defined by (t,x,u,du,parameters) in any PDE/ODE that I know of, and if parameters is a type then it's sufficiently powerful to also hold parameters like a(t) for optimal control problems.

The functions for differential equations only go that far. Is there any application you can think of where more than the parameters is needed as arguments to the function? I can't think of any. There is more that can be associated with a problem, which is why I went the way of the ODEProblem type and a solution type. But I think that the function itself should be directly passable between all these different components, because any non-trivial application uses a few of them.

If there were a PR to ODE.jl so that it first inspects the function for its number of arguments, and if it's 4 it closes the optional userdata into it (like Sundials), would that be accepted? Because I'm going to do something either way: I'm going to keep pushing DifferentialEquations.jl forward, and I'm going to start making bifurcation analysis tools and have them be compatible. I just want to make sure I head in the right direction with it and, while keeping compatibility with Sundials, see if ODE.jl will follow.

@gabrielgellner

It is hard for me to fully grasp the Sundials approach, since it is written in C and closures are not really an option there (though, as you suggest, it could be that the parameters argument would still be required for the general solution even if closures were possible).

Maybe this is the only way forward. I am just sad that soon the examples people share will be a mix of parameter types being passed, closures, some kind of macro DSL... I feel Julia does things in a variety of ways far too often, simply because it can, but this makes the burden of learning the language and the libraries brutal, and makes it hard for me to recommend it to colleagues over more restrained languages like Matlab or Python. I also hate that this will be so different from using integration routines and optimization, but I admit you have greater knowledge here, so it might be required.

@acroy

acroy commented Sep 13, 2016

The reason for this issue is that in some situations one needs additional information about the ODE (parameters, Jacobian, etc.). This is similar to optimization problems. IMO the Julian way to solve this would be via "problem types", where by default additional information is discarded or obtained in some automatic way. The user can provide additional information by overloading functions, and we can make use of multiple dispatch. I think this is the approach taken by JuliaOpt. However, to make the libraries more accessible and the learning curve less steep, we would still provide a "simplified interface" like the current one. In many cases this will also be sufficient.

IMHO the possibility of doing things in different ways is actually one of Julia's strengths. In particular, you can adjust your code with respect to simplicity vs. performance, but you don't have to leave the language (as you do in Matlab or Python). Of course macros et al. make the code more complicated, but if you really need the performance you will embrace the possibilities.

In this case it's not so much about performance, but rather about functionality, which we cannot offer with the current API.

@ChrisRackauckas
Member Author

I think parameters are slightly different though. You can specify a Jacobian and stuff in a problem type. But to take parameter derivatives for sensitivity and things like that, you need them to be part of the function. I think parameters are the only thing that need to specifically be part of the function for their full use. So most parts of the ODE definition will be handled as fields in problem types (as I am doing already), but I think parameters need to be slightly different.

What makes me uncomfortable, though, is that if we pass a type as the parameter p, we may run into an issue inside of packages:

function testdefine()
  type TempType end
  tmp = TempType()
end 
testdefine()

That will error since you cannot define a type inside a local scope. Would that be necessary?

@ChrisRackauckas
Member Author

ChrisRackauckas commented Sep 13, 2016

Speaking of JuliaOpt / JuMP, one of the things they did was turn to macros for user definitions. This is to hide some of the complexity that's actually going on. I have some very new macros for doing that here: http://juliadiffeq.github.io/DifferentialEquations.jl/latest/man/function_definition_macros.html (they should generate functions compatible with all four major ODE packages).

I plan to expand it a little bit, like:

f,p = @ode_define begin
  dy₁ = -k₁*y₁+k₃*y₂*y₃
  dy₂ =  k₁*y₁-k₂*y₂^2-k₃*y₂*y₃
  dy₃ =  k₂*y₂^2
end k₁=>0.04 k₂=>3e7 k₃=1e4

where k₃ would be inlined because it was defined with =, but k₁ and k₂ would be fields of a type

type TmpParameters
  k₁
  k₂
end

with p = TmpParameters(0.04,3e7) and f(t,u,du,p). I don't know how I'd name that type, though, in a way that lets multiple uses of the macro in one session work. That's making me think twice about using types.
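
One possible way around the naming problem (a sketch only, not something any of the packages do yet) is to have the macro gensym the type name, so each expansion defines a distinct, collision-free type:

# Hypothetical macro: each use generates a uniquely named parameter type.
# Must be used at top level, since types can't be defined in local scope.
macro tmp_parameters(a, b)
  T = gensym("TmpParameters")   # fresh name, so repeated uses don't clash
  esc(quote
    type $T
      k₁
      k₂
    end
    $T($a, $b)                  # return an instance, hiding the type name
  end)
end

p = @tmp_parameters 0.04 3e7    # p.k₁ == 0.04, p.k₂ == 3e7

The downside is that the generated type name is unreadable in error messages.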

@pwl

pwl commented Sep 13, 2016

I totally agree with @gabrielgellner; there are not enough arguments for abandoning closures. They may not be very intuitive at first for some new users (mostly those coming from other programming languages), but these users don't come back to complain after we explain to them how closures work. There might be some performance drawbacks, but from my understanding those drawbacks only show up in global scope and are bugs in Julia, so in a sense they are only temporary. Designing an API based on comments by newcomers, and to circumvent a bug in Julia, would be a huge mistake.

When I was implementing an ODE suite in C or Fortran I was forced to add parameters (no closures in those languages), and I ended up saving the parameters in types and/or passing the parameters to every subroutine that had to evaluate the equation; it was painful to look at and needlessly complicated. All in all, the approach proposed by @ChrisRackauckas looks to me like something coming straight from C/Fortran.

@acroy we have problem types in ODE.jl (the forked version) but, as @ChrisRackauckas commented, that is a slightly different issue. Also, we try to keep these types hidden from the user by defining a simpler layer of "pedestrian" API.

> turn to macros for user definitions

The API we are discussing is a very simple one and I don't think we have to resort to macros at this point. On top of that, if new users find closures counterintuitive they will have trouble with macros as well.

@ChrisRackauckas
Member Author

ChrisRackauckas commented Sep 13, 2016

What about just having the front ends accept both (t,u,du) and (t,u,du,p)? For things like ODE solvers which don't need the p, it throws a closure over it. It's similar to allowing not-in-place functions (t,u) and just correcting them to in-place functions.

@pwl

pwl commented Sep 13, 2016

How would you distinguish between F(t,y,dy) and F(t,y,dy,p)? Julia cannot dispatch on function signatures.

EDIT: we could have ode(F,t0,y0,p) and ode(F,t0,y0), with the first being an alias for ode((t,y)->F(t,y,p),t0,y0)
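
Spelled out, that alias is a one-liner (a sketch of the suggestion above: ode is the existing ODE.jl entry point, and the 4-argument method is the hypothetical addition):

# The 4-argument front end just closes over p and forwards to the existing method
ode(F, t0, y0, p) = ode((t,y) -> F(t,y,p), t0, y0)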

@mauro3
Contributor

mauro3 commented Sep 13, 2016

Is it not possible to have a modular approach? The solvers use closures, and the tools which need direct access to the parameters, such as bifurcation and sensitivity analysis tools, carry the parameters and create a closure on the fly when passing computations off to the solvers.

Ideally a lower level part of a stack should not need to care about a higher level part.

@ChrisRackauckas
Member Author

ChrisRackauckas commented Sep 13, 2016

@pwl You can just read the number of arguments and swap out the function that's actually used via a closure (for safety, you can isolate this and then have it call your inner real function). I already do that, and I thought you were doing the same on your branch to change functions from not-in-place to in-place.

@acroy

acroy commented Sep 13, 2016

@pwl: The point is that you can't use a closure since you need the explicit value of the parameter(s) in the solver, not only in the rhs function. And exactly since you can't dispatch on function signatures, one would have to introduce a type. All of this doesn't mean that the current interface has to go away.

@ChrisRackauckas
Member Author

ChrisRackauckas commented Sep 13, 2016

"""
`numparameters(f)`

Returns the number of parameters of `f` for the method which has the most parameters.
"""
function numparameters(f)
  if length(methods(f))>1
    warn("Number of methods for f is greater than 1. Choosing based off of method with most parameters")
  end
  numparm = maximum([length(m.sig.parameters) for m in methods(f)]) #in v0.5, all are generic
  return (numparm-1) #-1 in v0.5 since it adds f as the first parameter
end

Currently, when defining the problem type, I have this read the argument count and change a function which is not in-place, f(t,u), into g(t,u,du) -> du[:]=f(t,u). That is then the function actually used by the solvers: it sees 2 arguments and knows it wants 3. For parameters you can go the other way around: if it sees 4 arguments, save in the type a function with 3 arguments which uses a closure to hide the parameters. So that part is possible. The part that is less obvious to me is how to actually use types to define parameters well (or however it would be done).
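
A sketch of that wrapping step, using numparameters from above (wrap_params is a hypothetical helper, and p is whatever parameter object the problem type carries):

# If the user supplied f(t,u,du,p), close over p so solvers only ever see 3 arguments
function wrap_params(f, p)
  numparameters(f) == 4 ? ((t,u,du) -> f(t,u,du,p)) : f
end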

@pwl

pwl commented Sep 13, 2016

@acroy could you provide an example of how a solver you describe would work? It is hard for me to argue if I don't see the specific use case.

@acroy

acroy commented Sep 13, 2016

The keyword is "sensitivity analysis" (there are plenty of examples out there), where you basically need the derivative of F with respect to the parameter p at the current solution y. I am by no means an expert on this, but in this case you need to be able to pass the parameter explicitly to F ...
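
For reference, the forward sensitivity system (as in the CVODES manual): given y' = F(t,y,p), each sensitivity s_i = \partial y / \partial p_i satisfies

\dot{s}_i = \frac{\partial F}{\partial y}\, s_i + \frac{\partial F}{\partial p_i}, \qquad s_i(t_0) = \frac{\partial y_0}{\partial p_i}

which is integrated alongside the state equation. This is why the solver needs \partial F / \partial p_i, and hence explicit access to p, at every step.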

@acroy

acroy commented Sep 13, 2016

Well, you guys know the internals of ODE 2.0 etc. much better than I do, but I don't see why an ODE with parameters couldn't be a special subtype of an ODE problem.

@mauro3
Contributor

mauro3 commented Sep 13, 2016

For the ignorant ones, like myself, there is quite a nice, short description of sensitivity analysis in the CVODES manual: http://computation.llnl.gov/sites/default/files/public/cvs_guide.pdf.

Is my understanding correct that sensitivity analysis needs to be built into the solver and cannot be bolted on top? If so, and if we want to support it eventually in ODE.jl, then a way to pass parameters is indeed needed. One thing to note is that the ICs are also parameters, so it would probably be best to pass the ICs and the other parameters in the same way.

@pwl

pwl commented Sep 13, 2016

@mauro3 Why not have a separate type ExplicitODEP (mimicking the existing ExplicitODE) and a separate solver for it?

EDIT: we could then have a top level interface defined as odep(F,t0,y0,p) with F(t,y,p) or something similar. It could (and it should, from what you say about separate solvers) be completely separated from the base interface.

@ChrisRackauckas
Member Author

While ICs mathematically are parameters (or functions), they are different since they don't need to be passed to the function each timestep.

And sensitivity analysis is just one application. Another, common in uncertainty quantification, is to put noise on the parameters and solve the ODE a bunch of times; note that this is different from solving an SDE, since it measures the numerical uncertainty, and it is related to sensitivity analysis. Yet another is the type of control problem where the solver feeds back onto a parameter depending on error estimates.
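
A sketch of that uncertainty-quantification loop (solve and its signature are stand-ins for whichever solver is used, p0 is assumed to be a vector of parameter values, and Gaussian perturbations are just one choice):

# Perturb the parameter vector, re-solve with the SAME deterministic method each
# time, and collect the endpoints as an empirical distribution of solutions
function param_uncertainty(solve, f, y0, tspan, p0; n=100, σ=0.01)
  endpoints = []
  for i in 1:n
    p = p0 .+ σ*randn(length(p0))   # noise on the parameters, not the state
    push!(endpoints, solve((t,u,du) -> f(t,u,du,p), y0, tspan)[end])
  end
  endpoints
end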

@pwl

pwl commented Sep 13, 2016

It looks like we are mixing two things here: parameters for the sake of sensitivity analysis and parameters in the sense of constants coming from the user; let's call them sensitivity parameters and user parameters respectively. These two are completely independent use cases of parameters.

The user parameters can be (and are) implemented with closures; there is no need for any special API in this case. Modulo some bugs and minor user complaints, this is the most elegant solution in sight.

On the other hand, sensitivity parameters have to be part of the definition of an ODE problem as in F!(t,y,dy,p). But we should only use such signatures for selected solvers that can deal with sensitivity analysis, we shouldn't switch to these signatures by default nor recommend them in normal use cases.

One reason for this is that most users (myself included) don't deal with sensitivity analysis, so there is no need to pass around a void argument. The other reason is that sensitivity parameters seem to be more strictly defined (I think they can only be scalars, so p would be a vector of scalars of some kind). Because of that, some solvers might want to restrict the type of p to reflect this, which, in the end, would be confusing for users (why can't I pass an array as a parameter? etc.).

To summarize, I vote for defaulting to the (t,y,dy) signature and only allowing/requiring (t,y,dy,p) signatures in special solvers that support sensitivity analysis (are there any such solvers in JuliaDiffEq?). If we default to (t,y,dy,p), it would seem like we are discouraging people from using closures.

Am I missing some use cases?

@ChrisRackauckas You mentioned that one could add noise through p, how would that work in practice? Wouldn't you need a specific solver for that?

@ChrisRackauckas
Member Author

Sundials already supports sensitivity analysis in cvodes, which is why it has the userdata argument.

The uncertainty quantification usage is adding noise to p, not through p. You wouldn't necessarily need special solvers if you added noise through p, but you'd just get a strong order 0.5 method, which is pretty bad. Adding noise to p is different: you measure how much your solution varies with uncertainty in p. Your solution at each step in this case is a distribution or interval, usually plotted with a ribbon plot. The key is that you use the same numerical method as you normally would, not a different one, because you're measuring the certainty in the solution given the output of the numerical method. So it is a kind of tacked-on feature, like how sensitivity analysis or stiffness checks are added to normal ODE solvers.

The parameters are mathematically the same thing in both of the cases you mention. The only difference is whether you use them explicitly (to be able to check/affect them), or use them as constants. And I think you could do things like pass an array as a parameter as long as it's not a parameter that's used for the sensitivity analysis. Using a type for parameters would let you choose the parameters for this analysis by field name, defaulting to fieldnames(p) if sensitivity analysis is enabled.

I don't think there's a need to default to (t,u,du,p) (note I use u instead of y because y gets confusing when talking about PDEs over two or more spatial dimensions); just have it be an option, even for solvers which don't make full use of it. If it's just a thing where all of the packages accept (t,u,du) and (t,u,du,p), then it's easy. If some do and some don't, or some do it differently, then it's a mess.

@gabrielgellner

I am still not convinced that Sundials has userdata because of the sensitivity analysis, versus it being a C library where this is the only way to do this kind of interface (odepack, vode, etc. had the equivalent of a userdata argument as well but did not do sensitivity analysis; it was simply a way to get closures in languages without them ...). As @mauro3 has mentioned, it might be required since the sensitivity analysis needs to be baked into the solvers (not just into some kind of driver that calls the ODE solvers as part of the solution algorithm, the way many bifurcation routines use simple ODE/BVP solvers).

To reiterate my fear, hopefully more clearly: if all routines (whether they need them or not) allow a "parameter" argument, then end users will use it whether it is needed or not (providing a closure-friendly version as well will not matter; both will be used, and passing the parameter will likely be used more often, as it is more like "older" languages). And then, when teaching/using these packages, you will need to convince people to use closures versus passing arguments, and explain all the subtleties of one API versus the other. My experience is that this will just scare people away.

I have found that having what seems like a more complex API is not always a problem for end users, but having many APIs is. That is why I don't like having many ways to define the same RHS. If you need to learn the u[1] = blah style of definition, the complexity needs to be learned once. If you sometimes use a macro to define the types so that you can write foo = blah, but it gets translated to u[1] = blah behind the scenes, you are adding a tremendous level of complexity for teaching and reasoning about the package; in a sense, you now require everyone to learn two APIs instead of one (even if everything reduces to the same base form) if you have any hope of reading other people's code.

My point about DSLs with macros etc. is not that I dislike macros and Julia's flexibility, but rather a fear of a Lisp-like world where every library has its own semantics, or a Perl-like one where there are a million and one ways to do anything, so that it is really hard to learn and read other people's code. My feeling is that sometimes a less optimal solution gives a far better user experience, because viewed from a higher level it gives consistency across a language and makes everything feel more discoverable. Mathematica is an amazing example of this.

Anyway, I will just lurk now. I love all the packages in JuliaDiffEq, so whatever is decided is fine. But I just want to say, as an end user, that API consistency (both within JuliaDiffEq and across Julia itself, hence the importance of closures in callbacks) is far more important to me than having an optimal API.

@ChrisRackauckas
Member Author

ChrisRackauckas commented Sep 16, 2016

@ihnorton had a really great idea of implementing the parameterized version via call overloading. For example, it would look like:

type LotkaVoltera <: ParameterizedFunction
  a::Float64
  b::Float64
end
f = LotkaVoltera(0.0,0.0)
(p::LotkaVoltera)(t,u,du) = begin
  du[1] = p.a*u[1] - p.b*u[1]*u[2]
  du[2] = -3*u[2] + u[1]*u[2]
end

Then f(t,u,du) would act just like a function call, with f.a = ... and f.b = ... able to change the parameters. If someone passed a ParameterizedFunction to a solver which expects a plain function and doesn't use the parameters (ODE.jl), it should just work without any changes. If someone passes one to something which can use a ParameterizedFunction, then a type check isa(f, ParameterizedFunction) can turn on/off the features which require parameters. This makes ParameterizedFunctions entirely optional: if that option is used, less sophisticated solvers can still use them. (You might need to change the ODE.jl iterator interface so that, instead of F being a Function, it is ::T with T<:Union{Function,ParameterizedFunction}.)
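
As a sketch of that gating (solve_inner is a hypothetical internal solver step):

# A plain Function and a ParameterizedFunction both work here;
# the extra features are gated on the type check alone.
function solve_inner(f, t, u, du)
  if isa(f, ParameterizedFunction)
    # parameters are available as fields here, e.g. for sensitivity hooks
  end
  f(t, u, du)   # call overloading makes this line identical for both cases
end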

This then ties in really nicely with redefining functions and the function definition macros. For example, a definition macro could look like:

f = @ode_def LotkaVoltera begin
  dx = a*x - b*x*y
  dy = -c*y + d*x*y
end a=>1.5 b=>1 c=3 d=1

and then f(t,u,du) would be a ParameterizedFunction with c and d inlined, while a and b are the parameter fields. What's cool about this is that the type constructor f = LotkaVoltera(a,b) would also be generated in this macro, so you could easily create the same function with different parameters.

The end result is that all of this is a superset of the existing functionality, and compatible with it. You can teach people to define differential equations using the f(t,u,du) format, and if they need more power, they can use ParameterizedFunctions, which have convenience macros to make them easier to define. But no matter what they use, it would work in any solver (except where parameters are required).

Does that hit all of your points @gabrielgellner?

@pwl

pwl commented Sep 16, 2016

This is a very neat idea! I still don't think that macros are necessary here, but the type structure is simple and elegant. Is there anything against having ParameterizedFunction <: Function? Then we wouldn't have to change anything in the existing code.

EDIT: we should still use closures by default, leaving ParameterizedFunction exclusively for solvers supporting sensitivity analysis and similar methods.

@ChrisRackauckas
Member Author

ChrisRackauckas commented Sep 16, 2016

That's a good idea. ParameterizedFunction <: Function works and solves that issue. 👍.

I disagree that the macros aren't necessary. It depends on how intense your models are. For simple models, sure, you can define the ParameterizedFunction without the macro and you'll be fine. But the reason I came up with the macros is that I actually had to implement the following for a recent publication. This isn't an exception: systems biology is filled with these kinds of differential equations. I'll be keeping them around because I am definitely going to use them to enhance readability in these situations. (A model I am working on now has about 50 reactants. If I use reactant names, I can easily double-check the equations, since I know the chemical reactions. If I use y[i], then I have to look each one up every time.)

  function f(t,y,dy)
    # y(1) = snailt
    # y(2) = SNAIL
    # y(3) = miR34t
    # y(4) = SR1 # abundance of SNAIL/miR34 complex
    # y(5) = zebt
    # y(6) = ZEB
    # y(7) = miR200t
    # y(8) = ZR1 # abundance of ZEB/miR200 complex with i copies of miR200 bound on the sequence of ZEB1
    # y(9) = ZR2
    # y(10) = ZR3
    # y(11) = ZR4
    # y(12) = ZR5
    # y(13) = tgft
    # y(14) = TGF
    # y(15) = tgfR # abundance of TGF/miR200 complex
    # y(16) = Ecad
    # y(17) = Ncad
    # y(18) = Ovol2
    TGF0=.5(t>100)
    #ODEs
    dy[1]=k0_snail+k_snail*(((y[14]+TGF0)/J_snail0))^2/(1+(((y[14]+TGF0)/J_snail0))^2+(y[19]/J_SO)^nSO)/(1+y[2]/J_snail1)-kd_snail*(y[1]-y[4])-kd_SR1*y[4]
    dy[2]=k_SNAIL*(y[1]-y[4])-kd_SNAIL*y[2]
    dy[3]=k0_34+k_34/(1+((y[2]/J1_34))^2+((y[6]/J2_34))^2)-kd_34*(y[3]-y[4])-kd_SR1*y[4]+lamdas*kd_SR1*y[4]
    dy[4]=Timescale*(Ks*(y[1]-y[4])*(y[3]-y[4])-y[4])
    dy[5]=k0_zeb+k_zeb*((y[2]/J_zeb))^2/(1+((y[2]/J_zeb))^2+((y[19]/J_2z))^nO)-kd_zeb*(y[5]-(5*y[8]+10*y[9]+10*y[10]+5*y[11]+y[12]))-dk_ZR1*5*y[8]-dk_ZR2*10*y[9]-dk_ZR3*10*y[10]-dk_ZR4*5*y[11]-dk_ZR5*y[12]
    dy[6]=k_ZEB*(y[5]-(5*y[8]+10*y[9]+10*y[10]+5*y[11]+y[12]))-kd_ZEB*y[6]
    dy[7]=k0_200+k_200/(1+((y[2]/J1_200))^3+((y[6]/J2_200))^2)-kd_200*(y[7]-(5*y[8]+2*10*y[9]+3*10*y[10]+4*5*y[11]+5*y[12])-y[15])-dk_ZR1*5*y[8]-dk_ZR2*2*10*y[9]-dk_ZR3*3*10*y[10]-dk_ZR4*4*5*y[11]-dk_ZR5*5*y[12]+lamda1*dk_ZR1*5*y[8]+lamda2*dk_ZR2*2*10*y[9]+lamda3*dk_ZR3*3*10*y[10]+lamda4*dk_ZR4*4*5*y[11]+lamda5*dk_ZR5*5*y[12]-kd_tgfR*y[15]+lamdatgfR*kd_tgfR*y[15]
    dy[8]=Timescale*(K1*(y[7]-(5*y[8]+2*10*y[9]+3*10*y[10]+4*5*y[11]+5*y[12])-y[15])*(y[5]-(5*y[8]+10*y[9]+10*y[10]+5*y[11]+y[12]))-y[8])
    dy[9]=Timescale*(K2*(y[7]-(5*y[8]+2*10*y[9]+3*10*y[10]+4*5*y[11]+5*y[12])-y[15])*y[8]-y[9])
    dy[10]=Timescale*(K3*(y[7]-(5*y[8]+2*10*y[9]+3*10*y[10]+4*5*y[11]+5*y[12])-y[15])*y[9]-y[10])
    dy[11]=Timescale*(K4*(y[7]-(5*y[8]+2*10*y[9]+3*10*y[10]+4*5*y[11]+5*y[12])-y[15])*y[10]-y[11])
    dy[12]=Timescale*(K5*(y[7]-(5*y[8]+2*10*y[9]+3*10*y[10]+4*5*y[11]+5*y[12])-y[15])*y[11]-y[12])
    dy[13]=k_tgf-kd_tgf*(y[13]-y[15])-kd_tgfR*y[15]
    dy[14]=k_OT+k_TGF*(y[13]-y[15])-kd_TGF*y[14]
    dy[15]=Timescale*(TGF_flg+KTGF*(y[7]-(5*y[8]+2*10*y[9]+3*10*y[10]+4*5*y[11]+5*y[12])-y[15])*(y[13]-y[15])-y[15])
    dy[16]=GE*(k_ecad0+k_ecad1/(((y[2]/J_ecad1))^2+1)+k_ecad2/(((y[6]/J_ecad2))^2+1)-kd_ecad*y[16])
    dy[17]=k_ncad0+k_ncad1*(((y[2]/J_ncad1))^2)/(((y[2]/J_ncad1))^2+1)+k_ncad2*(((y[6]/J_ncad2))^2)/(((y[6]/J_ncad2)^2+1)*(1+y[19]/J_ncad3))-kd_ncad*y[17]
    dy[18]=k0O+kO/(1+((y[6]/J_O))^nzo)-kdO*y[18]
    dy[19]=kOp*y[18]-kd_Op*y[19]
  end

@gabrielgellner

I really like the parameterized solution. I could see myself using it even for regular ODE problems, given the nice macro syntax. What I really like about this is how regular it is. It doesn't feel like a special case, but rather a really cool way of using Julia's call overloading to get a functor-like object. Again, my issue with the macro-only solutions was that they felt like a new dialect; using the call overloading approach and the macro together feels like it could easily become the default way of defining such problems. Really, really cool solution. I will need to play with it a bit.

@ChrisRackauckas
Member Author

ChrisRackauckas commented Sep 16, 2016

One interesting thing that comes out of this: an easy way of specifying the Jacobian. It's a little hacky:

function Base.ctranspose(p::LotkaVoltera)
  # return a closure that fills J in-place with the Jacobian at (t,u)
  (t,u,J) -> begin
    J[1,1] = p.a - p.b*u[2]
    J[1,2] = -p.b*u[1]
    J[2,1] = u[2]
    J[2,2] = -3 + u[1]
  end
end

and then f'(t,u,J) would do an in-place calculation of the Jacobian J. Then it's perfectly reasonable for

f = @ode_def LotkaVoltera begin
dx = a*x - b*x*y
dy = -c*y + d*x*y
end a=>1.5 b=>1 c=3 d=1

to return an overloaded type where f is overloaded with f(t,u,du) and f'(t,u,J). We can get f' from ForwardDiff or, if installed, use SJulia to get the symbolic expression for the derivative. If you instead have Base.ctranspose(p::LotkaVoltera) return a ParameterizedFunction, then you can do this recursively and define f''(t,u,H) to be the Hessian (if any solvers need it).

This could be a way for the macro to automatically supply the Jacobian to a Rosenbrock solver.

@gabrielgellner

Oh man, that looks so sweet. Though I worry that the LotkaVoltera example is just there to make me love it too much :)

@ChrisRackauckas
Member Author

ChrisRackauckas commented Sep 17, 2016

I made a prototype; check out ParameterizedFunctions.jl. The macro calculates the Jacobian automatically as well (using SymPy.jl). This needs to be tested with the ODE solvers, but the tests in the package show that it makes the right functions and Jacobians. It also has a macro for the finite element methods.

@mauro3
Contributor

mauro3 commented Sep 17, 2016

I like the first example, but the Jacobian example gets into "cute but too obscure" territory, IMO.

@pwl

pwl commented Sep 17, 2016

So the bottom line of this discussion is that we can completely avoid f(t,y,dy,p) and we don't have to update any APIs because, if need be, we can rely on ParameterizedFunction or other ways of adding parameters to the equation (plus closures for user parameters).

Because there are many ways to include parameters, we shouldn't be too specific in the API until we have a pure-Julia solver with actual sensitivity analysis or similar features. Until then, let's keep the API simply f(t,y,dy) and keep suggesting closures for user parameters, since we don't yet support sensitivity parameters. As the example by @ChrisRackauckas demonstrated, we can always add parameters on top of the existing API in a backward-compatible way.

Can we close this issue now and move the discussion on ParameterizedFunctions to https://github.com/JuliaDiffEq/ParameterizedFunctions.jl?

@ChrisRackauckas
Member Author

Yeah sounds good.

@ChrisRackauckas
Member Author

@mauro3 I came up with that because I was wondering if I could silently calculate and return the Jacobian as part of the construction of f in a macro (since I had the ASTs, why not do some symbolic calculations?). The answer ended up being yes: it's possible and actually not that hard to do (though the macro is a little lengthy). However, I agree that it's such obscure syntax that I wouldn't recommend it as the way to define the Jacobian either. Instead, inside a solver, check whether f is a ParameterizedFunction (with a Jacobian; somehow those should be optional), and if it is, grab the free analytical Jacobian. This fits the philosophy that ParameterizedFunctions are not and will not be required anywhere, but they can always be used where you used to use a function, for some free gains. So it's more that this proof of concept lets me almost silently add a lot of optimizations (I can also simplify the equations, rebuild the function to be evaluated via Horner's rule if it's a polynomial, etc.). The discussion can continue at ParameterizedFunctions.jl.
