diff --git a/docs/src/analysis/parameter_estimation.md b/docs/src/analysis/parameter_estimation.md
index 923a9d816d..5dc9fbdac5 100644
--- a/docs/src/analysis/parameter_estimation.md
+++ b/docs/src/analysis/parameter_estimation.md
@@ -6,28 +6,39 @@ that the problem is defined using a [ParameterizedFunction](https://github.com/J
 
 ### build_loss_objective
 
-`build_loss_objective` builds an objective function to be used with Optim.jl and
-MathProgBase-associated solvers like NLopt.
+`build_loss_objective` builds an objective function to be used with Optim.jl and MathProgBase-associated solvers like NLopt.
 
 ```julia
-function build_loss_objective(prob::DEProblem,t,data,alg;
-                              loss_func = L2DistLoss,
+function build_loss_objective(prob::DEProblem,alg,loss_func;
                               mpg_autodiff = false,
-                              verbose = false,
+                              verbose_opt = false,
                               verbose_steps = 100,
                               kwargs...)
 ```
 
-The first argument is the DEProblem to solve. Next is `t`,
-the set of timepoints which the data is found at. The last argument which is required
-is the data, which are the values that are known, in order to be optimized against.
-Optionally, one can choose a loss function from LossFunctions.jl or use the default
-of an L2 loss. One can also choose `verbose` and `verbose_steps`, which, in the
-MathProgBase interface, will print the steps and the values at the steps every
+The first argument is the DEProblem to solve, and the second is the algorithm `alg` to use.
+One can also choose `verbose_opt` and `verbose_steps`, which, in the optimization routines, will print the steps and the values at the steps every
 `verbose_steps` steps. `mpg_autodiff` uses autodifferentiation to define the
 derivative for the MathProgBase solver. The extra keyword arguments are passed
 to the differential equation solver.
 
+#### The Loss Function
+
+```julia
+loss_func(sol)
+```
+
+is a function which reduces the problem's solution to a scalar cost. While this is very
+flexible, a convenience routine is included for fitting to data:
+
+```julia
+CostVData(t,data;loss_func = L2DistLoss)
+```
+
+where `t` is the set of timepoints at which the data is found, and
+`data` holds the values that are known. Optionally, one can choose a loss
+function from LossFunctions.jl or use the default of an L2 loss.
+
 ### build_lsoptim_objective
 
 `build_lsoptim_objective` builds an objective function to be used with LeastSquaresOptim.jl.
@@ -97,7 +108,7 @@ To build the objective function for Optim.jl, we simply call the `build_loss_obj
-funtion:
+function:
 
 ```julia
-cost_function = build_loss_objective(prob,t,data,Tsit5(),maxiters=10000)
+cost_function = build_loss_objective(prob,Tsit5(),CostVData(t,data),maxiters=10000)
 ```
 
 Note that we set `maxiters` so that way the differential equation solvers would
@@ -146,7 +157,7 @@ prob = ODEProblem(f2,u0,tspan)
 To solve it using LeastSquaresOptim.jl, we use the `build_lsoptim_objective` function:
 
 ```julia
-cost_function = build_lsoptim_objective(prob,t,data,Tsit5()))
+cost_function = build_lsoptim_objective(prob,Tsit5(),CostVData(t,data))
 ```
 
 The result is a cost function which can be used with LeastSquaresOptim. For more
@@ -203,7 +214,23 @@ t = collect(linspace(0,10,200))
 randomized = [(sol(t[i]) + .01randn(2)) for i in 1:length(t)]
 data = vecvec_to_mat(randomized)
 
-obj = build_loss_objective(prob,t,data,Tsit5(),maxiters=10000)
+obj = build_loss_objective(prob,Tsit5(),CostVData(t,data),maxiters=10000)
 ```
 
 We can now use this `obj` as the objective function with MathProgBase solvers.
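+
+For example, here is a minimal sketch of minimizing `obj` through NLopt.jl's
+direct interface (rather than MathProgBase). It assumes `obj` is callable on the
+parameter vector, as in the Optim.jl usage above; the optimizer choice, bounds,
+parameter count, and initial guess are illustrative placeholders:
+
+```julia
+using NLopt
+opt = Opt(:LN_COBYLA, 1) # derivative-free local method, 1 free parameter (assumed)
+# NLopt objectives take (x, grad); the gradient is ignored since the chosen
+# method is derivative-free.
+min_objective!(opt, (x, grad) -> obj(x))
+lower_bounds!(opt, [0.0])   # placeholder search bounds
+upper_bounds!(opt, [10.0])
+(minf, minx, ret) = NLopt.optimize(opt, [1.3]) # placeholder initial guess
+```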
diff --git a/docs/src/features/monte_carlo.md b/docs/src/features/monte_carlo.md
index a5af46a593..a6c99b31f4 100644
--- a/docs/src/features/monte_carlo.md
+++ b/docs/src/features/monte_carlo.md
@@ -2,16 +2,17 @@
 
 ## Performing a Monte Carlo Simulation
 
-To perform a Monte Carlo simulation, you simply use the interface:
+### Building a Problem
+
+To perform a Monte Carlo simulation, define a `MonteCarloProblem`. The constructor is:
 
 ```julia
-sim = monte_carlo_simulation(prob,alg,kwargs...)
-```
-
-The keyword arguments take in the arguments for the common solver interface and will
-pass them to the differential equation solver. The special keyword arguments to note are:
+MonteCarloProblem(prob::DEProblem;
+                  output_func = identity,
+                  prob_func = (prob,i)->prob)
+```
+
 
-* `num_monte`: The number of simulations to run. Default is 10,000.
 * `prob_func`: The function by which the problem is to be modified.
 * `output_func`: The reduction function.
@@ -49,6 +50,39 @@ output_func(sol) = sol[end,2]
 Thus the Monte Carlo Simulation would return as its data an array which is the
 end value of the 2nd dependent variable for each of the runs.
 
+#### Parameterizing the Monte Carlo Components
+
+The Monte Carlo components can be parameterized by using the `ConcreteParameterizedFunction`
+constructors.
+
+```julia
+ProbParameterizedFunction(prob_func,params)
+OutputParameterizedFunction(output_func,params)
+```
+
+Here, the signatures are `prob_func(prob,i,params)` and `output_func(sol,params)`.
+These parameters are added to the parameter list for use in the parameter estimation
+schemes.
+
+### Solving the Problem
+
+```julia
+sim = solve(prob,alg,kwargs...)
+```
+
+The keyword arguments take in the arguments for the common solver interface and will
+pass them to the differential equation solver. The special keyword arguments to note are:
+
+* `num_monte`: The number of simulations to run. Default is 10,000.
+
+Additionally, a `MonteCarloEstimator` can be supplied:
+
+```julia
+sim = solve(prob,estimator,alg,kwargs...)
+```
+
+These will be detailed when implemented.
+
 ## Parallelism
 
 Since this is using `pmap` internally, it will use as many processors as you
@@ -80,7 +114,15 @@ prob_func = function (prob)
   prob.u0 = rand()*prob.u0
   prob
 end
-sim = monte_carlo_simulation(prob,Tsit5(),prob_func=prob_func,num_monte=100)
+monte_prob = MonteCarloProblem(prob,prob_func=prob_func)
+sim = solve(monte_prob,Tsit5(),num_monte=100)
 
 using Plots
 plotly()
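+
+# A minimal sketch: the problem can also carry an `output_func` to reduce each
+# solution as it is saved. This reuses the reduction shown earlier on this page
+# (the end value of the 2nd dependent variable for each run); the names
+# `monte_prob2` and `sim2` are illustrative and avoid clobbering `sim` above.
+output_func(sol) = sol[end,2]
+monte_prob2 = MonteCarloProblem(prob,prob_func=prob_func,output_func=output_func)
+sim2 = solve(monte_prob2,Tsit5(),num_monte=100)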