From b27542321f8ddd00f8654dd323c478c0ae089df1 Mon Sep 17 00:00:00 2001
From: andrewhooker
ODE solution of PK model, multiple ways of describing the same model.
tic(); eval <- evaluate_design(poped.db); toc()
-#> Elapsed time: 2.759 seconds.
+#> Elapsed time: 2.855 seconds.
tic(); eval <- evaluate_design(poped.db.Rcpp); toc()
-#> Elapsed time: 1.318 seconds.
The difference is noticeable and gets larger for more complex ODE models.
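For a more stable comparison than a single tic()/toc() pair, the evaluation can be repeated a few times. A minimal sketch, assuming poped.db (deSolve-based) and poped.db.Rcpp are the two databases compared above, and that the microbenchmark package is installed:
library(microbenchmark)
# Repeat each evaluation so the deSolve vs. Rcpp comparison is not dominated by
# one-off overhead (times = 5 keeps the run short).
microbenchmark(
  deSolve = evaluate_design(poped.db),
  Rcpp    = evaluate_design(poped.db.Rcpp),
  times   = 5
)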
@@ -1129,7 +1129,7 @@
ev <- evaluate_design(poped_db_2)
round(ev$ofv,1)
-#> [1] 42.3
+#> [1] 42.6
round(ev$rse)
We can see that the result, based on MC sampling, is somewhat variable with so few samples.
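One way to see how much of that variability comes from the Monte Carlo sampling itself is to repeat the evaluation under different seeds. A rough sketch, assuming poped_db_2 is the database evaluated above (increasing the ED_samp_size argument of create.poped.database() would reduce the spread, at extra computational cost):
# Re-evaluate the same design with different seeds; the spread in the OFV
# reflects Monte Carlo sampling noise rather than any change in the design.
ofvs <- sapply(1:5, function(i) {
  set.seed(i)
  evaluate_design(poped_db_2)$ofv
})
round(ofvs, 1)
sd(ofvs)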
@@ -1968,7 +1968,7 @@
#> 5.021700 2.980981 14.068646 29.765030 36.691675 26.754137 31.469425
#> SIGMA[2,2]
#> 25.311870
-#> Elapsed time: 0.135 seconds.
-library(PopED)
-packageVersion("PopED")
-#> [1] '0.6.0.9002'
Here we define, as an example, a one-compartment pharmacokinetic model with linear absorption (analytic solution) in PopED (Nyberg et al. 2012).
+
+library(PopED)
+packageVersion("PopED")
+#> [1] '0.6.0.9002'
ff <- function(model_switch,xt,parameters,poped.db){
with(as.list(parameters),{
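The function is cut off here by the hunk boundary. For orientation, a fuller sketch of what such a model definition typically looks like, following the warfarin example used in the PopED documentation (the parameterization and the helper functions sg and feps below are assumptions, not necessarily this vignette's exact code):
# Structural model: one-compartment, first-order absorption, analytic solution.
ff <- function(model_switch, xt, parameters, poped.db){
  with(as.list(parameters), {
    y <- (DOSE*Favail*KA/(V*(KA - CL/V)))*(exp(-CL/V*xt) - exp(-KA*xt))
    return(list(y = y, poped.db = poped.db))
  })
}

# Parameter model: typical values (bpop), between-subject variability (b) and
# the dose as a covariate (a).
sg <- function(x, a, bpop, b, bocc){
  parameters <- c(CL = bpop[1]*exp(b[1]),
                  V  = bpop[2]*exp(b[2]),
                  KA = bpop[3]*exp(b[3]),
                  Favail = bpop[4],
                  DOSE   = a[1])
  return(parameters)
}

# Residual error model: combined proportional and additive error.
feps <- function(model_switch, xt, parameters, epsi, poped.db){
  returnArgs <- ff(model_switch, xt, parameters, poped.db)
  y <- returnArgs[[1]]
  poped.db <- returnArgs[[2]]
  y <- y*(1 + epsi[, 1]) + epsi[, 2]
  return(list(y = y, poped.db = poped.db))
}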
@@ -138,10 +138,9 @@ Define a model
Define an initial design and design space
Now we define the model parameter values, the initial design and
-design space for optimization.
-We define model parameters similar to the Warfarin example from the
-software comparison in Nyberg et al. (2015) and an
-arbitrary design of two groups of 20 individuals.
+design space for optimization. We define model parameters similar to the
+Warfarin example from the software comparison in Nyberg et al. (2015) and an arbitrary design of one
+group of 32 individuals.
poped_db <-
create.poped.database(
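The call is truncated by the hunk above. A sketch of a complete database definition with warfarin-like values and one group of 32 individuals (the specific numbers and sample times are placeholders, not necessarily the vignette's exact design):
poped_db <- create.poped.database(
  ff_fun     = ff,
  fg_fun     = sg,
  fError_fun = feps,
  bpop = c(CL = 0.15, V = 8, KA = 1.0, Favail = 1),  # typical values
  notfixed_bpop = c(1, 1, 1, 0),                     # Favail is fixed
  d     = c(CL = 0.07, V = 0.02, KA = 0.6),          # between-subject variability
  sigma = c(prop = 0.01, add = 0.25),                # residual error
  groupsize = 32,                                    # one group of 32 subjects
  xt    = c(0.5, 1, 2, 6, 24, 36, 72, 120),          # sample times (h)
  minxt = 0, maxxt = 120,                            # design space for xt
  a     = c(DOSE = 70)                               # dose
)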
@@ -164,9 +163,9 @@ Define an initial design and
Simulation
-First it may make sense to check your model and design to make sure
-you get what you expect when simulating data. Here we plot the model
-typical values for the two groups:
+First it may make sense to check the model and design to make sure we
+get what we expect when simulating data. Here we plot the model typical
+value and a 95% prediction interval (PI) for the initial design:
plot_model_prediction(poped_db, model_num_points = 500,facet_scales = "free",PI=T)
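To also see the expected spread of simulated observations, not just the typical value and PI, the same plotting function can overlay individual predictions and simulated data; a short sketch (IPRED and DV are standard plot_model_prediction arguments):
# Overlay individual predictions (IPRED) and simulated observations (DV) to
# visualize between-subject variability and residual error on top of the PI.
plot_model_prediction(poped_db, model_num_points = 500, PI = TRUE,
                      IPRED = TRUE, DV = TRUE)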
@@ -177,12 +176,87 @@ Design evaluation
eval_full <- evaluate_design(poped_db)
-round(eval_full$rse)
-#> CL V KA d_CL d_V d_KA sig_prop sig_add
-#> 5 4 15 34 70 28 89 36
+round(eval_full$rse)
+
+             RSE
+  CL           5
+  V            4
+  KA          15
+  d_CL        34
+  d_V         70
+  d_KA        28
+  sig_prop    89
+  sig_add     36
We see from the relative standard errors of the parameters (in
percent) that they are relatively well estimated with this initial design except
-for the proportional RUV parameter (sig_prop).
+for the between subject variability parameter for volume of distribution
+(d_V) and the proportional RUV parameter (sig_prop).
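These RSEs are derived from the Fisher information matrix returned by the evaluation; a short sketch of inspecting the underlying pieces (assuming the usual ofv, fim and rse elements of the evaluate_design result):
# Inspect the pieces behind the RSE summary.
eval_full$ofv                             # OFV (by default, log det of the FIM)
eval_full$fim                             # the Fisher information matrix
round(get_rse(eval_full$fim, poped_db))   # RSEs recomputed from the FIM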
-We use optimization criteria based on the D6
-(loq_method=1 which is the default) and D2
-(loq_method=2) methods from Vong et al. (2012).
-For the D6 method, which has been shown to be a better method in comparison
-to SSE studies, we have updated the method to only investigate points
-where the 95% PI overlaps LOQ, otherwise we set the design point to
-either no information or full information. Further we filter out
-situations with very low probabilities
-(loq_prob_limit = 0.001). Both the CI% and the low
-probability limit can be specified in the calculation (default values
-are loq_PI_conf_level = 0.95 and
-loq_prob_limit = 0.001). In this way we can get the D6
-method to compute in reasonable time-frames even for larger number of
-design samples.
-Here we can evaluate the design with both methods and test the speed
-of computation. We see that D6 is significantly slower than D2 (but D6
-should be a more accurate representation of the RSE expected using M3
-estimation methods).
+To evaluate the designs we use the design evaluation criteria based
+on the "integration and FIM scaling" method (loq_method=1,
+which is the default) and the "omit observations where PRED<LOQ"
+method (loq_method=2) from Vong et al. (2012)
+(referred to as the D6 and D2 methods, respectively, in the presentation
+by Vong et al.). In the D6 method we:
+
+1. Enumerate all permutations of each sample point being
+quantifiable or not (below the lower LOQ, or above the upper LOQ). If
+sample points have an expected prediction interval (default is 95%,
+loq_PI_conf_level = 0.95) that does not overlap the LOQ,
+then the design point is assumed to either always be observed or to
+always be outside the limits of quantification.
+
+2. Compute the probability of each permutation occurring, filtering
+out potential realized designs with very low probabilities (default is
+loq_prob_limit = 0.001).
+
+3. Evaluate the Fisher Information Matrix (FIM) for all remaining
+design permutations, assuming no information from any design point if,
+for that permutation, it is not within the limits of quantification.
+
+4. Take the weighted sum of the resulting information matrices.
+
+The D2 method is a simplification of this process where all samples
+with a typical value prediction (PRED) below the lower LOQ or above the
+upper LOQ are removed from the design before calculating the FIM.
+
+Here we evaluate the initial design with both methods and test the
+speed of the computations. We see that D6 is significantly slower than
+D2 (but D6 should be a more accurate representation of the RSE expected
+using M3 estimation methods).
set.seed(1234)
e_time_D6 <- system.time(
@@ -223,11 +309,12 @@ LOQ handling
eval_D2 <- evaluate_design(poped_db,loq=2, loq_method=2)
)
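The two timed calls are split across the hunks above; a sketch of what the full comparison presumably looks like (loq sets the LOQ value; loq_method = 1, the D6 method, is the default, so only the D2 call needs loq_method = 2):
# Reconstructed sketch of the D6 vs. D2 timing comparison.
e_time_D6 <- system.time(
  eval_D6 <- evaluate_design(poped_db, loq = 2)
)
e_time_D2 <- system.time(
  eval_D2 <- evaluate_design(poped_db, loq = 2, loq_method = 2)
)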
-cat("D6 evaluation time: ",e_time_D6[1],"\n" )
-cat("D2 evaluation time: ",e_time_D2[1],"\n" )
-#> D6 evaluation time: 0.047
-#> D2 evaluation time: 0.007
-The D2 nmethod is the same as removing the last data point
+cat("D6 evaluation time: ",e_time_D6[1],"seconds \n" )
+cat("D2 evaluation time: ",e_time_D2[1],"seconds \n" )
+#> D6 evaluation time:  0.047 seconds
+#> D2 evaluation time:  0.007 seconds
+The D2 method is the same as removing the last design point, as you
+can see below.
poped_db_2 <- create.poped.database(
ff_fun=ff,
@@ -247,12 +334,17 @@ LOQ handling
eval_red <- evaluate_design(poped_db_2)
testthat::expect_equal(eval_red$ofv,eval_D2$ofv)
testthat::expect_equal(eval_red$rse,eval_D2$rse)
-Predicted parameter uncertainty for the three methods is shown in the
-table below (as relative standard error, RSE, in percent). We see that
-the uncertainty is generally higher with the LOQ evaluations (as
-expected). We also see that because the D2 method ignores data that is
-below LOQ (the last observation in the design), then the predictions of
-uncertainty are significantly larger.
+The predicted parameter uncertainty for the three methods is shown in
+the table below (as relative standard error, RSE, in percent). We see
+that the uncertainty is generally higher with the LOQ evaluations (as
+expected). We also see that the predictions of uncertainty are
+significantly larger with the D2 method than with the D6 method. This
+too is expected, because the D2 method ignores design points where the
+model PRED is below LOQ (the last observation in the design), whereas it
+appears from the previous figure that ~25% of the observations from the
+last design point will be above LOQ. The D6 method accounts for this
+probability that the last design point will have data above LOQ and is
+thus a more realistic assessment of the expected parameter uncertainty.
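The comparison table itself is not reproduced in this hunk; a sketch of how it could be assembled from the three evaluations (assuming eval_full, eval_D6 and eval_D2 from above):
# Collect the RSEs (in percent) from the full-data, D6 and D2 evaluations.
round(data.frame(
  "No LOQ" = eval_full$rse,
  "D6"     = eval_D6$rse,
  "D2"     = eval_D2$rse,
  check.names = FALSE
))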