---
title: "PopSim 2024 workshop Exercises - Introduction to Population Pharmacokinetic modeling"
author:
- Theodoros (Theo) Papathanasiou (GSK plc)
- Eva Sverrisdottir (Ascendis Pharma)
- Daniel Jonker (Ferring Pharmaceuticals)
- Rasmus Juul Kildemoes (Novo Nordisk A/S)
- Trine Meldgaard Lund (University of Copenhagen)
date: "26 September 2024"
output:
  html_document:
    css: styles.css
    df_print: kable
    fig_caption: yes
    fig_height: 6.72
    fig_width: 9.4
    theme: readable
    toc: yes
    toc_depth: 5
    toc_float: true
    number_sections: true
    code_folding: hide
editor_options:
  markdown:
    wrap: sentence
  chunk_output_type: console
---
```{r, initial-setup}
# knitr::opts_knit$set(verbose=T)
knitr::opts_chunk$set(
cache = TRUE
# dev = c("png")
)
options(knitr.kable.NA = "")
```
```{r popsim-logo, include=T, fig.width=1, fig.height=10, echo = FALSE}
# htmltools::img(src = knitr::image_uri(file.path(here::here(), 'GSK_logo.png')),
# alt = 'logo',
# style = 'position:fixed; top:0; right:0; padding:10px; width:300px;')
htmltools::img(src = 'https://farmaceutisk-selskab.dk/wp-content/uploads/2021/03/44261803-00003075-001-PopSim2-640x635.png',
alt = 'logo',
style = 'position:fixed; top:0; right:0; padding:10px; width:200px;')
```
# Overview
This tutorial was created based on the warfarin data set, which was originally published in:
```
O’Reilly (1968). Studies on coumarin anticoagulant drugs. Initiation of warfarin therapy without a loading dose. Circulation 1968, 38:169-177.
```
Warfarin is an anticoagulant normally used in the prevention of thrombosis and thromboembolism, the formation of blood clots in the blood vessels and their migration elsewhere in the body, respectively.
The data set provides a set of plasma warfarin concentrations and Prothrombin Complex Response measurements in thirty normal subjects after a single loading dose.
A single large loading dose of warfarin sodium, 1.5 mg/kg of body weight, was administered orally to all subjects.
Measurements were made every 12 or 24 h.
The data set can be accessed using the following link: <https://dataset.lixoft.com/data-set-examples/warfarin-data-set/>
Below, we list some useful background material on pharmacokinetics and on the open-source software we will be using in this tutorial.
**rxode2/nlmixr2** and **xgxr**
**Pharmacokinetics** <https://pharmacy.ufl.edu/files/2013/01/two-compartment-model.pdf>
**Exploratory Graphics Initiative (xGx)** <https://opensource.nibr.com/xgx/>
**nlmixr2** <https://nlmixr2.org/>
```{r out.width="100px", echo = FALSE}
url <- "https://blog.nlmixr2.org/img/nlmixr2.png"
knitr::include_graphics(url)
```
**rxode2** <https://nlmixr2.github.io/rxode2/>
```{r out.width="100px", echo = FALSE}
url <- "https://nlmixr2.github.io/rxode2/logo.png"
knitr::include_graphics(url)
```
**Interactive Clinical Pharmacology** <https://www.icp.org.nz/>
**ModViz POP** <https://pavanvaddady.shinyapps.io/modvizpop/>
***Acknowledgements*** Many of the nlmixr2 estimation parts of this tutorial were heavily inspired by the course materials developed by Rik Schoemaker.
These will be presented during the advanced course that takes place on Day 2 of this symposium.
The material of the advanced course can be freely accessed from this link: <https://blog.nlmixr2.org/courses/page2024/>
The PopSim team would also like to thank the rxode2/nlmixr2 team for valuable feedback and support for this course: <https://blog.nlmixr2.org/>
```{r Knitr setup, cache=TRUE, echo = FALSE, message = FALSE, warning = FALSE, results='hide'}
knitr::opts_chunk$set(warning = FALSE, message = FALSE, echo = FALSE, fig.width = 9.4, fig.height = 6.72)
```
```{r File Setup, echo = FALSE, message = FALSE, warning = FALSE, results='hide' }
remove(list = ls())
# Load libraries
library(tidyverse)
library(knitr)
library(ggplot2)
library(xgxr)
library(nlmixr2)
library(data.table)
library(xpose.nlmixr2)
library(vpc)
library(flextable)
library(caTools) # for NCA
library(patchwork) # for combining plots
library(plotly) # for interactive graphs
library(nlmixr2lib) # needed for addEta()
library(GGally) # needed for correlation plots
library(ggpubr) # helpful for combining plots in panels
library(units)
options(dplyr.summarise.inform = FALSE)
theme_set(theme_bw(base_size = 18) +
theme(plot.caption = element_text(hjust = 0.5)) +
theme(legend.position = 'bottom'))
center_legend <- list(orientation = "h", x = 0.4, y = -0.2)
run.estimation = F
```
# Hands-on 1
## Understanding your data set
Briefly about the warfarin data set: <br>
- 32 healthy subjects
- single oral administration of 1.5 mg/kg
- age, weight, and sex recorded
- PK sampling from 0.5 h to 120 h post-dose in a subset of subjects (full PK profile, dense sampling)
- PK sampling from 24 h to 120 h post-dose in a subset of subjects (sparse sampling)
<br>
```{r}
## read in the Warfarin PK-only data set
PK_dose_data <- warfarin
PK_dose_data <- subset(warfarin, dvid == "cp", select = c(-dvid))
names(PK_dose_data) <- c("ID", "TIME", "AMT", "DV", "EVID", "WT", "AGE", "SEX")
## add sampling column
### find subjects with observations in the first 24 h after dosing
dense_id <- unique(PK_dose_data$ID[PK_dose_data$TIME < 24 & PK_dose_data$TIME > 0])
PK_dose_data$SPARSE <- ifelse(PK_dose_data$ID %in% dense_id, 0, 1)
# Ensure the dataset has all the necessary columns
PK_dose_data <- PK_dose_data %>%
# Sex
mutate(SEXf = as.factor(SEX)) %>%
mutate(SEXf = factor(SEXf, labels = c('Female', 'Male'))) %>%
mutate(SEX = ifelse(SEXf == "Female", 0, 1)) %>%
# Sparse sampling
mutate(SPARSEf = as.factor(SPARSE)) %>%
mutate(SPARSEf = factor(SPARSEf, labels = c('Dense sampling', 'Sparse sampling'))) %>%
# body weight into tertiles
mutate(WTf = cut(WT,
breaks = quantile(WT, c(0:3/3)), include.lowest=T, labels = c("Low weight", "Medium weight", "High weight")) ) %>%
mutate(DV_UNIT = ifelse(EVID==0, "ng/mL", NA))
# Create a dosing dataset
Dose_data <- PK_dose_data %>% filter(EVID == 1)
# Create a PK dataset
PKdata <- PK_dose_data %>% filter(EVID == 0)
# Create a baseline characteristics dataset
baseline_char_data <- Dose_data %>%
select(ID, WT, AGE, SEX, SPARSE, SEXf, SPARSEf, WTf) %>%
mutate(ID = as.factor(ID))
#units and labels to be used in script
time_units_dataset = "hours"
time_units_plot = "days"
trtact_label = "Dose"
x_label = "Time after dose (hours)"
dose_label = "Dose (mg)"
conc_units = paste0("\U003BC","g/mL") # Somewhat complex code, but useful for using the Greek letter mu
AUC_units = paste0("h*", conc_units)
AUC_label = paste0("AUC (", AUC_units, ")")
conc_label = paste0("Concentration (", conc_units, ")")
cmax_label = paste0("Cmax (", conc_units, ")")
warfarin_label = paste0("Warfarin (", conc_units, ")")
concnorm_label = paste0("Normalized Concentration (", conc_units, ")/mg")
```
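Optionally, before summarizing anything, a quick peek at the first rows of the derived data sets can confirm that the subsets and columns look as intended (a small check on the objects created above, not part of the original workflow):

```
## Optional quick check of the derived data sets
head(PKdata)              # observation rows (EVID == 0)
head(Dose_data)           # dosing rows (EVID == 1)
head(baseline_char_data)  # baseline covariates, one row per subject
```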
### Covariate summary
First, it is always a good idea to get a feel for the data you are going to work with.
An excellent resource that lays out the foundation of exploring PK and PK/PD data is the **Exploratory Graphics (xGx)** initiative, which can be accessed through this link:
<https://opensource.nibr.com/xgx/>
A lot of the material presented below has been heavily inspired by **xGx**, and we strongly recommend spending some time on the **xGx** website to get a feel for how exploratory graphics can be used to help us understand our data.
First, let's construct some baseline characteristics tables.
```{r}
summary_data <- baseline_char_data %>%
group_by(SPARSEf) %>%
summarise(SEX_proportion = mean(SEX)*100,
WT_mean = mean(WT),
AGE_mean = mean(AGE),
AGE_sd = sd(AGE)) %>%
mutate_if(is.numeric, round, 1)
# Here, we create a nice-looking summary table using the flextable package.
# It is not necessary to understand flextable, but it is good to be aware that the R language is
# very versatile and can help us create any outputs we may be interested in.
summary_data %>%
flextable() %>%
set_header_labels(SPARSEf = "Sampling scheme",
SEX_proportion = "Proportion male (%)",
WT_mean = "Mean weight (kg)",
AGE_mean = "Mean age (years)",
AGE_sd = "SD age (years)") %>%
set_table_properties(layout = "autofit", width = .7) %>%
align(align = "center", part = "all") %>%
add_header_lines("Table: Summary of baseline characteristics")
```
<br> We can even change how we summarize the data.
<br>
```{r message=FALSE}
summary_data_2 <- baseline_char_data %>%
group_by(SPARSEf, SEXf) %>%
summarise(WT_mean = mean(WT),
AGE_mean = mean(AGE),
AGE_sd = sd(AGE)) %>%
mutate_if(is.numeric, round, 1)
summary_data_2 %>%
flextable() %>%
set_header_labels(SPARSEf = "Sampling",
SEXf = "Sex",
WT_mean = "Mean weight (kg)",
AGE_mean = "Mean age (years)",
AGE_sd = "SD age (years)") %>%
set_table_properties(layout = "autofit", width = .7) %>%
align(align = "center", part = "all") %>%
add_header_lines("Table: Summary of baseline characteristics")
```
<br> We can also create graphical correlation plots <br>
```{r message=FALSE, warning=FALSE}
GGally::ggpairs(baseline_char_data,
columns = c('WT', 'AGE', "SEXf"),
diag = list(continuous = "barDiag"))
```
<br>
As we can see from the summary table, there are no female subjects in the sparse sampling group.
Now let's take a look at what the PK profiles look like.
<br>
### Pharmacokinetic profiles
Here we can see our very first exploratory graph of the data.
Points connected with lines represent the measured plasma concentrations over time for each subject.
We can already start appreciating the variability in the data (remember, everyone received the same dose!).
<div class="gray-box">
**Pro tip** This is an interactive graph.
You can try to zoom in and out to explore different areas of the curve.
Pay special attention to the absorption phase!
</div>
```{r}
my_first_plot <- ggplot(PKdata, aes(x=TIME, y = DV, group = ID)) +
geom_point() +
geom_line() +
labs(x = x_label, y = warfarin_label)
ggplotly(my_first_plot)
```
<br>
We can also split the data into relevant subpopulations. For example, in the warfarin data set, we have subjects with either dense or sparse sampling. Let's see what these profiles look like.
<br>
```{r}
my_second_plot <- my_first_plot + facet_wrap(~SPARSEf)
ggplotly(my_second_plot)
```
<br>
<div class="gray-box">
**Coding Tip** Let's change the names of the objects in which we store the plots to something simpler (see the folded R code).
Let's also try to color by sex.
See how we combined the code from above to create a similar result.
</div>
<br>
```{r}
g1 <- ggplot(PKdata, aes(x=TIME, y = DV, group = ID, color = SEXf)) +
geom_point() +
geom_line() +
facet_wrap(~SPARSEf) +
labs(x = "Time after dose (hours)", y = conc_label) +
scale_color_discrete(name = "Sex")
ggplotly(g1) %>% layout(legend = center_legend)
```
<br>
#### Optional code {.tabset .tabset-fade .tabset-pills}
***Optional code***
The code that follows is optional, as many of these steps have been automated.
The calculations in this part are included for students who are interested in knowing what happens in the back end of the ready-to-use functions that we use throughout these exercises.
Let's create a summary plot - this is what we usually look at, after all.
```{r warning = FALSE, message = FALSE}
sumdata <- PKdata %>%
group_by(SPARSEf, TIME) %>%
summarise(mean.conc = mean(DV),
sd.conc = sd(DV),
n.conc = n()) %>%
mutate(se.conc = sd.conc / sqrt(n.conc),
lower.ci.conc = mean.conc - qt(1 - (0.05 / 2), n.conc - 1) * se.conc,
upper.ci.conc = mean.conc + qt(1 - (0.05 / 2), n.conc - 1) * se.conc)
```
##### Linear scale {.tabset .tabset-fade .tabset-pills .unnumbered}
```{r message=FALSE}
g2 <- ggplot(sumdata, aes(x=TIME, y = mean.conc)) +
geom_errorbar(aes(ymin = lower.ci.conc, ymax = upper.ci.conc), width = 5) +
geom_point() +
geom_line() +
facet_wrap(~SPARSEf) +
ylim(0, NA) +
labs(x = x_label, y = warfarin_label, caption = "Points with bars represent the mean with 95% Confidence Intervals")
g2
```
##### Log scale {.tabset .tabset-fade .tabset-pills .unnumbered}
```{r message=FALSE}
g3 <- g2 + scale_y_log10(breaks = c(1,2,4,8,16))
suppressWarnings(print(g3))
```
#### Automated code {.tabset .tabset-fade .tabset-pills}
Many of the steps under the ***optional code*** section have been automated.
For example, we can create a similar-looking plot using just one line of code from the xgxr package.
Note that a single line of code is enough to plot the mean with 95% confidence intervals.
Also, we can use some convenient commands for manipulating the names of the x and y axes, as well as change the time unit from hours to days (see folded code) <br>
##### Linear scale {.tabset .tabset-fade .tabset-pills .unnumbered}
```{r}
g <- ggplot(data = PKdata, aes(x = TIME, y = DV)) +
xgx_stat_ci(conf_level = .95) + # xGx package
facet_wrap(~SPARSEf) +
ylim(0, NA) +
xgx_scale_x_time_units(units_dataset = time_units_dataset,
units_plot = time_units_plot) +
labs(y=conc_label)
g
```
##### Log scale {.tabset .tabset-fade .tabset-pills .unnumbered}
```{r message=FALSE}
g1 <- g + xgx_scale_y_log10(breaks = c(1,2,4,8,16)) +
coord_cartesian(ylim = c(1,16))
suppressWarnings(print(g1))
```
<br>
We can immediately appreciate that the elimination phase appears linear on the semi-log scale.
This gives us an initial indication that warfarin's disposition can be described using a one-compartment pharmacokinetic model.
More on that during the modeling hands-on session.
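As an optional, model-free cross-check of this impression, one can regress log concentration on time for the late samples in the dense-sampling subjects. The snippet below is only a rough sketch; the 24 h cut-off for the terminal phase is an assumption made for illustration.

```
## Rough estimate of the terminal elimination half-life from a log-linear
## regression on dense-sampling observations after 24 h (exploratory only;
## the model-based estimate comes later in the modeling hands-on).
terminal <- PKdata %>%
  filter(SPARSEf == "Dense sampling", TIME > 24, DV > 0)
fit_lz   <- lm(log(DV) ~ TIME, data = terminal)
lambda_z <- -coef(fit_lz)[["TIME"]]   # terminal rate constant (1/h)
log(2) / lambda_z                     # apparent terminal half-life (h)
```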
<br>
### Exploring covariate effects on PK
When we want to get an initial feel for how covariates may influence our drug's pharmacokinetics, we can start by working through some simple exploratory analyses.
Some common tools in our tool set are coloring and faceting across different variables.
This can be done very easily with ggplot and xGx.
```{r}
g <- ggplot(data = PKdata, aes(x = TIME, y = DV, color = SEXf)) +
xgx_stat_ci(conf_level = .95) + # xGx package
facet_wrap(~SPARSEf) +
ylim(0, NA) +
xgx_scale_x_time_units(units_dataset = time_units_dataset,
units_plot = time_units_plot) +
labs(y=conc_label) +
scale_color_discrete(name = "Sex")
g
```
<br>
It looks like there is not a big difference between males and females.
But how about weight?
Here, we use the calculated 'Weight tertile':
```
quantile(PK_dose_data$WT, c(0:3/3))
```
```{r}
tertiles <- quantile(PK_dose_data$WT, c(0:3/3))
# Create the data frame with Weight tertile and Weight range
weight_data <- data.frame(
  "Weight tertile" = c("Low", "Medium", "High"),
  "Weight range" = c(
    paste0(tertiles[1], " to ", tertiles[2], " kg"),
    paste0(tertiles[2], " to ", tertiles[3], " kg"),
    paste0(tertiles[3], " to ", tertiles[4], " kg")
  )
)
flextable(weight_data) %>%
set_header_labels(
Weight.tertile = "Weight tertile",
Weight.range = "Weight range") %>%
width(width = 1.5) %>%
align(align = "center", part = "all")
```
<br>
```{r}
g <- ggplot(data = PKdata, aes(x = TIME, y = DV, color = WTf)) +
xgx_stat_ci(conf_level = .95) +
facet_wrap(~SPARSEf) +
ylim(0, NA) +
xgx_scale_x_time_units(units_dataset = time_units_dataset,
units_plot = time_units_plot) +
labs(y=conc_label) +
scale_color_discrete(name = "Weight tertile")
g
```
<br>
It looks like there may be an effect of weight on the PK profiles.
It can also be useful to look at the individual profiles one at a time.
<br>
```{r,fig.width=12, fig.height=10}
g1 <- ggplot(PKdata, aes(x=TIME, y = DV, group = ID, color = SEXf)) +
geom_point() +
geom_line() +
facet_wrap(~ID) +
xgx_scale_x_time_units(units_dataset = time_units_dataset,
units_plot = time_units_plot) +
labs(y=conc_label) +
scale_color_discrete(name = "Sex")
g1
```
### Performing a non-compartmental analysis (NCA)
We can always perform a simple non-compartmental analysis (NCA) with our PK data.
In the code snippet below, we set up some basic calculations for deriving the area under the curve (AUC), as well as the maximum observed concentration (Cmax).
```{r}
# Perform NCA, for additional plots
NCA <- PKdata %>%
group_by(ID) %>%
filter(!is.na(DV)) %>%
summarize(AUC_last = caTools::trapz(TIME,DV),
Cmax = max(DV)) %>%
mutate(ID = as.factor(ID)) # needed for the join operation below
NCAdat <- suppressMessages(left_join(NCA, baseline_char_data))
NCAdat_dense <- NCAdat %>% filter(SPARSEf == "Dense sampling")
```
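For orientation, `caTools::trapz()` simply applies the linear trapezoidal rule. The short sketch below writes the same calculation out by hand for a single subject, purely to show what happens under the hood:

```
## Linear trapezoidal rule by hand for one subject (illustrative only)
one_subj <- PKdata %>%
  filter(ID == first(ID), !is.na(DV)) %>%
  arrange(TIME)
with(one_subj, sum(diff(TIME) * (head(DV, -1) + tail(DV, -1)) / 2))
## should match:
with(one_subj, caTools::trapz(TIME, DV))
```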
<div class="lightblue-box">
**Group Question**: Is there anything that we should take into account before exploring the calculated AUC and Cmax?
What would affect the AUC and Cmax calculations?
</div>
<br>
<div class="gray-box">
<details>
<summary>***Hint***</summary>
Remember the structure of our data!
We have both dense and sparse sampling.
</details>
</div>
<br>
<div class="lightblue-box">
**Group Question**: Let's take a look at a comparison between the summary exposure metrics (AUC and Cmax) between the two sampling schemes.
What do we see?
</div>
<br>
<div class="blue-box">
<details>
<summary>***Answer***</summary>
There is a clear difference between the exposure metrics.
The sparse sampling group appears to have lower AUC and Cmax.
The most important thing to remember is that this is an artifact of the sampling scheme and does not reflect any underlying physiological difference.
We will only use the dense sampling data for the plots that follow, to make sure that we do not introduce bias in our discussions because of the sampling scheme.
We will see how modeling can help us use the information from the sparse sampling patients in the sections that follow.
</details>
</div>
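A quick way to see why, using the objects created above: the first post-dose sampling time per scheme should show that the sparse arm has no observations during the absorption phase, so both Cmax and the early part of the AUC are inevitably missed.

```
## First post-dose sampling time per sampling scheme
PKdata %>%
  group_by(SPARSEf) %>%
  summarise(first_sample_h = min(TIME[TIME > 0]))
```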
<br>
<div class="gray-box">
<details>
<summary>***Pro tip 1***</summary>
Always consider your sampling scheme when designing a PK clinical pharmacology study.
</details>
</div>
<br>
<div class="gray-box">
<details>
<summary>***Pro tip 2***</summary>Modeling can help us ***fill in the gaps*** for situations such as this (provided that your PK model does a good job at describing the data of course!).
We will see how during the modeling exercises that follow.
</details>
</div>
<br>
```{r, fig.width=9, fig.height=4}
g_AUC <- ggplot(data = NCAdat, aes(x = SPARSEf, y = AUC_last)) +
geom_boxplot(aes(group = SPARSEf)) +
ylab(AUC_label) +
xlab("Sampling scheme")
g_Cmax <- ggplot(data = NCAdat, aes(x = SPARSEf, y = Cmax)) +
geom_boxplot(aes(group = SPARSEf)) +
ylab(cmax_label) +
xlab("Sampling scheme")
g_AUC + g_Cmax
```
### Correlations between exposure metrics and covariates of interest {.tabset .tabset-fade .tabset-pills}
NCA metrics based on full PK profiles (dense sampling)
#### Sex {.tabset .tabset-fade .tabset-pills .unnumbered}
```{r, fig.width=9, fig.height=4}
g_AUC <- ggplot(data = NCAdat_dense, aes(x = SEXf, y = AUC_last)) +
geom_boxplot(aes(group = SEXf)) +
ylab(AUC_label) +
xlab("Sex")
g_Cmax <- ggplot(data = NCAdat_dense, aes(x = SEXf, y = Cmax)) +
geom_boxplot(aes(group = SEXf)) +
ylab(cmax_label) +
xlab("Sex")
g_AUC + g_Cmax
```
#### Body Weight {.tabset .tabset-fade .tabset-pills .unnumbered}
```{r, fig.width=9, fig.height=4}
g_AUC <- ggplot(data = NCAdat_dense, aes(x = WTf, y = AUC_last)) +
geom_boxplot(aes(group = WTf)) +
ylab(AUC_label) +
xlab('') +
theme(axis.text.x = element_text(angle = 15, vjust = 0.5, hjust=1))
g_Cmax <- ggplot(data = NCAdat_dense, aes(x = WTf, y = Cmax)) +
geom_boxplot(aes(group = WTf)) +
ylab(cmax_label) +
xlab('') +
theme(axis.text.x = element_text(angle = 15, vjust = 0.5, hjust=1))
g_AUC + g_Cmax
```
#### Body Weight Continuous {.tabset .tabset-fade .tabset-pills .unnumbered}
```{r, fig.width=9.5, fig.height=5}
g_AUC <- ggplot(data = NCAdat_dense, aes(x = WT, y = AUC_last)) +
geom_point() +
ylab(AUC_label) +
xlab("Body Weight (kg)") +
geom_smooth(formula = y ~ x, method="lm")
g_Cmax <- ggplot(data = NCAdat_dense, aes(x = WT, y = Cmax)) +
geom_point() +
ylab(cmax_label) +
xlab("Body Weight (kg)") +
geom_smooth(formula = y ~ x, method="lm")
g_AUC + g_Cmax
```
#### Body Weight stratified by sex {.tabset .tabset-fade .tabset-pills .unnumbered}
```{r, fig.width=9.5, fig.height=5}
g_AUC_sex <- g_AUC +
aes(color = SEXf) +
scale_color_discrete(name = "Sex")
g_Cmax_sex <- g_Cmax + aes(color = SEXf) +
scale_color_discrete(name = "Sex")
ggpubr::ggarrange(g_AUC_sex, g_Cmax_sex, common.legend = T)
```
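To put a rough number on the weight trend seen in the plots above, one could also fit simple linear regressions of the NCA metrics on body weight. This is purely exploratory (dense-sampling subjects only); the model-based covariate analysis comes later.

```
## Exploratory linear regressions of the NCA metrics on body weight
summary(lm(AUC_last ~ WT, data = NCAdat_dense))$coefficients
summary(lm(Cmax ~ WT, data = NCAdat_dense))$coefficients
```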
## Understanding PK simulations
Let us do our first simulation for warfarin.
Remember, the PK profiles for warfarin look like this:
```{r}
g <- ggplot(data = PKdata, aes(x = TIME, y = DV)) +
xgx_stat_ci(conf_level = .95) + # xGx package
ylim(0, NA) +
xgx_scale_x_time_units(units_dataset = time_units_dataset,
units_plot = time_units_plot) +
labs(y=conc_label)
g
```
Let's build a simulation model.
First, we set up the system of ordinary differential equations (ODEs).
One good way to think about ODEs in PK is to conceptualize the "bathtub model".
**Bathtub model concept** <https://www.tcpharm.org/pdf/10.12793/tcp.2015.23.2.42>
```{r}
## set up the system of differential equations (ODEs)
odeKA1 <- "
CL = TV_CL;
V = TV_V;
KA = TV_KA;
KE0 = (CL/V);
d/dt(depot) = -KA * depot;
d/dt(central) = KA * depot - KE0*central;
C1 = central/V;
"
## compile the model
modKA1 <- rxode2(model = odeKA1)
```
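For reference, this one-compartment model with first-order absorption also has a well-known closed-form solution (assuming the full dose $D$ enters the depot at time zero, complete bioavailability, and $k_a \neq k_e$):

$$
C(t) = \frac{D\,k_a}{V\,(k_a - k_e)}\left(e^{-k_e t} - e^{-k_a t}\right), \qquad k_e = \frac{CL}{V}
$$

Solving the ODE system numerically with rxode2 reproduces this curve, but the ODE formulation generalizes directly to models (multiple compartments, nonlinear elimination) that have no analytical solution.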
**Provide the parameter values** In our first example, we (somewhat arbitrarily) set the absorption rate constant (ka) equal to 0.5 1/h, clearance (CL) equal to 0.0675 L/h, and the volume of distribution (V) equal to 15 L.
```{r}
## provide the parameter values to be simulated:
Params <- c( TV_KA = 0.5, # 1/h (absorption half-life of ~1.4 h)
TV_CL = 0.0675, # L/h
TV_V = 15 # L
)
```
Subsequently, we need to create an event table.
The event table stores the dosing information (how much? how often?) and the sampling information (at which time points?).
For this first example, we will simulate a single dose for simplicity.
(Remember, the actual dosing was done in mg/kg and thus varies between subjects!)
How much dose (in mg) was given to the subjects?
Let's look at some quantiles.
```{r}
# How much dose (in mg) was given to the subjects? (Remember the dosing was done as mg/kg and thus it varies)
quantile(Dose_data$AMT)
```
Looks like the median dose was close to 100 mg.
Let's use that dose for our simulation.
Now we have everything we need for our first simulation.
The simulation is done using the rxSolve command (click on the `Show` button to see the commands).
```{r}
## create an empty event table that stores both dosing and sampling information :
ev <- eventTable(amount.units = 'mg', time.units = "hr")
## add a dose to the event table:
ev$add.dosing(dose = 100) #mg
## add time points to the event table where concentrations will be simulated; these actions are cumulative
ev$add.sampling(seq(0, 120, 0.1))
## Then solve the system
##
## The output from rxSolve is a solved RxODE object,
## but by making it a data.frame only the simulated values are kept:
Res <- data.frame(rxSolve(modKA1,Params,ev))
```
<br> Now, we can plot the simulated outcomes in the compartments <br>
```{r, fig.width=8, fig.height=8}
g_depot <- ggplot(data = Res, aes(x=time, y=depot)) +
geom_line(linewidth=2) +
labs(x = "Time (hours)", y = 'Simulated Warfarin \namount (mg)', title = 'Simulated amount in depot (dosing) compartment')
g_central <- ggplot(data = Res, aes(x=time, y=C1)) +
geom_line(linewidth=2) +
labs(x = "Time (hours)", y = 'Simulated Warfarin \nconcentrations (ug/mL)', title = "Simulated concentrations in central compartment")
g_depot / g_central
```
<br> Let us now overlay the simulated profile with the observed data: <br>
```{r}
g <- ggplot(data = PKdata, aes(x = TIME, y = DV)) +
xgx_stat_ci(conf_level = .95, geom = list("point","errorbar")) +
geom_line(data = Res, aes(x=as.numeric(time), y=C1), linewidth=1) +
ylim(0, NA) +
xgx_scale_x_time_units(units_dataset = time_units_dataset,
units_plot = time_units_plot) +
labs(y=conc_label)
g
```
<br> What do you think?
Is this a good fit for the observed concentration data?
Not so much...
What if we provide better values?
You may recall our initial estimates:
| Parameter | Estimate |
|----------:|:---------|
| ka (1/h)  | 0.5      |
| CL (L/h)  | 0.0675   |
| V (L)     | 15       |
Let's update the parameters to:
| Parameter | Estimate |
|----------:|:---------|
| ka (1/h)  | 1.4      |
| CL (L/h)  | 0.135    |
| V (L)     | 8        |
```
Params <- c(TV_KA = log(2) / 0.5, # 1/h (absorption half-life of 30 minutes)
TV_CL = 0.135, # L/h
TV_V = 8 # L
)
```
<br>
```{r}
## provide the parameter values to be simulated:
Params <- c( TV_KA = log(2) / 0.5, # 1/h (absorption half-life of 30 minutes)
TV_CL = 0.135, # L/h
TV_V = 8 # L
)
Res <- data.frame(rxSolve(modKA1,Params,ev))
# Let us overlay the simulated profile with the observed data:
g <- ggplot(data = PKdata, aes(x = TIME, y = DV)) +
xgx_stat_ci(conf_level = .95, geom = list("point","errorbar")) +
geom_line(data = Res, aes(x=as.numeric(time), y=C1), linewidth=1) +
ylim(0, NA) +
xgx_scale_x_time_units(units_dataset = time_units_dataset,
units_plot = time_units_plot) +
labs(y=conc_label)
g
```
This is much better!
By varying the parameters, we can get a better description of the data we have.
This is what the estimation algorithms do for us.
By varying all parameters many times, we end up with a model whose parameters describe our data well.
The goodness of statistical fit can be assessed with the "Objective Function Value", as well as a variety of goodness-of-fit plots that we have at our disposal.
More on that is described in the modeling sections.
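To make the idea more concrete, here is a minimal, illustrative sketch of a least-squares "objective function" built on the simulation model above. It is not what nlmixr2 actually minimizes (nlmixr2 uses a likelihood-based objective and accounts for between-subject variability), and it assumes a single 100 mg dose for everyone even though the real doses varied.

```
## A naive sum-of-squares objective using the simulation model above.
## Smaller values mean the simulated curve sits closer to the observations.
sse_for_params <- function(pars) {
  ev_obs <- eventTable(amount.units = 'mg', time.units = "hr")
  ev_obs$add.dosing(dose = 100)                   # simplification: same dose for all
  ev_obs$add.sampling(sort(unique(PKdata$TIME)))  # simulate at the observed times
  sim  <- data.frame(rxSolve(modKA1, pars, ev_obs))
  pred <- sim$C1[match(PKdata$TIME, as.numeric(sim$time))]
  sum((PKdata$DV - pred)^2, na.rm = TRUE)
}
sse_for_params(c(TV_KA = 0.5, TV_CL = 0.0675, TV_V = 15))       # first guess
sse_for_params(c(TV_KA = log(2)/0.5, TV_CL = 0.135, TV_V = 8))  # updated guess
```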
We can also create multiple simulation scenarios, such as multiple dosing, simulations with between-subject variability, and simulations with random residual error.
### Simulating multiple doses
Here we create a simulation of five 100 mg doses, given once daily (every 24 h).
Notice that we overlay the observations even though they are from a single dose study.
This is helpful for making sure that the model captures the data well following the first dose, and importantly, to help us appreciate the accumulation of the drug over time.
```{r}
ev_multiple <- eventTable(amount.units = 'mg', time.units = "hr") %>%
add.dosing(dose=100,
nbr.doses=5,
dosing.interval=24)
Res <- data.frame(rxSolve(modKA1,Params,ev_multiple))
# Let us overlay the simulated profile with the observed data:
g <- ggplot(data = PKdata, aes(x = TIME, y = DV)) +
xgx_stat_ci(conf_level = .95, geom = list("point","errorbar")) +
geom_line(data = Res, aes(x=as.numeric(time), y=C1), linewidth=1) +
ylim(0, NA) +
xgx_scale_x_time_units(units_dataset = time_units_dataset,
units_plot = time_units_plot) +
labs(y=conc_label)
g
```
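As a side note, the degree of accumulation seen above can also be anticipated analytically: for first-order elimination and a fixed dosing interval $\tau$, the steady-state accumulation ratio is $1/(1 - e^{-k_e \tau})$. A quick check with the updated parameter guesses (CL = 0.135 L/h, V = 8 L):

```
## Expected accumulation ratio at steady state for once-daily dosing
ke  <- 0.135 / 8            # elimination rate constant, CL/V (1/h)
tau <- 24                   # dosing interval (h)
1 / (1 - exp(-ke * tau))    # accumulation ratio (roughly 3-fold here)
```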
### Simulating with inter-individual variability
Population models can go one step further and simulate multiple potential concentration time courses.
We know that every patient is different, which means that variability in the pharmacokinetic profiles may come from various intrinsic or extrinsic factors.
With population models, we can simulate different profiles by assuming some randomness around the parameters of the structural model.
These concepts might be somewhat confusing at first, so let's use the same model as before, but now simulate 30 patients receiving 100 mg warfarin once daily.
First, we create a simulation where **only** the clearance (CL) varies.
If you look at the code you can see that there are parameters for variability on ka and V as well, but the variance is set at 0.0001, which is \~ 0.
```
# Simulation with variability in clearance only
omega <- lotri(eta.CL ~ 0.4^2,
eta.V ~ 0.0001, # Very small value
eta.KA ~ 0.0001)
```
In the second example, we create a simulation where **all** parameters are allowed to vary.
```
# Simulation with variability in all parameters
omega <- lotri(eta.CL ~ 0.4^2,
eta.V ~ 0.4^2,
eta.KA ~ 0.4^2)
```
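Before running the simulations, it may also help to get a feel for what an eta standard deviation of 0.4 on the log scale implies. A rough back-of-the-envelope check, using the typical clearance of 0.135 L/h:

```
## What does an eta SD of 0.4 mean in practice?
omega_sd <- 0.4
sqrt(exp(omega_sd^2) - 1) * 100                    # approximate CV%, about 42%
0.135 * exp(qnorm(c(0.05, 0.5, 0.95)) * omega_sd)  # 5th, 50th, 95th percentile of CL (L/h)
```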
<div class="lightblue-box">
**Group Question** What do you observe?
Can you describe the differences between these simulated profiles?
</div>
```{r, fig.width=12, fig.height=5}
odeKA1_IIV <- "
CL = TV_CL * exp(eta.CL);
V = TV_V * exp(eta.V);
KA = TV_KA * exp(eta.KA);
KE0 = (CL/V);
d/dt(depot) = -KA * depot;
d/dt(central) = KA * depot - KE0*central;
C1 = central/V * (1+prop.err);
"
## compile the model
modKA1_IIV <- rxode2(model = odeKA1_IIV)
## provide the parameter values to be simulated with proportional error
Params_error <- c( TV_KA = log(2) / 0.5, # 1/h (absorption half-life of 30 minutes)
TV_CL = 0.135, # L/h
TV_V = 8, # L
prop.err = 0
)
# Simulation with variability in clearance only
omega <- lotri(eta.CL ~ 0.4^2,
eta.V ~ 0.0001, # Very small value
eta.KA ~ 0.0001)
Res_1 <- data.frame(rxSolve(modKA1_IIV, Params_error, ev_multiple, omega=omega, nSub=30))
# Simulation with variability in all parameters
omega <- lotri(eta.CL ~ 0.4^2,
eta.V ~ 0.4^2,
eta.KA ~ 0.4^2)
Res_2 <- data.frame(rxSolve(modKA1_IIV, Params_error, ev_multiple, omega=omega, nSub=30))
g_1 <- ggplot(data = Res_1, aes(x=as.numeric(time), y=C1, group=sim.id)) +
geom_line(linewidth=1) +
ylim(0, NA) +
xgx_scale_x_time_units(units_dataset = time_units_dataset,
units_plot = time_units_plot) +
labs(title = "Simulation with variability on CL", y=conc_label)
g_2 <- ggplot(data = Res_2, aes(x=as.numeric(time), y=C1, group=sim.id)) +
geom_line(linewidth=1) +
ylim(0, NA) +
xgx_scale_x_time_units(units_dataset = time_units_dataset,
units_plot = time_units_plot) +
labs(title = "Simulation with variability on all parameters", y=conc_label)
g_1 + g_2
```
# Hands-on 2
## Parameter estimation
### Fit model to data
Now let us do our first parameter estimation (click on the `Show` button to see the commands).
For the following example, we are going to define a one-compartment model with linear absorption and linear elimination kinetics, and fit the model to the warfarin data set.
Notice that we will only estimate inter-individual variability (IIV) for clearance (eta.cl).
Also notice that we have commented out the code for estimating IIV on ka and V; this is because we plan to explore the influence of estimating these IIVs later in the course.
For our residual error model, we choose to start by estimating a proportional error only.
```{r message = FALSE, warning = FALSE, results='hide'}
# ------------------------------------------------------
# Run 001. One comp, lin. abs and elim. Eta on CL only - proportional residual error
# ------------------------------------------------------
One.comp.KA.ODE <- function() {
ini({
# Where initial conditions/variables are specified
lka <- log(1.15) #log ka (1/h)
lcl <- log(0.135) #log Cl (L/h)
lv <- log(8) #log V (L)
prop.err <- 0.15 #proportional error (SD/mean)
# add.err <- 0.6 #additive error (mg/L)
# eta.ka ~ 0.5
eta.cl ~ 0.1
# eta.v ~ 0.1
})
model({
# Where the model is specified
cl <- exp(lcl + eta.cl)
v <- exp(lv)
ka <- exp(lka)
## ODE example
d/dt(depot) = -ka * depot # depot is defined as amount (i.e. the unit is mg)
d/dt(central) = ka * depot - (cl/v) * central
## Concentration is calculated
cp = central/v