---
title: "GLMM worked examples"
author: Ben Bolker
date: "`r format(Sys.time(), '%d %B %Y')`"
bibliography: glmm.bib
csl: apa.csl
output:
  html_document:
    toc: true
    toc_depth: 2
---
<!--
* Jan 2016: lots of minor updates. Dispense with coefplot2
* re-run 10 Aug 2015 with new lme4; tweaks (newlines after plots etc.)
* update 27 Jan 2015: glmmADMB hacks
* update 23 Dec 2014: tweaks for final upload (package installation etc.)
* update 15 July 2014:
change scapeMCMC to plotMCMC
* re-run with new Mathjax targets etc.
-->
```{r opts,echo=FALSE}
library("knitr")
opts_chunk$set(tidy=FALSE,fig.width=6,fig.height=4)
## work around transparency problem
opts_chunk$set(dev.args=list(type="cairo"))
grDevices::X11.options(type='cairo')
library("pander")
```
# Introduction
These are worked examples for a book chapter on mixed models in *Ecological Statistics: Contemporary Theory and Application*, edited by Fox, Negrete-Yankelevich, and Sosa
(available from the [Oxford University Press catalog](http://ukcatalogue.oup.com/product/9780199672554.do) or from [Amazon.com](http://www.amazon.com/Ecological-Statistics-Contemporary-theory-application/dp/0199672555) or [Powell's Books](http://www.powells.com/biblio/62-9780199672554-1) or ...).
Data and source code for this file are currently available at [Github](http://github.com/bbolker/mixedmodels-misc).
There's a *lot* of material here. I have erred on the side of including things, and on the side of compact rather than elementary code. Try not to be overwhelmed: skim it the first time through, and thereafter focus on the parts that are most relevant to your analyses.
I use a very large set of packages in this example, because I am trying out a wide range of approaches. You should whittle these down to just the ones you need for any given analysis. If you want to install all of them at once, the following code should be sufficient (remember that you only have to install packages once on each computer you are using, but you have to use `library()` to load them in every new R session). The package noted below as "not on CRAN" (`glmmADMB`) is hosted on R-forge (`http://r-forge.r-project.org`), but in order to install it you may have to install from `http://www.math.mcmaster.ca/bolker/R` instead, as below.
```{r install_pkgs,eval=FALSE}
pkgs_CRAN <- c("lme4","MCMCglmm","blme",
"pbkrtest","coda","aods3","bbmle","ggplot2",
"reshape2","plyr","numDeriv","Hmisc",
"plotMCMC","gridExtra","R2admb",
"broom.mixed","dotwhisker")
install.packages(pkgs_CRAN)
rr <- "http://www.math.mcmaster.ca/bolker/R"
install.packages("glmmADMB",type="source",repos=rr)
library("devtools")
```
```{r pkgs,message=FALSE,warning=FALSE}
## primary GLMM-fitting packages:
library("lme4")
library("glmmADMB") ## (not on CRAN)
library("glmmTMB")
library("MCMCglmm")
library("blme")
library("MASS") ## for glmmPQL (base R)
library("nlme") ## for intervals(), tundra example (base R)
## auxiliary
library("ggplot2") ## for pretty plots generally
## ggplot customization:
theme_set(theme_bw())
scale_colour_discrete <- function(...,palette="Set1") {
    scale_colour_brewer(...,palette=palette)
}
scale_colour_orig <- ggplot2::scale_colour_discrete
scale_fill_discrete <- function(...,palette="Set1") {
    scale_fill_brewer(...,palette=palette)
}
## to squash facets together ...
zmargin <- theme(panel.spacing=grid::unit(0,"lines"))
library("gridExtra") ## for grid.arrange()
library("broom.mixed")
## n.b. as of 25 Sep 2018, need bbolker github version of dotwhisker ...
library("dotwhisker")
library("coda") ## MCMC diagnostics
library("aods3") ## overdispersion diagnostics
library("plotMCMC") ## pretty plots from MCMC fits
library("bbmle") ## AICtab
library("pbkrtest") ## parametric bootstrap
library("Hmisc")
## for general-purpose reshaping and data manipulation:
library("reshape2")
library("plyr")
## for illustrating effects of observation-level variance in binary data:
library("numDeriv")
```
Package versions used (core packages only):
```{r pkgversion,echo=FALSE}
pkgs_used <- c("lme4","glmmADMB","MCMCglmm","blme",
"pbkrtest","coda","aods3","bbmle","glmmTMB")
pkgs_CRAN <- c("lme4","MCMCglmm","blme",
"pbkrtest","coda","aods3","bbmle",
"ggplot2","reshape2","plyr","numDeriv",
"Hmisc","plotMCMC","gridExtra", "glmmTMB")
##print(paste(paste0('"',pkgs_CRAN,'"'),collapse=","),quote=FALSE)
vers <- sapply(pkgs_used,function(x) as.character(packageVersion(x)))
print(vers,quote=FALSE)
```
If you are using Windows and have trouble running `glmmADMB` on some models, there is a (slightly tedious) workaround:
* check the [directory that contains the latest version of the glmmADMB binary component](http://www.admb-project.org/buildbot/glmmadmb/); find the most recent version of the binaries for your operating system (Windows, MacOS, Linux [Fedora or Ubuntu], and 32 vs 64 bits: check `sessionInfo()$platform` for more information). For example, the most recent Windows binary as of this writing is `glmmadmb-mingw64-r2885-windows8-mingw64.exe`. If you find more than one file that seems to apply, just pick one at random.
* Once you've figured out what file to download, execute the following code (substituting the name of the appropriate binary file in the last line):
```{r glmmadmb_hack,eval=FALSE}
library("glmmADMB")
bb <- glmmADMB:::get_bin_loc()[["bin_loc"]]
bpath <- gsub("glmmadmb$","",bb)
file.copy(bb,paste0(bpath,"glmmadmb.bak"))
bburl <- "http://admb-project.org/buildbot/glmmadmb/"
download.file(paste0(bburl,
                     "glmmadmb-mingw64-r2885-windows8-mingw64.exe"),
              destfile=bb)
```
# Tundra carbon
These data were originally analyzed in @belshe_tundra_2013.
## Data exploration
Read the data:
```{r tundradat}
mc1 <- read.csv("data/tundra.csv",na.strings=c("-","NA"))
```
```{r tundradat_tab,echo=FALSE,results="asis"}
dd <- read.table(sep="|",text="
Year | year of data collection
Site | location
Lat | latitude
Long | longitude
Method | ?
GS.NEE | net ecosystem exchange, growing season
GS.GPP | gross primary production, growing season
GS.ER | ecosystem respiration, growing season
W.NEE | net ecosystem exchange, winter
MAT | ?
TAP | ?
Reference | Literature reference")
pander(dd)
```
A first look at the data, plotting net ecosystem exchange during the growing season (`GS.NEE`) against year, using colour to distinguish sites, and superimposing a separate linear regression (with confidence bands) per site:
```{r tundra_plot1,warning=FALSE}
ggplot(mc1,aes(x=Year,y=GS.NEE,colour=Site))+geom_point()+
geom_smooth(method="lm",alpha=0.3)+
## oob=squish retains the (truncated) confidence bands;
## otherwise bands that went outside the limits would disappear
scale_y_continuous(limits=c(-150,400),oob=scales::squish)+
## use original colours (Dark2 doesn't have enough distinct values)
## suppress legend
scale_colour_orig(guide="none")
```
We have suppressed the legend for the colours,
because there are a lot of sites and their names will not be
particularly meaningful except to tundra biologists.
There is lots of variability among sites,
both in the trend and in the uncertainty of the trend. The uncertainty
is driven partly by the noisiness of the data and partly by the
sparsity/shortness of the time series for individual sites.
In some cases there were multiple observations per site in a single year.
We could have handled this by adding a year-within-site random variable,
but it proved to be easier to aggregate these cases by calculating the
mean, and account for the different amount of sampling by weighting
site-year combinations according to the number of samples.
The following (rather complicated) function aggregates the data; in the course of our analysis we ended up needing to aggregate several different response variables according to several different sets of grouping variables. We re-centred the year to have year=0 at the beginning of the observation period.
```{r tundra_agg}
aggfun <- function(dat,agg=c("Year","Site"),response="Winter.adj",
                   baseYear=min(mc1$Year)) {
    ## select only site, year, and response
    sub1 <- na.omit(dat[,c("Site","Year",response)])
    ## compute means of non-aggregation variables
    agg1 <- aggregate(sub1[!names(sub1) %in% agg],by=sub1[agg],FUN=mean)
    ## compute sample size of non-aggregation variables
    aggn <- aggregate(sub1[response],by=sub1[agg],FUN=length)
    names(aggn)[ncol(aggn)] <- "n" ## assumes response is last column
    ## put mean and sample size together
    agg2 <- merge(agg1,aggn)
    ## recompute centred year
    agg2$cYear <- agg2$Year - baseYear
    agg2
}
mc2 <- aggfun(mc1,response="GS.NEE")
## center year at the mean rather than the date of
## the first observation:
mc2B <- aggfun(mc1,response="GS.NEE",baseYear=mean(mc1$Year))
```
The aggregated data show a similar picture:
```{r tundra_plot2,warning=FALSE}
ggplot(mc2,aes(x=cYear,y=GS.NEE,colour=Site))+
geom_point(aes(size=n),alpha=0.7)+
geom_smooth(method="lm",alpha=0.3,aes(weight=n))+
scale_y_continuous(limits=c(-150,400),oob=scales::squish)+
scale_colour_orig(guide="none")+
scale_size_continuous(range=c(2,5),breaks=c(1,3,6))
```
## Fitting
We use `nlme::lme` because at present
it is the only *easy* way to allow
for temporal autocorrelation in an LMM in R.
* we use `corCAR1`, which implements a continuous-time
first-order autocorrelation model (i.e. autocorrelation
declines exponentially with time), because we have missing
values in the data. The more standard discrete-time autocorrelation
models (`lme` offers `corAR1` for a first-order model and `corARMA`
for a more general model) don't work with missing data.
* The `weights=varFixed(~I(1/n))` specifies that
the residual variance for each (aggregated) data point
is inversely proportional to the number of samples.
* The `control` argument lets the model try more iterations
(otherwise we get an error).
```{r tundra_fit1,cache=TRUE}
cmod_lme <- lme(GS.NEE ~ cYear,
data=mc2, method="REML",
random = ~ 1 + cYear | Site,
correlation=corCAR1(form=~cYear|Site),
weights=varFixed(~I(1/n)),
control=list(maxIter=10000, niterEM=10000))
```
Model summary:
```{r tundra_sum1}
summary(cmod_lme)
```
The results generally look sensible: the only warning sign is that
the among-site variation in baseline NEE (`(Intercept)`) and
the among-site variation in slope are perfectly correlated (i.e., the
`-1` term in the `Corr` column under `Random effects`). We weren't
happy with this, but we kept the full random effects model anyway.
The alternative, dropping the random effect of year, seemed
unacceptable in our case. Recentring the data at the mean year ($\approx$ 1997)
improves things slightly:
```{r recenter}
cmod2_lme <- update(cmod_lme,data=mc2B)
```
`VarCorr()` or `summary()` show that the intercept
and slope random effects are still very strongly, but not perfectly, correlated
($\rho=`r as.numeric(VarCorr(cmod2_lme)["cYear","Corr"])`$); the fixed-effect intercept is very
different (of course), and the year effect is almost identical, but its
standard error is larger (so its $p$-value doubles).
```{r prtlmetab,echo=FALSE}
prtlmetab <- function(x)
    printCoefmat(summary(x)$tTab,
                 tst.ind=c(1,2,4),
                 digits=3,
                 has.Pvalue=TRUE)
```
```{r ctr_sum,echo=FALSE}
prtlmetab(cmod2_lme)
```
We can also fit the model with `lmer` from the `lme4` package: it's faster and allows for crossed random effects (neither of which really matters here), but unfortunately it can't incorporate temporal autocorrelation in the model:
```{r tundra_lmer}
cmod_lmer <- lmer(GS.NEE ~ cYear + (1+cYear|Site),
data=mc2B, REML=TRUE,
weights=n)
summary(cmod_lmer)
```
Note that `weights` here are specified as `n` rather than `1/n`: `varFixed()` in the `lme` call specifies the residual *variance* of each observation (proportional to `1/n`), whereas `lmer`'s `weights` argument specifies the relative precision of each observation (proportional to `n`).
The results are not *too* different -- the `cYear` fixed-effect slope is slightly smaller than for the `lme` fit (`r round(fixef(cmod_lmer)["cYear"],2)` vs. `r round(fixef(cmod2_lme)["cYear"],2)` (g C m^2 /year/season)/year), but the standard error is smaller, so the $t$-statistic is similar (`r round(coef(summary(cmod_lmer))["cYear","t value"],2)` vs. `r round(summary(cmod2_lme)$tTab["cYear","t-value"],2)`).
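For a compact side-by-side view of the two sets of fixed-effect estimates (both `nlme` and `lme4` provide a `fixef()` method):
```{r fixef_compare}
cbind(lme=fixef(cmod2_lme),lmer=fixef(cmod_lmer))
```
We can also fit the same model with `glmmTMB`: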
```{r tundra_glmmTMB}
cmod_glmmTMB <- glmmTMB(GS.NEE ~ cYear + (1+cYear|Site),
data=mc2B,
weights=n)
summary(cmod_glmmTMB)
```
Compare parameters (not yet working!)
```{r dwplot,eval=FALSE}
dwplot(list(glmmTMB=cmod_glmmTMB,lmer=cmod_lmer),by_2sd=TRUE)
```
We have to admit that the model we are using
is just a little bit more complex than the data can easily handle, and in particular that Toolik is quite different from the other sites, which makes the estimates somewhat unstable.
## Diagnostics
Here are the default residuals-vs.-fitted plot; the scale-location plot
($\sqrt{|\textrm{residual}|}$ vs. fitted); a plot of the residuals
as a function of year; and the Q-Q plot.
```{r tundra_diag,fig.width=6,fig.height=6}
colvec <- c("#ff1111","#007eff") ## second colour matches lattice default
grid.arrange(plot(cmod2_lme,type=c("p","smooth")),
             plot(cmod2_lme,sqrt(abs(resid(.)))~fitted(.),
                  col=ifelse(mc2$Site=="Toolik, AK",colvec[1],colvec[2]),
                  type=c("p","smooth"),ylab=expression(sqrt(abs(resid)))),
             plot(cmod2_lme,resid(.,type="pearson")~cYear,
                  type=c("p","smooth")),
             qqnorm(cmod2_lme,abline=c(0,1),
                    col=ifelse(mc2$Site=="Toolik, AK",colvec[1],colvec[2])))
```
The diagnostics look OK -- mostly.
* The biggest concern is that the scale-location plot (upper right)
shows that the variance increases with the fitted values; this pattern is
entirely driven by data from the Toolik site (see below).
(I initially diagnosed the problem by using `id=0.05`, which labels relatively large residuals by site; since they were mostly from Toolik, I changed the diagnostic plot above to colour all of the Toolik points differently.) The standard approach would be to transform the data, but transforming in a way that (1) preserves the linear trends in the data and (2) allows for negative as well as positive values is a bit challenging (we could try the `yjPower()` function in the `car` package; see the sketch after this list). We could also use
the `weights` argument to specify that the variance increases with the fitted values,
but we are already using `weights` to account for the number of samples aggregated
per data point ...
* The Q-Q plot is a little bit funky, too, also partially driven by Toolik, although non-normality is generally less of a concern than heteroscedasticity.
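As mentioned above, here is a sketch of the Yeo-Johnson transformation (it assumes the `car` package is installed; `lambda=0.5` is an arbitrary illustrative choice, not a value estimated from the data, and the transformed response is not used anywhere else in this analysis):
```{r yj_sketch,eval=FALSE}
## Yeo-Johnson power transformation: handles negative as well as
## positive response values
head(car::yjPower(mc2$GS.NEE, lambda=0.5))
```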
```{r tundra_toolik,warning=FALSE}
ggplot(mc2,aes(x=cYear,y=GS.NEE,colour=(Site=="Toolik, AK")))+
geom_point(aes(size=n),alpha=0.7)+
geom_smooth(method="lm",alpha=0.3,aes(weight=n,group=Site))+
scale_colour_orig(name="Site",breaks=0:1,labels=c("other","Toolik"))+
scale_y_continuous(limits=c(-150,400),oob=scales::squish)+
scale_size_continuous(range=c(2,5),breaks=c(1,3,6))
```
We should also check whether our model has dealt with temporal
autocorrelation properly. One potential trap when diagnosing
autocorrelation in `lme` models is that the `ACF()` function
by default uses a form of the residuals that does not correct
for autocorrelation, *even when there is a correlation term
specified in the fitted model*: you have to use `resType="normalized"`
to get autocorrelation-corrected residuals, as shown in the right-hand
plot below:
```{r tundra_acfplot,fig.width=8}
grid.arrange(plot(ACF(cmod2_lme),alpha=0.05),
plot(ACF(cmod2_lme,resType="normalized"),alpha=0.05),
nrow=1)
```
The autocorrelation term in the model has successfully corrected
for the autocorrelation at lag 1, but there are some large
anomalies for longer time lags. These are somewhat worrisome, but autocorrelation estimates this large (absolute value >1) probably represent other anomalies in the data, rather than an actual signal of autocorrelation.
We can try refitting the model without the Toolik data.
As it turns out we run into convergence problems and have to
drop the autocorrelation structure from the model.
```{r refit_notoolik,fig.keep="none"}
cmod3_lme <- update(cmod2_lme,
data=subset(mc2,Site!="Toolik, AK"),
correlation=NULL)
plot(ACF(cmod3_lme),alpha=0.05) ## not shown, but looks fine
```
We are again stuck between what the data might tell us to do
(throw out the Toolik data), and what we want to do as biologists
(incorporate all the data). At least the results (the slope of
NEE with respect to time) are not too much different when we fit
the model without the Toolik data (the slope is actually steeper, although noisier): the coefficient
table from `summary(cmod3_lme)` looks like this:
```{r tooliktab,echo=FALSE}
prtlmetab(cmod3_lme)
```
What about the estimated random effects (conditional modes)? They are easy to extract, using the `ranef()` method, which is the same way you would do it with `lme4` or `glmmADMB`. Plotting them takes a little bit more effort -- the default plot produced by the `nlme` package is OK, but (1) it doesn't plot the random effects in sorted order (which is a good default); (2) it doesn't produce standard errors; (3) it makes it a bit hard if we want to plot only one set of random effects (in this case, since `lme` has estimated the among-site variance in intercept to be very close to zero, it's not really worth plotting the intercept random effects).
```{r tundra_ranef}
rr <- ranef(cmod2_lme)
rr.order <- rr[order(rr$cYear),] ## order by RE slope value
```
```{r tundra_ranef_plot,echo=FALSE,fig.width=10}
rr.slope <- rr.order[,2,drop=FALSE]
attr(rr.slope,"effectNames") <- "cYear"
attr(rr.slope,"label") <- "Random effects"
attr(rr.slope,"level") <- 1
attr(rr.slope,"standardized") <- FALSE
attr(rr.slope,"grpNames") <- "Site"
grid.arrange(plot(rr.slope),qqmath(rr.slope[,1],abline=c(0,1),
ylab="Random-effects slope"),nrow=1)
```
We can more easily plot the random effects for the `lmer` fit (we also get confidence intervals without too much fuss):
```{r rr_lme4_ranefplot}
dotplot(ranef(cmod_lmer,condVar=TRUE),
lattice.options=list(layout=c(1,2)))
```
* In contrast to the `lme` fit, the `lmer` fit gives a non-negligible estimate for the among-site standard deviation.
* You can use `qqmath()` instead of `dotplot()` to get Q-Q plots of the conditional modes.
* Toolik still stands out as an outlier.
## Inference
Looking more carefully at the model fit, we can see that the intercept variation has disappeared -- this means that, at the temporal center of the data, the variation among sites is essentially indistinguishable from measurement error. There is still substantial variation in the slope, along with
the almost-perfect correlation described above:
```{r cmod_varcorr}
VarCorr(cmod2_lme)
```
Fixed effects:
```{r cmod_tab,echo=FALSE}
prtlmetab(cmod2_lme)
```
Because the model only has a single continuous predictor, the conclusions of `summary()`, `drop1()`, and `anova()` are all equivalent (the `anova` method for `lme` has a useful `type="marginal"` option, but you should be **very** careful when using it with models containing interactions -- `car::Anova` is safer). **Note** that `lme` gets the degrees-of-freedom calculation wrong for random-slopes models: because the year effect varies among sites, it should be tested with 23 denominator df rather than 53 as reported by `lme`. However, it doesn't make a huge difference here:
```{r cmod_fstat}
fstat <- anova(cmod2_lme)["cYear","F-value"]
pf(fstat,1,53,lower.tail=FALSE) ## as reported by lme
pf(fstat,1,23,lower.tail=FALSE) ## correct
```
In Belshe *et al.* we reported that when we simulated data according to the null hypothesis (incorporating information about the variation in intercepts and slopes across sites, and autocorrelation, but with the population-level slope set to zero), we saw an inflated type I error. Therefore, we used this parametric bootstrap distribution of the $t$ statistic to test the fixed effect of slope, which approximately doubled our $p$-value.
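Here is a minimal sketch of that idea (⌛), using the `lmer` fit rather than the `lme` fit (so it ignores temporal autocorrelation, and is *not* the exact procedure we used in the paper): simulate responses from the null model (population-level slope removed, random slopes retained), refit the full model to each simulated data set, and compare the observed $t$ statistic to the resulting null distribution.
```{r pboot_sketch,eval=FALSE}
cmod_lmer0 <- update(cmod_lmer, . ~ . - cYear) ## null model: no fixed slope
tstat <- function(fit) coef(summary(fit))["cYear","t value"]
set.seed(101)
nullstats <- replicate(200, {
    ynew <- simulate(cmod_lmer0)[[1]]  ## simulate response under the null
    tstat(refit(cmod_lmer, ynew))      ## refit the full model
})
## two-sided parametric bootstrap p-value
mean(abs(nullstats) >= abs(tstat(cmod_lmer)))
```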
The `nlme` package uses `intervals()`, rather than `confint()`, to extract confidence intervals: if we try `intervals(cmod2_lme)` we get an error about "non-positive definite approximate variance-covariance" (which should be yet another indication that our model is somewhat unstable or overfitted). However, we can work around this by restricting our question to the fixed effects only:
```{r t_int}
intervals(cmod2_lme,which="fixed")
```
The `lmer` likelihood profile (computed as `pp` in the last chunk below) is basically messed up (lots of warnings, lots of `NA`s).
This should give us
pause, but if we just want to get Wald intervals to see how well they
match those given by `nlme::intervals()`, we can do it as follows:
```{r cmod_wald_confint}
confint(cmod_lmer,parm="beta_",method="Wald")
```
They're not identical, but they're reasonably similar (in general, you should
always expect that confidence interval estimates will be more
unstable/variable/sensitive to differences in implementation than
point estimates).
We could plot the profiles with the following code, but the results
are too ugly even to show:
```{r cprofplot,warning=FALSE,eval=FALSE}
pp <- profile(cmod_lmer) ## generates many warnings
ggplot(as.data.frame(pp),aes(.focal,.zeta))+
geom_point()+geom_line()+
facet_wrap(~.par,scale="free_x")+
geom_hline(yintercept=0,colour="gray")+
geom_hline(yintercept=c(-1.96,1.96),linetype=2,
colour="gray")
```
# *Culcita*
The data are from @mckeon_multiple_2012.
## Data exploration
The basic data can be reduced, for the purposes
of this exercise, to a single treatment (`ttt`)
[which consists of combinations of different
symbionts: crab, shrimp, both or neither]; a binary response (`predation`);
and a blocking factor (`block`).
```{r getdat}
load("data/culcita.RData")
summary(culcita_dat)
```
Confirm that this is a randomized block design with
2 replications per treatment per block:
```{r exptab}
with(culcita_dat,table(ttt,block))
```
(the `ftable()` function can be handy for displaying experimental designs with more than two classification factors; a quick illustration follows).
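For example, a (purely illustrative) three-way breakdown of these data, crossing treatment and predation outcome within block:
```{r ftable_example}
ftable(xtabs(~ttt+predation+block,data=culcita_dat))
```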
Plot summary statistics (mean and bootstrap 95% CI) for treatments, ignoring block structure:
```{r plot1,message=FALSE}
ggplot(culcita_dat,aes(x=ttt,y=predation))+
stat_summary(fun.data=mean_cl_boot,size=2)+
ylim(c(0,1))
```
The basic conclusion is that symbionts have a definite protective
effect; the combination of two symbionts seems *slightly* more
protective than a single symbiont. However, we have to see if
this holds up when we account for among-plot variation.
I have yet to find a good way to visualize these data that displays
the among-block variation.
This is my best attempt, but it's more artistic than compelling:
```{r plot2,message=FALSE}
ggplot(culcita_dat,aes(x=ttt,y=predation,colour=block,group=block))+
scale_colour_orig() +
stat_summary(fun.y=sum,geom="line",alpha=0.4)+
stat_summary(fun.y=sum,geom="point",alpha=0.7,
position=position_dodge(width=0.25))
```
## Fitting
First, fit the model in each framework.
### lme4::glmer
(The syntax `pkg::fun` refers in general to a function `fun` in the R package `pkg`.)
```{r lme4_0,message=FALSE,cache=TRUE}
cmod_lme4_L <- glmer(predation~ttt+(1|block),data=culcita_dat,
family=binomial)
```
Take a quick look at the results:
```{r lme4_results}
print(summary(cmod_lme4_L),correlation=FALSE)
```
It would be nice to fit the model with a random effect of treatments across blocks as well, but it takes a long time and warns that it has failed to converge ...
```{r c_block,cache=TRUE}
cmod_lme4_block <- update(cmod_lme4_L,.~ttt+(ttt|block))
```
(we could extend the maximum number of iterations by using (e.g.)
`control=glmerControl(optCtrl=list(maxfun=1e5))`, but it probably
wouldn't help). (When you use `update()` you normally specify
which variables to add (e.g. `.~.+new_var`) or subtract
(`.~.-dropped_var`). Here I replaced the entire
right-hand side of the formula: I might have been able to
replace the intercept-only random effect with a random effect of
treatment by subtracting and adding, i.e. `.~.-(1|block)+(ttt|block)`, but I wasn't
sure it would work ...)
If we examine the parameter estimates from this fit, we see various additional indications of trouble. The `crab` parameter and several of the variance-covariance parameters are very large and some of the variance-covariance parameters are very strongly correlated with each other, both indications that the model is overfitted:
```{r c_block_est}
fixef(cmod_lme4_block)
VarCorr(cmod_lme4_block)
```
We can also use model comparison to see that it's not worth
adding the extra terms:
```{r AICcomp1}
AICtab(cmod_lme4_L,cmod_lme4_block,nobs=nrow(culcita_dat))
```
(If we use AICc instead of AIC, `cmod_lme4_block` does even worse,
by 6 AICc units, as shown below ...)
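The AICc comparison can be run with `AICctab` from the `bbmle` package (already loaded above):
```{r AICccomp1}
AICctab(cmod_lme4_L,cmod_lme4_block,nobs=nrow(culcita_dat))
```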
Fitting with Gauss-Hermite quadrature:
```{r c_AGQ}
cmod_lme4_AGQ10 <- update(cmod_lme4_L,nAGQ=10)
```
```{r c_AGQ_all,echo=FALSE,cache=TRUE}
agqfun <- function(i) {
    f <- update(cmod_lme4_L,nAGQ=i)
    c(fixef(f),sqrt(unlist(VarCorr(f))))
}
agqvec <- 1:25
agqres <- sapply(agqvec,agqfun)
```
```{r fix_agqres,echo=FALSE}
dimnames(agqres) <- list(c("intercept","crabs","shrimp",
"both","sd.block"),agqvec)
## melt and restore order of parameter labels
m <- transform(melt(agqres),
Var1=factor(Var1,levels=rownames(agqres)))
```
The next plot (showing the values of each parameter
as a function of the number of quadrature points)
shows that using more quadrature points
does have some effect, but on an absolute
scale the parameters don't vary much: probably the biggest effect
is in the estimated among-block standard deviation, increasing
irregularly from `r round(agqres["sd.block",1],2)` for the
Laplace approximation ($n=1$) to an asymptote of
`r round(agqres["sd.block",25],2)`.
```{r c_AGQ_plot,fig.width=10,echo=FALSE}
ggplot(m,aes(x=Var2,y=value))+
geom_line()+facet_wrap(~Var1,scale="free",nrow=1)+
labs(x="Number of adaptive Gauss-Hermite\nquadrature points")
```
### glmmADMB
The only difference in moving from `lme4` to `glmmADMB` is that you have to express the `family` argument as a quoted string (i.e., `family="binomial"` rather than `family=binomial`):
```{r glmmADMB_culc}
cmod_gA_L <- glmmadmb(predation~ttt+(1|block),data=culcita_dat,
family="binomial")
```
The `summary()` for `glmmADMB` is nearly identical.
```{r checkSE,echo=FALSE,eval=FALSE}
library(numDeriv)
## compute SEs from Hessian
hfun <- function(model,incTheta=TRUE,...) {
    dd <- update(model,devFunOnly=TRUE)
    if (!incTheta) {
        dd2 <- dd
        th <- getME(model,"theta")
        dd <- function(x) {
            dd2(c(th,x))
        }
        v <- getME(model,"beta")
    } else v <- unlist(getME(model,c("theta","beta")))
    h <- hessian(dd,v,...)/2
    sqrt(diag(solve(h)))
}
h <- hfun(cmod_lme4_L)
h2 <- hfun(cmod_lme4_L,incTheta=FALSE)
cmat <- cbind(h[-1],
              h2,
              coef(summary(cmod_lme4_L))[,"Std. Error"],
              coef(summary(cmod_gA_L))[,"Std. Error"])
colnames(cmat) <- c("hess","hess2","lme4","glmmADMB")
cmat
```
```{r glmmADMB_culc_sum}
summary(cmod_gA_L)
```
### MCMCglmm
The `MCMCglmm` random-effects specification (`random=~block`) is a little different
from the one used by `glmer` and `glmmadmb`; other
differences include:

* the Bernoulli (binomial with $N=1$) response is specified as `family="categorical"` (you would code a binomial response with $N>1$ as a two-column matrix of successes and failures, as in `glm()`, and specify `family="multinomial2"`; see the sketch after this list)
* After looking at the diagnostics on the default fit (see below), and with advice from Jarrod Hadfield (the author of `MCMCglmm`), I decided to
    * Set priors.
        * The standard way to set priors for the variance-covariance matrices is to specify them as *inverse-Wishart* distributions; the corresponding [Wikipedia page](http://en.wikipedia.org/wiki/Inverse-Wishart_distribution) explains that for a single variance, the inverse-Wishart reduces to an inverse-gamma distribution with shape parameter $\alpha$ equal to half the Wishart shape ($\nu$) parameter. (The [inverse-gamma distribution](http://en.wikipedia.org/wiki/Inverse_gamma) is in turn the distribution of a random variable $X$ where $1/X$ is [Gamma-distributed](http://en.wikipedia.org/wiki/Gamma_distribution) ...) I initially used an informative prior with `G=list(list(V=11,nu=1))` (i.e., specifying that the mean of the variance is approximately equal to the among-block variance we got from `glmer`/`glmmADMB`, and that the shape parameter is 1 (weak)).
        * Hadfield suggested that I could instead "use parameter expanded priors which can be made approximately flat (but still proper) for the standard deviation [see `prior.c` below]. Transforming the variance parameters to the intra-class correlation (ICC) can cause problems however, as it puts high prior mass on extreme ICC's. Parameter expanded priors can also improve mixing considerably for the variances that are close to zero."
        * For the observation-level variance (`R`) I initially used a strong Gamma-distributed prior with mean 1 and a shape parameter of 10, i.e. $1/\sigma^2$ was Gamma-distributed with a mean of 1 and a coefficient of variation of $1/\sqrt{5}$, but Hadfield suggested that I instead simply fix `R` (which is unidentifiable anyway for binomial trials with a single observation) to 1.
    * Specify the number of iterations (`nitt`), number of iterations to discard (`burnin`), and thinning (`thin`) manually, to get a much longer chain than the defaults (`nitt=13000`, `thin=10`, `burnin=3000`).
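The sketch mentioned in the first bullet above: a hypothetical example of the two-column response specification, using made-up toy data (the variable names `alive`/`dead` are invented for illustration; nothing here comes from the *Culcita* data, and the chunk is not run by default):
```{r multinom2_sketch,eval=FALSE}
## toy data, for syntax illustration only
toydat <- data.frame(block=factor(rep(1:10,each=3)))
toydat$alive <- rbinom(nrow(toydat),size=5,prob=0.7)
toydat$dead <- 5 - toydat$alive
MCMCglmm(cbind(alive,dead) ~ 1, random=~block,
         family="multinomial2", data=toydat, verbose=FALSE)
```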
First attempts, first with everything set to the default and then with a longer run:
⌛ (60 seconds: *in general I will use this hourglass icon to indicate code that might take a long time to run -- you might want to skip running these chunks.*)
```{r MCMCglmm1,cache=TRUE}
cmod_MG0 <- MCMCglmm(predation~ttt,
random=~block,data=culcita_dat,
family="categorical",
verbose=FALSE)
cmod_MG1 <- MCMCglmm(predation~ttt,
random=~block,data=culcita_dat,
family="categorical",verbose=FALSE,
nitt=5e5,burnin=5e4,thin=100)
```
As we will see below, simply running for longer won't fix everything. Instead, we set stronger/better priors as described above:
```{r MCMCglmm_longrun,cache=TRUE}
prior.c <- list(R=list(V=1,fix=1),
G=list(G1=list(V=1, nu=1, alpha.mu=0, alpha.V=1000)))
cmod_MG <- MCMCglmm(predation~ttt,
random=~block,data=culcita_dat,
slice=TRUE, ## slice sampling: for binary trials
## with independent residuals
pl=TRUE, ## save posterior distributions of latent
## variables (= conditional modes)
prior=prior.c,
family="categorical",verbose=FALSE,
nitt=13000*10,burnin=3000*10,thin=5*10)
```
```{r mcmcglmm_sum}
(cmod_MGsum <- summary(cmod_MG))
```
Take note of the `eff.samp` columns in the summary output, which describe the size of the sample we have taken from the posterior distribution corrected for autocorrelation; values much smaller than the nominal sample size (listed in the third line of the summary) indicate poor mixing.
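The effective sample sizes shown in the summary can also be computed directly with `effectiveSize()` from the `coda` package (loaded above):
```{r effsamp_check}
effectiveSize(cmod_MG$Sol)  ## fixed effects
effectiveSize(cmod_MG$VCV)  ## (co)variance parameters
```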
## Diagnostics and model summaries
### lme4
Diagnostic plots:
```{r cmod_lme4_L_diagplot,fig.width=8}
p1 <- plot(cmod_lme4_L,id=0.05,idLabels=~.obs)
p2 <- plot(cmod_lme4_L,ylim=c(-1.5,1),type=c("p","smooth"))
grid.arrange(p1,p2,nrow=1)
```
The only thing that the default (left-hand)
diagnostic plot tells us is that observation \#20
has a (very) extreme residual (we use
`idLabels=~.obs` to get outliers labelled by their
observation number; otherwise, by default, they are
labelled by their groups); if we use `ylim=c(-1.5,1)`
to limit the $y$-range,
we get (on the right) the usual not-very-informative residuals plot expected
from binary data.
## Digression: complete separation
If we exclude observation 20, we see the symptoms of [complete separation](http://www.ats.ucla.edu/stat/mult_pkg/faq/general/complete_separation_logit_models.htm) (e.g., parameter values with $|\beta|>10$, huge Wald confidence intervals and large Wald $p$-values).
We refit the model with the new data ...
```{r refit_outlier,cache=TRUE}
newdat <- subset(culcita_dat,
abs(resid(cmod_lme4_L,"pearson"))<10)
cmod_lme4_L2 <- update(cmod_lme4_L,data=newdat)
```
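A quick look at the fixed effects of the refitted model shows the symptom directly; some of the estimates (on the logit scale) should be extreme:
```{r sep_check}
fixef(cmod_lme4_L2)
```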
We can use `bglmer` from the `blme` package to impose zero-mean Normal priors on the fixed effects (a 4 $\times$ 4 diagonal matrix with diagonal elements equal to 9, for variances of 9 or standard deviations of 3), or add a `B` element to the priors list for `MCMCglmm` to specify the same priors:
```{r constr,cache=TRUE}
cmod_blme_L2 <- bglmer(predation~ttt+(1|block),data=newdat,
family=binomial,
fixef.prior = normal(cov = diag(9,4)))
cmod_MG2 <- MCMCglmm(predation~ttt,
random=~block,data=newdat,
family="categorical",verbose=FALSE,
nitt=13e4,burnin=3e4,thin=50,
prior=c(prior.c,
list(B=list(mu=rep(0,4),
V=diag(9,4)))))
```
Another possible option (haven't worked out the details yet!) is to
make the treatment effect a *random effect*, which allows for some
shrinkage toward the overall mean ... in some cases it might be
useful, or necessary, to fix the variance parameter for the treatment
random effect; at present, this would have to be done at a low level
(see `?lme4::modular`):
```{r ttt_random}
cmod_lme4_L3 <- glmer(predation~(1|ttt)+(1|block),
data=newdat,family=binomial)
```
Both methods of constraining the parameters give similar results:
```{r constr_CI,echo=FALSE}
modList <- list(glmer=cmod_lme4_L2,bglmer=cmod_blme_L2,MCMCglmm=cmod_MG2)
dotwhisker::dwplot(modList,effects="fixed",by_2sd=FALSE)
```
There are other packages in R (`brglm`, `logistf`) that can handle completely separated data, but they only apply to logistic regression, and they cannot simultaneously incorporate random effects (@pasch_interspecific_2013 dropped random effects and used `brglm` to handle completely separated data, arguing that random effects were small and/or hard to estimate in their system.)
(*End of digression*)
We can do a little bit better with boxplots grouped by
treatment (again limiting the range to exclude the outlier).
```{r cmod_lme4_L_boxplot}
plot(cmod_lme4_L,ttt~resid(.,type="pearson"),xlim=c(-1.5,1))
```
Check the random effects:
```{r cmod_ranef}
dotplot(ranef(cmod_lme4_L,condVar=TRUE))
```
There are only a few unique values
of the random effects because there are only a few
possible configurations per block of predation/no-predation in
each treatment.
It's worth checking `methods(class="merMod")` to see all the things
you can do with the fit (some of them may be a bit obscure ...)
The most important are listed below; a few are demonstrated in the short example after the list.
* `fixef()` to extract the vector of fixed-effect parameters (confusingly,
`coef()` -- which is the accessor method for finding coefficients for
most other models in R -- gives a matrix showing the estimated coefficients
for each block (incorporating the random effects), which I don't find
useful very often)
* `coef(summary(.))` to extract just the table of estimates with standard errors,
$z$ values, and $p$-values
* `VarCorr()` to extract the estimates of the random effect variances
and covariances. (Use `unlist(VarCorr(.))` if you want the variances
as a vector.)
* `ranef()` to extract estimates of the random effects, and `dotplot(ranef(.,condVar=TRUE))` or `qqmath(ranef(.,condVar=TRUE))` to explore them graphically
* `plot()` for diagnostic plots
* `AIC()`, `anova()`, `drop1()`, `confint()`, `profile()` for various statistical tests
* `predict()` for predictions; `simulate()` for simulations based on the model
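For example, a quick demonstration of the first few accessors:
```{r merMod_accessors}
fixef(cmod_lme4_L)                ## fixed-effect parameter vector
head(coef(cmod_lme4_L)$block, 3)  ## per-block coefficients (fixed + random)
unlist(VarCorr(cmod_lme4_L))      ## random-effect variance(s) as a vector
```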
### glmmADMB
The methods and diagnostics for `glmmADMB` are similar, although
not quite as well developed.
You can still create the same kinds of diagnostic plots
(I have done this one with `ggplot2` rather than `lattice`):
```{r glmmADMB_diag}
augDat <- data.frame(culcita_dat,resid=residuals(cmod_gA_L,type="pearson"),
fitted=fitted(cmod_gA_L))
ggplot(augDat,aes(x=ttt,y=resid))+geom_boxplot()+coord_flip()
```
### MCMCglmm
Within an `MCMCglmm` fit the chains are stored in two separate
matrices (`mcmc` objects, actually, but these can be treated
a lot like matrices) called `Sol` (fixed effects) and
`VCV` (variances and covariances). (Random effects are not
saved unless you set `pr=TRUE`.)
```{r MCMCglmm_chains1}
allChains <- as.mcmc(cbind(cmod_MG0$Sol,cmod_MG0$VCV))
```
The trace plots for the default parameters:
```{r MCMCglmm_diag}
plotTrace(allChains)
```
(`plotTrace` is from the `plotMCMC` package, and is slightly
prettier than the default trace plot you get from `xyplot(allChains)`,
or the trace+density plot you get from `plot(allChains)`, but they
all show the same information.)
This trace plot is terrible: it indicates at the very least that we
need a longer burn-in period (to discard the transient)
and a longer chain with more thinning (to make the chains
look more like white noise). It's also worrying that the `units`
variance appears to get stuck near zero (although here the
problem may be with the large transient spikes
distorting the scale and making the
rest of the chain look non-variable).
Running longer:
```{r MCMCglmm_diag2}
allChains2 <- as.mcmc(cbind(cmod_MG1$Sol,cmod_MG1$VCV))
plotTrace(allChains2,axes=TRUE,las=1)
```
This looks OK at first glance, but the magnitudes of the parameters (suppressed
by default in the `plotTrace` results, since we're initially
interested only in the pattern, not the magnitude, of the
parameters) reveal a problem: the values of the fixed-effect parameters are in the hundreds, when any values with $|\beta|>10$ suggest trouble.
The fundamental problem here is that, for both technical
and philosophical reasons, `MCMCglmm` always adds an
observation-level variance (referred to in `MCMCglmm`
as the "R-structure", for "residual structure"),
corresponding to an overdispersion term. For binary
data,
$$
\text{logit}(p_i) = m_i + \epsilon_i
$$
where $\epsilon_i \sim N(0,\sigma^2)$
then the *expected value* of $p_i$ is
the average of $\text{logistic}(m_i+\epsilon_i)$.
This is a nonlinear average, and so the mean of $p_i$
is *not* equal to $\text{logistic}(m_i)$ (this
phenomenon is called [Jensen's inequality](http://en.wikipedia.org/wiki/Jensen%27s_inequality); [Ruel and Ayres 1999](www.dartmouth.edu/~mpayres/pubs/Jensen.PDF) give an ecologist-friendly explanation).
If $\ell(x)=1/(1+\exp(-x))$ is the logistic function then
the mean of $p_i$ is approximately
$$
\overline{p_i} \approx \ell(m_i) +
\left.\frac{d^2\ell}{dx^2}\right|_{x=m_i} \cdot \sigma^2/2
$$
We won't bother to show the math of $d^2\ell(x)/dx^2$ here:
suffice it to say that this term varies from about +0.1 to -0.1
depending on the baseline value $m_i$, so that the estimates
of $m_i$ and the observation-level variance $\sigma^2$ are
confounded:
```{r d2logis,echo=FALSE}
## deviation of mean(L(m+eps)) from L(m), via a second-order expansion:
## E[L(m+eps)] ~ E[L(m) + L'(m)*eps + L''(m)*eps^2/2]
##             = L(m) + L''(m)*v/2
## L(x) = 1/(1+exp(-x)) = plogis(x)
## dlogis(x)  = exp(-x)/(1+exp(-x))^2 = exp(-x)*plogis(x)^2
## d2logis(x) = -exp(-x)*plogis(x)^2 + exp(-x)*2*plogis(x)*dlogis(x)
##            = exp(-x)*plogis(x)^2*(-1 + 2*dlogis(x)/plogis(x))
##            = dlogis(x)*(2*dlogis(x)/plogis(x)-1)
d2logis <- function(x) dlogis(x)*(2*dlogis(x)/plogis(x)-1)
if (FALSE) {
    ## check against a numerical derivative
    library(numDeriv)
    grad(dlogis,x=2)
    d2logis(2)
}
par(las=1,bty="l")
curve(d2logis(x),from=-3,to=3,
      xlab=~d^2*L/d*x^2)
abline(h=0,lty=3)
abline(v=0,lty=3)
```
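As a quick numerical check of the approximation above (a sketch with arbitrary values $m_i=-1$, $\sigma^2=1$, using the `d2logis()` function defined for the plot above): the simulated mean of $p_i$ lands much closer to the second-order approximation than to the naive value $\ell(m_i)$.
```{r jensen_check}
set.seed(101)
m <- -1; sigma2 <- 1
eps <- rnorm(1e5,sd=sqrt(sigma2))
c(simulated=mean(plogis(m+eps)),        ## true (simulated) mean of p_i
  naive=plogis(m),                      ## ignores the variance
  approx=plogis(m)+d2logis(m)*sigma2/2) ## second-order approximation
```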
```{r MCMCglmm_diag_sum,echo=FALSE}
bb <- cmod_MGsum$Gcovariances["block",]
bb2 <- summary(cmod_MG1)$Rcovariances["units",]
apvar <- round(bb2["post.mean"]/100)*100
## mean posterior
## 37.3 3.02 106 147
```
In the MCMC example here, the observation-level
variance has drifted to a very large value
($\sigma^2_R \approx `r apvar`$), and the initial
(variance-free) value of the baseline predation probability was low,
so the variance inflates the mean by a great deal and the
intercept parameter becomes strongly negative to compensate.
To deal with the confounding between these variables, we
fix the value of $\sigma^2_R$ as described above.
The results:
```{r MCMCglmm_diag3}
allChains3 <- as.mcmc(cbind(cmod_MG$Sol,cmod_MG$VCV))
## units variance is fixed to 1 anyway, we don't need
## to examine it
allChains3 <- allChains3[,colnames(allChains3)!="units"]
plotTrace(allChains3,axes=TRUE,las=1)
```
Now the magnitudes of the parameters look a little bit more sensible.
The chains for the variances are a little
bit "spiky" because the posterior densities have
long right tails, so it can be helpful to look at them on a log scale
(it's easier to see the details when the distribution is symmetric):
```{r MCMCglmm_diag_log}
vcChain <- log10(cmod_MG$VCV)[,1]
plotTrace(vcChain)
```
The `block` variance is still slightly worrying: the trace plot still shows
some pattern (we want it to be more or less indistinguishable
from white noise), and the
effective sample size (`r round(bb["eff.samp"])`) is much smaller
than our nominal sample size of `r length(vcChain)` (see the `eff.samp` column in the summary above).
We have one more thing to worry about.
If we extract `cmod_MG$Liab` (the conditional modes/latent variables) and plot their histogram:
```{r cmod_MG2_hist,echo=FALSE}
hfun <- function(x,breaks=50,...) {
    par(las=1,bty="l")
    hist(x,col="gray",main="",breaks=breaks,
         freq=FALSE,...)
}
hfun(cmod_MG$Liab,
     xlab="Value of conditional mode/latent variable")
```
Some of the latent variables are larger than 20. The likelihood of the linear predictor is essentially flat on $[20,\infty)$. If this happens for all observations belonging to a single term (fixed or random), then there is nothing constraining that term except the prior ($B$) or, for random effects, the posterior variance. Consequently, issues can arise without suitable prior or hyperprior specifications; this suggests that we might want to consider using the slightly stronger priors on $B$ specified above, when we were trying to deal with complete separation.
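A quick check of how much of the chain is affected:
```{r liab_check}
## proportion of latent-variable samples in the flat region
mean(cmod_MG$Liab > 20)
```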
In order to compare the estimates we get from `MCMCglmm` with those from the other models (which do not incorporate an observation-level random effect), we rescale the block variance to what it would be had we set the units variance to zero, and overlay the `lmer` estimate:
```{r scale_mcmcglmm}
## see eqs. 2.13, 2.14 of the MCMCglmm 'CourseNotes' vignette
## for further information & references:
c2 <- ((16 * sqrt(3))/(15 * pi))^2
cmod_MGsc <- cmod_MG
cmod_MGsc$VCV[,"block"] <-
cmod_MGsc$VCV[,"block"]/(1+c2*cmod_MGsc$VCV[,"units"])
cmod_MGsc$Sol[] <- cmod_MGsc$Sol/sqrt(1+c2*cmod_MGsc$VCV[,"units"])
cmod_MGsc$VCV[,1] <- sqrt(cmod_MGsc$VCV[,1])
```
```{r mcmcviolins,message=FALSE,echo=FALSE}
mcmcCompFun <- function(mcmcfit,lme4fit,whichvar=1,include.units=FALSE) {
    mcmc_ests <- rbind(
        melt(data.frame(type="fixed",
                        as.data.frame(mcmcfit$Sol),
                        check.names=FALSE)),
        melt(data.frame(type="random",
                        as.data.frame(mcmcfit$VCV),
                        check.names=FALSE)))
    ff <- fixef(lme4fit)
    aa <- as.data.frame(VarCorr(lme4fit))
    lme4res <- rbind(data.frame(type="fixed",variable=names(ff),
                                value=ff),
                     data.frame(type="random",
                                variable=aa$grp[whichvar],
                                value=aa$sdcor[whichvar]))
    ss2 <- summary(mcmcfit)
    fixres <- data.frame(type="fixed",variable=rownames(ss2$solutions),
                         value=ss2$solutions[,"post.mean"])
    vcvres <- data.frame(type="random",
                         variable=colnames(mcmcfit$VCV),
                         value=c(ss2$Gcovariances[,"post.mean"],
                                 ss2$Rcovariances[,"post.mean"]))
    MGres <- rbind(fixres,vcvres[whichvar,])
    allres <- rbind(data.frame(sum="MCMCglmm mean",MGres),
                    data.frame(sum="lme4 MLE",lme4res))
    list(mcmc_ests=mcmc_ests,allres=allres)
}
cmod_comp <- mcmcCompFun(cmod_MGsc,cmod_lme4_L)
v0 <- ggplot(subset(cmod_comp$mcmc_ests,variable!="units"),
             aes(variable,value))
v0 + geom_violin(fill="gray") +
    geom_point(data=cmod_comp$allres,aes(colour=sum),alpha=0.7)
```
The modes of the `MCMCglmm` results (the fattest part of the
"violins") agree well with
the `lme4` MLEs, for the most part: the `lme4` estimate of the among-block standard deviation is a bit lower than the
`MCMCglmm` estimate. (When we display the overall parameter
comparison below, we will show the MCMCglmm means instead.)
## Inference