---
title: "A very brief introduction to species distribution models in R"
author: "Jeff Oliver"
date: "`r format(Sys.time(), '%d %B, %Y')`"
---
```{r library-check, echo = FALSE, results = "hide"}
#| output: false
libs <- c("geodata", "predicts", "terra")
libs_missing <- libs[!(libs %in% installed.packages()[, "Package"])]
if (length(libs_missing) > 0) {
  for (l in libs_missing) {
    install.packages(l)
  }
}
```
Predicting suitable habitats of species from latitude and longitude coordinates
has become increasingly easy with a suite of R packages. This introductory
tutorial will show you how to turn your coordinate data into a range map.
#### Learning objectives
1. Install packages for species distribution modeling
2. Run species distribution models using a generalized linear model
3. Visualize model predictions on a map
Species distribution modeling is becoming an increasingly important tool to
understand how organisms might respond to current and future environmental
changes. There is an ever-growing number of approaches and literature for
species distribution models (SDMs), and you are encouraged to check out the
[Additional Resources](#additional-resources) section for a few of these
resources. The materials in the terra package are especially useful, and
Jeremy Yoder's introduction is another great place to start. In this tutorial,
we'll use publicly available data to build, evaluate, and visualize a
distribution model for the saguaro cactus.
***
## Getting started
Before we do anything, we will need to make sure we have the necessary software,
set up our workspace, download example data, and install additional packages
that are necessary to run the models and visualize their output.
### Necessary software
The packages necessary for species distribution modeling will likely require
additional, non-R software to work properly. Which software you need will
depend on your computer's operating system.
#### Linux
On Debian Linux systems, you will likely need to install the libgdal-dev
package. You can do this through the terminal via
`sudo apt-get install libgdal-dev`.
#### Windows
On Windows machines, you should probably install Rtools. You can find downloads
and instructions at
[https://cran.r-project.org/bin/windows/Rtools/](https://cran.r-project.org/bin/windows/Rtools/).
#### Mac OS
To use the terra package on Mac OS, you may need to install the Xcode Command
Line Tools. You can do this through a terminal via `xcode-select --install`.
### Workspace organization
First we need to set up our development environment. Open RStudio and create a
new project via:
+ File > New Project...
+ Select 'New Directory'
+ For the Project Type select 'New Project'
+ For Directory name, call it something like "r-sdm" (without the quotes)
+ For the subdirectory, select somewhere you will remember (like "My Documents"
or "Desktop")
We need to create two folders: 'data' will store the data we will be analyzing,
and 'output' will store the results of our analyses. In the RStudio console:
```{r workspace-setup, eval = FALSE}
dir.create(path = "data")
dir.create(path = "output")
```
It is good practice to keep input (i.e. the data) and output separate.
Furthermore, any work that ends up in the **output** folder should be completely
disposable. That is, the combination of data and the code we write should allow
us (or anyone else, for that matter) to reproduce any output.
### Example data
The data we are working with are observations of the
[saguaro, _Carnegiea gigantea_](https://en.wikipedia.org/wiki/Saguaro). We are
using a subset of records available from [GBIF](https://www.gbif.org/), the
Global Biodiversity Information Facility. You can download the data from
[https://tinyurl.com/saguaro-obs](https://tinyurl.com/saguaro-obs); save it in
the 'data' folder that you created in the step above.
### Install additional R packages
Next, there are _three_ additional R packages that will need to be installed:
+ terra
+ geodata
+ predicts
To install these, run:
```{r install-libraries, eval = FALSE}
install.packages("terra")
install.packages("geodata")
install.packages("predicts")
```
***
## Components of the model
The basic idea behind species distribution models is to take two sources of
information to model the conditions in which a species is expected to occur.
The two sources of information are:
1. Occurrence data: these are usually latitude and longitude geographic
coordinates where the species of interest has been observed. These are known as
'presence' data. Some models also make use of 'absence' data, which are
geographic coordinates of locations where the species is known to _not_ occur.
Absence data are a bit harder to come by, but are required by some modeling
approaches. For this lesson, we will use the occurrence data of the saguaro
that you downloaded earlier.
2. Environmental data: these are descriptors of the environment, and can
include abiotic measurements of temperature and precipitation as well as biotic
factors, such as the presence or absence of other species (like predators,
competitors, or food sources). In this lesson we will focus on the 19 abiotic
variables available from [WorldClim](http://www.worldclim.org/bioclim). Rather
than downloading the data from WorldClim, we'll use functions from the geodata
package to download these data (see below).
***
## Data and quality control
We'll start our script by loading the three libraries we need. And of course we
add a little bit of information at the very top of our script that says what
the script does and who is responsible!
```{r load-dependencies-quiet, echo = FALSE}
# Don't want terra message showing up in report
suppressPackageStartupMessages(library(terra))
suppressPackageStartupMessages(library(geodata))
suppressPackageStartupMessages(library(predicts))
```
```{r load-dependencies, eval = FALSE}
# Species distribution modeling for saguaro
# Jeff Oliver
# jcoliver@email.arizona.edu
# 2023-08-08
library(terra)
library(geodata)
library(predicts)
```
You might have seen some red messages print out to the screen. This is
normal, and as long as none of the messages include
"<span style="color:red">ERROR</span>", you can just hum right through those
messages. If loading the libraries _does_ result in an
<span style="color:red">ERROR</span> message, check to see that the libraries
were installed properly.
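If you are not sure whether the installation worked, one quick check (a small
sketch using base R's `installed.packages()`) is to ask whether all three
package names show up among your installed packages:
```{r check-installation, eval = FALSE}
# All three values should be TRUE if installation succeeded
c("terra", "geodata", "predicts") %in% rownames(installed.packages())
```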
Now that we have those packages loaded, we can download the bioclimatic
variable data with the `worldclim_global()` function from the geodata package:
```{r get-bioclim-data, warning = FALSE}
bioclim_data <- worldclim_global(var = "bio",
                                 res = 2.5,
                                 path = "data/")
```
We are giving the `worldclim_global()` function three critical pieces of
information:
1. `var = "bio"`: This tells `worldclim_global()` that we want to download all
19 of the bioclimatic variables, rather than individual temperature or
precipitation measurements.
2. `res = 2.5`: This is the resolution of the data we want to download; in this
case, it is 2.5 minutes of a degree. For other resolutions, you can check the
documentation by typing `?worldclim_global` into the console.
3. `path = "data/"`: Finally, this sets the location to which the files are
downloaded. In our case, it is the `data` folder we created at the beginning.
Note also that after the files are downloaded to the `data` folder, they are
read into memory and stored in the variable called `bioclim_data`.
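If you want to see what you just downloaded, printing the object or asking for
its layer names is a quick check (the layer names are also the predictor names
our model will use later):
```{r inspect-bioclim, eval = FALSE}
# Print a summary of the raster stack: there should be 19 layers, one per
# bioclimatic variable
bioclim_data
# Layer names for the 19 bioclimatic variables
names(bioclim_data)
```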
Now the climate data are in memory, and next we need to load in the
observations for the saguaro (these are the data we downloaded at the
[beginning of the lesson](#Example-data)):
```{r load-saguaro-data}
# Read in saguaro observations
obs_data <- read.csv(file = "data/Carnegiea-gigantea-GBIF.csv")
# Check the data to make sure it loaded correctly
summary(obs_data)
```
Notice that there are three `NA` values in the `latitude` and `longitude`
columns. Those records will not be of any use to us, so we can remove them from
our data frame:
```{r remove-NAs}
# Notice NAs - drop them before proceeding
obs_data <- obs_data[!is.na(obs_data$latitude), ]
# Make sure those NA's went away
summary(obs_data)
```
When we look at the `obs_data` data frame now there are no `NA` values, so we
are ready to proceed.
To make species distribution modeling more streamlined, it is useful to have an
idea of how widely our species is geographically distributed. We are going to
find general latitudinal and longitudinal boundaries and store this information
for later use. We use the `ceiling()` and `floor()` functions to round up and down,
respectively, to the nearest integer:
```{r geographic-extent}
# Determine geographic extent of our data
max_lat <- ceiling(max(obs_data$latitude))
min_lat <- floor(min(obs_data$latitude))
max_lon <- ceiling(max(obs_data$longitude))
min_lon <- floor(min(obs_data$longitude))
# Store boundaries in a single extent object
geographic_extent <- ext(x = c(min_lon, max_lon, min_lat, max_lat))
```
Before we do any modeling, it is also a good idea to run a reality check on
our occurrence data by plotting the points on a map.
```{r plot-occurrence-01}
# Download data with geodata's world function to use for our base map
world_map <- world(resolution = 3,
                   path = "data/")
# Crop the map to our area of interest
my_map <- crop(x = world_map, y = geographic_extent)
# Plot the base map
plot(my_map,
     axes = TRUE,
     col = "grey95")
# Add the points for individual observations
points(x = obs_data$longitude,
       y = obs_data$latitude,
       col = "olivedrab",
       pch = 20,
       cex = 0.75)
```
Looking good!
***
## Preparing data for modeling
Now that our occurrence data look OK, we can use the bioclimatic variables to
create a model. The first thing we want to do though is limit our consideration
to a reasonable geographic area. That is, for our purposes we are not looking
to model saguaro habitat suitability _globally_, but rather for the general
southwestern region of North America. We start by building an extent object that
is just a little larger than the extent of our saguaro observations. We then
crop the bioclimatic data to that larger sampling extent. As a reality check,
we finish by plotting the cropped version of the first bioclimatic variable.
```{r bioclim-crop}
# Make an extent that is 25% larger
sample_extent <- geographic_extent * 1.25
# Crop bioclim data to desired extent
bioclim_data <- crop(x = bioclim_data, y = sample_extent)
# Plot the first of the bioclim variables to check on cropping
plot(bioclim_data[[1]])
```
The plot shows the first bioclimatic variable (annual mean temperature in
degrees Celsius).
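If you would like to peek at another variable, you can index the stack by layer
number; in WorldClim's standard ordering, the twelfth layer is annual
precipitation:
```{r plot-other-bioclim, eval = FALSE}
# Plot the twelfth bioclimatic variable (annual precipitation, in mm)
plot(bioclim_data[[12]], main = names(bioclim_data)[12])
```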
In order to evaluate species distribution models, and really understand the
factors influencing where saguaros occur, we need to include some absence
points, those sites where saguaros are known to _not_ occur. The problem is, we
only have presence data for saguaros.
## The pseudo-absence point
One common workaround for coercing presence-only data for use with
presence/absence approaches is to use pseudo-absence, or "background" points.
While "pseudo-absence" sounds fancy, it really just means that one randomly
samples points from a given geographic area and treats them like locations
where the species of interest is absent. A great resource investigating the
influence and best practices of pseudo-absence points is a study by
Barbet-Massin _et al._ (2012) (see
[Additional Resources](#additional-resources) below for full details).
For our purposes, we are going to create a set of 1000 background (aka
pseudo-absence) points at random, and add these to our data. We are going to
use the bioclim data for determining spatial resolution of the points, and
restrict the sampling area to the general region of the observations of
saguaros.
```{r pseudo-absence-sampling}
# Set the seed for the random-number generator to ensure reproducible results
set.seed(20210707)
# Randomly sample 1,000 background (pseudo-absence) points
background <- spatSample(x = bioclim_data,
                         size = 1000,    # generate 1,000 pseudo-absence points
                         values = FALSE, # don't need values
                         na.rm = TRUE,   # don't sample from ocean
                         xy = TRUE)      # just need coordinates
# Look at first few rows of background
head(background)
```
We can also plot those points, to see how the random sampling looks.
```{r pseudo-absence-plot}
# Plot the base map
plot(my_map,
     axes = TRUE,
     col = "grey95")
# Add the background points
points(background,
       col = "grey30",
       pch = 1,
       cex = 0.75)
# Add the points for individual observations
points(x = obs_data$longitude,
       y = obs_data$latitude,
       col = "olivedrab",
       pch = 20,
       cex = 0.75)
```
We have observation data and pseudo-absence data and we need to first put them
into one data structure, then add in the climate data so we have a single data
frame with presence points and pseudo-absence points, _and_ climate data for
each of these points. It sounds like a lot, but after putting the two
coordinate datasets (observations and pseudo-absence points) together, the
terra package makes it easy to extract climate data. When we put the
observations and pseudo-absence points in the data frame, we need to make sure
we know which is which - that is, we need a column to indicate whether a pair
of latitude/longitude coordinates indicates a presence point or a (pseudo)
absence point. So we start by preparing the two datasets:
```{r prepare-coordinate-data}
# Pull out coordinate columns, x (longitude) first, then y (latitude) from
# saguaro data
presence <- obs_data[, c("longitude", "latitude")]
# Add column indicating presence
presence$pa <- 1
# Convert background data to a data frame
absence <- as.data.frame(background)
# Update column names so they match presence points
colnames(absence) <- c("longitude", "latitude")
# Add column indicating absence
absence$pa <- 0
# Join data into single data frame
all_points <- rbind(presence, absence)
# Reality check on data
head(all_points)
```
## Adding climate data
We are now ready to add climate data to the coordinate data sets. As mentioned
above, the terra package helps with this. We will use the `extract()` function,
which takes geographic coordinates and raster data as input, and pulls out
values in the raster data for each of the geographic coordinates.
```{r extract-bioclim}
bioclim_extract <- extract(x = bioclim_data,
                           y = all_points[, c("longitude", "latitude")],
                           ID = FALSE) # No need for an ID column
```
The process of extracting data results in a data frame with the climate data,
but that data frame doesn't have the coordinate information and, more
importantly, doesn't indicate which rows are presence points and which rows are
pseudo-absence points. So we need to join these extracted data back with our
`all_points` data frame. After we do this, we do not need the latitude and
longitude coordinates anymore, so we can drop those two columns (at least for
building the model).
```{r bind-obs-climate}
# Add the point and climate datasets together
points_climate <- cbind(all_points, bioclim_extract)
# Identify columns that are latitude & longitude
drop_cols <- which(colnames(points_climate) %in% c("longitude", "latitude"))
drop_cols # print the values as a reality check
# Remove the geographic coordinates from the data frame
points_climate <- points_climate[, -drop_cols]
```
## Training and testing data
Now that we have climate data for our presence and pseudo-absence points, we
need to take one more step. We are going to build our model using only part of
our data, and use the "set aside" data to evaluate model performance afterward.
This is known as separating our data into a **training** set (the data used to
_build_ the model) and a **testing** set (the data used to _evaluate_ the
model). We are going to reserve 20% of the data for testing, so we use the
`folds()` function from the predicts package to evenly assign each point
to a random group. To make sure we have a roughly representative sample of both
presence and pseudo-absence points, we use the `pa` column to tell R that our
data has these two sub-groups.
```{r data-split-01}
# Create vector indicating fold
fold <- folds(x = points_climate,
              k = 5,
              by = points_climate$pa)
```
We now can use the `fold` vector to split data into a training set and a
testing set. Values in the `fold` vector are the integers 1, 2, 3, 4, and 5,
each evenly sampled; we can see this with the `table()` function, which counts
how many times each of the fold values occurs:
```{r show-folds}
table(fold)
```
We will say that any observations in fold 1 will be testing data, and any
observations in the other folds (2, 3, 4, 5) will be training data. Note that
we _did not_ add a column for fold identity in the `points_climate` data frame,
but we can still use the information in the `fold` vector to separate training
data from testing data.
```{r data-split-02}
testing <- points_climate[fold == 1, ]
training <- points_climate[fold != 1, ]
```
## Model building
Now that the data are ready, it is (finally!) time to build our species
distribution model! For this lesson, we will use a generalized linear model -
it's not the best and it's not the worst, but it will work for us. If you want
more information comparing different approaches, see references in the
[Additional Resources](#additional-resources) section below, especially the
work of [Valavi et al. 2021](https://doi.org/10.1002/ecm.1486).
```{r bioclim-training}
# Build a model using training data
glm_model <- glm(pa ~ ., data = training, family = binomial())
```
OK, there's some odd syntax in that `glm()` code that warrants some
explanation, especially that `pa ~ .`. Here is the breakdown of the three
pieces of information passed to `glm()`:
+ `pa ~ .`: This is the formula we are analyzing, that is we are asking R to
predict the value in the `pa` column based on values in _all the remaining
columns_. That is, instead of listing the names of all the bioclimatic
variables (`pa ~ bio1 + bio2 + bio3...`), we can use the dot (`.`) to mean "all
the columns except the column to the left of the tilde (`~`)."
+ `data = training`: This tells R to use only the data stored in the `training`
data frame to build the model.
+ `family = binomial()`: Because the response variable, `pa`, only takes values
of 0 or 1, we need to indicate this to R.
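Before moving on, it can be worth a quick look at the fitted model itself;
`summary()` works on any `glm` object, though the exact coefficient values you
see will depend on your random pseudo-absence draw:
```{r inspect-glm-model, eval = FALSE}
# Look at the estimated coefficients and their standard errors
summary(glm_model)
```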
Now that we have built our model, we can use it to predict the habitat
suitability across the entire map. We do this with the `predict()` function,
passing the data to feed into the model (`bioclim_data`), the stored model
itself (`glm_model`), and finally what values we want as output
(`type = "response"`). This last argument (`type = "response"`) will return the
predicted probabilities from our model. After calculating the predicted values,
we can print them out with the `plot()` command.
```{r bioclim-predict}
# Get predicted values from the model
glm_predict <- predict(bioclim_data, glm_model, type = "response")
# Print predicted values
plot(glm_predict)
```
OK, it is a map, but what does it mean? This plot shows the probability of
occurrence of saguaros across the map. Note the values are all below 1.0.
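If you would rather check the exact range of predicted values than read them
off the legend, terra's `minmax()` function reports the minimum and maximum
cell values of the prediction raster:
```{r check-predicted-range, eval = FALSE}
# Minimum and maximum predicted probability of occurrence
minmax(glm_predict)
```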
We now take that model, and evaluate it using the observation data and the
pseudo-absence points we reserved for model _testing_. We then use this test to
establish a cutoff of occurrence probability to determine the boundaries of the
saguaro range.
```{r testing-model}
# Use testing data for model evaluation
glm_eval <- pa_evaluate(p = testing[testing$pa == 1, ],
                        a = testing[testing$pa == 0, ],
                        model = glm_model,
                        type = "response")
```
Here is another spot that warrants some additional explanation. We pass three
pieces of information to the `pa_evaluate()` function:
+ `p = testing[testing$pa == 1, ]`: In this case, `p` stands for presence data,
so we pass all the rows in the testing data that correspond to a location where
there was a saguaro present (that is, the value in the `pa` column is equal to
1).
+ `a = testing[testing$pa == 0, ]`: Similarly, `a` stands for absence data, so
we pass all the pseudo-absence rows in our dataset (i.e. all rows where the
value in the `pa` column is equal to 0).
+ `model = glm_model`: This is the model object we are evaluating. One way to
think about this is that the `glm_model` is a calculator that takes bioclimatic
data as input and provides probabilities as output.
With the `pa_evaluate()` function, we pass data for which we "know" the right
answer to these probability calculations. That is, the `glm_model` should
predict values close to 1 for the rows we pass to the `p` argument (because we
know that saguaros occur at those locations) and values close to 0 for the rows
we pass to the `a` argument. We use this information on model performance to
determine the probability value to use as a cutoff for saying whether a
particular location is suitable or unsuitable for saguaros.
```{r model-threshold}
# Determine minimum threshold for "presence"
glm_threshold <- glm_eval@thresholds$max_spec_sens
```
The `thresholds` element of `glm_eval` offers a number of means of determining
the threshold cutoff. Here we chose `max_spec_sens`, which sets "the threshold
at which the sum of the sensitivity (true positive rate) and specificity (true
negative rate) is highest." For more information, check out the documentation
for the `pa_evaluate()` function (`?pa_evaluate`, remember?).
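To see the other threshold options side by side (each is a column in the
`thresholds` element), you can print that element directly:
```{r show-all-thresholds, eval = FALSE}
# All candidate threshold values calculated by pa_evaluate()
glm_eval@thresholds
```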
And _finally_, we can use that threshold to paint a map with sites predicted to
be suitable for the saguaro!
```{r plot-sdm-threshold-bad-color}
# Plot base map
plot(my_map,
     axes = TRUE,
     col = "grey95")
# Only plot areas where probability of occurrence is greater than the threshold
plot(glm_predict > glm_threshold,
     add = TRUE,
     legend = FALSE,
     col = "olivedrab")
# And add those observations
points(x = obs_data$longitude,
       y = obs_data$latitude,
       col = "black",
       pch = "+",
       cex = 0.75)
# Redraw those country borders
plot(my_map, add = TRUE, border = "grey5")
```
Hmmm...that doesn't look right. It plotted a large portion of the map green.
Let's look at what we actually asked R to plot, that is, the value of
`glm_predict > glm_threshold`. So what is that?
```{r investigate-raster-comparison}
glm_predict > glm_threshold
```
The comparison of these two rasters produces another raster with values of only
FALSE or TRUE: `FALSE` when the value in a grid cell of `glm_predict` is less
than or equal to the value in `glm_threshold` and `TRUE` for cells with a value
greater than `glm_threshold`. Since there are **two** values in this comparison
(`FALSE` and `TRUE`), we need to update what we pass to the `col` parameter in
our second call to the `plot()` function. Instead of just passing a single
value, we provide a color for `FALSE` (`NA`, i.e. no color) and a color for `TRUE` (`"olivedrab"`):
```{r plot-sdm-threshold-good-color}
# Plot base map
plot(my_map,
     axes = TRUE,
     col = "grey95")
# Only plot areas where probability of occurrence is greater than the threshold
plot(glm_predict > glm_threshold,
     add = TRUE,
     legend = FALSE,
     col = c(NA, "olivedrab")) # <-- Update the values HERE
# And add those observations
points(x = obs_data$longitude,
       y = obs_data$latitude,
       col = "black",
       pch = "+",
       cex = 0.75)
# Redraw those country borders
plot(my_map, add = TRUE, border = "grey5")
```
A final note on our approach: the map we have drawn presents a categorical
classification of whether a particular point on the landscape will be suitable
or not for the species of interest. This classification relies quite heavily on
the value of the threshold (see `glm_threshold` and the documentation for
`pa_evaluate()`) _and_ the pseudo-absence points. Given that we used random
sampling to generate those pseudo-absence points, there is potential for
variation in the predicted range if you run this code more than once (try it:
if you re-run the code from the point of creating the pseudo-absence points,
you are almost guaranteed a different map). There are a number of approaches
to dealing with this variation, and the paper by
[Barbet-Massin et al. (2012)](https://dx.doi.org/10.1111/j.2041-210X.2011.00172.x)
is a great resource. I'll leave it as homework for you to determine which
approach is most appropriate here!
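One way to get a feel for this variation is to repeat the pseudo-absence draw
several times, refit the model each time, and average the predicted
probabilities. The sketch below is illustrative only (it is not part of the
lesson's final script): the number of repetitions, the object names, and the
decision to average are all choices you would want to think through for a real
analysis.
```{r pseudo-absence-variation, eval = FALSE}
# Illustrative sketch: average predictions over several pseudo-absence draws
n_reps <- 5
prediction_list <- list()
for (i in 1:n_reps) {
  # New pseudo-absence draw (no fixed seed, so each draw differs)
  background_i <- spatSample(x = bioclim_data,
                             size = 1000,
                             values = FALSE,
                             na.rm = TRUE,
                             xy = TRUE)
  absence_i <- as.data.frame(background_i)
  colnames(absence_i) <- c("longitude", "latitude")
  absence_i$pa <- 0
  all_points_i <- rbind(presence, absence_i)
  # Extract climate data and fit a model on all points (no train/test split
  # here, just to keep the sketch short)
  climate_i <- extract(x = bioclim_data,
                       y = all_points_i[, c("longitude", "latitude")],
                       ID = FALSE)
  model_data_i <- cbind(pa = all_points_i$pa, climate_i)
  glm_i <- glm(pa ~ ., data = model_data_i, family = binomial())
  prediction_list[[i]] <- predict(bioclim_data, glm_i, type = "response")
}
# Average the predicted probabilities across draws and map the result
mean_prediction <- mean(rast(prediction_list))
plot(mean_prediction)
```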
Our final script, generating the model, determining the threshold, and
visualizing the results:
```{r final-script, eval = FALSE}
# Species distribution modeling for saguaro
# Jeff Oliver
# jcoliver@email.arizona.edu
# 2023-08-08

# Load required libraries
library(terra)
library(geodata)
library(predicts)

# Download data for the bioclimatic variables
bioclim_data <- worldclim_global(var = "bio",
                                 res = 2.5,
                                 path = "data/")

# Read in saguaro observations
obs_data <- read.csv(file = "data/Carnegiea-gigantea-GBIF.csv")

# Check the data to make sure it loaded correctly
summary(obs_data)

# Notice NAs - drop them before proceeding
obs_data <- obs_data[!is.na(obs_data$latitude), ]

# Make sure those NA's went away
summary(obs_data)

# Determine geographic extent of our data
max_lat <- ceiling(max(obs_data$latitude))
min_lat <- floor(min(obs_data$latitude))
max_lon <- ceiling(max(obs_data$longitude))
min_lon <- floor(min(obs_data$longitude))
geographic_extent <- ext(x = c(min_lon, max_lon, min_lat, max_lat))

# Download a base map and crop it to our area of interest (used for plotting)
world_map <- world(resolution = 3,
                   path = "data/")
my_map <- crop(x = world_map, y = geographic_extent)

# Make an extent that is 25% larger for sampling
sample_extent <- geographic_extent * 1.25

# Crop bioclim data to desired extent
bioclim_data <- terra::crop(x = bioclim_data, y = sample_extent)

# Set the seed for the random-number generator to ensure reproducible results
set.seed(20210707)

# Randomly sample 1,000 background (pseudo-absence) points
background <- spatSample(x = bioclim_data,
                         size = 1000,
                         values = FALSE, # don't need values
                         na.rm = TRUE,   # don't sample from ocean
                         xy = TRUE)      # just need coordinates

# Pull out coordinate columns, x (longitude) first, then y (latitude) from
# saguaro data
presence <- obs_data[, c("longitude", "latitude")]
# Add column indicating presence
presence$pa <- 1

# Convert background data to a data frame
absence <- as.data.frame(background)
# Update column names so they match presence points
colnames(absence) <- c("longitude", "latitude")
# Add column indicating absence
absence$pa <- 0

# Join data into single data frame
all_points <- rbind(presence, absence)

# Extract climate data for all those points
bioclim_extract <- extract(x = bioclim_data,
                           y = all_points[, c("longitude", "latitude")],
                           ID = FALSE) # No need for an ID column

# Add the point and climate datasets together
points_climate <- cbind(all_points, bioclim_extract)

# Identify columns that are latitude & longitude
drop_cols <- which(colnames(points_climate) %in% c("longitude", "latitude"))
drop_cols # print the values as a reality check

# Remove the geographic coordinates from the data frame
points_climate <- points_climate[, -drop_cols]

# Create vector indicating fold to separate training and testing data
fold <- folds(x = points_climate,
              k = 5,
              by = points_climate$pa)

# Separate data into training and testing sets
testing <- points_climate[fold == 1, ]
training <- points_climate[fold != 1, ]

# Build a model using training data
glm_model <- glm(pa ~ ., data = training, family = binomial())

# Get predicted values from the model
glm_predict <- predict(bioclim_data, glm_model, type = "response")

# Use testing data for model evaluation
glm_eval <- pa_evaluate(p = testing[testing$pa == 1, ],
                        a = testing[testing$pa == 0, ],
                        model = glm_model,
                        type = "response")

# Determine minimum threshold for "presence"
glm_threshold <- glm_eval@thresholds$max_spec_sens

# Plot the results
# Plot base map
plot(my_map,
     axes = TRUE,
     col = "grey95")

# Only plot areas where probability of occurrence is greater than the threshold
plot(glm_predict > glm_threshold,
     add = TRUE,
     legend = FALSE,
     col = c(NA, "olivedrab")) # only color for second element ("present")

# And add those observations
points(x = obs_data$longitude,
       y = obs_data$latitude,
       col = "black",
       pch = "+",
       cex = 0.75)

# Redraw those country borders
plot(my_map, add = TRUE, border = "grey5")
```
## Advanced: Forecasting distributions
Now that you have a species distribution model, you can make predictions about
the distribution under different climate scenarios. Let us pause for a moment
and be very clear about this approach. With all kinds of math wizardry on our
side, we are attempting to predict the future. Which means _any_ predictions we
make should be interpreted with extreme caution. If you are going to go about
an approach such as this, it would be wise to run a variety of different models
and a variety of different climate scenarios. There are links to such resources
in the [Additional Resources](#additional-resources) section, below.
### Forecast climate data
We will need to download climate data for the time period of interest. For the
purposes of this lesson, we will look at climate projections for the years 2061-
2080. Note there are several different forecast climate models and you can read
about the different models at
[McSweeney et al. 2015](https://link.springer.com/article/10.1007/s00382-014-2418-8)
and on the [CMIP6 page](https://pcmdi.llnl.gov/CMIP6/).
We can download data for one model, using the `cmip6_world()` function from the
geodata package. The syntax is very similar to the syntax we used for the
`worldclim_global()` function above. We are using one climate model and you can
see which models are available by looking at the documentation for
`cmip6_world()` (again, this can be done via `?cmip6_world` in the
console). This is a pretty large file (~440 MB), so it might take a minute or
two to download.
```{r download-cmip6-data}
# Download predicted climate data
forecast_data <- cmip6_world(model = "MPI-ESM1-2-HR",
                             ssp = "245",
                             time = "2061-2080",
                             var = "bioc",
                             res = 2.5,
                             path = "data")
```
In order to use these predicted climate data with the model that we built
above, we need to be sure that the _names_ of the data are identical (R is
quite pedantic about this). You can print out the names of each of the layers
in the datasets with the `names()` function (e.g. `names(bioclim_data)` and
`names(forecast_data)`) and see that they differ. Because our _model_ was built
using the names in the `bioclim_data`, we will update the names in the forecast
data:
```{r name-forecast}
# Use names from bioclim_data
names(forecast_data) <- names(bioclim_data)
```
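If you want to reassure yourself that the renaming worked, a quick check is to
compare the two sets of names directly:
```{r check-forecast-names, eval = FALSE}
# Should return TRUE now that the forecast layers use the bioclim names
all(names(forecast_data) == names(bioclim_data))
```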
Just like we did with our contemporary climate data, we can crop our forecast
climate data to the geographic extent of interest. Here we use that sampling
extent object to reduce the area of the forecast climate data to a little
beyond the current distribution of the saguaro.
```{r crop-forecast}
# Crop forecast data to desired extent
forecast_data <- crop(x = forecast_data, y = sample_extent)
```
### Get out the crystal ball
Now that we have the forecast data, we can apply the model we built above,
`glm_model`, to the forecast climate data:
```{r predict-forecast-1}
# Predict presence from model with forecast data
forecast_presence <- predict(forecast_data, glm_model, type = "response")
```
If you want to look at the predicted probabilities of occurrence, you can
modify the code we used above.
```{r map-predicted-probs-1}
# Plot base map
plot(my_map,
     axes = TRUE,
     col = "grey95")
# Add model probabilities
plot(forecast_presence, add = TRUE)
# Redraw those country borders
plot(my_map, add = TRUE, border = "grey5")
# Add original observations
points(x = obs_data$longitude,
       y = obs_data$latitude,
       col = "black",
       pch = "+",
       cex = 0.75)
```
We can also map our predictions for presence / absence, using the same
threshold that we did for predictions based on current climate data.
```{r map-predicted-pa}
# Plot base map
plot(my_map,
     axes = TRUE,
     col = "grey95")
# Only plot areas where probability of occurrence is greater than the threshold
plot(forecast_presence > glm_threshold,
     add = TRUE,
     legend = FALSE,
     col = c(NA, "olivedrab"))
# And add those observations
points(x = obs_data$longitude,
       y = obs_data$latitude,
       col = "black",
       pch = "+",
       cex = 0.6)
# Redraw those country borders
plot(my_map, add = TRUE, border = "grey5")
```
Wow, things look great for saguaros under this climate forecast. Try
downloading other climate models to see how predictions differ. And remember to
interpret these results cautiously.
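As a starting point, here is roughly what downloading and mapping a second
forecast might look like. The model name (`"MIROC6"`) and the higher-emissions
pathway (`ssp = "585"`) are just examples of other options; check
`?cmip6_world` for the combinations that are actually available.
```{r forecast-alternative, eval = FALSE}
# Download a second forecast (different climate model and emissions scenario)
forecast_data_2 <- cmip6_world(model = "MIROC6",
                               ssp = "585",
                               time = "2061-2080",
                               var = "bioc",
                               res = 2.5,
                               path = "data")
# Repeat the renaming, cropping, and prediction steps for the new forecast
names(forecast_data_2) <- names(bioclim_data)
forecast_data_2 <- crop(x = forecast_data_2, y = sample_extent)
forecast_presence_2 <- predict(forecast_data_2, glm_model, type = "response")
# Quick look at the thresholded prediction for this scenario
plot(forecast_presence_2 > glm_threshold,
     legend = FALSE,
     col = c(NA, "olivedrab"))
```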
***
## Additional resources
+ The creators of the terra package have an excellent, in-depth
[guide to species distribution modeling in R](https://rspatial.org/sdm/index.html)
+ A lighter-weight introduction to [species distribution models in R](http://www.molecularecologist.com/2013/04/species-distribution-models-in-r/)
+ A really nice [comparison among different SDM methods](https://doi.org/10.1002/ecm.1486)
+ [Fast and flexible Bayesian species distribution modelling using Gaussian processes](http://onlinelibrary.wiley.com/doi/10.1111/2041-210X.12523/pdf)
+ [Run a range of species distribution models](https://rdrr.io/cran/biomod2/man/BIOMOD_Modeling.html)
+ [SDM polygons on a Google map](https://rdrr.io/rforge/dismo/man/gmap.html)
+ [R package 'maxnet' for functionality of Java maxent package](https://cran.r-project.org/web/packages/maxnet/maxnet.pdf)
+ A [study on the effect of pseudo-absences in SDMs (Barbet-Massin et al. 2012)](https://dx.doi.org/10.1111/j.2041-210X.2011.00172.x)
+ A [PDF version](https://jcoliver.github.io/learn-r/011-species-distribution-models.pdf) of this lesson