```{r 06_setup, include=FALSE}
knitr::opts_chunk$set(echo=TRUE, eval=FALSE)
```
# (PART) Regression: Building Flexible Models {-}
# KNN Regression and the Bias-Variance Tradeoff
## Learning Goals {-}
- Clearly describe / implement by hand the KNN algorithm for making a regression prediction
- Explain how the number of neighbors relates to the bias-variance tradeoff
- Explain the difference between parametric and nonparametric methods
- Explain how the curse of dimensionality relates to the performance of KNN
<br>
Slides from today are available [here](https://docs.google.com/presentation/d/1aNEw_cF4Vj8whvFovEtuU8Eh-qxTejK4DXoiTJGb3Rc/edit?usp=sharing).
<br><br><br>
## KNN models in `caret` {-}
To build KNN models in `caret`, first load the package and set the seed for the random number generator to ensure reproducible results:
```{r}
library(caret)
set.seed(___) # Pick your favorite number to fill in the parentheses
```
Then adapt the following code:
```{r}
knn_mod <- train(
  y ~ .,
  data = ___,
  preProcess = "scale",
  method = "knn",
  tuneGrid = data.frame(k = seq(___, ___, by = ___)),
  trControl = trainControl(method = "cv", number = ___, selectionFunction = ___),
  metric = "MAE",
  na.action = na.omit
)
```
| Argument | Meaning |
|:-------------|:-----------------|
| `y ~ .` | Model formula for specifying response and predictors |
| `data` | Sample data |
| `preProcess` | `"scale"` indicates that predictor variables should be scaled to have the same variance (Why might this be important? Should we always do this?) |
| `method` | `"knn"` implements KNN regression (and classification) |
| `tuneGrid` | A mini-dataset (`data.frame`) of tuning parameters. $k$ is the KNN neighborhood size. Supply a sequence as `seq(begin, end, by = size of step)`. |
| `trControl` | Use cross-validation to estimate test performance for each model fit. The process used to pick a final model from among these is indicated by `selectionFunction`, with options including `"best"` and `"oneSE"`. |
| `metric` | Evaluate and compare competing models with respect to their CV-MAE. |
| `na.action` | Set `na.action = na.omit` to prevent errors if the data has missing values. |
<br>
Note: When including categorical predictors, `caret` automatically creates the corresponding indicator variables to allow Euclidean distance to still be used. You'll think about the pros/cons of this in the exercises.
An alternative to creating indicator variables is to use **Gower distance**. We'll explore Gower distance more when we talk about clustering.
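For intuition, here's a small illustration of the indicator-variable idea (a toy sketch we're adding, using base R's `model.matrix()`; `caret`'s internal processing is analogous, though not identical):
```{r}
# Toy example (not from the notes): a factor predictor expanded into an
# indicator variable, similar to what caret does before computing distances
toy <- data.frame(Private = factor(c("Yes", "No", "Yes")))
model.matrix(~ Private, data = toy)
```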
<br>
**Identifying the "best" KNN model**
The "best" model in the sequence of models fit is defined relative to the chosen `selectionFunction` and `metric`.
```{r}
# Plot CV-estimated test performance versus the tuning parameter
plot(knn_mod)
# CV metrics for each model
knn_mod$results
# Identify which tuning parameter is "best"
knn_mod$bestTune
# Get information from all CV iterations for the "best" model
knn_mod$resample
# Use the best model to make predictions
# newdata should be a data.frame with required predictors
predict(knn_mod, newdata = ___)
```
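For example, here is a minimal sketch of that last step (it assumes a fitted `knn_mod` and a dataset like `college_clean` from the exercises below):
```{r}
# Sketch: predict the response for the first few cases in the data
predict(knn_mod, newdata = head(college_clean))
```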
<br><br><br>
## Exercises {-}
**You can download a template RMarkdown file to start from [here](template_rmds/06-knn-bvt.Rmd).**
We'll explore KNN regression using the `College` dataset in the `ISLR` package (install it with `install.packages("ISLR")` in the Console). You can use `?College` in the Console to look at the data codebook.
```{r}
library(caret)
library(ggplot2)
library(dplyr)
library(readr)
library(ISLR)
data(College)
# A little data cleaning
college_clean <- College %>%
  mutate(school = rownames(College)) %>%
  filter(Grad.Rate <= 100) # Remove one school with grad rate of 118%
rownames(college_clean) <- NULL # Remove school names as row names
```
### Hello, how are things? {-}
We're about a week and a half into our last module of the year - how are you feeling? What's on your mind?
### Exercise 1: Bias-variance tradeoff warmup {-}
a. Think back to the LASSO algorithm, which depends upon the tuning parameter $\lambda$.
- For which values of $\lambda$ (small or large) will LASSO be the most biased, and why?
- For which values of $\lambda$ (small or large) will LASSO be the most variable, and why?
b. The bias-variance tradeoff also comes into play when comparing across algorithms, not just within algorithms. Consider LASSO vs. least squares:
- Which will tend to be more biased?
- Which will tend to be more variable?
- When will LASSO outperform least squares in the bias-variance tradeoff?
### Exercise 2: Impact of distance metric {-}
Consider the 1-nearest neighbor algorithm to predict `Grad.Rate` on the basis of two predictors: `Apps` and `Private`. Let `Yes` for `Private` be represented with the value 1 and `No` with 0.
a. We have a test case: a private school that received 13,530 applications. Suppose that we have the tiny 2-case training set below. What would the 1-nearest neighbor prediction be using Euclidean distance?
```{r}
college_clean %>%
  filter(school %in% c("Princeton University", "SUNY at Albany")) %>%
  select(Apps, Private, Grad.Rate, school)
```
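If you'd like to check your by-hand arithmetic, here's one way to compute the Euclidean distances in R (a sketch we're adding; it assumes the 2-case training set above and the 0/1 coding of `Private`):
```{r}
# Sketch: Euclidean distance from the test case (13,530 apps, private = 1)
# to each of the two training cases
train_2 <- college_clean %>%
  filter(school %in% c("Princeton University", "SUNY at Albany")) %>%
  mutate(PrivateNum = ifelse(Private == "Yes", 1, 0))

sqrt((train_2$Apps - 13530)^2 + (train_2$PrivateNum - 1)^2)
```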
b. Do you have any concerns about the resulting prediction? Based on this, comment on the impact of the distance metric chosen on KNN performance. How might you change the distance calculation (or correspondingly rescale the data) to generate a more sensible prediction in this situation?
### Exercise 3: Implementing KNN in `caret` {-}
Adapt our general KNN code to "fit" a set of KNN models with the following specifications:
- Use the predictors `Private`, `Top10perc` (% of new students from top 10% of high school class), and `S.F.Ratio` (student/faculty ratio).
- Use 8-fold CV. (Why 8? Take a look at the sample size.)
- Use mean absolute error (MAE) to select a final model.
- Select the simplest model for which the metric is within one standard error of the best metric.
- Use a sequence of $K$ values from 1 to 100 in increments of 5.
- Should you use `preProcess = "scale"`?
After adapting the code (but before inspecting any output), answer the following conceptual questions:
- Explain your choice for using or not using `preProcess = "scale"`.
- Why is "fit" in quotes? Does KNN actually fit a model as part of training? (This feature of KNN is known as "lazy learning".)
- How is test MAE estimated? What are the steps of the KNN algorithm with cross-validation?
- Draw a picture of how you expect test MAE to vary with $K$. In terms of the bias-variance tradeoff, why do you expect the plot to look this way?
```{r}
set.seed(2021)
knn_mod <- train(
)
```
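After you've made your own attempt, here is one possible adaptation of the general code (a sketch reflecting one reading of the specifications above; treat it as a reference to compare against, not the only answer):
```{r}
# Sketch: one possible completion of the exercise specifications
set.seed(2021)
knn_mod <- train(
  Grad.Rate ~ Private + Top10perc + S.F.Ratio,
  data = college_clean,
  preProcess = "scale",  # reasonable here: predictors are on very different scales
  method = "knn",
  tuneGrid = data.frame(k = seq(1, 100, by = 5)),
  trControl = trainControl(method = "cv", number = 8, selectionFunction = "oneSE"),
  metric = "MAE",
  na.action = na.omit
)
```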
### Exercise 4: Inspecting the results {-}
- Use `plot(knn_mod)` to verify your expectations about the plot of test MAE vs. $K$.
- Contextually interpret the test MAE.
- How else could you evaluate the KNN model?
- Does your KNN model help you understand which predictors of graduation rate are most important? Why or why not?
### Exercise 5: Curse of dimensionality {-}
Just as with parametric models, we could keep going and add more and more predictors. However, the KNN algorithm is known to suffer from the "curse of dimensionality". Why? **Hint:** First do a quick Google search of this new idea.
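To make the idea concrete, here's a small simulation sketch (an added illustration, not part of the original exercise): with a fixed sample size, as the number of predictors $p$ grows, the data become sparse and even the *nearest* neighbor is far away.
```{r}
# Sketch: average nearest-neighbor distance among 100 uniform points in [0,1]^p
set.seed(2021)
sapply(c(1, 2, 10, 50, 100), function(p) {
  X <- matrix(runif(100 * p), nrow = 100)  # 100 points in p dimensions
  D <- as.matrix(dist(X))                  # pairwise Euclidean distances
  diag(D) <- Inf                           # exclude each point's distance to itself
  mean(apply(D, 1, min))                   # average distance to the nearest neighbor
})
```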