---
title: "Predict House Sales Prices in Ames, Iowa"
output: html_document
---
The [Ames Housing dataset](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data) was downloaded from [Kaggle](https://www.kaggle.com/c/house-prices-advanced-regression-techniques). It is a playground competition dataset, and my task is to predict house prices from house-level features using a multiple linear regression model in R.
```{r global_options, include=FALSE}
knitr::opts_chunk$set(echo=FALSE, warning=FALSE, message=FALSE)
```
### Prepare the data
```{r}
library(Hmisc)   # general data-exploration helpers
library(psych)   # pairs.panels() for the scatterplot matrix below
library(car)     # regression diagnostics
```
```{r}
house <- read.csv('house.csv')
head(house)
```
Next, split the data into a training set and a testing set.
```{r}
set.seed(2017)
split <- sample(seq_len(nrow(house)), size = floor(0.75 * nrow(house)))
train <- house[split, ]
test <- house[-split, ]
dim(train)
```
The training set contains 1095 observations and 81 variables. To start, I will hypothesize the following subset of the variables as potential predictors.
* SalePrice - the property's sale price in dollars. This is the target variable that I am trying to predict.
* OverallCond - Overall condition rating
* YearBuilt - Original construction date
* YearRemodAdd - Remodel date
* BedroomAbvGr - Number of bedrooms above basement level
* GrLivArea - Above grade (ground) living area square feet
* KitchenAbvGr - Number of kitchens above grade
* TotRmsAbvGrd - Total rooms above grade (does not include bathrooms)
* GarageCars - Size of garage in car capacity
* PoolArea - Pool area in square feet
* LotArea - Lot size in square feet
Construct a new data frame consisting solely of these variables.
```{r}
train <- subset(train, select=c(SalePrice, LotArea, PoolArea, GarageCars, TotRmsAbvGrd, KitchenAbvGr, GrLivArea, BedroomAbvGr, YearRemodAdd, YearBuilt, OverallCond))
head(train)
```
Report variables with missing values.
```{r}
sapply(train, function(x) sum(is.na(x)))
```
Summary statistics
```{r}
summary(train)
```
Before fitting my regression model, I want to investigate how the variables are related to one another.
```{r}
pairs.panels(train, col='red')
```
We can see that some of the variables are quite skewed. For a good regression model, the variables should be approximately normally distributed, and the predictors should be independent of one another rather than correlated. "GrLivArea" and "TotRmsAbvGrd" clearly have a high correlation, so I will need to deal with that.
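As a quick numeric check (a minimal sketch, not part of the original analysis), `skew()` from the already-loaded psych package quantifies the skewness, and a log transform is one common remedy for a right-skewed target:
```{r}
# Sketch: skew() comes from the psych package loaded above; values far from 0
# indicate skewness. A log transform often brings SalePrice closer to normal.
skew(train$SalePrice)
skew(log(train$SalePrice))
```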
### Fit the linear model
```{r}
fit <- lm(SalePrice ~ LotArea + PoolArea + GarageCars + TotRmsAbvGrd + KitchenAbvGr + GrLivArea + BedroomAbvGr + YearRemodAdd + YearBuilt + OverallCond, data=train)
summary(fit)
```
Interpreting the output:
* An R-squared of 0.737 tells us that approximately 74% of the variation in sale price can be explained by my model.
* The F-statistic and its p-value give the overall significance test of my model.
* The residual standard error gives an idea of how far the observed sale prices are from the predicted (fitted) sale prices.
* The intercept is the estimated sale price for a house with all the other variables at zero; it does not have a meaningful interpretation here.
* The slope for "GrLivArea" (7.598e+01) is the effect of above-grade living area in square feet on sale price, adjusting or controlling for the other variables; i.e., we associate an increase of 1 square foot in "GrLivArea" with an increase of $75.98 in sale price, holding the other variables fixed (see the sketch below).
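To make that slope concrete, here is a minimal sketch that pulls the fitted coefficient out of the model object (coefficient names are as reported by `summary(fit)`):
```{r}
# Sketch: the estimated dollar effect of 100 extra square feet of
# above-grade living area, holding the other predictors fixed.
coef(fit)["GrLivArea"] * 100
```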
### Stepwise Procedure
Use backward elimination to remove, one at a time, the predictor with the largest p-value above 0.05. In this case, I will remove "PoolArea" first and then fit the model again.
```{r}
fit <- lm(SalePrice ~ LotArea + GarageCars + TotRmsAbvGrd + KitchenAbvGr + GrLivArea + BedroomAbvGr + YearRemodAdd + YearBuilt + OverallCond, data=train)
summary(fit)
```
After eliminating "PoolArea", R-squared is almost identical and adjusted R-squared has slightly improved. At this point, I think I can start building the model.
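As an aside (a sketch, not part of the original workflow), base R's `step()` automates this kind of elimination, although it selects by AIC rather than by p-values:
```{r}
# Sketch: AIC-based backward elimination as a cross-check of the manual step.
fit_aic <- step(fit, direction = 'backward', trace = 0)
formula(fit_aic)
```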
However, as we saw earlier, two variables, "GrLivArea" and "TotRmsAbvGrd", are highly correlated. The multicollinearity between them means that we should not directly interpret the "GrLivArea" coefficient as the effect of "GrLivArea" on sale price adjusting for "TotRmsAbvGrd"; the two effects are somewhat bound together.
```{r}
cor(train$GrLivArea, train$TotRmsAbvGrd, method = 'pearson')
```
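A standard numeric check (a sketch using the car package already loaded above) is the variance inflation factor; values well above 5, or 10 by a looser rule of thumb, flag problematic multicollinearity:
```{r}
# Sketch: variance inflation factors for the current model;
# vif() comes from the car package loaded at the top.
vif(fit)
```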
### Create a confidence interval for the model coefficients
```{r}
confint(fit, level = 0.95)
```
For example, in the second model, the estimated slope for "GrLivArea" is 75.43. I am 95% confident that the true slope is between 66.42 and 84.43.
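A single coefficient's interval can also be pulled out directly (a small sketch using base R's `confint()`):
```{r}
# Sketch: the 95% confidence interval for GrLivArea alone.
confint(fit, 'GrLivArea', level = 0.95)
```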
### Check the diagnostic plots for the model
```{r}
# plot() on an lm object produces four diagnostic plots:
# Residuals vs Fitted, Normal Q-Q, Scale-Location, Residuals vs Leverage.
plot(fit)
```
* Residuals vs. Fitted: the relationship between the predictor variables and the outcome variable looks approximately linear, with three extreme cases (outliers) flagged.
* Normal Q-Q: it looks like I don't have to be too concerned, although the two observations numbered 524 and 1299 look a little off.
* Scale-Location: this shows the spread of the residuals in relation to the fitted sale price. Most of the houses in the data are in the lower and median price ranges; the higher the price, the fewer the observations.
* Residuals vs. Leverage: this plot helps us find influential cases, if any. Not all outliers are influential in linear regression analysis, and it looks like none of the outliers in my model are influential.
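As a numeric complement (a sketch, not in the original analysis), Cook's distance quantifies influence; a common rule of thumb flags points above 4/n:
```{r}
# Sketch: the largest Cook's distances; compare against 4 / nrow(train).
head(sort(cooks.distance(fit), decreasing = TRUE))
```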
### Testing the prediction model
```{r}
test <- subset(test, select=c(SalePrice, LotArea, GarageCars, TotRmsAbvGrd, KitchenAbvGr, GrLivArea, BedroomAbvGr, YearRemodAdd, YearBuilt, OverallCond))
prediction <- predict(fit, newdata = test)
```
Look at the first few predicted values and compare them to the values of SalePrice in the test data set.
```{r}
head(prediction)
```
```{r}
head(test$SalePrice)
```
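For an easier eyeball comparison, the two can be lined up side by side (a small sketch):
```{r}
# Sketch: predicted vs. actual sale prices for the first few test houses.
head(data.frame(predicted = prediction, actual = test$SalePrice))
```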
Finally, calculate the value of R-squared for the prediction model on the test data set. In general, R-squared is a metric for evaluating the goodness of fit of a model; higher is better, with 1 being the best.
```{r}
SSE <- sum((test$SalePrice - prediction) ^ 2)            # sum of squared errors
SST <- sum((test$SalePrice - mean(test$SalePrice)) ^ 2)  # total sum of squares
1 - SSE / SST                                            # out-of-sample R-squared
```
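As a companion metric (a sketch, not part of the original script), RMSE reports the typical prediction error in the same dollar units as SalePrice:
```{r}
# Sketch: root-mean-squared error of the predictions on the test set.
sqrt(mean((test$SalePrice - prediction) ^ 2))
```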