---
title: "SmartZoning® Documentation"
subtitle: "Leveraging Permitting and Zoning Data to Predict Upzoning Pressure in Philadelphia"
authors: "Laura Frances and Nissim Lebovits"
date: today
project:
  type: website
  output-dir: docs
format:
  html:
    embed-resources: true
    toc: true
    toc_float: true
    theme: cosmo
    code-fold: true
    code-summary: "Show the code"
    number-sections: true
    fontsize: "11"
editor: source
editor_options:
  markdown:
    wrap: sentence
execute:
  warning: false
  error: false
  message: false
  echo: true
  cache: false
---
*This model and web application prototype were developed for MUSA508, a Master of Urban Spatial Analytics class focused on predictive public policy analytics at the University of Pennsylvania.*
## Background
Growth is critical for a city to continue to densify and modernize. Its benefits range from increased public transit use to a built environment updated to be more climate resilient. Growth fuels development and vice versa. Philadelphia is the [6th largest city](https://www.census.gov/data/tables/time-series/demo/popest/2020s-total-cities-and-towns.html) in the US, yet it ranks only [42nd in cost of living](https://www.axios.com/2023/11/09/lowest-highest-cost-of-living-cities-us-map), so growth is often met with concern. Many residents and preservationists ask: Will growth deteriorate the city's best features? Will modernization make the city unaffordable to longtime residents?
Balancing growth with affordability is a precarious task for Philadelphia. To date, politicians have favored making exceptions for developers parcel by parcel rather than championing a citywide smart growth strategy. Zoning advocates need better data-driven tools to broadcast the benefits of smart growth, a planning framework that aims to maximize walkability and transit use and avoid sprawl, and to demonstrate how parcel-by-parcel ("spot") zoning creates unmet development pressure that can drive up costs. Designed to support smart growth advocacy, SmartZoning is a prototype web tool that identifies parcels facing development pressure under conflicting zoning. Users can leverage the tool strategically to promote proactive upzoning of high-priority parcels, aligning current zoning more closely with anticipated development. This approach aims to foster affordable housing in Philadelphia, addressing one of the city's most pressing challenges.
*Smart Growth meets* SmartZoning®
## Overview
Below, we outline how we developed a predictive model that effectively forecasts future annual development patterns with a low mean absolute error. This model is the basis of SmartZoning: by anticipating future development, it can highlight where current zoning may hinder smart growth. We also consider the relationship between development pressure and race, income, and housing cost burden, demonstrating the generalizability of our model across different socioeconomic contexts and investigating the impacts of development locally and city-wide.
```{r setup}
#| output: false
required_packages <- c("tidyverse", "sf", "acs", "tidycensus", "sfdep", "kableExtra", "conflicted",
                       "gganimate", "tmap", "gifski", "transformr", "ggpubr", "randomForest", "janitor",
                       "igraph", "plotly", "ggcorrplot", "Kendall", "car", "shiny", "leaflet",
                       "classInt")

suppressWarnings(
  install_and_load_packages(required_packages)
)

source("utils/viz_utils.R")

urls <- c(
  roads = 'https://opendata.arcgis.com/datasets/261eeb49dfd44ccb8a4b6a0af830fdc8_0.geojson', # for Broad and Market
  building_permits = building_permits_path,
  permits_bg = final_dataset_path,
  filtered_zoning = "shiny/app_data.geojson",
  ols_preds = 'data/model_outputs/ols_preds.geojson',
  rf_test_preds = 'data/model_outputs/rf_test_preds.geojson',
  rf_val_preds = 'data/model_outputs/rf_val_preds.geojson',
  rf_proj_preds = 'data/model_outputs/rf_proj_preds.geojson',
  council_dists = "https://opendata.arcgis.com/datasets/9298c2f3fa3241fbb176ff1e84d33360_0.geojson"
)

suppressMessages({
  invisible(
    imap(urls, ~ assign(.y, phl_spat_read(.x), envir = .GlobalEnv))
  )
})

broad_and_market <- roads %>%
  filter(ST_NAME %in% c("BROAD", "MARKET") |
           SEG_ID %in% c(440370, 421347, 421338, 421337, 422413, 423051, 440403, 440402, 440391, 440380))

building_permits <- building_permits %>%
  filter(permittype %in% c("RESIDENTIAL BUILDING", "BP_ADDITION", "BP_NEWCNST"))
```
## Feature Selection and Engineering
This study leverages open data sources, including permit counts, council district boundaries, racial mix, median income, housing cost burden, and more, to holistically understand what drives development pressure. Generally, data are collected at the block group or parcel level and aggregated up to the council district to capture both local and citywide trends.
| Dataset | Source | Geo Level |
|:------|:-----|:------|
| Construction Permits | [Philadelphia Dept. of Licenses & Inspections](https://opendataphilly.org/datasets/licenses-and-inspections-building-and-zoning-permits/) | Parcel |
| Zoning Base Map | [Planning Commission](https://opendataphilly.org/datasets/zoning-base-districts/) | Parcel |
| Zoning Overlays | [Planning Commission](https://opendataphilly.org/datasets/zoning-overlays/) | Parcel |
| Demographic and Socioeconomic Data | [U.S. Census Bureau's ACS 5-Year Estimates](https://data.census.gov/) | Block Group |
| Council District Boundaries and Leadership | [City of Philadelphia](https://opendataphilly.org/datasets/city-council-districts/) | District |
### Construction Permits
Permits data from 2013 through 2023, collected from the Philadelphia Department of Licenses & Inspections, are the basis of our model. We consider only new construction permits granted for residential projects; in the future, also filtering for "full" or "substantial" renovations could add nuance to the complexities of development pressure. Given the granular spatial scale of our analysis, and the need to aggregate Census data to our unit of analysis, we aggregate these permits to the block group level.
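The aggregation step itself is simple: spatially join each permit point to a block group and count permits per block group per year. Below is a minimal sketch, assuming the point layer `building_permits` with a `year` column and a hypothetical block-group polygon layer `bg` keyed by `geoid10` (the actual pipeline lives in the project's data-preparation scripts).

```{r permit-agg-sketch}
#| eval: false
# Sketch only: count permits per block group per year, keeping explicit zeros
permit_counts <- building_permits %>%
  st_join(bg %>% select(geoid10)) %>%           # attach a block-group ID to each permit point
  st_drop_geometry() %>%
  count(geoid10, year, name = "permits_count") %>%
  complete(geoid10 = bg$geoid10, year = 2013:2023,
           fill = list(permits_count = 0))      # block-group/years with no permits get 0

permits_bg_sketch <- bg %>%
  left_join(permit_counts, by = "geoid10")      # one row per block group per year
```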
###### Construction Permits per Year
Philadelphia, PA
```{r gif}
#| results: hide
#| output: false
tm <- tmap_theme(tm_shape(permits_bg %>% filter(!year %in% c(2012, 2024))) +
tm_polygons(col = "permits_count", border.alpha = 0, palette = mono_5_green, style = "fisher", colorNA = "lightgrey", title = "Permits") +
tm_facets(along = "year") +
tm_shape(broad_and_market) +
tm_lines(col = "darkgrey") +
tm_layout(frame = FALSE))
suppressMessages(
tmap_animation(tm, "assets/permits_animation.gif", delay = 50)
)
bar_graph <- ggplot(building_permits %>% filter(!year %in% c(2024)), aes(x = as.factor(year))) +
geom_bar(fill = palette[1], color = NA, alpha = 0.7) +
labs(title = "Permits per Year") +
theme_minimal() +
theme(axis.title = element_blank(),
aspect.ratio = 0.25)
# Ensure the 'assets' directory exists
if (!dir.exists("assets")) {
dir.create("assets")
}
# Save the plot
ggsave(bar_graph, filename = "assets/permits_per_year.png", width = 7, height = 3, units = "in")
```
![](assets/permits_animation.gif)
![](assets/permits_per_year.png)
We note a significant uptick in new construction permits as we approach 2021, followed by a sharp decline. It is generally acknowledged that this trend was due to the [expiration of a tax abatement program](https://www.axios.com/local/philadelphia/2022/01/07/philadelphia-tax-abatement-permit-applications) for developers.
When assessing new construction permit counts by council district, we see that a few districts issued the bulk of new permits during the 2021 peak.
**Hover over the lines in the graph to see more about the volume of permits and who granted them.**
```{r perms x dist}
perms_x_dist <- st_join(building_permits, council_dists)
perms_x_dist_sum <- perms_x_dist %>%
st_drop_geometry() %>%
group_by(DISTRICT, year) %>%
summarize(permits_count = n())
perms_x_dist_mean = perms_x_dist_sum %>%
group_by(year) %>%
summarize(permits_count = mean(permits_count)) %>%
mutate(DISTRICT = "Average")
perms_x_dist_sum <- bind_rows(perms_x_dist_sum, perms_x_dist_mean) %>%
mutate(color = ifelse(DISTRICT != "Average", 0, 1))
# Build a custom hover-text column for the interactive tooltip
perms_x_dist_sum$hover_text <- paste("Year: ", perms_x_dist_sum$year,
                                     "<br>Permit Count: ", perms_x_dist_sum$permits_count,
                                     "<br>District: ", perms_x_dist_sum$DISTRICT)
# Create the ggplot
p <- ggplot(perms_x_dist_sum %>% filter(year > 2012, year < 2024),
aes(x = year, y = permits_count, color = as.character(color),
group = interaction(DISTRICT, color), text = hover_text)) +
geom_line(lwd = 0.7) +
labs(title = "Permits per Year by Council District", y = "Total Permits") +
theme_minimal() +
theme(axis.title.x = element_blank(), legend.position = "none") +
scale_color_manual(values = c(palette[5], palette[1]))
# Convert to ggplotly and specify custom tooltip
ggplotly(p, tooltip = "text")
```
### Spatio-Temporal Features
New construction exhibits sizable spatial and temporal autocorrelation. In other words, there is a strong relationship between the number of permits in a given block group and the number in neighboring block groups, as well as between the number of permits issued in a block group in a given year and the number issued in that same block group the previous year. To account for these relationships, we engineer new features, including both space and time lags. All of these engineered features have strong correlation coefficients with our dependent variable, `permits_count`, and p-values indicating that these relationships are statistically significant.
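As an illustration, here is a minimal sketch of how such lags might be built on the `permits_bg` panel with `dplyr` and `sfdep`; the lag column names are hypothetical, and the project's actual feature engineering lives in its data-preparation scripts.

```{r lag-sketch}
#| eval: false
# Sketch only: one-year time lag and a first-order spatial lag of permit counts
permits_lagged <- permits_bg %>%
  group_by(geoid10) %>%
  arrange(year, .by_group = TRUE) %>%
  mutate(lag_1yr = dplyr::lag(permits_count)) %>%         # same block group, previous year
  ungroup() %>%
  group_by(year) %>%
  mutate(nb = st_contiguity(geometry),                    # queen-contiguity neighbors (sfdep)
         wt = st_weights(nb),                             # row-standardized weights
         lag_space = st_lag(permits_count, nb, wt)) %>%   # weighted mean of neighbors' counts
  ungroup()
```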
```{r corrplots}
permits_bg_long <- permits_bg %>%
filter(!year %in% c(2024)) %>%
st_drop_geometry() %>%
pivot_longer(
cols = c(starts_with("lag")),
names_to = "Variable",
values_to = "Value"
)
ggscatter(permits_bg_long, x = "permits_count", y = "Value", facet.by = "Variable",
add = "reg.line",
add.params = list(color = palette[3]),
conf.int = TRUE, alpha = 0.2
) +
stat_cor(method = "pearson", p.accuracy = 0.001, r.accuracy = 0.01, size = 3) +
labs(title = "Correlation of `permits_count` and Engineered Features",
x = "Value",
y = "Permits Count") +
theme_minimal()
```
### Socioeconomic Features
Socioeconomic factors such as race, income, and housing cost burden play an outsized role in affordability in cities like Philadelphia, which are marred by a pervasive and persistent history of housing discrimination and systemic disinvestment in poor and minority neighborhoods. To account for these issues, we incorporate data from the US Census Bureau's American Community Survey (ACS) 5-Year estimates. Later, we also consider our model's generalizability across different racial and economic contexts to ensure that it will not inadvertently reinforce structural inequity.
Spatially, it is clear that non-white communities earn lower median incomes and experience higher rates of extreme rent burden (households spending more than 35% of income on gross rent).
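A minimal sketch of how such variables can be pulled with `tidycensus` is shown below; the three ACS variable codes are standard table IDs for total population, median household income, and the White non-Hispanic population, while the rent-burden shares used in the model are derived separately in the project's data-preparation scripts.

```{r acs-sketch}
#| eval: false
# Sketch only: selected ACS 5-year estimates at the block-group level
acs_bg <- get_acs(
  geography = "block group",
  state     = "PA",
  county    = "Philadelphia",
  year      = 2022,
  survey    = "acs5",
  variables = c(total_pop = "B01003_001",
                med_inc   = "B19013_001",
                white_nh  = "B03002_003"),
  geometry  = TRUE,
  output    = "wide"
) %>%
  mutate(percent_nonwhite = 1 - white_nhE / total_popE)  # "E" suffix marks the estimate column
```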
```{r socioecon}
med_inc <- tmap_theme(tm_shape(permits_bg %>% filter(year == 2022)) +
tm_polygons(col = "med_inc", border.alpha = 0, palette = mono_5_orange, style = "fisher", colorNA = "lightgrey", title = "Median Income ($)") +
tm_shape(broad_and_market) +
tm_lines(col = "darkgrey") +
tm_layout(frame = FALSE))
race <- tmap_theme(tm_shape(permits_bg %>% filter(year == 2022)) +
tm_polygons(col = "percent_nonwhite", border.alpha = 0, palette = mono_5_orange, style = "fisher", colorNA = "lightgrey", title = "Nonwhite (%)") +
tm_shape(broad_and_market) +
tm_lines(col = "darkgrey") +
tm_layout(frame = FALSE))
rent_burd <- tmap_theme(tm_shape(permits_bg %>% filter(year == 2022)) +
tm_polygons(col = "ext_rent_burden", border.alpha = 0, palette = mono_5_orange, style = "fisher", colorNA = "lightgrey", title = "Rent Burden (%)") +
tm_shape(broad_and_market) +
tm_lines(col = "darkgrey") +
tm_layout(frame = FALSE))
tmap_arrange(med_inc, race, rent_burd)
```
## Model Building
> “All the complaints about City zoning regulations really boil down to the fact that City Council has suppressed infill housing or restricted multi-family uses, which has served to push average housing costs higher.” - Jon Geeting, Philly 3.0 Engagement Director
SmartZoning® seeks to predict where permits are most likely to be filed as a proxy for urban growth. As discussed, predicting growth is fraught because growth is shaped by political forces rather than by the plans published by the city's Planning Commission. Comprehensive plans, typically set on ten-year timelines, tend to become mere suggestions, ultimately subject to the prerogatives of city council members rather than serving as steadfast guides for smart growth. With these dynamics in mind, SmartZoning's prediction model accounts for socioeconomic, political, and spatiotemporal features.
### Tests for Correlation and Collinearity
#### Correlation Coefficients
In building our model, we aim to select variables that correlate significantly with `permits_count`. Using a correlation matrix, we can assess whether our predictors are, in fact, meaningfully associated with our dependent variable. As it turns out, the socioeconomic variables are not (the matrix excludes the spatio-temporal variables, which we established to be significant above), but we retain them for the sake of later analysis.
```{r corrplot}
corr_vars <- c("total_pop",
"med_inc",
"percent_nonwhite",
"percent_renters",
"rent_burden",
"ext_rent_burden")
corr_dat <- permits_bg %>% select(all_of(corr_vars), permits_count) %>% select(where(is.numeric)) %>% st_drop_geometry() %>% unique() %>% na.omit()
corr <- round(cor(corr_dat), 2)
p.mat <- cor_pmat(corr_dat)
ggcorrplot(corr, p.mat = p.mat, hc.order = FALSE,
type = "full", insig = "blank", lab = TRUE, colors = c(palette[2], "white", palette[3])) +
annotate(
geom = "rect",
xmin = .5, xmax = 7.5, ymin = 6.5, ymax = 7.5,
fill = "transparent", color = "red", alpha = 0.5
)
```
#### VIF
We also aim to minimize or eliminate multicollinearity in our model. For this purpose, we evaluate the variance inflation factor (VIF) of each predictor. The table below lists the VIF of all of our predictors; we exclude any with a VIF of 5 or more from our final model, including `district` (council district) and several historic district and planning overlays.
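For reference, the VIF of a single predictor measures how well that predictor can be reconstructed from the others (the `car` package reports a generalized VIF, or GVIF, for multi-degree-of-freedom terms such as factors):

$$
\mathrm{VIF}_j = \frac{1}{1 - R_j^2},
$$

where $R_j^2$ is the $R^2$ obtained by regressing predictor $j$ on all of the remaining predictors. A VIF of 5 corresponds to roughly 80% of a predictor's variance being explained by the others.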
```{r vif}
ols <- lm(permits_count ~ ., data = permits_bg %>% filter(year < 2024) %>% select(-c(mapname, geoid10, year)) %>% st_drop_geometry())
vif(ols) %>%
data.frame() %>%
clean_names() %>%
select(gvif) %>%
arrange(desc(gvif)) %>%
kablerize()
```
\
Notably, permit count does not have a particularly strong correlation with any of our selected variables. This might suggest that permits are evenly distributed throughout the city. However, as we can see below, there are few block groups with more than 50 permits in a given year, indicating that permits are granted on a block-by-block basis across all districts. **The need for SmartZoning applies to most Philadelphia neighborhoods, not just a select few.**
```{r permits per block per year}
ggplot(permits_bg %>% st_drop_geometry %>% filter(!year %in% c(2024)), aes(x = permits_count)) +
geom_histogram(fill = palette[1], color = NA, alpha = 0.7) +
labs(title = "Permits per Block Group per Year",
subtitle = "Log-Transformed",
y = "Count") +
scale_x_log10() +
facet_wrap(~year) +
theme_minimal() +
theme(axis.title.x = element_blank())
```
### Spatial Patterns
In addition to correlation among non-spatial variables, our dependent variable, `permits_count`, displays a high degree of spatial autocorrelation. That is, the number of permits at a given location is closely related to the number of permits at neighboring locations. We account for this in our model by factoring in spatial lag, and we explore it here by evaluating local Moran's I, a measure of how concentrated high or low values are at a given location. We identify hotspots of new construction in 2023 by looking for statistically significant concentrations of new building permits.
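For reference, local Moran's I for block group $i$, in its standard (Anselin 1995) form, compares a block group's deviation from the citywide mean with the weighted deviations of its neighbors:

$$
I_i = \frac{x_i - \bar{x}}{m_2} \sum_{j} w_{ij}\,(x_j - \bar{x}),
\qquad
m_2 = \frac{1}{n}\sum_{k}(x_k - \bar{x})^2,
$$

where $x_i$ is the permit count in block group $i$, $w_{ij}$ are the spatial weights for its contiguous neighbors, and $\bar{x}$ is the citywide mean. Large positive values of $I_i$ with small simulated p-values correspond to the "High-High" hotspots mapped below.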
```{r moran}
lisa <- permits_bg %>%
filter(year == 2023) %>%
mutate(nb = st_contiguity(geometry),
wt = st_weights(nb),
permits_lag = st_lag(permits_count, nb, wt),
moran = local_moran(permits_count, nb, wt)) %>%
tidyr::unnest(moran) %>%
mutate(pysal = ifelse(p_folded_sim <= 0.1, as.character(pysal), NA),
hotspot = case_when(
pysal == "High-High" ~ "Yes",
TRUE ~ "No"
))
morans_i <- tmap_theme(tm_shape(lisa) +
tm_polygons(col = "ii", border.alpha = 0, style = "jenks", palette = mono_5_green, title = "Local Moran's I") +
tm_shape(broad_and_market) +
tm_lines(col = "darkgrey"))
p_value <- tmap_theme(tm_shape(lisa) +
tm_polygons(col = "p_ii", border.alpha = 0, style = "jenks", palette = mono_5_green, title = "P-Value") +
tm_shape(broad_and_market) +
tm_lines(col = "darkgrey"))
sig_hotspots <- tmap_theme(tm_shape(lisa) +
tm_polygons(col = "hotspot", border.alpha = 0, style = "cat", palette = c(mono_5_green[1], mono_5_green[5]), textNA = "Not a Hotspot", title = "Hotspot?") +
tm_shape(broad_and_market) +
tm_lines(col = "darkgrey"))
tmap_arrange(morans_i, p_value, sig_hotspots, ncol = 3)
```
### Compare Models
As our baseline, we used an ordinary least squares (OLS) regression model. We compared this to a random forest model, which is somewhat more sophisticated. We also considered a Poisson model, but found that it drastically overpredicted for outliers and therefore discarded it. As a point of comparison, we trained both the OLS and random forest models on data from 2013 through 2021, tested them on 2022 data, and compared the results for accuracy, overfitting, and generalizability.
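A minimal sketch of that comparison is shown below, assuming the `permits_bg` panel described above; the full training code, including tuning, is in [`utils/models.R`](https://github.com/nlebovits/musa-508-2023-final/blob/main/utils/models.R).

```{r model-compare-sketch}
#| eval: false
# Sketch only: train OLS and random forest on 2013-2021, test on 2022, compare MAE
model_dat <- permits_bg %>%
  st_drop_geometry() %>%
  select(-c(mapname, geoid10))

train <- model_dat %>% filter(year <= 2021) %>% select(-year) %>% na.omit()
test  <- model_dat %>% filter(year == 2022) %>% select(-year) %>% na.omit()

ols_fit <- lm(permits_count ~ ., data = train)
rf_fit  <- randomForest(permits_count ~ ., data = train, ntree = 500)

mae <- function(actual, predicted) mean(abs(actual - predicted), na.rm = TRUE)
c(ols = mae(test$permits_count, predict(ols_fit, test)),
  rf  = mae(test$permits_count, predict(rf_fit,  test)))
```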
#### OLS
Ordinary least squares (OLS) regression explores the relationships between a dependent variable and one or more explanatory variables, considering the strength and direction of those relationships and the goodness of model fit. Overall, we found that our basic OLS model performed quite well; with a mean absolute error (MAE) of `r round(mean(ols_preds$abs_error, na.rm = TRUE), 2)`, it is fairly accurate in predicting future development. We also note that it overpredicts in most cases, which, given our goal of anticipating and preparing for high demand for future development, is preferable to underpredicting. That said, it still produces a handful of outliers that deviate substantially from the predicted value. As a result, we considered a random forest model to see whether it would handle these outliers better.
```{r ols graphs}
suppressMessages(
ggplot(ols_preds, aes(x = permits_count, y = ols_preds)) +
geom_point(alpha = 0.2) +
labs(title = "Predicted vs. Actual Permits: OLS",
subtitle = "2022 Data",
x = "Actual Permits",
y = "Predicted Permits") +
geom_abline() +
geom_smooth(method = "lm", se = FALSE, color = palette[3]) +
theme_minimal()
)
```
```{r ols maps}
ols_preds_map <- tmap_theme(tm_shape(ols_preds) +
tm_polygons(col = "ols_preds", border.alpha = 0, palette = mono_5_green, style = "fisher", colorNA = "lightgrey", title = "Permits") +
tm_shape(broad_and_market) +
tm_lines(col = "lightgrey") +
tm_layout(frame = FALSE),
"Predicted Permits: OLS")
ols_error_map <- tmap_theme(tm_shape(ols_preds) +
tm_polygons(col = "abs_error", border.alpha = 0, palette = mono_5_orange, style = "fisher", colorNA = "lightgrey", title = "Absolute Error") +
tm_shape(broad_and_market) +
tm_lines(col = "lightgrey") +
tm_layout(frame = FALSE),
"Absolute Error: OLS")
tmap_arrange(ols_preds_map, ols_error_map)
```
#### Random Forest
Random forest models improve on OLS in their ability to capture non-linear patterns and handle outliers, and they tend to be less sensitive to multicollinearity. Thus, we considered whether a random forest model would address some of the weaknesses of the OLS model. We found that this was indeed the case: the random forest model yielded an MAE of `r round(mean(rf_test_preds$abs_error, na.rm = TRUE), 2)`, but the range of absolute error was sizably reduced, with outliers exerting less of an impact on the model.
Compared to the OLS model, the relationship between predicted and actual permits is similar. However, the slope of the trend line is noticeably steeper, which may indicate that the random forest's predictions are more responsive to changes in the observed values. Because the steepness of the trend line reflects the strength and direction of the relationship between predicted and observed permit counts, this likely indicates a better-fitting model.
```{r rf plots}
suppressMessages(
ggplot(rf_test_preds, aes(x = permits_count, y = rf_test_preds)) +
geom_point(alpha = 0.2) +
labs(title = "Predicted vs. Actual Permits: RF",
subtitle = "2022 Data",
x = "Actual Permits",
y = "Predicted Permits") +
geom_abline() +
geom_smooth(method = "lm", se = FALSE, color = palette[3]) +
theme_minimal()
)
```
```{r rf maps}
test_preds_map <- tmap_theme(tm_shape(rf_test_preds) +
tm_polygons(col = "rf_test_preds", border.alpha = 0, palette = mono_5_green, style = "fisher", colorNA = "lightgrey", title = "Permits") +
tm_shape(broad_and_market) +
tm_lines(col = "lightgrey") +
tm_layout(frame = FALSE),
"Predicted Permits: RF Test")
test_error_map <- tmap_theme(tm_shape(rf_test_preds) +
tm_polygons(col = "abs_error", border.alpha = 0, palette = mono_5_orange, style = "fisher", colorNA = "lightgrey", title = "Absolute Error") +
tm_shape(broad_and_market) +
tm_lines(col = "lightgrey") +
tm_layout(frame = FALSE),
"Absolute Error: RF Test")
tmap_arrange(test_preds_map, test_error_map)
```
## Model Testing
Model training, testing, and validation involved three steps. First, we partitioned our data into training, testing, and validation sets, using data from 2013 through 2021 for initial model training. Next, we evaluated the models' ability to predict 2022 construction permits using our test set, which consisted of all permits filed in 2022; we carried out additional feature engineering and model tuning, iterating on these training and testing splits and seeking to minimize both the mean absolute error (MAE) of our best model and the spread of its absolute error. Finally, when we were satisfied with our best model, we evaluated it once more by training it on all data from 2013 through 2022 and validating it on data from 2023 (all but the last two weeks, which we consider negligible for our purposes), which the model had never "seen" before. As Kuhn and Johnson write in [Applied Predictive Modeling (2013)](https://www.amazon.com/Applied-Predictive-Modeling-Max-Kuhn/dp/1461468485/ref=as_li_ss_tl?dchild=1&keywords=Applied+Predictive+Modeling&qid=1597365412&sr=8-1&linkcode=sl1&tag=inspiredalgor-20&linkId=0d09eefbda1ecba5066f6b37df9d5ff6&language=en_US), "Ideally, the model should be evaluated on samples that were not used to build or fine-tune the model, so that they provide an unbiased sense of model effectiveness." (Code for all of these steps [is available on GitHub](https://github.com/nlebovits/musa-508-2023-final/blob/main/utils/models.R).)
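The final step of that procedure looks roughly like the sketch below, reusing the `model_dat` frame from the earlier sketch; this is a simplification of `utils/models.R`, not the exact code.

```{r final-validation-sketch}
#| eval: false
# Sketch only: retrain on 2013-2022, then score once against held-out 2023 data
final_train <- model_dat %>% filter(year <= 2022) %>% select(-year) %>% na.omit()
holdout_23  <- model_dat %>% filter(year == 2023) %>% select(-year) %>% na.omit()

rf_final <- randomForest(permits_count ~ ., data = final_train, ntree = 500)

preds_23 <- predict(rf_final, newdata = holdout_23)
mean(abs(holdout_23$permits_count - preds_23))  # validation MAE
```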
### Accuracy
Validation confirms the strength of our model once more: based on 2023 data, our random forest model produces an MAE of `r round(mean(rf_val_preds$abs_error, na.rm = TRUE), 2)`. We note again that the range of model error is relatively narrow. Generally, where the model predicts more permits, there is also higher error; spatially, absolute errors cluster in a handful of block groups with high permit counts.
```{r rf validate}
suppressMessages(
ggplot(rf_val_preds, aes(x = permits_count, y = rf_val_preds)) +
geom_point(alpha = 0.2) +
labs(title = "Predicted vs. Actual Permits: RF",
subtitle = "2023 Data",
x = "Actual Permits",
y = "Predicted Permits") +
geom_abline() +
geom_smooth(method = "lm", se = FALSE, color = palette[3]) +
theme_minimal()
)
val_preds_map <- tmap_theme(tm_shape(rf_val_preds) +
tm_polygons(col = "rf_val_preds", border.alpha = 0, palette = mono_5_green, style = "fisher", colorNA = "lightgrey", title = "Permits") +
tm_shape(broad_and_market) +
tm_lines(col = "lightgrey") +
tm_layout(frame = FALSE),
"Predicted Permits: RF Validate")
val_error_map <- tmap_theme(tm_shape(rf_val_preds) +
tm_polygons(col = "abs_error", border.alpha = 0, palette = mono_5_orange, style = "fisher", colorNA = "lightgrey", title = "Absolute Error") +
tm_shape(broad_and_market) +
tm_lines(col = "lightgrey") +
tm_layout(frame = FALSE),
"Absolute Error: RF Validate")
tmap_arrange(val_preds_map, val_error_map)
```
When comparing the spread of predicted versus actual permits, there is more divergence at higher permit counts, which is also observed spatially. Throughout this study we have hedged against any model's sensitivity to outliers, and while there is noticeable variability in prediction errors at high permit counts, we remain confident in our model selection.
### Generalizability
Comparing model error per block group to several sociodemographic characteristics indicates that the model generalizes well; it displays consistent and relatively low absolute errors across income, racial mix, rent burden, and total population. Error is not correlated with affordability and in fact is slightly negatively correlated with the percentage of the nonwhite population. This observed pattern may be attributed to the likelihood that majority-minority neighborhoods experience a comparatively lower volume of overall development, thereby diminishing the absolute magnitude of error, despite potential proportional increases. Additionally, there is a slight increase in error with the total population, aligning with the expectation that higher population figures correspond to more extensive development activities.
```{r rent x error scatter}
rf_val_preds_long <- rf_val_preds %>%
pivot_longer(cols = c(rent_burden, percent_nonwhite, total_pop, med_inc),
names_to = "variable", values_to = "value") %>%
mutate(variable = case_when(
variable == "med_inc" ~ "Median Income ($)",
variable == "percent_nonwhite" ~ "Nonwhite (%)",
variable == "rent_burden" ~ "Rent Burden (%)",
TRUE ~ "Total Pop."
))
ggplot(rf_val_preds_long, aes(x = value, y = abs_error)) +
geom_point(alpha = 0.2) +
geom_smooth(method = "lm", se = FALSE, color = palette[3]) +
facet_wrap(~ variable, scales = "free_x") +
labs(title = "Generalizability of Absolute Error",
x = "Value",
y = "Absolute Error") +
theme_minimal()
```
Considering the reign of councilmanic prerogative in Philadelphia, briefly discussed in the Background, it is important to ask whether the model generalizes across the city's 10 council districts. Consistent with the error distributions discussed above, the council districts with the highest volume of permits also have the widest distribution of errors. That said, the overall distribution of error per district remains narrow; no district has a mean error above 3, and only District 5 has an upper-quartile error above 5. Evidently, a handful of outliers drive up the MAE, but model error is well-distributed overall.
```{r council district}
suppressMessages(
ggplot(rf_val_preds, aes(x = reorder(district, abs_error, FUN = mean), y = abs_error)) +
geom_boxplot(fill = NA, color = palette[3], alpha = 0.7) +
labs(title = "MAE by Council District",
y = "Mean Absolute Error",
x = "Council District") +
theme_minimal()
)
```
## Assessing Development Pressure
The core functionality of our SmartZoning tool is to identify conflict between projected development and current zoning. Notably, RSA4 and RSA5 (Residential Single-Family Attached) zoning covers much of the area where we predict the most development pressure. When we spoke with Jon Geeting, Engagement Director of Philadelphia 3.0, he noted that RSA4 and RSA5 are two of the most prohibitive zoning designations, especially in areas close to amenities like schools, grocery stores, and, most of all, public transit.
```{r restrictive zoning}
zoning_map <- tmap_theme(tm_shape(filtered_zoning) +
tm_polygons(col = "code", border.alpha = 0, colorNA = "lightgrey", title = "Zoning code", palette = zoning_palette) +
tm_shape(broad_and_market) +
tm_lines(col = "lightgrey") +
tm_layout(frame = FALSE,
legend.height = 0.4),
"Restrictive Zoning")
mismatch <- tmap_theme(tm_shape(filtered_zoning) +
tm_polygons(col = "rf_val_preds", border.alpha = 0, colorNA = "lightgrey", palette = mono_5_orange, style = "fisher", title = "Predicted New Permits") +
tm_shape(broad_and_market) +
tm_lines(col = "lightgrey") +
tm_layout(frame = FALSE),
"Development Pressure")
tmap_arrange(zoning_map, mismatch)
```
Using these data, we can filter for properties that 1) are zoned restrictively and 2) face high development pressure. We note that these properties are concentrated in particular parts of the city. Furthermore, we can identify properties with high potential for assemblage, which suggests the ability to accommodate high-density, multi-unit housing. Because developers seek out assemblage opportunities to create more sizable and therefore more economically viable projects, assemblage also drives development pressure. **Zoom in to explore parcels across the city.**
```{r interactive}
tmap_mode('view')
filtered_zoning %>%
filter(rf_val_preds > 10) %>%
tm_shape() +
tm_polygons(col = "code", border.alpha = 0, colorNA = "lightgrey",
popup.vars = c('rf_val_preds', 'code'), palette = zoning_palette, title = "Zoning") +
tm_shape(broad_and_market) +
tm_lines(col = "lightgrey") +
tm_layout(frame = FALSE)
```
```{r assemblage}
filtered_zoning %>%
st_drop_geometry() %>%
select(rf_val_preds,
n_contig,
sum_contig_area,
code) %>%
filter(rf_val_preds > 10,
n_contig > 2) %>%
arrange(desc(rf_val_preds)) %>%
kablerize(caption = "Restricively-Zoned Properties with High Development Risk")
```
## 2024 Predictions
Using the trained and validated model, we predict where new construction permits are most likely to be filed and granted in 2024. Interestingly, when comparing the prediction map with the distribution of median income, development pressure is expected to be greatest in block groups with higher median incomes. This relationship suggests that developers may be strategically targeting areas with greater economic prosperity to meet market demand, de-risk their return on investment, and tap into more robust infrastructure and local services. This is a key insight that zoning advocates can leverage to push for smart growth policies, emphasizing the need for more equitable development policies, incentives for affordable housing, and more meaningful transit-oriented development programs.
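One quick way to put a number on this visual comparison is a rank correlation between projected permits and median income; the sketch below assumes `rf_proj_preds` carries the same `geoid10` block-group identifier as `permits_bg`.

```{r preds-income-sketch}
#| eval: false
# Sketch only: rank correlation between projected 2024 permits and 2022 median income
proj_vs_income <- rf_proj_preds %>%
  st_drop_geometry() %>%
  select(geoid10, rf_proj_preds) %>%
  left_join(permits_bg %>%
              st_drop_geometry() %>%
              filter(year == 2022) %>%
              select(geoid10, med_inc),
            by = "geoid10")

cor(proj_vs_income$rf_proj_preds, proj_vs_income$med_inc,
    method = "spearman", use = "complete.obs")
```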
```{r rf project}
tmap_mode('plot')
preds24 <- tmap_theme(tm_shape(rf_proj_preds) +
tm_polygons(col = "rf_proj_preds", border.alpha = 0, palette = mono_5_green, style = "fisher", colorNA = "lightgrey", title = "Predicted New Permits") +
tm_shape(broad_and_market) +
tm_lines(col = "lightgrey") +
tm_layout(frame = FALSE),
"Projected New Development, 2024")
med_inc22 <- tmap_theme(tm_shape(permits_bg %>% filter(year == 2022)) +
tm_polygons(col = "med_inc", border.alpha = 0, palette = mono_5_orange, style = "fisher", colorNA = "lightgrey", title = "Median Income ($)") +
tm_shape(broad_and_market) +
tm_lines(col = "darkgrey") +
tm_layout(frame = FALSE),
"Median Income, 2022")
tmap_arrange(preds24, med_inc22)
```
## Web Application
Below is a wireframe preview of the SmartZoning® web application; a [prototype built in Shiny is available here](https://nlebovits.shinyapps.io/smart-zoning/). The UX offers key features that leverage this study's modeling and mapping.
![](assets/wireframe.jpeg)
![Wireframes of SmartZoning® prototype](assets/wireframe2.jpeg)
Key Features:
- Interactive parcel-level map that makes key information about each parcel easily accessible
- Filters that enhance spatial exploration and can be customized to different audiences
- View switch between current development and future development (v2 could include historical permit and zoning data for powerful comparatives)
- Compare functionality to understand neighborhood context
- Customizable report generator to instantly create actionable assets
The tool is currently in beta testing. The goal is to optimize it for the use case of a smart growth or rezoning advocate. Most critical is prioritizing the buttons and filters so that advocates can use the tool as a real-time accessory in negotiations with council members, planning commissions, and other key stakeholders.
The intended impact is to provide advocates with data-driven insights that eloquently and efficiently make the case for streamlining smart growth policies. By providing an unprecedented combination of predictions and supplementary data, the SmartZoning® tool will enable users to proactively engage decision makers. *Stay tuned for more updates.*
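For readers curious about the plumbing, below is a stripped-down sketch of the kind of Shiny scaffolding behind a filterable parcel map like the prototype's. It assumes the `shiny/app_data.geojson` layer loaded earlier, with its `rf_val_preds` and `code` columns, and is illustrative rather than the production app.

```{r shiny-sketch}
#| eval: false
library(shiny)
library(leaflet)
library(sf)
library(dplyr)

app_data <- st_read("shiny/app_data.geojson", quiet = TRUE) %>%
  st_transform(4326)  # leaflet expects WGS84

ui <- fluidPage(
  titlePanel("SmartZoning: parcels under development pressure"),
  sliderInput("min_preds", "Minimum predicted permits", min = 0, max = 50, value = 10),
  leafletOutput("map", height = 600)
)

server <- function(input, output, session) {
  output$map <- renderLeaflet({
    filtered <- app_data %>% filter(rf_val_preds >= input$min_preds)
    leaflet(filtered) %>%
      addProviderTiles(providers$CartoDB.Positron) %>%
      addPolygons(weight = 0.5, fillOpacity = 0.6,
                  popup = ~paste0("Zoning: ", code,
                                  "<br>Predicted permits: ", round(rf_val_preds, 1)))
  })
}

shinyApp(ui, server)
```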
## Next Steps
SmartZoning® emphasizes the importance of data-driven decision-making in zoning. Utilizing parcel-level data to comprehensively analyze the whole city helps identify areas with potential for growth and guide zoning decisions that prioritize a balanced and equitable distribution of development opportunities.
In future iterations, the model could focus on the potential for upzoning in areas where few permits are predicted but incomes are high, such as Center City and Society Hill. To address the concern that highlighting where upzoning is happening now could exacerbate gentrification trends in neighborhoods like Fairmount and Grays Ferry, the model could be calibrated to flag development mismatch in underdeveloped but high-income areas. The goal is a balanced and inclusive approach to development that benefits the existing community, preserves neighborhood character, and provides opportunities for all residents; SmartZoning® can help inform how this balance is struck.