---
title: "Web scraping"
subtitle: "SICSS, 2022"
format: html
editor: visual
---
# Web scraping notebook
In this worksheet, I introduce you to how we might gather data through screen-scraping (or server-side) techniques; the second worksheet will go through an example of how we use API (or client-side) techniques for gathering data.
## Tutorial: Screen-scraping
In this tutorial, you will learn how to scrape data from the web in R:
- How to select CSS elements of a webpage using SelectorGadget (see [here](https://rvest.tidyverse.org/articles/articles/selectorgadget.html) for a detailed overview)
- How to use the `rvest` package to scrape parts of a webpage
## Setup
To practice these skills, we will use a series of webpages on the Internet Archive that host material collected at the Arab Spring protests in Egypt in 2011. The original website can be seen [here](https://www.tahrirdocuments.org/) and below.
![](images/tahrir_page.png){width="100%"}
## Load data and packages
Before proceeding, we'll load the packages we will need for this tutorial.
```{r, message=F}
library(tidyverse) # loads dplyr, ggplot2, and others
library(ggthemes) # includes a set of themes to make your visualizations look nice!
library(readr) # more informative and easy way to import data
library(stringr) # to handle text elements
library(rvest) #for scraping
```
We can download the final dataset we will produce with:
```{r}
pamphdata <- read_csv("data/pamphlets_formatted_gsheets.csv")
```
You can also view the formatted output of this scraping exercise, alongside images of the documents in question, in Google Sheets [here](https://docs.google.com/spreadsheets/d/1rg2VTV6uuknpu6u-L5n7kvQ2cQ6e6Js7IHp7CaSKe90/edit?usp=sharing).
If you're working on this document from your own computer ("locally") you can download the Tahrir documents data in the following way:
```{r, eval = F}
pamphdata <- read_csv("https://raw.githubusercontent.com/cjbarrie/sicss_21/main/01_scraping_APIs/data/pamphlets_formatted_gsheets.csv")
```
## Inspect and filter data
Let's have a look at what we will end up producing:
```{r}
head(pamphdata)
```
## Inspecting HTML contents
We are going to return to the Internet Archived webpages to see how we can produce this final formatted dataset. The archived Tahrir Documents webpages can be accessed [here](https://wayback.archive-it.org/2358/20120130143023/http://www.tahrirdocuments.org/).
We first want to inspect how the contents of each webpage are stored.
When we scroll to the very bottom of the page, we see listed a number of hyperlinks to documents stored by month:
![](images/tahrir_archives.png)
We will click through the documents stored for March and then click on the top listed pamphlet entitled "The Season of Anger Sets in Among the Arab Peoples." You can access this [here](https://wayback.archive-it.org/2358/20120130161341/http://www.tahrirdocuments.org/2011/03/voice-of-the-revolution-3-page-2/).
We will store this url to inspect the HTML it contains as follows:
```{r}
url <- "https://wayback.archive-it.org/2358/20120130161341/http://www.tahrirdocuments.org/2011/03/voice-of-the-revolution-3-page-2/"
html <- read_html(url)
html
```
Well, this isn't particularly useful. Let's now see how we can extract the text contained inside.
```{r}
pagetext <- html %>%
html_text()
pagetext
```
Well this looks pretty terrifying now...
We need a way of quickly identifying where the relevant text is so that we can specify this when we are scraping. The most widely-used tool to achieve this is the "Selector Gadget" Chrome Extension. You can add this to your browser for free [here](https://chrome.google.com/webstore/detail/selectorgadget/mhjhnkcfbdhnjickkkdbjoemdmbfginb?hl=en).
The tool works by allowing the user to point and click on elements of a webpage and see which CSS selectors identify them. Unlike alternatives, such as the "Inspect Element" browser tools, it makes it easy to see how an item on the page is identified by CSS selectors (rather than HTML tags alone), which are easier to work with when parsing.
We can do this with our Tahrir documents as below:
![](images/gifcap4.gif){width="100%"}
So now we know that the main text of the translated document is contained between "p" HTML tags. To identify the text between these HTML tags we can run:
```{r}
pagetext <- html %>%
html_elements("p") %>%
html_text(trim=TRUE)
pagetext
```
Which looks quite a lot more manageable...! 😌
What is happening here? Essentially, the `html_elements()` function is scanning the page and collecting all HTML elements contained between `<p>` tags, which we collect using the "p" CSS selector. We are then just grabbing the text contained in this part of the page with the `html_text()` function.
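If it helps to see the mechanism in isolation, here is a toy sketch on a made-up HTML snippet (the snippet is invented for illustration; it is not part of the Tahrir pages):
```{r}
demo <- minimal_html("<div><p>First paragraph.</p><p>Second paragraph.</p></div>")
demo %>%
  html_elements("p") %>%  # every element between <p> tags
  html_text(trim = TRUE)  # just the text they contain
```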
So this gives us one way of capturing the text, but what about if we wanted to get other elements of the document, for example the date or the tags attributed to each document? Well we can do the same thing here too. Let's take the example of getting the date:
![](images/gifcap5.gif){width="100%"}
We see here that the date is identified by the ".calendar" CSS selector and so we enter this into the same `html_elements()` function as before:
```{r}
pagedate <- html %>%
html_elements(".calendar") %>%
html_text(trim=TRUE)
pagedate
```
Of course, this is all well and good, but we also need a way of doing this at scale---we can't just keep repeating the same process for every page we find as this wouldn't be much quicker than just copy pasting. So how can we do this? Well we need first to understand the URL structure of the website in question.
## Inspecting URL structures
When we scroll down the page we see listed a number of documents. Each of these directs to an individual pamphlet distributed at protests during the 2011 Egyptian Revolution.
Click on one of these and see how the URL changes.
We see that if our starting URL was:
```{r, echo=F}
starturl <- "https://wayback.archive-it.org/2358/20120130135111/http://www.tahrirdocuments.org/"
```
```{r, echo=F, comment=NA}
cat(starturl)
```
Then if we click on March 2011, the first month for which we have documents, we see that the url becomes:
```{r, echo=F}
marchurl <- "https://wayback.archive-it.org/2358/20120130143023/http://www.tahrirdocuments.org/2011/03/"
augusturl <- "https://wayback.archive-it.org/2358/20120130142155/http://www.tahrirdocuments.org/2011/08/"
jan2012url <- "https://wayback.archive-it.org/2358/20120130142014/http://www.tahrirdocuments.org/2012/01/"
```
```{r, echo=F, comment=NA}
cat(marchurl)
```
, for August 2011 it becomes:
```{r, echo=F, comment=NA}
cat(augusturl)
```
, and for January 2012 it becomes:
```{r, echo=F, comment=NA}
cat(jan2012url)
```
We notice that for each month, the URL changes with the addition of the year and month between forward slashes at the end of the URL. In the next section, we will go through how to efficiently create a set of URLs to loop through and retrieve the information contained in each individual webpage.
## Looping through dates
We are going to want to retrieve the text of documents archived for each month. As such, our first task is to store each of these webpages as a series of strings. We could do this manually by, for example, pasting year and month strings to the end of each URL for each month from March, 2011 to January, 2012:
```{r}
url <- "https://wayback.archive-it.org/2358/20120130143023/http://www.tahrirdocuments.org/"
url1 <- paste0(url,"2011/03/")
url2 <- paste0(url,"2011/04/")
url3 <- paste0(url,"2011/05/")
#etc...
urls <- c(url1, url2, url3)
```
But this wouldn't be particularly efficient...
Instead, we can wrap all of this in a loop.
```{r}
urls <- character(0)
for (i in 3:13) {
  url <- "https://wayback.archive-it.org/2358/20120130143023/http://www.tahrirdocuments.org/"
  newurl <- ifelse(i < 10, paste0(url, "2011/0", i, "/"),
                   ifelse(i >= 10 & i <= 12, paste0(url, "2011/", i, "/"),
                          paste0(url, "2012/01/")))
  urls <- c(urls, newurl)
}
```
What's going on here? Well, we are first specifying the starting URL as above. We are then iterating through the numbers 3 to 13 and, depending on the number of the loop we are on, telling R to build a new URL by taking the base starting url--- `r url` ---and pasting onto the end of it the string "2011/0", then the number of the loop we are on, and then "/". So, for the first "i" in the loop---the number 3---we are effectively calling the equivalent of:
```{r}
i <- 3
url <- "https://wayback.archive-it.org/2358/20120130143023/http://www.tahrirdocuments.org/"
newurl <- paste0(url,"2011/0",i,"/")
```
Which gives:
```{r, echo=F}
newurl
```
In the above, the `ifelse()` commands are simply telling R: if i (the number of the loop we are on) is less than 10 then `paste0(url,"2011/0",i,"/")`; i.e., if i is less than 10 then paste "2011/0", then "i" and then "/". So for the number 3 this becomes:
`"https://wayback.archive-it.org/2358/20120130143023/http://www.tahrirdocuments.org/2011/03/"`
, and for the number 4 this becomes
`"https://wayback.archive-it.org/2358/20120130143023/http://www.tahrirdocuments.org/2011/04/"`
If, however, `i>=10 & i<=12` (i is greater than or equal to 10 and less than or equal to 12) then we are calling `paste0(url,"2011/",i,"/")` because here we do not need the first "0" in the months.
Finally, if (else) i is greater than 12 then we are calling `paste0(url,"2012/01/")`. For this last call, notice, we do not have to specify whether i is greater than or equal to 12 because we are wrapping everything in `ifelse()` commands. With `ifelse()` calls like this, we are telling R if x "meets condition" then do y, otherwise do z. When we are wrapping multiple `ifelse()` calls within each other, we are effectively telling R if x "meets condition" then do y, or if x "meets other condition" then do z, otherwise do a. So here, the "otherwise do a" part of the `ifelse()` calls is saying: if i is not less than 10, and is not between 10 and 12, then paste "2012/01/" to the end of the URL.
Got it? I didn't even get it on first reading... and I wrote it. The best way to understand what is going on is to run this code yourself and look at what each part is doing.
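As an aside, here is an equivalent sketch that builds the same eleven month URLs from a sequence of dates rather than nested `ifelse()` calls; the `seq()`/`format()` pattern is a suggestion of mine rather than part of the original tutorial code:
```{r}
base_url <- "https://wayback.archive-it.org/2358/20120130143023/http://www.tahrirdocuments.org/"
months <- seq(as.Date("2011-03-01"), as.Date("2012-01-01"), by = "month") # March 2011 to January 2012
urls_alt <- paste0(base_url, format(months, "%Y/%m/")) # e.g. ".../2011/03/"
identical(urls_alt, urls) # should be TRUE
```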
## Looping through pages
So now we have our list of URLs for each month. What next?
Well if we go onto the page of a particular month, let's say March, we will see that the page has multiple paginated tabs at the bottom. Let's see what happens to the URL when we click on one of these:
```{r, echo=F}
marchurl <- "https://wayback.archive-it.org/2358/20120130143023/http://www.tahrirdocuments.org/2011/03/"
marchurlp2 <- "https://wayback.archive-it.org/2358/20120130163651/http://www.tahrirdocuments.org/2011/03/page/2/"
marchurlp3 <- "https://wayback.archive-it.org/2358/20120130163651/http://www.tahrirdocuments.org/2011/03/page/3/"
```
We see that if our starting point URL for March, as above, was:
```{r, echo=F, comment=NA}
cat(marchurl)
```
When we click through to page 2 it becomes:
```{r, echo=F, comment=NA}
cat(marchurlp2)
```
And for page 3 it becomes:
```{r, echo=F, comment=NA}
cat(marchurlp3)
```
We can see pretty clearly that as we navigate through each page, the string "page/2/" or "page/3/" is appended to the URL. So this shouldn't be too tricky to add to our list of URLs. But we want to avoid having to manually click through the archive for each month to figure out how many pagination tabs are at the bottom of each page.
Fortunately, we don't have to. Using the "Selector Gadget" tool again we can automate this process by grabbing the highest number that appears in the pagination bar for each month's pages. The code below achieves this:
```{r, eval=F}
urlpages_all <- character(0) # create empty character vector to deposit our final set of urls
urlpages <- character(0) # create empty character vector to deposit our urls for each page of each month
for (i in seq_along(urls)) { # for loop over each url stored above
  url <- urls[i] # take the ith url from the vector of urls created above
  html <- read_html(url) # read the html
  pages <- html %>%
    html_elements(".page") %>% # grab the page elements
    html_text() # convert to text
  pageints <- as.integer(pages) # convert to a set of integers
  npages <- max(pageints, na.rm = T) # get the highest page number
  for (j in 1:npages) { # for loop over 1:highest page integer for that month's url
    newurl <- paste0(url, "page/", j, "/") # create new url by pasting "page/", the page number, and "/", matching the url structure identified above
    urlpages <- c(urlpages, newurl) # bind with previously created page urls for this month
  }
  urlpages_all <- c(urlpages_all, urlpages) # bind the monthly page-by-page urls together
  urlpages <- character(0) # empty urlpages for the next iteration of the outer loop
  urlpages_all <- gsub("page/1/", "", urlpages_all) # get rid of page/1/ as not needed
}
```
```{r, echo=F}
urlpages_all <- readRDS("data/urlpages_all.RDS")
```
What's going on here? Well, in the first two lines, we are simply creating empty character vectors that we're going to populate in the subsequent loop. Remember that we have a set of eleven starting URLs, one for each of the months archived on this webpage.
So in the code beginning `for (i in seq_along(urls))` we are saying, similar to above, for the first URL to the last URL, do the following in a loop: first, take the url with `url <- urls[i]`, then read the html it contains with `html <- read_html(url)`.
After this line, we are getting the pages as a character vector of page numbers by calling the `html_elements()` function with the ".page" selector. This gives a series of pages stored as e.g. "1" "2" "3".
In order to be able to see how many there are, we need to extract the highest number that appears in this string. To do this, we first need to reformat it as an "integer" object rather than a "character" object so that R can recognize that these are numbers. So we call `pageints <- as.integer(pages)`. Then we get the maximum by simply calling: `npages <- max(pageints, na.rm = T)`.
In the next part of the loop, we are taking the new information we have stored as "npages," i.e., the number of pagination tabs for each month, and telling R: for each of these pages, define a new url by adding "page/" then the number of the pagination tab "j", and then "/". After we've bound all of these together, we get a list of URLs that look like this:
```{r}
head(urlpages_all)
```
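Returning to the page-counting step inside the loop, here is a toy sketch of what `as.integer()` and `max()` are doing; the vector of page strings below is made up for illustration:
```{r, warning=F}
pages <- c("1", "2", "3", "Next")  # the kind of labels a pagination bar might contain
pageints <- as.integer(pages)      # non-numeric labels become NA (with a coercion warning)
max(pageints, na.rm = TRUE)        # the highest page number, ignoring the NAs
```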
## Looping through every page on the website
So what next?
The next step is to get the URLs for each of the documents contained in the archive for each month. How do we do this? Well, we can once again use the "Selector Gadget" tool to work this out. On the main landing page of each month, we see each document listed, as below. For each of these documents, we see that the title, which links to the revolutionary leaflet in question, is identified by two CSS selectors: "h2" and ".post".
![](images/gifcap6.gif){width="100%"}
We can again pass these selectors to `html_elements()` to grab the elements they identify. We can then dig into what's contained inside them by extracting the "children" of these elements. In essence, a child is just a lower-level tag: tags can have tags within tags, and these flow downwards like a family tree (hence the name, I suppose).
So one of the "children" of this element is the link contained inside, which we can get by calling `html_children()` and then asking for the web-link attribute it encloses with `html_attr("href")`. The subsequent lines then just remove extraneous information.
The complete loop, then, to retrieve the URL of the page for every leaflet contained on this website is:
```{r, eval =F}
#GET URLS FOR EACH PAMPHLET
pamlinks_all <- character(0)
for (i in seq_along(urlpages_all)) {
  url <- urlpages_all[i]
  html <- read_html(url)
  links <- html_elements(html, ".post , h2") %>%
    html_children() %>%
    html_attr("href") %>%
    na.omit() %>%
    `attributes<-`(NULL)
  pamlinks_all <- c(pamlinks_all, links)
}
```
```{r, echo=F}
pamlinks_all <- readRDS("data/pamlinks_all.RDS")
```
Which gives us:
```{r}
head(pamlinks_all)
length(pamlinks_all)
```
We see now that we have collected all 523 separate URLs for every revolutionary leaflet contained on these pages. Now we're in a great position to be able to crawl each page and collect the information we need. This final loop is all we need to go through each URL we're interested in and collect relevant information on document text, title, date, tags, and the URL to the image of the revolutionary literature itself.
See if you can work out for yourselves how each part of this fits together. NOTE: if you want to run the final loop on your own machine, it will take several hours to complete.
```{r, eval=F}
df_empty <- data.frame()
for (i in seq_along(pamlinks_all)) {
  url <- pamlinks_all[i]
  cat("Collecting url number ", i, ": ", url, "\n")
  # read the page inside tryCatch so a failed request doesn't stop the loop
  error <- tryCatch(html <- read_html(url),
                    error = function(e) e)
  if (inherits(error, 'error')) {
    # record an empty row for this document and move on to the next url
    df <- data.frame(title = NA,
                     date = NA,
                     text = NA,
                     imageurl = NA,
                     tags = NA)
    df_empty <- rbind(df_empty, df)
    next
  }
  df <- data.frame(matrix(ncol = 0, nrow = 1))
  #get title
  titles <- html_elements(html, ".title") %>%
    html_text(trim = TRUE)
  title <- titles[1]
  df$title <- title
  #get date
  date <- html_elements(html, ".calendar") %>%
    html_text(trim = TRUE)
  df$date <- date
  #get text
  textsep <- html_elements(html, "p") %>%
    html_text(trim = TRUE)
  text <- paste(textsep, collapse = ",")
  df$text <- text
  #get tags
  pamtags <- html_elements(html, ".category") %>%
    html_text(trim = TRUE)
  pamtags <- paste(pamtags, collapse = "; ") # collapse in case a document lists more than one tag
  df$tags <- pamtags
  #get link to original pamphlet image
  elements_other <- html_elements(html, "a") %>%
    html_children()
  url_element <- as.character(elements_other[2])
  imgurl <- str_extract(url_element, "src=\\S+")
  imgurl <- substr(imgurl, 6, (nchar(imgurl) - 1))
  df$imageurl <- imgurl
  df_empty <- rbind(df_empty, df)
}
```
And now... we're pretty much there...back where we started!
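If you do run the full loop yourself, it is worth saving the result so you never have to repeat it; the file names below are placeholders rather than files shipped with this repository:
```{r, eval=F}
saveRDS(df_empty, "data/pamphlets_scraped.RDS")   # R-native copy
write_csv(df_empty, "data/pamphlets_scraped.csv") # csv copy, readable elsewhere
```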
## Exercises
- Go to <http://books.toscrape.com/> and write a script that captures information on one of the genres on the left-hand side. Which information you capture is up to you (it could be, e.g., links, titles, or prices); a starter sketch follows below.
- (Harder) Go to <https://www.theguardian.com/politics> and grab the title, author, and text of all of today's politics articles in The Guardian.
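A possible starting point for the first exercise; the CSS selector below is an assumption based on the site's structure, so check it with SelectorGadget before relying on it:
```{r, eval=F}
home <- read_html("http://books.toscrape.com/")
genre_links <- home %>%
  html_elements(".side_categories a") %>% # sidebar genre links (selector is an assumption)
  html_attr("href")
head(genre_links)
```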