
Commit 2c462d7

Nutch-compatible implementation of FastURLFilter + use it in PreFilterBolt, fix #59. Storm 1.2.4

Signed-off-by: Julien Nioche <[email protected]>
jnioche committed Nov 14, 2023
1 parent b0474ac commit 2c462d7
Showing 15 changed files with 769 additions and 119 deletions.
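
The new PreFilterBolt is wired into conf/crawler.flux below and reads its URL filter definitions from `pre-urlfilters.json`. The Nutch FastURLFilter rule format typically groups deny rules under a Host or Domain entry; as a rough, hypothetical sketch (not taken from this commit, the exact syntax is defined by the filter implementation), the rules consumed by such a filter might look like this:

```
# hypothetical sketch of Nutch-style FastURLFilter rules —
# check the FastURLFilter implementation for the exact syntax it accepts
Host www.example.org
DenyPath /private/.*
DenyPathQuery /search\?.*

Domain example.com
DenyPath .*/calendar/.*
```
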
4 changes: 2 additions & 2 deletions Dockerfile
@@ -49,7 +49,7 @@ RUN sed -Ei 's@^path\.data: .*@path.data: /data/elasticsearch@' /etc/elasticsear
#
# Apache Storm
#
ENV STORM_VERSION=1.2.3
ENV STORM_VERSION=1.2.4
COPY downloads/apache-storm-$STORM_VERSION.tar.gz /tmp/apache-storm-$STORM_VERSION.tar.gz
RUN tar -xzf /tmp/apache-storm-$STORM_VERSION.tar.gz -C /opt
RUN rm /tmp/apache-storm-$STORM_VERSION.tar.gz
@@ -70,7 +70,7 @@ RUN chmod -R 644 /etc/supervisor/conf.d/*.conf
#
# Storm crawler / news crawler
#
ENV CRAWLER_VERSION=1.18
ENV CRAWLER_VERSION=1.18.1
RUN groupadd ubuntu && \
useradd --gid ubuntu --home-dir /home/ubuntu \
--create-home --shell /bin/bash ubuntu && \
32 changes: 19 additions & 13 deletions README.md
@@ -8,7 +8,7 @@ Prerequisites

* Java 8
* Install Elasticsearch 7.5.0 (optionally also Kibana)
* Install Apache Storm 1.2.3
* Install Apache Storm 1.2.4
* Start Elasticsearch and Storm
* Build ES indices by running `bin/ES_IndexInit.sh`

@@ -34,14 +34,14 @@ mvn clean package

And run ...
``` sh
storm jar target/crawler-1.18.jar org.commoncrawl.stormcrawler.news.CrawlTopology -conf $PWD/conf/es-conf.yaml -conf $PWD/conf/crawler-conf.yaml $PWD/seeds/ feeds.txt
storm jar target/crawler-1.18.1.jar org.commoncrawl.stormcrawler.news.CrawlTopology -conf $PWD/conf/es-conf.yaml -conf $PWD/conf/crawler-conf.yaml $PWD/seeds/ feeds.txt
```

This will launch the crawl topology. It will also "inject" all URLs found in the file `./seeds/feeds.txt` in the status index. The URLs point to news feeds and sitemaps from which links to news articles are extracted and fetched. The topology will create WARC files in the directory specified in the configuration under the key `warc.dir`. This directory must be created beforehand.
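
The relevant keys live in [conf/crawler-conf.yaml](./conf/crawler-conf.yaml); an illustrative excerpt (the path is only an example, adapt it to your setup):

```
warc.dir: "/data/warc"
# rotate WARC files regularly, but latest after one day
warc.rotation.policy.max-minutes: 1440
```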

Of course, it's also possible to add (or remove) the seeds (feeds and sitemaps) using the Elasticsearch API. In this case, the topology can be run without the last two arguments.
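
For instance, a single seed could be added with a plain Elasticsearch request along these lines — an illustrative sketch only, assuming the standard status mapping created by `bin/ES_IndexInit.sh` (using `bin/es_status` is the more convenient route):

``` sh
# illustrative: index one seed document into the "status" index
curl -s -XPOST 'localhost:9200/status/_doc?pretty' \
  -H 'Content-Type: application/json' -d '{
    "url": "https://www.example.org/sitemaps/news.xml",
    "status": "DISCOVERED",
    "nextFetchDate": "2023-11-14T00:00:00.000Z",
    "metadata": { "isSitemapNews": "true" }
  }'
```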

Alternatively, the topology can be run from the [crawler.flux](./conf/crawler.flux), please see the [Storm Flux documentation](https://storm.apache.org/releases/1.2.3/flux.html). Make sure to adapt the Flux definition to your needs!
Alternatively, the topology can be run from the [crawler.flux](./conf/crawler.flux), please see the [Storm Flux documentation](https://storm.apache.org/releases/1.2.4/flux.html). Make sure to adapt the Flux definition to your needs!


Monitor the crawl
@@ -50,6 +50,7 @@ Monitor the crawl
When the topology is running you can check that URLs have been injected and news articles are being fetched at <http://localhost:9200/status/_search?pretty>. Or use StormCrawler's Kibana dashboards to monitor the crawling process. Please follow the instructions to install the templates for Kibana provided as part of [StormCrawler's Elasticsearch module documentation](//github.com/DigitalPebble/storm-crawler/tree/master/external/elasticsearch).
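
The per-status counts can also be queried directly with a terms aggregation on the `status` field (illustrative, assuming the standard status mapping):

``` sh
# illustrative: count status index documents per status value
curl -s 'localhost:9200/status/_search?size=0&pretty' \
  -H 'Content-Type: application/json' \
  -d '{"aggs": {"by_status": {"terms": {"field": "status"}}}}'
```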

There is also a shell script [bin/es_status](./bin/es_status) to get aggregated counts from the status index, and to add, delete or force a re-fetch of URLs. E.g.,

```
$> bin/es_status aggregate_status
3824 DISCOVERED
@@ -61,38 +62,43 @@ $> bin/es_status aggregate_status
Run Crawl from Docker Container
-------------------------------

First, download Apache Storm 1.2.3. from the [download page](https://storm.apache.org/downloads.html) and place it in the directory `downloads`:
First, download Apache Storm 1.2.4 from the [download page](https://storm.apache.org/downloads.html) and place it in the directory `downloads`:

```
STORM_VERSION=1.2.3
STORM_VERSION=1.2.4
mkdir downloads
wget -q -P downloads --timestamping http://www-us.apache.org/dist/storm/apache-storm-$STORM_VERSION/apache-storm-$STORM_VERSION.tar.gz
wget -q -P downloads --timestamping https://downloads.apache.org/storm/apache-storm-$STORM_VERSION/apache-storm-$STORM_VERSION.tar.gz
```

Do not forget to create the uberjar (see above) which is included in the Docker image. Simply run:

```
mvn clean package
```

Then build the Docker image from the [Dockerfile](./Dockerfile):

Note: the uberjar is included in the Docker image and needs to be built first (see above).

```
docker build -t newscrawler:1.18 .
docker build -t newscrawler:1.18.1 .
```

To launch an interactive container:

```
docker run --net=host \
-p 127.0.0.1:9200:9200 \
-p 5601:5601 -p 8080:8080 \
-v .../newscrawl/elasticsearch:/data/elasticsearch \
-v .../newscrawl/warc:/data/warc \
--rm -i -t newscrawler:1.18 /bin/bash
-v $PWD/data/elasticsearch:/data/elasticsearch \
-v $PWD/data/warc:/data/warc \
--rm --name newscrawler -i -t newscrawler:1.18.1 /bin/bash
```

NOTE: don't forget to adapt the paths to mounted volumes used to persist data on the host.
NOTE: don't forget to adapt the paths to mounted volumes used to persist data on the host. Make sure to add the user agent configuration in conf/crawler-conf.yaml.
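
The agent settings follow the usual StormCrawler conventions — a minimal sketch with placeholder values:

```
http.agent.name: "mycrawler"
http.agent.version: "1.0"
http.agent.description: "news crawler based on StormCrawler"
http.agent.url: "https://www.example.org/crawler.html"
http.agent.email: "crawler@example.org"
```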

CAVEAT: Make sure that the Elasticsearch port 9200 is not already in use or mapped by a running ES instance. Otherwise Elasticsearch commands may affect the running instance!

The crawler is launched in the running container by the script

```
/home/ubuntu/news-crawler/bin/run-crawler.sh
```
4 changes: 2 additions & 2 deletions aws/packer/bootstrap.sh
@@ -79,8 +79,8 @@ ZOOKEEPER_VERSION=3.4.14
wget -q -O - http://www-us.apache.org/dist/zookeeper/zookeeper-$ZOOKEEPER_VERSION/zookeeper-$ZOOKEEPER_VERSION.tar.gz \
| sudo tar -xzf - -C /opt
ZOOKEEPER_HOME=/opt/zookeeper-$ZOOKEEPER_VERSION
STORM_VERSION=1.2.3
wget -q -O - http://www-us.apache.org/dist/storm/apache-storm-$STORM_VERSION/apache-storm-$STORM_VERSION.tar.gz \
STORM_VERSION=1.2.4
wget -q -O - https://downloads.apache.org/storm/apache-storm-$STORM_VERSION/apache-storm-$STORM_VERSION.tar.gz \
| sudo tar -xzf - -C /opt
STORM_HOME=/opt/apache-storm-$STORM_VERSION
sudo groupadd storm
4 changes: 2 additions & 2 deletions aws/packer/newscrawl-ami.json
@@ -62,8 +62,8 @@
},
{
"type": "file",
"source": "target/crawler-1.18.jar",
"destination": "/tmp/install/news-crawler/lib/crawler-1.18.jar"
"source": "target/crawler-1.18.1.jar",
"destination": "/tmp/install/news-crawler/lib/crawler-1.18.1.jar"
},
{
"type": "file",
1 change: 0 additions & 1 deletion conf/crawler-conf.yaml
@@ -246,4 +246,3 @@ config:
# but latest after one day
warc.rotation.policy.max-minutes: 1440


18 changes: 17 additions & 1 deletion conf/crawler.flux
@@ -45,7 +45,7 @@ components:
- name: "put"
args:
- "software"
- "StormCrawler 1.18 https://stormcrawler.net/"
- "StormCrawler 1.18.1 https://stormcrawler.net/"
- name: "put"
args:
- "description"
@@ -91,6 +91,11 @@ bolts:
- id: "filter"
className: "com.digitalpebble.stormcrawler.bolt.URLFilterBolt"
parallelism: 1
- id: "prefilter"
className: "org.commoncrawl.stormcrawler.news.PreFilterBolt"
parallelism: 1
constructorArgs:
- "pre-urlfilters.json"
- id: "partitioner"
className: "com.digitalpebble.stormcrawler.bolt.URLPartitionerBolt"
parallelism: 1
@@ -129,6 +134,11 @@ bolts:

streams:
- from: "spout"
to: "prefilter"
grouping:
type: SHUFFLE

- from: "prefilter"
to: "partitioner"
grouping:
type: SHUFFLE
@@ -158,6 +168,12 @@ streams:
to: "ssbolt"
grouping:
type: LOCAL_OR_SHUFFLE

- from: "prefilter"
to: "status"
grouping:
type: LOCAL_OR_SHUFFLE
streamId: "status"

- from: "fetcher"
to: "status"
11 changes: 9 additions & 2 deletions pom.xml
@@ -4,7 +4,7 @@
<modelVersion>4.0.0</modelVersion>
<groupId>org.commoncrawl.stormcrawler.news</groupId>
<artifactId>crawler</artifactId>
<version>1.18</version>
<version>1.18.1</version>
<packaging>jar</packaging>
<licenses>
<license>
@@ -19,6 +19,7 @@
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<storm-crawler.version>1.18</storm-crawler.version>
<storm-core.version>1.2.3</storm-core.version>
<aws.version>1.12.467</aws.version>
<jackson-databind.version>2.11.1</jackson-databind.version>
<crawler-commons.version>1.1</crawler-commons.version>
<mockito-all.version>1.10.19</mockito-all.version>
@@ -174,11 +175,17 @@
<version>${crawler-commons.version}</version>
</dependency>

<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-s3</artifactId>
<version>${aws.version}</version>
</dependency>

<!-- test dependencies -->
<dependency>
<groupId>com.digitalpebble.stormcrawler</groupId>
<artifactId>storm-crawler-core</artifactId>
<version>${project.version}</version>
<version>${storm-crawler.version}</version>
<type>test-jar</type>
<scope>test</scope>
</dependency>
2 changes: 0 additions & 2 deletions seeds/feeds.txt
@@ -25,6 +25,4 @@ https://www.lemonde.fr/livres/rss_full.xml isFeed=true
https://www.lemonde.fr/afrique/rss_full.xml isFeed=true
https://www.lemonde.fr/ameriques/rss_full.xml isFeed=true
https://www.cnn.com/sitemaps/cnn/news.xml isSitemapNews=true
https://www.ft.com/sitemaps/news.xml isSitemapNews=true
https://www.ft.com/?format=rss isFeed=true
https://www.bbc.com/sitemaps/https-index-com-news.xml isSitemapNews=true