telemetry-batch-view

This is a Scala application to build derived datasets, also known as batch views, of Telemetry data.


Raw JSON pings are stored on S3 within files containing framed Heka records. Reading the raw data through e.g. Spark can be slow: for a given analysis only a few fields are typically used, and parsing the JSON blobs adds further cost. Furthermore, Heka files might contain only a handful of records under certain circumstances.

Defining a derived Parquet dataset, which uses a columnar layout optimized for analytics workloads, can drastically improve the performance of analysis jobs while reducing storage requirements. A derived dataset might, and should, also perform heavy-duty operations common to all analyses that will read from it (e.g., parsing dates into normalized timestamps).
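
As an illustration only (the bucket paths, column names, and date format below are assumptions, not taken from an actual view), such a job might normalize the ping creation date once and persist the result as Parquet:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.to_timestamp

val spark = SparkSession.builder.appName("example-derived-view").getOrCreate()

// Reading the raw JSON pings is the expensive part: every field is parsed.
val rawPings = spark.read.json("s3://example-bucket/raw-pings/")

// Do the shared heavy-duty work once, e.g. normalize the creation date.
val derived = rawPings.withColumn(
  "submission_timestamp",
  to_timestamp(rawPings("creationDate"), "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"))

// A columnar Parquet layout lets analyses read only the columns they need.
derived.write.mode("overwrite").parquet("s3://example-bucket/example_view/v1/")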

Adding a new derived dataset

See the views folder for examples of jobs that create derived datasets.

See the Firefox Data Documentation for more information about the individual derived datasets. For help finding the right dataset for your analysis, see Choosing a Dataset.
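
For orientation, the jobs in the views folder are objects with a main entry point that accepts options such as --from, --to, and --bucket (see the run examples below). A minimal skeleton, assuming the scallop option parser and with a purely illustrative class name, might look like this:

import org.apache.spark.sql.SparkSession
import org.rogach.scallop.ScallopConf

object ExampleView {
  // Hypothetical option parsing; the real views define their own options.
  private class Conf(args: Array[String]) extends ScallopConf(args) {
    val from = opt[String]("from", descr = "Start submission date (yyyyMMdd)", required = true)
    val to = opt[String]("to", descr = "End submission date (yyyyMMdd)", required = true)
    val bucket = opt[String]("bucket", descr = "Destination S3 bucket", required = true)
    verify()
  }

  def main(args: Array[String]): Unit = {
    val conf = new Conf(args)
    val spark = SparkSession.builder.appName("ExampleView").getOrCreate()
    // Read the raw data for [conf.from(), conf.to()], transform it, and
    // write the derived dataset as Parquet under conf.bucket().
    spark.stop()
  }
}

A job structured this way can then be run locally or submitted to a cluster exactly as shown in the Generating Datasets section below.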

Development

There are two possible workflows for hacking on telemetry-batch-view: you can either create a Docker container for building the package and running tests, or import the project into IntelliJ IDEA.

To run the Docker tests, use the provided Dockerfile to build a container, then use the runtests.sh script to run the tests inside it:

docker build -t telemetry-batch-view .
./runtests.sh

You may need to increase the amount of memory allocated to Docker for this to work, as some of the tests are very memory-hungry at present; at least 4 gigabytes is recommended.

You can also pass arguments to sbt (the Scala build tool we use for running the tests) through runtests.sh. For example, to run only the addon tests, try:

./runtests.sh "test-only com.mozilla.telemetry.AddonsViewTest"

If you wish to import the project into IntelliJ IDEA, apply the following changes to Preferences -> Languages & Frameworks -> Scala Compile Server:

  • JVM maximum heap size, MB: 2048
  • JVM parameters: -server -Xmx2G -Xss4M

Note that the first time the project is opened, it takes some time to download all the dependencies.

Generating Datasets

See the documentation for specific views for details about running/generating them.

For example, to create a longitudinal view locally:

sbt "run-main com.mozilla.telemetry.views.LongitudinalView --from 20160101 --to 20160701 --bucket telemetry-test-bucket"

For distributed execution we pack all of the classes together into a single JAR and submit it to the cluster:

sbt assembly
spark-submit --master yarn --deploy-mode client --class com.mozilla.telemetry.views.LongitudinalView target/scala-2.11/telemetry-batch-view-*.jar --from 20160101 --to 20160701 --bucket telemetry-test-bucket

Caveats

If you run into memory issues during compilation or while running the test suite, issue the following command before running sbt:

export _JAVA_OPTIONS="-Xms4G -Xmx4G -Xss4M -XX:MaxMetaspaceSize=256M"

Slow tests: by default, slow tests are not run when using sbt test. To run the slow tests, use ./runtests.sh slow:test (or just sbt slow:test outside of the Docker environment).

Running on Windows

Executing Scala/Spark jobs can be particularly problematic on this platform. Here is a list of common issues and their solutions:

Issue: I see a weird reflection error or an odd exception when trying to run my code.

This is probably due to winutils being missing or not found. winutils is needed by Hadoop and can be downloaded from here.

Issue: java.net.URISyntaxException: Relative path in absolute URI: ...

This means that winutils cannot be found or that Spark cannot find a valid warehouse directory. Add the following lines at the beginning of your entry function to make it work:

System.setProperty("hadoop.home.dir", "C:\\path\\to\\winutils")
System.setProperty("spark.sql.warehouse.dir", "file:///C:/somereal-dir/spark-warehouse")

Issue: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: ---------

See SPARK-10528. Run "winutils chmod 777 /tmp/hive" from a privileged prompt to make it work.
