Telemetry/Custom analysis with spark

This page is currently in progress.

== Introduction ==
Spark is a data processing engine designed to be fast and easy to use. We have set up Jupyter notebooks that use Spark to analyze our Telemetry data. Jupyter notebooks can be easily shared and updated among colleagues, enabling richer analysis than SQL alone.

Spark clusters can be spun up on analysis.telemetry.mozilla.org, abbreviated as ATMO. The Spark Python API is called pyspark.
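PySpark expresses analyses as chains of transformations such as <code>map</code>, <code>filter</code>, and <code>reduceByKey</code> over distributed datasets (RDDs). As a rough illustration of that style, here is a plain-Python analogue that runs without a Spark cluster; the records and field names are invented for the example:

```python
from collections import defaultdict

# Hypothetical per-session telemetry records (invented for illustration).
records = [
    {"channel": "release", "crashes": 1},
    {"channel": "nightly", "crashes": 3},
    {"channel": "release", "crashes": 0},
    {"channel": "nightly", "crashes": 2},
]

# In a PySpark notebook this would be roughly:
#   sc.parallelize(records) \
#     .map(lambda r: (r["channel"], r["crashes"])) \
#     .reduceByKey(lambda a, b: a + b) \
#     .collect()
# Here we perform the same map + reduce-by-key sequentially.
pairs = [(r["channel"], r["crashes"]) for r in records]

totals = defaultdict(int)
for channel, crashes in pairs:
    totals[channel] += crashes

print(dict(totals))  # {'release': 1, 'nightly': 5}
```

On a real cluster, Spark would partition <code>records</code> across workers and run the map and reduce steps in parallel; the notebook exposes a ready-made SparkContext (conventionally <code>sc</code>) for this.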

== Setting Up a Spark Cluster On ATMO ==

# Go to analysis.telemetry.mozilla.org
# Click “Launch an ad-hoc Spark cluster”.
# Enter some details:
## The “Cluster Name” field should be a short descriptive name, like “chromehangs analysis”.
## Set the number of workers for the cluster. Please use resources sparingly; a single worker is enough to write and debug your job.
## Upload your SSH public key.
# Click “Submit”.
# A cluster will be launched on AWS preconfigured with Spark, IPython and some handy data analysis libraries like pandas and matplotlib.

Once the cluster is ready, you can tunnel IPython through SSH by following the instructions on the dashboard, e.g.:

 ssh -i my-private-key -L 8888:localhost:8888 hadoop@ec2-54-70-129-221.us-west-2.compute.amazonaws.com

Finally, you can launch IPython in Firefox by visiting http://localhost:8888.