Deprecated: Analysis job that generates aggregates for telemetry-dashboard

# Telemetry Dashboard

Telemetry dashboard is an analysis job that aggregates telemetry histograms and simple measures, and offers a decent presentation. The default dashboard developed in this repository is hosted at [telemetry.mozilla.com](http://telemetry.mozilla.com), but the aggregated data is also available for consumption by third-party applications, so you don't need to do the aggregation on your own.

## Consuming Telemetry Aggregations

Include http://telemetry.mozilla.com/js/telemetry.js in your code, and feel free to use the other modules too. Don't go reading the raw JSON files; they are not designed for human consumption!
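For example, a page might pull in the library and query it like this. This is only a hypothetical sketch: the calls `Telemetry.init` and `Telemetry.versions` are assumptions about the module's usage, not documented API — read telemetry.js itself for the real interface.

```html
<script src="http://telemetry.mozilla.com/js/telemetry.js"></script>
<script>
  // Hypothetical usage sketch: initialize the library, then log the
  // available channel/version strings. Verify against telemetry.js.
  Telemetry.init(function() {
    console.log(Telemetry.versions());
  });
</script>
```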

## Hacking Telemetry Dashboard

If you want to improve the user interface for the telemetry dashboard, clone this repository, set up a static server that hosts the html/ folder on your localhost, and start hacking. This is easy!
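Any static file server will do; for example, Python's built-in one:

```shell
# From the repository root, serve the html/ folder at http://localhost:8000/
cd html
python -m SimpleHTTPServer 8000   # Python 2; on Python 3 use: python3 -m http.server 8000
```

Then open http://localhost:8000/ in a browser and edit the files in html/ directly.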

If you want to add new aggregations, improve existing aggregations, or change the storage format, take a look at Formats.mkd and talk to the maintainer of telemetry dashboard.

The basic flow is as follows:

  1. An `.egg` file is generated with `make egg`.
  2. Analysis tasks are created with telemetry-server.
  3. `DashboardProcessor` from `analysis.py` aggregates telemetry submissions; this process is driven by telemetry-server.
  4. `Aggregator` from `aggregator.py` collects results from the analysis tasks by:
     1. Downloading existing data from S3
     2. Fetching task-finished messages from SQS
     3. Downloading `result.txt` files in parallel
     4. Updating results on disk
     5. Publishing updated results in a new subfolder of `current/` on S3, every once in a while
     6. Check-pointing all aggregated data to a subfolder of `check-points/` on S3, every once in a while
     7. Repeating
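The "update results on disk" step above can be sketched roughly as follows. This is a simplified illustration, not the real code: it assumes (hypothetically) that each line of a `result.txt` holds a key and a JSON array of bucket counts, whereas the actual on-disk format is defined in Formats.mkd and implemented in `aggregator.py`.

```python
import json

def merge_result_file(results, path):
    """Fold one downloaded result.txt into the in-memory aggregate.

    Hypothetical record format: "<key>\t<JSON list of bucket counts>" per line.
    Counts for a key already seen are summed element-wise; new keys are added.
    """
    with open(path) as f:
        for line in f:
            key, payload = line.rstrip("\n").split("\t", 1)
            counts = json.loads(payload)
            if key in results:
                # Element-wise sum of histogram bucket counts
                # (assumes both lists have the same length for a given key)
                results[key] = [a + b for a, b in zip(results[key], counts)]
            else:
                results[key] = counts
    return results
```

In the real job this merge runs repeatedly as task-finished messages arrive from SQS, with the accumulated `results` periodically published to `current/` and check-pointed to `check-points/` on S3.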