Deprecated: Analysis job that generates aggregates for telemetry-dashboard

# Telemetry Dashboard

Generate static files for a telemetry dashboard.

# How to Run

You'll need to have `mango` set up in your `~/.ssh/config` to connect you to the Hadoop node where you'll run jydoop from.

1. Run `script/bootstrap`
2. Serve the `html/` dir

## Histogram View

There are a number of fields to narrow a query by.

Have a category table that stores the category tree. Each node has a unique id:

* Level 1, Product: Firefox | Fennec | Thunderbird
* Level 2, Platform: Windows | Linux
* Level 3, etc.
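The tree-with-unique-ids idea above can be sketched as follows; this is a minimal illustration, not code from this repo, and the node layout (`id`/`name`/`children`) is an assumption:

```python
# Hypothetical sketch of the category tree: each node gets a unique
# integer id; levels follow product -> platform -> etc.
import itertools

_next_id = itertools.count(1)  # global id sequence (assumption)

def make_node(name):
    """Create a tree node with a unique id and a dict of children."""
    return {"id": next(_next_id), "name": name, "children": {}}

def insert_path(root, path):
    """Walk or extend the tree along path; return the leaf node's id."""
    node = root
    for name in path:
        node = node["children"].setdefault(name, make_node(name))
    return node["id"]

root = make_node("root")
firefox_windows = insert_path(root, ["Firefox", "Windows"])
firefox_linux = insert_path(root, ["Firefox", "Linux"])
```

Re-inserting an existing path returns the same id, so keeping the tree in memory makes both lookups and inserts cheap.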

The size of this table can be kept in check by reducing common video cards to a family name, etc. We can also customize what shows up under different levels. For example, we could restrict Thunderbird to have fewer child nodes.

Store the tree in a table, but keep it loaded in memory for queries and for inserting new records.

Then have a histogram table with columns `histogram_id | category_id | value`, where `histogram_id` is an id like SHUTDOWN_OK, `category_id` is a key from the category table, and `value` is the sum of histograms in that category (it can be represented with some binary value).
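A minimal in-memory sketch of that table, keyed by `(histogram_id, category_id)` with summed bucket counts as the value (the dict-of-buckets representation is an assumption, standing in for whatever binary encoding is used):

```python
# Hypothetical histogram table: (histogram_id, category_id) -> bucket -> count.
from collections import defaultdict

table = defaultdict(lambda: defaultdict(int))

def add_histogram(histogram_id, category_id, buckets):
    """Sum one submission's histogram buckets into the aggregate row."""
    row = table[(histogram_id, category_id)]
    for bucket, count in buckets.items():
        row[bucket] += count

add_histogram("SHUTDOWN_OK", 42, {0: 1, 1: 5})
add_histogram("SHUTDOWN_OK", 42, {1: 2, 2: 3})
```

After the two calls, the row for `("SHUTDOWN_OK", 42)` holds the bucket-wise sum of both submissions.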

## Misc

Evolution can be implemented by adding a `build_date` field to the histogram table.

TODO: How big would the category tree table be? Surely there is a finite bound on its size.

The histogram table would have |category table| × |number of histograms| rows, which is pretty compact.

## Map + Reduce

The mapper should turn each submission into a record of the form

`buildid/channel/reason/appName/appVersion/OS/osVersion/arch` → `{histograms: {A11Y_CONSUMERS: {histogram_data}, ...}, simpleMeasures: {firstPaint: [100, 101, 1000, ...]}}`

where the key identifies where in the filter tree the data should live. Note that a single packet could produce more than one such entry if we want to get into detailed breakdowns of, say, gfx card vs. some Firefox UI animation histogram.
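A sketch of such a mapper; the `info` field names and the `context.write` interface are assumptions modeled loosely on Telemetry pings and jydoop, not the repo's actual code:

```python
# Hypothetical mapper: emit one (key, value) pair locating the
# submission in the filter tree. Field names are assumptions.
def map_submission(submission, context):
    info = submission["info"]
    key = "/".join(str(info.get(field, "?")) for field in (
        "appBuildID", "appUpdateChannel", "reason", "appName",
        "appVersion", "OS", "version", "arch"))
    value = {
        "histograms": submission.get("histograms", {}),
        # Wrap each scalar simple measure in a list so the reducer
        # can concatenate samples across submissions.
        "simpleMeasures": {name: [v] for name, v in
                           submission.get("simpleMeasurements", {}).items()},
    }
    context.write(key, value)

class ListContext:
    """Minimal stand-in for a jydoop-style output context (assumption)."""
    def __init__(self):
        self.output = []
    def write(self, key, value):
        self.output.append((key, value))
```

To emit several entries per packet (e.g. a gfx-card breakdown), the mapper would simply call `context.write` once per key.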

The reducer would then take the above data, sum up the histograms, and append to the simple-measure lists, based on the key.
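The reduce step can be sketched like this, assuming the mapper's value shape from above (summing buckets, concatenating simple-measure samples); returning the merged record rather than writing through a context keeps the sketch minimal:

```python
# Hypothetical reducer: merge all mapped values for one key.
def reduce_values(values):
    histograms = {}
    simple = {}
    for value in values:
        # Sum histogram buckets name-by-name, bucket-by-bucket.
        for name, buckets in value["histograms"].items():
            row = histograms.setdefault(name, {})
            for bucket, count in buckets.items():
                row[bucket] = row.get(bucket, 0) + count
        # Concatenate simple-measure sample lists.
        for name, samples in value["simpleMeasures"].items():
            simple.setdefault(name, []).extend(samples)
    return {"histograms": histograms, "simpleMeasures": simple}
```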

This should produce a fairly small file per day per channel (~200 records), which will then be quick to pull out and merge into the per-build-per-histogram JSON that can be rsynced to some webserver. This is basically a final iterative reduce on top of map-reduce for new data. Hadoop does not feel like the right option for that, but I could be wrong.
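That final iterative reduce could look roughly like this: fold one day's records into an accumulated JSON file. The file layout (key → bucket counts, string bucket keys so they round-trip through JSON) is an assumption for illustration:

```python
# Hypothetical iterative reduce: merge a day's records into the
# accumulated JSON that gets rsynced to the webserver.
import json
import os

def merge_day(accum_path, day_records):
    """Add day_records (key -> {bucket: count}) into the file at accum_path."""
    accum = {}
    if os.path.exists(accum_path):
        with open(accum_path) as f:
            accum = json.load(f)
    for key, buckets in day_records.items():
        row = accum.setdefault(key, {})
        for bucket, count in buckets.items():
            row[bucket] = row.get(bucket, 0) + count
    with open(accum_path, "w") as f:
        json.dump(accum, f)
```

Running this once per new daily file is cheap, which is why a full Hadoop pass over historical data seems unnecessary for this step.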

### TODO

* oneline local testing using Jython's FileDriver.py