arewefastyet/README


The latest AWFY consists of five components:
(1) A MySQL database that stores statistics.
(2) A hidden PHP script on the webserver, where stats get submitted.
(3) A Python driver that runs on each benchmark computer and submits stats.
(4) A Python aggregator that builds JSON data from the database.
(5) A static HTML frontpage that queries the JSON via XHR.
NOTE: (2), (4), and (5) must be on the same webserver, otherwise timestamps
might not be computed correctly. ((3) runs on the benchmark machines, so it
cannot be.)
########## SET UP ##########
Setting up (1):
Run database/schema.sql
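For example, assuming the database and the MySQL user are both named "awfy"
(those names are made up; use whatever your installation defines):
mysql -u awfy -p awfy < database/schema.sql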
Setting up (2):
Drop website/UPDATE.php and website/internals.php somewhere on your
webserver, and rename UPDATE.php to something secret.
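For example (the webserver path and the secret name below are made up):
cp website/UPDATE.php website/internals.php /var/www/awfy/
cd /var/www/awfy/
mv UPDATE.php submit-8f2a.php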
Setting up (3):
I set it up like this:
mkdir awfy
cd awfy
svn checkout http://v8.googlecode.com/svn/branches/bleeding_edge/ v8
svn checkout https://svn.webkit.org/repository/webkit/trunk WebKit
hg clone http://hg.mozilla.org/integration/mozilla-inbound
mkdir driver
cp <awfy-src>/driver/* driver/
cd driver
hg clone http://hg.mozilla.org/projects/kraken
svn checkout http://v8.googlecode.com/svn/trunk/benchmarks v8-v7
svn checkout https://svn.webkit.org/repository/webkit/trunk/PerformanceTests/SunSpider
svn checkout http://octane-benchmark.googlecode.com/svn/trunk/ octane
(a) Add a database entry for the machine configuration.
(b) Edit driver/awfy.config to match the build architecture you want and the
machine's database entry.
(c) Add a cronjob to run dostuff.py periodically; AWFY.com runs it every
30 minutes (see the sample crontab entry below).
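A minimal crontab entry might look like this (the checkout path is made up;
point it at your own driver directory):
*/30 * * * * cd /home/awfy/awfy/driver && python dostuff.py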
Setting up (4):
Put awfy-server.config in /etc/, and edit it to point at your database and
website/data folder. Then put update.py in a cronjob; it will dump files
where appropriate. AWFY.com does this every 15 minutes.
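A matching crontab entry might look like this (again, the path to update.py
is made up; use wherever you put it):
*/15 * * * * python /home/awfy/awfy/update.py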
update.py generates various JSON files:
(a) "raw" and "metadata" files cache database queries from run to run,
so we don't have to make expensive database queries.
(b) "aggregate" files are used for the front page.
(c) "condensed" files are used for one level of zooming, so users don't
have to download the raw data set right away.
The metadata and raw JSON files are updated as needed. The aggregate and
condensed files are always re-generated from the raw data.
Setting up (5):
Nothing special is needed. Note that our copy of flot is slightly modified,
so replacing it with a stock copy might not work.