add evaluation scripts and introduction
Parent: 942429c689
Commit: 0f17e2b6b7
@ -0,0 +1,130 @@
# How to evaluate Forerunner

## 1. Preparation

Please download all of the following to your machine first.

* [Source code](https://github.com/microsoft/Forerunner)

* [Execution scripts](https://github.com/microsoft/Forerunner/tree/master/executionScripts)

* [Performance evaluation scripts](https://github.com/microsoft/Forerunner/tree/master/perfScripts)

* Workload data (03/12/2021 - 03/22/2021):
* https://forerunnerdata.blob.core.windows.net/workload/20210312.json?sp=r&st=2021-08-20T11:28:15Z&se=2022-08-20T19:28:15Z&spr=https&sv=2020-08-04&sr=b&sig=nFJx0FWVTnhF3NYJ147j5nR8jW3m5jpkAS7qr4sSdoM%3D
* https://forerunnerdata.blob.core.windows.net/workload/20210313.json?sp=r&st=2021-08-20T11:28:51Z&se=2022-08-20T19:28:51Z&spr=https&sv=2020-08-04&sr=b&sig=aUICDJtl0zO%2FHwG%2FA8I5VgzE%2BKx3gKqgzEbU3sazkYM%3D
* https://forerunnerdata.blob.core.windows.net/workload/20210314.json?sp=r&st=2021-08-20T11:29:07Z&se=2022-08-20T19:29:07Z&spr=https&sv=2020-08-04&sr=b&sig=vCSHW7LZxBPKhSXyDX8mmbwpTyYJfY4R%2BpE5%2BR5WaIQ%3D
* https://forerunnerdata.blob.core.windows.net/workload/20210315.json?sp=r&st=2021-08-20T11:29:20Z&se=2022-08-20T19:29:20Z&spr=https&sv=2020-08-04&sr=b&sig=L8vgJvjaRe9WcxqZsN94JnpxGIP8%2BwyqbYYRNTYeUFo%3D
* https://forerunnerdata.blob.core.windows.net/workload/20210316.json?sp=r&st=2021-08-20T11:29:34Z&se=2022-08-20T19:29:34Z&spr=https&sv=2020-08-04&sr=b&sig=FqQ5lfQ9qdOv2TwyXuCe0ZFT6eOg%2FRghPcRTyn%2FIoiI%3D
* https://forerunnerdata.blob.core.windows.net/workload/20210317.json?sp=r&st=2021-08-20T11:29:52Z&se=2022-08-20T19:29:52Z&spr=https&sv=2020-08-04&sr=b&sig=1rMjzCzE4baHDfcRU1x1QeECICln4448%2FUHXGN13jYk%3D
* https://forerunnerdata.blob.core.windows.net/workload/20210318.json?sp=r&st=2021-08-20T11:30:19Z&se=2022-08-20T19:30:19Z&spr=https&sv=2020-08-04&sr=b&sig=wbqV6jdq3ZL3kLeIzym%2FAa8nxcwFhmgooio42gSIAvQ%3D
* https://forerunnerdata.blob.core.windows.net/workload/20210319.json?sp=r&st=2021-08-20T11:30:33Z&se=2022-08-20T19:30:33Z&spr=https&sv=2020-08-04&sr=b&sig=cZVW6D2MNe1ztgf8%2BngRBgmHEC5eemDMEA%2BC%2BG2Le8o%3D
* https://forerunnerdata.blob.core.windows.net/workload/20210320.json?sp=r&st=2021-08-20T11:30:48Z&se=2022-08-20T19:30:48Z&spr=https&sv=2020-08-04&sr=b&sig=C4Z15vigV8FXO5%2FvTC6ZdQmJqW9%2Bifp4J6sW7UeXrS8%3D
* https://forerunnerdata.blob.core.windows.net/workload/20210321.json?sp=r&st=2021-08-20T11:31:00Z&se=2022-08-20T19:31:00Z&spr=https&sv=2020-08-04&sr=b&sig=aWAFhm3vf788jOhBvaVFtbmzaPE2yCCp44qH%2BLev0Yk%3D
* https://forerunnerdata.blob.core.windows.net/workload/20210322.json?sp=r&st=2021-08-20T11:31:21Z&se=2022-08-20T19:31:21Z&spr=https&sv=2020-08-04&sr=b&sig=sfITBLKbTKON4x0cgdKGIWm86Zh0vM4m7cK2hZAIFIg%3D
* Ethereum chain data and state data (for emulation):
* Ethereum chaindata directory (datadir) data, split into three parts due to the data size limit. Please concatenate the three parts into one file and extract it with `tar -zxf <file>` (see the example after this list):
* https://forerunnerdata.blob.core.windows.net/ethdata/geth.tar.part_00?sp=r&st=2021-09-22T08:47:50Z&se=2022-12-31T16:47:50Z&spr=https&sv=2020-08-04&sr=b&sig=1VkDX4xpJQdpf8bEdQbCYa%2FNT7diBcdCsQ8xAhGx8yY%3D
* https://forerunnerdata.blob.core.windows.net/ethdata/geth.tar.part_01?sp=r&st=2021-09-22T08:49:06Z&se=2022-12-31T16:49:06Z&spr=https&sv=2020-08-04&sr=b&sig=xhvMuZ0QWK7KKjVde6dSSS1X08Qvk3%2BTJTR0uY32JG4%3D
* https://forerunnerdata.blob.core.windows.net/ethdata/geth.tar.part_02?sp=r&st=2021-09-22T08:49:28Z&se=2022-12-31T16:49:28Z&spr=https&sv=2020-08-04&sr=b&sig=ZPwLdkbT63NyKj%2B8M%2F6eX0Hnqzt%2F%2BJIMqyHktDrWgKQ%3D
* Ethereum chaindata ancient data directory: [datadir.ancient](https://forerunnerdata.blob.core.windows.net/ethdata/ancient.tar?sp=r&st=2021-09-22T08:46:48Z&se=2022-12-31T16:46:48Z&spr=https&sv=2020-08-04&sr=b&sig=uAAPZ8VecklwRvXUoy3YSrxE7TXAl6XEHT6X1SRgOBk%3D) data (extract it with `tar -zxf <file>`).
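
For reference, here is a minimal sketch of assembling the chain data, assuming the three `geth.tar.part_*` files and `ancient.tar` (the names used in the links above) have been downloaded into the current directory:

```
# Reassemble the split chaindata archive and extract it.
cat geth.tar.part_00 geth.tar.part_01 geth.tar.part_02 > geth.tar
tar -zxf geth.tar

# Extract the ancient data archive.
tar -zxf ancient.tar
```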
## 2. How to run Forerunner for evaluation

First, follow the [README](https://github.com/microsoft/Forerunner/blob/main/README.md) to build the binary.

### 2.1 Execution command line for Baseline emulation

```
<path to the binary>/geth --datadir <path to ethereum chaindata directory> \
--datadir.ancient <path to ethereum ancient data directory> \
--nousb --txpool.accountslots 1024 --txpool.globalslots 8192 --txpool.accountqueue 1024 --txpool.globalqueue 4096 \
--cache <megabytes of memory allocated to blockchain data caching> \
--emulatordir <path to workload data directory> \
--emulatefile <workload file name> \
--emulatefrom <the start blocknumber of emulation> \
--perflog
```
The following configurations are inherited from the official go-ethereum:

* --datadir <path to ethereum chaindata directory> : the download links for this data are provided in Preparation.
* --datadir.ancient <path to ethereum ancient data directory> : the download links for this data are provided above.
* --nousb --txpool.accountslots 1024 --txpool.globalslots 8192 --txpool.accountqueue 1024 --txpool.globalqueue 4096 : these configurations can remain unchanged.
* --cache <megabytes of memory allocated to blockchain data caching> : the default value is 1024; we set it to 20480 in our evaluations.

The following configurations are designed for emulation:

* --emulatordir <path to workload data directory> : on the given VMs this path is '/datadrive/emulateLog/'; set it according to your local machine.
* --emulatefile <workload file name> : download links for the workload data are provided above. Each data file contains one day of workload, and the file name is the date on which the workload was recorded. To emulate longer durations, you can concatenate the data files of multiple consecutive days into a single file and pass that file as --emulatefile. In our evaluations, we concatenated the 10-day workload files into 20210312-22.json, located in '/datadrive/emulateLog/'.
* --emulatefrom <the start blocknumber of emulation> : this value corresponds to the chosen --emulatefile. You can obtain it by running the Python script "find_first_block.py" provided in the [Execution scripts](https://github.com/microsoft/Forerunner/tree/master/executionScripts) on the workload file. For example, if you set --emulatefile to 20210320.json to evaluate the workload data of 3/20/2021, run `python find_first_block.py 20210320.json` and set --emulatefrom to the printed value (see the sketch below).
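
As a concrete illustration, the sketch below concatenates a few daily workload files and derives the matching --emulatefrom value; the concatenated file name is only an example, and `find_first_block.py` is assumed to be in the current directory:

```
# Concatenate consecutive daily workload files into a single emulation input.
cat 20210312.json 20210313.json 20210314.json > 20210312-14.json

# find_first_block.py prints the block number to pass as --emulatefrom.
emulatefrom=$(python find_first_block.py 20210312-14.json)
echo "--emulatefile 20210312-14.json --emulatefrom $emulatefrom"
```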
### 2.2 Execution command line for Forerunner emulation

```
<path to the binary>/geth --datadir <path to ethereum chaindata directory> \
--datadir.ancient <path to ethereum ancient data directory> \
--nousb --txpool.accountslots 1024 --txpool.globalslots 8192 --txpool.accountqueue 1024 --txpool.globalqueue 4096 \
--cache <megabytes of memory allocated to blockchain data caching> \
--emulatordir <path to workload data directory> \
--emulatefile <workload file name> \
--emulatefrom <the start blocknumber of emulation> \
--perflog \
--preplay --cmpreuse --parallelhasher 16 --parallelbloom --no-overmatching --add-fastpath
```

Besides the configurations mentioned for Baseline emulation, Forerunner emulation needs several more flags: `--preplay --cmpreuse --parallelhasher 16 --parallelbloom --no-overmatching --add-fastpath`. Please keep them unchanged to enable the Forerunner features.
## 3. Evaluation step by step

1. Run the Baseline execution script, for example: https://github.com/microsoft/Forerunner/tree/master/executionScripts/runEmulateBaseline.sh

2. Stop Baseline with <ctrl + c> after a certain period of time (e.g., 3 hours).

3. Collect the performance log of Baseline: copy /tmp/PerfTxLog.baseline.txt to <output dir path>.

4. Run the Forerunner execution script, for example: https://github.com/microsoft/Forerunner/tree/master/executionScripts/runEmulateForerunner.sh

5. Stop Forerunner with <ctrl + c> after a certain period of time (e.g., 3 hours).

6. Collect the performance log of Forerunner: copy /tmp/PerfTxLog.reuse.txt to <output dir path>.

7. Run the script to compute speedups:

python [join_perf.py](https://github.com/microsoft/Forerunner/tree/master/perfScripts/join_perf.py) -b <path to PerfTxLog.baseline.txt> -f <path to PerfTxLog.reuse.txt> -o <output dir path>

Then, you can find the result in <output dir path>/TxSpeedupResult.txt.
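
For instance, steps 3, 6, and 7 might look like the following, assuming `<output dir path>` is `~/forerunner-results` and `join_perf.py` has been downloaded into the current directory:

```
mkdir -p ~/forerunner-results
cp /tmp/PerfTxLog.baseline.txt ~/forerunner-results/   # after the Baseline run
cp /tmp/PerfTxLog.reuse.txt ~/forerunner-results/      # after the Forerunner run
python join_perf.py -b ~/forerunner-results/PerfTxLog.baseline.txt \
    -f ~/forerunner-results/PerfTxLog.reuse.txt \
    -o ~/forerunner-results
cat ~/forerunner-results/TxSpeedupResult.txt
```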
Furthermore, we provide a one-click script that automatically executes the above steps:

**[oneclick.sh](https://github.com/microsoft/Forerunner/tree/master/executionScripts/oneclick.sh) <count of hours; 3 by default>**

It will execute each of Baseline and Forerunner for the configured number of hours and generate the final results into <output dir path>/perfScripts/out/TxSpeedupResult.txt.
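
For example, to run each of Baseline and Forerunner for 6 hours instead of the default 3:

```
bash oneclick.sh 6
```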
**Note**:

* Baseline and Forerunner should be executed serially, not concurrently.

* We recommend running both Baseline and Forerunner for at least 3 hours each to obtain reliable measurements. The main results of the paper were obtained by running Baseline and Forerunner for 10 days each.

* As mentioned in Section 5.6 of our paper, Forerunner consumes ~67 GB of memory on average, which is still unoptimized. We recommend a machine equipped with 128 GB of memory to avoid OOM during evaluation.
@ -0,0 +1 @@
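# Run the emulation on the concatenated 10-day workload (20210312-22.json) starting from block 12021000;
# any extra arguments (e.g. the Forerunner flags) are forwarded to emulate.sh.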
/home/ae/forerunner/executionScripts/emulate.sh 20210312-22.json 12021000 $*
@ -0,0 +1,8 @@
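# Start geth in emulation mode: $1 = workload file name, $2 = start block number (--emulatefrom);
# any remaining arguments are passed through to geth.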
/home/ae/forerunner/repo/Forerunner/build/bin/geth --datadir /mnt/ethereum --datadir.ancient /datadrive/ancient/ \
--nousb --cache=40960 \
--emulatordir=/datadrive/emulateLog \
--emulatefile=$1 \
--emulatefrom=$2 \
--txpool.accountslots 1024 --txpool.globalslots 8192 --txpool.accountqueue 1024 --txpool.globalqueue 4096 \
--perflog $3 $4 $5 $6 $7 $8 $9
@ -0,0 +1,7 @@
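# Kill the running geth process and wait until it has fully exited.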
killall geth

while pgrep geth > /dev/null; do
    echo Still running...
    sleep 1
done
@ -0,0 +1,28 @@
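# One-click evaluation: run Baseline and then Forerunner for <duration> hours each (default 3),
# collect the performance logs, and compute speedups with join_perf.py.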
duration=$1

if [ "$duration" = "" ];
then
    duration=3
fi
echo $duration "hours"
# Convert hours to seconds.
duration=$[duration*3600]

rm -rf /tmp/Perf*

# Schedule geth to be killed after $duration seconds, then run the Baseline emulation.
nohup sleep $duration && /home/ae/forerunner/executionScripts/killgeth.sh 2>&1 &

/home/ae/forerunner/executionScripts/runEmulateBaseline.sh

cp /tmp/PerfTxLog.baseline.txt /home/ae/forerunner/perfScripts/

sleep 5

# Schedule geth to be killed again, then run the Forerunner emulation.
nohup sleep $duration && /home/ae/forerunner/executionScripts/killgeth.sh 2>&1 &

/home/ae/forerunner/executionScripts/runEmulateForerunner.sh

cp /tmp/PerfTxLog.reuse.txt /home/ae/forerunner/perfScripts/

# Join the two performance logs and compute the speedup report.
python /home/ae/forerunner/perfScripts/join_perf.py -b /home/ae/forerunner/perfScripts/PerfTxLog.baseline.txt -f /home/ae/forerunner/perfScripts/PerfTxLog.reuse.txt -o /home/ae/forerunner/perfScripts/out
@ -0,0 +1,3 @@
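# Clear old Baseline performance logs and run the emulation without the Forerunner flags.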
rm /tmp/*.baseline.txt

/home/ae/forerunner/executionScripts/emulate-3-12.sh
@ -0,0 +1,3 @@
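# Clear old Forerunner performance logs and run the emulation with the Forerunner flags enabled.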
rm /tmp/*.reuse.txt

/home/ae/forerunner/executionScripts/emulate-3-12.sh --preplay --cmpreuse --parallelhasher 16 --parallelbloom --no-overmatching --add-fastpath
@ -0,0 +1,23 @@
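# Given a workload file, print the block number to use as --emulatefrom: one less than the number
# of the first block recorded after a transaction-pool record.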
import json
import sys
from collections import defaultdict

path = sys.argv[1]

Blocks = 17
Txs = 97
TxPool = 23

has_pool = False
block = 0
with open(path, 'r') as f:
    for line in f:
        j = json.loads(line)
        t = j['type']
        if t == TxPool: has_pool = True
        elif t == Blocks:
            if True and has_pool:
                block = j['blocks'][0]['header']['number']
                break

print(int(block, 16) - 1)
@ -0,0 +1,371 @@
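# Join the Baseline (PerfTxLog.baseline.txt) and Forerunner (PerfTxLog.reuse.txt) per-transaction logs
# by transaction id, compute per-transaction speedups and summary statistics, and write
# TxSpeedupResult.txt and AllPerfTxLog.joined.txt to the output directory.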
import os
from os import path
import argparse
from bisect import bisect_left

parser = argparse.ArgumentParser("")
parser.add_argument("-b", dest="base_filename")
parser.add_argument("-f", dest="reuse_filename")
parser.add_argument("-o", dest="output_path")

args = parser.parse_args()

base_filename = args.base_filename
reuse_filename = args.reuse_filename
outdir = args.output_path

result_filename = path.join(outdir, "TxSpeedupResult.txt")
output_filename = path.join(outdir, "AllPerfTxLog.joined.txt")

IGNORE_LINE_COUNT = 10000 # 100000 # 2000000
SIZE_LIMIT = 2000000 # 000000

baselines = open(base_filename).readlines()[IGNORE_LINE_COUNT:IGNORE_LINE_COUNT + SIZE_LIMIT]
reuselines = open(reuse_filename).readlines()[IGNORE_LINE_COUNT:IGNORE_LINE_COUNT + SIZE_LIMIT]

print "tx count of baseline:", len(baselines), "tx count of forerunner:", len(reuselines)


# print baselines[0]
# print reuselines[0]


def parseKVLine(line):
    parts = line.strip().split()
    assert len(parts) % 2 == 0
    kvs = dict([(k, eval(v)) for k, v in zip(parts[::2], parts[1::2])])
    return kvs


#
# print parseKVLine(baselines[0])
# print parseKVLine(reuselines[0])


class BaseRecord:
    def __init__(self, line):
        self.kvs = parseKVLine(line)
        self.line = line.strip()


class ReuseRecord:
    def __init__(self, line):
        kvs = self.kvs = parseKVLine(line)
        assert "delay" in self.kvs
        self.line = line.strip()
        self.fullStatus = "-".join(
            [kvs["baseStatus"]] + [kvs[k] for k in "hitType mixHitType traceHitType missType".split() if k in kvs])

    def addBase(self, br):
        assert br.kvs["id"] == self.kvs["id"]
        assert br.kvs["tx"] == self.kvs["tx"]
        self.base = br.kvs["base"]
        assert self.kvs["gas"] == br.kvs["gas"]
        self.speedup = float(self.base) / self.kvs["reuse"]
        return self

    def getNewLine(self):
        # p0, p2 = self.parts[:7], self.parts[7:]
        p1 = ["base", self.base, "speedup", self.speedup]
        # parts = p0 + p1 + p2
        return self.line + " " + " ".join(map(str, p1)) + "\n"

    def getSpeedupLine(self):
        p1 = ["id", self.kvs["id"], "tx", self.kvs["tx"], "speedup", self.speedup, "reuse", self.kvs["reuse"], "base",
              self.base, "gas", self.kvs["gas"]]
        return " ".join(map(str, p1)) + "\n"


# br = BaseRecord(baselines[0])
# print br.kvs["base"], br.kvs["id"], br.kvs["tx"], br.kvs["gas"]
#
# rr = ReuseRecord(reuselines[0])
# print rr.kvs["reuse"], rr.kvs["id"], rr.kvs["tx"], rr.kvs["baseStatus"]

base_records = map(BaseRecord, baselines)
reuse_records = map(ReuseRecord, reuselines)

base_ids = set([r.kvs["id"] for r in base_records])
reuse_ids = set([r.kvs["id"] for r in reuse_records])
common_ids = base_ids.intersection(reuse_ids)

id2base = dict([(r.kvs["id"], r) for r in base_records])

pairs = []
for r in reuse_records:
    if r.kvs["id"] in common_ids:
        br = id2base[r.kvs["id"]]
        pairs.append((r, br))

print "common", len(pairs)

merged_records = [r.addBase(b) for r, b in pairs]
merged_lines = []
speedup_lines = []
all_speedups = []
trace_slowdowns = []
total_trace_time = 0
total_trace_all_time = 0
total_trace_count = 0
total_base_time = 0
max_speedup = 0
max_line = ""
for r in merged_records:
    line = r.getNewLine()
    merged_lines.append(line)
    kvs = r.kvs

    all_speedups.append(r.speedup)
    if r.speedup > max_speedup:
        max_speedup = r.speedup
        max_line = line
    speedup_lines.append(r.getSpeedupLine())
    if "sD" in kvs:
        sd = kvs["sD"]
        md = kvs["mD"]
        tc = kvs["tC"]
        td = sd + md
        slowdown = td / float(r.base)
        trace_slowdowns.append(slowdown)
        total_trace_time += td
        total_trace_all_time += (td * tc)
        total_trace_count += tc
        total_base_time += r.base

all_speedups = sorted(all_speedups)
m_speedup = all_speedups[-1]
assert m_speedup == max_speedup
lenind = len(all_speedups) - 1
p999 = all_speedups[int(lenind * 0.999)]
p995 = all_speedups[int(lenind * 0.995)]
p99 = all_speedups[int(lenind * 0.99)]
p95 = all_speedups[int(lenind * 0.95)]
g100 = bisect_left(all_speedups, 100)
g100 = 1 - float(g100) / lenind


# merged_lines = [r.getNewLine() for r in merged_records]


def avg(mrecords):
    baseTotal = 0
    raTotal = 0
    for r in mrecords:
        baseTotal += r.base
        raTotal += r.kvs["reuse"]
    if raTotal == 0:
        return 0
    return float(baseTotal) / float(raTotal)


def GetSavedInsPercent(mrecords):
    totalIns = 0
    execIns = 0
    totalOps = 0
    for r in mrecords:
        # if "Hit" == r.kvs["baseStatus"] and "Trace" == r.kvs["hitType"] :
        totalIns += r.kvs.get("tN", 0)
        execIns += r.kvs.get("eN", 0)
        totalOps += r.kvs.get("pN", 0)
    saved = totalIns - execIns
    if totalIns == 0:
        assert totalOps == 0
        return 0, 0, 0, 0, 0
    return saved / float(totalIns), (totalOps - execIns) / float(totalOps), execIns, totalIns, totalOps


def GetSavedLoadsPercent(mrecords):
    totalDetail = 0
    actualDetail = 0
    actualAccount = 0
    for r in mrecords:
        totalDetail += r.kvs.get("fPR", 0)
        totalDetail += r.kvs.get("fPRm", 0)
        actualDetail += r.kvs.get("fAR", 0)
        actualDetail += r.kvs.get("fARm", 0)
        actualAccount += r.kvs.get("aR", 0)
        actualAccount += r.kvs.get("aRm", 0)
    if totalDetail == 0:
        return 0, 0, 0, 0
    return (totalDetail - actualDetail - actualAccount) / float(
        totalDetail), "actualDetail", actualDetail, "actualAccount", actualAccount, "totalDetail", totalDetail


def GetSavedStoresPercent(mrecords):
    totalDetail = 0
    actualDetail = 0
    actualAccount = 0
    for r in mrecords:
        totalDetail += r.kvs.get("fPW", 0)
        totalDetail += r.kvs.get("fPWm", 0)
        actualDetail += r.kvs.get("fAW", 0)
        actualDetail += r.kvs.get("fAWm", 0)
        actualAccount += r.kvs.get("aW", 0)
        actualAccount += r.kvs.get("aWm", 0)
    if totalDetail == 0:
        return 0, 0, 0, 0
    # return (totalDetail - actualDetail - actualAccount) / float(totalDetail), actualDetail, actualAccount, totalDetail
    return (totalDetail - actualDetail - actualAccount) / float(
        totalDetail), "actualDetail", actualDetail, "actualAccount", actualAccount, "totalDetail", totalDetail

result_out = open(result_filename, "w")


def output(*args):
    line = " ".join(map(str, args)) + "\n"
    print line,
    result_out.write(line)


output(result_filename)


# print merged_lines[0]


def isPartiallyCorect(kvs):
    status = kvs["baseStatus"]
    if not status.startswith("Hit"):
        return False
    status = kvs["hitType"]
    if "Trace" == status:
        tstatus = kvs["traceHitType"]
        if tstatus == "OpHit":
            return True
    if "Mix" == status:
        mstatus = kvs["mixHitType"]
        if "Delta" in mstatus:
            return True
    return False


def isFullyCorrect(kvs):
    status = kvs["baseStatus"]
    if not status.startswith("Hit"):
        return False
    if isPartiallyCorect(kvs):
        return False
    return True


def groupByFullStatus(records):
    ret = {}
    for r in records:
        ret.setdefault(r.fullStatus, []).append(r)
    return ret


fully_correct = [r for r in merged_records if isFullyCorrect(r.kvs)]
partially_correct = [r for r in merged_records if isPartiallyCorect(r.kvs)]

wrong = [r for r in merged_records if not r.kvs["baseStatus"].startswith("Hit")]
miss = [r for r in merged_records if r.kvs["baseStatus"] == "Miss"]
no_prediction = [r for r in merged_records if r.kvs["baseStatus"] == "NoListen" or r.kvs["baseStatus"] == "NoPreplay"]
no_listen = [r for r in merged_records if r.kvs["baseStatus"] == "NoListen"]
no_preplay = [r for r in merged_records if r.kvs["baseStatus"] == "NoPreplay"]

all_base_time = float(sum([r.base for r in merged_records]))
fully_correct_base_time = sum([r.base for r in fully_correct])
partially_correct_base_time = sum([r.base for r in partially_correct])
wrong_base_time = sum([r.base for r in wrong])
miss_base_time = sum([r.base for r in miss])
no_prediction_base_time = sum([r.base for r in no_prediction])
no_listen_base_time = sum([r.base for r in no_listen])
no_preplay_base_time = sum([r.base for r in no_preplay])

output("-----")
output("## Main Results")
output("")
output("count of all txs: ", len(merged_records))
output("")
output("overall speedup", avg([r for r in merged_records]))
output("effective speedup", avg([r for r in merged_records if not r.kvs["baseStatus"] == "NoListen"]))


output("")
output("satisfied ratio of all txs:", (len(fully_correct) + len(partially_correct)) / float(len(merged_records)),
       "weighted satisfied ratio of all txs:", (fully_correct_base_time + partially_correct_base_time) / all_base_time,
       "avg speedup of satisfied txs:",
       avg(fully_correct + partially_correct))
output("")
output("#### Effective Speedup (corresponding to Table 2 in SOSP paper):")
output("")
output("satisfied ratio of all observed txs:",
       (len(fully_correct) + len(partially_correct)) / float(len(merged_records) - len(no_listen)),
       "weighted satisfied ratio of all observed txs:",
       (fully_correct_base_time + partially_correct_base_time) / (all_base_time - no_listen_base_time),
       "avg speedup of all observed txs:", avg(fully_correct + partially_correct + no_preplay + miss))

output("")
output("#### Breakdown by prediction outcome (corresponding to Table 3 in SOSP paper):")
output("")
output(" Types\t\t\tcount proportion \ttime-weighted proportion\t avg speedup ")
output("1 perfect satisfied\t\t", len(fully_correct) / float(len(merged_records)), "\t",
       fully_correct_base_time / all_base_time, "\t", avg(fully_correct))
output("2 imperfect satisfied\t\t", len(partially_correct) / float(len(merged_records)), "\t",
       partially_correct_base_time / all_base_time, "\t", avg(partially_correct))

output("3 missed\t\t\t", len(wrong) / float(len(merged_records)), "\t", wrong_base_time / all_base_time, "\t",
       avg(wrong))
output(" 3.1 predicted but missed\t", len(miss) / float(len(merged_records)), "\t", miss_base_time / all_base_time,
       "\t",
       avg(miss))
output(" 3.2 no prediction\t\t", len(no_prediction) / float(len(merged_records)), "\t",
       no_prediction_base_time / all_base_time, "\t", avg(no_prediction))
output(" 3.2.1 no observed\t", len(no_listen) / float(len(merged_records)), "\t",
       no_listen_base_time / all_base_time, "\t", avg(no_listen))
output(" 3.2.2 no pre-execution\t", len(no_preplay) / float(len(merged_records)), "\t",
       no_preplay_base_time / all_base_time, "\t", avg(no_preplay))

output(" ")
output(" ")

# groups = groupByFullStatus(merged_records)
# gks = sorted(groups.keys())
# for gk in gks:
# g = groups[gk]
# output(g[0].fullStatus, len(g) / float(len(merged_records)), avg(g))
#
# output(" ")

output("#### Distribution of speedups:")
|
||||
output("")
|
||||
output("\tmax", max_speedup, "p999", p999, "p995", p995, "p99", p99, "p95", p95, ">100", g100)
|
||||
output("")
|
||||
output("max line:")
|
||||
output(max_line)
|
||||
|
||||
output("")
|
||||
output("-----")
|
||||
output("## Other detail info")
|
||||
output("")
|
||||
output("#### Saved IO")
|
||||
output("")
|
||||
output("* saved stores percent:", GetSavedStoresPercent(merged_records))
|
||||
output("* saved loads percent:", GetSavedLoadsPercent(merged_records))
|
||||
output("* saved ins percent:", GetSavedInsPercent(merged_records))
|
||||
output(" ")
|
||||
|
||||
if len(trace_slowdowns) > 0:
|
||||
output("")
|
||||
output("-----")
|
||||
output("#### Tracer detail")
|
||||
output("")
|
||||
output("the end-to-end time to pre-execute a transaction in a context and synthesize an AP takes on average ",
|
||||
str(total_trace_time / float(total_base_time)) + "x",
|
||||
"the time to execute the transaction, without being optimized")
|
||||
output("average trace time (milli)",
|
||||
total_trace_time / len(trace_slowdowns) / 10 ** 6)
|
||||
output("average total slowdown", total_trace_all_time / float(total_base_time), "average total trace time (milli)",
|
||||
total_trace_all_time / len(trace_slowdowns) / 10 ** 6, "average trace count",
|
||||
total_trace_count * 1.0 / len(trace_slowdowns))
|
||||
output("all transaction count", len(merged_lines), "traced transaction count", len(trace_slowdowns))
|
||||
|
||||
output(" ")
|
||||
output(" ")
|
||||
output(" ")
|
||||
|
||||
open(output_filename, "w").writelines(merged_lines)
|
||||
|
||||
result_out.close()
|
||||
|