Bug 1180500 - Update to latest wptrunner, a=testonly

James Graham 2015-07-07 10:02:02 +01:00
Parent f3629d1236
Commit f26ec68469
15 changed files with 138 additions and 83 deletions

View File

@@ -25,9 +25,15 @@ following are most significant:
``--product`` (defaults to `firefox`)
The product to test against: `b2g`, `chrome`, `firefox`, or `servo`.
``--binary`` (required)
``--binary`` (required if product is `firefox` or `servo`)
The path to a binary file for the product (browser) to test against.
``--webdriver-binary`` (required if product is `chrome`)
The path to a `*driver` binary; e.g., a `chromedriver` binary.
``--certutil-binary`` (required if product is `firefox` [#]_)
The path to a `certutil` binary (for tests that must be run over https).
``--metadata`` (required)
The path to a directory containing test metadata. [#]_
@@ -37,6 +43,9 @@ following are most significant:
``--prefs-root`` (required only when testing a Firefox binary)
The path to a directory containing Firefox test-harness preferences. [#]_
.. [#] The ``--certutil-binary`` option is required when the product is
``firefox`` unless ``--ssl-type=none`` is specified.
.. [#] The ``--metadata`` path is to a directory that contains:
* a ``MANIFEST.json`` file (the web-platform-tests documentation has
@@ -56,26 +65,29 @@ To test a Firefox Nightly build in an OS X environment, you might start
wptrunner using something similar to the following example::
wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
--binary=~/mozilla-central/obj-x86_64-apple-darwin14.0.0/dist/Nightly.app/Contents/MacOS/firefox \
--prefs-root=~/mozilla-central/testing/profiles
--binary=~/mozilla-central/obj-x86_64-apple-darwin14.3.0/dist/Nightly.app/Contents/MacOS/firefox \
--certutil-binary=~/mozilla-central/obj-x86_64-apple-darwin14.3.0/security/nss/cmd/certutil/certutil \
--prefs-root=~/mozilla-central/testing/profiles
And to test a Chromium build in an OS X environment, you might start
wptrunner using something similar to the following example::
wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
--binary=~/chromium/src/out/Release/Chromium.app/Contents/MacOS/Chromium \
--product=chrome
--binary=~/chromium/src/out/Release/Chromium.app/Contents/MacOS/Chromium \
--webdriver-binary=/usr/local/bin/chromedriver --product=chrome
-------------------------------------
Example: How to run a subset of tests
-------------------------------------
To restrict a test run just to tests in a particular web-platform-tests
subdirectory, use ``--include`` with the directory name; for example::
subdirectory, specify the directory name in the positional arguments after
the options; for example, run just the tests in the `dom` subdirectory::
wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
--binary=/path/to/firefox --prefs-root=/path/to/testing/profiles \
--include=dom
--binary=/path/to/firefox --certutil-binary=/path/to/certutil \
--prefs-root=/path/to/testing/profiles \
dom
Output
~~~~~~
@@ -95,8 +107,9 @@ log to a file and a human-readable summary to stdout, you might start
wptrunner using something similar to the following example::
wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
--binary=/path/to/firefox --prefs-root=/path/to/testing/profiles
--log-raw=output.log --log-mach=-
--binary=/path/to/firefox --certutil-binary=/path/to/certutil \
--prefs-root=/path/to/testing/profiles \
--log-raw=output.log --log-mach=-
Expectation Data
~~~~~~~~~~~~~~~~

View File

@@ -56,9 +56,15 @@ takes multiple options, of which the following are most significant:
``--product`` (defaults to `firefox`)
The product to test against: `b2g`, `chrome`, `firefox`, or `servo`.
``--binary`` (required)
``--binary`` (required if product is `firefox` or `servo`)
The path to a binary file for the product (browser) to test against.
``--webdriver-binary`` (required if product is `chrome`)
The path to a `*driver` binary; e.g., a `chromedriver` binary.
``--certutil-binary`` (required if product is `firefox` [#]_)
The path to a `certutil` binary (for tests that must be run over https).
``--metadata`` (required only when not `using default paths`_)
The path to a directory containing test metadata. [#]_
@@ -68,6 +74,9 @@ takes multiple options, of which the following are most significant:
``--prefs-root`` (required only when testing a Firefox binary)
The path to a directory containing Firefox test-harness preferences. [#]_
.. [#] The ``--certutil-binary`` option is required when the product is
``firefox`` unless ``--ssl-type=none`` is specified.
.. [#] The ``--metadata`` path is to a directory that contains:
* a ``MANIFEST.json`` file (the web-platform-tests documentation has
@@ -89,26 +98,30 @@ To test a Firefox Nightly build in an OS X environment, you might start
wptrunner using something similar to the following example::
wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
--binary=~/mozilla-central/obj-x86_64-apple-darwin14.0.0/dist/Nightly.app/Contents/MacOS/firefox \
--binary=~/mozilla-central/obj-x86_64-apple-darwin14.3.0/dist/Nightly.app/Contents/MacOS/firefox \
--certutil-binary=~/mozilla-central/obj-x86_64-apple-darwin14.3.0/security/nss/cmd/certutil/certutil \
--prefs-root=~/mozilla-central/testing/profiles
And to test a Chromium build in an OS X environment, you might start
wptrunner using something similar to the following example::
wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
--binary=~/chromium/src/out/Release/Chromium.app/Contents/MacOS/Chromium \
--product=chrome
--webdriver-binary=/usr/local/bin/chromedriver --product=chrome
--------------------
Running test subsets
--------------------
To restrict a test run just to tests in a particular web-platform-tests
subdirectory, use ``--include`` with the directory name; for example::
subdirectory, specify the directory name in the positional arguments after
the options; for example, run just the tests in the `dom` subdirectory::
wptrunner --metadata=~/web-platform-tests/ --tests=~/web-platform-tests/ \
--binary=/path/to/firefox --prefs-root=/path/to/testing/profiles \
--include=dom
--binary=/path/to/firefox --certutil-binary=/path/to/certutil \
--prefs-root=/path/to/testing/profiles \
dom
-------------------
Running in parallel

View File

@@ -8,9 +8,13 @@ ssl-type=none
# prefs-root=/path/to/gecko-src/testing/profiles/
# [servo]
# binary=/path/to/servo-src/components/servo/target/servo
# binary=/path/to/servo-src/target/release/servo
# exclude=testharness # Because it needs a special testharness.js
# [servodriver]
# binary=/path/to/servo-src/target/release/servo
# exclude=testharness # Because it needs a special testharness.js
# [chrome]
# binary=/path/to/chrome
# webdriver-binary=/path/to/chromedriver
# webdriver-binary=/path/to/chromedriver

View File

@@ -42,7 +42,7 @@ def browser_kwargs(**kwargs):
"debug_info": kwargs["debug_info"]}
def executor_kwargs(test_type, server_config, cache_manager, **kwargs):
def executor_kwargs(test_type, server_config, cache_manager, run_info_data, **kwargs):
rv = base_executor_kwargs(test_type, server_config,
cache_manager, **kwargs)
return rv

View File

@@ -55,12 +55,14 @@ class TestharnessResultConverter(object):
def __call__(self, test, result):
"""Convert a JSON result into a (TestResult, [SubtestResult]) tuple"""
assert result["test"] == test.url, ("Got results from %s, expected %s" %
(result["test"], test.url))
harness_result = test.result_cls(self.harness_codes[result["status"]], result["message"])
result_url, status, message, stack, subtest_results = result
assert result_url == test.url, ("Got results from %s, expected %s" %
(result_url, test.url))
harness_result = test.result_cls(self.harness_codes[status], message)
return (harness_result,
[test.subtest_result_cls(subtest["name"], self.test_codes[subtest["status"]],
subtest["message"], subtest.get("stack", None)) for subtest in result["tests"]])
[test.subtest_result_cls(name, self.test_codes[status], message, stack)
for name, status, message, stack in subtest_results])
testharness_result_converter = TestharnessResultConverter()
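The converter change above switches the harness result from a keyed dict to a positional list: `[url, harness_status, harness_message, harness_stack, subtest_results]`, with each subtest likewise flattened to `[name, status, message, stack]`. A minimal sketch of the new unpacking, using hypothetical stand-ins for wptrunner's result classes:

```python
from collections import namedtuple

# Simplified stand-ins for wptrunner's result classes (illustrative only;
# the real classes carry more state).
TestResult = namedtuple("TestResult", ["status", "message"])
SubtestResult = namedtuple("SubtestResult", ["name", "status", "message", "stack"])

def convert(test_url, result):
    # New flat-list format: [url, status, message, stack, subtest_results]
    result_url, status, message, stack, subtest_results = result
    assert result_url == test_url, ("Got results from %s, expected %s" %
                                    (result_url, test_url))
    harness_result = TestResult(status, message)
    return (harness_result,
            [SubtestResult(name, st, msg, stk)
             for name, st, msg, stk in subtest_results])

harness, subtests = convert(
    "/dom/historical.html",
    ["/dom/historical.html", "OK", None, None,
     [["DOMError interface removed", "PASS", None, None]]])
```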

View File

@@ -107,6 +107,12 @@ class MarionetteProtocol(Protocol):
return True
def after_connect(self):
# Turn off debug-level logging by default since this is so verbose
with self.marionette.using_context("chrome"):
self.marionette.execute_script("""
Components.utils.import("resource://gre/modules/Log.jsm");
Log.repository.getLogger("Marionette").level = Log.Level.Info;
""")
self.load_runner("http")
def load_runner(self, protocol):

View File

@@ -62,8 +62,9 @@ class ServoTestharnessExecutor(ProcessTestExecutor):
self.result_data = None
self.result_flag = threading.Event()
debug_args, command = browser_command(self.binary, ["--cpu", "--hard-fail", "-z", self.test_url(test)],
self.debug_info)
debug_args, command = browser_command(self.binary,
["--cpu", "--hard-fail", "-u", "Servo/wptrunner", "-z", self.test_url(test)],
self.debug_info)
self.command = command
@@ -99,15 +100,18 @@ class ServoTestharnessExecutor(ProcessTestExecutor):
self.proc.wait()
proc_is_running = True
if self.result_flag.is_set() and self.result_data is not None:
self.result_data["test"] = test.url
result = self.convert_result(test, self.result_data)
else:
if self.proc.poll() is not None:
if self.result_flag.is_set():
if self.result_data is not None:
self.result_data["test"] = test.url
result = self.convert_result(test, self.result_data)
else:
self.proc.wait()
result = (test.result_cls("CRASH", None), [])
proc_is_running = False
else:
result = (test.result_cls("TIMEOUT", None), [])
else:
result = (test.result_cls("TIMEOUT", None), [])
if proc_is_running:
if self.pause_after_test:
@@ -186,8 +190,8 @@ class ServoRefTestExecutor(ProcessTestExecutor):
with TempFilename(self.tempdir) as output_path:
self.command = [self.binary, "--cpu", "--hard-fail", "--exit",
"-Z", "disable-text-aa", "--output=%s" % output_path,
full_url]
"-u", "Servo/wptrunner", "-Z", "disable-text-aa",
"--output=%s" % output_path, full_url]
env = os.environ.copy()
env["HOST_FILE"] = self.hosts_path

View File

@@ -13,11 +13,16 @@ window.wrappedJSObject.addEventListener("message", function listener(event) {
clearTimeout(timer);
var tests = event.data.tests;
var status = event.data.status;
marionetteScriptFinished({test:"%(url)s",
tests: tests,
status: status.status,
message: status.message,
stack: status.stack});
var subtest_results = tests.map(function(x) {
return [x.name, x.status, x.message, x.stack]
});
marionetteScriptFinished(["%(url)s",
status.status,
status.message,
status.stack,
subtest_results]);
}, false);
window.wrappedJSObject.win = window.open("%(abs_url)s", "%(window_id)s");

View File

@@ -8,12 +8,17 @@ window.timeout_multiplier = %(timeout_multiplier)d;
window.addEventListener("message", function(event) {
var tests = event.data[0];
var status = event.data[1];
var subtest_results = tests.map(function(x) {
return [x.name, x.status, x.message, x.stack]
});
clearTimeout(timer);
callback({test:"%(url)s",
tests: tests,
status: status.status,
message: status.message,
stack: status.stack});
callback(["%(url)s",
status.status,
status.message,
status.stack,
subtest_results]);
}, false);
window.win = window.open("%(abs_url)s", "%(window_id)s");

View File

@@ -153,17 +153,32 @@ def update_from_logs(manifests, *log_filenames, **kwargs):
return expected_map
def directory_manifests(metadata_path):
rv = []
for dirpath, dirname, filenames in os.walk(metadata_path):
if "__dir__.ini" in filenames:
rel_path = os.path.relpath(dirpath, metadata_path)
rv.append(os.path.join(rel_path, "__dir__.ini"))
return rv
def write_changes(metadata_path, expected_map):
# First write the new manifest files to a temporary directory
temp_path = tempfile.mkdtemp(dir=os.path.split(metadata_path)[0])
write_new_expected(temp_path, expected_map)
# Keep all __dir__.ini files (these are not in expected_map because they
# aren't associated with a specific test)
keep_files = directory_manifests(metadata_path)
# Copy all files in the root to the temporary location since
# these cannot be ini files
keep_files = [item for item in os.listdir(metadata_path) if
not os.path.isdir(os.path.join(metadata_path, item))]
keep_files.extend(item for item in os.listdir(metadata_path) if
not os.path.isdir(os.path.join(metadata_path, item)))
for item in keep_files:
dest_dir = os.path.dirname(os.path.join(temp_path, item))
if not os.path.exists(dest_dir):
os.makedirs(dest_dir)
shutil.copyfile(os.path.join(metadata_path, item),
os.path.join(temp_path, item))
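The new `directory_manifests` helper preserves per-directory `__dir__.ini` manifests across a metadata rewrite, since they are not associated with any single test and so never appear in `expected_map`. A self-contained sketch of that walk:

```python
import os
import tempfile

# Sketch of the __dir__.ini discovery added above: walk the metadata
# tree and record the relative path of every per-directory manifest.
def directory_manifests(metadata_path):
    rv = []
    for dirpath, dirnames, filenames in os.walk(metadata_path):
        if "__dir__.ini" in filenames:
            rel_path = os.path.relpath(dirpath, metadata_path)
            rv.append(os.path.join(rel_path, "__dir__.ini"))
    return rv

# Build a tiny metadata tree with one __dir__.ini under dom/.
metadata = tempfile.mkdtemp()
os.makedirs(os.path.join(metadata, "dom", "events"))
open(os.path.join(metadata, "dom", "__dir__.ini"), "w").close()
found = directory_manifests(metadata)
```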

View File

@@ -7,12 +7,14 @@ var props = {output:%(output)d};
setup(props);
add_completion_callback(function (tests, harness_status) {
alert("RESULT: " + JSON.stringify({
tests: tests.map(function(t) {
return { name: t.name, status: t.status, message: t.message, stack: t.stack}
var id = location.pathname + location.search + location.hash;
alert("RESULT: " + JSON.stringify([
id,
harness_status.status,
harness_status.message,
harness_status.stack,
tests.map(function(t) {
return [t.name, t.status, t.message, t.stack]
}),
status: harness_status.status,
message: harness_status.message,
stack: harness_status.stack,
}));
});
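With this change the injected completion callback emits a single `RESULT:` line whose JSON payload is positional rather than keyed. A sketch of how a consumer could unpack it (the payload shape is taken from this diff; the parsing code and sample values are illustrative):

```python
import json

# New positional payload: [id, status, message, stack, subtest_results]
line = ('RESULT: ["/dom/historical.html", 0, null, null, '
        '[["DOMError interface removed", 0, null, null]]]')
prefix = "RESULT: "
assert line.startswith(prefix)
test_id, status, message, stack, subtest_results = json.loads(line[len(prefix):])
```

Because the fields are positional, producer and consumer must agree on the order exactly; the upside is a smaller payload and no reliance on key names surviving serialization.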

View File

@@ -6,15 +6,15 @@ setup({output:%(output)d});
add_completion_callback(function() {
add_completion_callback(function (tests, status) {
var test_results = tests.map(function(x) {
return {name:x.name, status:x.status, message:x.message, stack:x.stack}
var subtest_results = tests.map(function(x) {
return [x.name, x.status, x.message, x.stack]
});
var id = location.pathname + location.search + location.hash;
var results = JSON.stringify({test: id,
tests:test_results,
status: status.status,
message: status.message,
stack: status.stack});
var results = JSON.stringify([id,
status.status,
status.message,
status.stack,
subtest_results]);
(function done() {
if (window.__wd_results_callback__) {
clearTimeout(__wd_results_timer__);

View File

@@ -15,16 +15,3 @@ if (window.opener && window.opener.explicit_timeout) {
}
setup(props);
add_completion_callback(function() {
add_completion_callback(function(tests, status) {
var harness_status = {
"status": status.status,
"message": status.message,
"stack": status.stack
};
var test_results = tests.map(function(x) {
return {name:x.name, status:x.status, message:x.message, stack:x.stack}
});
window.opener.postMessage([test_results, harness_status], "*");
})
});

View File

@@ -496,7 +496,7 @@ class TestLoader(object):
def iter_tests(self):
manifest_items = []
for manifest in self.manifests.keys():
for manifest in sorted(self.manifests.keys()):
manifest_iter = iterfilter(self.manifest_filters,
manifest.itertypes(*self.test_types))
manifest_items.extend(manifest_iter)
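Iterating `dict.keys()` yields the manifests in an arbitrary, run-to-run unstable order (especially under hash randomization), so sorting them first makes the overall test order deterministic. The effect, sketched with plain strings rather than the real `Manifest` objects:

```python
# dict key order is arbitrary and can change between runs, so iteration
# order over manifests is only stable once the keys are sorted.
manifests = {"/tests/b/MANIFEST.json": "b",
             "/tests/a/MANIFEST.json": "a"}
ordered = sorted(manifests.keys())
```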

View File

@@ -124,24 +124,23 @@ class GetSyncTargetCommit(Step):
class LoadManifest(Step):
"""Load the test manifest"""
provides = ["test_manifest"]
provides = ["manifest_path", "test_manifest", "old_manifest"]
def create(self, state):
state.test_manifest = testloader.ManifestLoader(state.tests_path).load_manifest(
state.tests_path, state.metadata_path,
)
from manifest import manifest
state.manifest_path = os.path.join(state.metadata_path, "MANIFEST.json")
# Conservatively always rebuild the manifest when doing a sync
state.old_manifest = manifest.load(state.tests_path, state.manifest_path)
state.test_manifest = manifest.Manifest(None, "/")
class UpdateManifest(Step):
"""Update the manifest to match the tests in the sync tree checkout"""
provides = ["initial_rev"]
def create(self, state):
from manifest import manifest, update
test_manifest = state.test_manifest
state.initial_rev = test_manifest.rev
update.update(state.sync["path"], "/", test_manifest)
manifest.write(test_manifest, os.path.join(state.metadata_path, "MANIFEST.json"))
update.update(state.sync["path"], "/", state.test_manifest)
manifest.write(state.test_manifest, state.manifest_path)
class CopyWorkTree(Step):