# Annotation-model: Tests for the Web Annotation Data Model
The Web Annotation Data Model specification presents a JSON-oriented collection of terms and structures that permit the sharing of annotations about other content.
The purpose of these tests is to help validate that each of the structural requirements expressed in the Data Model specification is properly supported by implementations.
The general approach is to enable both manual and automated testing. However, since the specification has no user interface requirements, there is no general automation mechanism that can be provided for clients. Instead, client implementers can take advantage of the plumbing we provide here to push their data into the tests and collect the results of the testing. Doing so assumes knowledge of the requirements of each test or collection of tests so that the input data is relevant; each test or test collection contains information sufficient for that task.
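For reference, a minimal annotation of the kind a client would paste into a test looks like the following. The example is adapted from the examples in the Web Annotation Data Model specification; the `id`, `body`, and `target` URLs are placeholders.

```json
{
  "@context": "http://www.w3.org/ns/anno.jsonld",
  "id": "http://example.org/anno1",
  "type": "Annotation",
  "body": "http://example.org/post1",
  "target": "http://example.com/page1"
}
```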
## Running Tests
For this test collection we are initially creating manual tests. These automatically determine pass or fail and generate output for the main WPT window. The plan is to minimize the number of such tests, to ease the burden on testers while still exercising all the features.
The workflow for running these tests is something like:
1. Start up the test driver window, select the annotation-model tests, and click "Start".
2. A window pops up showing a test; its description tells the tester what input is expected. The window contains a textarea into which the input can be typed or pasted, along with a button to click to start testing that input.
3. The tester (presumably in another window) brings up their annotation client and uses it to generate an annotation that supplies the requested structure. They then copy and paste that annotation into the aforementioned textarea and click the button.
4. The test runs. Success or failure is determined and reported to the test driver window, which then cycles to the next test in the sequence.
5. Repeat steps 2-4 until done.
6. Download the JSON-format report of test results (a sketch of its shape follows this list). The report can be visually inspected, reported on using various tools, or passed on to the W3C via GitHub for evaluation and collection in the Implementation Report.
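The downloaded report is the JSON produced by the WPT test runner. Without claiming this as a normative schema, and with an illustrative test path and subtest name, an entry looks roughly like:

```json
{
  "results": [
    {
      "test": "/annotation-model/annotations/some-test.html",
      "status": "OK",
      "subtests": [
        { "name": "Annotation has a valid @context", "status": "PASS", "message": null }
      ]
    }
  ]
}
```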
Remember that while these tests are written to help exercise implementations, their other (important) purpose is to increase confidence that there are interoperable implementations. So, implementers are our audience, but these tests are not meant to be a comprehensive collection of tests for a client that might implement the Recommendation. The bulk of the tests are manual because there are no UI requirements in the Recommendation that would make it possible to drive every client effectively and portably.
Having said that, because the structure of these "manual" tests is very rigid, an implementer who understands test automation can use an open-source tool such as Selenium to run them against their implementation: exercise the implementation against content they provide, create annotations, feed the resulting data into the test input field, and run the test. A sketch of this approach follows.
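Here is a minimal sketch of that approach in Python with Selenium. It assumes a manual test page that exposes a single textarea for the annotation and a single button that starts validation; the test URL, the element locators, and the annotation itself are placeholders that an implementer would replace with values from their own setup.

```python
# Sketch only: the URL and locators below are assumptions, not taken from the
# actual test pages; inspect a generated test to find the real ones.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

TEST_URL = "http://web-platform.test:8000/annotation-model/examples/example-test.html"  # hypothetical path

# Annotation produced by the client implementation under test.
annotation_json = """{
  "@context": "http://www.w3.org/ns/anno.jsonld",
  "type": "Annotation",
  "body": "http://example.org/post1",
  "target": "http://example.com/page1"
}"""

driver = webdriver.Firefox()
try:
    driver.get(TEST_URL)
    wait = WebDriverWait(driver, 10)
    # Assumed page structure: one textarea for the annotation input and one
    # button that starts testing the pasted input.
    textarea = wait.until(EC.presence_of_element_located((By.TAG_NAME, "textarea")))
    textarea.send_keys(annotation_json)
    driver.find_element(By.TAG_NAME, "button").click()
    # The test reports its result to the harness; poll the page or the test
    # driver window here to collect pass/fail as needed.
finally:
    driver.quit()
```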
## Capturing and Reporting Results
As tests are run against implementations, results that are submitted to test-results are automatically included in the documents generated by wptreport. The same tool can be used locally to view reports on recorded results.
## Automating Test Execution
## Writing Tests
If you are interested in writing tests for this environment, see the associated CONTRIBUTING document.