diff --git a/devtools/docs/SUMMARY.md b/devtools/docs/SUMMARY.md
index 6c1df78b9148..00bd48471b36 100644
--- a/devtools/docs/SUMMARY.md
+++ b/devtools/docs/SUMMARY.md
@@ -33,8 +33,6 @@
* [Debugging intermittent failures](tests/debugging-intermittents.md)
* [Performance tests (DAMP)](tests/performance-tests.md)
* [Writing a new test](tests/writing-perf-tests.md)
- * [Example](tests/writing-perf-tests-example.md)
- * [Advanced tips](tests/writing-perf-tests-tips.md)
* [Files and directories](files/README.md)
* [Adding New Files](files/adding-files.md)
* [Tool Architectures](tools/tools.md)

diff --git a/devtools/docs/backend/actor-e10s-handling.md b/devtools/docs/backend/actor-e10s-handling.md
index 6cee8abe6fb7..9f9de6d546fb 100644
--- a/devtools/docs/backend/actor-e10s-handling.md
+++ b/devtools/docs/backend/actor-e10s-handling.md
@@ -2,7 +2,7 @@
In multi-process environments, most devtools actors are created and initialized in the child content process, to be able to access the resources they are exposing to the toolbox. But sometimes, these actors need to access things in the parent process too. Here's why and how.

-{% hint style="danger" %}
+{% hint style="error" %}
This documentation page is **deprecated**. `setupInParent` relies on the message manager which is being deprecated. Furthermore, communications between parent and content processes should be avoided for security reasons. If possible, the client should be responsible for calling actors both on the parent and content process.

diff --git a/devtools/docs/tests/performance-tests.md b/devtools/docs/tests/performance-tests.md
index d239ad840435..1886d29ab869 100644
--- a/devtools/docs/tests/performance-tests.md
+++ b/devtools/docs/tests/performance-tests.md
@@ -147,7 +147,7 @@
Compared to the other test suites, it isn't run in the cloud, but on dedicated hardware. This is to ensure performance numbers are stable over time and between two runs. Talos runs various types of tests. More specifically, DAMP is a [Page loader test](https://wiki.mozilla.org/Buildbot/Talos/Tests#Page_Load_Tests). The [source code](http://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/) for DAMP is also in mozilla-central.

-See [Writing new performance test](./writing-perf-tests.md) for more information about the implementation of DAMP tests.
+The [main script](http://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/damp.js) contains the implementation of all the tests described in the "What does it do?" section.

## How to see the performance trends?

diff --git a/devtools/docs/tests/writing-perf-tests-example.md b/devtools/docs/tests/writing-perf-tests-example.md
deleted file mode 100644
index 148ec3206e29..000000000000
--- a/devtools/docs/tests/writing-perf-tests-example.md
+++ /dev/null
@@ -1,67 +0,0 @@
# Performance test example: performance of click event in the inspector

Let's look at a trivial but practical example and add a simple test to measure the performance of a click in the inspector.

First we create a file under [tests/inspector](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests/inspector) since we are writing an inspector test. We call the file `click.js`.

We will use a dummy test document here: `data:text/html,click test document`.
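Any `data:` URI works as a DAMP test document; the harness does not require any specific markup. If the test needed a dedicated element to click, a hypothetical variant of this dummy document (not what this example uses) could be:
```
// Hypothetical variant: inline a dedicated click target in the document.
const url = "data:text/html,<button id='target'>click me</button>";
await testSetup(url);
```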
We prepare the imports needed to write the test, from head.js and inspector-helpers.js:
- `testSetup`, `testTeardown`, `openToolbox` and `runTest` from head.js
- `reloadInspectorAndLog` from inspector-helpers.js

The full code for the test looks as follows:
```
const {
  reloadInspectorAndLog,
} = require("./inspector-helpers");

const {
  openToolbox,
  runTest,
  testSetup,
  testTeardown,
} = require("../head");

module.exports = async function() {
  // Define here your custom document via a data URI:
  const url = "data:text/html,click test document";

  await testSetup(url);
  const toolbox = await openToolbox("inspector");

  const inspector = toolbox.getPanel("inspector");
  const window = inspector.panelWin; // Get inspector's panel window object
  const body = window.document.body;

  await new Promise(resolve => {
    const test = runTest("inspector.click");
    body.addEventListener("click", function () {
      test.done();
      resolve();
    }, { once: true });
    body.click();
  });

  // Check if the inspector reload is impacted by click
  await reloadInspectorAndLog("click", toolbox);

  await testTeardown();
};
```

Finally we add an entry in [damp-tests.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/damp-tests.js):
```
  {
    name: "inspector.click",
    path: "inspector/click.js",
    description:
      "Measure the time to click in the inspector, and reload the inspector",
  },
```

Then we can run our test with:
```
./mach talos-test --activeTests damp --subtest inspector.click
```

diff --git a/devtools/docs/tests/writing-perf-tests-tips.md b/devtools/docs/tests/writing-perf-tests-tips.md
deleted file mode 100644
index 7381b206e55a..000000000000
--- a/devtools/docs/tests/writing-perf-tests-tips.md
+++ /dev/null
@@ -1,41 +0,0 @@
# How to write a good performance test?

## Verify that you wait for all asynchronous code

If your test involves asynchronous code, which is very likely given the DevTools codebase, please review your test script carefully.
You should ensure that _any_ code run directly or indirectly by your test has completed.
You should not only wait for the functions related to the very precise feature you are trying to measure.

This is to prevent introducing noise in the test run after yours. If any asynchronous code is pending,
it is likely to run in parallel with the next test and increase its variance.
Noise in the tests makes it hard to detect small regressions.

You should typically wait for:
* All RDP requests to finish,
* All DOM Events to fire,
* Redux actions to be dispatched,
* React updates,
* ...


## Ensure that its results change when regressing/fixing the code or feature you want to watch.

If you are writing the new test to cover a recent regression and you have a patch to fix it, push your test to try without _and_ with the regression fix.
Look at the try push and confirm that your fix actually reduces the duration of your perf test significantly.
If you are introducing a test without any patch to improve the performance, try slowing down the code you are trying to cover with fake slowness, like a `setTimeout` for asynchronous code or a very slow `for` loop for synchronous code. This is to ensure your test would catch a significant regression.
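The tips only show the synchronous `for` loop in action (see the click snippet just below). For asynchronous code, the `setTimeout` variant could look like this; a hedged sketch, where `loadSomething` is a hypothetical stand-in for whatever asynchronous function sits on the code path your test covers:
```
// Hypothetical async function on the code path covered by the test:
async function loadSomething() {
  // Fake slowness: delay the real work by 500ms so the subtest duration
  // regresses visibly on try. This is throwaway code; never land it.
  await new Promise(resolve => setTimeout(resolve, 500));
  // ... real work continues here ...
}
```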
For our click performance test, we could do this from the inspector codebase:
```
window.addEventListener("click", function () {

  // This for loop will fake a hang and should slow down the duration of our test
  for (let i = 0; i < 100000000; i++) {}

}, true); // pass `true` in order to execute before the test click listener
```


## Keep your test execution short.

Running performance tests is expensive. We are currently running them 25 times for each changeset landed in Firefox.
Aim to run tests in less than a second on try.
\ No newline at end of file

diff --git a/devtools/docs/tests/writing-perf-tests.md b/devtools/docs/tests/writing-perf-tests.md
index b69152575923..6944f5c06c1d 100644
--- a/devtools/docs/tests/writing-perf-tests.md
+++ b/devtools/docs/tests/writing-perf-tests.md
@@ -1,11 +1,9 @@
# Writing new performance test

See [Performance tests (DAMP)](performance-tests.md) for an overall description of our performance tests.
-Here, we will describe how to write a new test and register it to run in DAMP.
+Here, we will describe how to write a new test with an example: tracking the performance of a click inside the inspector panel.

-{% hint style="tip" %}
-
-**Reuse existing tests if possible!**
+## Consider modifying existing tests first

If a `custom` page already exists for the tool you are testing, try to modify the existing `custom` test rather than adding a new individual test.

@@ -15,108 +13,115 @@
New individual tests run separately, in new tabs, and make DAMP slower than just modifying existing tests. Make sure your test case requires a new individual test.

If your test case requires a dedicated document or can't run next to the other tests in the current `custom` test, follow the instructions below to add a new individual test.

-{% endhint %}
+## Where is the code for the test?

-This page contains the general documentation for writing DAMP tests. See also:
-- [Performance test writing example](./writing-perf-tests-example.html) for a practical example of creating a new test
-- [Performance test writing tips](./writing-perf-tests-tips.html) for detailed tips on how to write a good and efficient test
+For now, all the tests live in a single file, [damp.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/damp.js).

-## Test location
+There are two kinds of tests:
+* The first kind runs against two documents:
+  * "Simple", an empty webpage. This one helps highlight the load time of panels,
+  * "Complicated", a copy of bild.de, a German newspaper website. This allows us to examine the performance of the tools when inspecting complicated, big websites.

-Tests are located in [testing/talos/talos/tests/devtools/addon/content/tests](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests). You will find subfolders for panels already tested in DAMP (debugger, inspector, …) as well as other subfolders for tests not specific to a given panel (server, toolbox).

+  To run your test against these two documents, add it to [this function](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#563-673).

-Tests are isolated in dedicated files.
Some examples of tests:
- [tests/netmonitor/simple.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests/netmonitor/simple.js)
- [tests/inspector/mutations.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests/inspector/mutations.js)

  Look for the `_getToolLoadingTests` function. There is one method per tool. Since we want to test how long it takes to click on the inspector, we will find the `inspector` method, and add the new test code there, like this:
  ```
  _getToolLoadingTests(url, label, { expectedMessages, expectedSources }) {
    let tests = {
      async inspector() {
        await this.testSetup(url);
        let toolbox = await this.openToolboxAndLog(label + ".inspector", "inspector");
        await this.reloadInspectorAndLog(label, toolbox);

-## Basic test

        // <== here we are going to add some code to test "click" performance,
        // after the inspector is opened and after the page is reloaded.

-The basic skeleton of a test is:

        await this.closeToolboxAndLog(label + ".inspector");
        await this.testTeardown();
      },
  ```
* The second kind isn't specific to any document. You can come up with your own test document, or not involve any document if you don't need one.
  If that better fits your needs, you should introduce an independent test function, like [this one](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#330-348) or [this other one](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#350-402). You also have to register the new test function you just introduced in the [`tests` object within the `startTest` function](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.js#863-864).

-```
-const {
-  testSetup,
-  testTeardown,
-  SIMPLE_URL,
-} = require("../head");

  You could also use extremely simple documents specific to your test case like this:
  ```
  /**
   * Measure the time necessary to click in the inspector panel
   */
  async _inspectorClickTest() {
    // Define here your custom document via a data URI:
    let url = "data:text/html,custom test document";
    let tab = await this.testSetup(url);
    let messageManager = tab.linkedBrowser.messageManager;
    let toolbox = await this.openToolbox("inspector");

-module.exports = async function() {
-  await testSetup(SIMPLE_URL);

    // <= Here, you would write your test actions,
    // after opening the inspector against a custom document

-  // Run some measures here

    await this.closeToolbox();
    await this.testTeardown();
  },
  ...
  startTest(doneCallback, config) {
    ...
    // And you have to register the test in `startTest` function
    // `tests` object is keyed by test names. So our test is named "inspector.click" here.
    tests["inspector.click"] = this._inspectorClickTest;
    ...
  }

-  await testTeardown();
-};
-```
  ```

* always start the test by calling `testSetup(url)`, with the `url` of the document to use
* always end the test with `testTeardown()`


## Test documents

DevTools performance heavily depends on the document against which DevTools are opened. There are two "historical" documents you can use for tests for any panel:
* "Simple", an empty webpage. This one helps highlight the load time of panels,
* "Complicated", a copy of bild.de, a German newspaper website.
This allows us to examine the performance of the tools when inspecting complicated, big websites.

The URLs of those documents are exposed by [tests/head.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests/head.js). The Simple page can be found at [testing/talos/talos/tests/devtools/addon/content/pages/simple.html](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/pages/simple.html). The Complicated page is downloaded via [tooltool](https://wiki.mozilla.org/ReleaseEngineering/Applications/Tooltool) automatically the first time you run the DAMP tests.

You can also create new test documents under [testing/talos/talos/tests/devtools/addon/content/pages](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/pages). See the pages in the `custom` subfolder for instance. If you create a document in `pages/custom/mypanel/index.html`, the URL of the document in your tests should be `PAGES_BASE_URL + "custom/mypanel/index.html"`. The constant `PAGES_BASE_URL` is exposed by head.js.

Note that modifying any existing test document will most likely impact the baseline for existing tests.

Finally you can also create very simple test documents using data URLs. Test documents don't have to contain any specific markup or script to be valid DAMP test documents, so something as simple as `testSetup("data:text/html,my test document");` is valid.


## Test helpers

Helper methods have been extracted in shared modules:
* [tests/head.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests/head.js) for the most common ones
* tests/{subfolder}/{subfolder}-helpers.js for folder-specific helpers ([example](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/tests/inspector/inspector-helpers.js))

To measure something which is not covered by an existing helper, you should use `runTest`, exposed by head.js.

```
module.exports = async function() {
  await testSetup(SIMPLE_URL);

  // Calling `runTest` will immediately start recording your action duration.
  // You can execute any necessary setup action you don't want to record before calling it.
  const test = runTest(`mypanel.mytest.mymeasure`);

  await doSomeThings(); // <== Do an action you want to record here

  // Once your action is completed, call `runTest` returned object's `done` method.
  // It will automatically record the action duration and appear in PerfHerder as a new subtest.
  // It also creates markers in the profiler so that you can better inspect this action in
  // profiler.firefox.com.
  test.done();

  await testTeardown();
};
```

If your measure is not simply the time spent by an asynchronous call (for instance computing an average, counting things…) there is a lower-level helper called `logTestResult` which will directly log a value. See [this example](https://searchfox.org/mozilla-central/rev/325c1a707819602feff736f129cb36055ba6d94f/testing/talos/talos/tests/devtools/addon/content/tests/webconsole/streamlog.js#62).
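As an illustration, here is a hedged sketch of a `logTestResult`-based measure. It assumes head.js exports `logTestResult` alongside the other helpers and that it takes a subtest name plus a numeric value, as in the linked streamlog.js example; the sampling logic itself is hypothetical:
```
const {
  logTestResult,
  testSetup,
  testTeardown,
  SIMPLE_URL,
} = require("../head");

module.exports = async function() {
  await testSetup(SIMPLE_URL);

  // Hypothetical measure: time the same action several times and report
  // the average duration, rather than a single `runTest` duration.
  const samples = [];
  for (let i = 0; i < 5; i++) {
    const start = Date.now();
    await doSomeThings(); // <== the action you want to sample
    samples.push(Date.now() - start);
  }
  const average = samples.reduce((sum, x) => sum + x, 0) / samples.length;

  // Log the computed value directly; it shows up as a PerfHerder subtest.
  logTestResult("mypanel.mytest.average", average);

  await testTeardown();
};
```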
## Test runner

If you need to dive into the internals of the DAMP runner, most of the logic is in [testing/talos/talos/tests/devtools/addon/content/damp.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/damp.js).


-# How to name your test and register it?

If a new test file was created, it needs to be registered in the test suite. To register the new test, add it in [damp-tests.js](https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/devtools/addon/content/damp-tests.js). This file acts as the manifest for the DAMP test suite.

+## How to name your test and register it?

If you are writing a test executing against the Simple and Complicated documents, your test name will look like: `(simple|complicated).${tool-name}.${test-name}`. So for our example, it would be `simple.inspector.click` and `complicated.inspector.click`.

For independent tests that don't use the Simple or Complicated documents, the test name only needs to start with the tool name, if the test is specific to that tool. For our example, it would be `inspector.click`.

-In general, the test name should try to match the path of the test file. As you can see in damp-tests.js this naming convention is not consistently followed. We have discrepancies for simple/complicated/custom tests, as well as for webconsole tests. This is largely for historical reasons.
+Once you come up with a name, you will have to register your test [here](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.html#12-42) and [here](https://searchfox.org/mozilla-central/rev/cd742d763809089925a38178dd2ba5a9069fa855/testing/talos/talos/tests/devtools/addon/content/damp.html#44-71) with a short description of it.

+## How to write a performance test?

-# How to run your new test?

When you write a performance test, in most cases, you only care about the time it takes to complete a very precise action.
There is a `runTest` helper method that helps record a precise action duration:
```
// Calling `runTest` will immediately start recording your action duration.
// You can execute any necessary setup action you don't want to record before calling it.
let test = this.runTest("my.test.name"); // `runTest` expects the test name as argument

// <== Do an action you want to record here

// Once your action is completed, call `runTest` returned object's `done` method.
// It will automatically record the action duration and appear in PerfHerder as a new subtest.
// It also creates markers in the profiler so that you can better inspect this action in
// profiler.firefox.com.
test.done();
```

So for our click example it would be:
```
async inspector() {
  await this.testSetup(url);
  let toolbox = await this.openToolboxAndLog(label + ".inspector", "inspector");
  await this.reloadInspectorAndLog(label, toolbox);

  let inspector = toolbox.getPanel("inspector");
  let window = inspector.panelWin; // Get inspector's panel window object
  let body = window.document.body;

  await new Promise(resolve => {
    let test = this.runTest("inspector.click");
    body.addEventListener("click", function () {
      test.done();
      resolve();
    }, { once: true });
    body.click();
  });
}
```

+## How to run your new test?

You can run any performance test with this command:
```

@@ -143,3 +148,42 @@
unzip testing/mozharness/build/blobber_upload_dir/profile_damp.zip
```
Then you have to open [https://profiler.firefox.com/](https://profiler.firefox.com/) and manually load the profile file that lives here: `profile_damp/page_0_pagecycle_1/cycle_0.profile`

+## How to write a good performance test?
### Verify that you wait for all asynchronous code

If your test involves asynchronous code, which is very likely given the DevTools codebase, please review your test script carefully.
You should ensure that _any_ code run directly or indirectly by your test has completed.
You should not only wait for the functions related to the very precise feature you are trying to measure.

This is to prevent introducing noise in the test run after yours. If any asynchronous code is pending,
it is likely to run in parallel with the next test and increase its variance.
Noise in the tests makes it hard to detect small regressions.

You should typically wait for:
* All RDP requests to finish,
* All DOM Events to fire,
* Redux actions to be dispatched,
* React updates,
* ...

### Ensure that its results change when regressing/fixing the code or feature you want to watch.

If you are writing the new test to cover a recent regression and you have a patch to fix it, push your test to try without _and_ with the regression fix.
Look at the try push and confirm that your fix actually reduces the duration of your perf test significantly.
If you are introducing a test without any patch to improve the performance, try slowing down the code you are trying to cover with fake slowness, like a `setTimeout` for asynchronous code or a very slow `for` loop for synchronous code. This is to ensure your test would catch a significant regression.

For our click performance test, we could do this from the inspector codebase:
```
window.addEventListener("click", function () {

  // This for loop will fake a hang and should slow down the duration of our test
  for (let i = 0; i < 100000000; i++) {}

}, true); // pass `true` in order to execute before the test click listener
```

### Keep your test execution short.

Running performance tests is expensive. We are currently running them 25 times for each changeset landed in Firefox.
Aim to run tests in less than a second on try.
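Putting the first tip into practice for the click example, here is a hedged sketch of a test that measures only the click but still waits for the follow-up work it triggers before tearing down. `inspector.once("inspector-updated")` is a stand-in for whatever completion signal the panel actually exposes; check the inspector code for the exact event name:
```
async inspector() {
  await this.testSetup(url);
  let toolbox = await this.openToolboxAndLog(label + ".inspector", "inspector");

  let inspector = toolbox.getPanel("inspector");
  let body = inspector.panelWin.document.body;

  // Start listening for the follow-up work *before* triggering the action.
  // Hypothetical completion event; use whatever signal your panel provides.
  let onUpdated = inspector.once("inspector-updated");

  await new Promise(resolve => {
    let test = this.runTest("inspector.click");
    body.addEventListener("click", function () {
      test.done(); // only the click dispatch itself is measured
      resolve();
    }, { once: true });
    body.click();
  });

  // Not measured, but awaited so pending work cannot leak into the next test
  // and increase its variance.
  await onUpdated;

  await this.closeToolboxAndLog(label + ".inspector");
  await this.testTeardown();
}
```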