.. This Source Code Form is subject to the terms of the Mozilla Public
.. License, v. 2.0. If a copy of the MPL was not distributed with this
.. file, You can obtain one at http://mozilla.org/MPL/2.0/.

.. _testing:

=================
Front-end testing
=================

Bedrock runs a suite of front-end `Jasmine`_ behavioral/unit tests, which use
`Karma`_ as a test runner. We also have a suite of functional tests using
`Selenium`_ and `pytest`_, which allow us to emulate users interacting with
the site in a real browser. All of these test suites live in the ``tests``
directory.

The ``tests`` directory contains:

* ``/functional`` contains the pytest functional tests.
* ``/pages`` contains the Python page objects.
* ``/unit`` contains the Jasmine tests and the Karma config file.

Installation
------------

First follow the :ref:`installation instructions for bedrock<install>`, which
will install the specific versions of Jasmine/Karma needed to run the unit
tests, and guide you through installing pip and setting up a virtual
environment for the functional tests. The additional requirements can then be
installed with the following commands:

.. code-block:: bash

    $ source venv/bin/activate
    $ bin/peep.py install -r requirements/test.txt
2015-04-27 14:19:02 +03:00
|
|
|
|
2015-10-01 17:22:54 +03:00
|
|
|
Running Jasmine tests using Karma
|
|
|
|
---------------------------------
|
2015-04-27 14:19:02 +03:00
|
|
|
|
2015-10-01 17:22:54 +03:00
|
|
|
To perform a single run of the Jasmine test suite using Firefox, type the
|
|
|
|
following command:
|
2015-04-27 14:19:02 +03:00
|
|
|
|
2015-10-01 17:22:54 +03:00
|
|
|
.. code-block:: bash
|
2015-04-27 14:19:02 +03:00
|
|
|
|
2015-10-01 17:22:54 +03:00
|
|
|
$ grunt test
|
2015-04-27 14:19:02 +03:00
|
|
|
|
2015-10-01 17:22:54 +03:00
|
|
|
See the `Jasmine`_ documentation for tips on how to write JS behavioral or unit
|
|
|
|
tests. We also use `Sinon`_ for creating test spies, stubs and mocks.

Running functional tests
------------------------

.. Note::

    Before running the functional tests, please make sure to follow the
    bedrock :ref:`installation docs<install>`, including the database sync
    that pulls in external data such as event and blog feeds. These are
    required for some of the tests to pass.

To run the full functional test suite against your local bedrock instance:

.. code-block:: bash

    $ py.test --driver Firefox --html tests/functional/results.html tests/functional/

This will run all of the test suites found in the ``tests/functional``
directory, and assumes you have bedrock running at ``localhost`` on port
``8000``. Results will be reported in ``tests/functional/results.html``.

By default, tests run one at a time. This is the safest way to ensure
predictable results, due to
`bug 1230105 <https://bugzilla.mozilla.org/show_bug.cgi?id=1230105>`_.
If you want to run tests in parallel (which should be safe when running
against a deployed instance), add ``-n auto`` to the command line. Replace
``auto`` with an integer to set the maximum number of concurrent processes.

.. Note::

    Some functional tests do not require a browser. These can take a long
    time to run, especially when they're not running in parallel. To skip
    these tests, add ``-m 'not headless'`` to your command line.
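
As a hypothetical sketch, such a test carries the ``headless`` marker and
typically makes plain HTTP requests instead of driving a browser. The test
name and body below are illustrative, not real bedrock tests:

.. code-block:: python

    import pytest

    @pytest.mark.headless
    @pytest.mark.nondestructive
    def test_sitemap_is_well_formed(base_url):
        # Illustrative body: a headless test might fetch and check a page
        # over plain HTTP rather than via Selenium.
        ...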

To run a single test file, tell py.test to execute a specific file, e.g.
``tests/functional/test_newsletter.py``:

.. code-block:: bash

    $ py.test --driver Firefox --html tests/functional/results.html -n auto tests/functional/test_newsletter.py

To run a single test, filter using the ``-k`` argument supplied with a
keyword, e.g. ``-k test_successful_sign_up``:

.. code-block:: bash

    $ py.test --driver Firefox --html tests/functional/results.html -n auto tests/functional/test_newsletter.py -k test_successful_sign_up

You can also run the tests against any bedrock environment by specifying the
``--base-url`` argument. For example, to run all functional tests against dev:

.. code-block:: bash

    $ py.test --base-url https://www-dev.allizom.org --driver Firefox --html tests/functional/results.html -n auto tests/functional/

.. Note::

    For the above commands to work, Firefox needs to be installed in a
    predictable location for your operating system. For details on how to
    specify the location of Firefox, or on running the tests against
    alternative browsers, refer to the `pytest-selenium documentation`_.

For more information on command line options, see the `pytest documentation`_.

Writing Selenium tests
----------------------

Tests usually consist of interactions and assertions. Selenium provides an
API for opening pages, locating elements, interacting with elements, and
obtaining the state of pages and elements. To improve the readability and
maintainability of the tests, we use the `Page Object`_ model, which means
each page we test has an object representing the actions and states needed
for testing.

Well written page objects should allow your test to contain simple
interactions and assertions, as shown in the following example:

.. code-block:: python

    def test_sign_up_for_newsletter(base_url, selenium):
        page = NewsletterPage(base_url, selenium).open()
        page.type_email('noreply@mozilla.com')
        page.accept_privacy_policy()
        page.click_sign_me_up()
        assert page.sign_up_successful

It's important to keep assertions in your tests and not in your page objects,
and to limit the amount of logic in your page objects. This ensures your
tests all start from a known state, and any deviations from this expected
state will be highlighted as potential regressions. Ideally, when tests break
due to a change in bedrock, only the page objects will need updating; this is
often because an element needs to be located in a different way.
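
A page object backing a newsletter test might look roughly like the following
sketch. The locators and the ``/newsletter/`` path are illustrative
assumptions, not bedrock's actual markup:

.. code-block:: python

    # A hypothetical page object sketch; the element locators and the
    # /newsletter/ path are illustrative, not bedrock's actual markup.

    class NewsletterPage:

        _email_locator = ('id', 'id_email')
        _privacy_locator = ('id', 'id_privacy')
        _submit_locator = ('id', 'newsletter_submit')
        _thanks_locator = ('id', 'newsletter_thanks')

        def __init__(self, base_url, selenium):
            self.base_url = base_url
            self.selenium = selenium

        def open(self):
            # Navigate to the page and return self so calls can be chained.
            self.selenium.get(self.base_url + '/newsletter/')
            return self

        def type_email(self, email):
            self.selenium.find_element(*self._email_locator).send_keys(email)

        def accept_privacy_policy(self):
            self.selenium.find_element(*self._privacy_locator).click()

        def click_sign_me_up(self):
            self.selenium.find_element(*self._submit_locator).click()

        @property
        def sign_up_successful(self):
            # State query only: the assertion itself belongs in the test.
            return self.selenium.find_element(*self._thanks_locator).is_displayed()

Note that ``sign_up_successful`` only reports state; the assertion on it
stays in the test.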

Please take some time to read over the `Selenium documentation`_ for details
on the Python client API.

Destructive tests
~~~~~~~~~~~~~~~~~

By default, all tests are assumed to be destructive, which means they will be
skipped when run against a `sensitive environment`_. This prevents
accidentally running tests that create, modify, or delete data on the
application under test. If your test is nondestructive, you will need to
apply the ``nondestructive`` marker to it. A simple example is shown below;
you can also read the `pytest markers`_ documentation for more options.

.. code-block:: python

    import pytest

    @pytest.mark.nondestructive
    def test_newsletter_default_values(base_url, selenium):
        page = NewsletterPage(base_url, selenium).open()
        assert '' == page.email
        assert 'United States' == page.country
        assert 'English' == page.language
        assert page.html_format_selected
        assert not page.text_format_selected
        assert not page.privacy_policy_accepted

Smoke tests
~~~~~~~~~~~

Smoke tests are run as part of bedrock's deployment pipeline. These should be
considered critical tests which benefit from being run automatically after
every commit to master; the full suite of functional tests is only run after
deployment to staging. If your test should be a smoke test, apply the
``smoke`` marker to it.

.. code-block:: python

    import pytest

    @pytest.mark.smoke
    @pytest.mark.nondestructive
    def test_newsletter_default_values(base_url, selenium):
        page = NewsletterPage(base_url, selenium).open()
        assert '' == page.email
        assert 'United States' == page.country
        assert 'English' == page.language
        assert page.html_format_selected
        assert not page.text_format_selected
        assert not page.privacy_policy_accepted

You can run only the smoke tests by adding ``-m smoke`` to the command line
when running the test suite.

.. Note::

    Tests that rely on long-running timeouts or cron jobs, or that test
    locale-specific interactions, should not be marked as smoke tests. The
    suite of smoke tests should be quick to run, and should not depend on
    checking out and building the full site.

Sanity tests
~~~~~~~~~~~~

Sanity tests are our most critical tests, and must pass in a wide range of
web browsers, including old versions of Internet Explorer. They are run
automatically post-deployment on a wider range of browsers and platforms than
we run the full suite against. The number of sanity tests should remain
small, but they should cover our most critical pages where legacy browser
support is important.

.. code-block:: python

    import pytest

    @pytest.mark.sanity
    @pytest.mark.nondestructive
    def test_click_download_button(base_url, selenium):
        page = FirefoxNewPage(base_url, selenium).open()
        page.download_firefox()
        assert page.is_thank_you_message_displayed

You can run only the sanity tests by adding ``-m sanity`` to the command line
when running the test suite.

Waits and Expected Conditions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Often an interaction with a page will cause a visible response. While
Selenium does its best to wait for any page loads to be complete, it's never
going to be as good as you at knowing when to allow the test to continue. For
this reason, you will need to write explicit `waits`_ in your page objects.
These repeatedly execute code (a condition) until the condition returns true.
The following example is probably the most commonly used, and waits until an
element is considered displayed:

.. code-block:: python

    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as expected
    from selenium.webdriver.support.ui import WebDriverWait as Wait

    Wait(selenium, timeout=10).until(
        expected.visibility_of_element_located((By.ID, 'my_element')))

For convenience, the Selenium project offers some basic
`expected conditions`_, which can be used for the most common cases.
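
Under the hood, an explicit wait is essentially a polling loop. The following
is a minimal illustration of the idea (not Selenium's actual implementation);
``Wait.until`` likewise accepts any custom callable that returns a truthy
value once the condition is met:

.. code-block:: python

    import time

    def until(condition, timeout=10, poll_frequency=0.5):
        # Minimal illustration of an explicit wait: call the condition
        # repeatedly until it returns a truthy value, or raise once the
        # timeout expires. Not Selenium's actual implementation.
        end_time = time.time() + timeout
        while True:
            result = condition()
            if result:
                return result
            if time.time() > end_time:
                raise RuntimeError('condition not met within %ss' % timeout)
            time.sleep(poll_frequency)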

Debugging Selenium
------------------

Debug information is collected on failure and added to the HTML report
referenced by the ``--html`` argument. You can enable debug information for
all tests by setting the ``SELENIUM_CAPTURE_DEBUG`` environment variable to
``always``.

Guidelines for writing functional tests
---------------------------------------

* Try to keep tests organized and cleanly separated. Each page should have
  its own page object and test file, and each test should be responsible for
  a specific purpose, or component of a page.
* Avoid using sleeps; always use waits, as mentioned above.
* Don't make tests overly specific. If a test keeps failing because of
  generic changes to a page, such as an image filename or ``href`` being
  updated, then the test is probably too specific.
* Avoid string checking, as tests may break if strings are updated, or could
  change depending on the page locale.
* When writing tests, try to run them against a staging or demo environment
  in addition to local testing. It's also worth running tests a few times to
  identify any intermittent failures that may need additional waits.

See also the `Web QA style guide`_ for Python based testing.

.. _Jasmine: https://jasmine.github.io/1.3/introduction.html
.. _Karma: https://karma-runner.github.io/
.. _Sinon: http://sinonjs.org/
.. _Selenium: http://docs.seleniumhq.org/
.. _pytest: http://pytest.org/latest/
.. _pytest documentation: http://pytest.org/latest/
.. _pytest markers: http://pytest.org/latest/example/markers.html
.. _pytest-selenium documentation: http://pytest-selenium.readthedocs.org/en/latest/index.html
.. _sensitive environment: http://pytest-selenium.readthedocs.org/en/latest/user_guide.html#sensitive-environments
.. _Selenium documentation: http://seleniumhq.github.io/selenium/docs/api/py/api.html
.. _Page Object: http://martinfowler.com/bliki/PageObject.html
.. _waits: http://seleniumhq.github.io/selenium/docs/api/py/webdriver_support/selenium.webdriver.support.wait.html
.. _expected conditions: http://seleniumhq.github.io/selenium/docs/api/py/webdriver_support/selenium.webdriver.support.expected_conditions.html
.. _Web QA style guide: https://wiki.mozilla.org/QA/Execution/Web_Testing/Docs/Automation/StyleGuide