# Release Tests for the Mozilla Addons Website

## Scope

This is an automation project maintained mostly by the AMO QA team. Its goal is to reduce the size of the manual test suites that need to be executed weekly on the AMO staging environment before a new AMO production release. The current tests focus mostly on the frontend site, covering areas such as search, homepage UI, add-on installation and other UI elements. The test suites are grouped into individual test files matching the site area they cover.

As the project continues to grow, test coverage will be extended to the Developer Hub pages as well.

## Prerequisites

You'll need to have the following programs installed on your system:

- Python 3
- geckodriver
  - if you extract geckodriver into your main Python directory, you can call the driver at runtime from the command line
  - on a Windows machine, Python is usually installed in `C:\Users\<username>\AppData\Local\Programs\Python`
- Docker for Windows
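
You can quickly confirm that each tool is available on your PATH before moving on; these are the tools' own version commands, nothing specific to this project:

```
python --version
geckodriver --version
docker --version
```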

## How to run the tests locally

### Clone the repository

You'll need to clone this repo using Git. If you do not know how to clone a GitHub repository, check out this help page from GitHub.

If you think you would like to contribute to the tests by writing or maintaining them in the future, it would be a good idea to create a fork of this repository first, and then clone that. GitHub also has great instructions for forking a repository.

Install the dependencies listed in `requirements.txt`:

- navigate to the project root directory and run `pip install -r requirements.txt`
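
If you'd rather keep the project's dependencies isolated from your system Python, a virtual environment works just as well; this is standard Python tooling rather than something this project requires:

```
python -m venv venv
source venv/bin/activate    # on Windows: venv\Scripts\activate
pip install -r requirements.txt
```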

### Running tests in the foreground

These tests are meant to be run against the AMO staging environment. We use pytest as our test runner. If you want to watch the tests run on your local machine, navigate to the project directory and run the following command:

```
pytest --driver Firefox --variables stage.json --variables users.json
```

You can also run the tests from a single test file by specifying the file name:

```
pytest test_search.py --driver Firefox --variables stage.json --variables users.json
```

Or you can run a specific test by name:

```
pytest test_search.py::test_name_of_choice --driver Firefox --variables stage.json --variables users.json
```
- note that you need to have all the requirements installed for this to work
- we use the pytest-variables plugin (the `--variables` option) to store reusable test data; a sketch of how that data reaches a test follows this list
- make sure you have Firefox Nightly installed on your machine if you want the tests to launch in the foreground
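
pytest-variables merges every `--variables` file into a single dict and exposes it through a `variables` fixture, while pytest-selenium provides the `selenium` driver fixture. The `base_url` key below is hypothetical; check `stage.json` for the keys this project actually defines:

```python
# test_example.py — illustrative only; the "base_url" key is an assumption,
# not necessarily part of this project's variables schema.
def test_homepage_loads(selenium, variables):
    # "variables" is the merged dict built from every --variables file
    selenium.get(variables["base_url"])
    # "selenium" is the WebDriver instance managed by pytest-selenium
    assert "Add-ons" in selenium.title
```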

### Running tests on selenium-standalone with Docker and PowerShell

Before starting, make sure that Docker is up and running and that you have switched to Windows containers.

- to make the container switch, click on the Docker icon in the system tray and select "Switch to Windows containers"

1. Build the selenium-standalone image based on the Dockerfile instructions:

   ```
   docker image build -t firefox-standalone:latest .
   ```

   - note that the process can take a while; you will know that the image was built successfully when docker exits without any errors in the build logs

2. Once the image is built successfully, you can start a container based on it:

   ```
   docker run -p 4444:4444 --shm-size 2g --rm firefox-standalone:latest
   ```

   - the container is successfully initialized if you see `Selenium Server is up and running on port 4444` as the last log entry
   - you can also load `localhost:4444` in your browser and make sure you see the Selenium-standalone homepage

3. To run the tests inside the selenium-standalone container, you need to point pytest to port 4444:

   ```
   pytest test_name.py --driver Remote --port 4444 --capability browserName firefox --variables stage.json
   ```

   - we use `--driver Remote` and `--port 4444` because we want to tell our tests to run against the Selenium-standalone server inside our container
   - the tests will run headless (the browser should not open); if the browser opens, your setup might not be correct
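
For context, `--driver Remote` with `--port 4444` is roughly what you would get by opening a remote WebDriver session yourself. A minimal sketch of the equivalent wiring, assuming the port mapping above and a Selenium 3-style `/wd/hub` endpoint:

```python
# Illustrative only — pytest-selenium handles this wiring for you.
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",  # the container's Selenium server
    options=Options(),  # request a Firefox session
)
driver.get("https://addons.allizom.org")  # AMO staging environment
driver.quit()
```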

## Adding a test

The tests are written in Python using the Page Object Model (POM). The plugin we use for this is called pypom. Please read its documentation for good examples of how to use the Page Object Model when writing tests.
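
To illustrate the pattern, here is a sketch of a page object in the pypom style; the class name, locator and URL template are hypothetical rather than taken from this project's page objects:

```python
# Hypothetical page object; locator and URL template are illustrative.
from pypom import Page
from selenium.webdriver.common.by import By

class Home(Page):
    URL_TEMPLATE = "/"  # appended to the base_url the page is constructed with

    _search_field_locator = (By.CLASS_NAME, "SearchForm-query")

    @property
    def loaded(self):
        # open() and wait_for_page_to_load() poll this property
        return self.is_element_displayed(*self._search_field_locator)

    def search_for(self, term):
        self.find_element(*self._search_field_locator).send_keys(term)
```

A test would then drive the page with something like `Home(selenium, base_url).open().search_for("privacy")`.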

The pytest plugin that we use for running tests, pytest-selenium, has a number of advanced command line options available too; see its full documentation for details.

## Mobile and Desktop testing

If you would like to add or edit tests, please keep in mind that they are run at both a mobile resolution and a desktop resolution. The mobile resolution is 738x414 (iPhone 7+); the desktop resolution is 1920x1080. Your tests should work at both.
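
One way to cover both resolutions is to parametrize the browser window size in a fixture. The fixture below is a hypothetical sketch, not this project's actual conftest:

```python
# Hypothetical conftest.py fixture; the project may wire resolutions differently.
import pytest

@pytest.fixture(params=[(738, 414), (1920, 1080)], ids=["mobile", "desktop"])
def resized_selenium(request, selenium):
    width, height = request.param
    selenium.set_window_size(width, height)  # standard Selenium WebDriver call
    return selenium
```

A test that requests `resized_selenium` instead of `selenium` then runs once per resolution.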

## Debugging a failure

Whether a test passes or fails, the run will produce an HTML report with detailed information about the test run; if a test fails, the report also includes geckodriver logs, terminal output, and a screenshot of the browser at the moment of failure. We use a pytest plugin called pytest-html to create this report. The report can be found in the project directory, is named ui-test.html, and should be viewed in a browser.
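
If a local run does not produce the report, you can request it explicitly. The flags below come from pytest-html itself; whether this project already sets them (for example in `pytest.ini`) is an assumption:

```
pytest --driver Firefox --variables stage.json --html=ui-test.html --self-contained-html
```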