Mirror of https://github.com/microsoft/nutter.git

merge into master (#6)

* initial commit
* readme update
* read me update 2
* Create pythonpackage.yml
* - update python version
* Create pythonpublish.yml
* Initial commit of runtime
* Linting and license update (#5)
* lint and license
* linting fixes
* remove .vs

This commit is contained in:
Parent: cce6959cbc
Commit: 0191e626a7
pythonpackage.yml

@@ -0,0 +1,34 @@
name: Python package

on: [push]

jobs:
  build:

    runs-on: ubuntu-latest
    strategy:
      max-parallel: 4
      matrix:
        python-version: [3.5]

    steps:
    - uses: actions/checkout@v1
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v1
      with:
        python-version: ${{ matrix.python-version }}
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt
    - name: Lint with flake8
      run: |
        pip install flake8
        # stop the build if there are Python syntax errors or undefined names
        flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
        # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
        flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
    - name: Test with pytest
      run: |
        pip install pytest
        pytest
pythonpublish.yml

@@ -0,0 +1,26 @@
name: Upload Nutter Package

on:
  release:
    types: [created]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v1
    - name: Set up Python
      uses: actions/setup-python@v1
      with:
        python-version: '3.x'
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install setuptools wheel twine
    - name: Build and publish
      env:
        TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }}
        TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
      run: |
        python setup.py sdist bdist_wheel
        twine upload dist/*
.gitignore

@@ -102,3 +102,6 @@ venv.bak/

# mypy
.mypy_cache/

# Visual Studio
.vs/
README.md (296 changed lines)

@@ -1,17 +1,289 @@
# Nutter

Testing Framework for Databricks Notebooks

## Overview
The Nutter framework makes it easy to test Databricks notebooks. The framework enables a simple inner dev loop and also integrates easily with Azure DevOps Build/Release pipelines, among others. When data or ML engineers want to test a notebook, they simply create a test notebook called *test_*<notebook_under_test>.

The tests can be run from within that notebook or executed from the Nutter CLI, which is useful for integrating into Build/Release pipelines.

The following defines a single test fixture named 'MyTestFixture' that has one test case named 'test_name':

``` Python
from runtime.nutterfixture import NutterFixture, tag

class MyTestFixture(NutterFixture):
   def run_test_name(self):
      dbutils.notebook.run('notebook_under_test', 600, args)

   def assertion_test_name(self):
      some_tbl = sqlContext.sql('SELECT COUNT(*) AS total FROM sometable')
      first_row = some_tbl.first()
      assert (first_row[0] == 1)

result = MyTestFixture().execute_tests()
print(result.to_string())
# Comment out the next line (result.exit(dbutils)) to see the test result report from within the notebook
result.exit(dbutils)
```

To execute the test from within the test notebook, simply run the cell containing the above code. At the current time, in order to see the test result below, you have to comment out the call to result.exit(dbutils). That call is required to send the results when the test is run from the CLI, so do not forget to uncomment it after testing locally.

```
Notebook: (local) - Lifecycle State: N/A, Result: N/A
============================================================
PASSING TESTS
------------------------------------------------------------
test_name (19.43149897100011 seconds)


============================================================
```

## Components
Nutter has two main components:

1. Nutter Runner - the server-side component, installed as a library on the Databricks cluster
2. Nutter CLI - the client CLI, which can be installed both on a developer's laptop and on a build agent

## Nutter Runner
The Nutter Runner is simply a base Python class, NutterFixture, that test fixtures implement. The runner is installed as a library on the Databricks cluster. The NutterFixture base class can then be imported in a test notebook and implemented by a test fixture:

``` Python
from runtime.nutterfixture import NutterFixture, tag

class MyTestFixture(NutterFixture):
   …
```

To run the tests:

``` Python
result = MyTestFixture().execute_tests()
```

To view the results from within the test notebook:

``` Python
print(result.to_string())
```

To return the test results to the Nutter CLI:

``` Python
result.exit(dbutils)
```

__Note:__ Behind the scenes, result.exit calls dbutils.notebook.exit, passing the serialized TestResults back to the CLI. At the current time, print statements do not work when dbutils.notebook.exit is called in a notebook, even if they are written prior to the call. For this reason, you must *temporarily* comment out result.exit(dbutils) when running the tests locally.

### Test Cases
A test fixture can contain one or more test cases. Test cases are discovered when execute_tests() is called on the test fixture. Every test case is made up of two required and two optional methods, discovered by the following convention: prefix_testname, where the valid prefixes are before_, run_, assertion_, and after_. A test fixture that has run_fred and assertion_fred methods has one test case called 'fred'. The test case methods are:

* before_(testname) - (optional) if provided, runs prior to the 'run_' method. This method can be used to set up any test pre-conditions
* run_(testname) - (required) runs after 'before_' if provided, otherwise runs first. This method typically runs the notebook under test
* assertion_(testname) - (required) runs after 'run_'. This method typically contains the test assertions
* after_(testname) - (optional) if provided, runs after 'assertion_'. This method is typically used to clean up any test data used by the test
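The naming convention above can be sketched in plain Python. This is a simplified illustration of the discovery idea, not Nutter's actual implementation; the names `discover_test_cases` and `Demo` are hypothetical:

```python
# Simplified sketch of convention-based discovery: methods named
# run_<name> / assertion_<name> (plus optional before_<name> / after_<name>)
# are grouped into one test case per <name>.
PREFIXES = ('before_', 'run_', 'assertion_', 'after_')

def discover_test_cases(fixture_class):
    cases = {}
    for attr in dir(fixture_class):
        if attr in ('before_all', 'after_all'):
            continue  # fixture-level hooks, not test cases
        for prefix in PREFIXES:
            if attr.startswith(prefix):
                cases.setdefault(attr[len(prefix):], set()).add(prefix)
    # a complete test case needs both a run_ and an assertion_ method
    return {name for name, found in cases.items()
            if {'run_', 'assertion_'} <= found}

class Demo:
    def run_fred(self): pass
    def assertion_fred(self): pass
    def run_orphan(self): pass  # no assertion_orphan, so not a test case

print(discover_test_cases(Demo))  # {'fred'}
```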
A test fixture can have multiple test cases. The following example shows a fixture called MultiTestFixture with two test cases: 'test_case_1' and 'test_case_2' (assertion code omitted for brevity):

``` Python
from runtime.nutterfixture import NutterFixture, tag

class MultiTestFixture(NutterFixture):
   def run_test_case_1(self):
      dbutils.notebook.run('notebook_under_test', 600, args)

   def assertion_test_case_1(self):
      …

   def run_test_case_2(self):
      dbutils.notebook.run('notebook_under_test', 600, args)

   def assertion_test_case_2(self):
      …

result = MultiTestFixture().execute_tests()
print(result.to_string())
result.exit(dbutils)
```

### before_all and after_all
Test fixtures can also have a before_all() method, which runs prior to all tests, and an after_all() method, which runs after all tests.

``` Python
from runtime.nutterfixture import NutterFixture, tag

class MultiTestFixture(NutterFixture):
   def before_all(self):
      …

   def run_test_case_1(self):
      dbutils.notebook.run('notebook_under_test', 600, args)

   def assertion_test_case_1(self):
      …

   def after_all(self):
      …
```

### Installing the Nutter Runner on Azure Databricks
Perform the following steps to install the Nutter wheel file on your Azure Databricks cluster:

1. Open your Azure Databricks workspace
2. Click the 'Clusters' link (on the left)
3. Click the cluster you wish to install Nutter on
4. Click 'Libraries' (at the top)
5. Click 'Install New'
6. Drag and drop the Nutter .whl file

## Nutter CLI

### Getting Started
Install the Nutter CLI from the source.

``` bash
pip install setuptools
git clone https://github.com/microsoft/nutter
cd nutter
python setup.py bdist_wheel
cd dist
pip install nutter-<LATEST_VERSION>-py3-none-any.whl
```

__Note:__ It's recommended to install the Nutter CLI in a virtual environment.

Set the environment variables.

Linux

``` bash
export DATABRICKS_HOST=<HOST>
export DATABRICKS_TOKEN=<TOKEN>
```

Windows PowerShell

``` powershell
$env:DATABRICKS_HOST="HOST"
$env:DATABRICKS_TOKEN="TOKEN"
```

__Note:__ For more information about personal access tokens, review [Databricks API Authentication](https://docs.azuredatabricks.net/dev-tools/api/latest/authentication.html).

## Examples

### 1. Listing Test Notebooks

The following command lists all test notebooks in the folder ```/dataload```:

``` bash
nutter list /dataload
```

__Note:__ The Nutter CLI lists only test notebooks that follow the naming convention for Nutter test notebooks.

By default, the Nutter CLI lists test notebooks in the folder, ignoring sub-folders.

You can list all test notebooks in the folder structure using the ```--recursive``` flag:

``` bash
nutter list /dataload --recursive
```

### 2. Executing Test Notebooks

The ```run``` command schedules the execution of test notebooks and waits for their result.

### Run single test notebook
The following command executes the test notebook ```/dataload/test_sourceLoad``` on the cluster ```0123-12334-tonedabc```:

```bash
nutter run dataload/test_sourceLoad --cluster_id 0123-12334-tonedabc
```

__Note:__ In Azure Databricks you can get the cluster ID by selecting a cluster name from the Clusters tab and clicking on the JSON view.

### Run multiple test notebooks

The Nutter CLI supports the execution of multiple notebooks via name pattern matching. The Nutter CLI applies the pattern to the name of the test notebook **without** the *test_* prefix. The CLI also expects you to omit the prefix when specifying the pattern.

Say the *dataload* folder has the following test notebooks: *test_srcLoad* and *test_srcValidation*. The following command will result in the execution of both tests:

```bash
nutter run dataload/src* --cluster_id 0123-12334-tonedabc
```

In addition, if you have tests in a hierarchical folder structure, you can recursively execute all tests by setting the ```--recursive``` flag.

The following command will execute all tests in the folder structure within the folder *dataload*:

```bash
nutter run dataload/ --cluster_id 0123-12334-tonedabc --recursive
```

### Parallel Execution

By default, the Nutter CLI executes the test notebooks sequentially. The execution is a blocking operation that returns when the job reaches a terminal state or when the timeout expires.

You can execute multiple notebooks in parallel by increasing the level of parallelism. The ```--max_parallel_tests``` flag controls the level of parallelism and determines the maximum number of tests that will be executed at the same time.

The following command executes all the tests in the *dataload* folder structure, and submits and waits for the execution of at most two tests in parallel:

```bash
nutter run dataload/ --cluster_id 0123-12334-tonedabc --recursive --max_parallel_tests 2
```

__Note:__ Running test notebooks in parallel introduces the risk of data race conditions when two or more test notebooks modify the same tables or files at the same time. Before increasing the level of parallelism, make sure that your test cases modify only tables or files that are used or referenced within the scope of the test notebook.

## Nutter CLI Syntax and Flags

*Run Command*

```
SYNOPSIS
    nutter run TEST_PATTERN CLUSTER_ID <flags>

POSITIONAL ARGUMENTS
    TEST_PATTERN
    CLUSTER_ID
```

```
FLAGS
    --timeout              Execution timeout in seconds. Default: 120
    --junit_report         Create a JUnit XML report from the test results.
    --tags_report          Create a CSV report from the test results that includes the test cases' tags.
    --max_parallel_tests   Sets the level of parallelism for test notebook execution.
    --recursive            Executes all tests in the hierarchical folder structure.
```

__Note:__ You can also pass the positional arguments as flags.

*List Command*

```
NAME
    nutter list

SYNOPSIS
    nutter list PATH <flags>

POSITIONAL ARGUMENTS
    PATH
```

```
FLAGS
    --recursive   Lists all tests in the hierarchical folder structure.
```

__Note:__ You can also pass the positional arguments as flags.

## Integrating Nutter with Azure DevOps

You can run the Nutter CLI within an Azure DevOps pipeline. The Nutter CLI exits with a non-zero code when a test case fails or when the execution of a test notebook is not successful.

For full integration of the test results with Azure DevOps, you can set the ```--junit_report``` flag. When this flag is set, the Nutter CLI outputs the results of the test cases as a JUnit XML compliant file.
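As an illustration, a pipeline could invoke the CLI and then publish the JUnit file. This is a hedged sketch, not from this repository: the install step, variable names, and the report file glob are assumptions, while `PublishTestResults@2` is the standard Azure DevOps task:

```yaml
# Hypothetical Azure DevOps pipeline fragment (names and paths assumed)
steps:
- script: |
    pip install nutter
    nutter run /dataload/ --cluster_id $(CLUSTER_ID) --recursive --junit_report
  displayName: Run Nutter tests
  env:
    DATABRICKS_HOST: $(DATABRICKS_HOST)
    DATABRICKS_TOKEN: $(DATABRICKS_TOKEN)

- task: PublishTestResults@2
  condition: always()   # publish results even when a test fails the step
  inputs:
    testResultsFormat: JUnit
    testResultsFiles: '**/test-*.xml'   # adjust to the CLI's report file name
```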
# Contributing
## Using VS Code
- There's a known issue with VS Code and the latest version of pytest.
- Please make sure that you install pytest 5.0.1.
- If you installed pytest using VS Code, you are likely using the incorrect version. Run the following command to fix it:

``` bash
pip install --force-reinstall pytest==5.0.1
```

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

## Creating the wheel file and testing it locally
1. Change directory to the root that contains setup.py
2. Update the version in setup.py
3. Run the following command: python3 setup.py sdist bdist_wheel
4. (optional) Install the wheel locally by running: python3 -m pip install <path-to-whl-file>
@@ -0,0 +1,98 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

import logging
import sys

from common.api import NutterStatusEvents
from common.statuseventhandler import EventHandler


class ConsoleEventHandler(EventHandler):
    def __init__(self, debug):
        self._debug = debug
        self._listed_tests = 0
        self._filtered_tests = 0
        self._done_tests = 0
        self._scheduled_tests = 0
        super().__init__()

    def handle(self, event_queue):
        while True:
            self._get_and_handle(event_queue)

    def _get_and_handle(self, event_queue):
        try:
            event_instance = event_queue.get()
            if self._debug:
                logging.debug(
                    'Message from queue: {}'.format(event_instance))
                return
            output = self._get_output(event_instance)
            self._print_output(output)
        except Exception as ex:
            print(ex)
            logging.debug(ex)
        finally:
            event_queue.task_done()

    def _print_output(self, output):
        print(output, end='', file=sys.stdout, flush=True)

    def _get_output(self, event_instance):
        event_output = self._get_event_output(event_instance)
        if event_output is None:
            return
        return '--> {}\n'.format(event_output)

    def _get_event_output(self, event_instance):
        if event_instance.event is NutterStatusEvents.TestsListing:
            return self._handle_testlisting(event_instance)
        if event_instance.event is NutterStatusEvents.TestsListingFiltered:
            return self._handle_testlistingfiltered(event_instance)
        if event_instance.event is NutterStatusEvents.TestsListingResults:
            return self._handle_testlistingresults(event_instance)
        if event_instance.event is NutterStatusEvents.TestScheduling:
            return self._handle_testscheduling(event_instance)
        if event_instance.event is NutterStatusEvents.TestExecuted:
            return self._handle_testsexecuted(event_instance)
        if event_instance.event is NutterStatusEvents.TestExecutionResult:
            return self._handle_testsexecutionresult(event_instance)
        if event_instance.event is NutterStatusEvents.TestExecutionRequest:
            return self._handle_testsexecutionrequest(event_instance)
        return ''

    def _handle_testlisting(self, event):
        return 'Looking for tests in {}'.format(event.data)

    def _handle_testlistingfiltered(self, event):
        self._filtered_tests = event.data
        return '{} tests matched the pattern'.format(self._filtered_tests)

    def _handle_testlistingresults(self, event):
        return '{} tests found'.format(event.data)

    def _handle_testsexecuted(self, event):
        return '{} Success:{} {}'.format(event.data.notebook_path,
                                         event.data.success,
                                         event.data.notebook_run_page_url)

    def _handle_testsexecutionrequest(self, event):
        return 'Execution request: {}'.format(event.data)

    def _handle_testscheduling(self, event):
        num_of_tests = self._num_of_test_to_execute()
        self._scheduled_tests += 1
        return '{} of {} tests scheduled for execution'.format(self._scheduled_tests,
                                                               num_of_tests)

    def _handle_testsexecutionresult(self, event):
        num_of_tests = self._num_of_test_to_execute()
        self._done_tests += 1
        return '{} of {} tests executed'.format(self._done_tests, num_of_tests)

    def _num_of_test_to_execute(self):
        if self._filtered_tests > 0:
            return self._filtered_tests
        return self._listed_tests
@@ -0,0 +1,195 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

import fire
import logging
import os
import datetime

import common.api as api
from common.apiclient import InvalidConfigurationException

import common.resultsview as view
from .eventhandlers import ConsoleEventHandler
from .resultsvalidator import ExecutionResultsValidator
from .reportsman import ReportWriters
from . import reportsman as reports

__version__ = '0.1.31'

BUILD_NUMBER_ENV_VAR = 'NUTTER_BUILD_NUMBER'


def get_cli_version():
    build_number = os.environ.get(BUILD_NUMBER_ENV_VAR)
    if build_number:
        return '{}.{}'.format(__version__, build_number)
    return __version__


def get_cli_header():
    header = 'Nutter Version {}\n'.format(get_cli_version())
    header += '+' * 50
    header += '\n'

    return header


class NutterCLI(object):

    def __init__(self, debug=False, log_to_file=False, version=False):
        self._logger = logging.getLogger('NutterCLI')
        self._handle_show_version(version)

        # CLI-only logger so the output is not dictated
        # by the logging configuration of all the other components
        self._set_debugging(debug, log_to_file)
        self._print_cli_header()
        self._set_nutter(debug)
        super().__init__()

    def run(self, test_pattern, cluster_id,
            timeout=120, junit_report=False,
            tags_report=False, max_parallel_tests=1,
            recursive=False):
        try:
            logging.debug(""" Running tests. test_pattern: {} cluster_id: {} timeout: {}
                          junit_report: {} max_parallel_tests: {}
                          tags_report: {} recursive:{} """
                          .format(test_pattern, cluster_id, timeout,
                                  junit_report, max_parallel_tests,
                                  tags_report, recursive))

            logging.debug("Executing test(s): {}".format(test_pattern))

            if self._is_a_test_pattern(test_pattern):
                logging.debug('Executing pattern')
                results = self._nutter.run_tests(
                    test_pattern, cluster_id, timeout, max_parallel_tests, recursive)
                self._nutter.events_processor_wait()
                self._handle_results(results, junit_report, tags_report)
                return

            logging.debug('Executing single test')
            result = self._nutter.run_test(test_pattern, cluster_id,
                                           timeout)

            self._handle_results([result], junit_report, tags_report)

        except Exception as error:
            self._logger.fatal(error)
            exit(1)

    def list(self, path, recursive=False):
        try:
            logging.debug("Listing tests. path: {}".format(path))
            results = self._nutter.list_tests(path, recursive)
            self._nutter.events_processor_wait()
            self._display_list_results(results)
        except Exception as error:
            self._logger.fatal(error)
            exit(1)

    def _handle_results(self, results, junit_report, tags_report):
        self._display_test_results(results)

        report_man = self._get_report_writer_manager(junit_report, tags_report)
        self._handle_reports(report_man, results)

        ExecutionResultsValidator().validate(results)

    def _get_report_writer_manager(self, junit_report, tags_report):
        writers = 0
        if junit_report:
            writers = ReportWriters.JUNIT
        if tags_report:
            writers = writers + ReportWriters.TAGS

        return reports.get_report_writer_manager(writers)

    def _handle_reports(self, report_manager, exec_results):
        if not report_manager.has_providers():
            logging.debug('No providers were registered.')
            return
        for provider in report_manager.providers_names():
            print('Writing {} report.'.format(provider))

        for exec_result in exec_results:
            t_result = api.to_testresults(
                exec_result.notebook_result.exit_output)
            if t_result is None:
                print('Warning:')
                print('\tThe output of {} is missing or the format is invalid.'.format(
                    exec_result.notebook_path))
                continue
            report_manager.add_result(exec_result.notebook_path, t_result)

        for file_name in report_manager.write():
            print('File {} written'.format(file_name))

    def _display_list_results(self, results):
        list_results_view = view.get_list_results_view(results)
        view.print_results_view(list_results_view)

    def _display_test_results(self, results):
        results_view = view.get_run_results_views(results)
        view.print_results_view(results_view)

    def _is_a_test_pattern(self, pattern):
        segments = pattern.split('/')
        if len(segments) > 0:
            search_pattern = segments[len(segments)-1]
            if search_pattern.lower().startswith('test_'):
                return False
            return True
        logging.fatal(
            """ Invalid argument.
            The value must be the full path to the test or a pattern """)

    def _print_cli_header(self):
        print(get_cli_header())

    def _set_nutter(self, debug):
        try:
            event_handler = ConsoleEventHandler(debug)
            self._nutter = api.get_nutter(event_handler)
        except InvalidConfigurationException as ex:
            logging.debug(ex)
            self._print_config_error_and_exit()

    def _handle_show_version(self, version):
        if not version:
            return
        print(self._get_version_label())
        exit(0)

    def _get_version_label(self):
        version = get_cli_version()
        return 'Nutter Version {}'.format(version)

    def _print_config_error_and_exit(self):
        print(""" Invalid configuration.\n
        DATABRICKS_HOST and DATABRICKS_TOKEN
        environment variables are not set """)
        exit(1)

    def _set_debugging(self, debug, log_to_file):
        if debug:
            log_name = None
            if log_to_file:
                log_name = 'nutter-exec-{0:%Y.%m.%d.%H%M%S%f}.log'.format(
                    datetime.datetime.utcnow())
            logging.basicConfig(
                filename=log_name,
                format="%(asctime)s:%(levelname)s:%(message)s",
                level=logging.DEBUG)


def main():
    fire.Fire(NutterCLI)


if __name__ == '__main__':
    main()
@ -0,0 +1,67 @@
|
|||
"""
|
||||
Copyright (c) Microsoft Corporation.
|
||||
Licensed under the MIT license.
|
||||
"""
|
||||
|
||||
import common.api as nutter_api
|
||||
from enum import Enum
|
||||
from enum import IntEnum
|
||||
|
||||
import logging
|
||||
|
||||
|
||||
def get_report_writer_manager(writers):
|
||||
return ReportWriterManager(writers)
|
||||
|
||||
|
||||
class ReportWriterManager(object):
|
||||
|
||||
def __init__(self, report_writers):
|
||||
self._set_providers(report_writers)
|
||||
super().__init__()
|
||||
|
||||
def _set_providers(self, report_writers):
|
||||
self._providers = {}
|
||||
logging.debug(
|
||||
'Setting the following report writers: {}'.format(report_writers))
|
||||
|
||||
if ReportWriters.JUNIT & report_writers:
|
||||
writer = nutter_api.get_report_writer(
|
||||
ReportWritersTypes.JUNIT.value)
|
||||
self._providers[ReportWritersTypes.JUNIT] = writer
|
||||
|
||||
if ReportWriters.TAGS & report_writers:
|
||||
writer = nutter_api.get_report_writer(
|
||||
ReportWritersTypes.TAGS.value)
|
||||
self._providers[ReportWritersTypes.TAGS] = writer
|
||||
|
||||
def add_result(self, notebook_path, testresult):
|
||||
for key, provider in self._providers.items():
|
||||
logging.debug('Adding a test result to {} providers.'.format(key))
|
||||
provider.add_result(notebook_path, testresult)
|
||||
|
||||
def write(self):
|
||||
file_names = []
|
||||
for key, provider in self._providers.items():
|
||||
if not provider.has_data():
|
||||
logging.debug('No test results to write for {}.'.format(key))
|
||||
continue
|
||||
file_name = provider.write()
|
||||
file_names.append(file_name)
|
||||
return file_names
|
||||
|
||||
def providers_names(self):
|
||||
return [key for key, value in self._providers.items()]
|
||||
|
||||
def has_providers(self):
|
||||
return len(self._providers) > 0
|
||||
|
||||
|
||||
class ReportWritersTypes(Enum):
|
||||
JUNIT = 'JunitXMLReportWriter'
|
||||
TAGS = 'TagsReportWriter'
|
||||
|
||||
|
||||
class ReportWriters(IntEnum):
|
||||
JUNIT = 1
|
||||
TAGS = 2
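`ReportWriters` is an IntEnum of bit flags, which is why `_set_providers` tests the mask with bitwise `&` while the CLI builds it by addition. A minimal self-contained sketch of that selection logic, with the writer classes replaced by their names for illustration:

```python
from enum import IntEnum

class ReportWriters(IntEnum):
    JUNIT = 1  # bit 0
    TAGS = 2   # bit 1

def selected_writers(mask):
    # Mirrors _set_providers: check each flag bit with bitwise AND.
    selected = []
    if ReportWriters.JUNIT & mask:
        selected.append('JunitXMLReportWriter')
    if ReportWriters.TAGS & mask:
        selected.append('TagsReportWriter')
    return selected

# Both flags set, as when --junit_report and --tags_report are passed:
print(selected_writers(ReportWriters.JUNIT + ReportWriters.TAGS))
# ['JunitXMLReportWriter', 'TagsReportWriter']
```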
@@ -0,0 +1,66 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

from common.apiclientresults import ExecuteNotebookResult
from common.testresult import TestResults
import logging


class ExecutionResultsValidator(object):
    def validate(self, results):
        if not isinstance(results, list):
            raise ValueError("Invalid results. Expected a list")

        for result in results:
            self._validate_result(result)

    def _validate_result(self, result):
        if not isinstance(result, ExecuteNotebookResult):
            raise ValueError("Expected ExecuteNotebookResult")
        if result.is_error:
            msg = """ The job is not in a successful terminal state.
                      Life cycle state:{} """.format(result.task_result_state)
            raise JobExecutionFailureException(message=msg)
        if result.notebook_result.is_error:
            msg = 'The notebook failed. result state:{}'.format(
                result.notebook_result.result_state)
            raise NotebookExecutionFailureException(message=msg)

        self._validate_test_results(result.notebook_result.exit_output)

    def _validate_test_results(self, exit_output):
        test_results = None
        try:
            test_results = TestResults().deserialize(exit_output)
        except Exception as ex:
            logging.debug(ex)
            msg = """ The notebook exit output value is invalid or missing.
                      Additional info: {} """.format(str(ex))
            raise InvalidNotebookOutputException(msg)

        for test_result in test_results.results:
            if not test_result.passed:
                msg = 'The Test Case: {} failed.'.format(test_result.test_name)
                raise TestCaseFailureException(msg)


class TestCaseFailureException(Exception):
    def __init__(self, message):
        super().__init__(message)


class JobExecutionFailureException(Exception):
    def __init__(self, message):
        super().__init__(message)


class NotebookExecutionFailureException(Exception):
    def __init__(self, message):
        super().__init__(message)


class InvalidNotebookOutputException(Exception):
    def __init__(self, message):
        super().__init__(message)
@@ -0,0 +1,302 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

from abc import abstractmethod, ABCMeta
from .testresult import TestResults
from . import scheduler
from . import apiclient
from .resultreports import JunitXMLReportWriter, TestResultsReportWriter
from .statuseventhandler import StatusEventsHandler

import enum
import logging

import re
import importlib


def get_nutter(event_handler=None):
    return Nutter(event_handler)


def get_junit_writer():
    return JunitXMLReportWriter()


def get_report_writer(writer):
    module = importlib.import_module('common.resultreports')
    report_writer = getattr(module, writer)
    instance = report_writer()
    if not isinstance(instance, TestResultsReportWriter):
        raise ValueError(
            'The report writer must be a class derived from TestResultsReportWriter')
    return instance


def to_testresults(exit_output):
    if not exit_output:
        return None
    try:
        return TestResults().deserialize(exit_output)
    except Exception as ex:
        error = 'error while creating result from {}. Error: {}'.format(
            exit_output, ex)
        logging.debug(error)
        return None


class NutterApi(object):
    """ Abstract contract for listing and running Nutter tests. """

    __metaclass__ = ABCMeta

    @abstractmethod
    def list_tests(self, path, recursive):
        pass

    @abstractmethod
    def run_tests(self, pattern, cluster_id, timeout, max_parallel_tests):
        pass

    @abstractmethod
    def run_test(self, testpath, cluster_id, timeout):
        pass


class Nutter(NutterApi):
    """ Default NutterApi implementation backed by the Databricks REST API. """

    def __init__(self, event_handler=None):
        self.dbclient = apiclient.databricks_client()
        self._events_processor = self._get_status_events_handler(event_handler)
        super().__init__()

    def list_tests(self, path, recursive=False):
        tests = []
        for test in self._list_tests(path, recursive):
            tests.append(test)

        self._add_status_event(
            NutterStatusEvents.TestsListingResults, len(tests))

        return tests

    def run_test(self, testpath, cluster_id, timeout=120):
        self._add_status_event(NutterStatusEvents.TestExecutionRequest, testpath)
        test_notebook = TestNotebook.from_path(testpath)
        if test_notebook is None:
            raise InvalidTestException

        result = self.dbclient.execute_notebook(
            test_notebook.path, cluster_id, timeout=timeout)

        return result

    def run_tests(self, pattern, cluster_id,
                  timeout=120, max_parallel_tests=1, recursive=False):

        self._add_status_event(NutterStatusEvents.TestExecutionRequest, pattern)
        root, pattern_to_match = self._get_root_and_pattern(pattern)

        tests = self.list_tests(root, recursive)

        results = []
        if len(tests) == 0:
            return results

        pattern_matcher = TestNamePatternMatcher(pattern_to_match)
        filtered_notebooks = pattern_matcher.filter_by_pattern(tests)
        self._add_status_event(
            NutterStatusEvents.TestsListingFiltered, len(filtered_notebooks))

        return self._schedule_and_run(
            filtered_notebooks, cluster_id, max_parallel_tests, timeout)

    def events_processor_wait(self):
        if self._events_processor is None:
            return
        self._events_processor.wait()

    def _list_tests(self, path, recursive):
        self._add_status_event(NutterStatusEvents.TestsListing, path)
        workspace_objects = self.dbclient.list_objects(path)

        for notebook in workspace_objects.test_notebooks:
            yield TestNotebook(notebook.name, notebook.path)

        if not recursive:
            return

        for directory in workspace_objects.directories:
            for test in self._list_tests(directory.path, True):
                yield test

    def _get_status_events_handler(self, events_handler):
        if events_handler is None:
            return None
        processor = StatusEventsHandler(events_handler)
        logging.debug('Status events processor created')
        return processor

    def _add_status_event(self, name, status):
        if self._events_processor is None:
            return
        logging.debug('Status event. name:{} status:{}'.format(name, status))

        self._events_processor.add_event(name, status)

    def _get_root_and_pattern(self, pattern):
        segments = pattern.split('/')
        if len(segments) == 0:
            raise ValueError("Invalid pattern. The value must start with /")
        root = '/'.join(segments[:len(segments)-1])

        if root == '':
            root = '/'

        valid_pattern = segments[len(segments)-1]

        return root, valid_pattern

    def _schedule_and_run(self, test_notebooks, cluster_id,
                          max_parallel_tests, timeout):
        func_scheduler = scheduler.get_scheduler(max_parallel_tests)
        for test_notebook in test_notebooks:
            self._add_status_event(
                NutterStatusEvents.TestScheduling, test_notebook.path)
            logging.debug(
                'Scheduling execution of: {}'.format(test_notebook.path))
            func_scheduler.add_function(self._execute_notebook,
                                        test_notebook.path, cluster_id, timeout)
        return self._run_and_await(func_scheduler)

    def _execute_notebook(self, test_notebook_path, cluster_id, timeout):
        result = self.dbclient.execute_notebook(test_notebook_path,
                                                cluster_id, None, timeout)
        self._add_status_event(NutterStatusEvents.TestExecuted,
                               ExecutionResultEventData.from_execution_results(result))
        logging.debug('Executed: {}'.format(test_notebook_path))
        return result

    def _run_and_await(self, func_scheduler):
        logging.debug('Scheduler run and wait.')
        func_results = func_scheduler.run_and_wait()
        return self.__process_func_results(func_results)

    def __process_func_results(self, func_results):
        results = []
        for func_result in func_results:
            self._inspect_result(func_result)
            results.append(func_result.func_result)
        return results

    def _inspect_result(self, func_result):
        logging.debug('Processing function results.')

        self._add_status_event(NutterStatusEvents.TestExecutionResult, '{}'.format(
            func_result.exception is not None))

        if func_result.exception is not None:
            logging.debug('Exception:{}'.format(func_result.exception))
            raise func_result.exception


class TestNotebook(object):
    def __init__(self, name, path):
        if not self._is_valid_test_name(name):
            raise InvalidTestException

        self.name = name
        self.path = path
        self.test_name = name.split("_")[1]

    def __eq__(self, obj):
        is_equal = obj.name == self.name and obj.path == self.path
        return isinstance(obj, TestNotebook) and is_equal

    @classmethod
    def from_path(cls, path):
        name = cls._get_notebook_name_from_path(path)
        if not cls._is_valid_test_name(name):
            return None
        return cls(name, path)

    @classmethod
    def _is_valid_test_name(cls, name):
        if name is None:
            return False

        return name.lower().startswith('test_')

    @classmethod
    def _get_notebook_name_from_path(cls, path):
        segments = path.split('/')
        if len(segments) == 0:
            raise ValueError('Invalid path. The path must start with /')
        name = segments[len(segments)-1]
        return name


class TestNamePatternMatcher(object):
    def __init__(self, pattern):
        try:
            # '*' is an invalid regex in Python;
            # however, we want to treat it as "no filter"
            if pattern == '*' or pattern is None or pattern == '':
                self._pattern = None
                return
            re.compile(pattern)
        except re.error as ex:
            logging.debug('Pattern could not be compiled. {}'.format(ex))
            raise ValueError(
                """ The pattern provided is invalid.
                The pattern must start with an alphanumeric character """)
        self._pattern = pattern

    def filter_by_pattern(self, test_notebooks):
        results = []
        for test_notebook in test_notebooks:
            if self._pattern is None:
                results.append(test_notebook)
                continue

            search_result = re.search(self._pattern, test_notebook.test_name)
            if search_result is not None and search_result.end() > 0:
                results.append(test_notebook)
        return results


class ExecutionResultEventData():
    def __init__(self, notebook_path, success, notebook_run_page_url):
        self.success = success
        self.notebook_path = notebook_path
        self.notebook_run_page_url = notebook_run_page_url

    @classmethod
    def from_execution_results(cls, exec_results):
        notebook_run_page_url = exec_results.notebook_run_page_url
        notebook_path = exec_results.notebook_path
        try:
            success = not exec_results.is_any_error
        except Exception as ex:
            logging.debug('Error while creating the ExecutionResultEventData: %s', ex)
            success = False
        finally:
            return cls(notebook_path, success, notebook_run_page_url)


class NutterStatusEvents(enum.Enum):
    TestExecutionRequest = 1
    TestsListing = 2
    TestsListingFiltered = 3
    TestsListingResults = 4
    TestScheduling = 5
    TestExecuted = 6
    TestExecutionResult = 7


class InvalidTestException(Exception):
    pass
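The filtering behavior of `TestNamePatternMatcher` can be exercised in isolation: `*`, `None`, and the empty string disable filtering, while any other value is compiled as a regular expression and searched against each test name. A minimal standalone sketch of that rule (function and variable names here are illustrative, not the module's API):

```python
import re

def filter_test_names(pattern, test_names):
    # '*', None, and '' all mean "no filter" -- every name passes.
    if pattern in ('*', None, ''):
        return list(test_names)
    compiled = re.compile(pattern)  # raises re.error for an invalid pattern
    # Keep names where the pattern matches at least one character.
    return [name for name in test_names
            if (m := compiled.search(name)) is not None and m.end() > 0]

names = ['mytest', 'fred', 'mytest2']
print(filter_test_names('*', names))       # no filtering applied
print(filter_test_names('mytest', names))  # regex search per name
```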
@@ -0,0 +1,138 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

import uuid
import time
from databricks_api import DatabricksAPI
from . import authconfig as cfg, utils
from .apiclientresults import ExecuteNotebookResult, WorkspacePath
from .httpretrier import HTTPRetrier
import logging


def databricks_client():

    db = DatabricksAPIClient()

    return db


class DatabricksAPIClient(object):
    """ Thin client over the Databricks REST API. """

    def __init__(self):
        config = cfg.get_auth_config()
        self.min_timeout = 10

        if config is None:
            raise InvalidConfigurationException

        # TODO: remove the dependency on this API, and instead use httpclient/requests
        db = DatabricksAPI(host=config.host,
                           token=config.token)
        self.inner_dbclient = db

        # The retrier uses the recommended defaults
        # https://docs.microsoft.com/en-us/azure/databricks/dev-tools/api/latest/jobs
        self._retrier = HTTPRetrier()

    def list_notebooks(self, path):
        workspace_objects = self.list_objects(path)
        notebooks = workspace_objects.notebooks
        return notebooks

    def list_objects(self, path):
        objects = self.inner_dbclient.workspace.list(path)
        logging.debug('Creating WorkspacePath for path {}'.format(path))
        logging.debug('List response: \n\t{}'.format(objects))

        workspace_path_obj = WorkspacePath.from_api_response(objects)
        logging.debug('WorkspacePath created')

        return workspace_path_obj

    def execute_notebook(self, notebook_path, cluster_id,
                         notebook_params=None, timeout=120):
        if not notebook_path:
            raise ValueError("empty path")
        if not cluster_id:
            raise ValueError("empty cluster id")
        if timeout < self.min_timeout:
            raise ValueError(
                "Timeout must be greater than or equal to {}".format(self.min_timeout))
        if notebook_params is not None:
            if not isinstance(notebook_params, dict):
                raise ValueError("Parameters must be a dictionary")

        name = str(uuid.uuid1())
        ntask = self.__get_notebook_task(notebook_path, notebook_params)

        runid = self._retrier.execute(self.inner_dbclient.jobs.submit_run,
                                      run_name=name,
                                      existing_cluster_id=cluster_id,
                                      notebook_task=ntask,
                                      )

        if 'run_id' not in runid:
            raise NotebookTaskRunIDMissingException

        life_cycle_state, output = self.__pull_for_output(
            runid['run_id'], timeout)

        return ExecuteNotebookResult.from_job_output(output)

    def __pull_for_output(self, run_id, timeout):
        timedout = time.time() + timeout
        output = {}
        while time.time() < timedout:
            output = self._retrier.execute(
                self.inner_dbclient.jobs.get_run_output, run_id)
            logging.debug(output)

            lcs = utils.recursive_find(
                output, ['metadata', 'state', 'life_cycle_state'])

            # As per:
            # https://docs.azuredatabricks.net/api/latest/jobs.html#jobsrunlifecyclestate
            # all of these are terminal states
            if lcs == 'TERMINATED' or lcs == 'SKIPPED' or lcs == 'INTERNAL_ERROR':
                return lcs, output
            time.sleep(1)
        self._raise_timeout(output)

    def _raise_timeout(self, output):
        run_page_url = utils.recursive_find(
            output, ['metadata', 'run_page_url'])
        raise TimeOutException(
            """ Timeout while waiting for the result of a test.\n
            Check the status of the execution\n
            Run page URL: {} """.format(run_page_url))

    def __get_notebook_task(self, path, params):
        ntask = {}
        ntask['notebook_path'] = path
        base_params = []
        if params is not None:
            for key in params:
                param = {}
                param['key'] = key
                param['value'] = params[key]
                base_params.append(param)
            ntask['base_parameters'] = base_params

        return ntask


class NotebookTaskRunIDMissingException(Exception):
    pass


class InvalidConfigurationException(Exception):
    pass


class TimeOutException(Exception):
    pass
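The `__get_notebook_task` helper converts a plain parameter dict into the list of key/value entries that the Jobs API expects under `notebook_task.base_parameters`. That transformation can be sketched on its own (a simplified stand-in, not the client itself):

```python
def to_notebook_task(path, params=None):
    # Build the notebook_task payload; each parameter becomes a
    # {'key': ..., 'value': ...} entry, as the Jobs API expects.
    ntask = {'notebook_path': path}
    if params is not None:
        ntask['base_parameters'] = [
            {'key': k, 'value': v} for k, v in params.items()]
    return ntask

task = to_notebook_task('/Shared/test_mynotebook', {'env': 'dev'})
print(task)
```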
@@ -0,0 +1,170 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

from . import utils
from abc import ABCMeta
from .testresult import TestResults
import logging


class ExecuteNotebookResult(object):
    def __init__(self, life_cycle_state, notebook_path,
                 notebook_result, notebook_run_page_url):
        self.task_result_state = life_cycle_state
        self.notebook_path = notebook_path
        self.notebook_result = notebook_result
        self.notebook_run_page_url = notebook_run_page_url

    @classmethod
    def from_job_output(cls, job_output):
        life_cycle_state = utils.recursive_find(
            job_output, ['metadata', 'state', 'life_cycle_state'])
        notebook_path = utils.recursive_find(
            job_output, ['metadata', 'task', 'notebook_task', 'notebook_path'])
        notebook_run_page_url = utils.recursive_find(
            job_output, ['metadata', 'run_page_url'])
        notebook_result = NotebookOutputResult.from_job_output(job_output)

        return cls(life_cycle_state, notebook_path,
                   notebook_result, notebook_run_page_url)

    @property
    def is_error(self):
        # The assumption is that the task is in a terminal state.
        # The success state must be TERMINATED; all others are considered failures.
        return self.task_result_state != 'TERMINATED'

    @property
    def is_any_error(self):
        if self.is_error:
            return True
        if self.notebook_result.is_error:
            return True
        if self.notebook_result.nutter_test_results is None:
            return True

        for test_case in self.notebook_result.nutter_test_results.results:
            if not test_case.passed:
                return True
        return False


class NotebookOutputResult(object):
    def __init__(self, result_state, exit_output, nutter_test_results):
        self.result_state = result_state
        self.exit_output = exit_output
        self.nutter_test_results = nutter_test_results

    @classmethod
    def from_job_output(cls, job_output):
        exit_output = ''
        nutter_test_results = ''
        notebook_result_state = ''
        if 'error' in job_output:
            exit_output = job_output['error']

        if 'notebook_output' in job_output:
            notebook_result_state = utils.recursive_find(
                job_output, ['metadata', 'state', 'result_state'])

            if 'result' in job_output['notebook_output']:
                exit_output = job_output['notebook_output']['result']
                nutter_test_results = cls._get_nutter_test_results(exit_output)

        return cls(notebook_result_state, exit_output, nutter_test_results)

    @property
    def is_error(self):
        # https://docs.azuredatabricks.net/dev-tools/api/latest/jobs.html#jobsrunresultstate
        return self.result_state != 'SUCCESS' and not self.is_run_from_notebook

    @property
    def is_run_from_notebook(self):
        # https://docs.azuredatabricks.net/dev-tools/api/latest/jobs.html#jobsrunresultstate
        return self.result_state == 'N/A'

    @classmethod
    def _get_nutter_test_results(cls, exit_output):
        nutter_test_results = cls._to_nutter_test_results(exit_output)
        if nutter_test_results is None:
            return None
        return nutter_test_results

    @classmethod
    def _to_nutter_test_results(cls, exit_output):
        if not exit_output:
            return None
        try:
            return TestResults().deserialize(exit_output)
        except Exception as ex:
            error = 'error while creating result from {}. Error: {}'.format(
                exit_output, ex)
            logging.debug(error)
            return None


class WorkspacePath(object):
    def __init__(self, notebooks, directories):
        self.notebooks = notebooks
        self.directories = directories
        self.test_notebooks = self._set_test_notebooks()

    @classmethod
    def from_api_response(cls, objects):
        notebooks = cls._set_notebooks(objects)
        directories = cls._set_directories(objects)
        return cls(notebooks, directories)

    @classmethod
    def _set_notebooks(cls, objects):
        if 'objects' not in objects:
            return []
        return [NotebookObject(obj['path']) for obj in objects['objects']
                if obj['object_type'] == 'NOTEBOOK']

    @classmethod
    def _set_directories(cls, objects):
        if 'objects' not in objects:
            return []
        return [Directory(obj['path']) for obj in objects['objects']
                if obj['object_type'] == 'DIRECTORY']

    def _set_test_notebooks(self):
        return [notebook for notebook in self.notebooks
                if notebook.is_test_notebook]


class WorkspaceObject():
    __metaclass__ = ABCMeta

    def __init__(self, path):
        self.path = path


class NotebookObject(WorkspaceObject):
    def __init__(self, path):
        self.name = self._get_notebook_name_from_path(path)
        super().__init__(path)

    def _get_notebook_name_from_path(self, path):
        segments = path.split('/')
        if len(segments) == 0:
            raise ValueError('Invalid path. The path must start with /')
        name = segments[len(segments)-1]
        return name

    @property
    def is_test_notebook(self):
        return self._is_valid_test_name(self.name)

    def _is_valid_test_name(self, name):
        if name is None:
            return False

        return name.lower().startswith('test_')


class Directory(WorkspaceObject):
    def __init__(self, path):
        super().__init__(path)
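The result classes lean on `utils.recursive_find` to pull nested fields such as `['metadata', 'state', 'life_cycle_state']` out of the Jobs API response without raising on missing keys. A plausible sketch of such a lookup (the real `utils` implementation may differ):

```python
def recursive_find(data, keys):
    # Walk a nested dict following `keys`; return None as soon as
    # a key is missing instead of raising KeyError.
    for key in keys:
        if not isinstance(data, dict) or key not in data:
            return None
        data = data[key]
    return data

job_output = {'metadata': {'state': {'life_cycle_state': 'TERMINATED'}}}
print(recursive_find(job_output, ['metadata', 'state', 'life_cycle_state']))
```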
@@ -0,0 +1,56 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

import os
from abc import abstractmethod, ABCMeta

def get_auth_config():
    """
    Returns the first valid auth configuration found by the registered providers.
    """

    providers = (EnvVariableAuthConfigProvider(),)

    for provider in providers:
        config = provider.get_auth_config()
        if config is not None and config.is_valid:
            return config
    return None

class DatabricksApiAuthConfigProvider(object):
    """
    Abstract provider of Databricks API auth configuration.
    """

    __metaclass__ = ABCMeta

    @abstractmethod
    def get_auth_config(self):
        pass

class DatabricksApiAuthConfig(object):
    def __init__(self, host, token, insecure):
        self.host = host
        self.token = token
        self.insecure = insecure

    @property
    def is_valid(self):
        if self.host == '' or self.token == '':
            return False

        return self.host is not None and self.token is not None

class EnvVariableAuthConfigProvider(DatabricksApiAuthConfigProvider):
    """
    Loads token auth configuration from environment variables.
    """

    def get_auth_config(self):
        host = os.environ.get('DATABRICKS_HOST')
        token = os.environ.get('DATABRICKS_TOKEN')
        insecure = os.environ.get('DATABRICKS_INSECURE')
        config = DatabricksApiAuthConfig(host, token, insecure)
        if config.is_valid:
            return config
        return None
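The provider chain above returns the first configuration whose host and token are both present and non-empty. That validity rule can be exercised standalone (a simplified sketch; the env var values below are placeholders):

```python
import os

def load_env_auth():
    # Host and token must both be set and non-empty for the config to be valid.
    host = os.environ.get('DATABRICKS_HOST')
    token = os.environ.get('DATABRICKS_TOKEN')
    if not host or not token:
        return None
    return {'host': host, 'token': token}

os.environ['DATABRICKS_HOST'] = 'https://example.azuredatabricks.net'  # placeholder
os.environ['DATABRICKS_TOKEN'] = 'dapi-placeholder'                    # placeholder
print(load_env_auth())
```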
@@ -0,0 +1,41 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

import logging
from time import sleep
from requests.exceptions import HTTPError


class HTTPRetrier(object):
    def __init__(self, max_retries=20, delay=30):
        self._max_retries = max_retries
        self._delay = delay
        self._tries = 0

    def execute(self, function, *args, **kwargs):
        waitfor = self._delay
        retry = True
        self._tries = 0
        while retry:
            try:
                retry = self._tries < self._max_retries
                logging.debug(
                    'Executing function with HTTP retry policy. Max tries:{} delay:{}'
                    .format(self._max_retries, self._delay))

                return function(*args, **kwargs)
            except HTTPError as exc:
                logging.debug("Error: {0}".format(str(exc)))
                if not retry:
                    raise
                if isinstance(exc.response.status_code, int):
                    if exc.response.status_code < 500:
                        raise
                if retry:
                    logging.debug(
                        'Retrying in {0}s, {1} of {2} retries'
                        .format(str(waitfor), str(self._tries+1), str(self._max_retries)))
                    sleep(waitfor)
                    self._tries = self._tries + 1
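The retry loop above raises immediately on 4xx responses and retries transient 5xx failures up to `max_retries` times with a fixed delay. The core control flow can be sketched without any HTTP machinery (a generic stand-in for `HTTPRetrier.execute`, not the class itself):

```python
import time

def retry(function, max_retries=3, delay=0, retriable=(RuntimeError,)):
    # Call `function` until it succeeds or the retry budget is exhausted.
    tries = 0
    while True:
        try:
            return function()
        except retriable:
            if tries >= max_retries:
                raise  # budget exhausted: surface the last error
            tries += 1
            time.sleep(delay)

calls = {'n': 0}

def flaky():
    # Fails twice, then succeeds -- models a transient server error.
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError('transient')
    return 'ok'

print(retry(flaky))  # succeeds on the third attempt
```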
@@ -0,0 +1,17 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

from abc import abstractmethod, ABCMeta

class PickleSerializable():
    __metaclass__ = ABCMeta

    @abstractmethod
    def serialize(self):
        pass

    @abstractmethod
    def deserialize(self):
        pass
@@ -0,0 +1,145 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

from abc import abstractmethod, ABCMeta
from .testresult import TestResults
from junit_xml import TestSuite, TestCase
import datetime
import logging


class TestResultsReportWriter(object):
    """ Abstract writer that accumulates test results and writes a report. """

    __metaclass__ = ABCMeta

    @abstractmethod
    def add_result(self, notebook_path, test_result):
        pass

    @abstractmethod
    def to_file(self, path):
        pass

    @abstractmethod
    def has_data(self):
        pass

    @abstractmethod
    def write(self):
        pass

    def _validate_add_results(self, notebook_path, test_result):
        if not isinstance(test_result, TestResults):
            raise ValueError('Expected an instance of TestResults')
        if notebook_path is None or notebook_path == '':
            raise ValueError("Invalid notebook path")


class TagsReportRow(object):
    def __init__(self, notebook_name, test_result):
        self.notebook_name = notebook_name
        self.test_name = test_result.test_name
        self.passed_str = 'PASSED'
        if not test_result.passed:
            self.passed_str = 'FAILED'
        self.duration = test_result.execution_time
        self.tags = self._to_tag_string(test_result.tags)

    def _to_tag_string(self, tags):
        logging.debug(tags)
        if tags is None:
            return ''
        value = ''
        for tag in tags:
            value = value + ' {}'.format(tag)
        return value

    def to_string(self):
        str_value = '{},{},{},{},{}\n'.format(
            self.tags, self.notebook_name,
            self.test_name, self.passed_str, self.duration)
        return str_value


class TagsReportWriter(TestResultsReportWriter):
    def __init__(self):
        super().__init__()
        self._rows = []

    def add_result(self, notebook_path, test_result):
        self._validate_add_results(notebook_path, test_result)

        new_rows = [TagsReportRow(notebook_path, result)
                    for result in test_result.results]
        self._rows.extend(new_rows)

    def has_data(self):
        return len(self._rows) > 0

    def write(self):
        report_name = 'test-nutter-tags.{0:%Y.%m.%d.%H%M%S%f}.txt'.format(
            datetime.datetime.utcnow())
        self.to_file(report_name)

        return report_name

    def to_file(self, path):
        with open(path, 'w') as file:
            for row in self._rows:
                file.write(row.to_string())


class JunitXMLReportWriter(TestResultsReportWriter):
    def __init__(self):
        super().__init__()
        self.all_test_suites = []

    def add_result(self, notebook_path, test_result):
        self._validate_add_results(notebook_path, test_result)

        t_suite = self._to_junitxml(notebook_path, test_result)
        self.all_test_suites.append(t_suite)

    def _to_junitxml(self, notebook_path, test_result):
        tsuite = TestSuite("nutter")
        for t_result in test_result.results:
            fail_error = None
            tc_result = 'PASSED'
            if not t_result.passed:
                fail_error = 'Exception: {} \n Stack: {}'.format(
                    t_result.exception, t_result.stack_trace)
                tc_result = 'FAILED'

            t_case = TestCase(t_result.test_name,
                              classname=notebook_path,
                              elapsed_sec=t_result.execution_time,
                              stderr=fail_error,
                              stdout=tc_result)

            if tc_result == 'FAILED':
                t_case.add_failure_info(tc_result, fail_error)

            tsuite.test_cases.append(t_case)
        return tsuite

    def has_data(self):
        return len(self.all_test_suites) > 0

    def write(self):
        report_name = 'test-nutter-result.{0:%Y.%m.%d.%H%M%S%f}.xml'.format(
            datetime.datetime.utcnow())
        self.to_file(report_name)

        return report_name

    def to_file(self, path):
        with open(path, 'w') as file:
            file.write(TestSuite.to_xml_string(self.all_test_suites))
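Each `TagsReportRow` serializes one test case as a comma-separated line: tags, notebook name, test name, PASSED/FAILED, and duration. A standalone sketch of that row format (argument names here are illustrative):

```python
def to_report_row(tags, notebook_name, test_name, passed, duration):
    # Mirror the TagsReportRow layout: tags, notebook, test, status, duration.
    tag_string = '' if tags is None else ''.join(' {}'.format(t) for t in tags)
    status = 'PASSED' if passed else 'FAILED'
    return '{},{},{},{},{}\n'.format(
        tag_string, notebook_name, test_name, status, duration)

print(to_report_row(['tag1'], 'test_mynotebook', 'mytest', True, 1.5), end='')
```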
@@ -0,0 +1,240 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""
import logging
from abc import abstractmethod, ABCMeta
from .apiclientresults import ExecuteNotebookResult
from .testresult import TestResults, TestResult
from .stringwriter import StringWriter
from .api import TestNotebook


def get_run_results_views(exec_results):
    if not isinstance(exec_results, list):
        raise ValueError("Expected a list")

    results_view = RunCommandResultsView()
    for exec_result in exec_results:
        results_view.add_exec_result(exec_result)

    return results_view


def get_list_results_view(list_results):
    return ListCommandResultsView(list_results)


def print_results_view(results_view):
    if not isinstance(results_view, ResultsView):
        raise ValueError("Expected a ResultsView")

    results_view.print()

    print("Total: {} \n".format(results_view.total))


class ResultsView():
    __metaclass__ = ABCMeta

    def print(self):
        print(self.get_view())

    @abstractmethod
    def get_view(self):
        pass

    @abstractmethod
    def total(self):
        pass


class ListCommandResultsView(ResultsView):
    def __init__(self, listresults):
        if not isinstance(listresults, list):
            raise ValueError("Expected a list of TestNotebook()")
        self.list_results = [ListCommandResultView.from_test_notebook(test_notebook)
                             for test_notebook in listresults]

        super().__init__()

    def get_view(self):
        writer = StringWriter()
        writer.write_line('{}'.format('\nTests Found'))
        writer.write_line('-' * 55)
        for list_result in self.list_results:
            writer.write(list_result.get_view())

        writer.write_line('-' * 55)

        return writer.to_string()

    @property
    def total(self):
        return len(self.list_results)


class ListCommandResultView(ResultsView):
    def __init__(self, name, path):
        self.name = name
        self.path = path
        super().__init__()

    @classmethod
    def from_test_notebook(cls, test_notebook):
        if not isinstance(test_notebook, TestNotebook):
            raise ValueError('Expected an instance of TestNotebook')
        return cls(test_notebook.name, test_notebook.path)

    def get_view(self):
        return "Name:\t{}\nPath:\t{}\n\n".format(self.name, self.path)

    @property
    def total(self):
        return 1


class RunCommandResultsView(ResultsView):
    def __init__(self):
        self.run_results = []
        super().__init__()

    def add_exec_result(self, result):
        if not isinstance(result, ExecuteNotebookResult):
            raise ValueError("Expected an ExecuteNotebookResult")
        self.run_results.append(RunCommandResultView(result))

    def get_view(self):
        writer = StringWriter()
        writer.write('\n')
        for run_result in self.run_results:
            writer.write(run_result.get_view())
            writer.write_line('=' * 60)

        return writer.to_string()

    @property
    def total(self):
        return len(self.run_results)


class RunCommandResultView(ResultsView):
    def __init__(self, result):

        if not isinstance(result, ExecuteNotebookResult):
            raise ValueError("Expected an ExecuteNotebookResult")

        self.notebook_path = result.notebook_path
        self.task_result_state = result.task_result_state
        self.notebook_result_state = result.notebook_result.result_state
        self.notebook_run_page_url = result.notebook_run_page_url

        self.raw_notebook_output = result.notebook_result.exit_output
        t_results = self._get_test_results(result)
        self.test_cases_views = []
        if t_results is not None:
            for t_result in t_results.results:
                self.test_cases_views.append(TestCaseResultView(t_result))

        super().__init__()

    def _get_test_results(self, result):
        if result.notebook_result.is_run_from_notebook:
            return result.notebook_result.nutter_test_results

        return self.__to_testresults(result.notebook_result.exit_output)

    def get_view(self):
        sw = StringWriter()
        sw.write_line("Notebook: {} - Lifecycle State: {}, Result: {}".format(
            self.notebook_path, self.task_result_state, self.notebook_result_state))
        sw.write_line('Run Page URL: {}'.format(self.notebook_run_page_url))

        sw.write_line("=" * 60)

        if len(self.test_cases_views) == 0:
            sw.write_line("No test cases were returned.")
            sw.write_line("Notebook output: {}".format(
                self.raw_notebook_output))
            sw.write_line("=" * 60)
            return sw.to_string()

        if len(self.failing_tests) > 0:
            sw.write_line("FAILING TESTS")
            sw.write_line("-" * 60)

            for tc_view in self.failing_tests:
                sw.write(tc_view.get_view())

            sw.write_line("")
            sw.write_line("")

        if len(self.passing_tests) > 0:
            sw.write_line("PASSING TESTS")
            sw.write_line("-" * 60)

            for tc_view in self.passing_tests:
                sw.write(tc_view.get_view())

            sw.write_line("")
            sw.write_line("")

        return sw.to_string()

    def __to_testresults(self, exit_output):
        if not exit_output:
            return None
        try:
            return TestResults().deserialize(exit_output)
        except Exception as ex:
            error = 'error while creating result from {}. Error: {}'.format(
                exit_output, ex)
            logging.debug(error)
            return None

    @property
    def total(self):
        return len(self.test_cases_views)

    @property
    def passing_tests(self):
        return list(filter(lambda x: x.passed, self.test_cases_views))

    @property
    def failing_tests(self):
        return list(filter(lambda x: not x.passed, self.test_cases_views))


class TestCaseResultView(ResultsView):
    def __init__(self, nutter_test_results):

        if not isinstance(nutter_test_results, TestResult):
            raise ValueError("Expected a TestResult")

        self.test_case = nutter_test_results.test_name
        self.passed = nutter_test_results.passed
        self.exception = nutter_test_results.exception
        self.stack_trace = nutter_test_results.stack_trace
        self.execution_time = nutter_test_results.execution_time

        super().__init__()

    def get_view(self):
        sw = StringWriter()

        time = '{} seconds'.format(self.execution_time)
        sw.write_line('{} ({})'.format(self.test_case, time))

        if self.passed:
            return sw.to_string()

        sw.write_line("")
        sw.write_line(self.stack_trace)
        sw.write_line("")
        sw.write_line(self.exception.__class__.__name__ + ": " + str(self.exception))

        return sw.to_string()

    @property
def total(self):
|
||||
return 1
|
|
@@ -0,0 +1,138 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

import threading
import logging
from threading import Thread
from queue import Queue


def get_scheduler(num_of_workers):
    return Scheduler(num_of_workers)


class Scheduler(object):
    def __init__(self, num_of_workers):
        if num_of_workers < 1 or num_of_workers > 15:
            raise ValueError(
                'Number of workers is invalid. It must be a value between 1 and 15')
        self._num_of_workers = num_of_workers
        self._in_queue = Queue()
        self._out_queue = Queue()

    def add_function(self, function, *args):
        function_exec = FunctionToExecute(function, *args)
        self._in_queue.put(function_exec)

    def run_and_wait(self):
        try:
            logging.debug("Starting workers")
            workers = []
            w = 0
            while w < self._num_of_workers:
                worker = FunctionHandler(self._in_queue, self._out_queue)
                worker.daemon = True
                worker.start()
                workers.append(worker)
                w += 1

            logging.debug("Workers started")
            self._in_queue.join()
            logging.debug("Stopping workers")
            for worker in workers:
                worker.signal_stop()
            self._in_queue.join()

            return self._process_results()

        except Exception as ex:
            logging.critical(ex)
            raise ex

    def _process_results(self):
        results_handler = ResultsHandler(self._out_queue)
        results_handler.daemon = True
        results_handler.start()
        self._out_queue.join()
        results_handler.signal_stop()
        self._out_queue.join()
        return results_handler.func_results


class Worker(Thread):
    def __init__(self):
        Thread.__init__(self)
        self._done = threading.Event()

    def set_done(self):
        self._done.set()


class ResultsHandler(Worker):
    def __init__(self, queue):
        super().__init__()
        self._queue = queue
        self.func_results = []

    def signal_stop(self):
        self._queue.put(None)

    def run(self):
        while True:
            try:
                result = self._queue.get()
                if result is None:
                    break
                self.func_results.append(result)
            finally:
                self._queue.task_done()
        self.set_done()
        logging.debug("Results handler is done")


class FunctionHandler(Worker):
    def __init__(self, in_queue, out_queue):
        super().__init__()
        self._in_queue = in_queue
        self._out_queue = out_queue

    def signal_stop(self):
        self._in_queue.put(None)

    def run(self):
        logging.debug("Function Handler Starting")
        while True:
            try:
                function_exe = self._in_queue.get()
                if function_exe is None:
                    logging.debug("Function Handler Stopped")
                    break
                logging.debug('Function Handler: Execute for {}'.format(function_exe))
                result = function_exe.execute()
                logging.debug('Function Handler: Execute called.')
                self._out_queue.put(FunctionResult(result, None))

            except Exception as ex:
                self._out_queue.put(FunctionResult(None, ex))
                logging.debug('Function Handler. Exception in function. Error {} {}'
                              .format(str(ex), ex is None))
            finally:
                self._in_queue.task_done()
        self.set_done()
        logging.debug("Function handler is done")


class FunctionToExecute(object):
    def __init__(self, function, *args):
        self._function = function
        self._args = args

    def execute(self):
        return self._function(*self._args)


class FunctionResult(object):
    def __init__(self, result, exception):
        self.func_result = result
        self.exception = exception
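A self-contained sketch (illustrative, not part of the diff) of the fan-out pattern the `Scheduler` above implements: daemon workers drain an input queue until a `None` sentinel appears, pushing results to an output queue, with `Queue.join()` marking the boundary between "all work done" and "all workers stopped":

```python
import threading
from queue import Queue


def worker(in_q, out_q):
    # Drain the input queue until a None stop sentinel arrives,
    # mirroring FunctionHandler.run() above.
    while True:
        item = in_q.get()
        try:
            if item is None:
                break
            out_q.put(item * item)
        finally:
            in_q.task_done()


in_q, out_q = Queue(), Queue()
workers = [threading.Thread(target=worker, args=(in_q, out_q), daemon=True)
           for _ in range(2)]
for w in workers:
    w.start()
for n in (2, 3, 4):
    in_q.put(n)
in_q.join()            # wait until every item has been processed
for _ in workers:
    in_q.put(None)     # one stop sentinel per worker, as signal_stop() does
in_q.join()
results = sorted(out_q.get() for _ in range(3))
print(results)  # [4, 9, 16]
```

The double `join()` in `run_and_wait()` follows the same shape: the first waits for the real work, the second for the sentinels to be consumed.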
@@ -0,0 +1,51 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

from abc import abstractmethod, ABC
from queue import Queue
from threading import Thread
from datetime import datetime
from enum import Enum
import logging


class StatusEventsHandler(object):
    def __init__(self, handler):
        self._event_queue = Queue()
        self._processor = Processor(handler, self._event_queue)

        self._processor.daemon = True
        self._processor.start()

    def add_event(self, event, data):
        self._event_queue.put(StatusEvent(event, data))

    def wait(self):
        self._event_queue.join()


class StatusEvent(object):
    def __init__(self, event, data):
        if not isinstance(event, Enum):
            raise ValueError('Invalid event. Must be an Enum')

        self.timestamp = datetime.utcnow()
        self.event = event
        self.data = data


class EventHandler(ABC):

    @abstractmethod
    def handle(self, queue):
        pass


class Processor(Thread):
    def __init__(self, handler, event_queue):
        self._handler = handler
        self._event_queue = event_queue
        Thread.__init__(self)

    def run(self):
        logging.debug("Starting handler..")
        self._handler.handle(self._event_queue)
@@ -0,0 +1,17 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""


class StringWriter():
    def __init__(self):
        self.result = ""

    def write(self, string_to_append):
        self.result += string_to_append

    def write_line(self, string_to_append):
        self.write(string_to_append + '\n')

    def to_string(self):
        return self.result
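The writer above is a plain string accumulator; the call pattern the views rely on looks like this (a minimal copy for illustration, not the module itself):

```python
class StringWriter:
    """Minimal copy of the StringWriter above, for illustration only."""

    def __init__(self):
        self.result = ""

    def write(self, string_to_append):
        self.result += string_to_append

    def write_line(self, string_to_append):
        self.write(string_to_append + '\n')

    def to_string(self):
        return self.result


sw = StringWriter()
sw.write_line("=" * 10)     # write_line appends a trailing newline
sw.write("PASSING TESTS")   # write does not
print(sw.to_string())
```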
@@ -0,0 +1,33 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

from .apiclientresults import ExecuteNotebookResult, NotebookOutputResult
from .resultsview import RunCommandResultsView
from .testresult import TestResults


class TestExecResults():
    def __init__(self, test_results):
        if not isinstance(test_results, TestResults):
            raise TypeError("test_results must be of type TestResults")
        self.test_results = test_results
        self.runcommand_results_view = RunCommandResultsView()

    def to_string(self):
        notebook_path = ""
        notebook_result = self.get_ExecuteNotebookResult(
            notebook_path, self.test_results)
        self.runcommand_results_view.add_exec_result(notebook_result)
        view = self.runcommand_results_view.get_view()
        return view

    def exit(self, dbutils):
        dbutils.notebook.exit(self.test_results.serialize())

    def get_ExecuteNotebookResult(self, notebook_path, test_results):
        notebook_result = NotebookOutputResult(
            'N/A', None, test_results)

        return ExecuteNotebookResult('N/A', 'N/A', notebook_result, 'N/A')
@@ -0,0 +1,85 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

from .pickleserializable import PickleSerializable
import pickle
import base64


def get_test_results():
    return TestResults()


class TestResults(PickleSerializable):
    def __init__(self):
        self.results = []
        self.test_cases = 0
        self.num_failures = 0
        self.total_execution_time = 0

    def append(self, testresult):
        if not isinstance(testresult, TestResult):
            raise TypeError("Can only append TestResult to TestResults")

        self.results.append(testresult)
        self.test_cases = self.test_cases + 1
        if not testresult.passed:
            self.num_failures = self.num_failures + 1

        total_execution_time = self.total_execution_time + testresult.execution_time
        self.total_execution_time = total_execution_time

    def serialize(self):
        bin_data = pickle.dumps(self)
        return str(base64.encodebytes(bin_data), "utf-8")

    def deserialize(self, pickle_string):
        bin_str = pickle_string.encode("utf-8")
        decoded_bin_data = base64.decodebytes(bin_str)
        return pickle.loads(decoded_bin_data)

    def passed(self):
        for item in self.results:
            if not item.passed:
                return False
        return True

    def __eq__(self, other):
        if not isinstance(self, other.__class__):
            return False
        if len(self.results) != len(other.results):
            return False
        for item in other.results:
            if not self.__item_in_list_equalto(item):
                return False

        return True

    def __item_in_list_equalto(self, expected_item):
        for item in self.results:
            if item == expected_item:
                return True

        return False


class TestResult:
    def __init__(self, test_name, passed,
                 execution_time, tags, exception=None, stack_trace=""):

        if not isinstance(tags, list):
            raise ValueError("tags must be a list")
        self.passed = passed
        self.exception = exception
        self.stack_trace = stack_trace
        self.test_name = test_name
        self.execution_time = execution_time
        self.tags = tags

    def __eq__(self, other):
        if isinstance(self, other.__class__):
            return self.test_name == other.test_name \
                and self.passed == other.passed \
                and type(self.exception) == type(other.exception) \
                and str(self.exception) == str(other.exception)

        return False
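The `serialize()`/`deserialize()` pair above pickles the results object and base64-encodes it so it survives being passed as notebook exit output (plain text). The same round trip, shown on a plain dict instead of a `TestResults` instance:

```python
import base64
import pickle

# Text-safe round trip: object -> pickle bytes -> base64 str -> object.
payload = {"test_cases": 2, "num_failures": 0}
encoded = str(base64.encodebytes(pickle.dumps(payload)), "utf-8")
decoded = pickle.loads(base64.decodebytes(encoded.encode("utf-8")))
assert decoded == payload
assert isinstance(encoded, str)  # safe to return from a notebook as a string
```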
@@ -0,0 +1,19 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""


def recursive_find(dict_instance, keys):
    if not isinstance(keys, list):
        raise ValueError("Expected list of keys")
    if not isinstance(dict_instance, dict):
        return None
    if len(keys) == 0:
        return None
    key = keys[0]
    value = dict_instance.get(key, None)
    if value is None:
        return None
    if len(keys) == 1:
        return value
    return recursive_find(value, keys[1:])
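The helper above walks a nested dict one key at a time, returning `None` as soon as any level is missing. A quick illustration (a minimal copy of the function, with hypothetical input data):

```python
def recursive_find(dict_instance, keys):
    # Minimal copy of the helper above, for illustration only.
    if not isinstance(keys, list):
        raise ValueError("Expected list of keys")
    if not isinstance(dict_instance, dict):
        return None
    if len(keys) == 0:
        return None
    value = dict_instance.get(keys[0], None)
    if value is None:
        return None
    if len(keys) == 1:
        return value
    return recursive_find(value, keys[1:])


run = {"state": {"result_state": "SUCCESS"}}
print(recursive_find(run, ["state", "result_state"]))  # SUCCESS
print(recursive_find(run, ["state", "missing"]))       # None
```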
@@ -0,0 +1,4 @@
pytest==5.0.1
mock
pytest-mock
pytest-cov
@@ -0,0 +1,5 @@
databricks-api
requests
fire
junit_xml
@@ -0,0 +1,87 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

from .testcase import TestCase


def get_fixture_loader():
    loader = FixtureLoader()
    return loader


class FixtureLoader():
    def __init__(self):
        self.__test_case_dictionary = {}

    def load_fixture(self, nutter_fixture):
        if nutter_fixture is None:
            raise ValueError("Must pass NutterFixture")

        all_attributes = dir(nutter_fixture)
        for attribute in all_attributes:
            is_test_method = self.__is_test_method(attribute)
            if is_test_method:
                test_full_name = attribute
                test_name = self.__get_test_name(attribute)
                func = getattr(nutter_fixture, test_full_name)
                if func is None:
                    continue

                if test_name == "before_all" or test_name == "after_all":
                    continue

                test_case = None
                if test_name in self.__test_case_dictionary:
                    test_case = self.__test_case_dictionary[test_name]

                if test_case is None:
                    test_case = TestCase(test_name)

                test_case = self.__set_method(test_case, test_full_name, func)

                self.__test_case_dictionary[test_name] = test_case

        return self.__test_case_dictionary

    def __is_test_method(self, attribute):
        if attribute.startswith("before_") or \
           attribute.startswith("run_") or \
           attribute.startswith("assertion_") or \
           attribute.startswith("after_"):
            return True
        return False

    def __set_method(self, case, name, func):
        if name.startswith("before_"):
            case.set_before(func)
            return case
        if name.startswith("run_"):
            case.set_run(func)
            return case
        if name.startswith("assertion_"):
            case.set_assertion(func)
            return case
        if name.startswith("after_"):
            case.set_after(func)
            return case

        return case

    def __get_test_name(self, full_name):
        if full_name == "before_all" or full_name == "after_all":
            return full_name

        name = self.__remove_prefix(full_name, "before_")
        name = self.__remove_prefix(name, "run_")
        name = self.__remove_prefix(name, "assertion_")
        name = self.__remove_prefix(name, "after_")

        return name

    def __remove_prefix(self, text, prefix):
        if text.startswith(prefix):
            return text[len(prefix):]
        return text
@@ -0,0 +1,74 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

import logging
from abc import ABCMeta
from common.testresult import TestResults
from .fixtureloader import FixtureLoader
from common.testexecresults import TestExecResults


def tag(the_tag):
    def tag_decorator(function):
        if not isinstance(the_tag, list) and not isinstance(the_tag, str):
            raise ValueError("the_tag must be a string or a list")
        if not function.__name__.startswith("run_"):
            raise ValueError("a tag may only decorate a run_ method")

        function.tag = the_tag
        return function
    return tag_decorator


class NutterFixture(object):

    __metaclass__ = ABCMeta

    def __init__(self):
        self.data_loader = FixtureLoader()
        self.test_results = TestResults()
        self._logger = logging.getLogger('NutterRunner')

    def execute_tests(self):
        self.__load_fixture()

        if len(self.__test_case_dict) > 0 and self.__has_method("before_all"):
            logging.debug('Running before_all()')
            self.before_all()

        for key, value in self.__test_case_dict.items():
            logging.debug('Running test: {}'.format(key))
            test_result = value.execute_test()
            logging.debug('Completed running test: {}'.format(key))
            self.test_results.append(test_result)

        if len(self.__test_case_dict) > 0 and self.__has_method("after_all"):
            logging.debug('Running after_all()')
            self.after_all()

        return TestExecResults(self.test_results)

    def __load_fixture(self):
        test_case_dict = self.data_loader.load_fixture(self)
        if test_case_dict is None:
            logging.fatal("Invalid Test Fixture")
            raise InvalidTestFixtureException("Invalid Test Fixture")
        self.__test_case_dict = test_case_dict

        logging.debug("Found {} test cases".format(len(test_case_dict)))
        for key, value in self.__test_case_dict.items():
            logging.debug('Test Case: {}'.format(key))

    def __has_method(self, method_name):
        method = getattr(self, method_name, None)
        if callable(method):
            return True
        return False


class InvalidTestFixtureException(Exception):
    pass
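The `@tag` decorator above relies on attaching an attribute directly to the function object, which `TestCase.execute_test` later reads back via `hasattr(self.run, "tag")`. A minimal copy showing the trick in isolation (the decorated function here is hypothetical):

```python
def tag(the_tag):
    # Minimal copy of the decorator above: validate the tag, validate the
    # method name, then pin the tag onto the function object itself.
    def tag_decorator(function):
        if not isinstance(the_tag, (list, str)):
            raise ValueError("the_tag must be a string or a list")
        if not function.__name__.startswith("run_"):
            raise ValueError("a tag may only decorate a run_ method")
        function.tag = the_tag
        return function
    return tag_decorator


@tag('smoke')
def run_my_test():
    pass


print(run_my_test.tag)  # smoke
```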
@@ -0,0 +1,107 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

import os
import time
import traceback
from common.testresult import TestResult


def get_testcase(test_name):

    tc = TestCase(test_name)

    return tc


class TestCase():
    ERROR_MESSAGE_RUN_MISSING = """ TestCase does not contain a run function.
    Please pass a function to set_run"""
    ERROR_MESSAGE_ASSERTION_MISSING = """ TestCase does not contain an assertion function.
    Please pass a function to set_assertion """

    def __init__(self, test_name):
        self.test_name = test_name
        self.before = None
        self.__before_set = False
        self.run = None
        self.assertion = None
        self.after = None
        self.__after_set = False
        self.invalid_message = ""
        self.tags = []

    def set_before(self, before):
        self.before = before
        self.__before_set = True

    def set_run(self, run):
        self.run = run

    def set_assertion(self, assertion):
        self.assertion = assertion

    def set_after(self, after):
        self.after = after
        self.__after_set = True

    def execute_test(self):
        start_time = time.perf_counter()
        try:
            if hasattr(self.run, "tag"):
                if isinstance(self.run.tag, list):
                    self.tags.extend(self.run.tag)
                else:
                    self.tags.append(self.run.tag)
            if not self.is_valid():
                raise NoTestCasesFoundError(
                    "Both a run and an assertion are required for every test")
            if self.__before_set and self.before is not None:
                self.before()
            self.run()
            self.assertion()
            if self.__after_set and self.after is not None:
                self.after()

        except Exception as exc:
            return TestResult(self.test_name, False,
                              self.__get_elapsed_time(start_time), self.tags,
                              exc, traceback.format_exc())

        return TestResult(self.test_name, True,
                          self.__get_elapsed_time(start_time), self.tags, None)

    def is_valid(self):
        is_valid = True

        if self.run is None:
            self.__add_message_to_error(self.ERROR_MESSAGE_RUN_MISSING)
            is_valid = False

        if self.assertion is None:
            self.__add_message_to_error(self.ERROR_MESSAGE_ASSERTION_MISSING)
            is_valid = False

        return is_valid

    def __get_elapsed_time(self, start_time):
        end_time = time.perf_counter()
        elapsed_time = end_time - start_time
        return elapsed_time

    def __add_message_to_error(self, message):
        if self.invalid_message:
            self.invalid_message += os.linesep

        self.invalid_message += message

    def get_invalid_message(self):
        self.is_valid()

        return self.invalid_message


class NoTestCasesFoundError(Exception):
    pass
@@ -0,0 +1,28 @@
import setuptools
import cli.nuttercli as nuttercli

with open("README.md", "r") as fh:
    long_description = fh.read()

version = nuttercli.get_cli_version()
setuptools.setup(
    name="nutter",
    version=version,
    author="Jesus Aguilar, Rob Bagby",
    author_email="jesus.aguilar@microsoft.com, rob.bagby@microsoft.com",
    description="A databricks notebook testing library",
    long_description="A databricks notebook testing library",
    long_description_content_type="text/markdown",
    entry_points={
        'console_scripts': [
            'nutter = cli.nuttercli:main'
        ]},
    url="https://github.com/microsoft/nutter",
    packages=setuptools.find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
    python_requires='>=3.5.2',
)
@@ -0,0 +1,139 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

import pytest
from queue import Queue
from cli.eventhandlers import ConsoleEventHandler
from common.api import NutterStatusEvents, ExecutionResultEventData
from common.statuseventhandler import StatusEvent


def test__handle__nutterstatusevents_testslisting__output_is_valid(mocker):
    console_event_handler = ConsoleEventHandler(False)
    mocker.patch.object(console_event_handler, '_print_output')
    path = '/path'
    events = [StatusEvent(NutterStatusEvents.TestsListing, path)]
    queue = _get_queue_with_events(events)

    console_event_handler._get_and_handle(queue)

    expected = _get_output_wrapper('Looking for tests in {}'.format(path))
    console_event_handler._print_output.assert_called_with(expected)


def test__handle__nutterstatusevents_testsexecutionrequest__output_is_valid(mocker):
    console_event_handler = ConsoleEventHandler(False)
    mocker.patch.object(console_event_handler, '_print_output')
    pattern = '/path'
    events = [StatusEvent(NutterStatusEvents.TestExecutionRequest, pattern)]
    queue = _get_queue_with_events(events)

    console_event_handler._get_and_handle(queue)

    expected = _get_output_wrapper('Execution request: {}'.format(pattern))
    console_event_handler._print_output.assert_called_with(expected)


def test__handle__nutterstatusevents_testslistingfiltered__output_is_valid(mocker):
    console_event_handler = ConsoleEventHandler(False)
    mocker.patch.object(console_event_handler, '_print_output')
    num_of_tests = 1
    events = [StatusEvent(
        NutterStatusEvents.TestsListingFiltered, num_of_tests)]
    queue = _get_queue_with_events(events)

    console_event_handler._get_and_handle(queue)

    expected = _get_output_wrapper(
        '{} tests matched the pattern'.format(num_of_tests))
    console_event_handler._print_output.assert_called_with(expected)


def test__handle__nutterstatusevents_testlistingresults__output_is_valid(mocker):
    console_event_handler = ConsoleEventHandler(False)
    mocker.patch.object(console_event_handler, '_print_output')
    num_of_tests = 1
    events = [StatusEvent(
        NutterStatusEvents.TestsListingResults, num_of_tests)]
    queue = _get_queue_with_events(events)

    console_event_handler._get_and_handle(queue)

    expected = _get_output_wrapper('{} tests found'.format(num_of_tests))
    console_event_handler._print_output.assert_called_with(expected)


def test__handle__nutterstatusevents_testscheduling__output_is_valid(mocker):
    console_event_handler = ConsoleEventHandler(False)
    mocker.patch.object(console_event_handler, '_print_output')
    num_of_tests = 1
    console_event_handler._filtered_tests = num_of_tests
    events = [StatusEvent(NutterStatusEvents.TestScheduling, num_of_tests)]
    queue = _get_queue_with_events(events)

    console_event_handler._get_and_handle(queue)

    expected = _get_output_wrapper(
        '{} of {} tests scheduled for execution'.format(num_of_tests, num_of_tests))
    console_event_handler._print_output.assert_called_with(expected)


def test__handle__nutterstatusevents_testsexecutionresult__output_is_valid(mocker):
    console_event_handler = ConsoleEventHandler(False)
    mocker.patch.object(console_event_handler, '_print_output')
    num_of_tests = 1
    done_tests = 1
    console_event_handler._listed_tests = num_of_tests
    events = [StatusEvent(
        NutterStatusEvents.TestExecutionResult, num_of_tests)]
    queue = _get_queue_with_events(events)

    console_event_handler._get_and_handle(queue)

    expected = _get_output_wrapper(
        '{} of {} tests executed'.format(done_tests, num_of_tests))
    console_event_handler._print_output.assert_called_with(expected)


def test__handle__nutterstatusevents_testexecuted__output_is_valid(mocker):
    console_event_handler = ConsoleEventHandler(False)
    mocker.patch.object(console_event_handler, '_print_output')
    event_data = ExecutionResultEventData('/my', True, 'http://url')
    events = [StatusEvent(NutterStatusEvents.TestExecuted, event_data)]
    queue = _get_queue_with_events(events)

    console_event_handler._get_and_handle(queue)

    expected = _get_output_wrapper('{} Success:{} {}'.format(
        event_data.notebook_path, event_data.success, event_data.notebook_run_page_url))
    console_event_handler._print_output.assert_called_with(expected)


def _get_output_wrapper(output):
    return '--> {}\n'.format(output)


def _get_queue_with_events(events):
    queue = Queue()
    for event in events:
        queue.put(event)
    return queue
@@ -0,0 +1,208 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

import pytest
import os
import json
import cli.nuttercli as nuttercli
from cli.nuttercli import NutterCLI
from common.apiclientresults import ExecuteNotebookResult
import mock
from common.testresult import TestResults, TestResult
from cli.reportsman import ReportWriterManager, ReportWritersTypes, ReportWriters


def test__get_cli_version__without_build__env_var__returns_value():
    version = nuttercli.get_cli_version()
    assert version is not None


def test__get_cli_header_value():
    version = nuttercli.get_cli_version()
    header = 'Nutter Version {}\n'.format(version)
    header += '+' * 50
    header += '\n'

    assert nuttercli.get_cli_header() == header


def test__get_cli_version__with_build__env_var__returns_value(mocker):
    version = nuttercli.get_cli_version()
    build_number = '1.2.3'
    mocker.patch.dict(
        os.environ, {nuttercli.BUILD_NUMBER_ENV_VAR: build_number})
    version_with_build_number = nuttercli.get_cli_version()
    assert version_with_build_number == '{}.{}'.format(version, build_number)


def test__get_version_label__valid_string(mocker):
    mocker.patch.dict(os.environ, {'DATABRICKS_HOST': 'myhost'})
    mocker.patch.dict(os.environ, {'DATABRICKS_TOKEN': 'mytoken'})

    version = nuttercli.get_cli_version()
    expected = 'Nutter Version {}'.format(version)
    cli = NutterCLI()
    version_from_cli = cli._get_version_label()

    assert expected == version_from_cli


def test__nutter_cli_ctor__handles__version_and_exits_0(mocker):
    mocker.patch.dict(os.environ, {'DATABRICKS_HOST': 'myhost'})
    mocker.patch.dict(os.environ, {'DATABRICKS_TOKEN': 'mytoken'})

    with pytest.raises(SystemExit) as mock_ex:
        cli = NutterCLI(version=True)

    assert mock_ex.type == SystemExit
    assert mock_ex.value.code == 0


def test__run__pattern__display_results(mocker):
    test_results = TestResults().serialize()
    cli = _get_cli_for_tests(
        mocker, 'SUCCESS', 'TERMINATED', test_results)

    mocker.patch.object(cli, '_display_test_results')
    cli.run('my*', 'cluster')
    assert cli._display_test_results.call_count == 1


def test__nutter_cli_ctor__handles__configurationexception_and_exits_1(mocker):
    mocker.patch.dict(os.environ, {'DATABRICKS_HOST': ''})
    mocker.patch.dict(os.environ, {'DATABRICKS_TOKEN': ''})

    with pytest.raises(SystemExit) as mock_ex:
        cli = NutterCLI()

    assert mock_ex.type == SystemExit
    assert mock_ex.value.code == 1


def test__run__one_test_fullpath__display_results(mocker):
    test_results = TestResults().serialize()
    cli = _get_cli_for_tests(
        mocker, 'SUCCESS', 'TERMINATED', test_results)

    mocker.patch.object(cli, '_display_test_results')
    cli.run('test_mynotebook2', 'cluster')
    assert cli._display_test_results.call_count == 1


def test__run_one_test_junit_writter__writer_writes(mocker):
    test_results = TestResults().serialize()
    cli = _get_cli_for_tests(
        mocker, 'SUCCESS', 'TERMINATED', test_results)
    mocker.patch.object(cli, '_get_report_writer_manager')
    mock_report_manager = ReportWriterManager(ReportWriters.JUNIT)
    mocker.patch.object(mock_report_manager, 'write')
    mocker.patch.object(mock_report_manager, 'add_result')

    cli._get_report_writer_manager.return_value = mock_report_manager

    cli.run('test_mynotebook2', 'cluster')

    assert mock_report_manager.add_result.call_count == 1
    assert mock_report_manager.write.call_count == 1
    assert not mock_report_manager._providers[ReportWritersTypes.JUNIT].has_data(
    )


def test__list__none__display_result(mocker):
    cli = _get_cli_for_tests(
        mocker, 'SUCCESS', 'TERMINATED', 'IHAVERETURNED')

    mocker.patch.object(cli, '_display_list_results')
    cli.list('/')
    assert cli._display_list_results.call_count == 1


def _get_cli_for_tests(mocker, result_state, life_cycle_state, notebook_result):
    mocker.patch.dict(os.environ, {'DATABRICKS_HOST': 'myhost'})
    mocker.patch.dict(os.environ, {'DATABRICKS_TOKEN': 'mytoken'})
    cli = NutterCLI()
    mocker.patch.object(cli._nutter, 'run_test')
    cli._nutter.run_test.return_value = _get_run_test_response(
        result_state, life_cycle_state, notebook_result)
    mocker.patch.object(cli._nutter, 'run_tests')
    cli._nutter.run_tests.return_value = _get_run_tests_response(
        result_state, life_cycle_state, notebook_result)
    mocker.patch.object(cli._nutter, 'list_tests')
    cli._nutter.list_tests.return_value = _get_list_tests_response()

    return cli


def _get_run_test_response(result_state, life_cycle_state, notebook_result):
    data_json = """
        {"notebook_output":
        {"result": "IHaveReturned", "truncated": false},
        "metadata":
        {"execution_duration": 15000,
        "run_type": "SUBMIT_RUN",
        "cleanup_duration": 0,
        "number_in_job": 1,
        "cluster_instance":
        {"cluster_id": "0925-141d1222-narcs242",
        "spark_context_id": "803963628344534476"},
        "creator_user_name": "abc@microsoft.com",
        "task": {"notebook_task": {"notebook_path": "/test_mynotebook"}},
        "run_id": 7, "start_time": 1569887259173,
        "job_id": 4,
        "state": {"result_state": "SUCCESS", "state_message": "",
        "life_cycle_state": "TERMINATED"}, "setup_duration": 2000,
        "run_page_url": "https://westus2.azuredatabricks.net/?o=14702dasda6094293890#job/4/run/1",
        "cluster_spec": {"existing_cluster_id": "0925-141122-narcs242"}, "run_name": "myrun"}}
        """
    data_dict = json.loads(data_json)
    data_dict['notebook_output']['result'] = notebook_result
    data_dict['metadata']['state']['result_state'] = result_state
    data_dict['metadata']['state']['life_cycle_state'] = life_cycle_state
|
||||
|
||||
return ExecuteNotebookResult.from_job_output(data_dict)
|
||||
|
||||
|
||||
def _get_list_tests_response():
|
||||
result = {}
|
||||
result['test_mynotebook'] = '/test_mynotebook'
|
||||
result['test_mynotebook2'] = '/test_mynotebook2'
|
||||
return result
|
||||
|
||||
|
||||
def _get_run_tests_response(result_state, life_cycle_state, notebook_result):
|
||||
data_json = """
|
||||
{"notebook_output":
|
||||
{"result": "IHaveReturned", "truncated": false},
|
||||
"metadata":
|
||||
{"execution_duration": 15000,
|
||||
"run_type": "SUBMIT_RUN",
|
||||
"cleanup_duration": 0,
|
||||
"number_in_job": 1,
|
||||
"cluster_instance":
|
||||
{"cluster_id": "0925-141d1222-narcs242",
|
||||
"spark_context_id": "803963628344534476"},
|
||||
"creator_user_name": "abc@microsoft.com",
|
||||
"task": {"notebook_task": {"notebook_path": "/test_mynotebook"}},
|
||||
"run_id": 7, "start_time": 1569887259173,
|
||||
"job_id": 4,
|
||||
"state": {"result_state": "SUCCESS", "state_message": "",
|
||||
"life_cycle_state": "TERMINATED"}, "setup_duration": 2000,
|
||||
"run_page_url": "https://westus2.azuredatabricks.net/?o=14702dasda6094293890#job/4/run/1",
|
||||
"cluster_spec": {"existing_cluster_id": "0925-141122-narcs242"}, "run_name": "myrun"}}
|
||||
"""
|
||||
data_dict = json.loads(data_json)
|
||||
data_dict['notebook_output']['result'] = notebook_result
|
||||
data_dict['metadata']['state']['result_state'] = result_state
|
||||
data_dict['metadata']['state']['life_cycle_state'] = life_cycle_state
|
||||
|
||||
data_dict2 = json.loads(data_json)
|
||||
data_dict2['notebook_output']['result'] = notebook_result
|
||||
data_dict2['metadata']['state']['result_state'] = result_state
|
||||
data_dict2['metadata']['task']['notebook_task']['notebook_path'] = '/test_mynotebook2'
|
||||
data_dict2['metadata']['state']['life_cycle_state'] = life_cycle_state
|
||||
|
||||
results = []
|
||||
results.append(ExecuteNotebookResult.from_job_output(data_dict))
|
||||
results.append(ExecuteNotebookResult.from_job_output(data_dict2))
|
||||
return results
|
|
@ -0,0 +1,80 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

import pytest
from common.testresult import TestResults, TestResult
from common.resultreports import JunitXMLReportWriter
from common.resultreports import TagsReportWriter
from cli.reportsman import ReportWriterManager, ReportWriters, ReportWritersTypes
import common.api as nutter_api


def test__reportwritermanager_ctor__junit_report__valid_manager():
    report_writer_man = ReportWriterManager(ReportWriters.JUNIT)

    assert len(report_writer_man._providers) == 1
    report_man = report_writer_man._providers[ReportWritersTypes.JUNIT]
    assert isinstance(report_man, JunitXMLReportWriter)


def test__reportwritermanager_ctor__tags_report__valid_manager():
    report_writer_man = ReportWriterManager(ReportWriters.TAGS)

    assert len(report_writer_man._providers) == 1
    report_man = report_writer_man._providers[ReportWritersTypes.TAGS]
    assert isinstance(report_man, TagsReportWriter)


def test__reportwritermanager_ctor__tags_and_junit_report__valid_manager():
    report_writer_man = ReportWriterManager(
        ReportWriters.TAGS + ReportWriters.JUNIT)

    assert len(report_writer_man._providers) == 2
    report_man = report_writer_man._providers[ReportWritersTypes.TAGS]
    assert isinstance(report_man, TagsReportWriter)
    report_man = report_writer_man._providers[ReportWritersTypes.JUNIT]
    assert isinstance(report_man, JunitXMLReportWriter)


def test__reportwritermanager_ctor__invalid_report__empty_manager():
    report_writer_man = ReportWriterManager(0)

    assert len(report_writer_man._providers) == 0


def test__add_result__junit_provider_one_test_result__provider_has_data():
    report_writer_man = ReportWriterManager(ReportWriters.JUNIT)
    test_results = TestResults()
    test_results.append(TestResult("mycase", True, 10, []))
    report_writer_man.add_result('notepad', test_results)

    report_man = report_writer_man._providers[ReportWritersTypes.JUNIT]
    assert isinstance(report_man, JunitXMLReportWriter)
    assert report_man.has_data()


def test__add_result__junit_provider_zero_test_result__provider_has_data():
    report_writer_man = ReportWriterManager(ReportWriters.JUNIT)
    test_results = TestResults()
    report_writer_man.add_result('notepad', test_results)

    report_man = report_writer_man._providers[ReportWritersTypes.JUNIT]
    assert report_man.has_data()


def test__add_result__tags_provider_one_test_result__provider_has_data():
    report_writer_man = ReportWriterManager(ReportWriters.TAGS)
    test_results = TestResults()
    test_results.append(TestResult("mycase", True, 10, ['hello']))
    report_writer_man.add_result('notepad', test_results)

    report_man = report_writer_man._providers[ReportWritersTypes.TAGS]
    assert report_man.has_data()


def test__write__two_providers__returns_two_names():
    report_writer_man = ReportWriterManager(
        ReportWriters.TAGS + ReportWriters.JUNIT)
    test_results = TestResults()
    test_results.append(TestResult("mycase", True, 10, ['hello']))
    report_writer_man.add_result('notepad', test_results)

    results = report_writer_man.providers_names()

    assert len(results) == 2
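The ctor tests above combine writers with `ReportWriters.TAGS + ReportWriters.JUNIT`, which implies a bit-flag style enum. A minimal sketch of that pattern, assuming hypothetical names that only mirror the tests (the real `cli.reportsman` implementation may differ):

```python
from enum import IntFlag


class ReportWriters(IntFlag):
    """Bit-flag values so multiple writers can be requested at once."""
    JUNIT = 1
    TAGS = 2


def selected_writers(flags):
    """Return the names of the writers enabled by a combined flag value."""
    return [w.name for w in ReportWriters if flags & w]
```

With this shape, `ReportWriters.TAGS + ReportWriters.JUNIT` selects both providers, while `0` selects none, matching the "invalid report, empty manager" test.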
@ -0,0 +1,169 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

import pytest
import common.testresult as testresult
from common.apiclientresults import ExecuteNotebookResult
from cli.resultsvalidator import (
    ExecutionResultsValidator, TestCaseFailureException,
    JobExecutionFailureException, NotebookExecutionFailureException,
    InvalidNotebookOutputException)
import json


def test__validate__results_is_none__valueerror():
    with pytest.raises(ValueError):
        ExecutionResultsValidator().validate(None)


def test__validate__results_are_empty__no_ex():
    exec_results = []
    ExecutionResultsValidator().validate(exec_results)


def test__validate__results_have_no_testcases__no_ex():
    test_results = testresult.TestResults()
    exec_result = __get_ExecuteNotebookResult(
        'SUCCESS', 'TERMINATED', test_results.serialize())
    exec_results = [exec_result]

    ExecutionResultsValidator().validate(exec_results)


def test__validate__results_have_one_testcase__no_ex():
    test_results = testresult.TestResults()
    test_case = testresult.TestResult(
        test_name="mytest_case", passed=True, execution_time=1, tags=[])
    test_results.append(test_case)

    exec_result = __get_ExecuteNotebookResult(
        'SUCCESS', 'TERMINATED', test_results.serialize())
    exec_results = [exec_result]

    ExecutionResultsValidator().validate(exec_results)


def test__validate__results_have_two_exec_results__no_ex():
    test_results = testresult.TestResults()
    test_case = testresult.TestResult(
        test_name="mytest_case", passed=True, execution_time=1, tags=[])
    test_results.append(test_case)

    exec_result = __get_ExecuteNotebookResult(
        'SUCCESS', 'TERMINATED', test_results.serialize())
    exec_results = [exec_result, exec_result]

    ExecutionResultsValidator().validate(exec_results)


def test__validate__results_have_two_testcases__no_ex():
    test_results = testresult.TestResults()
    test_case = testresult.TestResult(
        test_name="mytest_case", passed=True, execution_time=1, tags=[])
    test_results.append(test_case)
    test_case = testresult.TestResult(
        test_name="mytest2_case", passed=True, execution_time=1, tags=[])
    test_results.append(test_case)

    exec_result = __get_ExecuteNotebookResult(
        'SUCCESS', 'TERMINATED', test_results.serialize())
    exec_results = [exec_result]

    ExecutionResultsValidator().validate(exec_results)


def test__validate__results_have_two_testcases_one_failure__throws_testcasefailureexception():
    test_results = testresult.TestResults()
    test_case = testresult.TestResult(
        test_name="mytest_case", passed=True, execution_time=1, tags=[])
    test_results.append(test_case)
    test_case = testresult.TestResult(
        test_name="mytest2_case", passed=False, execution_time=1, tags=[])
    test_results.append(test_case)

    exec_result = __get_ExecuteNotebookResult(
        'SUCCESS', 'TERMINATED', test_results.serialize())
    exec_results = [exec_result]

    with pytest.raises(TestCaseFailureException):
        ExecutionResultsValidator().validate(exec_results)


def test__validate__results_have_failed_testcase__throws_testcasefailureexception():
    test_results = testresult.TestResults()
    test_case = testresult.TestResult(
        test_name="mytest_case", passed=False, execution_time=1, tags=[])
    test_results.append(test_case)

    exec_result = __get_ExecuteNotebookResult(
        'SUCCESS', 'TERMINATED', test_results.serialize())
    exec_results = [exec_result]

    with pytest.raises(TestCaseFailureException):
        ExecutionResultsValidator().validate(exec_results)


def test__validate__results_have_invalid_output__throws_invalidnotebookoutputexception():
    exec_result = __get_ExecuteNotebookResult(
        'SUCCESS', 'TERMINATED', '')
    exec_results = [exec_result]

    with pytest.raises(InvalidNotebookOutputException):
        ExecutionResultsValidator().validate(exec_results)


def test__validate__results_with_notebook_failure__throws_notebookexecutionfailureexception():
    test_results = testresult.TestResults()
    test_case = testresult.TestResult(
        test_name="mytest_case", passed=False, execution_time=1, tags=[])
    test_results.append(test_case)

    exec_result = __get_ExecuteNotebookResult(
        'FAILED', 'TERMINATED', test_results.serialize())
    exec_results = [exec_result]

    with pytest.raises(NotebookExecutionFailureException):
        ExecutionResultsValidator().validate(exec_results)


def test__validate__results_with_job_failure__throws_jobexecutionfailureexception():
    test_results = testresult.TestResults()
    test_case = testresult.TestResult(
        test_name="mytest_case", passed=False, execution_time=1, tags=[])
    test_results.append(test_case)

    exec_result = __get_ExecuteNotebookResult(
        'FAILED', 'INTERNAL_ERROR', test_results.serialize())
    exec_results = [exec_result]

    with pytest.raises(JobExecutionFailureException):
        ExecutionResultsValidator().validate(exec_results)


def __get_ExecuteNotebookResult(result_state, life_cycle_state, notebook_result):
    data_json = """
        {"notebook_output":
        {"result": "IHaveReturned", "truncated": false},
        "metadata":
        {"execution_duration": 15000,
        "run_type": "SUBMIT_RUN",
        "cleanup_duration": 0,
        "number_in_job": 1,
        "cluster_instance":
        {"cluster_id": "0925-141d1222-narcs242",
        "spark_context_id": "803963628344534476"},
        "creator_user_name": "abc@microsoft.com",
        "task": {"notebook_task": {"notebook_path": "/test_mynotebook"}},
        "run_id": 7, "start_time": 1569887259173,
        "job_id": 4,
        "state": {"result_state": "SUCCESS", "state_message": "",
        "life_cycle_state": "TERMINATED"}, "setup_duration": 2000,
        "run_page_url": "https://westus2.azuredatabricks.net/?o=14702dasda6094293890#job/4/run/1",
        "cluster_spec": {"existing_cluster_id": "0925-141122-narcs242"}, "run_name": "myrun"}}
        """
    data_dict = json.loads(data_json)
    data_dict['notebook_output']['result'] = notebook_result
    data_dict['metadata']['state']['result_state'] = result_state
    data_dict['metadata']['state']['life_cycle_state'] = life_cycle_state

    return ExecuteNotebookResult.from_job_output(data_dict)
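Taken together, the validator tests pin down a precedence order: job-level failure, then notebook-level failure, then invalid output, then failed test cases. A condensed sketch of that dispatch, using hypothetical names (the real `cli.resultsvalidator` is more involved and works on `ExecuteNotebookResult` objects):

```python
class JobExecutionFailure(Exception):
    """Run did not terminate cleanly (e.g. INTERNAL_ERROR)."""


class NotebookExecutionFailure(Exception):
    """Run terminated but the notebook itself reported FAILED."""


class InvalidNotebookOutput(Exception):
    """Notebook returned no parseable test results."""


class TestCaseFailure(Exception):
    """At least one test case in the results did not pass."""


def validate_result(life_cycle_state, result_state, test_results):
    """test_results is a list of (name, passed) pairs, or None if unparseable."""
    if life_cycle_state != 'TERMINATED':
        raise JobExecutionFailure(life_cycle_state)
    if result_state != 'SUCCESS':
        raise NotebookExecutionFailure(result_state)
    if test_results is None:
        raise InvalidNotebookOutput()
    if any(not passed for _, passed in test_results):
        raise TestCaseFailure()
```

This ordering reproduces the test expectations: `FAILED`/`INTERNAL_ERROR` raises the job-level exception before the notebook-level one is considered, and a failed test case only surfaces when the run itself succeeded.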
@ -0,0 +1,222 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

import pytest
from common import apiclient as client
from common.apiclient import DatabricksAPIClient
import os
import json


def test__databricks_client__token_host_notset__clientfails(mocker):
    mocker.patch.dict(os.environ, {'DATABRICKS_HOST': ''})
    mocker.patch.dict(os.environ, {'DATABRICKS_TOKEN': ''})

    with pytest.raises(client.InvalidConfigurationException):
        dbclient = client.databricks_client()


def test__databricks_client__token_host_set__clientreturns(mocker):
    mocker.patch.dict(os.environ, {'DATABRICKS_HOST': 'myhost'})
    mocker.patch.dict(os.environ, {'DATABRICKS_TOKEN': 'mytoken'})

    dbclient = client.databricks_client()

    assert isinstance(dbclient, DatabricksAPIClient)


def test__list_notebooks__onenotebook__okay(mocker):
    db = __get_client(mocker)
    mocker.patch.object(db.inner_dbclient.workspace, 'list')

    objects = """{"objects":[
        {"object_type":"NOTEBOOK","path":"/nutfixjob","language":"PYTHON"},
        {"object_type":"DIRECTORY","path":"/ETL-Part-3-1.0.3"}]}"""

    db.inner_dbclient.workspace.list.return_value = json.loads(objects)

    notebooks = db.list_notebooks('/')

    assert len(notebooks) == 1


def test__list_notebooks__zeronotebook__okay(mocker):
    db = __get_client(mocker)
    mocker.patch.object(db.inner_dbclient.workspace, 'list')

    objects = """{"objects":[
        {"object_type":"DIRECTORY","path":"/ETL-Part-3-1.0.3"}]}"""

    db.inner_dbclient.workspace.list.return_value = json.loads(objects)

    notebooks = db.list_notebooks('/')

    assert len(notebooks) == 0


def test__execute_notebook__emptypath__valueerror(mocker):
    db = __get_client(mocker)

    with pytest.raises(ValueError):
        db.execute_notebook('', 'cluster')


def test__execute_notebook__nonepath__valueerror(mocker):
    db = __get_client(mocker)

    with pytest.raises(ValueError):
        db.execute_notebook(None, 'cluster')


def test__execute_notebook__emptycluster__valueerror(mocker):
    db = __get_client(mocker)

    with pytest.raises(ValueError):
        db.execute_notebook('/', '')


def test__execute_notebook__non_dict_params__valueerror(mocker):
    db = __get_client(mocker)

    with pytest.raises(ValueError):
        db.execute_notebook('/', 'cluster', notebook_params='')


def test__execute_notebook__nonecluster__valueerror(mocker):
    db = __get_client(mocker)

    with pytest.raises(ValueError):
        db.execute_notebook('/', None)


def test__execute_notebook__success__executeresult_has_run_url(mocker):
    run_page_url = "http://runpage"
    output_data = __get_submit_run_response(
        'SUCCESS', 'TERMINATED', '', run_page_url)
    run_id = {}
    run_id['run_id'] = 1
    db = __get_client_for_execute_notebook(mocker, output_data, run_id)

    result = db.execute_notebook('/mynotebook', 'clusterid')

    assert result.notebook_run_page_url == run_page_url


def test__execute_notebook__failure__executeresult_has_run_url(mocker):
    run_page_url = "http://runpage"
    output_data = __get_submit_run_response(
        'FAILURE', 'TERMINATED', '', run_page_url)
    run_id = {}
    run_id['run_id'] = 1
    db = __get_client_for_execute_notebook(mocker, output_data, run_id)

    result = db.execute_notebook('/mynotebook', 'clusterid')

    assert result.notebook_run_page_url == run_page_url


def test__execute_notebook__terminatestate__success(mocker):
    output_data = __get_submit_run_response('SUCCESS', 'TERMINATED', '')
    run_id = {}
    run_id['run_id'] = 1
    db = __get_client_for_execute_notebook(mocker, output_data, run_id)

    result = db.execute_notebook('/mynotebook', 'clusterid')

    assert result.task_result_state == 'TERMINATED'


def test__execute_notebook__skippedstate__resultstate_is_SKIPPED(mocker):
    output_data = __get_submit_run_response('', 'SKIPPED', '')
    run_id = {}
    run_id['run_id'] = 1
    db = __get_client_for_execute_notebook(mocker, output_data, run_id)

    result = db.execute_notebook('/mynotebook', 'clusterid')

    assert result.task_result_state == 'SKIPPED'


def test__execute_notebook__internal_error_state__resultstate_is_INTERNAL_ERROR(mocker):
    output_data = __get_submit_run_response('', 'INTERNAL_ERROR', '')
    run_id = {}
    run_id['run_id'] = 1
    db = __get_client_for_execute_notebook(mocker, output_data, run_id)

    result = db.execute_notebook('/mynotebook', 'clusterid')

    assert result.task_result_state == 'INTERNAL_ERROR'


def test__execute_notebook__timeout_1_sec_lcs_isrunning__timeoutexception(mocker):
    output_data = __get_submit_run_response('', 'RUNNING', '')
    run_id = {}
    run_id['run_id'] = 1
    db = __get_client_for_execute_notebook(mocker, output_data, run_id)

    with pytest.raises(client.TimeOutException):
        db.min_timeout = 1
        result = db.execute_notebook('/mynotebook', 'clusterid', timeout=1)


def test__execute_notebook__timeout_less_than_min__valueerror(mocker):
    output_data = __get_submit_run_response('', 'RUNNING', '')
    run_id = {}
    run_id['run_id'] = 1
    db = __get_client_for_execute_notebook(mocker, output_data, run_id)

    with pytest.raises(ValueError):
        db.min_timeout = 10
        result = db.execute_notebook('/mynotebook', 'clusterid', timeout=1)


default_run_page_url = 'https://westus2.azuredatabricks.net/?o=14702dasda6094293890#job/4/run/1'


def __get_submit_run_response(task_result_state, life_cycle_state, result, run_page_url=default_run_page_url):
    data_json = """
        {"notebook_output":
        {"result": "IHaveReturned", "truncated": false},
        "metadata":
        {"execution_duration": 15000,
        "run_type": "SUBMIT_RUN",
        "cleanup_duration": 0,
        "number_in_job": 1,
        "cluster_instance":
        {"cluster_id": "0925-141d1222-narcs242",
        "spark_context_id": "803963628344534476"},
        "creator_user_name": "abc@microsoft.com",
        "task": {"notebook_task": {"notebook_path": "/mynotebook"}},
        "run_id": 7, "start_time": 1569887259173,
        "job_id": 4,
        "state": {"result_state": "SUCCESS", "state_message": "",
        "life_cycle_state": "TERMINATED"}, "setup_duration": 2000,
        "run_page_url": "https://westus2.azuredatabricks.net/?o=14702dasda6094293890#job/4/run/1",
        "cluster_spec": {"existing_cluster_id": "0925-141122-narcs242"}, "run_name": "myrun"}}
        """
    data_dict = json.loads(data_json)
    data_dict['notebook_output']['result'] = result
    data_dict['metadata']['state']['result_state'] = task_result_state
    data_dict['metadata']['state']['life_cycle_state'] = life_cycle_state
    data_dict['metadata']['run_page_url'] = run_page_url

    return json.dumps(data_dict)


def __get_client_for_execute_notebook(mocker, output_data, run_id):
    db = __get_client(mocker)
    mocker.patch.object(db.inner_dbclient.jobs, 'submit_run')
    db.inner_dbclient.jobs.submit_run.return_value = run_id
    mocker.patch.object(db.inner_dbclient.jobs, 'get_run_output')
    db.inner_dbclient.jobs.get_run_output.return_value = json.loads(
        output_data)

    return db


def __get_client(mocker):
    mocker.patch.dict(os.environ, {'DATABRICKS_HOST': 'myhost'})
    mocker.patch.dict(os.environ, {'DATABRICKS_TOKEN': 'mytoken'})

    return DatabricksAPIClient()
@ -0,0 +1,55 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

import pytest
import os
from common import authconfig as auth


def test_tokenhostset_okay(mocker):
    mocker.patch.dict(os.environ, {'DATABRICKS_HOST': 'host'})
    mocker.patch.dict(os.environ, {'DATABRICKS_TOKEN': 'token'})

    config = auth.get_auth_config()

    # Assert
    assert config is not None
    assert config.host == 'host'
    assert config.token == 'token'


def test_onlytokenset_none(mocker):
    mocker.patch.dict(os.environ, {'DATABRICKS_HOST': ''})
    mocker.patch.dict(os.environ, {'DATABRICKS_TOKEN': 'token'})

    config = auth.get_auth_config()

    # Assert
    assert config is None


def test_tokenhostsetempty_none(mocker):
    mocker.patch.dict(os.environ, {'DATABRICKS_HOST': ''})
    mocker.patch.dict(os.environ, {'DATABRICKS_TOKEN': ''})

    config = auth.get_auth_config()

    # Assert
    assert config is None


def test_onlyhostset_none(mocker):
    mocker.patch.dict(os.environ, {'DATABRICKS_HOST': 'host'})
    mocker.patch.dict(os.environ, {'DATABRICKS_TOKEN': ''})

    config = auth.get_auth_config()

    # Assert
    assert config is None


def test_tokenhostinsecureset_okay(mocker):
    mocker.patch.dict(os.environ, {'DATABRICKS_HOST': 'host'})
    mocker.patch.dict(os.environ, {'DATABRICKS_TOKEN': 'token'})
    mocker.patch.dict(os.environ, {'DATABRICKS_INSECURE': 'insecure'})

    config = auth.get_auth_config()

    # Assert
    assert config is not None
    assert config.host == 'host'
    assert config.token == 'token'
    assert config.insecure == 'insecure'
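The behavior these tests describe can be summarized in a few lines. The sketch below is an assumed shape, not the actual `common.authconfig` module: a config object is returned only when both `DATABRICKS_HOST` and `DATABRICKS_TOKEN` are set to non-empty values, with `DATABRICKS_INSECURE` passed through optionally.

```python
import os


class AuthConfig:
    """Hypothetical container mirroring the attributes the tests assert on."""

    def __init__(self, host, token, insecure):
        self.host = host
        self.token = token
        self.insecure = insecure


def get_auth_config():
    host = os.environ.get('DATABRICKS_HOST', '')
    token = os.environ.get('DATABRICKS_TOKEN', '')
    # Either value missing or empty means no usable configuration.
    if not host or not token:
        return None
    return AuthConfig(host, token, os.environ.get('DATABRICKS_INSECURE'))
```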
@ -0,0 +1,97 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

import pytest
from common.httpretrier import HTTPRetrier
import requests
from requests.exceptions import HTTPError
from databricks_api import DatabricksAPI


def test__execute__no_exception__returns_value():
    retrier = HTTPRetrier()
    value = 'hello'

    return_value = retrier.execute(_get_value, value)

    assert return_value == value


def test__execute__no_exception_named_args__returns_value():
    retrier = HTTPRetrier()
    value = 'hello'

    return_value = retrier.execute(_get_value, return_value=value)

    assert return_value == value


def test__execute__no_exception_named_args_set_first_arg__returns_value():
    retrier = HTTPRetrier()
    value = 'hello'

    return_values = retrier.execute(_get_values, value1=value)

    assert return_values[0] == value
    assert return_values[1] is None


def test__execute__no_exception_named_args_set_second_arg__returns_value():
    retrier = HTTPRetrier()
    value = 'hello'

    return_values = retrier.execute(_get_values, value2=value)

    assert return_values[0] is None
    assert return_values[1] == value


def test__execute__raises_non_http_exception__exception_arises(mocker):
    retrier = HTTPRetrier()
    raiser = ExceptionRaiser(0, ValueError)

    with pytest.raises(ValueError):
        return_value = retrier.execute(raiser.execute)


def test__execute__raises_500_http_exception__retries_twice_and_raises(mocker):
    retrier = HTTPRetrier(2, 1)

    db = DatabricksAPI(host='HOST', token='TOKEN')
    mock_request = mocker.patch.object(db.client.session, 'request')
    mock_resp = requests.models.Response()
    mock_resp.status_code = 500
    mock_request.return_value = mock_resp

    with pytest.raises(HTTPError):
        return_value = retrier.execute(db.jobs.get_run_output, 1)
    assert retrier._tries == 2


def test__execute__raises_403_http_exception__no_retries_and_raises(mocker):
    retrier = HTTPRetrier(2, 1)

    db = DatabricksAPI(host='HOST', token='TOKEN')
    mock_request = mocker.patch.object(db.client.session, 'request')
    mock_resp = requests.models.Response()
    mock_resp.status_code = 403
    mock_request.return_value = mock_resp

    with pytest.raises(HTTPError):
        return_value = retrier.execute(db.jobs.get_run_output, 1)
    assert retrier._tries == 0


def _get_value(return_value):
    return return_value


def _get_values(value1=None, value2=None):
    return value1, value2


class ExceptionRaiser(object):
    def __init__(self, raise_after, exception):
        self._raise_after = raise_after
        self._called = 1
        self._exception = exception

    def execute(self):
        if self._called > self._raise_after:
            raise self._exception()
        self._called = self._called + 1
        return self._called
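The 500-vs-403 tests above capture the core retry contract: retry up to a limit on transient server errors, but re-raise immediately on non-retryable failures. A minimal sketch of that loop, using a hypothetical `SimpleRetrier` (the real `common.httpretrier.HTTPRetrier` inspects HTTP status codes and accepts a delay in seconds; here `IOError` stands in for a retryable error):

```python
import time


class SimpleRetrier:
    """Retry a callable up to max_tries times on retryable errors."""

    def __init__(self, max_tries=2, delay=0):
        self._max_tries = max_tries
        self._delay = delay
        self.tries = 0  # number of failed attempts observed

    def execute(self, func, *args, **kwargs):
        while True:
            try:
                return func(*args, **kwargs)
            except IOError:  # stand-in for a retryable HTTP 5xx failure
                self.tries += 1
                if self.tries >= self._max_tries:
                    raise  # budget exhausted: surface the last error
                time.sleep(self._delay)
```

A non-retryable exception (anything other than the retryable type) propagates out of `execute` on the first attempt, which is the behavior the 403 test asserts with `_tries == 0`.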
@ -0,0 +1,53 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

import pytest
import common.utils as utils


def test__recursive_find__2_levels_value__value(mocker):
    keys = ["a", "b"]
    test_dict = __get_test_dict()
    value = utils.recursive_find(test_dict, keys)

    assert value == "c"


def test__recursive_find__3_levels_no_value__none(mocker):
    keys = ["a", "b", "c"]
    test_dict = __get_test_dict()
    value = utils.recursive_find(test_dict, keys)

    assert value is None


def test__recursive_find__3_levels_value__value(mocker):
    keys = ["a", "C", "D"]
    test_dict = __get_test_dict()
    value = utils.recursive_find(test_dict, keys)

    assert value == "E"


def test__recursive_find__2_levels_dict__dict(mocker):
    keys = ["a", "C"]
    test_dict = __get_test_dict()
    value = utils.recursive_find(test_dict, keys)

    assert isinstance(value, dict)


def __get_test_dict():
    test_dict = {"a": {"b": "c", "C": {"D": "E"}}, "1": {"2": {"3": "4"}}}

    return test_dict
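The four cases above fully pin down the lookup semantics. A sketch of a `recursive_find` consistent with them (assumed behavior; the real `common.utils.recursive_find` may be written differently): walk a nested dict along the key path, returning `None` as soon as a key is missing or an intermediate value is not a dict.

```python
def recursive_find(data, keys):
    """Follow keys down a nested dict; return None if the path breaks off."""
    value = data
    for key in keys:
        if not isinstance(value, dict):
            return None  # e.g. reached a leaf string with keys left over
        value = value.get(key)
    return value


test_dict = {"a": {"b": "c", "C": {"D": "E"}}, "1": {"2": {"3": "4"}}}
```

Note that a partial path such as `["a", "C"]` returns the intermediate dict itself, matching the `2_levels_dict__dict` test.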
@ -0,0 +1,553 @@
|
|||
"""
|
||||
Copyright (c) Microsoft Corporation.
|
||||
Licensed under the MIT license.
|
||||
"""
|
||||
|
||||
import pytest
|
||||
import os
|
||||
import json
|
||||
from common.api import Nutter, TestNotebook, NutterStatusEvents
|
||||
import common.api as nutter_api
|
||||
from common.testresult import TestResults, TestResult
|
||||
from common.api import TestNamePatternMatcher
|
||||
from common.resultreports import JunitXMLReportWriter
|
||||
from common.resultreports import TagsReportWriter
|
||||
from common.apiclient import WorkspacePath, DatabricksAPIClient
|
||||
from common.statuseventhandler import StatusEventsHandler, EventHandler, StatusEvent
|
||||
|
||||
def test__workspacepath__empty_object_response__instance_is_created():
|
||||
objects = {}
|
||||
workspace_path = WorkspacePath.from_api_response(objects)
|
||||
|
||||
def test__get_report_writer__junitxmlreportwriter__valid_instance():
|
||||
writer = nutter_api.get_report_writer('JunitXMLReportWriter')
|
||||
|
||||
assert isinstance(writer, JunitXMLReportWriter)
|
||||
|
||||
|
||||
def test__get_report_writer__tagsreportwriter__valid_instance():
|
||||
writer = nutter_api.get_report_writer('TagsReportWriter')
|
||||
|
||||
assert isinstance(writer, TagsReportWriter)
|
||||
|
||||
|
def test__list_tests__onetest__okay(mocker):
    nutter = _get_nutter(mocker)
    dbapi_client = _get_client(mocker)
    nutter.dbclient = dbapi_client
    mocker.patch.object(nutter.dbclient, 'list_objects')

    workspace_path_1 = _get_workspacepathobject(
        [('NOTEBOOK', '/mynotebook'), ('NOTEBOOK', '/test_mynotebook')])

    nutter.dbclient.list_objects.return_value = workspace_path_1

    tests = nutter.list_tests("/")

    assert len(tests) == 1
    assert tests[0] == TestNotebook('test_mynotebook', '/test_mynotebook')


def test__list_tests__onetest_in_folder__okay(mocker):
    nutter = _get_nutter(mocker)
    dbapi_client = _get_client(mocker)
    nutter.dbclient = dbapi_client
    mocker.patch.object(nutter.dbclient, 'list_objects')

    workspace_path_1 = _get_workspacepathobject(
        [('NOTEBOOK', '/folder/mynotebook'), ('NOTEBOOK', '/folder/test_mynotebook')])

    nutter.dbclient.list_objects.return_value = workspace_path_1

    tests = nutter.list_tests("/folder")

    assert len(tests) == 1
    assert tests[0] == TestNotebook(
        'test_mynotebook', '/folder/test_mynotebook')


@pytest.mark.skip('No longer needed')
def test__list_tests__response_without_root_object__okay(mocker):
    nutter = _get_nutter(mocker)
    dbapi_client = _get_client(mocker)
    nutter.dbclient = dbapi_client
    mocker.patch.object(nutter.dbclient, 'list_objects')

    objects = """{"objects":[
        {"object_type":"NOTEBOOK","path":"/mynotebook","language":"PYTHON"},
        {"object_type":"NOTEBOOK","path":"/test_mynotebook","language":"PYTHON"}]}"""

    nutter.dbclient.list_notebooks.return_value = WorkspacePath(
        json.loads(objects)['objects'])

    tests = nutter.list_tests("/")

    assert len(tests) == 1
    assert tests[0] == TestNotebook('test_mynotebook', '/test_mynotebook')


def test__list_tests__onetest_uppercase_name__okay(mocker):
    nutter = _get_nutter(mocker)
    dbapi_client = _get_client(mocker)
    nutter.dbclient = dbapi_client
    mocker.patch.object(nutter.dbclient, 'list_objects')

    workspace_path_1 = _get_workspacepathobject(
        [('NOTEBOOK', '/mynotebook'), ('NOTEBOOK', '/TEST_mynote')])

    nutter.dbclient.list_objects.return_value = workspace_path_1

    tests = nutter.list_tests("/")

    assert len(tests) == 1
    assert tests == [TestNotebook('TEST_mynote', '/TEST_mynote')]


def test__list_tests__nutterstatusevents_testlisting_sequence_is_fired(mocker):
    event_handler = TestEventHandler()
    nutter = _get_nutter(mocker, event_handler)
    dbapi_client = _get_client(mocker)
    nutter.dbclient = dbapi_client
    mocker.patch.object(nutter.dbclient, 'list_objects')

    workspace_path_1 = _get_workspacepathobject(
        [('NOTEBOOK', '/mynotebook'), ('NOTEBOOK', '/TEST_mynote')])

    nutter.dbclient.list_objects.return_value = workspace_path_1

    tests = nutter.list_tests("/")

    status_event = event_handler.get_item()
    assert status_event.event == NutterStatusEvents.TestsListing

    status_event = event_handler.get_item()
    assert status_event.event == NutterStatusEvents.TestsListingResults
    assert status_event.data == 1
def test__list_tests_recursively__1test1dir1test__2_tests(mocker):
    nutter = _get_nutter(mocker)
    dbapi_client = _get_client(mocker)
    nutter.dbclient = dbapi_client
    mocker.patch.object(nutter.dbclient, 'list_objects')

    workspace_path_1 = _get_workspacepathobject(
        [('NOTEBOOK', '/test_1'), ('DIRECTORY', '/p')])
    workspace_path_2 = _get_workspacepathobject([('NOTEBOOK', '/p/test_1')])

    nutter.dbclient.list_objects.side_effect = [
        workspace_path_1, workspace_path_2]

    tests = nutter.list_tests("/", True)

    expected = [TestNotebook('test_1', '/test_1'),
                TestNotebook('test_1', '/p/test_1')]
    assert expected == tests
    assert nutter.dbclient.list_objects.call_count == 2


def test__list_tests_recursively__1test1dir2test__3_tests(mocker):
    nutter = _get_nutter(mocker)
    dbapi_client = _get_client(mocker)
    nutter.dbclient = dbapi_client
    mocker.patch.object(nutter.dbclient, 'list_objects')

    workspace_path_1 = _get_workspacepathobject(
        [('NOTEBOOK', '/test_1'), ('DIRECTORY', '/p')])
    workspace_path_2 = _get_workspacepathobject(
        [('NOTEBOOK', '/p/test_1'), ('NOTEBOOK', '/p/test_2')])

    nutter.dbclient.list_objects.side_effect = [
        workspace_path_1, workspace_path_2]

    tests = nutter.list_tests("/", True)

    expected = [TestNotebook('test_1', '/test_1'),
                TestNotebook('test_1', '/p/test_1'),
                TestNotebook('test_2', '/p/test_2')]
    assert expected == tests
    assert nutter.dbclient.list_objects.call_count == 2


def test__list_tests_recursively__1test1dir1dir__1_test(mocker):
    nutter = _get_nutter(mocker)
    dbapi_client = _get_client(mocker)
    nutter.dbclient = dbapi_client
    mocker.patch.object(nutter.dbclient, 'list_objects')

    workspace_path_1 = _get_workspacepathobject(
        [('NOTEBOOK', '/test_1'), ('DIRECTORY', '/p')])
    workspace_path_2 = _get_workspacepathobject([('DIRECTORY', '/p/c')])
    workspace_path_3 = _get_workspacepathobject([])

    nutter.dbclient.list_objects.side_effect = [
        workspace_path_1, workspace_path_2, workspace_path_3]

    tests = nutter.list_tests("/", True)

    expected = [TestNotebook('test_1', '/test_1')]
    assert expected == tests
    assert nutter.dbclient.list_objects.call_count == 3


def test__list_tests__notest__empty_list(mocker):
    nutter = _get_nutter(mocker)
    dbapi_client = _get_client(mocker)
    nutter.dbclient = dbapi_client
    _mock_dbclient_list_objects(mocker, dbapi_client, [
        ('NOTEBOOK', '/my'), ('NOTEBOOK', '/my2')])

    results = nutter.list_tests("/")

    assert len(results) == 0
def test__run_tests__onematch_two_tests___nutterstatusevents_testlisting_scheduling_execution_sequence_is_fired(mocker):
    event_handler = TestEventHandler()
    nutter = _get_nutter(mocker, event_handler)
    test_results = TestResults()
    test_results.append(TestResult('case', True, 10, []))
    submit_response = _get_submit_run_response(
        'SUCCESS', 'TERMINATED', test_results.serialize())
    dbapi_client = _get_client_for_execute_notebook(mocker, submit_response)

    nutter.dbclient = dbapi_client
    _mock_dbclient_list_objects(mocker, dbapi_client, [
        ('NOTEBOOK', '/test_my'), ('NOTEBOOK', '/test_abc')])

    results = nutter.run_tests("/my*", "cluster")

    status_event = event_handler.get_item()
    assert status_event.event == NutterStatusEvents.TestExecutionRequest
    assert status_event.data == '/my*'

    status_event = event_handler.get_item()
    assert status_event.event == NutterStatusEvents.TestsListing

    status_event = event_handler.get_item()
    assert status_event.event == NutterStatusEvents.TestsListingResults
    assert status_event.data == 2

    status_event = event_handler.get_item()
    assert status_event.event == NutterStatusEvents.TestsListingFiltered
    assert status_event.data == 1

    status_event = event_handler.get_item()
    assert status_event.event == NutterStatusEvents.TestScheduling
    assert status_event.data == '/test_my'

    status_event = event_handler.get_item()
    assert status_event.event == NutterStatusEvents.TestExecuted
    assert status_event.data.success

    status_event = event_handler.get_item()
    assert status_event.event == NutterStatusEvents.TestExecutionResult
    assert status_event.data  # True if success


def test__run_tests__onematch__okay(mocker):
    nutter = _get_nutter(mocker)
    submit_response = _get_submit_run_response('SUCCESS', 'TERMINATED', '')
    dbapi_client = _get_client_for_execute_notebook(mocker, submit_response)

    nutter.dbclient = dbapi_client
    _mock_dbclient_list_objects(mocker, dbapi_client, [
        ('NOTEBOOK', '/test_my'), ('NOTEBOOK', '/my')])

    results = nutter.run_tests("/my*", "cluster")

    assert len(results) == 1
    result = results[0]
    assert result.task_result_state == 'TERMINATED'


def test__run_tests_recursively__1test1dir2test__3_tests(mocker):
    nutter = _get_nutter(mocker)
    submit_response = _get_submit_run_response('SUCCESS', 'TERMINATED', '')
    dbapi_client = _get_client_for_execute_notebook(mocker, submit_response)
    nutter.dbclient = dbapi_client

    mocker.patch.object(nutter.dbclient, 'list_objects')

    workspace_path_1 = _get_workspacepathobject(
        [('NOTEBOOK', '/test_1'), ('DIRECTORY', '/p')])
    workspace_path_2 = _get_workspacepathobject(
        [('NOTEBOOK', '/p/test_1'), ('NOTEBOOK', '/p/test_2')])

    nutter.dbclient.list_objects.side_effect = [
        workspace_path_1, workspace_path_2]

    tests = nutter.run_tests('/', 'cluster', 120, 1, True)

    assert len(tests) == 3


def test__run_tests_recursively__1dir1dir2test__2_tests(mocker):
    nutter = _get_nutter(mocker)
    submit_response = _get_submit_run_response('SUCCESS', 'TERMINATED', '')
    dbapi_client = _get_client_for_execute_notebook(mocker, submit_response)
    nutter.dbclient = dbapi_client

    mocker.patch.object(nutter.dbclient, 'list_objects')

    workspace_path_1 = _get_workspacepathobject([('DIRECTORY', '/p')])
    workspace_path_2 = _get_workspacepathobject([('DIRECTORY', '/c')])
    workspace_path_3 = _get_workspacepathobject(
        [('NOTEBOOK', '/p/c/test_1'), ('NOTEBOOK', '/p/c/test_2')])

    nutter.dbclient.list_objects.side_effect = [
        workspace_path_1, workspace_path_2, workspace_path_3]

    tests = nutter.run_tests('/', 'cluster', 120, 1, True)

    assert len(tests) == 2


def test__run_tests__onematch_suffix_is_uppercase__okay(mocker):
    nutter = _get_nutter(mocker)
    submit_response = _get_submit_run_response('SUCCESS', 'TERMINATED', '')
    dbapi_client = _get_client_for_execute_notebook(mocker, submit_response)

    nutter.dbclient = dbapi_client
    _mock_dbclient_list_objects(mocker, dbapi_client, [
        ('NOTEBOOK', '/TEST_my'), ('NOTEBOOK', '/my')])

    results = nutter.run_tests("/my*", "cluster")

    assert len(results) == 1
    assert results[0].task_result_state == 'TERMINATED'


def test__run_tests__nomatch_case_sensitive__okay(mocker):
    nutter = _get_nutter(mocker)
    submit_response = _get_submit_run_response('SUCCESS', 'TERMINATED', '')
    dbapi_client = _get_client_for_execute_notebook(mocker, submit_response)

    nutter.dbclient = dbapi_client
    _mock_dbclient_list_objects(mocker, dbapi_client, [
        ('NOTEBOOK', '/test_MY'), ('NOTEBOOK', '/my')])

    results = nutter.run_tests("/my*", "cluster")

    assert len(results) == 0


def test__run_tests__twomatches_with_pattern__okay(mocker):
    submit_response = _get_submit_run_response('SUCCESS', 'TERMINATED', '')
    dbapi_client = _get_client_for_execute_notebook(mocker, submit_response)

    nutter = _get_nutter(mocker)
    nutter.dbclient = dbapi_client
    _mock_dbclient_list_objects(mocker, dbapi_client, [
        ('NOTEBOOK', '/test_my'), ('NOTEBOOK', '/test_my2')])

    results = nutter.run_tests("/my*", "cluster")

    assert len(results) == 2
    assert results[0].task_result_state == 'TERMINATED'
    assert results[1].task_result_state == 'TERMINATED'


def test__run_tests__with_invalid_pattern__valueerror(mocker):
    submit_response = _get_submit_run_response('SUCCESS', 'TERMINATED', '')
    dbapi_client = _get_client_for_execute_notebook(mocker, submit_response)
    nutter = _get_nutter(mocker)
    nutter.dbclient = dbapi_client
    _mock_dbclient_list_objects(mocker, dbapi_client, [
        ('NOTEBOOK', '/test_my'), ('NOTEBOOK', '/test_my2')])

    with pytest.raises(ValueError):
        nutter.run_tests("/my/(", "cluster")


def test__run_tests__nomatches__okay(mocker):
    submit_response = _get_submit_run_response('SUCCESS', 'TERMINATED', '')
    dbapi_client = _get_client_for_execute_notebook(mocker, submit_response)
    nutter = _get_nutter(mocker)
    nutter.dbclient = dbapi_client
    _mock_dbclient_list_objects(mocker, dbapi_client, [
        ('NOTEBOOK', '/test_my'), ('NOTEBOOK', '/test_my2')])

    results = nutter.run_tests("/abc*", "cluster")

    assert len(results) == 0
def test__to_testresults__none_output__none(mocker):
    output = None
    result = nutter_api.to_testresults(output)

    assert result is None


def test__to_testresults__non_pickle_output__none(mocker):
    output = 'NOT A PICKLE'
    result = nutter_api.to_testresults(output)

    assert result is None


def test__to_testresults__pickle_output__testresult(mocker):
    output = TestResults().serialize()
    result = nutter_api.to_testresults(output)

    assert isinstance(result, TestResults)


patterns = [
    (''),
    ('*'),
    (None),
    ('abc'),
    ('abc*'),
]


@pytest.mark.parametrize('pattern', patterns)
def test__testnamepatternmatcher_ctor_valid_pattern__instance(pattern):
    pattern_matcher = TestNamePatternMatcher(pattern)

    assert isinstance(pattern_matcher, TestNamePatternMatcher)


all_patterns = [
    (''),
    ('*'),
    (None),
]


@pytest.mark.parametrize('pattern', all_patterns)
def test__testnamepatternmatcher_ctor_valid_all_pattern__pattern_is_none(pattern):
    pattern_matcher = TestNamePatternMatcher(pattern)

    assert isinstance(pattern_matcher, TestNamePatternMatcher)
    assert pattern_matcher._pattern is None


reg_patterns = [
    ('t?as'),
    ('tt*'),
    ('e^6'),
]


@pytest.mark.parametrize('pattern', reg_patterns)
def test__testnamepatternmatcher_ctor_valid_regex_pattern__pattern_is_pattern(pattern):
    pattern_matcher = TestNamePatternMatcher(pattern)

    assert isinstance(pattern_matcher, TestNamePatternMatcher)
    assert pattern_matcher._pattern == pattern


filter_patterns = [
    ('', [], 0),
    ('a', [TestNotebook("test_a", "/test_a")], 1),
    ('*', [TestNotebook("test_a", "/test_a"), TestNotebook("test_b", "/test_b")], 2),
    ('b*', [TestNotebook("test_a", "/test_a"), TestNotebook("test_b", "/test_b")], 1),
    ('b*', [TestNotebook("test_ba", "/test_ba"), TestNotebook("test_b", "/test_b")], 2),
    ('c*', [TestNotebook("test_a", "/test_a"), TestNotebook("test_b", "/test_b")], 0),
]


@pytest.mark.parametrize('pattern, list_results, expected_count', filter_patterns)
def test__filter_by_pattern__valid_scenarios__result_len_is_expected_count(pattern, list_results, expected_count):
    pattern_matcher = TestNamePatternMatcher(pattern)
    filtered = pattern_matcher.filter_by_pattern(list_results)

    assert len(filtered) == expected_count


invalid_patterns = [
    ('('),
    ('--)'),
]


@pytest.mark.parametrize('pattern', invalid_patterns)
def test__testnamepatternmatcher_ctor__invalid_pattern__valueerror(pattern):
    with pytest.raises(ValueError):
        TestNamePatternMatcher(pattern)
def _get_submit_run_response(result_state, life_cycle_state, result):
    data_json = """
        {"notebook_output":
        {"result": "IHaveReturned", "truncated": false},
        "metadata":
        {"execution_duration": 15000,
        "run_type": "SUBMIT_RUN",
        "cleanup_duration": 0,
        "number_in_job": 1,
        "cluster_instance":
        {"cluster_id": "0925-141d1222-narcs242",
        "spark_context_id": "803963628344534476"},
        "creator_user_name": "abc@microsoft.com",
        "task": {"notebook_task": {"notebook_path": "/mynotebook"}},
        "run_id": 7, "start_time": 1569887259173,
        "job_id": 4,
        "state": {"result_state": "SUCCESS", "state_message": "",
        "life_cycle_state": "TERMINATED"}, "setup_duration": 2000,
        "run_page_url": "https://westus2.azuredatabricks.net/?o=14702dasda6094293890#job/4/run/1",
        "cluster_spec": {"existing_cluster_id": "0925-141122-narcs242"}, "run_name": "myrun"}}
        """
    data_dict = json.loads(data_json)
    data_dict['notebook_output']['result'] = result
    data_dict['metadata']['state']['result_state'] = result_state
    data_dict['metadata']['state']['life_cycle_state'] = life_cycle_state

    return json.dumps(data_dict)


def _get_client_for_execute_notebook(mocker, output_data):
    run_id = {}
    run_id['run_id'] = 1

    db = _get_client(mocker)
    mocker.patch.object(db.inner_dbclient.jobs, 'submit_run')
    db.inner_dbclient.jobs.submit_run.return_value = run_id
    mocker.patch.object(db.inner_dbclient.jobs, 'get_run_output')
    db.inner_dbclient.jobs.get_run_output.return_value = json.loads(output_data)

    return db


def _get_client(mocker):
    mocker.patch.dict(os.environ, {'DATABRICKS_HOST': 'myhost'})
    mocker.patch.dict(os.environ, {'DATABRICKS_TOKEN': 'mytoken'})

    return DatabricksAPIClient()


def _get_nutter(mocker, event_handler=None):
    mocker.patch.dict(os.environ, {'DATABRICKS_HOST': 'myhost'})
    mocker.patch.dict(os.environ, {'DATABRICKS_TOKEN': 'mytoken'})

    return Nutter(event_handler)


def _mock_dbclient_list_objects(mocker, dbclient, objects):
    mocker.patch.object(dbclient, 'list_objects')

    workspace_objects = _get_workspacepathobject(objects)
    dbclient.list_objects.return_value = workspace_objects


def _get_workspacepathobject(objects):
    objects_list = []
    for obj in objects:
        item = {}
        item['object_type'] = obj[0]
        item['path'] = obj[1]
        item['language'] = 'PYTHON'
        objects_list.append(item)

    root_obj = {'objects': objects_list}

    return WorkspacePath.from_api_response(root_obj)


class TestEventHandler(EventHandler):
    def __init__(self):
        self._queue = None
        super().__init__()

    def handle(self, queue):
        self._queue = queue

    def get_item(self):
        item = self._queue.get()
        self._queue.task_done()
        return item
@@ -0,0 +1,112 @@
"""
|
||||
Copyright (c) Microsoft Corporation.
|
||||
Licensed under the MIT license.
|
||||
"""
|
||||
|
||||
import pytest
|
||||
import json
|
||||
from common.api import Nutter, TestNotebook, NutterStatusEvents
|
||||
import common.api as nutter_api
|
||||
from common.testresult import TestResults, TestResult
|
||||
from common.apiclientresults import ExecuteNotebookResult, NotebookOutputResult
|
||||
|
||||
def test__is_any_error__not_terminated__true():
|
||||
exec_result = _get_run_test_response('', 'SKIPPED','')
|
||||
|
||||
assert exec_result.is_any_error
|
||||
|
||||
|
||||
def test__is_any_error__terminated_not_success__true():
|
||||
exec_result = _get_run_test_response('FAILED', 'TERMINATED','')
|
||||
|
||||
assert exec_result.is_any_error
|
||||
|
||||
|
||||
def test__is_any_error__terminated_success_invalid_results__true():
|
||||
exec_result = _get_run_test_response('SUCCESS', 'TERMINATED','')
|
||||
|
||||
assert exec_result.is_any_error
|
||||
|
||||
|
||||
def test__is_any_error__terminated_success_valid_results_with_failure__true():
|
||||
test_results = TestResults()
|
||||
test_results.append(TestResult('case',False, 10,[]))
|
||||
exec_result = _get_run_test_response('SUCCESS', 'TERMINATED',test_results.serialize())
|
||||
|
||||
assert exec_result.is_any_error
|
||||
|
||||
|
||||
|
||||
def test__is_any_error__terminated_success_valid_results_with_no_failure__false():
|
||||
test_results = TestResults()
|
||||
test_results.append(TestResult('case',True, 10,[]))
|
||||
exec_result = _get_run_test_response('SUCCESS', 'TERMINATED',test_results.serialize())
|
||||
|
||||
assert not exec_result.is_any_error
|
||||
|
||||
|
||||
|
||||
def test__is_any_error__terminated_success_2_valid_results_with_no_failure__false():
|
||||
test_results = TestResults()
|
||||
test_results.append(TestResult('case',True, 10,[]))
|
||||
test_results.append(TestResult('case2',True, 10,[]))
|
||||
exec_result = _get_run_test_response('SUCCESS', 'TERMINATED',test_results.serialize())
|
||||
|
||||
assert not exec_result.is_any_error
|
||||
|
||||
def test__is_any_error__terminated_success_2_results_1_invalid__true():
|
||||
test_results = TestResults()
|
||||
test_results.append(TestResult('case',True, 10,[]))
|
||||
test_results.append(TestResult('case2',False, 10,[]))
|
||||
exec_result = _get_run_test_response('SUCCESS', 'TERMINATED',test_results.serialize())
|
||||
|
||||
assert exec_result.is_any_error
|
||||
|
||||
def test__is_run_from_notebook__result_state_NA__returns_true():
|
||||
# Arrange
|
||||
nbr = NotebookOutputResult('N/A', None, None)
|
||||
|
||||
# Act
|
||||
is_run_from_notebook = nbr.is_run_from_notebook
|
||||
|
||||
#Assert
|
||||
assert True == is_run_from_notebook
|
||||
|
||||
def test__is_error__is_run_from_notebook_true__returns_false():
|
||||
# Arrange
|
||||
nbr = NotebookOutputResult('N/A', None, None)
|
||||
|
||||
# Act
|
||||
is_error = nbr.is_error
|
||||
|
||||
#Assert
|
||||
assert False == is_error
|
||||
|
||||
def _get_run_test_response(result_state, life_cycle_state, notebook_result):
|
||||
data_json = """
|
||||
{"notebook_output":
|
||||
{"result": "IHaveReturned", "truncated": false},
|
||||
"metadata":
|
||||
{"execution_duration": 15000,
|
||||
"run_type": "SUBMIT_RUN",
|
||||
"cleanup_duration": 0,
|
||||
"number_in_job": 1,
|
||||
"cluster_instance":
|
||||
{"cluster_id": "0925-141d1222-narcs242",
|
||||
"spark_context_id": "803963628344534476"},
|
||||
"creator_user_name": "abc@microsoft.com",
|
||||
"task": {"notebook_task": {"notebook_path": "/test_mynotebook"}},
|
||||
"run_id": 7, "start_time": 1569887259173,
|
||||
"job_id": 4,
|
||||
"state": {"result_state": "SUCCESS", "state_message": "",
|
||||
"life_cycle_state": "TERMINATED"}, "setup_duration": 2000,
|
||||
"run_page_url": "https://westus2.azuredatabricks.net/?o=14702dasda6094293890#job/4/run/1",
|
||||
"cluster_spec": {"existing_cluster_id": "0925-141122-narcs242"}, "run_name": "myrun"}}
|
||||
"""
|
||||
data_dict = json.loads(data_json)
|
||||
data_dict['notebook_output']['result'] = notebook_result
|
||||
data_dict['metadata']['state']['result_state'] = result_state
|
||||
data_dict['metadata']['state']['life_cycle_state'] = life_cycle_state
|
||||
|
||||
return ExecuteNotebookResult.from_job_output(data_dict)
|
||||
|
|
@@ -0,0 +1,44 @@
"""
|
||||
Copyright (c) Microsoft Corporation.
|
||||
Licensed under the MIT license.
|
||||
"""
|
||||
|
||||
import pytest
|
||||
from common.testresult import TestResults, TestResult
|
||||
from common.resultreports import JunitXMLReportWriter
|
||||
from common.resultreports import TagsReportWriter
|
||||
|
||||
def test_junitxmlreportwriter_add_result__invalid_params__raises_valueerror():
|
||||
writer = JunitXMLReportWriter()
|
||||
|
||||
with pytest.raises(ValueError):
|
||||
writer.add_result(None, None)
|
||||
|
||||
|
||||
def test_tagsreportwriter_add_result__invalid_params__raises_valueerror():
|
||||
writer = TagsReportWriter()
|
||||
|
||||
with pytest.raises(ValueError):
|
||||
writer.add_result(None, None)
|
||||
|
||||
|
||||
def test_tagsreportwriter_add_result__1_test_result__1_valid_row():
|
||||
writer = TagsReportWriter()
|
||||
test_results = TestResults()
|
||||
test_name = 'case1'
|
||||
duration = 10
|
||||
tags = ['hello', 'hello']
|
||||
test_result = TestResult(test_name, True, duration, tags)
|
||||
test_results.append(test_result)
|
||||
notebook_name = 'test_mynotebook'
|
||||
|
||||
writer.add_result(notebook_name, test_results)
|
||||
|
||||
assert len(writer._rows) == 1
|
||||
row = writer._rows[0]
|
||||
|
||||
assert row.notebook_name == notebook_name
|
||||
assert row.test_name == test_name
|
||||
assert row.passed_str == 'PASSED'
|
||||
assert row.duration == duration
|
||||
assert row.tags == row._to_tag_string(tags)
|
|
@@ -0,0 +1,284 @@
"""
|
||||
Copyright (c) Microsoft Corporation.
|
||||
Licensed under the MIT license.
|
||||
"""
|
||||
|
||||
import json
|
||||
import pytest
|
||||
from common.resultsview import RunCommandResultsView, TestCaseResultView, ListCommandResultView, ListCommandResultsView
|
||||
from common.apiclientresults import ExecuteNotebookResult
|
||||
from common.testresult import TestResults, TestResult
|
||||
from common.api import TestNotebook
|
||||
|
||||
def test__add_exec_result__vaid_instance__isadded(mocker):
|
||||
|
||||
test_results = TestResults().serialize()
|
||||
notebook_results = __get_ExecuteNotebookResult(
|
||||
'SUCCESS', 'TERMINATED', test_results)
|
||||
|
||||
run_results_view = RunCommandResultsView()
|
||||
run_results_view.add_exec_result(notebook_results)
|
||||
|
||||
assert run_results_view.total == 1
|
||||
|
||||
|
||||
def test__add_exec_result__vaid_instance_invalid_output__isadded(mocker):
|
||||
|
||||
test_results = "NO PICKLE"
|
||||
notebook_results = __get_ExecuteNotebookResult(
|
||||
'SUCCESS', 'TERMINATED', test_results)
|
||||
|
||||
run_results_view = RunCommandResultsView()
|
||||
run_results_view.add_exec_result(notebook_results)
|
||||
|
||||
assert run_results_view.total == 1
|
||||
|
||||
run_result_view = run_results_view.run_results[0]
|
||||
|
||||
assert len(run_result_view.test_cases_views) == 0
|
||||
|
||||
|
||||
def test__add_exec_result__vaid_instance_invalid_output__no_test_case_view(mocker):
|
||||
|
||||
test_results = "NO PICKLE"
|
||||
notebook_results = __get_ExecuteNotebookResult(
|
||||
'SUCCESS', 'TERMINATED', test_results)
|
||||
|
||||
run_results_view = RunCommandResultsView()
|
||||
run_results_view.add_exec_result(notebook_results)
|
||||
|
||||
assert run_results_view.total == 1
|
||||
|
||||
|
||||
def test__add_exec_result__vaid_instance__test_case_view(mocker):
|
||||
|
||||
test_results = TestResults()
|
||||
test_case = TestResult("mycase", True, 10, [])
|
||||
test_results.append(test_case)
|
||||
|
||||
notebook_results = __get_ExecuteNotebookResult(
|
||||
'SUCCESS', 'TERMINATED', test_results.serialize())
|
||||
|
||||
run_results_view = RunCommandResultsView()
|
||||
run_results_view.add_exec_result(notebook_results)
|
||||
|
||||
assert run_results_view.total == 1
|
||||
|
||||
run_result_view = run_results_view.run_results[0]
|
||||
|
||||
assert len(run_result_view.test_cases_views) == 1
|
||||
|
||||
tc_result_view = run_result_view.test_cases_views[0]
|
||||
|
||||
assert tc_result_view.test_case == test_case.test_name
|
||||
assert tc_result_view.passed == test_case.passed
|
||||
assert tc_result_view.execution_time == test_case.execution_time
|
||||
|
||||
|
||||
def test__add_exec_result__vaid_instance_two_test_cases__two_test_case_view(mocker):
|
||||
|
||||
test_results = TestResults()
|
||||
test_case = TestResult("mycase", True, 10, [])
|
||||
test_results.append(test_case)
|
||||
test_case = TestResult("mycase2", True, 10, [])
|
||||
test_results.append(test_case)
|
||||
|
||||
notebook_results = __get_ExecuteNotebookResult(
|
||||
'SUCCESS', 'TERMINATED', test_results.serialize())
|
||||
|
||||
run_results_view = RunCommandResultsView()
|
||||
run_results_view.add_exec_result(notebook_results)
|
||||
|
||||
assert run_results_view.total == 1
|
||||
|
||||
run_result_view = run_results_view.run_results[0]
|
||||
|
||||
assert len(run_result_view.test_cases_views) == 2
|
||||
|
||||
|
def test__get_view__for_testcase_passed__returns_correct_string(mocker):
    # Arrange
    test_case = TestResult("mycase", True, 10, [])
    test_case_result_view = TestCaseResultView(test_case)

    expected_view = "mycase (10 seconds)\n"

    # Act
    view = test_case_result_view.get_view()

    # Assert
    assert expected_view == view


def test__get_view__for_testcase_failed__returns_correct_string(mocker):
    # Arrange
    stack_trace = "Stack Trace"
    exception = AssertionError("1 == 2")
    test_case = TestResult("mycase", False, 5.43, [
        'tag1', 'tag2'], exception, stack_trace)
    test_case_result_view = TestCaseResultView(test_case)

    expected_view = "mycase (5.43 seconds)\n\n" + \
        stack_trace + "\n\n" + "AssertionError: 1 == 2" + "\n"

    # Act
    view = test_case_result_view.get_view()

    # Assert
    assert expected_view == view


def test__get_view__for_run_command_result_with_passing_test_case__shows_test_result_under_passing(mocker):
    test_results = TestResults()
    test_case = TestResult("mycase", True, 10, [])
    test_results.append(test_case)
    test_case_result_view = TestCaseResultView(test_case)
    serialized_results = test_results.serialize()

    notebook_results = __get_ExecuteNotebookResult(
        'SUCCESS', 'TERMINATED', serialized_results)

    expected_view = '\nNotebook: /test_mynotebook - Lifecycle State: TERMINATED, Result: SUCCESS\n'
    expected_view += 'Run Page URL: {}\n'.format(notebook_results.notebook_run_page_url)
    expected_view += '============================================================\n'
    expected_view += 'PASSING TESTS\n'
    expected_view += '------------------------------------------------------------\n'
    expected_view += test_case_result_view.get_view()
    expected_view += '\n\n'
    expected_view += '============================================================\n'

    run_results_view = RunCommandResultsView()
    run_results_view.add_exec_result(notebook_results)

    view = run_results_view.get_view()

    assert expected_view == view


def test__get_view__for_run_command_result_with_failing_test_case__shows_test_result_under_failing(mocker):
    test_results = TestResults()

    stack_trace = "Stack Trace"
    exception = AssertionError("1 == 2")
    test_case = TestResult("mycase", False, 5.43, [
        'tag1', 'tag2'], exception, stack_trace)
    test_case_result_view = TestCaseResultView(test_case)
    test_results.append(test_case)

    passing_test_case1 = TestResult("mycase1", True, 10, [])
    test_results.append(passing_test_case1)
    passing_test_case_result_view1 = TestCaseResultView(passing_test_case1)

    passing_test_case2 = TestResult("mycase2", True, 10, [])
    test_results.append(passing_test_case2)
    passing_test_case_result_view2 = TestCaseResultView(passing_test_case2)

    serialized_results = test_results.serialize()

    notebook_results = __get_ExecuteNotebookResult(
        'FAILURE', 'TERMINATED', serialized_results)

    expected_view = '\nNotebook: /test_mynotebook - Lifecycle State: TERMINATED, Result: FAILURE\n'
    expected_view += 'Run Page URL: {}\n'.format(notebook_results.notebook_run_page_url)
    expected_view += '============================================================\n'
    expected_view += 'FAILING TESTS\n'
    expected_view += '------------------------------------------------------------\n'
    expected_view += test_case_result_view.get_view()
    expected_view += '\n\n'
    expected_view += 'PASSING TESTS\n'
    expected_view += '------------------------------------------------------------\n'
    expected_view += passing_test_case_result_view1.get_view()
    expected_view += passing_test_case_result_view2.get_view()
    expected_view += '\n\n'
    expected_view += '============================================================\n'

    run_results_view = RunCommandResultsView()
    run_results_view.add_exec_result(notebook_results)

    view = run_results_view.get_view()

    assert expected_view == view


def test__get_view__for_list_command__with_tests_Found__shows_listing(mocker):
    test_notebook1 = TestNotebook('test_one', '/test_one')
    test_notebook2 = TestNotebook('test_two', '/test_two')
    test_notebooks = [test_notebook1, test_notebook2]
    list_result_view1 = ListCommandResultView.from_test_notebook(test_notebook1)
    list_result_view2 = ListCommandResultView.from_test_notebook(test_notebook2)

    expected_view = '\nTests Found\n'
    expected_view += '-------------------------------------------------------\n'
    expected_view += list_result_view1.get_view()
    expected_view += list_result_view2.get_view()
    expected_view += '-------------------------------------------------------\n'

    list_results_view = ListCommandResultsView(test_notebooks)

    view = list_results_view.get_view()

    assert view == expected_view


def test__get_view__for_run_command_result_with_one_passing_one_failing__shows_failing_then_passing(mocker):
    stack_trace = "Stack Trace"
    exception = AssertionError("1 == 2")
    test_case = TestResult("mycase", False, 5.43, [
        'tag1', 'tag2'], exception, stack_trace)
    test_case_result_view = TestCaseResultView(test_case)

    test_results = TestResults()
    test_results.append(test_case)
    serialized_results = test_results.serialize()

    notebook_results = __get_ExecuteNotebookResult(
        'FAILURE', 'TERMINATED', serialized_results)
|
||||
|
||||
expected_view = '\nNotebook: /test_mynotebook - Lifecycle State: TERMINATED, Result: FAILURE\n'
|
||||
expected_view += 'Run Page URL: {}\n'.format(notebook_results.notebook_run_page_url)
|
||||
expected_view += '============================================================\n'
|
||||
expected_view += 'FAILING TESTS\n'
|
||||
expected_view += '------------------------------------------------------------\n'
|
||||
expected_view += test_case_result_view.get_view()
|
||||
expected_view += '\n\n'
|
||||
expected_view += '============================================================\n'
|
||||
|
||||
run_results_view = RunCommandResultsView()
|
||||
run_results_view.add_exec_result(notebook_results)
|
||||
|
||||
view = run_results_view.get_view()
|
||||
|
||||
assert expected_view == view
|
||||
|
||||
|
||||
def __get_ExecuteNotebookResult(result_state, life_cycle_state, notebook_result):
|
||||
data_json = """
|
||||
{"notebook_output":
|
||||
{"result": "IHaveReturned", "truncated": false},
|
||||
"metadata":
|
||||
{"execution_duration": 15000,
|
||||
"run_type": "SUBMIT_RUN",
|
||||
"cleanup_duration": 0,
|
||||
"number_in_job": 1,
|
||||
"cluster_instance":
|
||||
{"cluster_id": "0925-141d1222-narcs242",
|
||||
"spark_context_id": "803963628344534476"},
|
||||
"creator_user_name": "abc@microsoft.com",
|
||||
"task": {"notebook_task": {"notebook_path": "/test_mynotebook"}},
|
||||
"run_id": 7, "start_time": 1569887259173,
|
||||
"job_id": 4,
|
||||
"state": {"result_state": "SUCCESS", "state_message": "",
|
||||
"life_cycle_state": "TERMINATED"}, "setup_duration": 2000,
|
||||
"run_page_url": "https://westus2.azuredatabricks.net/?o=14702dasda6094293890#job/4/run/1",
|
||||
"cluster_spec": {"existing_cluster_id": "0925-141122-narcs242"}, "run_name": "myrun"}}
|
||||
"""
|
||||
data_dict = json.loads(data_json)
|
||||
data_dict['notebook_output']['result'] = notebook_result
|
||||
data_dict['metadata']['state']['result_state'] = result_state
|
||||
data_dict['metadata']['state']['life_cycle_state'] = life_cycle_state
|
||||
|
||||
return ExecuteNotebookResult.from_job_output(data_dict)
|
|
@@ -0,0 +1,92 @@
"""
|
||||
Copyright (c) Microsoft Corporation.
|
||||
Licensed under the MIT license.
|
||||
"""
|
||||
|
||||
import pytest
|
||||
import common.scheduler as scheduler
|
||||
import time
|
||||
import datetime
|
||||
|
||||
|
||||
def test__run_and_wait__1_function_1_worker_exception__result_is_none_and_exception():
|
||||
func_scheduler = scheduler.get_scheduler(1)
|
||||
func_scheduler.add_function(__raise_it, Exception)
|
||||
results = func_scheduler.run_and_wait()
|
||||
assert len(results) == 1
|
||||
assert results[0].func_result is None
|
||||
assert isinstance(results[0].exception, Exception)
|
||||
|
||||
|
||||
params = [
|
||||
(1, 1, 'this'),
|
||||
(1, 2, 'this'),
|
||||
(2, 2, 'this'),
|
||||
(2, 10, 'this'),
|
||||
(2, 2, {'this': 'this'}),
|
||||
(2, 2, ('this', 'that')),
|
||||
]
|
||||
@pytest.mark.parametrize('num_of_funcs, num_of_workers, func_return_value', params)
|
||||
def test__run_and_wait__X_functions_X_workers_x_value__results_are_okay(num_of_funcs, num_of_workers, func_return_value):
|
||||
func_scheduler = scheduler.get_scheduler(num_of_workers)
|
||||
|
||||
for i in range(0, num_of_funcs):
|
||||
func_scheduler.add_function(__get_back, func_return_value)
|
||||
|
||||
results = func_scheduler.run_and_wait()
|
||||
assert len(results) == num_of_funcs
|
||||
|
||||
for result in results:
|
||||
assert result.func_result == func_return_value
|
||||
|
||||
|
||||
def test__run_and_wait__3_function_1_worker__in_sequence():
|
||||
func_scheduler = scheduler.get_scheduler(1)
|
||||
value1 = 'this1'
|
||||
func_scheduler.add_function(__get_back, value1)
|
||||
value2 = 'this2'
|
||||
func_scheduler.add_function(__get_back, value2)
|
||||
value3 = 'this3'
|
||||
func_scheduler.add_function(__get_back, value3)
|
||||
results = func_scheduler.run_and_wait()
|
||||
assert len(results) == 3
|
||||
assert results[0].func_result == value1
|
||||
assert results[1].func_result == value2
|
||||
assert results[2].func_result == value3
|
||||
|
||||
|
||||
def test__run_and_wait__2_functions_1_worker_500ms_delay__sequential_duration():
|
||||
func_scheduler = scheduler.get_scheduler(1)
|
||||
wait_time = .500
|
||||
func_scheduler.add_function(__wait, wait_time)
|
||||
func_scheduler.add_function(__wait, wait_time)
|
||||
start = time.time()
|
||||
results = func_scheduler.run_and_wait()
|
||||
end = time.time()
|
||||
delay = int(end - start)
|
||||
assert delay >= 2 * wait_time
|
||||
|
||||
|
||||
def test__run_and_wait__3_functions_3_worker_500ms_delay__less_than_sequential_duration():
|
||||
func_scheduler = scheduler.get_scheduler(1)
|
||||
wait_time = .500
|
||||
func_scheduler.add_function(__wait, wait_time)
|
||||
func_scheduler.add_function(__wait, wait_time)
|
||||
func_scheduler.add_function(__wait, wait_time)
|
||||
start = time.time()
|
||||
results = func_scheduler.run_and_wait()
|
||||
end = time.time()
|
||||
delay = int(end - start)
|
||||
assert delay < 3 * wait_time
|
||||
|
||||
|
||||
def __get_back(this):
|
||||
return this
|
||||
|
||||
|
||||
def __raise_it(exception):
|
||||
raise exception
|
||||
|
||||
|
||||
def __wait(time_to_wait):
|
||||
time.sleep(time_to_wait)
|
|
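The scheduler tests above exercise a module that is not shown in this diff. As a reading aid, here is a minimal sketch of the contract they rely on, built on `concurrent.futures`. `FunctionResult`, `FunctionScheduler`, and all internals are assumptions; only the call shapes (`get_scheduler`, `add_function`, `run_and_wait`, `func_result`, `exception`) come from the tests themselves, and the real `common.scheduler` may differ.

```python
# Hypothetical sketch of the scheduler contract the tests above exercise.
from concurrent.futures import ThreadPoolExecutor


class FunctionResult:
    def __init__(self, func_result=None, exception=None):
        self.func_result = func_result
        self.exception = exception


class FunctionScheduler:
    def __init__(self, num_of_workers):
        self._num_of_workers = num_of_workers
        self._pending = []

    def add_function(self, function, *args):
        # queue the call; nothing runs until run_and_wait()
        self._pending.append((function, args))

    def run_and_wait(self):
        # run every queued call on a bounded thread pool and collect,
        # in submission order, either its return value or its exception
        results = []
        with ThreadPoolExecutor(max_workers=self._num_of_workers) as executor:
            futures = [executor.submit(func, *args)
                       for func, args in self._pending]
            for future in futures:
                try:
                    results.append(FunctionResult(func_result=future.result()))
                except Exception as ex:
                    results.append(FunctionResult(exception=ex))
        return results


def get_scheduler(num_of_workers):
    return FunctionScheduler(num_of_workers)
```

With `max_workers` bounding the pool, one worker forces sequential execution while several workers let the 500 ms sleeps overlap, which is exactly what the timing tests assert.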
@@ -0,0 +1,51 @@
"""
|
||||
Copyright (c) Microsoft Corporation.
|
||||
Licensed under the MIT license.
|
||||
"""
|
||||
|
||||
import pytest
|
||||
import enum
|
||||
from common.statuseventhandler import StatusEventsHandler, EventHandler, StatusEvent
|
||||
|
||||
def test__add_event_and_wait__1_event__handler_receives_it():
|
||||
test_handler = TestEventHandler()
|
||||
status_handler = StatusEventsHandler(test_handler)
|
||||
|
||||
status_handler.add_event(TestStatusEvent.AnEvent, 'added')
|
||||
item = test_handler.get_item()
|
||||
status_handler.wait()
|
||||
|
||||
assert item.event == TestStatusEvent.AnEvent
|
||||
assert item.data == 'added'
|
||||
|
||||
def test__add_event_and_wait__2_event2__handler_receives_them():
|
||||
test_handler = TestEventHandler()
|
||||
status_handler = StatusEventsHandler(test_handler)
|
||||
|
||||
status_handler.add_event(TestStatusEvent.AnEvent, 'added')
|
||||
status_handler.add_event(TestStatusEvent.AnEvent, 'added')
|
||||
item = test_handler.get_item()
|
||||
item2 = test_handler.get_item()
|
||||
status_handler.wait()
|
||||
|
||||
assert item.event == TestStatusEvent.AnEvent
|
||||
assert item.data == 'added'
|
||||
|
||||
assert item2.event == TestStatusEvent.AnEvent
|
||||
assert item2.data == 'added'
|
||||
|
||||
class TestEventHandler(EventHandler):
|
||||
def __init__(self):
|
||||
self._queue = None
|
||||
super().__init__()
|
||||
|
||||
def handle(self, queue):
|
||||
self._queue = queue
|
||||
|
||||
def get_item(self):
|
||||
item = self._queue.get()
|
||||
self._queue.task_done()
|
||||
return item
|
||||
|
||||
class TestStatusEvent(enum.Enum):
|
||||
AnEvent = 1
|
|
@@ -0,0 +1,88 @@
"""
|
||||
Copyright (c) Microsoft Corporation.
|
||||
Licensed under the MIT license.
|
||||
"""
|
||||
|
||||
import pytest
|
||||
from common.testexecresults import TestExecResults
|
||||
from common.testresult import TestResults, TestResult
|
||||
|
||||
def test__ctor__test_results_not_correct_type__raises_type_error():
|
||||
with pytest.raises(TypeError):
|
||||
test_exec_result = TestExecResults("invalidtype")
|
||||
|
||||
def test__to_string__valid_test_results__creates_view_from_test_results_and_returns(mocker):
|
||||
# Arrange
|
||||
test_results = TestResults()
|
||||
test_results.append(TestResult("test1", True, 10, []))
|
||||
test_results.append(TestResult("test2", True, 10, []))
|
||||
|
||||
test_exec_result = TestExecResults(test_results)
|
||||
|
||||
mocker.patch.object(test_exec_result, 'get_ExecuteNotebookResult')
|
||||
notebook_result = TestExecResults(test_results).get_ExecuteNotebookResult("", test_results)
|
||||
test_exec_result.get_ExecuteNotebookResult.return_value = notebook_result
|
||||
|
||||
mocker.patch.object(test_exec_result.runcommand_results_view, 'add_exec_result')
|
||||
mocker.patch.object(test_exec_result.runcommand_results_view, 'get_view')
|
||||
test_exec_result.runcommand_results_view.get_view.return_value = "expectedview"
|
||||
|
||||
# Act
|
||||
view = test_exec_result.to_string()
|
||||
|
||||
# Assert
|
||||
test_exec_result.get_ExecuteNotebookResult.assert_called_once_with("", test_results)
|
||||
test_exec_result.runcommand_results_view.add_exec_result.assert_called_once_with(notebook_result)
|
||||
test_exec_result.runcommand_results_view.get_view.assert_called_once_with()
|
||||
assert view == "expectedview"
|
||||
|
||||
def test__to_string__valid_test_results_run_from_notebook__creates_view_from_test_results_and_returns(mocker):
|
||||
# Arrange
|
||||
test_results = TestResults()
|
||||
test_results.append(TestResult("test1", True, 10, []))
|
||||
test_results.append(TestResult("test2", True, 10, []))
|
||||
|
||||
test_exec_result = TestExecResults(test_results)
|
||||
|
||||
# Act
|
||||
view = test_exec_result.to_string()
|
||||
|
||||
# Assert
|
||||
assert "PASSING TESTS" in view
|
||||
assert "test1" in view
|
||||
assert "test2" in view
|
||||
|
||||
def test__exit__valid_test_results__serializes_test_results_and_passes_to_dbutils_exit(mocker):
|
||||
# Arrange
|
||||
test_results = TestResults()
|
||||
test_results.append(TestResult("test1", True, 10, []))
|
||||
test_results.append(TestResult("test2", True, 10, []))
|
||||
|
||||
test_exec_result = TestExecResults(test_results)
|
||||
|
||||
mocker.patch.object(test_results, 'serialize')
|
||||
serialized_data = "serializeddata"
|
||||
test_results.serialize.return_value = serialized_data
|
||||
|
||||
dbutils_stub = DbUtilsStub()
|
||||
|
||||
# Act
|
||||
test_exec_result.exit(dbutils_stub)
|
||||
|
||||
# Assert
|
||||
test_results.serialize.assert_called_with()
|
||||
assert True == dbutils_stub.notebook.exit_called
|
||||
assert serialized_data == dbutils_stub.notebook.data_passed
|
||||
|
||||
class DbUtilsStub:
|
||||
def __init__(self):
|
||||
self.notebook = NotebookStub()
|
||||
|
||||
class NotebookStub():
|
||||
def __init__(self):
|
||||
self.exit_called = False
|
||||
self.data_passed = ""
|
||||
|
||||
def exit(self, data):
|
||||
self.exit_called = True
|
||||
self.data_passed = data
|
|
@@ -0,0 +1,141 @@
"""
|
||||
Copyright (c) Microsoft Corporation.
|
||||
Licensed under the MIT license.
|
||||
"""
|
||||
|
||||
import pytest
|
||||
import json
|
||||
from common.testresult import TestResults, TestResult
|
||||
import pickle
|
||||
import base64
|
||||
|
||||
def test__testresults_append__type_not_testresult__throws_error():
|
||||
# Arrange
|
||||
test_results = TestResults()
|
||||
|
||||
# Act/Assert
|
||||
with pytest.raises(TypeError):
|
||||
test_results.append("Test")
|
||||
|
||||
def test__testresults_append__type_testresult__appends_testresult():
|
||||
# Arrange
|
||||
test_results = TestResults()
|
||||
|
||||
# Act
|
||||
test_results.append(TestResult("Test Name", True, 1, []))
|
||||
|
||||
# Assert
|
||||
assert len(test_results.results) == 1
|
||||
|
||||
def test__eq__test_results_not_equal__are_not_equal():
|
||||
# Arrange
|
||||
test_results = TestResults()
|
||||
test_results.append(TestResult("Test NameX", True, 1, []))
|
||||
test_results.append(TestResult("Test Name1", True, 1, [], ValueError("Error")))
|
||||
|
||||
test_results1 = TestResults()
|
||||
test_results1.append(TestResult("Test Name", True, 1, []))
|
||||
test_results1.append(TestResult("Test Name1", True, 1, [], ValueError("Error")))
|
||||
|
||||
# Act / Assert
|
||||
are_not_equal = test_results != test_results1
|
||||
assert are_not_equal == True
|
||||
|
||||
def test__deserialize__no_constraints__is_serializable_and_deserializable():
|
||||
# Arrange
|
||||
test_results = TestResults()
|
||||
|
||||
test_results.append(TestResult("Test Name", True, 1, []))
|
||||
test_results.append(TestResult("Test Name1", True, 1, [], ValueError("Error")))
|
||||
|
||||
serialized_data = test_results.serialize()
|
||||
|
||||
deserialized_data = TestResults().deserialize(serialized_data)
|
||||
|
||||
assert test_results == deserialized_data
|
||||
|
||||
def test__deserialize__empty_pickle_data__throws_exception():
|
||||
# Arrange
|
||||
test_results = TestResults()
|
||||
|
||||
invalid_pickle = ""
|
||||
|
||||
# Act / Assert
|
||||
with pytest.raises(Exception):
|
||||
test_results.deserialize(invalid_pickle)
|
||||
|
||||
def test__deserialize__invalid_pickle_data__throws_Exception():
|
||||
# Arrange
|
||||
test_results = TestResults()
|
||||
|
||||
invalid_pickle = "test"
|
||||
|
||||
# Act / Assert
|
||||
with pytest.raises(Exception):
|
||||
test_results.deserialize(invalid_pickle)
|
||||
|
||||
|
||||
def test__eq__test_results_equal_but_not_same_ref__are_equal():
|
||||
# Arrange
|
||||
test_results = TestResults()
|
||||
test_results.append(TestResult("Test Name", True, 1, []))
|
||||
test_results.append(TestResult("Test Name1", True, 1, [], ValueError("Error")))
|
||||
|
||||
test_results1 = TestResults()
|
||||
test_results1.append(TestResult("Test Name", True, 1, []))
|
||||
test_results1.append(TestResult("Test Name1", True, 1, [], ValueError("Error")))
|
||||
|
||||
# Act / Assert
|
||||
assert test_results == test_results1
|
||||
|
||||
def test__num_tests__5_test_cases__is_5():
|
||||
# Arrange
|
||||
test_results = TestResults()
|
||||
test_results.append(TestResult("Test Name", True, 1, []))
|
||||
test_results.append(TestResult("Test Name1", False, 1, [], ValueError("Error")))
|
||||
test_results.append(TestResult("Test Name1", False, 1, [], ValueError("Error")))
|
||||
test_results.append(TestResult("Test Name1", False, 1, [], ValueError("Error")))
|
||||
test_results.append(TestResult("Test Name1", False, 1, [], ValueError("Error")))
|
||||
|
||||
# Act / Assert
|
||||
assert 5 == test_results.test_cases
|
||||
|
||||
def test__num_failures__5_test_cases_4_failures__is_4():
|
||||
# Arrange
|
||||
test_results = TestResults()
|
||||
test_results.append(TestResult("Test Name", True, 1, []))
|
||||
test_results.append(TestResult("Test Name1", False, 1, [], ValueError("Error")))
|
||||
test_results.append(TestResult("Test Name1", False, 1, [], ValueError("Error")))
|
||||
test_results.append(TestResult("Test Name1", False, 1, [], ValueError("Error")))
|
||||
test_results.append(TestResult("Test Name1", False, 1, [], ValueError("Error")))
|
||||
|
||||
# Act / Assert
|
||||
assert 4 == test_results.num_failures
|
||||
|
||||
def test__total_execution_time__5_test_cases__is_sum_of_execution_times():
|
||||
# Arrange
|
||||
test_results = TestResults()
|
||||
test_results.append(TestResult("Test Name", True, 1.12, []))
|
||||
test_results.append(TestResult("Test Name1", False, 1.0005, [], ValueError("Error")))
|
||||
test_results.append(TestResult("Test Name1", False, 10.000034, [], ValueError("Error")))
|
||||
test_results.append(TestResult("Test Name1", False, 7.66, [], ValueError("Error")))
|
||||
test_results.append(TestResult("Test Name1", False, 13.21, [], ValueError("Error")))
|
||||
|
||||
# Act / Assert
|
||||
assert 32.990534 == test_results.total_execution_time
|
||||
|
||||
def test__serialize__result_data__is_base64_str():
|
||||
test_results = TestResults()
|
||||
serialized_data = test_results.serialize()
|
||||
serialized_bin_data = base64.encodebytes(pickle.dumps(test_results))
|
||||
|
||||
assert serialized_data == str(serialized_bin_data, "utf-8")
|
||||
|
||||
|
||||
def test__deserialize__data_is_base64_str__can_deserialize():
|
||||
test_results = TestResults()
|
||||
serialized_bin_data = pickle.dumps(test_results)
|
||||
serialized_str = str(base64.encodebytes(serialized_bin_data), "utf-8")
|
||||
test_results_from_data = TestResults().deserialize(serialized_str)
|
||||
|
||||
assert test_results == test_results_from_data
|
|
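The last two tests pin down the serialization contract directly: `serialize()` must be `pickle` + `base64.encodebytes` rendered as a UTF-8 string, and `deserialize()` must invert it. A minimal stand-in that satisfies only that contract looks like this; `SketchTestResults` is a hypothetical name, not the real `common.testresult.TestResults`, which carries more state and behavior.

```python
# Minimal sketch of the serialize/deserialize contract asserted above.
import base64
import pickle


class SketchTestResults:
    def __init__(self):
        self.results = []

    def __eq__(self, other):
        return self.results == other.results

    def serialize(self):
        # pickle the whole object, then base64-encode to a text-safe string
        return str(base64.encodebytes(pickle.dumps(self)), "utf-8")

    @staticmethod
    def deserialize(pickle_string):
        # invert: utf-8 text -> base64 bytes -> pickled bytes -> object
        return pickle.loads(base64.decodebytes(bytes(pickle_string, "utf-8")))
```

Base64 keeps the pickled payload printable, which matters because the string is handed to `dbutils.notebook.exit` and read back from notebook output.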
@@ -0,0 +1,181 @@
"""
|
||||
Copyright (c) Microsoft Corporation.
|
||||
Licensed under the MIT license.
|
||||
"""
|
||||
|
||||
import pytest
|
||||
from runtime.fixtureloader import FixtureLoader
|
||||
from tests.runtime.testnutterfixturebuilder import TestNutterFixtureBuilder
|
||||
|
||||
def test__get_fixture_loader__returns_fixtureloader():
|
||||
# Arrange / Act
|
||||
loader = FixtureLoader()
|
||||
|
||||
# Assert
|
||||
assert isinstance(loader, FixtureLoader)
|
||||
|
||||
def test__load_fixture__none_passed_raises__valueerror():
|
||||
# Arrange
|
||||
loader = FixtureLoader()
|
||||
|
||||
# Act
|
||||
with pytest.raises(ValueError):
|
||||
loader.load_fixture(None)
|
||||
|
||||
def test__load_fixture__one_assertion_method__adds_one_testclass_to_dictionary_with_assert_set():
|
||||
# Arrange
|
||||
test_name = "fred"
|
||||
new_class = TestNutterFixtureBuilder() \
|
||||
.with_name("MyClass") \
|
||||
.with_assertion(test_name) \
|
||||
.build()
|
||||
|
||||
loader = FixtureLoader()
|
||||
|
||||
# Act
|
||||
loaded_fixture = loader.load_fixture(new_class())
|
||||
|
||||
# Assert
|
||||
assert len(loaded_fixture) == 1
|
||||
__assert_test_case_from_dict(loaded_fixture, test_name, True, True, False, True)
|
||||
|
||||
def test__load_fixture__one_assertion_method_one_additional_method__adds_one_testclass_to_dictionary_with_assert_set():
|
||||
# Arrange
|
||||
test_name = "fred"
|
||||
new_class = TestNutterFixtureBuilder() \
|
||||
.with_name("MyClass") \
|
||||
.with_assertion(test_name) \
|
||||
.with_test(test_name) \
|
||||
.build()
|
||||
|
||||
loader = FixtureLoader()
|
||||
|
||||
# Act
|
||||
loaded_fixture = loader.load_fixture(new_class())
|
||||
|
||||
# Assert
|
||||
assert len(loaded_fixture) == 1
|
||||
__assert_test_case_from_dict(loaded_fixture, test_name, True, True, False, True)
|
||||
|
||||
def test__load_fixture__one_assertion_one_run_method__adds_one_testclass_to_dictionary_with_assert_and_run_set():
|
||||
# Arrange
|
||||
test_name = "fred"
|
||||
new_class = TestNutterFixtureBuilder() \
|
||||
.with_name("MyClass") \
|
||||
.with_assertion(test_name) \
|
||||
.with_run(test_name) \
|
||||
.build()
|
||||
|
||||
loader = FixtureLoader()
|
||||
|
||||
# Act
|
||||
loaded_fixture = loader.load_fixture(new_class())
|
||||
|
||||
# Assert
|
||||
assert len(loaded_fixture) == 1
|
||||
__assert_test_case_from_dict(loaded_fixture, test_name, True, False, False, True)
|
||||
|
||||
def test__load_fixture__before_all__no_test_case_set_because_method_exists_on_fixture():
|
||||
# Arrange
|
||||
test_name = "fred"
|
||||
new_class = TestNutterFixtureBuilder() \
|
||||
.with_name("MyClass") \
|
||||
.with_assertion(test_name) \
|
||||
.with_run(test_name) \
|
||||
.with_before_all() \
|
||||
.build()
|
||||
|
||||
loader = FixtureLoader()
|
||||
|
||||
# Act
|
||||
loaded_fixture = loader.load_fixture(new_class())
|
||||
|
||||
# Assert
|
||||
assert "before_all" not in loaded_fixture
|
||||
assert "all" not in loaded_fixture
|
||||
|
||||
def test__load_fixture__after_all__no_test_case_set_because_method_exists_on_fixture():
|
||||
# Arrange
|
||||
test_name = "fred"
|
||||
new_class = TestNutterFixtureBuilder() \
|
||||
.with_name("MyClass") \
|
||||
.with_assertion(test_name) \
|
||||
.with_run(test_name) \
|
||||
.with_after_all() \
|
||||
.build()
|
||||
|
||||
loader = FixtureLoader()
|
||||
|
||||
# Act
|
||||
loaded_fixture = loader.load_fixture(new_class())
|
||||
|
||||
# Assert
|
||||
assert "after_all" not in loaded_fixture
|
||||
assert "all" not in loaded_fixture
|
||||
|
||||
def test__load_fixture__two_assertion_one_run_method__adds_two_testclass_to_dictionary():
|
||||
# Arrange
|
||||
test_name_1 = "fred"
|
||||
test_name_2 = "hank"
|
||||
new_class = TestNutterFixtureBuilder() \
|
||||
.with_name("MyClass") \
|
||||
.with_assertion(test_name_1) \
|
||||
.with_assertion(test_name_2) \
|
||||
.with_run(test_name_1) \
|
||||
.build()
|
||||
|
||||
loader = FixtureLoader()
|
||||
|
||||
# Act
|
||||
loaded_fixture = loader.load_fixture(new_class())
|
||||
|
||||
# Assert
|
||||
assert len(loaded_fixture) == 2
|
||||
__assert_test_case_from_dict(loaded_fixture, test_name_1, True, False, False, True)
|
||||
__assert_test_case_from_dict(loaded_fixture, test_name_2, True, True, False, True)
|
||||
|
||||
def test__load_fixture__three_with_all_methods__adds_three_testclass_to_dictionary():
|
||||
# Arrange
|
||||
test_name_1 = "fred"
|
||||
test_name_2 = "hank"
|
||||
test_name_3 = "will"
|
||||
new_class = TestNutterFixtureBuilder() \
|
||||
.with_name("MyClass") \
|
||||
.with_before(test_name_1) \
|
||||
.with_before(test_name_2) \
|
||||
.with_before(test_name_3) \
|
||||
.with_run(test_name_1) \
|
||||
.with_run(test_name_2) \
|
||||
.with_run(test_name_3) \
|
||||
.with_assertion(test_name_1) \
|
||||
.with_assertion(test_name_2) \
|
||||
.with_assertion(test_name_3) \
|
||||
.with_after(test_name_1) \
|
||||
.with_after(test_name_2) \
|
||||
.with_after(test_name_3) \
|
||||
.build()
|
||||
|
||||
loader = FixtureLoader()
|
||||
|
||||
# Act
|
||||
loaded_fixture = loader.load_fixture(new_class())
|
||||
|
||||
# Assert
|
||||
assert len(loaded_fixture) == 3
|
||||
__assert_test_case_from_dict(loaded_fixture, test_name_1, False, False, False, False)
|
||||
__assert_test_case_from_dict(loaded_fixture, test_name_2, False, False, False, False)
|
||||
__assert_test_case_from_dict(loaded_fixture, test_name_3, False, False, False, False)
|
||||
|
||||
|
||||
def __assert_test_case_from_dict(test_case_dict, expected_name, before_none, run_none, assertion_none, after_none):
|
||||
assert expected_name in test_case_dict
|
||||
|
||||
test_case = test_case_dict[expected_name]
|
||||
assert (test_case.before is None) == before_none
|
||||
assert (test_case.run is None) == run_none
|
||||
assert (test_case.assertion is None) == assertion_none
|
||||
assert (test_case.after is None) == after_none
|
||||
|
||||
|
||||
|
||||
|
|
@@ -0,0 +1,340 @@
"""
|
||||
Copyright (c) Microsoft Corporation.
|
||||
Licensed under the MIT license.
|
||||
"""
|
||||
|
||||
import pytest
|
||||
from runtime.nutterfixture import NutterFixture, tag, InvalidTestFixtureException
|
||||
from runtime.testcase import TestCase
|
||||
from common.testresult import TestResult, TestResults
|
||||
from tests.runtime.testnutterfixturebuilder import TestNutterFixtureBuilder
|
||||
from common.apiclientresults import ExecuteNotebookResult
|
||||
import sys
|
||||
|
||||
def test__ctor__creates_fixture_loader():
|
||||
# Arrange / Act
|
||||
fix = SimpleTestFixture()
|
||||
|
||||
# Assert
|
||||
assert fix.data_loader is not None
|
||||
|
||||
def test__execute_tests__calls_load_fixture_on_fixture_loader(mocker):
|
||||
# Arrange
|
||||
fix = SimpleTestFixture()
|
||||
|
||||
mocker.patch.object(fix.data_loader, 'load_fixture')
|
||||
|
||||
# Act
|
||||
fix.execute_tests()
|
||||
|
||||
# Assert
|
||||
fix.data_loader.load_fixture.assert_called_once_with(fix)
|
||||
|
||||
def test__execute_tests__data_loader_returns_none__throws_invalidfixtureexception(mocker):
|
||||
# Arrange
|
||||
fix = SimpleTestFixture()
|
||||
|
||||
mocker.patch.object(fix.data_loader, 'load_fixture')
|
||||
fix.data_loader.load_fixture.return_value = None
|
||||
|
||||
# Act / Assert
|
||||
with pytest.raises(InvalidTestFixtureException):
|
||||
fix.execute_tests()
|
||||
|
||||
def test__execute_tests__data_loader_returns_empty_dictionary__returns_empty_results(mocker):
|
||||
# Arrange
|
||||
fix = SimpleTestFixture()
|
||||
|
||||
mocker.patch.object(fix.data_loader, 'load_fixture')
|
||||
fix.data_loader.load_fixture.return_value = {}
|
||||
|
||||
# Act
|
||||
test_exec_results = fix.execute_tests()
|
||||
|
||||
# Assert
|
||||
assert len(test_exec_results.test_results.results) == 0
|
||||
|
||||
def test__execute_tests__before_all_set_and_data_loader_returns_empty_dictionary__does_not_call_before_all(mocker):
|
||||
# Arrange
|
||||
fix = SimpleTestFixture()
|
||||
|
||||
mocker.patch.object(fix.data_loader, 'load_fixture')
|
||||
fix.data_loader.load_fixture.return_value = {}
|
||||
fix.before_all = lambda self: 1 == 1
|
||||
|
||||
mocker.patch.object(fix, 'before_all')
|
||||
|
||||
# Act
|
||||
test_results = fix.execute_tests()
|
||||
|
||||
# Assert
|
||||
fix.before_all.assert_not_called()
|
||||
|
||||
def test__execute_tests__before_all_none_and_data_loader_returns_empty_dictionary__does_not_call_before_all(mocker):
|
||||
# Arrange
|
||||
fix = SimpleTestFixture()
|
||||
|
||||
mocker.patch.object(fix.data_loader, 'load_fixture')
|
||||
fix.data_loader.load_fixture.return_value = {}
|
||||
fix.before_all = None
|
||||
|
||||
mocker.patch.object(fix, 'before_all')
|
||||
|
||||
# Act
|
||||
test_results = fix.execute_tests()
|
||||
|
||||
# Assert
|
||||
fix.before_all.assert_not_called()
|
||||
|
||||
def test__execute_tests__before_all_set_and_data_loader_returns_dictionary_with_testcases__calls_before_all(mocker):
|
||||
# Arrange
|
||||
fix = SimpleTestFixture()
|
||||
|
||||
mocker.patch.object(fix.data_loader, 'load_fixture')
|
||||
|
||||
tc = __get_test_case("TestName", fix.run_test, fix.assertion_test)
|
||||
fix.before_all = lambda self: 1 == 1
|
||||
mocker.patch.object(fix, 'before_all')
|
||||
|
||||
test_case_dict = {
|
||||
"test": tc
|
||||
}
|
||||
|
||||
fix.data_loader.load_fixture.return_value = test_case_dict
|
||||
|
||||
# Act
|
||||
fix.execute_tests()
|
||||
|
||||
# Assert
|
||||
fix.before_all.assert_called_once_with()
|
||||
|
||||
def test__execute_tests__after_all_set_and_data_loader_returns_empty_dictionary__does_not_call_after_all(mocker):
|
||||
# Arrange
|
||||
fix = SimpleTestFixture()
|
||||
|
||||
mocker.patch.object(fix.data_loader, 'load_fixture')
|
||||
fix.data_loader.load_fixture.return_value = {}
|
||||
fix.after_all = lambda self: 1 == 1
|
||||
|
||||
mocker.patch.object(fix, 'after_all')
|
||||
|
||||
# Act
|
||||
test_results = fix.execute_tests()
|
||||
|
||||
# Assert
|
||||
fix.after_all.assert_not_called()
|
||||
|
||||
def test__execute_tests__after_all_none_and_data_loader_returns_empty_dictionary__does_not_call_after_all(mocker):
|
||||
# Arrange
|
||||
fix = SimpleTestFixture()
|
||||
|
||||
mocker.patch.object(fix.data_loader, 'load_fixture')
|
||||
fix.data_loader.load_fixture.return_value = {}
|
||||
fix.after_all = None
|
||||
|
||||
mocker.patch.object(fix, 'after_all')
|
||||
|
||||
# Act
|
||||
test_results = fix.execute_tests()
|
||||
|
||||
# Assert
|
||||
fix.after_all.assert_not_called()
|
||||
|
||||
def test__execute_tests__after_all_set_and_data_loader_returns_dictionary_with_testcases__calls_after_all(mocker):
|
||||
# Arrange
|
||||
fix = SimpleTestFixture()
|
||||
|
||||
mocker.patch.object(fix.data_loader, 'load_fixture')
|
||||
|
||||
tc = __get_test_case("TestName", fix.run_test, fix.assertion_test)
|
||||
fix.after_all = lambda self: 1 == 1
|
||||
mocker.patch.object(fix, 'after_all')
|
||||
|
||||
test_case_dict = {
|
||||
"test": tc
|
||||
}
|
||||
|
||||
fix.data_loader.load_fixture.return_value = test_case_dict
|
||||
|
||||
# Act
|
||||
fix.execute_tests()
|
||||
|
||||
# Assert
|
||||
fix.after_all.assert_called_once_with()
|
||||
|
||||
def test__execute_tests__data_loader_returns_dictionary_with_testcases__iterates_over_dictionary_and_calls_execute(mocker):
|
||||
# Arrange
|
||||
fix = SimpleTestFixture()
|
||||
mocker.patch.object(fix.data_loader, 'load_fixture')
|
||||
|
||||
tc = __get_test_case("TestName", fix.run_test, fix.assertion_test)
|
||||
mocker.patch.object(tc, 'execute_test')
|
||||
tc.execute_test.return_value = TestResult("TestName", True, 1, [])
|
||||
tc1 = __get_test_case("TestName", fix.run_test, fix.assertion_test)
|
||||
mocker.patch.object(tc1, 'execute_test')
|
||||
tc1.execute_test.return_value = TestResult("TestName", True, 1, [])
|
||||
|
||||
test_case_dict = {
|
||||
"test": tc,
|
||||
"test1": tc1
|
||||
}
|
||||
|
||||
    fix.data_loader.load_fixture.return_value = test_case_dict

    # Act
    fix.execute_tests()

    # Assert
    tc.execute_test.assert_called_once_with()
    tc1.execute_test.assert_called_once_with()

def test__execute_tests__returns_test_result__calls_append_on_testresults(mocker):
    # Arrange
    fix = SimpleTestFixture()
    mocker.patch.object(fix.test_results, 'append')

    tc = __get_test_case("TestName", lambda: 1 == 1, lambda: 1 == 1)

    test_case_dict = {
        "test": tc
    }
    mocker.patch.object(fix.data_loader, 'load_fixture')
    fix.data_loader.load_fixture.return_value = test_case_dict

    # Act
    result = fix.execute_tests()

    # Assert
    fix.test_results.append.assert_called_once_with(mocker.ANY)

def test__execute_tests__two_test_cases__returns_test_results_with_2_test_results(mocker):
    # Arrange
    fix = SimpleTestFixture()

    tc = __get_test_case("TestName", lambda: 1 == 1, lambda: 1 == 1)
    tc1 = __get_test_case("TestName1", lambda: 1 == 1, lambda: 1 == 1)

    test_case_dict = {
        "TestName": tc,
        "TestName1": tc1
    }

    mocker.patch.object(fix.data_loader, 'load_fixture')
    fix.data_loader.load_fixture.return_value = test_case_dict

    # Act
    result = fix.execute_tests()

    # Assert
    assert len(result.test_results.results) == 2

def test__run_test_method__has_list_tag_decorator__list_set_on_method():
    # Arrange
    class Wrapper(NutterFixture):
        tag_list = ["tag1", "tag2"]
        @tag(tag_list)
        def run_test(self):
            lambda: 1 == 1

    test_name = "test"
    tag_list = ["tag1", "tag2"]

    test_fixture = TestNutterFixtureBuilder() \
        .with_name("MyClass") \
        .with_assertion(test_name) \
        .with_run(test_name, Wrapper.run_test) \
        .build()

    # Act / Assert
    assert tag_list == test_fixture.run_test.tag

def test__run_test_method__has_str_tag_decorator__str_set_on_method():
    # Arrange
    class Wrapper(NutterFixture):
        tag_str = "mytag"
        @tag(tag_str)
        def run_test(self):
            lambda: 1 == 1

    test_name = "test"
    test_fixture = TestNutterFixtureBuilder() \
        .with_name("MyClass") \
        .with_assertion(test_name) \
        .with_run(test_name, Wrapper.run_test) \
        .build()

    # Act / Assert
    assert "mytag" == test_fixture.run_test.tag

def test__run_test_method__has_tag_decorator_not_list__raises_value_error():
    # Arrange
    with pytest.raises(ValueError):
        class Wrapper(NutterFixture):
            tag_invalid = {}
            @tag(tag_invalid)
            def run_test(self):
                lambda: 1 == 1

def test__run_test_method__has_invalid_tag_decorator_none__raises_value_error():
    # Arrange
    with pytest.raises(ValueError):
        class Wrapper(NutterFixture):
            tag_invalid = None
            @tag(tag_invalid)
            def run_test(self):
                lambda: 1 == 1

def test__non_run_test_method__valid_tag_on_non_run_method__raises_value_error():
    # Arrange
    with pytest.raises(ValueError):
        class Wrapper(NutterFixture):
            tag_valid = "mytag"
            @tag(tag_valid)
            def assertion_test(self):
                lambda: 1 == 1

def __get_test_case(name, setrun, setassert):
    tc = TestCase(name)
    tc.set_run(setrun)
    tc.set_assertion(setassert)

    return tc

def test__run_test_method__has_invalid_tag_decorator_not_list_or_str_using_class_not_builder__raises_value_error():
    # Arrange
    simple_test_fixture = SimpleTestFixture()

    # Act / Assert
    with pytest.raises(ValueError):
        simple_test_fixture.run_test_with_invalid_decorator()

def test__run_test_method__has_valid_tag_decorator_in_class__tag_set_on_method():
    # Arrange
    simple_test_fixture = SimpleTestFixture()

    # Act / Assert
    assert "mytag" == simple_test_fixture.run_test_with_valid_decorator.tag

class SimpleTestFixture(NutterFixture):

    def before_test(self):
        pass

    def run_test(self):
        pass

    def assertion_test(self):
        assert 1 == 1

    def after_test(self):
        pass

    @tag("mytag")
    def run_test_with_valid_decorator(self):
        pass

    @tag
    def run_test_with_invalid_decorator(self):
        pass

@ -0,0 +1,169 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

import sys
import pytest
from runtime.nutterfixture import NutterFixture, tag
from common.testresult import TestResult
from tests.runtime.testnutterfixturebuilder import TestNutterFixtureBuilder


def test__execute_tests__two_valid_cases__returns_test_results_with_2_passed_test_results():
    # Arrange
    test_name_1 = "fred"
    test_name_2 = "hank"
    test_fixture = TestNutterFixtureBuilder() \
        .with_name("MyClass") \
        .with_before(test_name_1) \
        .with_before(test_name_2) \
        .with_run(test_name_1) \
        .with_run(test_name_2) \
        .with_assertion(test_name_1) \
        .with_assertion(test_name_2) \
        .with_after(test_name_1) \
        .with_after(test_name_2) \
        .build()

    expected_result1 = TestResult(test_name_1, True, 1, [])
    expected_result2 = TestResult(test_name_2, True, 1, [])

    # Act
    result = test_fixture().execute_tests().test_results

    # Assert
    assert len(result.results) == 2
    assert __item_in_list_equalto(result.results, expected_result1)
    assert __item_in_list_equalto(result.results, expected_result2)

def test__execute_tests__one_valid_one_invalid__returns_correct_test_results():
    # Arrange
    test_name_1 = "shouldpass"
    test_name_2 = "shouldfail"
    fail_func = AssertionHelper().assertion_fails

    test_fixture = TestNutterFixtureBuilder() \
        .with_name("MyClass") \
        .with_before(test_name_1) \
        .with_before(test_name_2) \
        .with_run(test_name_1) \
        .with_run(test_name_2) \
        .with_assertion(test_name_1) \
        .with_assertion(test_name_2, fail_func) \
        .with_after(test_name_1) \
        .with_after(test_name_2) \
        .build()

    expected_result1 = TestResult(test_name_1, True, 1, [])
    expected_result2 = TestResult(test_name_2, False, 1, [], AssertionError("assert 1 == 2"))

    # Act
    result = test_fixture().execute_tests().test_results

    # Assert
    assert len(result.results) == 2
    assert __item_in_list_equalto(result.results, expected_result1)
    assert __item_in_list_equalto(result.results, expected_result2)

def test__execute_tests__one_run_throws__returns_one_failed_testresult():
    # Arrange
    test_name_1 = "shouldthrow"
    fail_func = AssertionHelper().function_throws

    test_fixture = TestNutterFixtureBuilder() \
        .with_name("MyClass") \
        .with_before(test_name_1) \
        .with_run(test_name_1, fail_func) \
        .with_assertion(test_name_1) \
        .with_after(test_name_1) \
        .build()

    expected_result1 = TestResult(test_name_1, False, 1, [], ValueError())

    # Act
    result = test_fixture().execute_tests().test_results

    # Assert
    assert len(result.results) == 1
    assert __item_in_list_equalto(result.results, expected_result1)

def test__execute_tests__one_has_tags_one_does_not__returns_tags_in_testresult():
    # Arrange
    class Wrapper(NutterFixture):
        tag_list = ["taga", "tagb"]
        @tag(tag_list)
        def run_test_name(self):
            lambda: 1 == 1

    test_name_1 = "test_name"
    test_name_2 = "test_name2"

    test_fixture = TestNutterFixtureBuilder() \
        .with_name(test_name_1) \
        .with_run(test_name_1, Wrapper.run_test_name) \
        .with_assertion(test_name_1) \
        .with_after(test_name_1) \
        .with_name(test_name_2) \
        .with_run(test_name_2) \
        .with_assertion(test_name_2) \
        .with_after(test_name_2) \
        .build()

    # Act
    result = test_fixture().execute_tests().test_results

    # Assert
    assert len(result.results) == 2
    for res in result.results:
        if res.test_name == test_name_1:
            assert "taga" in res.tags
            assert "tagb" in res.tags
        if res.test_name == test_name_2:
            assert len(res.tags) == 0

def test__execute_tests__one_test_case_with_all_methods__all_methods_called(mocker):
    # Arrange
    test_name_1 = "test"

    test_fixture = TestNutterFixtureBuilder() \
        .with_name("MyClass") \
        .with_before_all() \
        .with_before(test_name_1) \
        .with_run(test_name_1) \
        .with_assertion(test_name_1) \
        .with_after(test_name_1) \
        .with_after_all() \
        .build()

    mocker.patch.object(test_fixture, 'before_all')
    mocker.patch.object(test_fixture, 'before_test')
    mocker.patch.object(test_fixture, 'run_test')
    mocker.patch.object(test_fixture, 'assertion_test')
    mocker.patch.object(test_fixture, 'after_test')
    mocker.patch.object(test_fixture, 'after_all')

    # Act
    result = test_fixture().execute_tests()

    # Assert
    test_fixture.before_all.assert_called_once_with()
    test_fixture.before_test.assert_called_once_with()
    test_fixture.run_test.assert_called_once_with()
    test_fixture.assertion_test.assert_called_once_with()
    test_fixture.after_test.assert_called_once_with()
    test_fixture.after_all.assert_called_once_with()

def __item_in_list_equalto(list, expected_item):
    for item in list:
        if item == expected_item:
            return True

    return False

class AssertionHelper():
    def assertion_fails(self):
        assert 1 == 2

    def function_throws(self):
        raise ValueError()

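The tests above expect a `ValueError` for dict and `None` tags and expect valid tags to appear as a `tag` attribute on the decorated method. As a minimal sketch of how such a decorator could work (an assumption for illustration, not Nutter's actual implementation):

```python
def tag(the_tag):
    # Reject anything that is not a string or a list, mirroring the
    # ValueError cases the tests above exercise for dict/None tags.
    if not isinstance(the_tag, (str, list)):
        raise ValueError("tag must be a string or a list of strings")

    def decorator(function):
        # Attach the tag as a plain attribute so the test runner can
        # read it off the method later (e.g. run_my_test.tag).
        function.tag = the_tag
        return function
    return decorator

@tag(["taga", "tagb"])
def run_my_test():
    pass
```

Attaching metadata as a function attribute keeps the decorator transparent: the decorated method is returned unchanged apart from the extra `tag` attribute.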
@ -0,0 +1,423 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

import os
import pytest
import time
from common.testresult import TestResult
from runtime.nutterfixture import tag
from runtime.testcase import TestCase, NoTestCasesFoundError


def test__isvalid_rundoesntexist_returnsfalse():
    # Arrange
    tc = TestCase("Test Name")
    fixture = TestFixture()

    tc.set_assertion(fixture.assertion_test)

    # Act
    isvalid = tc.is_valid()

    # Assert
    assert not isvalid

def test__isvalid_assertiondoesntexist_returnsfalse():
    # Arrange
    tc = TestCase("Test Name")
    fixture = TestFixture()

    tc.set_run(fixture.run_test)

    # Act
    isvalid = tc.is_valid()

    # Assert
    assert not isvalid

def test__isvalid_runandassertionexist_returnstrue():
    # Arrange
    tc = TestCase("Test Name")
    fixture = TestFixture()

    tc.set_assertion(fixture.assertion_test)
    tc.set_run(fixture.run_test)

    # Act
    isvalid = tc.is_valid()

    # Assert
    assert isvalid

def test__getinvalidmessage_rundoesntexist_returnsrunerrormessage():
    # Arrange
    tc = TestCase("Test Name")
    fixture = TestFixture()

    tc.set_assertion(fixture.assertion_test)

    expected_message = tc.ERROR_MESSAGE_RUN_MISSING

    # Act
    invalid_message = tc.get_invalid_message()

    # Assert
    assert expected_message == invalid_message

def test__getinvalidmessage_assertiondoesntexist_returnsassertionerrormessage():
    # Arrange
    tc = TestCase("Test Name")
    fixture = TestFixture()

    tc.set_run(fixture.run_test)

    expected_message = tc.ERROR_MESSAGE_ASSERTION_MISSING

    # Act
    invalid_message = tc.get_invalid_message()

    # Assert
    assert expected_message == invalid_message

def test__getinvalidmessage_runandassertiondontexist_returnsrunandassertionerrormessage():
    # Arrange
    tc = TestCase("Test Name")
    fixture = TestFixture()

    # Act
    invalid_message = tc.get_invalid_message()

    # Assert
    assertion_message_exists = tc.ERROR_MESSAGE_ASSERTION_MISSING in invalid_message
    run_message_exists = tc.ERROR_MESSAGE_RUN_MISSING in invalid_message

    assert assertion_message_exists
    assert run_message_exists

def test__set_run__function_passed__sets_run_function():
    # Arrange
    tc = TestCase("Test Name")
    fixture = TestFixture()

    # Act
    tc.set_run(fixture.run_test)

    # Assert
    assert tc.run == fixture.run_test

def test__set_assertion__function_passed__sets_assertion_function():
    # Arrange
    tc = TestCase("Test Name")
    func = lambda: 1 == 1

    # Act
    tc.set_assertion(func)

    # Assert
    assert tc.assertion == func

def test__set_before__function_passed__sets_before_function():
    # Arrange
    tc = TestCase("Test Name")
    func = lambda: 1 == 1

    # Act
    tc.set_before(func)

    # Assert
    assert tc.before == func

def test__set_after__function_passed__sets_after_function():
    # Arrange
    tc = TestCase("Test Name")
    func = lambda: 1 == 1

    # Act
    tc.set_after(func)

    # Assert
    assert tc.after == func

def test__execute_test__before_set__calls_before(mocker):
    # Arrange
    tc = TestCase("TestName")

    tc.set_before(lambda: 1 == 1)
    tc.set_run(lambda: 1 == 1)
    tc.set_assertion(lambda: 1 == 1)
    mocker.patch.object(tc, 'before')

    # Act
    test_result = tc.execute_test()

    # Assert
    tc.before.assert_called_once_with()

def test__execute_test__before_not_set__does_not_call_before(mocker):
    # Arrange
    tc = TestCase("TestName")

    tc.set_run(lambda: 1 == 1)
    tc.set_assertion(lambda: 1 == 1)
    mocker.patch.object(tc, 'before')

    # Act
    test_result = tc.execute_test()

    # Assert
    tc.before.assert_not_called()

def test__execute_test__after_set__calls_after(mocker):
    # Arrange
    tc = TestCase("TestName")

    tc.set_after(lambda: 1 == 1)
    tc.set_run(lambda: 1 == 1)
    tc.set_assertion(lambda: 1 == 1)
    mocker.patch.object(tc, 'after')

    # Act
    test_result = tc.execute_test()

    # Assert
    tc.after.assert_called_once_with()

def test__execute_test__after_not_set__does_not_call_after(mocker):
    # Arrange
    tc = TestCase("TestName")

    tc.set_run(lambda: 1 == 1)
    tc.set_assertion(lambda: 1 == 1)
    mocker.patch.object(tc, 'after')

    # Act
    test_result = tc.execute_test()

    # Assert
    tc.after.assert_not_called()

def test__execute_test__method_in_assert_doesnt_throw__returns_pass_testresult(mocker):
    # Arrange
    tc = TestCase("TestName")
    fixture = TestFixture()

    tc.set_run(lambda: 1 == 1)
    tc.set_assertion(lambda: 1 == 1)

    # Act
    test_result = tc.execute_test()

    # Assert
    assert test_result == TestResult("TestName", True, None, [], None)

def test__execute_test__is_valid_equals_false__returns_fail_testresult():
    # Arrange
    tc = TestCase("TestName")
    no_test_cases_error = NoTestCasesFoundError('Both a run and an assertion are required for every test')

    # (Note - no set_assertion - so invalid)
    tc.set_run(lambda: 1 == 1)

    # Act
    test_result = tc.execute_test()

    # Assert
    assert test_result == TestResult("TestName", False, 1, [], no_test_cases_error)

def test__execute_test__method_in_assert_throws__returns_fail_testresult():
    # Arrange
    tc = TestCase("TestName")
    assertion_error = AssertionError('bad assert')

    lambda_that_throws = lambda: (_ for _ in ()).throw(assertion_error)

    tc.set_run(lambda: 1 == 1)
    tc.set_assertion(lambda_that_throws)

    # Act
    test_result = tc.execute_test()

    # Assert
    assert test_result == TestResult("TestName", False, 1, [], assertion_error)

def test__execute_test__method_in_run_throws__returns_fail_testresult():
    # Arrange
    tc = TestCase("TestName")
    not_implemented_exception = NotImplementedError("Whatever was not implemented")

    lambda_that_throws = lambda: (_ for _ in ()).throw(not_implemented_exception)

    tc.set_run(lambda_that_throws)
    tc.set_assertion(lambda: 1 == 1)

    # Act
    test_result = tc.execute_test()

    # Assert
    assert test_result == TestResult("TestName", False, 1, [], not_implemented_exception)

def test__execute_test__method_in_before_throws__returns_fail_testresult():
    # Arrange
    tc = TestCase("TestName")
    not_implemented_exception = NotImplementedError("Whatever was not implemented")

    lambda_that_throws = lambda: (_ for _ in ()).throw(not_implemented_exception)

    tc.set_before(lambda_that_throws)
    tc.set_run(lambda: 1 == 1)
    tc.set_assertion(lambda: 1 == 1)

    # Act
    test_result = tc.execute_test()

    # Assert
    assert test_result == TestResult("TestName", False, 1, [], not_implemented_exception)

def test__execute_test__method_in_after_throws__returns_fail_testresult():
    # Arrange
    tc = TestCase("TestName")
    not_implemented_exception = NotImplementedError("Whatever was not implemented")

    lambda_that_throws = lambda: (_ for _ in ()).throw(not_implemented_exception)

    tc.set_after(lambda_that_throws)
    tc.set_before(lambda: 1 == 1)
    tc.set_run(lambda: 1 == 1)
    tc.set_assertion(lambda: 1 == 1)

    # Act
    test_result = tc.execute_test()

    # Assert
    assert test_result == TestResult("TestName", False, 1, [], not_implemented_exception)

def test__execute_test__method_throws__returns_stacktrace_in_testresult():
    # Arrange
    tc = TestCase("TestName")
    not_implemented_exception = NotImplementedError("Whatever was not implemented")

    lambda_that_throws = lambda: (_ for _ in ()).throw(not_implemented_exception)

    tc.set_run(lambda_that_throws)
    tc.set_assertion(lambda: 1 == 1)

    # Act
    test_result = tc.execute_test()

    # Assert
    assert test_result.stack_trace

def test__execute_test__no_constraints__sets_execution_time():
    # Arrange
    tc = TestCase("TestName")
    tc.set_run(lambda: 1 == 1)
    tc.set_assertion(lambda: 1 == 1)

    # Act
    test_result = tc.execute_test()

    # Assert
    assert test_result.execution_time > 0

def test__run_method__no_tags__tags_list_empty(mocker):
    # Arrange
    tc = TestCase("TestName")
    tc.set_run(lambda: 1 == 1)
    tc.set_assertion(lambda: 1 == 1)

    # Act
    test_result = tc.execute_test()

    # Assert
    assert len(tc.tags) == 0

def test__run_method__string_tag__tags_list_contains_string(mocker):
    # Arrange
    strtag = "testtag"
    @tag(strtag)
    def run_TestName():
        lambda: 1 == 1

    tc = TestCase("TestName")
    tc.set_run(run_TestName)
    tc.set_assertion(lambda: 1 == 1)

    # Act
    test_result = tc.execute_test()

    # Assert
    assert [strtag] == tc.tags

def test__run_method__list_tag__tags_list_contains_list(mocker):
    # Arrange
    tag_list = ["taga", "tagb"]
    @tag(tag_list)
    def run_TestName():
        lambda: 1 == 1

    tc = TestCase("TestName")
    tc.set_run(run_TestName)
    tc.set_assertion(lambda: 1 == 1)

    # Act
    test_result = tc.execute_test()

    # Assert
    assert tc.tags == tag_list

def test__execute__run_has_tag__test_results_returns_tags():
    # Arrange
    tag_list = ["taga", "tagb"]
    @tag(tag_list)
    def run_TestName():
        lambda: 1 == 1

    tc = TestCase("TestName")
    tc.set_run(run_TestName)
    tc.set_assertion(lambda: 1 == 1)

    # Act
    test_result = tc.execute_test()

    # Assert
    assert test_result.tags == tag_list

def test__execute__run_has_tag_and_execute_fails__test_results_returns_tags():
    # Arrange
    tag_list = ["taga", "tagb"]
    @tag(tag_list)
    def run_TestName():
        pass

    def assertion_TestName():
        assert 1 == 2

    tc = TestCase("TestName")
    tc.set_run(run_TestName)
    tc.set_assertion(assertion_TestName)

    # Act
    test_result = tc.execute_test()

    # Assert
    assert test_result.tags == tag_list

class TestFixture():

    # def before_all(self):
    #     return True
    def before_test(self):
        return True

    def run_test(self):
        return True

    def throw(self):
        raise AssertionError("Method not implemented")

    def assertion_test(self):
        return True

    def after_test(self):
        return True
    # def after_all(self):
    #     return True

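Several tests above raise an exception from inside a lambda via `(_ for _ in ()).throw(exc)`. This works around the fact that `raise` is a statement and therefore cannot appear in a lambda expression; calling `.throw()` on a fresh, empty generator raises the given exception at the call site. A standalone sketch:

```python
# `raise` is a statement, so it cannot appear directly in a lambda.
# Calling .throw() on an empty generator raises the exception instead.
error = NotImplementedError("Whatever was not implemented")
lambda_that_throws = lambda: (_ for _ in ()).throw(error)

try:
    lambda_that_throws()
    caught = None
except NotImplementedError as ex:
    caught = ex  # the exact exception object passed to .throw()
```

In ordinary code a small named helper function with a `raise` statement is clearer; the generator trick is handy in tests precisely because the whole throwing callable fits on one line.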
@ -0,0 +1,52 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
"""

from runtime.nutterfixture import NutterFixture


class TestNutterFixtureBuilder():
    def __init__(self):
        self.attributes = {}
        self.class_name = "ImplementingClass"

    def with_name(self, class_name):
        self.class_name = class_name
        return self

    def with_before_all(self, func=lambda self: 1 == 1):
        self.attributes.update({"before_all": func})
        return self

    def with_before(self, test_name, func=lambda self: 1 == 1):
        full_test_name = "before_" + test_name
        self.attributes.update({full_test_name: func})
        return self

    def with_assertion(self, test_name, func=lambda self: 1 == 1):
        full_test_name = "assertion_" + test_name
        self.attributes.update({full_test_name: func})
        return self

    def with_run(self, test_name, func=lambda self: 1 == 1):
        full_test_name = "run_" + test_name
        self.attributes.update({full_test_name: func})
        return self

    def with_after(self, test_name, func=lambda self: 1 == 1):
        full_test_name = "after_" + test_name
        self.attributes.update({full_test_name: func})
        return self

    def with_after_all(self, func=lambda self: 1 == 1):
        self.attributes.update({"after_all": func})
        return self

    def with_test(self, test_name, func=lambda self: 1 == 1):
        full_test_name = test_name
        self.attributes.update({full_test_name: func})
        return self

    def build(self):
        new_class = type(self.class_name, (NutterFixture,), self.attributes)
        return new_class

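The builder's `build()` composes a fixture class at runtime with the three-argument form of `type(name, bases, attrs)`: each accumulated entry in `self.attributes` becomes a method on the new class. A self-contained sketch of the same technique, using `object` as the base instead of `NutterFixture` so it runs on its own:

```python
# type(name, bases, attrs) creates a class dynamically: the dict entries
# become class attributes, so plain lambdas turn into bound methods.
attributes = {
    "run_example": lambda self: 1 == 1,
    "assertion_example": lambda self: 1 == 1,
}
Fixture = type("MyClass", (object,), attributes)

instance = Fixture()
```

This is why the builder's default functions take a `self` parameter: once placed in the class dict, they are looked up as ordinary instance methods.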
@ -0,0 +1,5 @@
[flake8]
ignore = E226,E302,E41,E721
max-line-length = 88
exclude = tests/*
max-complexity = 10