Responsible AI Toolbox

Responsible AI is an approach to assessing, developing, and deploying AI systems in a safe, trustworthy, and ethical manner, enabling responsible decisions and actions.

Responsible AI Toolbox is a suite of model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly and to take better data-driven actions.

(Figure: Responsible AI Toolbox overview)

The Toolbox consists of four repositories:

 

Responsible-AI-Toolbox Repository (here): This repository contains four visualization widgets for model assessment and decision making:
1. Responsible AI dashboard, a single pane of glass bringing together several mature Responsible AI tools from the toolbox for holistic, responsible assessment and debugging of models and for making informed business decisions. With this dashboard, you can identify model errors, diagnose why those errors are happening, and mitigate them. Moreover, the causal decision-making capabilities provide actionable insights to your stakeholders and customers.
2. Error Analysis dashboard, for identifying model errors and discovering cohorts of data for which the model underperforms.
3. Interpretability dashboard, for understanding model predictions. This dashboard is powered by InterpretML.
4. Fairness dashboard, for understanding a model's fairness issues using various group-fairness metrics across sensitive features and cohorts. This dashboard is powered by Fairlearn.

Responsible-AI-Toolbox-Mitigations Repository: The Responsible AI Mitigations Library helps AI practitioners explore different measurements and mitigation steps that may be most appropriate when the model underperforms for a given data cohort. The library currently has three modules:
1. DataProcessing, which offers mitigation techniques for improving model performance for specific cohorts.
2. DataBalanceAnalysis, which provides metrics for diagnosing errors that originate from data imbalance in either class labels or feature values.
3. Cohort, which provides classes for handling and managing cohorts, allowing the creation of custom pipelines for each cohort through an easy and intuitive interface. The module also provides techniques for learning different decoupled estimators (models) for different cohorts and combining them in a way that optimizes different definitions of group fairness.

Responsible-AI-Tracker Repository: Responsible AI Toolbox Tracker is a JupyterLab extension for managing, tracking, and comparing results of machine learning experiments for model improvement. Using this extension, users can view models, code, and visualization artifacts within the same framework, thereby enabling fast model iteration and evaluation. Main functionalities include:
1. Managing and linking model improvement artifacts
2. Disaggregated model evaluation and comparisons
3. Integration with the Responsible AI Mitigations library
4. Integration with mlflow

Responsible-AI-Toolbox-GenBit Repository: The Responsible AI Gender Bias (GenBit) Library helps AI practitioners measure gender bias in Natural Language Processing (NLP) datasets. The main goal of GenBit is to analyze your text corpora and compute metrics that give insights into the gender bias present in a corpus.

Introducing Responsible AI dashboard

Responsible AI dashboard is a single pane of glass, enabling you to easily flow through different stages of model debugging and decision making. This customizable experience can be taken in a multitude of directions: analyzing the model or data holistically, conducting a deep dive or comparison on cohorts of interest, explaining and perturbing model predictions for individual instances, and informing users on business decisions and actions.

(Figure: Responsible AI dashboard)

To achieve these capabilities, the dashboard integrates ideas and technologies from several open-source toolkits in the following areas:

  • Error Analysis powered by Error Analysis, which identifies cohorts of data with a higher error rate than the overall benchmark. These discrepancies might occur when the system or model underperforms for specific demographic groups or for input conditions observed infrequently in the training data.

  • Fairness Assessment powered by Fairlearn, which identifies which groups of people may be disproportionately negatively impacted by an AI system and in what ways.

  • Model Interpretability powered by InterpretML, which explains blackbox models, helping users understand their model's global behavior, or the reasons behind individual predictions.

  • Counterfactual Analysis powered by DiCE, which shows feature-perturbed versions of the same datapoint that would have received a different prediction outcome, e.g., Taylor's loan was rejected by the model, but they would have received it had their income been higher by $10,000.

  • Causal Analysis powered by EconML, which focuses on answering What If-style questions to apply data-driven decision making: how would revenue be affected if a corporation pursued a new pricing strategy? Would a new medication improve a patient's condition, all else being equal?

  • Data Balance powered by Responsible AI, which helps users gain an overall understanding of their data, identify features receiving the positive outcome more than others, and visualize feature distributions.

Responsible AI dashboard is designed to achieve the following goals:

  • To help further accelerate engineering processes in machine learning by enabling practitioners to design customizable workflows and tailor Responsible AI dashboards that best fit their model assessment and data-driven decision-making scenarios.
  • To help model developers create end-to-end, fluid debugging experiences and navigate seamlessly through error identification and diagnosis by using interactive visualizations that identify errors, inspect the data, generate global and local model explanations, and potentially inspect problematic examples.
  • To help business stakeholders explore causal relationships in the data and make informed decisions in the real world.

This repository contains Jupyter notebooks with examples showcasing how to use this widget. Get started here.

Installation

Use the following pip command to install the Responsible AI Toolbox.

If running in Jupyter, please make sure to restart the Jupyter kernel after installing.

pip install raiwidgets
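
As a quick start, the sketch below shows one way to build insights and launch the dashboard from Python. It assumes a scikit-learn-style classifier and pandas DataFrames; `model`, `train_df`, `test_df`, and the `"income"` target column are placeholders, and the exact arguments may vary by release:

```python
# Minimal sketch: model, train_df, test_df, and "income" are placeholders.
from raiwidgets import ResponsibleAIDashboard
from responsibleai import RAIInsights

# Wrap the trained model and data; the DataFrames include the target column.
rai_insights = RAIInsights(
    model, train_df, test_df,
    target_column="income",
    task_type="classification",
)

# Opt in to the components you want, then compute their insights.
rai_insights.explainer.add()       # model interpretability
rai_insights.error_analysis.add()  # error analysis tree map and heat map
rai_insights.compute()

# Serve the dashboard locally; in Jupyter it renders inline.
ResponsibleAIDashboard(rai_insights)
```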

Responsible AI dashboard Customization

The Responsible AI Toolbox's strength lies in its customizability. It empowers users to design tailored, end-to-end model debugging and decision-making workflows that address their particular needs. Need some inspiration? Here are some examples of how Toolbox components can be put together to analyze scenarios in different ways:

Please note that model overview (including fairness analysis) and data explorer components are activated by default!  

  • Model Overview -> Error Analysis -> Data Explorer: to identify model errors and diagnose them by understanding the underlying data distribution.
  • Model Overview -> Fairness Assessment -> Data Explorer: to identify model fairness issues and diagnose them by understanding the underlying data distribution.
  • Model Overview -> Error Analysis -> Counterfactual Analysis and What-If: to diagnose errors in individual instances with counterfactual analysis (the minimum change that leads to a different model prediction).
  • Model Overview -> Data Explorer -> Data Balance: to understand the root cause of errors and fairness issues introduced via data imbalances or lack of representation of a particular data cohort.
  • Model Overview -> Interpretability: to diagnose model errors through understanding how the model has made its predictions.
  • Data Explorer -> Causal Inference: to distinguish between correlations and causations in the data, or to decide the best treatments to apply to see a positive outcome.
  • Interpretability -> Causal Inference: to learn whether the factors the model has used for decision making have any causal effect on the real-world outcome.
  • Data Explorer -> Counterfactual Analysis and What-If: to address customer questions about what they can do next time to get a different outcome from an AI system.
  • Data Explorer -> Data Balance: to gain an overall understanding of the data, identify features receiving the positive outcome more than others, and visualize feature distributions.
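
As a sketch of how a chosen flow maps to code, each dashboard component is opted into before calling compute(). Continuing the hypothetical `rai_insights` object from the installation sketch above (argument values are illustrative, not prescriptive):

```python
# Counterfactual Analysis and What-If: request 10 counterfactuals per
# instance that flip the predicted class (illustrative values).
rai_insights.counterfactual.add(total_CFs=10, desired_class="opposite")

# Causal Inference: estimate the effect of a hypothetical treatment feature.
rai_insights.causal.add(treatment_features=["hours_per_week"])

rai_insights.compute()
```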

Model Debugging Examples:

Responsible Decision Making Examples:

Supported Models

The Responsible AI Toolbox API supports models trained on datasets in Python numpy.ndarray, pandas.DataFrame, iml.datatypes.DenseData, or scipy.sparse.csr_matrix format.

The explanation functions of Interpret-Community accept both models and pipelines as input, as long as the model or pipeline implements a predict or predict_proba function that conforms to the scikit-learn convention. If not, you can wrap your model's prediction function in a wrapper that transforms the output into the supported format (predict or predict_proba of scikit-learn) and pass that wrapper to your selected interpretability techniques.
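
For illustration, here is a minimal sketch of such a wrapper. It assumes a hypothetical model whose `forecast` method returns per-class probabilities; `forecast` and the class name are illustrative, not part of the toolbox:

```python
import numpy as np

class SklearnStyleWrapper:
    """Expose predict and predict_proba in the scikit-learn convention.

    `wrapped_model.forecast` is a hypothetical method returning per-class
    probabilities; substitute your model's own prediction call.
    """

    def __init__(self, wrapped_model):
        self.wrapped_model = wrapped_model

    def predict_proba(self, X):
        # Return an (n_samples, n_classes) array of class probabilities.
        return np.asarray(self.wrapped_model.forecast(X))

    def predict(self, X):
        # Return the index of the most likely class for each sample.
        return np.argmax(self.predict_proba(X), axis=1)
```

An instance of SklearnStyleWrapper(my_model) can then be passed wherever a scikit-learn-style estimator is expected, such as to the interpretability techniques above.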

If a pipeline script is provided, the explanation function assumes that the running pipeline script returns a prediction. The repository also supports models trained via PyTorch, TensorFlow, and Keras deep learning frameworks.

Other Use Cases

Tools within the Responsible AI Toolbox can also be used with AI models offered as APIs by providers such as Azure Cognitive Services. For example use cases, see the folders below:

Maintainers