November 2021 updates
This commit is contained in:
Parent
66198be805
Commit
a5e18ec896

@@ -334,3 +334,6 @@ ASALocalRun/

+# Jupyter Notebook checkpoints folder
+.ipynb_checkpoints/
+
+# VS Code folder
+.vscode/

@@ -9,7 +9,7 @@ Before the hands-on lab setup guide
</div>

<div class="MCWHeader3">
-January 2021
+November 2021
</div>

Information in this document, including URL and other Internet Web site references, is subject to change without notice. Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, e-mail address, logo, person, place or event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

@@ -9,7 +9,7 @@ Hands-on lab step-by-step
</div>

<div class="MCWHeader3">
-January 2021
+November 2021
</div>

Information in this document, including URL and other Internet Web site references, is subject to change without notice. Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, e-mail address, logo, person, place or event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

@@ -1,183 +0,0 @@
-{
-"cells": [
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"# Environment Setup"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"### Install required libraries\n",
-"\n",
-"Run each cell one at a time to install the required libraries."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"!pip install --upgrade pip"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Ignore errors related to compatibility. As long as you see the message at the end: **Successfully installed numpy-1.18.5**"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"!pip install numpy==1.18.5"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"!pip install xlrd==1.2.0"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Ignore errors related to compatibility. As long as you see the message at the end: **Successfully installed pandas-1.0.4**"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"!pip install pandas==1.0.4"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Ignore errors related to compatibility. As long as you see the message at the end: **Successfully installed scikit-learn-0.23.1**"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"!pip install scikit-learn==0.23.1"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Ignore errors related to tensorflow-gpu compatibility. **Installing tensorflow can take more than 5 minutes. Please be patient if it appears to be hung or stuck for few minutes during installation.**"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"!pip install tensorflow==2.2.0"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Ignore errors related to compatibility. As long as you see the message at the end: **Successfully installed joblib-0.15.1**"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"!pip install joblib==0.15.1"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"!pip install nltk==3.4.5"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"!pip install gensim==3.8.3"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Ignore errors related to compatibility. As long as you see the message at the end: **Successfully installed onnxruntime-1.3.0** OR **Requirement already satisfied**"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"!pip install onnxmltools==1.7.0\n",
-"!pip install keras2onnx==1.7.0\n",
-"!pip install onnxruntime==1.4.0\n",
-"!pip install tf2onnx==1.6.3"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": []
-}
-],
-"metadata": {
-"kernelspec": {
-"display_name": "Python 3.6 - AzureML",
-"language": "python",
-"name": "python3-azureml"
-},
-"language_info": {
-"codemirror_mode": {
-"name": "ipython",
-"version": 3
-},
-"file_extension": ".py",
-"mimetype": "text/x-python",
-"name": "python",
-"nbconvert_exporter": "python",
-"pygments_lexer": "ipython3",
-"version": "3.6.9"
-}
-},
-"nbformat": 4,
-"nbformat_minor": 1
-}

@@ -168,9 +168,9 @@
],
"metadata": {
"kernelspec": {
-"display_name": "Python 3.6 - AzureML",
+"display_name": "Python 3.8 - AzureML",
"language": "python",
-"name": "python3-azureml"
+"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {

@@ -182,7 +182,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.6.9"
+"version": "3.8.1"
}
},
"nbformat": 4,

@@ -256,6 +256,25 @@
"Run the following cells to test scoring using a single input row against the deployed web service."
]
},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"**Load the deployed webservice from workspace**"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"from azureml.core.webservice import Webservice\n",
+"service_name = \"summarizer\"\n",
+"webservice = Webservice(ws, service_name)\n",
+"webservice"
+]
+},
{
"cell_type": "code",
"execution_count": null,

@@ -294,7 +313,9 @@
"source": [
"## Capture the scoring URI\n",
"\n",
-"In order to call the service from a REST client, you need to acquire the scoring URI. Run the following cell to retrieve the scoring URI and take note of this value, you will need it in the last notebook."
+"In order to call the service from a REST client, you need to acquire the scoring URI. Take note of the printed scoring URI; you will need it in the last notebook.\n",
+"\n",
+"The default settings used in deploying this service result in a service that does not require authentication, so the scoring URI is the only value you need to call this service."
]
},
{

@@ -303,22 +324,23 @@
"metadata": {},
"outputs": [],
"source": [
-"webservice.scoring_uri"
+"url = webservice.scoring_uri\n",
+"print('ACI Service: Summarizer scoring URI is: {}'.format(url))"
]
},
{
-"cell_type": "markdown",
+"cell_type": "code",
+"execution_count": null,
"metadata": {},
-"source": [
-"The default settings used in deploying this service result in a service that does not require authentication, so the scoring URI is the only value you need to call this service."
-]
+"outputs": [],
+"source": []
}
],
"metadata": {
"kernelspec": {
-"display_name": "Python 3.6 - AzureML",
+"display_name": "Python 3.8 - AzureML",
"language": "python",
-"name": "python3-azureml"
+"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {

@@ -330,7 +352,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.6.9"
+"version": "3.8.1"
}
},
"nbformat": 4,
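
The new cells above print the summarizer's scoring URI but leave the final code cell empty. For reference, a minimal sketch of a call against that unauthenticated endpoint — the payload shape mirrors the `invoke_service` helper used later in the lab; `url` comes from the cell above, and the sample text is a placeholder:

```python
import json

import requests

# 'url' holds the scoring URI printed above; the service was deployed
# without authentication, so no Authorization header is required.
headers = {'Content-Type': 'application/json'}
sample_text = 'Replace this with the claim text you want summarized.'

# The deployed scoring script receives the JSON-encoded text and
# returns the generated summary in the response body.
response = requests.post(url, json.dumps(sample_text), headers=headers)
print('Summary:', response.text)
```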

@@ -53,6 +53,7 @@
"import azureml.core\n",
"from azureml.core import Experiment, Workspace, Run, Datastore\n",
"from azureml.core.dataset import Dataset\n",
+"from azureml.data.datapath import DataPath\n",
"from azureml.core.compute import ComputeTarget\n",
"from azureml.core.model import Model\n",
"from azureml.train.dnn import TensorFlow\n",

@@ -246,9 +247,18 @@
"source": [
"## Prepare the training data\n",
"\n",
-"Contoso Ltd has provided a small document containing examples of the text they receive as claim text. They have provided this in a text file with one line per sample claim.\n",
+"Contoso Ltd has provided a small document containing examples of the text they receive as claim text. They have provided this in a csv file with one line per sample claim. The csv file also labels each of the sample claims as either 0 (\"home insurance claim\") or 1 (\"auto insurance claim\")."
]
},
{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Connect to an Azure Machine Learning Workspace\n",
+"\n",
-"Run the following cell to download and examine the contents of the file. Take a moment to read the claims (you may find some of them rather comical!)."
+"The Azure Machine Learning Python SDK is required for leveraging the experimentation, model management and model deployment capabilities of Azure Machine Learning services. Run the following cell to load the Azure Machine Learning **Workspace** from the configuration saved to disk. The configuration file named `config.json` is saved in a folder named `.azureml`.\n",
+"\n",
+"**Important Note**: You might be prompted to log in via the text that is output below the cell. Be sure to navigate to the URL displayed and enter the code that is provided. Once you have entered the code, return to this notebook and wait for the output to read `Workspace configuration succeeded`."
+]
+},
{

@@ -257,29 +267,16 @@
"metadata": {},
"outputs": [],
"source": [
-"data_location = './data'\n",
-"base_data_url = 'https://databricksdemostore.blob.core.windows.net/data/05.03/'\n",
-"filesToDownload = ['claims_text.txt', 'claims_labels.txt']\n",
-"\n",
-"os.makedirs(data_location, exist_ok=True)\n",
-"\n",
-"for file in filesToDownload:\n",
-" data_url = os.path.join(base_data_url, file)\n",
-" local_file_path = os.path.join(data_location, file)\n",
-" urllib.request.urlretrieve(data_url, local_file_path)\n",
-" print('Downloaded file: ', file)\n",
-" \n",
-"claims_corpus = [claim for claim in open(os.path.join(data_location, 'claims_text.txt'))]\n",
-"claims_corpus"
+"ws = Workspace.from_config()\n",
+"print(ws)\n",
+"print('Workspace configuration succeeded')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
-"In addition to the claims sample, Contoso Ltd has also provided a document that labels each of the sample claims provided as either 0 (\"home insurance claim\") or 1 (\"auto insurance claim\"). This to is presented as a text file with one row per sample, presented in the same order as the claim text.\n",
-"\n",
-"Run the following cell to examine the contents of the supplied claims_labels.txt file:"
+"### Upload training data to the blob store"
]
},
{

@@ -288,17 +285,79 @@
"metadata": {},
"outputs": [],
"source": [
-"labels = [int(re.sub(\"\\n\", \"\", label)) for label in open(os.path.join(data_location, 'claims_labels.txt'))]\n",
-"print(len(labels))\n",
-"print(labels[0:5]) # first 5 labels\n",
-"print(labels[-5:]) # last 5 labels"
+"input_location = \"./data\"\n",
+"target_path = \"training-data\"\n",
+"datastore = ws.get_default_datastore()\n",
+"datastore.upload(input_location, \n",
+" target_path = target_path, \n",
+" overwrite = True, \n",
+" show_progress = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
-"As you can see from the above output, the values are integers 0 or 1. In order to use these as labels with which to train our model, we need to convert these integer values to categorical values (think of them like enum's from other programming languages).\n",
+"### Create a Tabular dataset and review the training data"
]
},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"target_path = \"training-data\"\n",
+"file_name = \"claims_data.csv\"\n",
+"training_data_path = DataPath(datastore=datastore, \n",
+" path_on_datastore=os.path.join(target_path, file_name),\n",
+" name=\"training-data\")\n",
+"train_ds = Dataset.Tabular.from_delimited_files(path=training_data_path)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Register the training dataset"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"dataset_name = \"claims-dataset\"\n",
+"description = \"Dataset to classify claim type - Auto or Home.\"\n",
+"registered_dataset = train_ds.register(ws, dataset_name, description=description, create_new_version=True)\n",
+"print('Registered dataset name {} and version {}'.format(registered_dataset.name, registered_dataset.version))"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Review the training dataset"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"df = train_ds.to_pandas_dataframe()\n",
+"claims_corpus = df['claims'].values\n",
+"labels = df['labels'].values\n",
+"df.sample(n=10)"
+]
+},
{
"cell_type": "markdown",
"metadata": {},
"source": [
+"As you can see from the above output, the label values are integers 0 or 1. In order to use these as labels with which to train our model, we need to convert these integer values to categorical values (think of them like enums from other programming languages).\n",
"\n",
"We can use the to_categorical method from `keras.utils` to convert these values into binary categorical values. Run the following cell:"
]
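
The cell that performs this conversion falls outside the hunk above. For reference, a minimal sketch of what `to_categorical` does to the 0/1 labels — using the `tensorflow.keras` location of the utility, which matches the TensorFlow 2.x pinned by this lab; `labels` is the array pulled from the dataframe above:

```python
from tensorflow.keras.utils import to_categorical

# Converts integer class ids into one-hot (binary categorical) vectors:
# label 0 -> [1., 0.] (home insurance claim), label 1 -> [0., 1.] (auto).
labels_cat = to_categorical(labels, num_classes=2)
print(labels_cat.shape)  # (num_samples, 2)
print(labels_cat[:3])
```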

@@ -426,36 +485,6 @@
"print('Length of the vector: ', len(X[5]))"
]
},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"### Upload training data to blob store"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"**Create the `Workspace` from the saved config file**"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"import azureml.core\n",
-"\n",
-"print(azureml.core.VERSION)\n",
-"\n",
-"from azureml.core.workspace import Workspace\n",
-"\n",
-"ws = Workspace.from_config()\n",
-"print(ws)"
-]
-},
{
"cell_type": "markdown",
"metadata": {},

@@ -478,7 +507,7 @@
"\n",
"datastore = ws.get_default_datastore()\n",
"datastore.upload(input_location, \n",
-" target_path = 'inputs', \n",
+" target_path = 'training-inputs', \n",
" overwrite = True, \n",
" show_progress = True)\n",
"\n",

@@ -727,7 +756,9 @@
" '--batch-size', 16,\n",
" '--epochs', 100], \n",
" compute_target=compute_target, \n",
-" environment=tensorflow_env)"
+" environment=tensorflow_env)\n",
+"\n",
+"print(\"Created ScriptRunConfig\")"
]
},
{

@@ -751,7 +782,8 @@
"source": [
"experiment_name = 'claims-classification-exp'\n",
"experiment = Experiment(ws, experiment_name)\n",
-"run = experiment.submit(src)"
+"run = experiment.submit(src)\n",
+"run"
]
},
{

@@ -762,7 +794,7 @@
"\n",
"Using the azureml Jupyter widget, you can monitor the training run. You can monitor the validation accuracy and validation loss in real-time as the training progresses.\n",
"\n",
-"The training will approximately take around 2-4 minutes to complete. Once the training is completed you can then download the trained models locally by running the **Download the trained models** cell."
+"The training job will take approximately 15 minutes to complete. Note that the majority of the time is spent preparing the environment. Once the training is completed, you can download the trained models locally by running the **Download the trained models** cell."
]
},
{
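
The widget cell itself is not shown in this hunk. A minimal sketch of monitoring the submitted run, assuming the `azureml-widgets` package that ships with the Azure ML notebook kernels:

```python
from azureml.widgets import RunDetails

# Renders a live-updating view of the run's status, metrics, and logs.
RunDetails(run).show()

# Optionally block until the run finishes, streaming logs into the notebook.
run.wait_for_completion(show_output=True)
```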

@@ -852,7 +884,8 @@
"metadata": {},
"outputs": [],
"source": [
-"test_claim = ['I crashed my car into a pole.', \n",
+"test_claim = ['The house was on fire, everything reduced to ashes.',\n",
+" 'I crashed my car into a pole.', \n",
" 'The flood ruined my house.', \n",
" 'I lost control of my car and fell in the river.']\n",
"\n",

@@ -877,8 +910,13 @@
"source": [
"pred = model.predict(test_data)\n",
"pred_label = pred.argmax(axis=1)\n",
-"pred_df = pd.DataFrame(np.column_stack((pred,pred_label)), columns=['class_0', 'class_1', 'label'])\n",
+"pred_df = pd.DataFrame(np.column_stack((test_claim,pred,pred_label)), columns=['claim', \n",
+" 'class_0', \n",
+" 'class_1', \n",
+" 'label'])\n",
"pred_df.label = pred_df.label.astype(int)\n",
"pred_df['prediction'] = pred_df['label'].apply(lambda x: 'Auto Insurance Claim' \n",
" if x == 1 else 'Home Insurance Claim')\n",
"print('Predictions')\n",
"pred_df"
]

@@ -893,9 +931,9 @@
],
"metadata": {
"kernelspec": {
-"display_name": "Python 3.6 - AzureML",
+"display_name": "Python 3.8 - AzureML",
"language": "python",
-"name": "python3-azureml"
+"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {

@@ -907,7 +945,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.6.9"
+"version": "3.8.1"
}
},
"nbformat": 4,

@@ -327,9 +327,18 @@
},
{
"cell_type": "code",
-"execution_count": null,
+"execution_count": 1,
"metadata": {},
-"outputs": [],
+"outputs": [
+{
+"name": "stdout",
+"output_type": "stream",
+"text": [
+"1.34.0\n",
+"Workspace.create(name='mcwmachinelearning', subscription_id='fdbba0bc-f686-4b8b-8b29-394e0d9ae697', resource_group='mcw-support-jss')\n"
+]
+}
+],
"source": [
"import azureml.core\n",
"\n",

@@ -561,6 +570,36 @@
"## Test Deployment"
]
},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"**Load the deployed webservice from workspace**"
+]
+},
+{
+"cell_type": "code",
+"execution_count": 6,
+"metadata": {},
+"outputs": [
+{
+"data": {
+"text/plain": [
+"AciWebservice(workspace=Workspace.create(name='mcwmachinelearning', subscription_id='fdbba0bc-f686-4b8b-8b29-394e0d9ae697', resource_group='mcw-support-jss'), name=claimclassservice, image_id=None, compute_type=None, state=ACI, scoring_uri=Healthy, tags=http://d82ed023-220b-4ad7-934e-5e510a2ade4b.eastus.azurecontainer.io/score, properties={'name': 'Claim Classification'}, created_by={'azureml.git.repository_uri': 'https://github.com/microsoft/MCW-Cognitive-services-and-deep-learning.git', 'mlflow.source.git.repoURL': 'https://github.com/microsoft/MCW-Cognitive-services-and-deep-learning.git', 'azureml.git.branch': 'main', 'mlflow.source.git.branch': 'main', 'azureml.git.commit': '66198be8051c8c12b3f4da33d87f23163562c69b', 'mlflow.source.git.commit': '66198be8051c8c12b3f4da33d87f23163562c69b', 'azureml.git.dirty': 'True', 'hasInferenceSchema': 'False', 'hasHttps': 'False'})"
+]
+},
+"execution_count": 6,
+"metadata": {},
+"output_type": "execute_result"
+}
+],
+"source": [
+"from azureml.core.webservice import Webservice\n",
+"service_name = \"claimclassservice\"\n",
+"service = Webservice(ws, service_name)\n",
+"service"
+]
+},
{
"cell_type": "markdown",
"metadata": {},

@@ -570,9 +609,19 @@
},
{
"cell_type": "code",
-"execution_count": null,
+"execution_count": 4,
"metadata": {},
-"outputs": [],
+"outputs": [
+{
+"name": "stdout",
+"output_type": "stream",
+"text": [
+"Predicted label for test claim #1 is 1\n",
+"Predicted label for test claim #2 is 0\n",
+"Predicted label for test claim #3 is 1\n"
+]
+}
+],
"source": [
"import json\n",
"\n",

@@ -589,7 +638,39 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"### Make HTTP calls to test the deployed Web Service\n",
+"### Make HTTP calls to test the deployed Web Service"
]
},
+{
+"cell_type": "code",
+"execution_count": 8,
+"metadata": {},
+"outputs": [
+{
+"name": "stdout",
+"output_type": "stream",
+"text": [
+"Predicted label for test claim #1 is 1\n",
+"Predicted label for test claim #2 is 0\n",
+"Predicted label for test claim #3 is 1\n"
+]
+}
+],
+"source": [
+"import requests\n",
+"\n",
+"headers = {'Content-Type':'application/json'}\n",
+"\n",
+"for i in range(len(test_claims)):\n",
+" response = requests.post(url, json.dumps([test_claims[i]]), headers=headers)\n",
+" print('Predicted label for test claim #{} is {}'.format(i+1, response.text))"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Capture the scoring URI\n",
+"\n",
+"In order to call the service from a REST client, you need to acquire the scoring URI. Take note of the printed scoring URI; you will need it in the last notebook.\n",
+"\n",

@@ -598,19 +679,20 @@
},
{
"cell_type": "code",
-"execution_count": null,
+"execution_count": 9,
"metadata": {},
-"outputs": [],
+"outputs": [
+{
+"name": "stdout",
+"output_type": "stream",
+"text": [
+"ACI Service: Claim Classification scoring URI is: http://d82ed023-220b-4ad7-934e-5e510a2ade4b.eastus.azurecontainer.io/score\n"
+]
+}
+],
"source": [
-"import requests\n",
-"\n",
"url = service.scoring_uri\n",
-"print('ACI Service: Claim Classification scoring URI is: {}'.format(url))\n",
-"headers = {'Content-Type':'application/json'}\n",
-"\n",
-"for i in range(len(test_claims)):\n",
-" response = requests.post(url, json.dumps([test_claims[i]]), headers=headers)\n",
-" print('Predicted label for test claim #{} is {}'.format(i+1, response.text))"
+"print('ACI Service: Claim Classification scoring URI is: {}'.format(url))"
]
},
{

@@ -623,9 +705,9 @@
],
"metadata": {
"kernelspec": {
-"display_name": "Python 3.6 - AzureML",
+"display_name": "Python 3.8 - AzureML",
"language": "python",
-"name": "python3-azureml"
+"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {

@@ -637,7 +719,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.6.9"
+"version": "3.8.1"
}
},
"nbformat": 4,

@@ -6,7 +6,15 @@
"source": [
"# Combining Pre-Built & Custom AI Services\n",
"\n",
-"In this notebook, you will integrate with the Computer Vision API and the Text Analytics API to augment the claims processing capabilities. In the end, you will integrate the API calls to the summarizer and classifier services that your deployed and produce a finished claim report that shows all of the processing applied to the claim text and claim image."
+"In this notebook, you will integrate with the Text Analytics API to augment the claims processing capabilities. In the end, you will integrate the API calls to the summarizer and classifier services that you deployed, and produce a finished claim report that shows all of the processing applied to the claim text.\n",
+"\n",
+"The Text Analytics API is a cloud-based service that provides Natural Language Processing (NLP) features for text mining and text analysis. Here we will look at the following features from the API:\n",
+"\n",
+"- Sentiment Analysis\n",
+"- Opinion mining\n",
+"- Key phrase extraction\n",
+"- Language detection\n",
+"- PII detection"
]
},
{

@@ -15,7 +23,7 @@
"source": [
"### Setup helper functions\n",
"\n",
-"Run the cell below to enable helper functions to save locally the outputs as pickle files from the various cognitive services "
+"Run the cell below to enable helper functions that save the outputs from the various Text Analytics services locally as pickle files."
]
},
{

@@ -44,14 +52,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"## Task 1 - Caption & Tag with the Computer Vision API"
+"## Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
-"In the cell bellow, provided the key to your Computer Vision API and run the cell."
+"Update the following cell with the correct **endpoint URL** and **key** for your deployed instance of the Text Analytics API and run the cell. Be sure your value ends in a slash (/)."
]
},
{

@@ -60,17 +68,35 @@
"metadata": {},
"outputs": [],
"source": [
-"subscription_key = '' #\"<your_computer_vision_api_key>\"\n",
-"assert subscription_key"
+"endpoint = 'https://ta-kat.cognitiveservices.azure.com/' #\"<your_text_analytics_api_endpoint>\"\n",
+"key = 'ceca0114c9ee489cb4621754d6e70e1a' #\"<your_text_analytics_key>\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
-"Construct the endpoint to the Computer Vision API by running the following cell. Notice the last path segment is analyze, which indicates you will use the analyze feature.\n",
+"**Instantiate the Text Analytics Client**"
]
},
{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"from azure.core.credentials import AzureKeyCredential\n",
+"from azure.ai.textanalytics import TextAnalyticsClient\n",
+"\n",
-"Be sure to update the value in vision_endpoint below so it matches the Endpoint value you copied from the Azure Portal for your instance of the Computer Vision service. Be sure your value ends in a slash (/)."
+"credential = AzureKeyCredential(key)\n",
+"client = TextAnalyticsClient(endpoint=endpoint, credential=credential)"
]
},
{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"**Set up the example claims document**"
+]
+},
+{

@@ -79,174 +105,30 @@
"metadata": {},
"outputs": [],
"source": [
-"vision_endpoint = '' #\"<your_computer_vision_api_endpoint>\"\n",
-"vision_base_url = vision_endpoint + \"vision/v3.0/\"\n",
-"vision_analyze_url = vision_base_url + \"analyze\""
+"claim = (\"I had called earlier to report a car accident and I spoke with Jane and she was very helpful. \"\n",
+" \"However, the wait time on the call was unacceptable, I was on it for more than 30 minutes. \"\n",
+" \"As requested, my license plate number is ABC2021. \"\n",
+" \"Like I said on the phone, the accident was the other SUV's fault \"\n",
+" \"for making a sharp turn and hitting my car on the right passenger side. \"\n",
+" \"Thankfully I was not hurt but the damage to the car is substantial. \"\n",
+" \"I request you to process the claim urgently and give me a loaner vehicle for the duration of repairs. \"\n",
+" \"My mobile phone is 55-999-5555 if you need to reach me. Thank you.\")\n",
+"print(claim)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
-"The following cell contains a list of sample images found after performing a simple web search. Feel free to substitute in URLs to the image of your choice."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"fender_bender = \"https://www.washingtonpost.com/blogs/innovations/files/2015/02/Stolen_Car_Crash-00aef.jpg\"\n",
-"damaged_house = \"https://c2.staticflickr.com/8/7342/10983313185_0589b74946_z.jpg\"\n",
-"police_car = \"https://localtvwnep.files.wordpress.com/2015/11/fender-bender.jpeg\"\n",
-"car_with_text = \"https://static.buildasign.com/cmsimages/bas-vinyl-lettering-splash-01.png\"\n",
-"car_tow = 'https://i.ytimg.com/vi/wmxJ2FrzTWo/maxresdefault.jpg'"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"From the list of images above, select one and assign it to image_url for further processing:"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"image_url = car_tow"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Run the following cell to preview the image you have selected."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"from IPython.display import Image, display\n",
-"display(Image(image_url))"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"The following cell builds the HTTP request to make against the Computer Vision API.\n",
+"## Sentiment Analysis\n",
"\n",
-"Run the following cell to retrieve the caption and tags:"
]
},
{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"import requests\n",
-"headers = {'Ocp-Apim-Subscription-Key': subscription_key }\n",
-"params = {'visualFeatures': 'Categories,Description,Tags,Color'}\n",
-"data = {'url': image_url}\n",
-"response = requests.post(vision_analyze_url, headers=headers, params=params, json=data)\n",
-"response.raise_for_status()\n",
-"analysis = response.json()\n",
-"# Save the ouput locally\n",
-"save_output(analysis, 'vision_results')\n",
-"analysis"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"As you can see in the above output, the result is a nested document structure. Run the following cells to pull out the caption and top 3 tag results:"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"caption = analysis[\"description\"][\"captions\"][0][\"text\"].capitalize()\n",
-"caption"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"topTags = analysis[\"description\"][\"tags\"][0:3]\n",
-"topTags"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"## Task 2 - Performing OCR"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"In order to perform OCR with the Computer Vision service, you need to target the OCR endpoint.\n",
+"The Text Analytics API's Sentiment Analysis feature is used for detecting positive and negative sentiment. If you send a Sentiment Analysis request, the API will return sentiment labels (such as \"negative\", \"neutral\" and \"positive\") and confidence scores at the sentence and document-level.\n",
"\n",
-"Run the following cell to construct the right URL:"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"vision_ocr_url = vision_base_url + \"ocr\""
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Next, invoke the OCR endpoint with the following code and examine the result:"
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"headers = {'Ocp-Apim-Subscription-Key': subscription_key }\n",
-"params = {}\n",
-"data = {'url': image_url}\n",
-"response = requests.post(vision_ocr_url, headers=headers, params=params, json=data)\n",
-"response.raise_for_status()\n",
-"ocr_analysis = response.json()\n",
-"# Save the ouput locally\n",
-"save_output(ocr_analysis, 'ocr_results')\n",
-"ocr_analysis"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"We have provided the following code for you to extract the text as a flat array from the results.\n",
+"Run the cell below and observe the following:\n",
+"- Overall document level sentiment with breakdown of the sentiment scores\n",
+"- Sentence level sentiment with breakdown of the sentiment scores\n",
"\n",
-"Run the following cell to extract the text items from the results document:"
+"In the end, we will save the raw response from the **analyze_sentiment** API in the local directory `output_location`."
]
},
{

@@ -255,64 +137,35 @@
"metadata": {},
"outputs": [],
"source": [
-"import itertools\n",
-"flatten = lambda x: list(itertools.chain.from_iterable(x))\n",
-"words_list = [[ [w['text'] for w in line['words']] for line in d['lines']] for d in ocr_analysis['regions']]\n",
-"words_list = flatten(flatten(words_list))\n",
-"print(list(words_list))"
+"response = client.analyze_sentiment(documents=[claim])[0]\n",
+"overall_sentiment = response.sentiment\n",
+"print(\"Document Sentiment: {}\".format(overall_sentiment))\n",
+"overall_positive_score = response.confidence_scores.positive\n",
+"overall_neutral_score = response.confidence_scores.neutral\n",
+"overall_negative_score = response.confidence_scores.negative\n",
+"print(\"Overall scores: positive={0:.2f}; neutral={1:.2f}; negative={2:.2f} \\n\".format\n",
+" (overall_positive_score, overall_neutral_score, overall_negative_score))\n",
+"for idx, sentence in enumerate(response.sentences):\n",
+" print(\"Sentence: {}\".format(sentence.text))\n",
+" print(\"Sentiment: {}\".format(sentence.sentiment))\n",
+" print(\"Sentence score: Positive={0:.2f} Neutral={1:.2f} Negative={2:.2f}\\n\".format\n",
+" (sentence.confidence_scores.positive, \n",
+" sentence.confidence_scores.neutral, \n",
+" sentence.confidence_scores.negative\n",
+" )\n",
+" )\n",
+"save_output(response, 'sentiment_results')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
-"## Task 3 - Performing Sentiment Analysis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
-"Sentiment Analysis is performed using the Text Analytics API.\n",
+"## Opinion Mining\n",
"\n",
-"Update the following cell with the key to your instance of the Text Analytics API and run the cell:"
]
},
{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"text_analytics_subscription_key = '' #\"<your_text_analytics_key>\"\n",
-"assert text_analytics_subscription_key"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Update the following cell with the correct Endpoint URL for your deployed instance of the Text Analytics API and run the cell. Be sure your value ends in a slash (/)."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"#\"<your_text_analytics_api_endpoint>\"\n",
-"text_analytics_base_url = ''\n",
-"sentiment_api_url = text_analytics_base_url + \"text/analytics/v3.0/sentiment\""
-]
-},
-{
"cell_type": "markdown",
"metadata": {},
"source": [
-"The following cell has a set of example claims you can use to test the measurement sentiment.\n",
+"You can also use the **analyze_sentiment** API to mine opinions in the document. Opinion Mining provides granular information about the opinions related to words in the text.\n",
"\n",
-"Run the cell:"
+"Run the cell below and review the extracted opinions from the claim document. Observe that the API detects the opinion expressed by the user regarding wait times."
]
},
{

@@ -321,47 +174,43 @@
"metadata": {},
"outputs": [],
"source": [
-"neg_sent = \"\"\"We are just devastated and emotionally drained. \n",
-"The roof was torn off of our car, and to make matters\n",
-"worse my daughter's favorite teddy bear was impaled on the street lamp.\"\"\"\n",
-"pos_sent = \"\"\"We are just happy the damaage was mininmal and that everyone is safe. \n",
-"We are thankful for your support.\"\"\"\n",
-"neutral_sent = \"\"\"I crashed my car.\"\"\"\n",
-"long_claim = \"\"\"\n",
-"I was driving down El Camino and stopped at a red light.\n",
-"It was about 3pm in the afternoon. The sun was bright and shining just behind the stoplight.\n",
-"This made it hard to see the lights. There was a car on my left in the left turn lane.\n",
-"A few moments later another car, a black sedan pulled up behind me. \n",
-"When the left turn light changed green, the black sedan hit me thinking \n",
-"that the light had changed for us, but I had not moved because the light \n",
-"was still red. After hitting my car, the black sedan backed up and then sped past me.\n",
-"I did manage to catch its license plate. The license plate of the black sedan was ABC123. \n",
-"\"\"\""
+"opinions = []\n",
+"response = client.analyze_sentiment(documents=[claim], show_opinion_mining=True)[0]\n",
+"for sentence in response.sentences:\n",
+" for mined_opinion in sentence.mined_opinions:\n",
+" opinion = {}\n",
+" opinion['Sentence'] = sentence.text\n",
+" print(\"Sentence: {}\".format(sentence.text))\n",
+" target = mined_opinion.target\n",
+" opinion['target'] = target.text\n",
+" opinion['target_sentiment'] = target.sentiment\n",
+" print(\"Target: '{}' Sentiment: {} (scores: Positive={} Negative={})\".format(\n",
+" target.text, \n",
+" target.sentiment, \n",
+" round(target.confidence_scores.positive, 1), \n",
+" round(target.confidence_scores.negative, 1)))\n",
+" opinion['assessments'] = []\n",
+" for assessment in mined_opinion.assessments:\n",
+" item = {}\n",
+" item['assessment'] = assessment.text\n",
+" item['assessment_sentiment'] = assessment.sentiment\n",
+" opinion['assessments'].append(item)\n",
+" print(\"Assessment: '{}' Sentiment: {} (scores: Positive={} Negative={}\".format(\n",
+" assessment.text, \n",
+" assessment.sentiment,\n",
+" round(assessment.confidence_scores.positive, 1), \n",
+" round(assessment.confidence_scores.negative, 1)))\n",
+" print(\"\\n\")\n",
+" opinions.append(opinion)\n",
+"save_output(response, 'opinion_mining_results')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
-"From the above list of claims, select one and assign its variable to claim_text to be used in the call to the Text Analytics API."
-]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"claim_text = long_claim"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"The API requires you to submit a document of the following form.\n",
+"## Key Phrase Extraction\n",
"\n",
-"Run the cell to build the request document:"
+"The Key Phrase Extraction (KPE) capability of the Text Analytics API is useful to quickly identify the main points in a collection of documents. We will apply KPE to our claims text to extract main concepts in the text. Run the cell below."
]
},
{

@@ -370,16 +219,30 @@
"metadata": {},
"outputs": [],
"source": [
-"documents = {'documents' : [\n",
-" {'id': '1', 'language': 'en', 'text': claim_text}\n",
-"]}"
+"key_phrases = []\n",
+"try:\n",
+" response = client.extract_key_phrases(documents = [claim])[0]\n",
+" if not response.is_error:\n",
+" print(\"\\tKey Phrases:\")\n",
+" for phrase in response.key_phrases:\n",
+" key_phrases.append(phrase)\n",
+" print(\"\\t\\t\", phrase)\n",
+" save_output(response, 'kpe_results')\n",
+" else:\n",
+" print(response.id, response.error)\n",
+"except Exception as err:\n",
+" print(\"Encountered exception. {}\".format(err))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
-"Now invoke the Text Analytics API and observe the result."
+"## Language Detection\n",
+"\n",
+"The Language Detection feature of the Azure Text Analytics REST API evaluates text input for each document and returns language identifiers with a score that indicates the strength of the analysis.\n",
+"\n",
+"First, we will translate our claim to Spanish and then evaluate the translated claim. Run the following two cells to detect the claim's language."
]
},
{

@@ -388,20 +251,15 @@
"metadata": {},
"outputs": [],
"source": [
-"headers = {\"Ocp-Apim-Subscription-Key\": text_analytics_subscription_key}\n",
-"response = requests.post(sentiment_api_url, headers=headers, json=documents)\n",
-"sentiments = response.json()\n",
-"# Save the ouput locally\n",
-"save_output(sentiments, 'sentiment_results')\n",
-"sentiments"
]
},
{
-"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
-"Please note that the Text Analytics API provides breakdown of sentiments at the individual sentence levels. Run the following cell to view the overall sentiment of the document and sentiment breakdown by sentences."
+"claim_spanish = (\"Había llamado antes para reportar un accidente automovilístico y hablé con Jane \"\n",
+" \"y ella fue de gran ayuda. Sin embargo, el tiempo de espera de la llamada fue inaceptable, \"\n",
+" \"estuve en ella durante más de 30 minutos. Según lo solicitado, mi número de placa es ABC2021. \"\n",
+" \"Como dije por teléfono, el accidente fue la otra falla de la SUV por hacer un giro brusco y \"\n",
+" \"golpear mi auto en el lado derecho del pasajero. Afortunadamente no me lastimé, pero el daño \"\n",
+" \"al auto es sustancial. Le solicito que procese el reclamo con urgencia y me dé un vehículo \"\n",
+" \"solitario mientras dure la reparación. Mi teléfono móvil es 55-999-5555 si necesita \"\n",
+" \"comunicarse conmigo. Gracias.\")\n",
+"print(claim_spanish)"
]
},
{

@@ -410,26 +268,77 @@
"metadata": {},
"outputs": [],
"source": [
-"score_interpretation = sentiments['documents'][0]['sentiment']\n",
-"print('Overall document sentiment:', score_interpretation)\n",
-"print('')\n",
-"print('Sentiment breakdown by sentences')\n",
-"for item in sentiments['documents'][0]['sentences']:\n",
-" print('Sentence:', item['text'].strip(), 'Sentiment:', item['sentiment'])"
+"try:\n",
+" response = client.detect_language(documents = [claim_spanish], country_hint = 'us')[0]\n",
+" print(\"Language:\", response.primary_language.name)\n",
+" print(\"Confidence Score:\", response.primary_language.confidence_score)\n",
+" save_output(response, 'language_detection_results')\n",
+"except Exception as err:\n",
+" print(\"Encountered exception. {}\".format(err))"
]
},
{
-"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
-"## Task 4 - Save the Results in Blob Store\n",
+"Let's detect the language of the original claim."
]
},
{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"language = None\n",
+"try:\n",
+" response = client.detect_language(documents = [claim], country_hint = 'us')[0]\n",
+" language = response.primary_language.name\n",
+" print(\"Language:\", response.primary_language.name)\n",
+" print(\"Confidence Score:\", response.primary_language.confidence_score)\n",
+" save_output(response, 'language_detection_results')\n",
+"except Exception as err:\n",
+" print(\"Encountered exception. {}\".format(err))"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Personally Identifiable Information (PII) Detection\n",
+"\n",
+"PII detection extracts personal information from an input text and gives you the option of masking it. Run the cell below to identify PII in the claims text."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"redacted_claim = None\n",
+"response = client.recognize_pii_entities([claim], language=\"en\")\n",
+"result = [doc for doc in response if not doc.is_error]\n",
+"for doc in result:\n",
+" redacted_claim = doc.redacted_text\n",
+" print(\"Redacted Text: {}\".format(redacted_claim))\n",
+" for entity in doc.entities:\n",
+" print(\"Entity: {}\".format(entity.text))\n",
+" print(\"\\tCategory: {}\".format(entity.category))\n",
+" print(\"\\tConfidence Score: {}\".format(entity.confidence_score))\n",
+" save_output(response, 'pii_detection_results')"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Save the Results in Blob Store\n",
"\n",
"Save the JSON responses that came from the various cognitive services to a permanent store like the blob storage for future reference."
]
},
{
-"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
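
The upload cell itself falls outside this hunk. A sketch consistent with the `datastore.upload` pattern used earlier in the lab; the local folder name is an assumption about where `save_output` wrote its pickle files:

```python
# Assumes 'ws' is the Workspace and that save_output(...) wrote its pickle
# files into a local folder (the name './output' here is an assumption).
datastore = ws.get_default_datastore()
datastore.upload('./output',
                 target_path='cognitive-services-results',
                 overwrite=True,
                 show_progress=True)
```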

@@ -483,7 +392,6 @@
]
},
{
-"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [

@@ -497,7 +405,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"## Task 5 - Invoking the Azure ML Deployed Services"
+"## Invoking the Azure ML Deployed Services"
]
},
{

@@ -513,6 +421,7 @@
"metadata": {},
"outputs": [],
"source": [
+"import requests\n",
"def invoke_service(ml_service_key, ml_service_scoring_endpoint, ml_service_input):\n",
" headers = {\"Authorization\": \"Bearer \" + ml_service_key}\n",
" response = requests.post(ml_service_scoring_endpoint, headers=headers, json=ml_service_input)\n",
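
The hunk window cuts the `invoke_service` helper off after the POST call. A plausible completion for reference only — the return statement is an assumption not shown in the diff, inferred from the calling cells indexing into the result as a parsed JSON list:

```python
import requests

def invoke_service(ml_service_key, ml_service_scoring_endpoint, ml_service_input):
    # An empty key still produces a harmless header when the service
    # was deployed without authentication.
    headers = {"Authorization": "Bearer " + ml_service_key}
    response = requests.post(ml_service_scoring_endpoint, headers=headers,
                             json=ml_service_input)
    response.raise_for_status()
    return response.json()  # assumption: callers index into a parsed JSON list
```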

@@ -526,7 +435,7 @@
"source": [
"Configure the classifier invocation with the key and endpoint as appropriate to your deployed instance:\n",
"\n",
-"> This is the scoring URI you copied from the notebook named `04 Deploy Classifier Web Service.ipynb`."
+"> This is the scoring URI you copied from the notebook named `05 Deploy Classifier Web Service.ipynb`."
]
},
{

@@ -537,8 +446,8 @@
"source": [
"classifier_service_key = \"\" #leave this value empty if the service does not have authentication enabled\n",
"#\"<your_classifier_scoring_url>\"\n",
-"classifier_service_scoring_endpoint = ''\n",
-"classifier_service_input = [claim_text]"
+"classifier_service_scoring_endpoint = 'http://d82ed023-220b-4ad7-934e-5e510a2ade4b.eastus.azurecontainer.io/score'\n",
+"classifier_service_input = [claim]"
]
},
{

@@ -587,8 +496,8 @@
"source": [
"summarizer_service_key = \"\" #leave this value empty if the service does not have authentication enabled\n",
"#\"<your_summarizer_service_url>\"\n",
-"summarizer_service_scoring_endpoint = ''\n",
-"summarizer_service_input = claim_text"
+"summarizer_service_scoring_endpoint = 'http://34412728-55c6-4593-b7dc-3b840ef28fc7.eastus.azurecontainer.io/score'\n",
+"summarizer_service_input = claim"
]
},
{

@@ -606,15 +515,15 @@
"source": [
"summarizer_result = invoke_service(summarizer_service_key, summarizer_service_scoring_endpoint, \n",
" summarizer_service_input)\n",
-"formatted_result = summarizer_result[0].replace(\"\\\\n\", \" \").strip() if len(summarizer_result) > 0 else \"N/A\"\n",
-"formatted_result"
+"formatted_summary = summarizer_result[0].replace(\"\\\\n\", \" \").strip() if len(summarizer_result) > 0 else \"N/A\"\n",
+"print(formatted_summary)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
-"## Task 6 - Summarizing the Results\n",
+"## Summarizing the Results\n",
"\n",
"In this final task, you pull together all of the pieces to display the results of your AI based processing.\n",
"\n",

@@ -630,23 +539,50 @@
"from IPython.core.display import HTML\n",
"\n",
"displayTemplate = \"\"\"\n",
-"<div><b>Claim Summary</b></div>\n",
-"<div>Classification: {}</div>\n",
-"<div>Caption: {}</div>\n",
-"<div>Tags: {}</div>\n",
-"<div>Text in Image: {}</div>\n",
-"<div>Sentiment: {}</div>\n",
-"<div><img src='{}' width='200px'></div>\n",
-"<div>Summary: </div>\n",
-"<div><pre>{} </pre></div>\n",
+"<div><h1>Claim Summary</h1></div>\n",
+"<div> </div>\n",
+"<div>Claim:</div>\n",
+"<div>Claim Type: <b>{}</b></div>\n",
+"<div>Language: {}</div>\n",
+"<div>Overall Sentiment: {}</div>\n",
+"<div>Overall Positive Sentiment Score: {}</div>\n",
+"<div>Overall Negative Sentiment Score: {}</div>\n",
+"<div> </div>\n",
+"<div>Key Phrases:</div>\n",
+"<div><b>{}</b></div>\n",
+"<div><b>{}</b></div>\n",
+"<div><b>{}</b></div>\n",
+"<div><b>{}</b></div>\n",
+"<div><b>{}</b></div>\n",
+"<div> </div>\n",
+"<div>Key Opinion:</div>\n",
+"<div><pre>{}</pre></div>\n",
+"<div>Target: <b>{}</b> Sentiment: <b>{}</b></div>\n",
+"<div>Assessment: <b>{}</b> Sentiment: <b>{}</b></div>\n",
+"<div> </div>\n",
+"<div>Claim Summary:</div>\n",
+"<div><pre>{}</pre></div>\n",
+"<div> </div>\n",
+"<div>Redacted Claim:</div>\n",
+"<div>{}</div>\n",
"\n",
"\"\"\"\n",
-"displayTemplate = displayTemplate.format(classification, caption, ' '.join(topTags), ' '.join(words_list), \n",
-" score_interpretation, image_url, formatted_result, \n",
-" claim_text)\n",
+"displayTemplate = displayTemplate.format(classification, \n",
+" language, \n",
+" overall_sentiment, \n",
+" overall_positive_score, \n",
+" overall_negative_score, \n",
+" key_phrases[0],\n",
+" key_phrases[1],\n",
+" key_phrases[2],\n",
+" key_phrases[3],\n",
+" key_phrases[4],\n",
+" opinions[0]['Sentence'],\n",
+" opinions[0]['target'],\n",
+" opinions[0]['target_sentiment'], \n",
+" opinions[0]['assessments'][0]['assessment'],\n",
+" opinions[0]['assessments'][0]['assessment_sentiment'],\n",
+" formatted_summary, \n",
+" redacted_claim\n",
+" )\n",
"display(HTML(displayTemplate))"
]
},

@@ -660,9 +596,9 @@
],
"metadata": {
"kernelspec": {
-"display_name": "Python 3.6 - AzureML",
+"display_name": "Python 3.8 - AzureML",
"language": "python",
-"name": "python3-azureml"
+"name": "python38-azureml"
},
"language_info": {
"codemirror_mode": {
@ -674,7 +610,7 @@
|
|||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.9"
|
||||
"version": "3.8.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
|
|
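The metadata change retargets the notebook from the Python 3.6 to the Python 3.8 AzureML kernel. If the notebook fails to open under the new kernel name, a quick check (assuming the Jupyter CLI is available on the compute) is to list the registered kernels from a notebook cell:

```python
# "python38-azureml" should appear in the output if the
# Python 3.8 - AzureML kernel is registered on this compute.
!jupyter kernelspec list
```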
@@ -0,0 +1,59 @@
claims, labels
"Coming home, I drove into the wrong house and collided with a tree I don't have.", 1
The other car collided with mine without giving warning of its intentions., 1
"I thought my window was down, but I found out it was up when I put my head through it.", 1
I collided with a stationary truck coming the other way., 1
A truck backed through my windshield into my wife's face., 1
A pedestrian hit me and went under my car., 1
The guy was all over the road and I had to swerve a number of times before I hit him., 1
"I pulled away from the side of the road, glanced at my mother-in-law and headed over the embankment.", 1
In an attempt to kill a fly I drove into a telephone pole., 1
I had been driving for forty years when I fell asleep at the wheel and had an accident., 1
As I approached the intersection a sign suddenly appeared where no STOP sign had ever appeared before., 1
My car was legally parked as it backed into the other car., 1
"An invisible car came out of nowhere, struck my car and vanished.", 1
"I told the police that I was not injured but on removing my hat, found that I had fractured my skull.", 1
"The pedestrian had no idea which direction to run, so I ran over him.", 1
"I saw a slow moving, sad old faced gentleman as he bounced off the roof of my car.", 1
I was thrown from my car as it left the road. I was later found in a ditch by some stray cows., 1
"I was driving down El Camino and stopped at a red light. It was about 3pm in the afternoon. The sun was bright and shining just behind the stoplight. This made it hard to see the lights. There was a car on my left in the left turn lane. A few moments later another car, a black sedan pulled up behind me. When the left turn light changed green, the black sedan hit me thinking that the light had changed for us, but I had not moved because the light was still red. After hitting my car, the black sedan backed up and then sped past me. I did manage to catch its license plate. The license plate of the black sedan was ABC123.", 1
I caught the end of the yellow light and the other car moved into the intersection before the light had turned green. I clipped its fender., 1
I parked my electric car outside the gym and when I came back it was missing., 1
It was dark and I did not see the stop sign. I ran straight into the car crossing the intersection in front of me., 1
The guy was slipping all over the road and I had to maneuver a number of times before I hit him., 1
"I pulled away from the side of the road, glanced at my sister-in-law and headed over the footpath.", 1
In an attempt to kill an ant I drove into a fence., 1
The second car collided with the front one without any reason. Maybe the driver was texting., 1
"I thought my window was down when I drove through the car wash. Now everything is wet and ruined.", 1
I collided with a stationary truck when I made a sudden U-turn., 1
I had been driving for more than 8 hours and I fell asleep at the wheel and that is when I crashed., 1
As I approached the intersection I failed to see the newly posted STOP sign., 1
"The tornado was devastating; it tore the roof off of our house and the wind took with it all of our precious paintings.", 0
"The house was under five feet of water, and the first floor was completely flooded.", 0
The garage lights short-circuited and the cardboard box caught fire., 0
There was a gas leak in the kitchen and only the cat was home., 0
"The bug damage caused the wood supporting the deck to rot. In the strong hail winds, it collapsed.", 0
"The powder room had a leak and water dripped into the basement, destroying our fine carpet.", 0
The hurricane wind was so strong it ripped the roof off of the house., 0
"In the torrential rains, the hillside behind our home turned into a mudslide. It demolished one wall.", 0
The snow began to pile up so high that the tree in our front yard fell over onto our roof., 0
Our home's roof could not withstand the weight of all the snow that had piled on top of it., 0
The rain came so hard that our gutters were clogged and water began to collect on the second-story patio. It collapsed with the weight., 0
The strong winds ripped open our front door and sent everything inside the house flying. All of our fragile antiques were shattered., 0
There was a strong rumble from the earthquake and then we heard a crack as the patio collapsed into the ocean., 0
It hailed last night and damaged our roof and windows. The window pane glass was shattered all over the living room., 0
The earthquake created a large crack in our ceiling., 0
"After the quake, our foundation has a crack in it that runs the length of the house.", 0
The burglars broke into the house through our living room window., 0
"The thieves came in via the sliding glass door in the backyard, which was unlocked.", 0
We came home to find all of our computers missing. The thieves entered through the French doors., 0
We left a candle burning in the bathroom. A towel hanging nearby caught fire and then burned the whole bathroom., 0
The oven was left on as everyone forgot to turn it off. The turkey caught fire and the intense heat melted the stovetop., 0
The Christmas tree lights short-circuited and the Christmas tree caught fire., 0
There was a gas leak in the house when nobody was home. A spark must have triggered the explosion., 0
"The termite damage caused the wood supporting the awning to weaken. In the strong rains, it collapsed.", 0
"The skylight had a leak and water dripped into the house, destroying our Persian carpet.", 0
The offshore wind was so strong it ripped the awning off of the house., 0
The tornado winds ripped open our front roof and sent everything inside the house flying. All of our fragile items were destroyed., 0
"There was a 6.5 earthquake and we in San Francisco are used to such events. It hurts to see the damage they cause.", 0
We came home to find all of our gold jewelry missing. The robbers entered through the bay windows., 0
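The new CSV above is the lab's claims dataset: 58 rows where, judging by the examples, label 1 marks vehicle claims and label 0 marks home claims. A minimal loading sketch follows; the file name is an assumption since the diff view does not show the path, and `skipinitialspace=True` handles the space after the comma in the `claims, labels` header row:

```python
import pandas as pd

# Hypothetical file name -- the diff does not show the actual path.
df = pd.read_csv("claims_text_labels.csv", skipinitialspace=True)

print(df.shape)                     # expect (58, 2)
print(df["labels"].value_counts())  # 1 = vehicle claims, 0 = home claims
```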
Binary file not shown.
Before: Width | Height | Size: 259 KiB. After: Width | Height | Size: 308 KiB.
@@ -0,0 +1,13 @@
numpy==1.18.5
xlrd==1.2.0
pandas==1.0.4
scikit-learn==0.23.1
tensorflow==2.2.0
joblib==0.15.1
nltk==3.4.5
gensim==3.8.3
onnxmltools==1.7.0
keras2onnx==1.7.0
onnxruntime==1.4.0
tf2onnx==1.6.3
azure-ai-textanalytics==5.1.0
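The pinned set above can be installed in one step rather than cell by cell; a minimal example, assuming the file is saved as requirements.txt next to the notebook:

```python
# Install all pinned dependencies at once (file location assumed).
!pip install -r requirements.txt
```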
@@ -14,7 +14,7 @@ Next, they would like to summarize long claim text automatically. This summariza

Finally, they would like to automatically extract information from the photos submitted with the claims to increase their searchability.

January 2021
November 2021

## Target audience

@@ -9,7 +9,7 @@ Whiteboard design session student guide
</div>

<div class="MCWHeader3">
January 2021
November 2021
</div>

Information in this document, including URL and other Internet Web site references, is subject to change without notice. Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, e-mail address, logo, person, place or event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

@@ -9,7 +9,7 @@ Whiteboard design session trainer guide
</div>

<div class="MCWHeader3">
January 2021
November 2021
</div>

Information in this document, including URL and other Internet Web site references, is subject to change without notice. Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, e-mail address, logo, person, place or event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.