Contrib Add HTML Demo - Second Try (#528)

* HTML Demo initial upload

* Updated UI code to run from Blob images

* Final UI changes

* Removed separate references

* Added notebook for deploying to app service. Updated readmes for JupyterCode and root directory.

* Notebook updates

* Update blob storage links

* Update readme.md

* Renamed notebook 30 to 3. Added ../../.. to the path so it will work in its new directory. Removed the log stream commands, as they do not work on all plans. Fixed some typos throughout.

* Updated notebooks to reflect new paths and names

* update screenshot

* Update readme.md

* update jpgs screenshot

* Update readme.md

* Update readme.md

* Update readme.md

* Update readme.md

* Update readme.md

* Update readme.md

* Update readme.md

* Update readme.md

* Update readme.md

* Update readme.md

* Update readme.md

* Update readme.md

* Update readme.md

* Update readme.md

* Updated readme

* Update JupyterCode readme some more

* Update readme

* Minor tweaks to readmes

* Added about tab

* Update readme.md

* Update readme.md

* Update readme.md

* Update readme.md

* Fix broken link

Co-authored-by: Charles Henneberger <cihen@bu.edu>
Co-authored-by: primnp <51148602+primnp@users.noreply.github.com>
Co-authored-by: Prim P <p.nuwapa@gmail.com>
Co-authored-by: mullaa <48615903+mullaa@users.noreply.github.com>
Matthew Boyd 2020-06-18 16:39:49 -04:00 committed by GitHub
Parent f6bc5dbf06
Commit 2aebca4f9f
12 changed files with 2039 additions and 0 deletions

View file

@@ -14,5 +14,6 @@ Each project should live in its own subdirectory ```/contrib/<project>``` and co
## Tools
| Directory | Project description | Build status (optional) |
|---|---|---|
| [HTML Demo](html_demo) | These files provide an HTML web page that allows users to visualize the output of a deployed computer vision DNN model. Users can improve on and gain insights from their deployed model by uploading query/test images and examining the model results for correctness through the user interface. The interface includes sample query/test images for testing your own model and example output for 3 types of models: image classification, object detection, and image similarity. | |
| [vm_builder](vm_builder) | This script helps users create a single Ubuntu Data Science Virtual Machine with a GPU, with the computer vision recipes repo installed and ready to use. If you find the script to be outdated or not working, you can create the VM using the Azure portal or the Azure CLI tool with a few more steps. | |
| [vmss_builder](vmss_builder) | This script helps you set up a cluster of virtual machines with the computer vision recipes repo pre-installed using VMSS. This cluster is designed to be temporary, i.e. to be spun up and torn down. Users of this cluster will be provided a username/password/IP. This setup can be used for hands-on / lab sessions when you need to prepare multiple VM environments for a short period.|

View file

@@ -0,0 +1,396 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>\n",
"\n",
"<i>Licensed under the MIT License.</i>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Image Similarity Export\n",
"\n",
"In the Scenario->Image Similarity notebook [12_fast_retrieval.ipynb](12_fast_retrieval.ipynb) we implemented the approximate nearest neighbor search method to find similar images from a group of reference images, given a query input image. This notebook repeats some of those steps with the goal of exporting computed reference image features to text file for use in visualizing the results in an HTML web interface. \n",
"\n",
"To be able to test the model in a simple HTML interface, we export: the computed reference image features, a separate text file of reference image file names, and thumbnail versions of the reference images. The first two files are initially exported as text files then compressed into zip files to minimuze file size. The reference images are converted to 150x150 pixel thumbnails and stored in a flat directory. All exports are saved to the UICode folder. Notebook **2_upload_ui** is used to upload the exports to your Azure Blob storage account for easy public access. \n",
"\n",
"It is assumed you already completed the steps in notebook [12_fast_retrieval.ipynb](12_fast_retrieval.ipynb) and have deployed your query image processing model to an Azure ML resource (container services, Kubernetes services, ML web app, etc.) with a queryable, CORS-compliant API endpoint."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# Ensure edits to libraries are loaded and plotting is shown in the notebook.\n",
"%matplotlib inline\n",
"%reload_ext autoreload\n",
"%autoreload 2"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"# Standard python libraries\n",
"import sys\n",
"import os\n",
"import numpy as np\n",
"from pathlib import Path\n",
"import random\n",
"import scrapbook as sb\n",
"from sklearn.neighbors import NearestNeighbors\n",
"from tqdm import tqdm\n",
"import zipfile\n",
"from zipfile import ZipFile\n",
"\n",
"# Fast.ai\n",
"import fastai\n",
"from fastai.vision import (\n",
" load_learner,\n",
" cnn_learner,\n",
" DatasetType,\n",
" ImageList,\n",
" imagenet_stats,\n",
" models,\n",
" PIL\n",
")\n",
"\n",
"# Computer Vision repository\n",
"sys.path.extend([\".\", \"../../..\"]) # to access the utils_cv library\n",
"from utils_cv.classification.data import Urls\n",
"from utils_cv.common.data import unzip_url\n",
"from utils_cv.common.gpu import which_processor, db_num_workers\n",
"from utils_cv.similarity.metrics import compute_distances\n",
"from utils_cv.similarity.model import compute_features_learner\n",
"from utils_cv.similarity.plot import plot_distances, plot_ranks_distribution"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Fast.ai version = 1.0.57\n",
"Cuda is not available. Torch is using CPU\n"
]
}
],
"source": [
"print(f\"Fast.ai version = {fastai.__version__}\")\n",
"which_processor()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data preparation\n",
"We start with parameter specifications and data preparation. We use the *Fridge objects* dataset, which is composed of 134 images, divided into 4 classes: can, carton, milk bottle and water bottle. "
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"tags": [
"parameters"
]
},
"outputs": [],
"source": [
"# Data location\n",
"DATA_PATH = unzip_url(Urls.fridge_objects_path, exist_ok=True)\n",
"\n",
"# Image reader configuration\n",
"BATCH_SIZE = 16\n",
"IM_SIZE = 300\n",
"\n",
"# Number of comparison of nearest neighbor versus exhaustive search for accuracy computation\n",
"NUM_RANK_ITER = 100\n",
"\n",
"# Size of thumbnail images in pixels\n",
"MAX_SIZE = (150, 150)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Training set: 27 images, validation set: 107 images\n"
]
}
],
"source": [
"# Load images into fast.ai's ImageDataBunch object\n",
"random.seed(642)\n",
"data = (\n",
" ImageList.from_folder(DATA_PATH)\n",
" .split_by_rand_pct(valid_pct=0.8, seed=20)\n",
" .label_from_folder()\n",
" .transform(size=IM_SIZE)\n",
" .databunch(bs=BATCH_SIZE, num_workers = db_num_workers())\n",
" .normalize(imagenet_stats)\n",
")\n",
"print(f\"Training set: {len(data.train_ds.x)} images, validation set: {len(data.valid_ds.x)} images\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load model\n",
"\n",
"Below we load a [ResNet18](https://arxiv.org/pdf/1512.03385.pdf) CNN from fast.ai's library which is pre-trained on ImageNet."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"learn = cnn_learner(data, models.resnet18, ps=0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Alternatively, one can load a model which was trained using the [01_training_and_evaluation_introduction.ipynb](01_training_and_evaluation_introduction.ipynb) notebook using these lines of code:\n",
"```python\n",
" learn = load_learner(\".\", 'image_similarity_01_model')\n",
" learn.data = data\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Feature extraction\n",
"\n",
"We now compute the DNN features for each image in our validation set. We use the output of the penultimate layer as our image representation, which, for the Resnet-18 model has a dimensionality of 512 floating point values."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n"
]
}
],
"source": [
"# Use penultimate layer as image representation\n",
"embedding_layer = learn.model[1][-2] \n",
"print(embedding_layer)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Computed DNN features for the 107 validation images,each consisting of 512 floating point values.\n",
"\n"
]
}
],
"source": [
"# Compute DNN features for all validation images\n",
"valid_features = compute_features_learner(data, DatasetType.Valid, learn, embedding_layer)\n",
"print(f\"Computed DNN features for the {len(list(valid_features))} validation images,\\\n",
"each consisting of {len(valid_features[list(valid_features)[0]])} floating point values.\\n\")\n",
"\n",
"# Normalize all reference features to be of unit length\n",
"valid_features_list = np.array(list(valid_features.values()))\n",
"valid_features_list /= np.linalg.norm(valid_features_list, axis=1)[:,None]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Export for HTML Interface\n",
"\n",
"Here we package all of the data for upload to Blob Storage to interact with the model in a simple HTML interface. \n",
"\n",
"First, we export the computed reference features to ZIP file. "
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"f = open(\"ref_features.txt\", 'w')\n",
"f.write('[')\n",
"f.writelines('],\\n'.join('[' + ','.join(map(str,i)) for i in valid_features_list))\n",
"f.write(']]')\n",
"f.close()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we export the reference image file names to disk. Exported file names will include the parent directory name as well. "
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"f = open(\"ref_filenames.txt\", 'w')\n",
"f.write('[\"')\n",
"f.writelines('\",\\n\"'.join((i[len(DATA_PATH)+1:]).replace(\"/\",\"_\").replace(\"\\\\\",\"_\") for i in valid_features.keys()))\n",
"f.write('\"]')\n",
"f.close()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next we compress the exported text data into Zip files."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"# Writing files to zipfiles, one by one \n",
"with ZipFile('ref_features.zip','w', zipfile.ZIP_DEFLATED) as zip: \n",
" zip.write(\"ref_features.txt\")\n",
"with ZipFile('ref_filenames.zip','w', zipfile.ZIP_DEFLATED) as zip: \n",
" zip.write(\"ref_filenames.txt\")\n",
" \n",
"# Remove the txt files\n",
"os.remove(\"ref_features.txt\")\n",
"os.remove(\"ref_filenames.txt\")\n",
"\n",
"# Make subfolder to hold all HTML Demo files and a subfolder for the zip files\n",
"if not os.path.exists('../UICode'):\n",
" os.makedirs('../UICode')\n",
"\n",
"if not os.path.exists('../UICode/data'):\n",
" os.makedirs('../UICode/data')\n",
" \n",
"# Move the zip files to the new directory\n",
"os.replace(\"ref_features.zip\", \"../UICode/data/ref_features.zip\")\n",
"os.replace(\"ref_filenames.zip\", \"../UICode/data/ref_filenames.zip\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we resize the reference images to 150x150 pixel thumbnails in a new directory called 'small-150'"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"# Make subfolder to hold all HTML Demo files and a subfolder for the zip files\n",
"if not os.path.exists('../UICode/small-150'):\n",
" os.makedirs('../UICode/small-150')\n",
"\n",
"path_mr = '../UICode/small-150'\n",
"\n",
"# Now resize the images to thumbnails\n",
"for root, dirs, files in os.walk(DATA_PATH):\n",
" for file in files:\n",
" if file.endswith(\".jpg\"):\n",
" #fname = path_mr +'/' + root[len(DATA_PATH)+1:] + '_' + file\n",
" fname = os.path.join(path_mr, root[len(DATA_PATH)+1:] + '_' + file) \n",
" im = PIL.Image.open(os.path.join(root, file))\n",
" im.thumbnail(MAX_SIZE) \n",
" im.save(fname, 'JPEG', quality=70)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python (cv)",
"language": "python",
"name": "cv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View file

@@ -0,0 +1,245 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>\n",
"\n",
"<i>Licensed under the MIT License.</i>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Upload files for HTML User Interface\n",
"\n",
"In the notebook [1_image_similarity_export.ipynb](1_image_similarity_export.ipynb) we exported reference image features, reference image file names, and reference image thumbnails. In this notebook we upload those items, as well as our simplified HTML interface, to your Azure Blob storage account for easy public access. \n",
"\n",
"You should create an Azure Storage Account and a \"Container\" in that account to store your uploaded files. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# Ensure edits to libraries are loaded and plotting is shown in the notebook.\n",
"%matplotlib inline\n",
"%reload_ext autoreload\n",
"%autoreload 2"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Azure Blob Storage SDK Version: 12.2.0\n"
]
}
],
"source": [
"# Standard python libraries\n",
"import sys\n",
"from pathlib import Path\n",
"from tqdm.notebook import trange, tqdm\n",
"import os, uuid\n",
"import azure.storage.blob\n",
"from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient, ContentSettings\n",
"\n",
"# Check Storage SDK version number\n",
"print(f\"Azure Blob Storage SDK Version: {azure.storage.blob.VERSION}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First we setup variables to point to your Azure Blob storage account and your existing Blob container. May be best to setup a fresh Blob container for this upload. "
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"tags": [
"parameters"
]
},
"outputs": [],
"source": [
"AZURE_ACCOUNT_NAME = \"YOUR ACCOUNT NAME\"\n",
"AZURE_ACCOUNT_KEY = \"YOUR ACCOUNT ACCESS KEY\"\n",
"BLOB_CONTAINER_NAME = \"YOUR CONTAINER NAME\"\n",
"ENDPOINT_SUFFIX = \"core.windows.net\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next we upload the files to your Azure Blob storage."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Uploading non-image files:\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "94c0257b17f344ab851df91fb07b3e15",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"HBox(children=(FloatProgress(value=0.0, max=353000.0), HTML(value='')))"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Uploading thumbnail image files:\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "290a70054d8344bb964a93b8ff305941",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"HBox(children=(FloatProgress(value=0.0, max=469929.0), HTML(value='')))"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n"
]
}
],
"source": [
"azure_storage_connection_str = \"DefaultEndpointsProtocol=https;AccountName={};AccountKey={};EndpointSuffix={}\".format(AZURE_ACCOUNT_NAME, AZURE_ACCOUNT_KEY, ENDPOINT_SUFFIX)\n",
"container_name = BLOB_CONTAINER_NAME\n",
"local_files = ['../UICode/data/ref_filenames.zip','../UICode/data/ref_features.zip','../UICode/index.html','../UICode/script.js','../UICode/example_imgs.js','../UICode/style.css']\n",
"blob_files = ['data/ref_filenames.zip','data/ref_features.zip','index.html','script.js','example_imgs.js','style.css']\n",
"\n",
"# Create the BlobServiceClient object which will be used to create a container client\n",
"blob_service_client = BlobServiceClient.from_connection_string(azure_storage_connection_str)\n",
"\n",
"# Get total size of non-image files to upload\n",
"sizecounter = 0\n",
"for file in local_files:\n",
" sizecounter += os.stat(file).st_size\n",
" \n",
"print(\"Uploading non-image files:\")\n",
"\n",
"# # Upload the individual files for the front-end and the ZIP files for reference features\n",
"i = 0\n",
"with tqdm(total=sizecounter, unit='B', unit_scale=True, unit_divisor=1024) as pbar:\n",
" while (i < len(local_files)):\n",
" # Create a blob client using the local file name as the name for the blob\n",
" blob_client = blob_service_client.get_blob_client(container=container_name, blob=blob_files[i])\n",
"\n",
" # Upload the file\n",
" with open(local_files[i], \"rb\") as data:\n",
" buf = 0\n",
" buf = os.stat(local_files[i]).st_size\n",
" if (i==2):\n",
" blob_client.upload_blob(data, overwrite=True, content_settings=ContentSettings(content_type=\"text/html\"))\n",
" elif (i==5):\n",
" blob_client.upload_blob(data, overwrite=True, content_settings=ContentSettings(content_type=\"text/css\"))\n",
" else:\n",
" blob_client.upload_blob(data, overwrite=True)\n",
" if buf:\n",
" pbar.update(buf)\n",
" i+=1\n",
"\n",
"# Upload the thumbnail versions of the reference images\n",
"path_blob = 'small-150'\n",
"path_local = '../UICode/{}'.format(path_blob)\n",
"\n",
"# Get total size of all image files to upload\n",
"sizecounter = 0\n",
"for root, dirs, files in os.walk(path_local):\n",
" for file in files:\n",
" sizecounter += os.stat(os.path.join(path_local, file)).st_size\n",
"\n",
"print(\"Uploading thumbnail image files:\")\n",
"\n",
"with tqdm(total=sizecounter, unit='B', unit_scale=True, unit_divisor=1024) as pbar:\n",
" for root, dirs, files in os.walk(path_local):\n",
" for file in files:\n",
" blob_client = blob_service_client.get_blob_client(container=container_name, blob=path_blob+'/'+file)\n",
" with open(os.path.join(path_local, file), \"rb\") as data:\n",
" buf = 0\n",
" buf = os.stat(os.path.join(path_local, file)).st_size\n",
" blob_client.upload_blob(data, overwrite=True)\n",
" if buf:\n",
" pbar.set_postfix(file=file, refresh=False)\n",
" pbar.update(buf)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python (cv)",
"language": "python",
"name": "cv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View file

@@ -0,0 +1,41 @@
## HTML Demo - Jupyter Code
### Directory Description
This directory contains a few helper notebooks that upload files and deploy the models that allow the web page to work.
| Notebook name | Description |
| --- | --- |
| [1_image_similarity_export.ipynb](1_image_similarity_export.ipynb)| Exports computed reference image features for use in visualizing results (see the "Image Similarity" section below) |
| [2_upload_ui.ipynb](2_upload_ui.ipynb)| Uploads web page files to Azure Blob storage |
| [3_deployment_to_azure_app_service.ipynb](3_deployment_to_azure_app_service.ipynb)| Deploys an image classification model as an Azure app service |
### Requirements
To run the code in the [2_upload_ui.ipynb](2_upload_ui.ipynb) notebook, you must first:
1. Install the [Azure Storage Blobs client library for Python](https://pypi.org/project/azure-storage-blob/)
2. Have (or create) an Azure account with a Blob storage container where you would like to store the UI files
3. Note your Blob storage credentials to upload files programmatically; you will need:
a. Azure Account Name
b. Azure Account Key
c. Blob Container Name
4. Update [2_upload_ui.ipynb](2_upload_ui.ipynb) to add your Blob storage credentials
### Usage
* These notebooks can be run in the [Microsoft Computer Vision conda environment](https://github.com/microsoft/computervision-recipes/blob/master/SETUP.md).
* If you want to use an image similarity model, you can run [1_image_similarity_export.ipynb](1_image_similarity_export.ipynb) to export your image features for the web page to use.
* To upload the web page for sharing, notebook [2_upload_ui.ipynb](2_upload_ui.ipynb) outlines the process of uploading to your Azure Blob storage.
* As the web page needs the API to allow CORS, we recommend deploying models as an Azure app service. Notebook [3_deployment_to_azure_app_service.ipynb](3_deployment_to_azure_app_service.ipynb) gives a tutorial on how to do so with an image classification model.
### Image Similarity
Image similarity relies on comparing the DNN features of a query image to the respective DNN features of potentially tens of thousands of reference images. The notebooks in this directory compute these reference image DNN features and package them for use in the HTML UI. The DNN features are exported into a text file and compressed for upload alongside the HTML UI files. To compare a query image to these exported reference image features, you will need a DNN model deployed to Azure App Service that can compute the features of the query image and return them via API call; a sketch of the comparison itself follows the steps below.
Steps:
1. Execute "1_image_similarity_export.ipynb" notebook to generate your reference image features and export them to compressed ZIP files
2. Execute "2_upload_ui.ipynb" notebook to upload the HTML UI and all supporting files to your Azure Blob storage account
3. Execute "3_deployment_to_azure_app_service.ipynb" notebook to upload your model for generating DNN features for your query image and create an API endpoint in Azure App service
4. Open the index.html file from your Blob storage account in a browser, enter your API endpoint URL, upload a query image and see what you get back
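For illustration, here is a minimal Python sketch of that comparison, assuming `ref_features.txt` and `ref_filenames.txt` are the (unzipped) exports from notebook 1 and `query_features` is the 512-dimensional vector returned by your deployed API; the HTML UI performs the same Euclidean-distance ranking in the browser:

```python
import json
import numpy as np

# Load the exports produced by 1_image_similarity_export.ipynb (after unzipping)
with open("ref_features.txt") as f:
    ref_features = np.array(json.load(f))  # shape: (num_refs, 512)
with open("ref_filenames.txt") as f:
    ref_filenames = json.load(f)  # flattened thumbnail file names

def top_matches(query_features, k=5):
    # Euclidean distance from the query vector to every reference vector
    dists = np.linalg.norm(ref_features - np.asarray(query_features), axis=1)
    best = np.argsort(dists)[:k]
    return [(ref_filenames[i], float(dists[i])) for i in best]
```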

File diff suppressed because one or more lines are too long

View file

@@ -0,0 +1,388 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"/>
<title>Microsoft CVBP - HTML Demo</title>
<link rel = "icon" href ="https://cvbp.blob.core.windows.net/public/html_demo/img/logo_small.png" type = "image/x-icon">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh"
crossorigin="anonymous">
<link href="style.css" rel="stylesheet" type="text/css" />
</head>
<body>
<!-- Nav Bar -->
<nav class="navbar navbar-toggleable-md navbar-dark fixed-top mainColor">
<a class="navbar-brand" href="https://github.com/mcboyd-bu/computervision-recipes/tree/contrib_html_demo/contrib/html_demo">Microsoft CVBP</a>
<a href="#home" class="navbar-brand" id="about" onClick="aboutModal()">About</a>
<!-- "About" Modal -->
<div id="aboutModal" class="modal">
<!-- Modal content -->
<div class="modal-content">
<span id="closeModal" class="close">&times;</span>
<h1>About the project</h1>
<section>
<h2>Project Description</h2>
<p>This project provides an HTML web page that allows users to visualize the output of a deployed computer vision DNN model. Users can improve on and gain insights from their deployed model by uploading query/test images and examining the model's results for correctness through the user interface. The web page includes some sample query/test images from the Microsoft image set, as well as example output for 3 types of models: Image Classification, Object Detection, and Image Similarity.</p>
</section>
<section>
<h2>Usage</h2>
<p>To use a deployed model in the Use My Model tab:</p>
<ol type="1">
<li>Enter the model's API URL in the text field</li>
<li>Upload or select images to use:</li>
<ol type="a">
<li>Webcam:</li>
<ol type="i">
<li>Allow the browser to use your web cam</li>
<li>Select Snap Photo to take a picture</li>
<li>Select Use Image to add the captured image</li>
</ol>
<li>Samples: Select an image by clicking on it</li>
<li>Choose Files: Select images to upload from your machine's file explorer</li>
</ol>
<li>Select Upload to send the images to the model's API</li>
<li>View results below!</li>
</ol>
<p>To view examples in the See Example tab:</p>
<ol type="1">
<li>Click on an image you wish to view</li>
<li>See results from image classification, object detection, and image similarity models below!</li>
</ol>
</section>
<section>
<h2>Authors</h2>
<p>This work was completed by a team of students from the Boston University College of Engineering as part of the EC528 Cloud Computing class. The project was completed in collaboration with three Microsoft engineers who proposed the project and acted as team mentors.</p>
<h3>Student team:</h3>
<p>Matthew Boyd, Charles Henneberger, Xushan "Mulla" Hu, SeungYeun "Kelly" Lee, Nuwapa "Prim" Promchotichai</p>
<h3>Microsoft Mentors:</h3>
<p>Patrick Buehler, Young Park, JS Tan</p>
</section>
</div> <!-- END Modal Content -->
</div> <!-- END Modal -->
</nav>
<main role="main">
<div class="jumbotron">
<div class="container-xl">
<ul class="nav nav-tabs" id="myTab" role="tablist">
<li class="nav-item">
<a class="nav-link active font18" id="mymodel-tab" data-toggle="tab" href="#mymodel" role="tab" aria-controls="mymodel" aria-selected="true">Use My Model</a>
</li>
<li class="nav-item">
<a class="nav-link font18" id="seeexample-tab" data-toggle="tab" href="#seeexample" role="tab" aria-controls="seeexample" aria-selected="false">See Example</a>
</li>
</ul>
<div class="tab-content card" id="myTabContent">
<!-- for use my model tab -->
<div class="tab-pane fade show active card-body" id="mymodel" role="tabpanel" aria-labelledby="mymodel-tab">
<!-- Image upload and thumbnail container -->
<div class="container-fluid mx-auto">
<form onsubmit="return false">
<div class="form-group">
<div class="text-muted mb-2"><span class="badge badge-primary">1</span> Enter your model's API URL:</div>
<input type="text" class="form-control form-control-lg" id="url" name="text2" required>
</div>
</form>
<div id="alertdiv"></div>
<div class="row">
<!-- Image Upload Button and Thumbnails - Outer Col1 -->
<div class="col">
<div class="text-muted mb-2"><span class="badge badge-primary">2</span> Choose up to 4 images from your webcam, sample images, or your own local images:</div>
<div class="row">
<div class="thumbnails col text-center">
<!--Webcam Feature-->
<button id="btnWebcam" class="btn btn-secondary btn-block shadow rounded mb-2 mainColor" type="button" data-toggle="collapse" href="#multiCollapseWebcam" aria-expanded="false" aria-controls="multiCollapseWebcam">Webcam</button>
<!-- Sample Images -->
<button id="btnSample" class="btn btn-secondary btn-block shadow rounded mb-2 mainColor" type="button" data-toggle="collapse" href="#multiCollapseSample" aria-expanded="false" aria-controls="multiCollapseSample">Samples</button>
<!-- Choose (Local) Files Button -->
<label class="btn btn-secondary btn-block shadow rounded mb-2 mainColor">
Choose Files <input type="file" multiple class="custom-file-input mx-auto" accept="image/*" id="inputGroupFile03" onchange="handleFiles(this.files)" hidden>
</label>
</div> <!-- END Inner col 1 -->
<!-- img thumbnails - 4 columns -->
<div class="col thumbnails mb-2">
<div class="img-wrap img-wrap-ph rounded" id="b64imgwrap-0">
<button id="clear-0" type="button" class="clearBtn btn btn-danger btn-sm hide text-white" onclick="removeImg(0)"> <div class="icon regular">X</div></button>
<img id="b64img-0" class="rounded" src="">
</div>
</div>
<div class="col thumbnails mb-2">
<div class="img-wrap img-wrap-ph rounded" id="b64imgwrap-1">
<button id="clear-1" type="button" class="clearBtn btn btn-danger btn-sm hide text-white" onclick="removeImg(1)"> <div class="icon regular">X</div></button>
<img id="b64img-1" class="rounded" src="">
</div>
</div>
<div class="col thumbnails mb-2">
<div class="img-wrap img-wrap-ph rounded" id="b64imgwrap-2">
<button id="clear-2" type="button" class="clearBtn btn btn-danger btn-sm hide text-white" onclick="removeImg(2)"> <div class="icon regular">X</div></button>
<img id="b64img-2" class="rounded" src="">
</div>
</div>
<div class="col thumbnails mb-2">
<div class="img-wrap img-wrap-ph rounded" id="b64imgwrap-3">
<button id="clear-3" type="button" class="clearBtn btn btn-danger btn-sm hide text-white" onclick="removeImg(3)"> <div class="icon regular">X</div></button>
<img id="b64img-3" class="rounded" src="">
</div>
</div>
</div> <!-- Close Inner row -->
</div> <!-- Close Outer Col1 -->
<!-- Upload button - Outer Col2 -->
<div class="text-center col-md-auto">
<div class="text-muted mb-2"><span class="badge badge-primary">3</span> Test your model:</div>
<button class="btn btn-primary btn-block shadow rounded button-font" type="button" id="uploadbtn" data-target="#exampleModalCenter" onclick="APIRequest()"> Upload</button>
<!-- spinners -->
<div id="uploadstatus" class="mt-1">
</div> <!-- END spinners -->
</div> <!-- END upload button - Close Outer Col2 -->
</div> <!-- END main row -->
<!-- New Row For Hidden Display Element: Webcam -->
<div class="row collapse multi-collapse mt-2" id="multiCollapseWebcam">
<div class="col-12">
<div class="row">
<div class="col-sm-6">
<div class="card text-white bg-secondary">
<div class="card-header d-flex align-items-center">
<span>LIVE Webcam</span>
<button id="snap" type="button" class="btn btn-light ml-auto">Snap Photo</button>
</div>
<div class="card-body">
<div class="embed-responsive embed-responsive-4by3">
<video id="videoElement" class="embed-responsive-item" autoplay></video>
</div>
</div>
</div>
</div>
<div class="col-sm-6">
<div class="card text-white bg-secondary">
<div class="card-header d-flex align-items-center">
<span>Captured Image</span>
<button id="useImage" class="btn btn-light ml-auto">Use Image</button>
</div>
<div class="card-body">
<canvas id="webCamCanvas" class="hide" width="30" height="30"></canvas>
</div>
</div>
</div>
</div>
</div>
</div> <!-- END Hidden Row: Webcam -->
<!-- New Row For Hidden Display Element: Sample Images -->
<div class="row collapse multi-collapse mt-2" id="multiCollapseSample">
<div class="col-12">
<div class="row">
<div class="col-sm-12">
<div class="card text-white bg-secondary">
<div class="card-header d-flex align-items-center">
<span>Select Sample Images to Test your Model</span>
<button id="sampleClose" type="button" class="btn btn-light ml-auto" onclick="sampleClose()">Hide Samples</button>
</div>
<div class="card-body">
<div class="row text-center">
<div class="col-xs">
<img id="sample1" data-eid="1" class="rounded m-2 sImg" src="https://cvbp.blob.core.windows.net/public/html_demo/img/can_1.jpg">
</div>
<div class="col-xs">
<img id="sample2" data-eid="2" class="rounded m-2 sImg" src="https://cvbp.blob.core.windows.net/public/html_demo/img/can_15.jpg">
</div>
<div class="col-xs">
<img id="sample3" data-eid="3" class="rounded m-2 sImg" src="https://cvbp.blob.core.windows.net/public/html_demo/img/can_28.jpg">
</div>
<div class="col-xs">
<img id="sample4" data-eid="4" class="rounded m-2 sImg" src="https://cvbp.blob.core.windows.net/public/html_demo/img/carton_33.jpg">
</div>
<div class="col-xs">
<img id="sample5" data-eid="5" class="rounded m-2 sImg" src="https://cvbp.blob.core.windows.net/public/html_demo/img/carton_40.jpg">
</div>
<div class="col-xs">
<img id="sample6" data-eid="6" class="rounded m-2 sImg" src="https://cvbp.blob.core.windows.net/public/html_demo/img/carton_50.jpg">
</div>
<div class="col-xs">
<img id="sample7" data-eid="7" class="rounded m-2 sImg" src="https://cvbp.blob.core.windows.net/public/html_demo/img/milk_bottle_66.jpg">
</div>
<div class="col-xs">
<img id="sample8" data-eid="8" class="rounded m-2 sImg" src="https://cvbp.blob.core.windows.net/public/html_demo/img/milk_bottle_77.jpg">
</div>
<div class="col-xs">
<img id="sample9" data-eid="9" class="rounded m-2 sImg" src="https://cvbp.blob.core.windows.net/public/html_demo/img/water_bottle_111.jpg">
</div>
<div class="col-xs">
<img id="sample10" data-eid="10" class="rounded m-2 sImg" src="https://cvbp.blob.core.windows.net/public/html_demo/img/water_bottle_115.jpg">
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div> <!-- END Hidden Row: Sample Images -->
</div> <!-- END container -->
<!-- Results Container -->
<div class="container-xl">
<p class="text-center font-weight-bold pt-4 result-font">RESULTS</p>
<!--displays result images-->
<div class="row">
<div class="col-md-6 mx-auto" id="resultsDiv0">
<div class="card mb-4 shadow-sm">
<div id="resultsDiv0-content" class="card-img-top hide-overflow" >
<canvas id="resultsCanvas0" width="30" height="30"></canvas>
<img id="resultsImg0">
</div>
<div class="card-body">
<p class="card-text"></p>
</div>
</div>
</div> <!-- END Col -->
<div class="col-md-6 mx-auto" id="resultsDiv1">
<div class="card mb-4 shadow-sm">
<div id="resultsDiv1-content" class="card-img-top" >
<canvas id="resultsCanvas1" width="30" height="30"></canvas>
<img id="resultsImg1">
</div>
<div class="card-body">
<p class="card-text"></p>
</div>
</div>
</div> <!-- END Col -->
<div class="col-md-6 mx-auto" id="resultsDiv2">
<div class="card mb-4 shadow-sm">
<div id="resultsDiv2-content" class="card-img-top hide-overflow" >
<canvas id="resultsCanvas2" width="30" height="30"></canvas>
<img id="resultsImg2">
</div>
<div class="card-body">
<p class="card-text"></p>
</div>
</div>
</div> <!-- END Col -->
<div class="col-md-6 mx-auto" id="resultsDiv3">
<div class="card mb-4 shadow-sm">
<div id="resultsDiv3-content" class="card-img-top hide-overflow">
<canvas id="resultsCanvas3" width="30" height="30"></canvas>
<img id="resultsImg3">
</div>
<!-- </div> --> <div class="card-body">
<p class="card-text"></p>
</div>
</div>
</div> <!-- END Col -->
</div> <!-- END main Row -->
</div> <!-- END Results Container -->
</div> <!-- END Tab-Pane Use My Model -->
<!-- for see example tab -->
<div class="tab-pane fade card-body mt-3" id="seeexample" role="tabpanel" aria-labelledby="seeexample-tab">
<!-- img thumbnails and checkbox -->
<div id="exampleImages" class="container mx-auto" >
<img id="example0" data-eid="0" class="rounded mr-2 mb-2 eImg" src="https://cvbp.blob.core.windows.net/public/html_demo/img/can_8.jpg">
<img id="example1" data-eid="1" class="rounded mr-2 mb-2 eImg" src="https://cvbp.blob.core.windows.net/public/html_demo/img/can_31.jpg">
<img id="example2" data-eid="2" class="rounded mr-2 mb-2 eImg" src="https://cvbp.blob.core.windows.net/public/html_demo/img/carton_47.jpg">
<img id="example3" data-eid="3" class="rounded mr-2 mb-2 eImg" src="https://cvbp.blob.core.windows.net/public/html_demo/img/carton_59.jpg">
<img id="example4" data-eid="4" class="rounded mr-2 mb-2 eImg" src="https://cvbp.blob.core.windows.net/public/html_demo/img/milk_bottle_76.jpg">
<img id="example5" data-eid="5" class="rounded mr-2 mb-2 eImg" src="https://cvbp.blob.core.windows.net/public/html_demo/img/milk_bottle_97.jpg">
<img id="example6" data-eid="6" class="rounded mr-2 mb-2 eImg" src="https://cvbp.blob.core.windows.net/public/html_demo/img/water_bottle_105.jpg">
<img id="example7" data-eid="7" class="rounded mr-2 mb-2 eImg" src="https://cvbp.blob.core.windows.net/public/html_demo/img/water_bottle_123.jpg">
<div class="mt-3 text-center">
<div class="form-check-inline">
<label class="form-check-label">
<input type="checkbox" class="form-check-input" id="odCheck" onclick='exampleModels()' checked>Object Detection
</label>
</div>
<div class="form-check-inline">
<label class="form-check-label">
<input type="checkbox" class="form-check-input" id="icCheck" onclick='exampleModels()' checked>Image Classification
</label>
</div>
<div class="form-check-inline">
<label class="form-check-label">
<input type="checkbox" class="form-check-input" id="isCheck" onclick='exampleModels()' checked>Image Similarity
</label>
</div>
</div>
</div> <!-- END img thumbnails and checkbox -->
<!-- results container -->
<div class="container mx-auto mt-4">
<p class="text-center font-weight-bold result-font">RESULTS</p>
<div class="row">
<div class="col-md-4 mx-auto" id="resultsDiv8">
<div class="card mb-4 shadow-sm">
<div id="resultsDiv8-content" class="card-img-top hide-overflow" >
<canvas id="resultsCanvas8" width="30" height="30"></canvas>
<img id="resultsImg8">
</div>
<div class="card-body">
<p class="card-text">Object Detection</p>
</div>
</div>
</div>
<div class="col-md-4 mx-auto" id="resultsDiv7">
<div class="card mb-4 shadow-sm">
<div id="resultsDiv7-content" class="card-img-top hide-overflow" >
<canvas id="resultsCanvas7" width="30" height="30"></canvas>
<img id="resultsImg7">
</div>
<div class="card-body">
<p class="card-text">Image Classification</p>
</div>
</div>
</div>
<div class="col-md-4 mx-auto" id="resultsDiv9">
<div class="card mb-4 shadow-sm">
<div id="resultsDiv9-content" class="card-img-top hide-overflow" >
<canvas id="resultsCanvas9" width="30" height="30"></canvas>
<img id="resultsImg9">
</div>
<div class="card-body imSim text-center">
<p class="card-text">Image Similarity</p>
</div>
</div>
</div>
</div>
</div> <!-- END results container -->
</div> <!-- END Tab-Pane See Example -->
</div> <!-- END Tab-Content -->
</div> <!-- END Jumbotron Container -->
</div> <!-- END Jumbotron -->
</main>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.16.0/umd/popper.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.4.1/js/bootstrap.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.3.0/jszip.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jszip-utils/0.1.0/jszip-utils.min.js"></script>
<script src="script.js"></script>
<script src="example_imgs.js"></script>
</body>
</html>

View file

@@ -0,0 +1,27 @@
## HTML Demo - UI Files
### Directory Description
This directory contains an HTML file with a separate stylesheet and JavaScript functions.
| File name | Description |
| --- | --- |
| [example_imgs.js](example_imgs.js) | Static definitions used to display example DNN model output |
| [index.html](index.html) | User interface components |
| [style.css](style.css) | Styling of the components on the webpage |
| [script.js](script.js) | JavaScript functions that drive the webpage's behavior |
### Usage
The files in this directory make up the user interface components and the front-end logic that drives them. The "Use My Model" tab allows you to upload multiple image files, test the images against your DNN model's API, and visualize the model's output. The "See Example" tab allows you to visualize the output of three machine learning model scenarios (image classification, object detection, and image similarity) on a set of example images.
To run the webpage, follow the setup guidelines in [html_demo/readme.md](../readme.md). To visualize the different machine learning models on the "See Example" tab, you must first execute the notebooks in JupyterCode in your conda environment (see [JupyterCode/readme.md](../JupyterCode/readme.md)) and deploy the models.
[style.css](style.css) and [script.js](script.js) must be in the same directory as [index.html](index.html) for the webpage to render correctly.
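For reference, the "Use My Model" tab POSTs a JSON body of the form `{"data": [<base64 image>, ...]}` to the API URL you enter (see `APIRequest()` in [script.js](script.js)). Below is a minimal Python sketch of the same request; the endpoint URL is a placeholder that you would replace with your deployed model's scoring URL:

```python
import base64
import json
import urllib.request

# Encode a local image the way the page does: base64, without the data-URL prefix
with open("query.jpg", "rb") as f:
    b64_img = base64.b64encode(f.read()).decode("utf-8")

# Placeholder endpoint; substitute the URL of your deployed model
req = urllib.request.Request(
    "https://<your-app-service>.azurewebsites.net/score",
    data=json.dumps({"data": [b64_img]}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # raw JSON that the page would render
```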

View file

@@ -0,0 +1,660 @@
var fn_array = []; // File name array for image similarity
var ref_array; // Reference features array for image similarity
var fn_array_ex = []; // File name array for image similarity EXAMPLE images
var ref_array_ex; // Reference features array for image similarity EXAMPLE images
var imgList = [0, 0, 0, 0];
var imgListEmpty = 4; // Number of available slots in the imgList
var b64o = [0, 0, 0, 0]; // Base64 data URLs of the up-to-four selected images
var b64e = 0; // Base64 data URL of the currently selected example image
var b64temp = 0; // Temporary slot used while converting a sample image
// Create off-screen image elements
var tempImg = new Array();
tempImg[0] = new Image();
tempImg[1] = new Image();
tempImg[2] = new Image();
tempImg[3] = new Image();
// Grab elements, create settings, etc.
var video = document.getElementById('videoElement');
// Elements for taking the snapshot
var webCamCanvas = document.getElementById('webCamCanvas');
var wCCcontext = webCamCanvas.getContext('2d');
function aboutModal() {
// Display the modal
var modal = document.getElementById("aboutModal");
modal.style.display = "block";
// Get the <span> element that closes the modal
var span = document.getElementById("closeModal");
// Close the modal on click
span.onclick = function() {
modal.style.display = "none";
}
// When the user clicks anywhere outside of the modal, close it
window.onclick = function(event) {
if (event.target == modal) {
modal.style.display = "none";
}
}
}
function populateTable(i, tableData) {
var cardBody = document.getElementById("resultsDiv"+i).getElementsByClassName('card-body')[0];
cardBody.innerHTML = "<p class='card-text'>Image Similarity</p>";
tableData.forEach(function(rowData) {
var item = document.createElement('div');
item.classList.add("item");
var img = document.createElement('img');
img.src = 'https://cvbp.blob.core.windows.net/public/html_demo/small-150/' + rowData[0];
//img.src = 'small-150/' + rowData[0];
var txt = document.createElement('p');
txt.innerHTML = rowData[0] + "<br/><i>Dist.: " + rowData[1] + "</i>";
item.appendChild(img);
item.appendChild(txt);
cardBody.appendChild(item);
});
}
function eucDistance(a, b) {
return a
.map((x, i) => Math.abs( x - b[i] ) ** 2) // square the difference
.reduce((sum, now) => sum + now) // sum
** (1/2)
}
function calcSimilar(top, queryFeatures, simType) {
var dist_array = [];
var rows = 0;
if (simType == "example") {
rows = ref_array_ex.length;
} else {
rows = ref_array.length;
}
var retImg = "-1";
if (!queryFeatures) {
var queryRow = Math.floor(Math.random() * rows); // random index in [0, rows - 1]
var queryimg = ref_array[queryRow];
retImg = 'https://cvbp.blob.core.windows.net/public/html_demo/small-150/' + fn_array[queryRow];
//retImg = 'small-150/' + fn_array[queryRow];
} else {
var queryimg = queryFeatures;
}
for (i = 0; i < rows; i++) {
if (simType == "example") {
let euc = eucDistance(queryimg, ref_array_ex[i]).toFixed(2);
var arr = [fn_array_ex[i],euc];
} else {
let euc = eucDistance(queryimg, ref_array[i]).toFixed(2);
var arr = [fn_array[i],euc];
}
dist_array.push(arr);
}
var topValues = dist_array.sort((a,b) => a[1]-b[1]).slice(0,top);
return [topValues, retImg];
}
// Process zip file of filenames and parse into array
async function parseSimFileNames(fileType) {
return new Promise(async function(res,rej) {
new JSZip.external.Promise(function (resolve, reject) {
zipFile_fn = 'data/ref_filenames.zip';
if (fileType == "example")
zipFile_fn = 'https://cvbp.blob.core.windows.net/public/html_demo/data/ref_filenames.zip';
//JSZipUtils.getBinaryContent('data/ref_filenames.zip', function(err, data) {
JSZipUtils.getBinaryContent(zipFile_fn, function(err, data) {
if (err) {
reject(err);
} else {
resolve(data);
}
});
}).then(function (data) {
return JSZip.loadAsync(data);
}).then(function (zip) {
if (zip.file("../visualize/data/ref_filenames.txt")) {
return zip.file("../visualize/data/ref_filenames.txt").async("string");
} else {
return zip.file("ref_filenames.txt").async("string");
}
}).then(function (text) {
if (fileType == "example")
fn_array_ex = JSON.parse(text);
else
fn_array = JSON.parse(text);
res();
});
})
}
// Process zip file of reference image features and parse into array
async function parseSimFileFeatures(fileType) {
return new Promise(async function(res,rej) {
new JSZip.external.Promise(function (resolve, reject) {
zipFile_ref = 'data/ref_features.zip';
if (fileType == "example")
zipFile_ref = 'https://cvbp.blob.core.windows.net/public/html_demo/data/ref_features.zip';
// JSZipUtils.getBinaryContent('data/ref_features.zip', function(err, data) {
JSZipUtils.getBinaryContent(zipFile_ref, function(err, data) {
if (err) {
reject(err);
} else {
resolve(data);
}
});
}).then(function (data) {
return JSZip.loadAsync(data);
}).then(function (zip) {
if (zip.file("../visualize/data/ref_features.txt")) {
return zip.file("../visualize/data/ref_features.txt").async("string");
} else {
return zip.file("ref_features.txt").async("string");
}
}).then(function (text) {
if (fileType == "example")
ref_array_ex = JSON.parse(text);
else
ref_array = JSON.parse(text);
res();
});
})
}
// Handle sample image clicks - need this unusual syntax to accommodate the async nature of the "photoSave" process
document.querySelectorAll('.sImg').forEach(item => {
item.addEventListener('click', () => handleSamples(item), false)
});
document.querySelectorAll('.sImg').forEach(item => {
item.addEventListener('click', () => custom_close(), false)
});
function custom_close(){
$('#sampleModal').modal('hide');
}
// Handle sample image clicks - actual work
async function handleSamples(imgItem) {
if (imgListEmpty == 0) {
displayError(1);
return;
}
var tmpCanvas = document.createElement("canvas");
tmpCanvas.width = imgItem.naturalWidth;
tmpCanvas.height = imgItem.naturalHeight;
var tmpCtx = tmpCanvas.getContext("2d");
// The two lines below are required to access images from an external domain;
// otherwise the canvas is "tainted" by the external content and cannot be Base64 converted
var imgTemp = new Image;
imgTemp.crossOrigin = "anonymous";
imgTemp.onload = async function(){
tmpCtx.drawImage(imgTemp, 0, 0);
b64temp = tmpCanvas.toDataURL();
await photoSave(0,b64temp);
b64temp = 0;
};
imgTemp.src = imgItem.src;
}
// Handle example image clicks - need this unusual syntax to accommodate the async nature of the process
document.querySelectorAll('.eImg').forEach(item => {
item.addEventListener('click', () => exampleClick(item), false)
});
// Handle example image clicks - actual work
async function exampleClick(imgItem) {
var tmpCanvas = document.getElementById("resultsCanvas8");
tmpCanvas.width = imgItem.naturalWidth;
tmpCanvas.height = imgItem.naturalHeight;
var tmpCtx = tmpCanvas.getContext("2d");
// The two lines below are required to access images from an external domain;
// otherwise the canvas is "tainted" by the external content and cannot be Base64 converted
var imgTemp = new Image;
imgTemp.crossOrigin = "anonymous";
imgTemp.onload = async function(){
tmpCtx.drawImage(imgTemp, 0, 0);
b64e = tmpCanvas.toDataURL();
// Image classification
let exampleId = imgItem.getAttribute("data-eid");
var exampleData = exampleIC[exampleId];
var showExample = await jsonParser(exampleData, 7);
// Object detection
exampleData = exampleOD[exampleId];
showExample = await jsonParser(exampleData, 8);
// Image similarity
exampleData = exampleIS[exampleId];
showExample = await jsonParser(exampleData, 9);
};
imgTemp.src = imgItem.src;
}
function exampleModels() {
var icCheck = document.getElementById("icCheck").checked;
var odCheck = document.getElementById("odCheck").checked;
var isCheck = document.getElementById("isCheck").checked;
var icDiv = document.getElementById("resultsDiv7");
var odDiv = document.getElementById("resultsDiv8");
var isDiv = document.getElementById("resultsDiv9");
if (icCheck) icDiv.classList.remove("hide");
else icDiv.classList.add("hide");
if (odCheck) odDiv.classList.remove("hide");
else odDiv.classList.add("hide");
if (isCheck) isDiv.classList.remove("hide");
else isDiv.classList.add("hide");
}
// Trigger photo take
document.getElementById("snap").addEventListener("click", function() {
webCamCanvas.classList.remove("hide");
var width = video.videoWidth;
var height = video.videoHeight;
wCCcontext.canvas.width = width;
wCCcontext.canvas.height = height;
wCCcontext.drawImage(video, 0, 0, width, height);
});
// Trigger photo save - need this unusual syntax to accommodate the async nature of the "photoSave" process
document.getElementById("useImage").addEventListener("click", () => photoSave(), false);
async function photoSave(saveType, b64i) {
$("#imageaddedmsg").toggleClass("show");
if (saveType == 0)
var dataURL = b64i;
else {
// Basically this gets called only when an attempt is made to save the webcam image
if (imgListEmpty == 0) {
displayError(1);
return;
}
var dataURL = webCamCanvas.toDataURL();
}
var thumbnailURL = await resizeImg(dataURL, 150);
var fullimgURL = await resizeImg(dataURL, 480);
for (let i = 0; i < 4; i++) {
if (imgList[i] == 0) {
console.log("imgList has empty slot at: " + i);
document.getElementById("b64img-" + i).src=thumbnailURL;
document.getElementById("clear-" + i).classList.remove("hide");
document.getElementById("b64imgwrap-" + i).classList.remove("img-wrap-ph");
imgList[i] = 1;
b64o[i] = fullimgURL;
imgListEmpty--;
i = 4;
}
if (("b64o"+i) == 0) {
console.log("b64 object " + i + "is empty (set to 0)");
}
}
if (video.srcObject) {
$('#multiCollapseWebcam').collapse('hide');
}
}
$('#multiCollapseWebcam').on('hide.bs.collapse', function () {
webcamStop();
webCamCanvas.classList.add("hide");
document.getElementById("btnWebcam").classList.remove("active");
document.getElementById("btnWebcam").innerText = "Webcam";
})
$('#multiCollapseWebcam').on('shown.bs.collapse', function () {
webcamActivate1();
})
$('#multiCollapseSample').on('hidden.bs.collapse', function () {
document.getElementById("btnSample").classList.remove("active");
document.getElementById("btnSample").innerText = "Samples";
})
$('#multiCollapseSample').on('shown.bs.collapse', function () {
document.getElementById("btnSample").classList.add("active");
document.getElementById("btnSample").innerText = "Hide Samples";
})
function sampleClose() {
$('#multiCollapseSample').collapse('hide');
}
function handleFiles(files) {
num_file = files.length;
console.log("num_file: " + num_file);
if (num_file > 4)
num_file = 4;
var j = num_file; // countdown of files still to process before displaying them
if (imgListEmpty == 0) {
displayError(1);
return 0;
}
for (let i = 0; i < num_file; i++) {
const file = files[i];
const reader = new FileReader();
if (!reader) {
console.log("sorry, change the browser.");
return
}
// Save the images
reader.onload = ( function(aImg) { return function(e) {
aImg.src = e.target.result;
console.log("image #" + i + " processed")
j--;
if(j == 0) // After all files have been processed, call fnc to display them
saveFiles(num_file);
}; })(tempImg[i]);
reader.readAsDataURL(file);
}
}
async function saveFiles(numFiles) {
for (let k = 0; k < numFiles; k++) {
await photoSave(0, tempImg[k].src);
tempImg[k].src = ""; // Clear photo from temp storage after saving it
}
console.log("imgListEmpty: " + imgListEmpty);
}
// Delete image from display and img list
function removeImg(imgNumber) {
console.log("Remove Image Number: " + imgNumber);
document.getElementById("b64img-" + imgNumber).src="";
document.getElementById("clear-" + imgNumber).classList.add("hide");
document.getElementById("b64imgwrap-" + imgNumber).classList.add("img-wrap-ph");
imgList[imgNumber] = 0;
b64o[imgNumber] = 0;
imgListEmpty++;
console.log("imgListEmpty in removeImg: " + imgListEmpty);
}
function resizeImg(b64Orig, newHeight) {
return new Promise(async function(resolve,reject){
// Create an off-screen canvas
var rIcanvas = document.createElement('canvas'),
rIctx = rIcanvas.getContext('2d');
// Create an off-screen image element
var rImage = new Image();
// When the image is loaded, process it
rImage.onload = function() {
// Original dimensions of image
var width = rImage.naturalWidth;
var height = rImage.naturalHeight;
var ratio = width / height;
// Dimensions of resized image (via canvas): rIwidth and newHeight
var rIwidth = ratio * newHeight;
// Set canvas dimensions to resized image dimensions
rIcanvas.width = rIwidth;
rIcanvas.height = newHeight;
// Draw the image on the canvas at the new size
rIctx.drawImage(rImage, 0, 0, width, height, 0, 0, rIcanvas.width, rIcanvas.height);
// Export the new image as Base64 and return to calling function
var rIdu = rIcanvas.toDataURL();
resolve(rIdu);
}
// Load the image from the original Base64 source (passed into this function)
rImage.src = b64Orig;
})
}
function APIValidation(url) {
var pattern = /(ftp|http|https):\/\/(\w+:{0,1}\w*@)?(\S+)(:[0-9]+)?(\/|\/([\w#!:.?+=&%@!\-\/]))?/;
if (pattern.test(url)) {
console.log("url is valid")
return true;
} else {
displayError(3);
return false;
}
}
function webcamActivate1(){
document.getElementById("btnWebcam").classList.add("active");
document.getElementById("btnWebcam").innerText = "Hide Webcam";
if (navigator.mediaDevices.getUserMedia) {
navigator.mediaDevices.getUserMedia({ video: true })
.then(function (stream) {
video.srcObject = stream;
})
.catch(function (err) {
console.log("Could not access the webcam: " + err);
});
}
}
function webcamStop(){
var stream = video.srcObject;
var tracks = stream.getTracks();
for (var i = 0; i < tracks.length; i++) {
var track = tracks[i];
track.stop();
}
video.srcObject = null;
}
function displayError(errno) {
var errtext = "";
switch (errno) {
case 1:
errtext = "Only 4 images can be uploaded at a time. To use different images, delete one of your existing thumbnails.";
break;
case 2:
errtext = "Error during API request.";
break;
case 3:
errtext = "Invalid API url.";
break;
default:
errtext = "An error occured.";
break;
}
var alertDiv = document.getElementById("alertdiv");
alertDiv.innerHTML = '<div id="alert" class="alert alert-danger alert-dismissible fade hide" role="alert"><strong>Alert!</strong> <span id="progress">You should check in on some of those fields below.</span><button type="button" class="close" data-dismiss="alert" aria-label="Close"><span aria-hidden="true">&times;</span></button></div>'
var progress = document.getElementById("progress");
progress.innerHTML = errtext;
var alert = document.getElementById("alert");
alert.classList.remove("hide");
alert.classList.add("show");
}
function renderImage(i) {
return new Promise(async function(resolve,reject){
var img = document.getElementById("resultsImg" + i);
img.onload = function(){
var width = img.naturalWidth;
var height = img.naturalHeight;
var c = document.getElementById("resultsCanvas" + i);
var ctx = c.getContext("2d");
ctx.canvas.height = height;
ctx.canvas.width = width;
var scale = Math.min(c.width / width, c.height / height);
// get the top left position of the image
var imgx = (c.width / 2) - (width / 2) * scale;
var imgy = (c.height / 2) - (height / 2) * scale;
ctx.drawImage(img, imgx, imgy, width * scale, height * scale);
img.src = "";
img.classList.add("hide");
resolve();
};
if (i < 7)
img.src = b64o[i];
else
img.src = b64e;
})
}
function imgdetection(i, x, y, xwidth, xheight, label) {
var c = document.getElementById("resultsCanvas" + i);
var ctx = c.getContext("2d");
ctx.lineWidth = 5;
ctx.strokeStyle = "#FF0000";
ctx.fillStyle = "#FF0000";
ctx.font = "20px Verdana";
ctx.strokeRect(x, y, xwidth, xheight);
ctx.fillText(label, 10 + parseInt(x), 20 + parseInt(y));
}
function imgclassification(i, label, probability) {
var c = document.getElementById("resultsCanvas" + i);
var ctx = c.getContext("2d");
ctx.lineWidth = 5;
ctx.strokeStyle = "#FF0000";
ctx.fillStyle = "#FF0000";
ctx.font = "20px Verdana";
ctx.fillText(label, 10, 30);
ctx.fillText(parseFloat(probability).toFixed(2), 10, 60);
}
// "count" indicates the number of similar results to return; use 5 for now
// Call with no "queryFeatures" to use a random image from the existing thumbnails
// So example call without queryFeatures: imgsimilarity(0,5)
async function imgsimilarity(i, count, queryFeatures) {
// Do work here
simType = "mymodel";
if (i > 6)
simType = "example";
if (fn_array.length == 0 && i < 7) {
// The zip files for the similarity comparison haven't been processed yet
await parseSimFileNames(simType);
await parseSimFileFeatures(simType);
} else if (fn_array_ex.length == 0 && i > 6) {
// The zip files for the similarity comparison haven't been processed yet
await parseSimFileNames(simType);
await parseSimFileFeatures(simType);
}
var results = calcSimilar(count, queryFeatures, simType);
// results: [topResults from image matching, path to query image if no queryFeatures]
populateTable(i, results[0]);
}
async function jsonParser(jString, ovr) {
let resp = JSON.parse(jString)
if (Array.isArray(resp[0])) {
if (resp[0][0].hasOwnProperty("top")) {
// "[[top: #, ]]"
// Will need to target a different feature if another scenario ends up doing rectangle boxes
for (let i in resp) {
let j = i
if (ovr) {
j = ovr
}
await renderImage(j);
for (let box of resp[i]) {
let x = box.left
let y = box.top
let width = box.right - box.left
let height = box.bottom - box.top
let label = box.label_name
imgdetection(j, x, y, width, height, label)
}
}
return "detection"
}
return "err"
}
// '[{"label":"asdasd","probability":"0.21354"},{"label": "klsdfjkdsfjklsdf","probability":"0.4512457"}]'
if(resp[0].hasOwnProperty("probability")) {
for (let i in resp) {
let j = i
if (ovr) {
j = ovr
}
await renderImage(j);
let label = resp[i].label
let prob = resp[i].probability
imgclassification(j, label, prob);
}
return "classification"
}
if(resp[0].hasOwnProperty("features")) {
for (let i in resp) {
let j = i
if (ovr) {
j = ovr
}
await renderImage(j);
let features = resp[i].features;
imgsimilarity(j, 5, features);
}
return "similarity"
}
return "err"
}
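// Editor sketch: sample payloads (illustrative only, not actual model output) for
// the three response shapes jsonParser() recognizes:
//     Object detection - one array of bounding boxes per image:
//         jsonParser('[[{"top":10,"left":20,"right":120,"bottom":90,"label_name":"carton"}]]');
//     Image classification - label/probability objects:
//         jsonParser('[{"label":"water_bottle","probability":"0.91"}]');
//     Image similarity - feature-vector objects:
//         jsonParser('[{"features":[0.12,0.56,0.33]}]');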
// Note: callers of APIRequest() should enforce a timeout (see the sketch after this function)
function APIRequest() {
let url = document.getElementById("url").value;
if (!APIValidation(url))
return 0;
var uplBtn = document.getElementById("uploadbtn");
var uplStatus = document.getElementById("uploadstatus");
uplBtn.disabled = true;
uplStatus.classList.remove("hide");
uplStatus.innerHTML = 'Loading... <div class="spinner-border ml-auto spinner-border-sm" role="status" aria-hidden="true"></div>';
//console.log(url);
//console.log(JSON.stringify({data: b64o}));
let xhr = new XMLHttpRequest();
xhr.onload = function() {
console.log("request completed")
if (xhr.readyState == 4) {
if (xhr.status == 200) {
console.log(xhr.responseText);
//loading(2);
jsonParser(xhr.responseText);
//loading(3);
} else {
displayError(0); // Display generic error message in bold, red text
console.log("Error: " + xhr.status + " response. " + xhr.responseText);
}
uplBtn.disabled = false;
uplStatus.innerHTML = '<span class="text-muted font-weight-light font-italic">Complete</span>';
}
}
xhr.onerror = function() {
console.log(xhr.status)
displayError(2); // Display generic API error message in bold, red text
uplBtn.disabled = false;
uplStatus.innerHTML = '<span class="text-muted font-weight-light font-italic">Complete</span>';
}
xhr.open("POST", url, true);
xhr.setRequestHeader('Content-Type', 'application/json');
//add b64 strings to payload list at key "data"
console.log("sending request")
let dataList = []
for (let i in b64o) {
if (b64o[i] != 0) {
dataList.push(b64o[i].split(',')[1]);
}
}
xhr.send(JSON.stringify({"data": dataList}));
}
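// Editor sketch, not in the original code: the note above APIRequest() asks callers
// to enforce a timeout. XMLHttpRequest supports one natively; these hypothetical
// lines, placed just before xhr.send(), would abort a stalled request after 30
// seconds and reuse the existing error path:
//     xhr.timeout = 30000; // milliseconds; adjust to your model's latency
//     xhr.ontimeout = function () {
//         displayError(2); // "Error during API request."
//         uplBtn.disabled = false;
//         uplStatus.innerHTML = '<span class="text-muted font-weight-light font-italic">Timed out</span>';
//     };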

View file

@ -0,0 +1,159 @@
@media (max-width: 576px) {
body {
padding-top:40px;
}
}
canvas{
background-color: #F2F7F6;
width: 100%;
}
.icon {
display: inline-flex;
align-self: center;
}
.img-wrap {
position: relative;
}
.img-wrap-ph {
min-width: 150px;
min-height: 150px;
border-style: dashed;
border-color: grey;
border-width: thin;
}
.img-wrap .clearBtn {
position: absolute;
top: 0px;
left: 0px;
z-index: 100;
}
.hide{
display:none;
}
.show{
display:block;
}
.sImg {
cursor: pointer;
max-width: 150px;
max-height: 150px;
}
.eImg {
cursor: pointer;
max-width: 150px;
max-height: 150px;
}
#hidden {
display: none;
}
.thumbnails, #exampleImages {
width: 100%;
text-align: center;
}
.thumbnails > div {
display: inline-block;
vertical-align: top;
text-align:center;
/* margin:2%; */
}
.item{
width:150px;
text-align:center;
display: inline-block;
background-color: transparent;
border: 1px solid transparent;
}
.card-body .imSim {
padding-left: 0.25rem;
padding-right: 0.25rem;
}
.mainColor {
background-color: #1D7D72;
}
.nav-link-color {
color:#FFFFFF;
}
.font18 {
font-size:18px;
}
.tab-content {
border-top-left-radius: 0rem !important;
border-top-right-radius: .25rem;
border-top: 0px !important;
}
.tab-pane {
padding-left:0px !important;
padding-right:0px !important;
}
.span-font {
font-size: 1.3em;
}
.button-font {
color: #FFFFFF;
font-size:18px;
}
.result-font {
color:#165A67;
font-size:24px;
}
.hide-overflow {
overflow:hidden;
}
/* The Modal (background) */
.modal {
display: none; /* Hidden by default */
position: fixed; /* Stay in place */
z-index: 1; /* Sit on top */
left: 0;
top: 0;
width: 100%; /* Full width */
height: 100%; /* Full height */
overflow: auto; /* Enable scroll if needed */
background-color: rgba(0,0,0,0.4); /* Black w/ opacity */
}
/* Modal Content/Box */
.modal-content {
background-color: #fefefe;
margin: 5% auto; /* 5% from the top and centered */
padding: 20px;
border: 1px solid #888;
width: 80%; /* Could be more or less, depending on screen size */
}
/* The Close Button */
.close {
color: #aaa;
font-size: 28px;
font-weight: bold;
}
.close:hover,
.close:focus {
color: black;
text-decoration: none;
cursor: pointer;
}

Binary data
contrib/html_demo/media/UI-SeeExample.jpg (new file)

Binary file not shown. Size: 1.7 MiB

Binary data
contrib/html_demo/media/UI-UseMyModel.jpg (new file)

Binary file not shown. Size: 997 KiB

View file

@ -0,0 +1,97 @@
## HTML Demo
### Project Description
This project provides an HTML web page that allows users to visualize the output of a deployed computer vision DNN model. Users can improve on and gain insights from their deployed model by uploading query/test images and examining the model's results for correctness through the user interface. The web page includes some sample query/test images from the Microsoft image set, as well as example output for 3 types of models: Image Classification, Object Detection, and Image Similarity.
### Contents
| Directory | Description |
| --- | --- |
| [JupyterCode](JupyterCode)| Helper notebooks that deploy models and upload the files the web page needs to work |
| [UICode](UICode)| Contains HTML, CSS, and JavaScript files to implement the web page |
| [media](media)| Image files embedded as screenshots in this and other readmes |
### Requirements
This repo has the following requirements:
- Azure account
- [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)
- Conda environment created in the computervision-recipes [Setup Guide](https://github.com/microsoft/computervision-recipes/blob/master/SETUP.md)
Some of the notebooks in the JupyterCode directory will also instruct you to run some of the existing [scenario notebooks](https://github.com/microsoft/computervision-recipes/tree/master/scenarios).
### Usage
#### Setup
- Clone the repo
```bash
git clone git@github.com:microsoft/ComputerVision.git
```
- Execute the notebooks in JupyterCode in your conda environment to deploy a model and upload necessary code for the web page to work
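  A minimal session might look like the following sketch (assuming the conda environment created by the Setup Guide is named `cv`; substitute your own environment name):
```bash
conda activate cv    # activate the environment from the Setup Guide
jupyter notebook     # then open the notebooks under contrib/html_demo/JupyterCode
```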
#### Using the web page
To use a deployed model in the Use My Model tab:
1. Enter the model's API URL in the text field
1. Upload or select images to use:
   1. Webcam
      1. Allow the browser to use your web cam
      1. Select Capture Image to take a picture
      1. Select Add Images to add the captured image
   1. Samples
      1. Select an image by clicking on it
   1. Choose Files
      1. Select images to upload from your machine's file explorer
1. Select Upload to send the images to the model's API
1. View results below!
To view examples in the See Example tab:
1. Click on an image you wish to view
2. See results from image classification, object detection, and image similarity models below!
### Photo
Below are screenshots of the working web page.
In the "Use My Model" tab, users can select multiple images, send them to their deployed DNN model's API, and view a visualization of the results.
<img src="./media/UI-UseMyModel.jpg" />
In "See Example" tab on the website, users can click on example images and view the visualization of three DNN models (Image Classification, Object Detection, Image Similarity)
<img src="./media/UI-SeeExample.jpg" />
### Authors
This work was completed by a team of students from the Boston University College of Engineering as part of the EC528 Cloud Computing class. The project was completed in collaboration with three Microsoft engineers who proposed the project and acted as team mentors.
**Student team:** Matthew Boyd, Charles Henneberger, Xushan "Mulla" Hu, SeungYeun "Kelly" Lee, Nuwapa "Prim" Promchotichai
**Microsoft mentors:** Patrick Buehler, Young Park, JS Tan
### FAQ
Q: Is an Azure account required to run this code?
A: No. Navigate to the UICode folder and open the Index.html file in your browser. You will be able to view examples of model visualizations without having an Azure account.
Q: Can I use my own model instead of the ones uploaded by the notebooks?
A: Yes. Point the web page at your own model's API URL instead of the one deployed by the notebooks.
Q: Why am I getting CORS issues when running a model/example?
A: In order to run the website and call models, you must enable CORS for the location of the HTML file on the app service. See the end of section 3.F in [3_deployment_to_azure_app_service.ipynb](JupyterCode/3_deployment_to_azure_app_service.ipynb).
Q: How do I enable CORS on items in my Blob storage account?
A: Open your storage account in the Azure portal. In the left-hand pane under "Settings" click on "CORS". Add a new entry indicating the origin of your request (or * to allow all requests) in the "Allowed origins" column and save your entry.
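If you prefer the Azure CLI to the portal, a roughly equivalent command is sketched below (`<storage-account>` is a placeholder; adjust the methods and origins to your needs):
```bash
# Allow GET/OPTIONS requests from any origin to the Blob service of this account
az storage cors add --account-name <storage-account> --services b \
    --methods GET OPTIONS --origins '*' --allowed-headers '*' --max-age 200
```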