
Renumber notebooks and implement Patrick's suggestions

This commit is contained in:
Mary Wahl 2018-02-10 00:27:05 +00:00 committed by GitHub
Parent a938ba6ebb
Commit 8af5e04de6
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
4 changed files with 1034 additions and 0 deletions


@@ -0,0 +1,109 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Pixel-level land use classification\n",
"\n",
"The notebooks in this folder contain a tutorial illustrating how to create a deep neural network model that accepts an aerial image as input and returns a land cover label (forested, water, etc.) for every pixel in that image, and how to deploy the trained model in ESRI's [ArcGIS Pro](https://pro.arcgis.com/) software. Microsoft's [Cognitive Toolkit (CNTK)](https://www.microsoft.com/en-us/cognitive-toolkit/) is used to train and evaluate the model on an [Azure Geo AI Data Science Virtual Machine](https://docs.microsoft.com/azure/batch-ai/). The method shown here was developed in collaboration between the [Chesapeake Conservancy](http://chesapeakeconservancy.org/), [ESRI](https://www.esri.com), and [Microsoft Research](https://www.microsoft.com/research/) as part of Microsoft's [AI for Earth](https://www.microsoft.com/en-us/aiforearth) initiative.\n",
"\n",
"We recommend budgeting two hours for a full walkthrough of this tutorial. The code, shell commands, trained models, and sample images provided here may prove helpful even if you prefer not to complete the tutorial: we have provided explanations and direct links to these materials where possible."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Getting started\n",
"\n",
"This notebook is intended for use on an NC-series [Azure Geo AI Data Science VM](). For step-by-step instructions on provisioning the VM, visit [our git repository](https://github.com/Azure/pixel_level_land_classification).\n",
"\n",
"1. [Train a land classification model from scratch](./02_Train_a_land_classification_model_from_scratch.ipynb)\n",
"\n",
" In this section, you'll produce a trained CNTK model that you can use anywhere for pixel-level land cover prediction.\n",
" \n",
"1. [Apply your trained model to new aerial images](./03_Apply_trained_model_to_new_data.ipynb)\n",
"\n",
" You'll predict land use on a 1 km x 1 km region not previously seen during training, and examine your results in a full-color output file.\n",
" \n",
"1. [Apply your trained model in ArcGIS Pro](./04_Apply_trained_model_in_ArcGIS_Pro.ipynb)\n",
"\n",
" You'll apply your trained model to aerial data in real-time using ESRI's ArcGIS Pro software."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sample Output\n",
"\n",
"This tutorial will train a pixel-level land use classifier for a single epoch: your model will produce results similar to the example at bottom left. By expanding the training dataset and increasing the number of training epochs, we achieved results like the example at bottom right. The trained model is accurate enough to detect some features, such as the small pond at top center, that were not correctly annotated in the ground-truth labels.\n",
"\n",
"<img src=\"https://github.com/Azure/pixel_level_land_classification/raw/master/outputs/comparison_fullsize.PNG\"/>\n",
"\n",
"This notebook series will also illustrate how to apply your trained model in real-time as you scroll and zoom through regions in ArcGIS Pro:\n",
"\n",
"<img src=\"https://github.com/Azure/pixel_level_land_classification/raw/master/outputs/arcgispro_finished_screenshot.png\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## (Optional) Setting up an ArcGIS Pro trial membership"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Geo AI Data Science VM comes with ESRI's ArcGIS Pro pre-installed, but you will need to supply credentials for an ArcGIS Pro license in order to run the program. You can obtain a 21-day trial license as follows:\n",
"\n",
"1. Complete the form on the [ArcGIS Pro free trial](https://www.esri.com/en-us/arcgis/products/arcgis-pro/trial) page.\n",
"1. You will receive an email from ESRI. Follow the activation URL to continue with registration.\n",
"1. After selecting account credentials, you will be asked to provide details of your organization. Fill these out as directed and click \"Save and continue.\"\n",
"1. When prompted to download ArcGIS Pro, click \"Continue with ArcGIS Pro online.\" (The program has already been downloaded and installed on the VM.)\n",
"1. Click on the \"Manage Licenses\" option on the menu ribbon.\n",
"1. In the new page that appears, you will find a \"Members\" section with an entry for your new username. Click the \"Configure licenses\" link next to your username.\n",
"1. Ensure that the ArcGIS Pro radio button is selected, and click the checkbox next to \"Extensions\" to select all extensions. Then, click \"Assign.\"\n",
"\n",
"You should now be able to launch ArcGIS Pro with your selected username and password."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"In this notebook series, we train and deploy a model on a Geo AI Data Science VM. To improve model accuracy, we recommend training for more epochs on a larger dataset. Please see [our GitHub repository](https://github.com/Azure/pixel_level_land_classification) for more details on scaling up training using Azure Batch AI.\n",
"\n",
"When you are done using your Geo AI Data Science VM, we recommend that you stop or delete it to prevent further charges.\n",
"\n",
"For comments and suggestions regarding this notebook, please post a [Git issue](https://github.com/Azure/pixel_level_land_classification/issues/new) or submit a pull request in the [pixel-level land classification repository](https://github.com/Azure/pixel_level_land_classification)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python [conda env:py35]",
"language": "python",
"name": "conda-env-py35-py"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,241 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Train a land classification model from scratch\n",
"\n",
"In this notebook, you will train a neural network model to predict land use from aerial imagery using Microsoft's Cognitive Toolkit (CNTK). Later notebooks will illustrate how you can apply the trained model to new images, both in Jupyter notebooks and in ESRI's ArcGIS Pro.\n",
"\n",
"This tutorial will assume that you have already provisioned an NC series [Geo AI Data Science Virtual Machine]() and are using this Jupyter notebook while connected via remote desktop on that VM. If not, please see our guide to [provisioning and connecting to a Geo AI DSVM](https://github.com/Azure/pixel_level_land_classification/blob/master/geoaidsvm/setup.md).\n",
"\n",
"## Download supporting files\n",
"\n",
"The following commands will use the [AzCopy](https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy) utility to download sample data, a pre-trained model, and code to your VM. The file transfer may take a couple of minutes to complete. When finished, you should see a transfer summary indicating that all files were transferred successfully."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"!AzCopy /Source:https://aiforearthcollateral.blob.core.windows.net/imagesegmentationtutorial /SourceSAS:\"?st=2018-01-16T10%3A40%3A00Z&se=2028-01-17T10%3A40%3A00Z&sp=rl&sv=2017-04-17&sr=c&sig=KeEzmTaFvVo2ptu2GZQqv5mJ8saaPpeNRNPoasRS0RE%3D\" /Dest:D:\\pixellevellandclassification /S\n",
"print('Done.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you like, you can navigate to the `D:\\pixellevellandclassification` directory to examine the files we have transferred. You will find that the sample data are composed of paired files of [National Agricultural Imagery Project](https://www.fsa.usda.gov/programs-and-services/aerial-photography/imagery-programs/naip-imagery/) aerial images and land cover labels produced by the [Chesapeake Conservancy](http://chesapeakeconservancy.org/). While these data are stored in the common TIFF format, they are not readily viewable because they do not have the usual three (RGB) color channels."
]
},
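{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch (the filename below is hypothetical; substitute any aerial image from the training data), you can inspect one of these four-channel TIFFs with `tifffile` and `numpy`:\n",
"\n",
"```python\n",
"import tifffile\n",
"import numpy as np\n",
"\n",
"# Hypothetical filename: replace with an actual file under D:/pixellevellandclassification\n",
"img = tifffile.imread('D:/pixellevellandclassification/training_data/sample_NAIP.tif')\n",
"print(img.shape)  # expect a near-infrared channel in addition to R, G, B\n",
"\n",
"# Keep only the first three channels to obtain a viewable RGB image\n",
"rgb = img[..., :3] if img.shape[-1] == 4 else np.moveaxis(img[:3], 0, -1)\n",
"```"
]
},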
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Python packages\n",
"\n",
"Most of the Python packages used by our code -- CNTK, numpy, scipy, etc. -- are pre-installed on the Geo AI Data Science VM. However, we will need to install a few less-common packages:\n",
"- `tifffile`: to load and save TIFF images\n",
"- `gdal`: to read specialized headers in our TIFF files that contain information on the region shown, the geospatial coordinate system used, etc.\n",
"- `pyproj`: to read PROJ.4-formatted geospatial projection information\n",
"- `basemap`: to help convert between lat-lon coordinates and row/column positions in our data files\n",
"\n",
"Special thanks to [Christoph Gohlke](https://www.lfd.uci.edu/~gohlke/pythonlibs) for preparation of the gdal, pyproj, and basemap wheels."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"!C:\\Anaconda\\envs\\py35\\python -m pip install tifffile\n",
"!C:\\Anaconda\\envs\\py35\\python -m pip install D:\\pixellevellandclassification\\wheels\\GDAL-2.2.3-cp35-cp35m-win_amd64.whl\n",
"!C:\\Anaconda\\envs\\py35\\python -m pip install D:\\pixellevellandclassification\\wheels\\pyproj-1.9.5.1-cp35-cp35m-win_amd64.whl\n",
"!C:\\Anaconda\\envs\\py35\\python -m pip install D:\\pixellevellandclassification\\wheels\\basemap-1.1.0-cp35-cp35m-win_amd64.whl\n",
"print('Done.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Perform training\n",
"\n",
"Before starting training, ensure that you do not have any running processes making use of GPUs. (This may be the case if you have other programs or Jupyter notebooks running.) To do so, execute the code cell below to check your GPU status and running processes using `nvidia-smi`:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Fri Feb 09 17:49:27 2018 \r\n",
"+-----------------------------------------------------------------------------+\r\n",
"| NVIDIA-SMI 385.08 Driver Version: 385.08 |\r\n",
"|-------------------------------+----------------------+----------------------+\r\n",
"| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n",
"| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n",
"|===============================+======================+======================|\r\n",
"| 0 Tesla K80 TCC | 00000CF1:00:00.0 Off | 0 |\r\n",
"| N/A 36C P8 34W / 149W | 233MiB / 11447MiB | 0% Default |\r\n",
"+-------------------------------+----------------------+----------------------+\r\n",
"| 1 Tesla K80 TCC | 0000BCF2:00:00.0 Off | 0 |\r\n",
"| N/A 33C P8 32W / 149W | 1MiB / 11447MiB | 0% Default |\r\n",
"+-------------------------------+----------------------+----------------------+\r\n",
" \r\n",
"+-----------------------------------------------------------------------------+\r\n",
"| Processes: GPU Memory |\r\n",
"| GPU PID Type Process name Usage |\r\n",
"|=============================================================================|\r\n",
"| No running processes found |\r\n",
"+-----------------------------------------------------------------------------+\r\n",
"\n"
]
}
],
"source": [
"import subprocess\n",
"\n",
"proc = subprocess.Popen('nvidia-smi', stdout=subprocess.PIPE)\n",
"print(proc.stdout.read().decode())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you receive an error stating that \"NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver,\" you may be using an Azure VM with no NVIDIA GPU. Please use an NC series VM as recommended above.\n",
"\n",
"To run the training script, edit the command below by replacing `%num_gpus%` with the number of GPUs on your VM:\n",
"\n",
"Geo AI DSVM SKU name | Number of GPUs\n",
":----:|:----:\n",
"NC6 | 1\n",
"NC12 | 2\n",
"NC24 | 4"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"mpiexec -n %num_gpus% C:\\Anaconda\\envs\\py35\\python ^\n",
" D:\\pixellevellandclassification\\scripts\\train_distributed.py ^\n",
" --input_dir D:\\pixellevellandclassification\\training_data ^\n",
" --model_dir D:\\pixellevellandclassification\\models ^\n",
" --num_epochs 1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then, open a Windows command prompt (e.g. by clicking on the Start menu, typing \"Command Prompt\", and pressing Enter), paste in the command, and execute it. It will generate a new model from scratch, train the model for one epoch, and save the model to `D:\\pixellevellandclassification\\models\\trained.model`. Training takes ~25 minutes with a single GPU, ~15 minutes with two GPUs, etc.\n",
"\n",
"During this time, you can finish reading this notebook and monitor progress as follows:\n",
"- In the command prompt where you launched the training, you should soon see output messages indicating the number of GPUs (\"nodes\") participating.\n",
"- Using Task Manager, observe that a Python process has been spawned for each GPU and is using a substantial amount of memory.\n",
" - This tutorial uses eight pairs of training files. They occupy more space in memory than they do on disk due to decompression on loading.\n",
" - Because it takes so long to load files of this size, we've chosen to load the files once at the beginning of training and hold them in memory for fast access. This is especially beneficial when training for more than one epoch.\n",
"- Re-run the `nvidia-smi` cell above: you should see utilization of all GPUs (eventually resulting in high temperature and GPU memory usage) and one running process per GPU.\n",
"\n",
"When training is complete, the output messages at the command prompt should indicate the duration of the training epoch and the error rate on the training set during the epoch, e.g.\n",
"```\n",
"Finished Epoch[1 of 1]: [Training] loss = 0.127706 * 16000, metric = 3.59% * 16000 1421.583s ( 11.3 samples/s);\n",
"```"
]
},
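{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch (assuming the log format shown above), the loss and error rate can be pulled out of such a line with a regular expression:\n",
"\n",
"```python\n",
"import re\n",
"\n",
"line = ('Finished Epoch[1 of 1]: [Training] loss = 0.127706 * 16000, '\n",
"        'metric = 3.59% * 16000 1421.583s ( 11.3 samples/s);')\n",
"m = re.search(r'loss = ([\\d.]+) \\* (\\d+), metric = ([\\d.]+)%', line)\n",
"loss, n_samples, error_pct = float(m.group(1)), int(m.group(2)), float(m.group(3))\n",
"# loss = 0.127706, n_samples = 16000, error_pct = 3.59\n",
"```"
]
},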
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Understand the training script\n",
"\n",
"While training runs, take a moment to explore the training script and model definition in your favorite text editor:\n",
"```\n",
"D:\\pixellevellandclassification\\scripts\\train_distributed.py\n",
"D:\\pixellevellandclassification\\scripts\\model_mini_pub.py\n",
"```\n",
"\n",
"Below we provide some additional explanation of selected sections of these scripts."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Training data access\n",
"\n",
"Near the beginning of the training script is a custom minibatch source specifying how the training data should be read and used. Our training data comprise pairs of TIF images. The first image in each pair is a four-channel (red, green, blue, near-infrared) aerial image of a region of the Chesapeake Bay watershed. The second image is a single-channel \"image\" corresponding to the same region, in which each pixel's value corresponds to a land cover label:\n",
"- 0: Unknown land type\n",
"- 1: Water\n",
"- 2: Trees and shrubs\n",
"- 3: Herbaceous vegetation\n",
"- 4+: Barren and impervious (roads, buildings, etc.); we lump these labels together\n",
"\n",
"These two images in each pair correspond to the features and labels of the data, respectively. The minibatch source specifies that the available image pairs should be partitioned evenly between the workers, and each worker should load its set of image pairs into memory at the beginning of training. This ensures that the slow process of reading the input images is performed only once per training job. To produce each minibatch, subregions of a given image pair are sampled randomly. Training proceeds by cycling through the image pairs."
]
},
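{
"cell_type": "markdown",
"metadata": {},
"source": [
"The lumping of labels 4 and above into a single class can be sketched with `numpy` (a sketch of the idea, not the training script's actual code):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"raw = np.array([[0, 1, 2], [3, 4, 7]])  # raw values from a label image\n",
"lumped = np.minimum(raw, 4)             # labels 4+ -> one barren/impervious class\n",
"# lumped is [[0, 1, 2], [3, 4, 4]]\n",
"```"
]
},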
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### The model architecture\n",
"The [model definition script](https://aiforearthcollateral.blob.core.windows.net/imagesegmentationtutorial/scripts/model_mini_pub.py) specifies the model architecture: a form of [U-Net](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/). The input for this model will be a 256 pixel x 256 pixel four-channel aerial image (corresponding to a 256 meter x 256 meter region), and the output will be predicted land cover labels for the 128 m x 128 m region at the center of the input region. (Predictions are not provided at the boundaries due to edge effects.)"
]
},
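{
"cell_type": "markdown",
"metadata": {},
"source": [
"The arithmetic implied by these sizes (a quick check, assuming a symmetric margin):\n",
"\n",
"```python\n",
"input_size, output_size = 256, 128\n",
"margin = (input_size - output_size) // 2  # pixels trimmed from each edge\n",
"center = slice(margin, margin + output_size)\n",
"# margin is 64: predictions cover rows/columns 64..191 of the input tile\n",
"```"
]
},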
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"Now that you have produced a trained model, you can test its performance in the following notebook on [applying your model to new aerial images](./03_Apply_trained_model_to_new_data.ipynb). You may later wish to return to this section to:\n",
"- Train a model for more than one epoch to improve its performance\n",
"- Train with fewer GPUs to confirm the runtime scaling achieved with distributed training (NC12 and NC24 VMs only)\n",
"\n",
"When you are done using your Geo AI Data Science VM, we recommend that you stop or delete it to prevent further charges.\n",
"\n",
"For comments and suggestions regarding this notebook, please post a [Git issue](https://github.com/Azure/pixel_level_land_classification/issues/new) or submit a pull request in the [pixel-level land classification repository](https://github.com/Azure/pixel_level_land_classification)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python [conda env:py35]",
"language": "python",
"name": "conda-env-py35-py"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

File diff suppressed because one or more lines are too long


@@ -0,0 +1,221 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Apply a trained land classifier model in ArcGIS Pro"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This tutorial will assume that you have already provisioned a [Geo AI Data Science Virtual Machine]() and are using this Jupyter notebook while connected via remote desktop on that VM. If not, please see our guide to [provisioning and connecting to a Geo AI DSVM](https://github.com/Azure/pixel_level_land_classification/blob/master/geoaidsvm/setup.md).\n",
"\n",
"By default, this tutorial will make use of a model we have pre-trained for 250 epochs. If you have completed the associated notebook on [training a land classifier from scratch](./02_Train_a_land_classification_model_from_scratch.ipynb), you will have the option of using your own model file."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup instructions\n",
"\n",
"### Install any updates to ArcGIS Pro\n",
"\n",
"At the time of this writing, updating ArcGIS Pro will install a new Python environment for use by the program. To simplify our instructions below, we will assume that you have updated ArcGIS Pro by following these instructions:\n",
"\n",
"1. Search for and launch the ArcGIS Pro program.\n",
"1. When prompted, enter your username and password.\n",
" - If you don't have an ArcGIS Pro license, see the instructions for getting a trial license in the [intro notebook](./01_Intro_to_pixel-level_land_classification.ipynb).\n",
"1. As the program loads, a software update notification may appear at upper-right.\n",
" - If so, click on the notification for more information, then click \"Download Now\" to begin the software update."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Install the supporting files\n",
"\n",
"If you have not already completed the associated notebook on [training a land classifier from scratch](./02_Train_a_land_classification_model_from_scratch.ipynb), execute the following cell to download supporting files to your Geo AI DSVM's D: drive."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"!AzCopy /Source:https://aiforearthcollateral.blob.core.windows.net/imagesegmentationtutorial /SourceSAS:\"?st=2018-01-16T10%3A40%3A00Z&se=2028-01-17T10%3A40%3A00Z&sp=rl&sv=2017-04-17&sr=c&sig=KeEzmTaFvVo2ptu2GZQqv5mJ8saaPpeNRNPoasRS0RE%3D\" /Dest:D:\\pixellevellandclassification /S\n",
"print('Done.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You will now have a local copy of a sample ArcGIS Pro project, sample trained model file, and Python wheels for CNTK and its dependencies."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Install the custom raster function\n",
"\n",
"We will use Python scripts to apply a trained model to aerial imagery in real-time as the user scrolls through a region of interest in ArcGIS Pro. These Python scripts are surfaced in ArcGIS Pro as a [custom raster function](https://github.com/Esri/raster-functions). The three files needed for the raster function (the main Python script, helper functions for e.g. colorizing the model's results, and an XML description file) will be copied into the ArcGIS Pro subdirectory created with this command:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import shutil, glob, os\n",
"\n",
"dest_dir = 'C:/Program Files/ArcGIS/Pro/Resources/Raster/Functions/Custom/ClassifyCNTK'\n",
"os.makedirs(dest_dir, exist_ok=True)\n",
"for i in glob.iglob('D:/pixellevellandclassification/arcgispro/ClassifyCNTK/*'):\n",
" shutil.copy(i, dest_dir)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Add ArcGIS Pro's Python environment to the path\n",
"\n",
"At the time of this writing, ArcGIS Pro's software updates install a second Python environment containing Python 3.6. (The Geo AI DSVM's default Python environment, `py35`, then cannot be used.) We will need to add ArcGIS Pro's Python environment to the system path manually, as follows:\n",
"\n",
"1. Click on the Start menu and type in `Edit the system environment variables`. Click on the eponymous search result to load the \"Advanced\" tab of the System Properties window.\n",
"1. In the window that appears, click the \"Environment Variables...\" button at lower-right.\n",
"1. In the \"System variables\" window, select the \"PATH\" entry and click the \"Edit\" button.\n",
"1. Click the \"New\" button. Type `C:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\arcgispro-py3` into the new line, then click \"OK.\"\n",
"1. Click \"OK\" to exit the Environment Variables window.\n",
"1. Click \"OK\" to exit the System Properties window."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Install necessary Python packages in ArcGIS Pro's Python environment\n",
"\n",
"Execute the following cell to install CNTK and its dependencies in the ArcGIS Pro Python 3.6 environment:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true,
"scrolled": false
},
"outputs": [],
"source": [
"!\"C:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\arcgispro-py3\\python.exe\" -m pip install D:\\pixellevellandclassification\\wheels\\numpy-1.14.0+mkl-cp36-cp36m-win_amd64.whl\n",
"!\"C:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\arcgispro-py3\\python.exe\" -m pip install D:\\pixellevellandclassification\\wheels\\scipy-1.0.0-cp36-cp36m-win_amd64.whl\n",
"!\"C:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\arcgispro-py3\\python.exe\" -m pip install D:\\pixellevellandclassification\\wheels\\cntk-2.0rc2-cp36-cp36m-win_amd64.whl\n",
"print('Done.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Evaluate the model in real-time using ArcGIS Pro\n",
"\n",
"### Load the sample project in ArcGIS Pro\n",
"\n",
"Begin by loading the sample ArcGIS Pro project we have provided:\n",
"\n",
"1. Search for and launch the ArcGIS Pro program.\n",
" - If ArcGIS Pro was already open, restart it to ensure that all changes above are reflected when the program loads.\n",
"1. On the ArcGIS Pro start screen, click on \"Open an Existing Project\".\n",
"1. Navigate to the folder where you extracted the sample project, and select the `D:\\pixellevellandclassification\\arcgispro\\sampleproject.aprx` file. Click \"OK.\"\n",
"\n",
"Once the project has loaded (allow ~30 seconds), you should see a screen split into four quadrants. After a moment, NAIP aerial imagery and ground-truth land use labels should become visible in the upper-left and upper-right quadrants, respectively.\n",
"\n",
"<img src=\"https://github.com/Azure/pixel_level_land_classification/raw/master/outputs/arcgispro_finished_screenshot.png\">"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The bottom quadrants will show the model's best-guess labels (bottom right) and an average of label colors weighted by predicted probability (bottom left, providing an indication of uncertainty). If the bottom quadrants do not populate with results, you may need to add their layers manually using the following steps:"
]
},
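{
"cell_type": "markdown",
"metadata": {},
"source": [
"The two output types can be sketched with `numpy` (the class colors below are hypothetical placeholders, not the raster function's actual colormap):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# Hypothetical RGB colors for classes 0-4 (unknown, water, trees, herbaceous, barren/impervious)\n",
"label_colors = np.array([[0, 0, 0], [0, 0, 255], [0, 128, 0],\n",
"                         [0, 255, 0], [128, 128, 128]], dtype=float)\n",
"probs = np.array([0.05, 0.7, 0.1, 0.1, 0.05])  # one pixel's class probabilities\n",
"softmax_color = probs @ label_colors            # probability-weighted blend\n",
"hardmax_color = label_colors[probs.argmax()]    # most likely class's color\n",
"```"
]
},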
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. Begin by selecting the \"AI Mixed Probabilities\" window at bottom-left.\n",
"1. Add and modify an aerial imagery layer:\n",
" 1. In the Catalog Pane (accessible from the View menu), click on Portal, then the cloud icon (labeled \"All Portal\" on hover).\n",
" 1. In the search field, type NAIP.\n",
" 1. Drag and drop the \"USA NAIP Imagery: Natural Color\" option into the window at bottom-left. You should see a new layer with this name appear in the Contents Pane at left.\n",
" 1. Right-click on \"USA NAIP Imagery: Natural Color\" in the Contents Pane and select \"Properties\".\n",
" 1. In the \"Processing Templates\" tab of the layer properties, change the Processing Template from \"Natural Color\" to \"None,\" then click OK.\n",
"1. Add a model predictions layer:\n",
" 1. In the Raster Functions Pane (accessible from the Analysis menu), click on the \"Custom\" option along the top.\n",
" 1. You should see a \"[ClassifyCNTK]\" heading in the Custom section. Collapse and re-expand it to reveal an option named \"Classify\". Click this button to bring up the raster function's options.\n",
" 1. Set the input raster to \"USA NAIP Imagery: Natural Color\".\n",
" 1. Set the trained model location to `D:\\pixellevellandclassification\\models\\250epochs.model`.\n",
" - Note: if you trained your own model using our companion notebook, you can use it instead by choosing `D:\\pixellevellandclassification\\models\\trained.model` as the location.\n",
" 1. Set the output type to \"Softmax\", indicating that each pixel's color will be an average of the class label colors, weighted by their relative probabilities.\n",
" - Note: selecting \"Hardmax\" will assign each pixel its most likely label's color instead.\n",
" 1. Click \"Create new layer\". After a few seconds, the model's predictions should appear in the bottom-left quadrant.\n",
"1. Repeat these steps with the bottom-right quadrant, selecting \"Hardmax\" as the output type."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that your project is complete, you can navigate and zoom in any quadrant window to compare ground truth vs. predicted labels throughout the Chesapeake Bay watershed region. If you venture outside the Chesapeake watershed, you may find that ground truth regions are no longer available, but NAIP data and model predictions should still be displayed. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"In this notebook series, we trained and deployed a model on a Geo AI Data Science VM. To improve model accuracy, we recommend training for more epochs on a larger dataset. Please see [our GitHub repository](https://github.com/Azure/pixel_level_land_classification) for more details on scaling up training using Batch AI.\n",
"\n",
"When you are done using your Geo AI Data Science VM, we recommend that you stop or delete it to prevent further charges.\n",
"\n",
"For comments and suggestions regarding this notebook, please post a [Git issue](https://github.com/Azure/pixel_level_land_classification/issues/new) or submit a pull request in the [pixel-level land classification repository](https://github.com/Azure/pixel_level_land_classification)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python [conda env:py35]",
"language": "python",
"name": "conda-env-py35-py"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}