Merge pull request #1974 from Azure/jeffshep/post158sync
Remove deprecated sample notebooks
This commit is contained in:
Commit
eaa7c56590
|
@@ -1,753 +0,0 @@
|
|||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/machine-learning-pipelines/pipeline-style-transfer/pipeline-style-transfer-parallel-run.png)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Neural style transfer on video\n",
|
||||
"Using modified code from `pytorch`'s neural style [example](https://pytorch.org/tutorials/advanced/neural_style_tutorial.html), we show how to setup a pipeline for doing style transfer on video. The pipeline has following steps:\n",
|
||||
"1. Split a video into images\n",
|
||||
"2. Run neural style on each image using one of the provided models (from `pytorch` pretrained models for this example).\n",
|
||||
"3. Stitch the image back into a video.\n",
|
||||
"\n",
|
||||
"> **Tip**\n",
|
||||
"If your system requires low-latency processing (to process a single document or small set of documents quickly), use [real-time scoring](https://docs.microsoft.com/en-us/azure/machine-learning/v1/how-to-consume-web-service) instead of batch prediction."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prerequisites\n",
|
||||
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the configuration Notebook located at https://github.com/Azure/MachineLearningNotebooks first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Initialize Workspace\n",
|
||||
"\n",
|
||||
"Initialize a workspace object from persisted configuration."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Check core SDK version number\n",
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"print(\"SDK version:\", azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace, Experiment\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"print('Workspace name: ' + ws.name, \n",
|
||||
" 'Azure region: ' + ws.location, \n",
|
||||
" 'Subscription id: ' + ws.subscription_id, \n",
|
||||
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.compute import AmlCompute, ComputeTarget\n",
|
||||
"from azureml.core import Datastore, Dataset\n",
|
||||
"from azureml.pipeline.core import Pipeline\n",
|
||||
"from azureml.pipeline.steps import PythonScriptStep\n",
|
||||
"from azureml.core.runconfig import CondaDependencies, RunConfiguration\n",
|
||||
"from azureml.core.compute_target import ComputeTargetException\n",
|
||||
"from azureml.data import OutputFileDatasetConfig"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Download models"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"# create directory for model\n",
|
||||
"model_dir = 'models'\n",
|
||||
"if not os.path.isdir(model_dir):\n",
|
||||
" os.mkdir(model_dir)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import urllib.request\n",
|
||||
"\n",
|
||||
"def download_model(model_name):\n",
|
||||
" # downloaded models from https://pytorch.org/tutorials/advanced/neural_style_tutorial.html are kept here\n",
|
||||
" url = \"https://pipelinedata.blob.core.windows.net/styletransfer/saved_models/\" + model_name\n",
|
||||
" local_path = os.path.join(model_dir, model_name)\n",
|
||||
" urllib.request.urlretrieve(url, local_path)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Register all Models"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.model import Model\n",
|
||||
"mosaic_model = None\n",
|
||||
"candy_model = None\n",
|
||||
"\n",
|
||||
"models = Model.list(workspace=ws, tags=['scenario'])\n",
|
||||
"for m in models:\n",
|
||||
" print(\"Name:\", m.name,\"\\tVersion:\", m.version, \"\\tDescription:\", m.description, m.tags)\n",
|
||||
" if m.name == 'mosaic' and mosaic_model is None:\n",
|
||||
" mosaic_model = m\n",
|
||||
" elif m.name == 'candy' and candy_model is None:\n",
|
||||
" candy_model = m\n",
|
||||
"\n",
|
||||
"if mosaic_model is None:\n",
|
||||
" print('Mosaic model does not exist, registering it')\n",
|
||||
" download_model('mosaic.pth')\n",
|
||||
" mosaic_model = Model.register(model_path = os.path.join(model_dir, \"mosaic.pth\"),\n",
|
||||
" model_name = \"mosaic\",\n",
|
||||
" tags = {'type': \"mosaic\", 'scenario': \"Style transfer using batch inference\"},\n",
|
||||
" description = \"Style transfer - Mosaic\",\n",
|
||||
" workspace = ws)\n",
|
||||
"else:\n",
|
||||
" print('Reusing existing mosaic model')\n",
|
||||
" \n",
|
||||
"\n",
|
||||
"if candy_model is None:\n",
|
||||
" print('Candy model does not exist, registering it')\n",
|
||||
" download_model('candy.pth')\n",
|
||||
" candy_model = Model.register(model_path = os.path.join(model_dir, \"candy.pth\"),\n",
|
||||
" model_name = \"candy\",\n",
|
||||
" tags = {'type': \"candy\", 'scenario': \"Style transfer using batch inference\"},\n",
|
||||
" description = \"Style transfer - Candy\",\n",
|
||||
" workspace = ws)\n",
|
||||
"else:\n",
|
||||
" print('Reusing existing candy model')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Create or use existing compute\n",
|
||||
"\n",
|
||||
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# AmlCompute\n",
|
||||
"cpu_cluster_name = \"cpu-cluster\"\n",
|
||||
"try:\n",
|
||||
" cpu_cluster = AmlCompute(ws, cpu_cluster_name)\n",
|
||||
" print(\"found existing cluster.\")\n",
|
||||
"except ComputeTargetException:\n",
|
||||
" print(\"creating new cluster\")\n",
|
||||
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_v2\",\n",
|
||||
" max_nodes = 1)\n",
|
||||
"\n",
|
||||
" # create the cluster\n",
|
||||
" cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, provisioning_config)\n",
|
||||
" cpu_cluster.wait_for_completion(show_output=True)\n",
|
||||
" \n",
|
||||
"# AmlCompute\n",
|
||||
"gpu_cluster_name = \"gpu-cluster\"\n",
|
||||
"try:\n",
|
||||
" gpu_cluster = AmlCompute(ws, gpu_cluster_name)\n",
|
||||
" print(\"found existing cluster.\")\n",
|
||||
"except ComputeTargetException:\n",
|
||||
" print(\"creating new cluster\")\n",
|
||||
" provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"Standard_NC6s_v3\",\n",
|
||||
" max_nodes = 3)\n",
|
||||
"\n",
|
||||
" # create the cluster\n",
|
||||
" gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, provisioning_config)\n",
|
||||
" gpu_cluster.wait_for_completion(show_output=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Python Scripts\n",
|
||||
"We use an edited version of `neural_style_mpi.py` (original is [here](https://github.com/pytorch/examples/blob/master/fast_neural_style/neural_style/neural_style.py)). Scripts to split and stitch the video are thin wrappers to calls to `ffmpeg`. \n",
|
||||
"\n",
|
||||
"We install `ffmpeg` through conda dependencies."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"scripts_folder = \"scripts\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"process_video_script_file = \"process_video.py\"\n",
|
||||
"\n",
|
||||
"# peek at contents\n",
|
||||
"with open(os.path.join(scripts_folder, process_video_script_file)) as process_video_file:\n",
|
||||
" print(process_video_file.read())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"stitch_video_script_file = \"stitch_video.py\"\n",
|
||||
"\n",
|
||||
"# peek at contents\n",
|
||||
"with open(os.path.join(scripts_folder, stitch_video_script_file)) as stitch_video_file:\n",
|
||||
" print(stitch_video_file.read())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The sample video **organutan.mp4** is stored at a publicly shared datastore. We are registering the datastore below. If you want to take a look at the original video, click here. (https://pipelinedata.blob.core.windows.net/sample-videos/orangutan.mp4)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# datastore for input video\n",
|
||||
"account_name = \"pipelinedata\"\n",
|
||||
"video_ds = Datastore.register_azure_blob_container(ws, \"videos\", \"sample-videos\",\n",
|
||||
" account_name=account_name, overwrite=True)\n",
|
||||
"\n",
|
||||
"# the default blob store attached to a workspace\n",
|
||||
"default_datastore = ws.get_default_datastore()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Sample video"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"video_name=os.getenv(\"STYLE_TRANSFER_VIDEO_NAME\", \"orangutan.mp4\") \n",
|
||||
"orangutan_video = Dataset.File.from_files((video_ds,video_name))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"cd = CondaDependencies.create(python_version=\"3.8\", conda_packages=['pip==20.2.4'])\n",
|
||||
"\n",
|
||||
"cd.add_channel(\"conda-forge\")\n",
|
||||
"cd.add_conda_package(\"ffmpeg==4.0.2\")\n",
|
||||
"\n",
|
||||
"# Runconfig\n",
|
||||
"amlcompute_run_config = RunConfiguration(conda_dependencies=cd)\n",
|
||||
"amlcompute_run_config.environment.docker.base_image = \"pytorch/pytorch\"\n",
|
||||
"amlcompute_run_config.environment.spark.precache_packages = False"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ffmpeg_audio = OutputFileDatasetConfig(name=\"ffmpeg_audio\")\n",
|
||||
"processed_images = OutputFileDatasetConfig(name=\"processed_images\")\n",
|
||||
"output_video = OutputFileDatasetConfig(name=\"output_video\")\n",
|
||||
"\n",
|
||||
"ffmpeg_images = OutputFileDatasetConfig(name=\"ffmpeg_images\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Define tweakable parameters to pipeline\n",
|
||||
"These parameters can be changed when the pipeline is published and rerun from a REST call.\n",
|
||||
"As part of ParallelRunStep following 2 pipeline parameters will be created which can be used to override values.\n",
|
||||
" node_count\n",
|
||||
" process_count_per_node"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.pipeline.core.graph import PipelineParameter\n",
|
||||
"# create a parameter for style (one of \"candy\", \"mosaic\") to transfer the images to\n",
|
||||
"style_param = PipelineParameter(name=\"style\", default_value=\"mosaic\")\n",
|
||||
"# create a parameter for the number of nodes to use in step no. 2 (style transfer)\n",
|
||||
"nodecount_param = PipelineParameter(name=\"nodecount\", default_value=2)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"split_video_step = PythonScriptStep(\n",
|
||||
" name=\"split video\",\n",
|
||||
" script_name=\"process_video.py\",\n",
|
||||
" arguments=[\"--input_video\", orangutan_video.as_mount(),\n",
|
||||
" \"--output_audio\", ffmpeg_audio,\n",
|
||||
" \"--output_images\", ffmpeg_images],\n",
|
||||
" compute_target=cpu_cluster,\n",
|
||||
" runconfig=amlcompute_run_config,\n",
|
||||
" source_directory=scripts_folder\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"stitch_video_step = PythonScriptStep(\n",
|
||||
" name=\"stitch\",\n",
|
||||
" script_name=\"stitch_video.py\",\n",
|
||||
" arguments=[\"--images_dir\", processed_images.as_input(), \n",
|
||||
" \"--input_audio\", ffmpeg_audio.as_input(), \n",
|
||||
" \"--output_dir\", output_video],\n",
|
||||
" compute_target=cpu_cluster,\n",
|
||||
" runconfig=amlcompute_run_config,\n",
|
||||
" source_directory=scripts_folder\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Create environment, parallel step run config and parallel run step"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Environment\n",
|
||||
"from azureml.core.runconfig import DEFAULT_GPU_IMAGE\n",
|
||||
"\n",
|
||||
"parallel_cd = CondaDependencies.create(python_version=\"3.8\", conda_packages=['pip==20.2.4', 'numpy==1.19'])\n",
|
||||
"\n",
|
||||
"parallel_cd.add_channel(\"pytorch\")\n",
|
||||
"parallel_cd.add_conda_package(\"pytorch\")\n",
|
||||
"parallel_cd.add_conda_package(\"torchvision\")\n",
|
||||
"parallel_cd.add_conda_package(\"pillow<7\") # needed for torchvision==0.4.0\n",
|
||||
"\n",
|
||||
"styleenvironment = Environment(name=\"styleenvironment\")\n",
|
||||
"styleenvironment.python.conda_dependencies=parallel_cd\n",
|
||||
"styleenvironment.docker.base_image = DEFAULT_GPU_IMAGE"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.pipeline.core import PipelineParameter\n",
|
||||
"from azureml.pipeline.steps import ParallelRunConfig\n",
|
||||
"\n",
|
||||
"parallel_run_config = ParallelRunConfig(\n",
|
||||
" environment=styleenvironment,\n",
|
||||
" entry_script='transform.py',\n",
|
||||
" output_action='summary_only',\n",
|
||||
" mini_batch_size=\"1\",\n",
|
||||
" error_threshold=1,\n",
|
||||
" source_directory=scripts_folder,\n",
|
||||
" compute_target=gpu_cluster, \n",
|
||||
" node_count=nodecount_param,\n",
|
||||
" process_count_per_node=2\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.pipeline.steps import ParallelRunStep\n",
|
||||
"from datetime import datetime\n",
|
||||
"\n",
|
||||
"parallel_step_name = 'styletransfer-' + datetime.now().strftime('%Y%m%d%H%M')\n",
|
||||
"\n",
|
||||
"distributed_style_transfer_step = ParallelRunStep(\n",
|
||||
" name=parallel_step_name,\n",
|
||||
" inputs=[ffmpeg_images], # Input file share/blob container/file dataset\n",
|
||||
" output=processed_images, # Output file share/blob container\n",
|
||||
" arguments=[\"--style\", style_param],\n",
|
||||
" parallel_run_config=parallel_run_config,\n",
|
||||
" allow_reuse=False #[optional - default value True]\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Run the pipeline"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pipeline = Pipeline(workspace=ws, steps=[stitch_video_step])\n",
|
||||
"\n",
|
||||
"pipeline.validate()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# submit the pipeline and provide values for the PipelineParameters used in the pipeline\n",
|
||||
"pipeline_run = Experiment(ws, 'styletransfer_parallel_mosaic').submit(pipeline)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Monitor pipeline run\n",
|
||||
"\n",
|
||||
"The pipeline run status could be checked in Azure Machine Learning portal (https://ml.azure.com). The link to the pipeline run could be retrieved by inspecting the `pipeline_run` object.\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# This will output information of the pipeline run, including the link to the details page of portal.\n",
|
||||
"pipeline_run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Optional: View detailed logs (streaming) "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Wait the run for completion and show output log to console\n",
|
||||
"pipeline_run.wait_for_completion(show_output=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Download output video"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Downloads the video in `output_video` folder"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"def download_video(run, target_dir=None):\n",
|
||||
" stitch_run = run.find_step_run(stitch_video_step.name)[0]\n",
|
||||
" port_data = stitch_run.get_details()['outputDatasets'][0]['dataset']\n",
|
||||
" port_data.download(target_dir)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pipeline_run.wait_for_completion()\n",
|
||||
"download_video(pipeline_run, \"output_video_mosaic\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Publish pipeline"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pipeline_name = \"style-transfer-batch-inference\"\n",
|
||||
"print(pipeline_name)\n",
|
||||
"\n",
|
||||
"published_pipeline = pipeline.publish(\n",
|
||||
" name=pipeline_name, \n",
|
||||
" description=pipeline_name)\n",
|
||||
"print(\"Newly published pipeline id: {}\".format(published_pipeline.id))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Get published pipeline\n",
|
||||
"This is another way to get the published pipeline."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.pipeline.core import PublishedPipeline\n",
|
||||
"\n",
|
||||
"# You could retrieve all pipelines that are published, or \n",
|
||||
"# just get the published pipeline object that you have the ID for.\n",
|
||||
"\n",
|
||||
"# Get all published pipeline objects in the workspace\n",
|
||||
"all_pub_pipelines = PublishedPipeline.list(ws)\n",
|
||||
"\n",
|
||||
"# We will iterate through the list of published pipelines and \n",
|
||||
"# use the last ID in the list for Schelue operations: \n",
|
||||
"print(\"Published pipelines found in the workspace:\")\n",
|
||||
"for pub_pipeline in all_pub_pipelines:\n",
|
||||
" print(\"Name:\", pub_pipeline.name,\"\\tDescription:\", pub_pipeline.description, \"\\tId:\", pub_pipeline.id, \"\\tStatus:\", pub_pipeline.status)\n",
|
||||
" if(pub_pipeline.name == pipeline_name):\n",
|
||||
" published_pipeline = pub_pipeline\n",
|
||||
"\n",
|
||||
"print(\"Published pipeline id: {}\".format(published_pipeline.id))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Run pipeline through REST calls for other styles\n",
|
||||
"\n",
|
||||
"# Get AAD token"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.authentication import InteractiveLoginAuthentication\n",
|
||||
"import requests\n",
|
||||
"\n",
|
||||
"auth = InteractiveLoginAuthentication()\n",
|
||||
"aad_token = auth.get_authentication_header()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Get endpoint URL"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"rest_endpoint = published_pipeline.endpoint\n",
|
||||
"print(\"Pipeline REST endpoing: {}\".format(rest_endpoint))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Send request and monitor"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"experiment_name = 'styletransfer_parallel_candy'\n",
|
||||
"response = requests.post(rest_endpoint, \n",
|
||||
" headers=aad_token,\n",
|
||||
" json={\"ExperimentName\": experiment_name,\n",
|
||||
" \"ParameterAssignments\": {\"style\": \"candy\", \"NodeCount\": 3}})\n",
|
||||
"\n",
|
||||
"run_id = response.json()[\"Id\"]\n",
|
||||
"\n",
|
||||
"from azureml.pipeline.core.run import PipelineRun\n",
|
||||
"published_pipeline_run_candy = PipelineRun(ws.experiments[experiment_name], run_id)\n",
|
||||
"\n",
|
||||
"# Show detail information of run\n",
|
||||
"published_pipeline_run_candy"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Download output from re-run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"published_pipeline_run_candy.wait_for_completion()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"download_video(published_pipeline_run_candy, target_dir=\"output_video_candy\")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "sanpil joringer asraniwa pansav tracych"
|
||||
}
|
||||
],
|
||||
"category": "Other notebooks",
|
||||
"compute": [
|
||||
"AML Compute"
|
||||
],
|
||||
"datasets": [],
|
||||
"deployment": [
|
||||
"None"
|
||||
],
|
||||
"exclude_from_index": true,
|
||||
"framework": [
|
||||
"None"
|
||||
],
|
||||
"friendly_name": "Style transfer using ParallelRunStep",
|
||||
"index_order": 1,
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.8 - AzureML",
|
||||
"language": "python",
|
||||
"name": "python38-azureml"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.9"
|
||||
},
|
||||
"tags": [
|
||||
"Batch Inferencing",
|
||||
"Pipeline"
|
||||
],
|
||||
"task": "Style transfer"
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
|
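As a supplement to the deleted notebook above: once the pipeline is published, it can also be triggered non-interactively (for example from automation) by swapping the interactive login for a service principal. The snippet below is a minimal sketch under that assumption; the tenant id, client id, and secret are placeholders, and `rest_endpoint` is the `published_pipeline.endpoint` value from the notebook.

# Hedged sketch: trigger the published style-transfer pipeline with service principal credentials.
import requests
from azureml.core.authentication import ServicePrincipalAuthentication

auth = ServicePrincipalAuthentication(
    tenant_id="<tenant-id>",                        # placeholder
    service_principal_id="<client-id>",             # placeholder
    service_principal_password="<client-secret>")   # placeholder

response = requests.post(
    rest_endpoint,                                   # published_pipeline.endpoint from the notebook
    headers=auth.get_authentication_header(),
    json={"ExperimentName": "styletransfer_parallel_mosaic",
          "ParameterAssignments": {"style": "mosaic", "NodeCount": 2}})
response.raise_for_status()
print("Submitted pipeline run:", response.json()["Id"])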
@@ -1,22 +0,0 @@
|
|||
import argparse
|
||||
import glob
|
||||
import os
|
||||
import subprocess
|
||||
|
||||
parser = argparse.ArgumentParser(description="Process input video")
|
||||
parser.add_argument('--input_video', required=True)
|
||||
parser.add_argument('--output_audio', required=True)
|
||||
parser.add_argument('--output_images', required=True)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
os.makedirs(args.output_audio, exist_ok=True)
|
||||
os.makedirs(args.output_images, exist_ok=True)
|
||||
|
||||
subprocess.run("ffmpeg -i {} {}/video.aac".format(args.input_video, args.output_audio),
|
||||
shell=True,
|
||||
check=True)
|
||||
|
||||
subprocess.run("ffmpeg -i {} {}/%05d_video.jpg -hide_banner".format(args.input_video, args.output_images),
|
||||
shell=True,
|
||||
check=True)
|
|
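A side note on the split script above: the same ffmpeg invocations can be issued without `shell=True` by passing argument lists, which avoids quoting problems when paths contain spaces. A small sketch (same behavior, just a different call style):

# Equivalent audio/frame extraction using argument lists instead of shell strings.
import subprocess

def extract(input_video, output_audio_dir, output_images_dir):
    subprocess.run(["ffmpeg", "-i", input_video,
                    "{}/video.aac".format(output_audio_dir)], check=True)
    subprocess.run(["ffmpeg", "-i", input_video,
                    "{}/%05d_video.jpg".format(output_images_dir), "-hide_banner"],
                   check=True)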
@@ -1,22 +0,0 @@
|
|||
import argparse
|
||||
import os
|
||||
import subprocess
|
||||
|
||||
parser = argparse.ArgumentParser(description="Process input video")
|
||||
parser.add_argument('--images_dir', required=True)
|
||||
parser.add_argument('--input_audio', required=True)
|
||||
parser.add_argument('--output_dir', required=True)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
os.makedirs(args.output_dir, exist_ok=True)
|
||||
|
||||
subprocess.run("ffmpeg -framerate 30 -i {}/%05d_video.jpg -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p "
|
||||
"-y {}/video_without_audio.mp4"
|
||||
.format(args.images_dir, args.output_dir),
|
||||
shell=True, check=True)
|
||||
|
||||
subprocess.run("ffmpeg -i {}/video_without_audio.mp4 -i {}/video.aac -map 0:0 -map 1:0 -vcodec "
|
||||
"copy -acodec copy -y {}/video_with_audio.mp4"
|
||||
.format(args.output_dir, args.input_audio, args.output_dir),
|
||||
shell=True, check=True)
|
|
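For local debugging outside the pipeline, the two wrapper scripts can be chained directly from Python. The paths below are hypothetical and ffmpeg must be on PATH:

# Hypothetical local end-to-end check of the split and stitch wrappers (no style-transfer step).
import subprocess

subprocess.run(["python", "process_video.py",
                "--input_video", "orangutan.mp4",      # local copy of the sample video (assumption)
                "--output_audio", "out/audio",
                "--output_images", "out/frames"], check=True)

subprocess.run(["python", "stitch_video.py",
                "--images_dir", "out/frames",          # would normally be the styled frames
                "--input_audio", "out/audio",
                "--output_dir", "out/final"], check=True)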
@@ -1,172 +0,0 @@
|
|||
import argparse
|
||||
import os
|
||||
import sys
|
||||
import re
|
||||
import json
|
||||
import traceback
|
||||
from PIL import Image
|
||||
|
||||
import torch
|
||||
from torchvision import transforms
|
||||
|
||||
from azureml.core.model import Model
|
||||
|
||||
style_model = None
|
||||
|
||||
|
||||
class TransformerNet(torch.nn.Module):
|
||||
def __init__(self):
|
||||
super(TransformerNet, self).__init__()
|
||||
# Initial convolution layers
|
||||
self.conv1 = ConvLayer(3, 32, kernel_size=9, stride=1)
|
||||
self.in1 = torch.nn.InstanceNorm2d(32, affine=True)
|
||||
self.conv2 = ConvLayer(32, 64, kernel_size=3, stride=2)
|
||||
self.in2 = torch.nn.InstanceNorm2d(64, affine=True)
|
||||
self.conv3 = ConvLayer(64, 128, kernel_size=3, stride=2)
|
||||
self.in3 = torch.nn.InstanceNorm2d(128, affine=True)
|
||||
# Residual layers
|
||||
self.res1 = ResidualBlock(128)
|
||||
self.res2 = ResidualBlock(128)
|
||||
self.res3 = ResidualBlock(128)
|
||||
self.res4 = ResidualBlock(128)
|
||||
self.res5 = ResidualBlock(128)
|
||||
# Upsampling Layers
|
||||
self.deconv1 = UpsampleConvLayer(128, 64, kernel_size=3, stride=1, upsample=2)
|
||||
self.in4 = torch.nn.InstanceNorm2d(64, affine=True)
|
||||
self.deconv2 = UpsampleConvLayer(64, 32, kernel_size=3, stride=1, upsample=2)
|
||||
self.in5 = torch.nn.InstanceNorm2d(32, affine=True)
|
||||
self.deconv3 = ConvLayer(32, 3, kernel_size=9, stride=1)
|
||||
# Non-linearities
|
||||
self.relu = torch.nn.ReLU()
|
||||
|
||||
def forward(self, X):
|
||||
y = self.relu(self.in1(self.conv1(X)))
|
||||
y = self.relu(self.in2(self.conv2(y)))
|
||||
y = self.relu(self.in3(self.conv3(y)))
|
||||
y = self.res1(y)
|
||||
y = self.res2(y)
|
||||
y = self.res3(y)
|
||||
y = self.res4(y)
|
||||
y = self.res5(y)
|
||||
y = self.relu(self.in4(self.deconv1(y)))
|
||||
y = self.relu(self.in5(self.deconv2(y)))
|
||||
y = self.deconv3(y)
|
||||
return y
|
||||
|
||||
|
||||
class ConvLayer(torch.nn.Module):
|
||||
def __init__(self, in_channels, out_channels, kernel_size, stride):
|
||||
super(ConvLayer, self).__init__()
|
||||
reflection_padding = kernel_size // 2
|
||||
self.reflection_pad = torch.nn.ReflectionPad2d(reflection_padding)
|
||||
self.conv2d = torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride)
|
||||
|
||||
def forward(self, x):
|
||||
out = self.reflection_pad(x)
|
||||
out = self.conv2d(out)
|
||||
return out
|
||||
|
||||
|
||||
class ResidualBlock(torch.nn.Module):
|
||||
"""ResidualBlock
|
||||
introduced in: https://arxiv.org/abs/1512.03385
|
||||
recommended architecture: http://torch.ch/blog/2016/02/04/resnets.html
|
||||
"""
|
||||
|
||||
def __init__(self, channels):
|
||||
super(ResidualBlock, self).__init__()
|
||||
self.conv1 = ConvLayer(channels, channels, kernel_size=3, stride=1)
|
||||
self.in1 = torch.nn.InstanceNorm2d(channels, affine=True)
|
||||
self.conv2 = ConvLayer(channels, channels, kernel_size=3, stride=1)
|
||||
self.in2 = torch.nn.InstanceNorm2d(channels, affine=True)
|
||||
self.relu = torch.nn.ReLU()
|
||||
|
||||
def forward(self, x):
|
||||
residual = x
|
||||
out = self.relu(self.in1(self.conv1(x)))
|
||||
out = self.in2(self.conv2(out))
|
||||
out = out + residual
|
||||
return out
|
||||
|
||||
|
||||
class UpsampleConvLayer(torch.nn.Module):
|
||||
"""UpsampleConvLayer
|
||||
Upsamples the input and then does a convolution. This method gives better results
|
||||
compared to ConvTranspose2d.
|
||||
ref: http://distill.pub/2016/deconv-checkerboard/
|
||||
"""
|
||||
|
||||
def __init__(self, in_channels, out_channels, kernel_size, stride, upsample=None):
|
||||
super(UpsampleConvLayer, self).__init__()
|
||||
self.upsample = upsample
|
||||
if upsample:
|
||||
self.upsample_layer = torch.nn.Upsample(mode='nearest', scale_factor=upsample)
|
||||
reflection_padding = kernel_size // 2
|
||||
self.reflection_pad = torch.nn.ReflectionPad2d(reflection_padding)
|
||||
self.conv2d = torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride)
|
||||
|
||||
def forward(self, x):
|
||||
x_in = x
|
||||
if self.upsample:
|
||||
x_in = self.upsample_layer(x_in)
|
||||
out = self.reflection_pad(x_in)
|
||||
out = self.conv2d(out)
|
||||
return out
|
||||
|
||||
|
||||
def load_image(filename):
|
||||
img = Image.open(filename)
|
||||
return img
|
||||
|
||||
|
||||
def save_image(filename, data):
|
||||
img = data.clone().clamp(0, 255).numpy()
|
||||
img = img.transpose(1, 2, 0).astype("uint8")
|
||||
img = Image.fromarray(img)
|
||||
img.save(filename)
|
||||
|
||||
|
||||
def init():
|
||||
global output_path, args
|
||||
global style_model, device
|
||||
output_path = os.environ['AZUREML_BI_OUTPUT_PATH']
|
||||
print(f'output path: {output_path}')
|
||||
print(f'Cuda available? {torch.cuda.is_available()}')
|
||||
|
||||
arg_parser = argparse.ArgumentParser(description="parser for fast-neural-style")
|
||||
arg_parser.add_argument("--style", type=str, help="style name")
|
||||
args, unknown_args = arg_parser.parse_known_args()
|
||||
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
|
||||
with torch.no_grad():
|
||||
style_model = TransformerNet()
|
||||
model_path = Model.get_model_path(args.style)
|
||||
state_dict = torch.load(os.path.join(model_path))
|
||||
# remove saved deprecated running_* keys in InstanceNorm from the checkpoint
|
||||
for k in list(state_dict.keys()):
|
||||
if re.search(r'in\d+\.running_(mean|var)$', k):
|
||||
del state_dict[k]
|
||||
style_model.load_state_dict(state_dict)
|
||||
style_model.to(device)
|
||||
print(f'Model loaded successfully. Path: {model_path}')
|
||||
|
||||
|
||||
def run(mini_batch):
|
||||
|
||||
result = []
|
||||
for image_file_path in mini_batch:
|
||||
img = load_image(image_file_path)
|
||||
|
||||
with torch.no_grad():
|
||||
content_transform = transforms.Compose([
|
||||
transforms.ToTensor(),
|
||||
transforms.Lambda(lambda x: x.mul(255))
|
||||
])
|
||||
content_image = content_transform(img)
|
||||
content_image = content_image.unsqueeze(0).to(device)
|
||||
|
||||
output = style_model(content_image).cpu()
|
||||
output_file_path = os.path.join(output_path, os.path.basename(image_file_path))
|
||||
save_image(output_file_path, output[0])
|
||||
result.append(output_file_path)
|
||||
|
||||
return result
|
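The entry script above follows the ParallelRunStep contract: `init()` is called once per worker process to load the style model, and `run(mini_batch)` is called repeatedly with a list of image file paths and returns a list of results. A conceptual sketch of how the framework drives it (the driver below is illustrative only, not the actual ParallelRunStep implementation):

# Illustrative driver for the init()/run() contract used by ParallelRunStep entry scripts.
def drive_entry_script(entry_module, file_paths, mini_batch_size):
    entry_module.init()                         # one-time setup per worker (model load, etc.)
    results = []
    for i in range(0, len(file_paths), mini_batch_size):
        results.extend(entry_module.run(file_paths[i:i + mini_batch_size]))
    return results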
Binary file not shown.
Before Width: | Height: | Size: 119 KiB |
|
@@ -1,190 +0,0 @@
|
|||
# Copyright (c) Microsoft Corporation. All rights reserved.
|
||||
# Licensed under the MIT License.
|
||||
|
||||
import numpy as np
|
||||
import argparse
|
||||
import os
|
||||
import re
|
||||
import tensorflow as tf
|
||||
import time
|
||||
import glob
|
||||
|
||||
from azureml.core import Run
|
||||
from utils import load_data
|
||||
from tensorflow.keras import Model, layers
|
||||
|
||||
|
||||
# Create TF Model.
|
||||
class NeuralNet(Model):
|
||||
# Set layers.
|
||||
def __init__(self):
|
||||
super(NeuralNet, self).__init__()
|
||||
# First hidden layer.
|
||||
self.h1 = layers.Dense(n_h1, activation=tf.nn.relu)
|
||||
# Second hidden layer.
|
||||
self.h2 = layers.Dense(n_h2, activation=tf.nn.relu)
|
||||
self.out = layers.Dense(n_outputs)
|
||||
|
||||
# Set forward pass.
|
||||
def call(self, x, is_training=False):
|
||||
x = self.h1(x)
|
||||
x = self.h2(x)
|
||||
x = self.out(x)
|
||||
if not is_training:
|
||||
# Apply softmax when not training.
|
||||
x = tf.nn.softmax(x)
|
||||
return x
|
||||
|
||||
|
||||
def cross_entropy_loss(y, logits):
|
||||
# Convert labels to int 64 for tf cross-entropy function.
|
||||
y = tf.cast(y, tf.int64)
|
||||
# Apply softmax to logits and compute cross-entropy.
|
||||
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
|
||||
# Average loss across the batch.
|
||||
return tf.reduce_mean(loss)
|
||||
|
||||
|
||||
# Accuracy metric.
|
||||
def accuracy(y_pred, y_true):
|
||||
# Predicted class is the index of highest score in prediction vector (i.e. argmax).
|
||||
correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.cast(y_true, tf.int64))
|
||||
return tf.reduce_mean(tf.cast(correct_prediction, tf.float32), axis=-1)
|
||||
|
||||
|
||||
# Optimization process.
|
||||
def run_optimization(x, y):
|
||||
# Wrap computation inside a GradientTape for automatic differentiation.
|
||||
with tf.GradientTape() as g:
|
||||
# Forward pass.
|
||||
logits = neural_net(x, is_training=True)
|
||||
# Compute loss.
|
||||
loss = cross_entropy_loss(y, logits)
|
||||
|
||||
# Variables to update, i.e. trainable variables.
|
||||
trainable_variables = neural_net.trainable_variables
|
||||
|
||||
# Compute gradients.
|
||||
gradients = g.gradient(loss, trainable_variables)
|
||||
|
||||
# Update W and b following gradients.
|
||||
optimizer.apply_gradients(zip(gradients, trainable_variables))
|
||||
|
||||
|
||||
print("TensorFlow version:", tf.__version__)
|
||||
|
||||
parser = argparse.ArgumentParser()
|
||||
parser.add_argument('--data-folder', type=str, dest='data_folder', default='data', help='data folder mounting point')
|
||||
parser.add_argument('--batch-size', type=int, dest='batch_size', default=128, help='mini batch size for training')
|
||||
parser.add_argument('--first-layer-neurons', type=int, dest='n_hidden_1', default=128,
|
||||
help='# of neurons in the first layer')
|
||||
parser.add_argument('--second-layer-neurons', type=int, dest='n_hidden_2', default=128,
|
||||
help='# of neurons in the second layer')
|
||||
parser.add_argument('--learning-rate', type=float, dest='learning_rate', default=0.01, help='learning rate')
|
||||
parser.add_argument('--resume-from', type=str, default=None,
|
||||
help='location of the model or checkpoint files from where to resume the training')
|
||||
args = parser.parse_args()
|
||||
|
||||
previous_model_location = args.resume_from
|
||||
# You can also use environment variable to get the model/checkpoint files location
|
||||
# previous_model_location = os.path.expandvars(os.getenv("AZUREML_DATAREFERENCE_MODEL_LOCATION", None))
|
||||
|
||||
data_folder = args.data_folder
|
||||
print('Data folder:', data_folder)
|
||||
|
||||
# load train and test set into numpy arrays
|
||||
# note we scale the pixel intensity values to 0-1 (by dividing by 255.0) so the model can converge faster.
|
||||
X_train = load_data(glob.glob(os.path.join(data_folder, '**/train-images-idx3-ubyte.gz'),
|
||||
recursive=True)[0], False) / np.float32(255.0)
|
||||
X_test = load_data(glob.glob(os.path.join(data_folder, '**/t10k-images-idx3-ubyte.gz'),
|
||||
recursive=True)[0], False) / np.float32(255.0)
|
||||
y_train = load_data(glob.glob(os.path.join(data_folder, '**/train-labels-idx1-ubyte.gz'),
|
||||
recursive=True)[0], True).reshape(-1)
|
||||
y_test = load_data(glob.glob(os.path.join(data_folder, '**/t10k-labels-idx1-ubyte.gz'),
|
||||
recursive=True)[0], True).reshape(-1)
|
||||
|
||||
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape, sep='\n')
|
||||
|
||||
training_set_size = X_train.shape[0]
|
||||
|
||||
n_inputs = 28 * 28
|
||||
n_h1 = args.n_hidden_1
|
||||
n_h2 = args.n_hidden_2
|
||||
n_outputs = 10
|
||||
learning_rate = args.learning_rate
|
||||
n_epochs = 20
|
||||
batch_size = args.batch_size
|
||||
|
||||
# Build neural network model.
|
||||
neural_net = NeuralNet()
|
||||
|
||||
# Stochastic gradient descent optimizer.
|
||||
optimizer = tf.optimizers.SGD(learning_rate)
|
||||
|
||||
# start an Azure ML run
|
||||
run = Run.get_context()
|
||||
|
||||
if previous_model_location:
|
||||
# Restore variables from latest checkpoint.
|
||||
checkpoint = tf.train.Checkpoint(model=neural_net, optimizer=optimizer)
|
||||
checkpoint_file_path = tf.train.latest_checkpoint(previous_model_location)
|
||||
checkpoint.restore(checkpoint_file_path)
|
||||
checkpoint_filename = os.path.basename(checkpoint_file_path)
|
||||
num_found = re.search(r'\d+', checkpoint_filename)
|
||||
if num_found:
|
||||
start_epoch = int(num_found.group(0))
|
||||
print("Resuming from epoch {}".format(str(start_epoch)))
|
||||
|
||||
start_time = time.perf_counter()
|
||||
for epoch in range(0, n_epochs):
|
||||
|
||||
# randomly shuffle training set
|
||||
indices = np.random.permutation(training_set_size)
|
||||
X_train = X_train[indices]
|
||||
y_train = y_train[indices]
|
||||
|
||||
# batch index
|
||||
b_start = 0
|
||||
b_end = b_start + batch_size
|
||||
for _ in range(training_set_size // batch_size):
|
||||
# get a batch
|
||||
X_batch, y_batch = X_train[b_start: b_end], y_train[b_start: b_end]
|
||||
|
||||
# update batch index for the next batch
|
||||
b_start = b_start + batch_size
|
||||
b_end = min(b_start + batch_size, training_set_size)
|
||||
|
||||
# train
|
||||
run_optimization(X_batch, y_batch)
|
||||
|
||||
# evaluate training set
|
||||
pred = neural_net(X_batch, is_training=False)
|
||||
acc_train = accuracy(pred, y_batch)
|
||||
|
||||
# evaluate validation set
|
||||
pred = neural_net(X_test, is_training=False)
|
||||
acc_val = accuracy(pred, y_test)
|
||||
|
||||
# log accuracies
|
||||
run.log('training_acc', np.float(acc_train))
|
||||
run.log('validation_acc', np.float(acc_val))
|
||||
print(epoch, '-- Training accuracy:', acc_train, '\b Validation accuracy:', acc_val)
|
||||
|
||||
# Save checkpoints in the "./outputs" folder so that they are automatically uploaded into run history.
|
||||
checkpoint_dir = './outputs/'
|
||||
checkpoint = tf.train.Checkpoint(model=neural_net, optimizer=optimizer)
|
||||
|
||||
if epoch % 2 == 0:
|
||||
checkpoint.save(checkpoint_dir)
|
||||
|
||||
run.log('final_acc', np.float(acc_val))
|
||||
os.makedirs('./outputs/model', exist_ok=True)
|
||||
|
||||
# files saved in the "./outputs" folder are automatically uploaded into run history
|
||||
# this is workaround for https://github.com/tensorflow/tensorflow/issues/33913 and will be fixed once we move to >tf2.1
|
||||
neural_net._set_inputs(X_train)
|
||||
tf.saved_model.save(neural_net, './outputs/model/')
|
||||
|
||||
stop_time = time.perf_counter()
|
||||
training_time = (stop_time - start_time) * 1000
|
||||
print("Total time in milliseconds for training: {}".format(str(training_time)))
|
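For reference, a hedged sketch of how a script like this is typically submitted as an AzureML run; the script file name, compute target name, and curated environment name below are assumptions for illustration.

# Submit the training script as an AzureML experiment run (names are placeholders).
from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig

ws = Workspace.from_config()
env = Environment.get(workspace=ws, name="AzureML-TensorFlow-2.4-GPU")   # assumed curated environment
src = ScriptRunConfig(source_directory=".",
                      script="tf_mnist_with_checkpoint.py",              # assumed script file name
                      arguments=["--data-folder", "data", "--batch-size", "128"],
                      compute_target="gpu-cluster",                      # assumed compute target
                      environment=env)
run = Experiment(ws, "tf-mnist-resume").submit(src)
run.wait_for_completion(show_output=True)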
File diff suppressed because it is too large
Load Diff
|
@@ -1,27 +0,0 @@
|
|||
# Copyright (c) Microsoft Corporation. All rights reserved.
|
||||
# Licensed under the MIT License.
|
||||
|
||||
import gzip
|
||||
import numpy as np
|
||||
import struct
|
||||
|
||||
|
||||
# load compressed MNIST gz files and return numpy arrays
|
||||
def load_data(filename, label=False):
|
||||
with gzip.open(filename) as gz:
|
||||
struct.unpack('I', gz.read(4))
|
||||
n_items = struct.unpack('>I', gz.read(4))
|
||||
if not label:
|
||||
n_rows = struct.unpack('>I', gz.read(4))[0]
|
||||
n_cols = struct.unpack('>I', gz.read(4))[0]
|
||||
res = np.frombuffer(gz.read(n_items[0] * n_rows * n_cols), dtype=np.uint8)
|
||||
res = res.reshape(n_items[0], n_rows * n_cols)
|
||||
else:
|
||||
res = np.frombuffer(gz.read(n_items[0]), dtype=np.uint8)
|
||||
res = res.reshape(n_items[0], 1)
|
||||
return res
|
||||
|
||||
|
||||
# one-hot encode a 1-D array
|
||||
def one_hot_encode(array, num_of_classes):
|
||||
return np.eye(num_of_classes)[array.reshape(-1)]
|
|
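A small worked example for the helpers above; the label array is made up, and the commented path is a placeholder:

# one_hot_encode turns integer class labels into one-hot rows via np.eye.
import numpy as np
from utils import load_data, one_hot_encode

labels = np.array([0, 2, 1])
print(one_hot_encode(labels, num_of_classes=3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]

# load_data returns flattened images of shape (n_items, 784), or labels of shape (n_items, 1):
# X_train = load_data('data/train-images-idx3-ubyte.gz', label=False)   # placeholder path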
@@ -1,466 +0,0 @@
|
|||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Copyright (c) Microsoft Corporation. All rights reserved.\n",
|
||||
"\n",
|
||||
"Licensed under the MIT License."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/work-with-data/datadrift-tutorial/datadrift-quickdemo.png)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Analyze data drift in Azure Machine Learning datasets \n",
|
||||
"\n",
|
||||
"In this tutorial, you will setup a data drift monitor on a weather dataset to:\n",
|
||||
"\n",
|
||||
"☑ Analyze historical data for drift\n",
|
||||
"\n",
|
||||
"☑ Setup a monitor to recieve email alerts if data drift is detected going forward\n",
|
||||
"\n",
|
||||
"If your workspace is Enterprise level, view and exlpore the results in the Azure Machine Learning studio. The video below shows the results from this tutorial. \n",
|
||||
"\n",
|
||||
"![gif](media/video.gif)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prerequisites\n",
|
||||
"If you are using an Azure Machine Learning Compute instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) if you haven't already established your connection to the AzureML Workspace."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Check core SDK version number\n",
|
||||
"import azureml.core\n",
|
||||
"\n",
|
||||
"print('SDK version:', azureml.core.VERSION)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Initialize Workspace\n",
|
||||
"\n",
|
||||
"Initialize a workspace object from persisted configuration."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core import Workspace\n",
|
||||
"\n",
|
||||
"ws = Workspace.from_config()\n",
|
||||
"ws"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup target and baseline datasets\n",
|
||||
"\n",
|
||||
"Setup the baseline and target datasets. The baseline will be used to compare each time slice of the target dataset, which is sampled by a given frequency. For further details, see [our documentation](http://aka.ms/datadrift). \n",
|
||||
"\n",
|
||||
"The next few cells will:\n",
|
||||
" * get the default datastore\n",
|
||||
" * upload the `weather-data` to the datastore\n",
|
||||
" * create the Tabular dataset from the data\n",
|
||||
" * add the timeseries trait by specifying the timestamp column `datetime`\n",
|
||||
" * register the dataset\n",
|
||||
" * create the baseline as a time slice of the target dataset\n",
|
||||
" * optionally, register the baseline dataset\n",
|
||||
" \n",
|
||||
"The folder `weather-data` contains weather data from the [NOAA Integrated Surface Data](https://azure.microsoft.com/services/open-datasets/catalog/noaa-integrated-surface-data/) filtered down to to station names containing the string 'FLORIDA' to reduce the size of data. See `get_data.py` to see how this data is curated and modify as desired. This script may take a long time to run, hence the data is provided in the `weather-data` folder for this demo."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# use default datastore\n",
|
||||
"dstore = ws.get_default_datastore()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# upload weather data\n",
|
||||
"dstore.upload('weather-data', 'datadrift-data', overwrite=True, show_progress=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# import Dataset class\n",
|
||||
"from azureml.core import Dataset\n",
|
||||
"\n",
|
||||
"# create target dataset \n",
|
||||
"target = Dataset.Tabular.from_parquet_files(dstore.path('datadrift-data/**/data.parquet'))\n",
|
||||
"# set the timestamp column\n",
|
||||
"target = target.with_timestamp_columns('datetime')\n",
|
||||
"# register the target dataset\n",
|
||||
"target = target.register(ws, 'target')\n",
|
||||
"# retrieve the dataset from the workspace by name\n",
|
||||
"target = Dataset.get_by_name(ws, 'target')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# import datetime \n",
|
||||
"from datetime import datetime\n",
|
||||
"\n",
|
||||
"# set baseline dataset as January 2019 weather data\n",
|
||||
"baseline = Dataset.Tabular.from_parquet_files(dstore.path('datadrift-data/2019/01/data.parquet'))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# optionally, register the baseline dataset. if skipped, an unregistered dataset will be used\n",
|
||||
"#baseline = baseline.register(ws, 'baseline')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create compute target\n",
|
||||
"\n",
|
||||
"> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n",
|
||||
"\n",
|
||||
"Create an Azure Machine Learning compute cluster to run the data drift monitor and associated runs. The below cell will create a compute cluster named `'cpu-cluster'`. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.core.compute import AmlCompute, ComputeTarget\n",
|
||||
"\n",
|
||||
"compute_name = 'cpu-cluster'\n",
|
||||
"\n",
|
||||
"if compute_name in ws.compute_targets:\n",
|
||||
" compute_target = ws.compute_targets[compute_name]\n",
|
||||
" if compute_target and type(compute_target) is AmlCompute:\n",
|
||||
" print('found compute target. just use it. ' + compute_name)\n",
|
||||
"else:\n",
|
||||
" print('creating a new compute target...')\n",
|
||||
" provisioning_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D3_V2', min_nodes=0, max_nodes=2)\n",
|
||||
"\n",
|
||||
" # create the cluster\n",
|
||||
" compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)\n",
|
||||
"\n",
|
||||
" # can poll for a minimum number of nodes and for a specific timeout.\n",
|
||||
" # if no min node count is provided it will use the scale settings for the cluster\n",
|
||||
" compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
|
||||
"\n",
|
||||
" # For a more detailed view of current AmlCompute status, use get_status()\n",
|
||||
" print(compute_target.get_status().serialize())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Create data drift monitor\n",
|
||||
"\n",
|
||||
"See [our documentation](http://aka.ms/datadrift) for a complete description for all of the parameters. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"tags": [
|
||||
"datadrift-remarks-sample"
|
||||
]
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from azureml.datadrift import DataDriftDetector, AlertConfiguration\n",
|
||||
"\n",
|
||||
"alert_config = AlertConfiguration(['user@contoso.com']) # replace with your email to recieve alerts from the scheduled pipeline after enabling\n",
|
||||
"\n",
|
||||
"monitor = DataDriftDetector.create_from_datasets(ws, 'weather-monitor', baseline, target, \n",
|
||||
" compute_target='cpu-cluster', # compute target for scheduled pipeline and backfills \n",
|
||||
" frequency='Week', # how often to analyze target data\n",
|
||||
" feature_list=None, # list of features to detect drift on\n",
|
||||
" drift_threshold=None, # threshold from 0 to 1 for email alerting\n",
|
||||
" latency=0, # SLA in hours for target data to arrive in the dataset\n",
|
||||
" alert_config=alert_config) # email addresses to send alert"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Update data drift monitor\n",
|
||||
"\n",
|
||||
"Many settings of the data drift monitor can be updated after creation. In this demo, we will update the `drift_threshold` and `feature_list`. See [our documentation](http://aka.ms/datadrift) for details on which settings can be changed."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# get monitor by name\n",
|
||||
"monitor = DataDriftDetector.get_by_name(ws, 'weather-monitor')\n",
|
||||
"\n",
|
||||
"# create feature list - need to exclude columns that naturally drift or increment over time, such as year, day, index\n",
|
||||
"columns = list(baseline.take(1).to_pandas_dataframe())\n",
|
||||
"exclude = ['year', 'day', 'version', '__index_level_0__']\n",
|
||||
"features = [col for col in columns if col not in exclude]\n",
|
||||
"\n",
|
||||
"# update the feature list\n",
|
||||
"monitor = monitor.update(feature_list=features)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Analyze historical data and backfill\n",
|
||||
"\n",
|
||||
"You can use the `backfill` method to:\n",
|
||||
" * analyze historical data\n",
|
||||
" * backfill metrics after updating the settings (mainly the feature list)\n",
|
||||
" * backfill metrics for failed runs\n",
|
||||
" \n",
|
||||
"The below cells will run two backfills that will produce data drift results for 2019 weather data, with January used as the baseline in the monitor. The output can be seen from the `show` method after the runs have completed, or viewed from the Azure Machine Learning studio for Enterprise workspaces.\n",
|
||||
"\n",
|
||||
"![Drift results](media/drift-results.png)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {
|
||||
"jupyter": {
|
||||
"source_hidden": true
|
||||
}
|
||||
},
|
||||
"source": [
|
||||
">**Tip!** When starting with the data drift capability, start by backfilling on a small section of data to get initial results. Update the feature list as needed by removing columns that are causing drift, but can be ignored, and backfill this section of data until satisfied with the results. Then, backfill on a larger slice of data and/or set the alert configuration, threshold, and enable the schedule to recieve alerts to drift on your dataset. All of this can be done through the UI (Enterprise) or Python SDK."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Although it depends on many factors, the below backfill should typically take less than 20 minutes to run. Results will show as soon as they become available, not when the backfill is completed, so you may begin to see some metrics in a few minutes."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# backfill for one month\n",
|
||||
"backfill_start_date = datetime(2019, 9, 1)\n",
|
||||
"backfill_end_date = datetime(2019, 10, 1)\n",
|
||||
"backfill = monitor.backfill(backfill_start_date, backfill_end_date)\n",
|
||||
"backfill"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Query metrics and show results in Python\n",
|
||||
"\n",
|
||||
"The below cell will plot some key data drift metrics, and can be used to query the results. Run `help(monitor.get_output)` for specifics on the object returned."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# make sure the backfill has completed\n",
|
||||
"backfill.wait_for_completion(wait_post_processing=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# get results from Python SDK (wait for backfills or monitor runs to finish)\n",
|
||||
"results, metrics = monitor.get_output(start_time=datetime(year=2019, month=9, day=1))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# plot the results from Python SDK \n",
|
||||
"monitor.show(backfill_start_date, backfill_end_date)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Enable the monitor's pipeline schedule\n",
|
||||
"\n",
|
||||
"Turn on a scheduled pipeline which will anlayze the target dataset for drift every `frequency`. Use the latency parameter to adjust the start time of the pipeline. For instance, if it takes 24 hours for my data processing pipelines for data to arrive in the target dataset, set latency to 24. "
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# enable the pipeline schedule and recieve email alerts\n",
|
||||
"monitor.enable_schedule()\n",
|
||||
"\n",
|
||||
"# disable the pipeline schedule \n",
|
||||
"#monitor.disable_schedule()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Delete compute target\n",
|
||||
"\n",
|
||||
"Do not delete the compute target if you intend to keep using it for the data drift monitor scheduled runs or otherwise. If the minimum nodes are set to 0, it will scale down soon after jobs are completed, and scale up the next time the cluster is needed."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# optionally delete the compute target\n",
|
||||
"#compute_target.delete()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Delete the DataDriftDetector\n",
|
||||
"\n",
|
||||
"Invoking the `delete()` method on the object deletes the the drift monitor permanently and cannot be undone. You will no longer be able to find it in the UI and the `list()` or `get()` methods. The object on which delete() was called will have its state set to deleted and name suffixed with deleted. The baseline and target datasets and model data that was collected, if any, are not deleted. The compute is not deleted. The DataDrift schedule pipeline is disabled and archived."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"monitor.delete()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Next steps\n",
|
||||
"\n",
|
||||
" * See [our documentation](https://aka.ms/datadrift) or [Python SDK reference](https://docs.microsoft.com/python/api/overview/azure/ml/intro)\n",
|
||||
" * [Send requests or feedback](mailto:driftfeedback@microsoft.com) on data drift directly to the team\n",
|
||||
" * Please open issues with data drift here on GitHub or on StackOverflow if others are likely to run into the same issue"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"authors": [
|
||||
{
|
||||
"name": "jamgan"
|
||||
}
|
||||
],
|
||||
"category": "tutorial",
|
||||
"compute": [
|
||||
"Remote"
|
||||
],
|
||||
"datasets": [
|
||||
"NOAA"
|
||||
],
|
||||
"deployment": [
|
||||
"None"
|
||||
],
|
||||
"exclude_from_index": false,
|
||||
"framework": [
|
||||
"Azure ML"
|
||||
],
|
||||
"friendly_name": "Data drift quickdemo",
|
||||
"index_order": 1,
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3.8 - AzureML",
|
||||
"language": "python",
|
||||
"name": "python38-azureml"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.7.4"
|
||||
},
|
||||
"star_tag": [
|
||||
"featured"
|
||||
],
|
||||
"tags": [
|
||||
"Dataset",
|
||||
"Timeseries",
|
||||
"Drift"
|
||||
],
|
||||
"task": "Filtering"
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
}
|
|
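A related clean-up sketch that is not part of the deleted notebook: when tearing down demo resources, every drift monitor in the workspace can be disabled and deleted in one pass.

# Hedged clean-up sketch: remove all data drift monitors in the workspace.
from azureml.core import Workspace
from azureml.datadrift import DataDriftDetector

ws = Workspace.from_config()
for m in DataDriftDetector.list(ws):
    print("Deleting monitor:", m.name)
    m.disable_schedule()   # stop the scheduled pipeline first
    m.delete()             # permanent; see the notes in the notebook above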
@@ -1,30 +0,0 @@
|
|||
# import packages
|
||||
import os
|
||||
import pandas as pd
|
||||
from calendar import monthrange
|
||||
from datetime import datetime, timedelta
|
||||
from azureml.core import Dataset, Datastore, Workspace
|
||||
from azureml.opendatasets import NoaaIsdWeather
|
||||
|
||||
# get workspace and datastore
|
||||
ws = Workspace.from_config()
|
||||
dstore = ws.get_default_datastore()
|
||||
|
||||
# adjust parameters as needed
|
||||
target_years = list(range(2010, 2020))
|
||||
start_month = 1
|
||||
|
||||
# get data
|
||||
for year in target_years:
|
||||
for month in range(start_month, 12 + 1):
|
||||
path = 'weather-data/{}/{:02d}/'.format(year, month)
|
||||
try:
|
||||
start = datetime(year, month, 1)
|
||||
end = datetime(year, month, monthrange(year, month)[1]) + timedelta(days=1)
|
||||
isd = NoaaIsdWeather(start, end).to_pandas_dataframe()
|
||||
isd = isd[isd['stationName'].str.contains('FLORIDA', regex=True, na=False)]
|
||||
os.makedirs(path, exist_ok=True)
|
||||
isd.to_parquet(path + 'data.parquet')
|
||||
except Exception as e:
|
||||
print('Month {} in year {} likely has no data.\n'.format(month, year))
|
||||
print('Exception: {}'.format(e))
|
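A quick sanity check after running the script above (assumes the January 2019 slice was written):

# Read back one month of the curated weather data and inspect its time range.
import pandas as pd

df = pd.read_parquet('weather-data/2019/01/data.parquet')
print(df.shape)
print(df['datetime'].min(), df['datetime'].max())   # 'datetime' is the timestamp column used by the monitor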
Binary file not shown.
Before Width: | Height: | Size: 56 KiB |
Binary file not shown.
Before Width: | Height: | Size: 8.7 MiB |
Binary files not shown (remaining deleted binary data files).
Some files were not shown because too many files have changed in this diff.