DOC-2056: initial language/style guide edit
Parent: 6bb5cb880c
Commit: ef5ba3b0f3

@ -1,37 +1,44 @@
# Annotated dataset

<img src="images/annotatedPicture.png" align="middle"/>
<!--## Real World Annotated Dataset
You can download and explore a public annotated dataset, hosted in [Google Cloud Storage](https://storage.cloud.google.com/thea-dev/data/groceries/v1.zip?authuser=0).
<img src="images/realImage.png" align="middle"/>-->
## Synthetic data

The advantage of training models on synthetic data is that large datasets can be generated without the cost of an entire team creating the data. When the SynthDet team created the background images in this sample project, they applied the following to the images to provide a more robust environment for the model to train on:
- Random lighting
- Random occlusion and depth layers
- Random noise
- Random blur

This approach lets users create synthetic data with precise control over the rendering process, so that various properties of the image can be included in the simulation.
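To make this concrete, the "random lighting" entry above might be scripted in Unity roughly as follows. This is a minimal sketch for illustration only, not the SynthDet implementation; the class name and value ranges are assumptions, and random noise and blur would typically be applied as post-processing effects on the camera instead.

```csharp
using UnityEngine;

// Illustrative sketch: re-randomize a directional light each frame so that
// every captured image sees different lighting. Ranges are placeholders.
public class LightingRandomizer : MonoBehaviour
{
    public Light sceneLight; // assign the scene's directional light in the Inspector

    void Update()
    {
        sceneLight.intensity = Random.Range(0.5f, 2.0f);                 // random brightness
        sceneLight.color = Random.ColorHSV(0f, 1f, 0f, 0.3f, 0.7f, 1f);  // near-white tint
        sceneLight.transform.rotation = Random.rotation;                 // random direction
    }
}
```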
## Real world data

Real-world data is costly, time-consuming, and error-prone to produce. This is a significant limitation on creating real-world datasets to train models. The process can be long because a human has to take pictures of the objects in various lighting conditions, poses, and layouts, so that the images can later be annotated with labels.
## Labeling images

Users label images with a labeling tool that lets them draw 2D bounding boxes around objects in an image. After drawing a box around an object, the user adds a label to the bounding box (for example, food-cereal-luckycharms). The user must do this for each object in the image. To build a dataset, you need a large number of images with different objects and layouts. When you complete this step, you can start training a model on these real images and check the model's performance in TensorBoard or another notebook service.
## Domain randomization

The strategy of training object detection models on purely synthetic images is currently showing promising results. The images are created by taking a large set of 3D assets and randomly perturbing the objects into different poses and depth layers. After an image is created, a few alterations are applied to the rendering: a random noise filter, a random blur, and a random color illumination.
## Enumeration of exceptions and assumptions

There are some places where the SynthDet team slightly adjusted the implemented methodology compared to what the authors of the paper describe. The team made these changes either because there was not enough detail to definitively infer what the exact implementation entailed, or because the process the authors described seemed either more complex or more performance-intensive than what the team actually implemented. The modifications made so far are not expected to substantially alter the results of the experiment; the team logged each change and adjusted any variances identified as insufficiently similar. In the team's code, these deviations are marked with an `// XXX:` comment.
### "Filling" the background

The paper describes a process by which the authors render background objects randomly within the background until the entire screen space is covered. They do this by only placing items in regions of the screen not already occupied by an object. This requires quite a bit of rendering, checking, and re-rendering, and is not particularly performant. In the BackgroundGenerator class, the SynthDet team instead divides the space into cells of sufficient density to ensure that, with several objects rendered in a random position and orientation in each cell, the background is completely filled. This avoids multiple rendering passes and any per-frame introspection on the rendered pixels, saving quite a bit of compute at the cost of potentially creating a background that is either insufficiently or overly dense.
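The cell-based placement might look something like the sketch below. It illustrates the idea only; the class name, fields, and values are hypothetical, not the actual BackgroundGenerator code.

```csharp
using UnityEngine;

// Hypothetical sketch of cell-based background filling: enough objects per
// cell guarantees full coverage without inspecting rendered pixels.
public class CellBackgroundFiller : MonoBehaviour
{
    public GameObject[] backgroundPrefabs; // pool of 3D assets to draw from
    public int rows = 8;
    public int columns = 12;
    public int objectsPerCell = 3;
    public float cellSize = 1.0f;

    public void Fill()
    {
        for (int r = 0; r < rows; r++)
        {
            for (int c = 0; c < columns; c++)
            {
                Vector3 cellCenter = new Vector3(c * cellSize, r * cellSize, 0f);
                for (int i = 0; i < objectsPerCell; i++)
                {
                    // Random offset within the cell, a random depth layer,
                    // and a random orientation for each object.
                    Vector3 jitter = new Vector3(
                        Random.Range(-0.5f, 0.5f) * cellSize,
                        Random.Range(-0.5f, 0.5f) * cellSize,
                        Random.Range(0f, 1f));
                    GameObject prefab = backgroundPrefabs[Random.Range(0, backgroundPrefabs.Length)];
                    Instantiate(prefab, cellCenter + jitter, Random.rotation, transform);
                }
            }
        }
    }
}
```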
### Background object scaling

A slightly randomized scaling factor is applied to the background objects so that their projected size is between 0.9 and 1.5 times that of the foreground object being placed. The authors describe a strategy for generating subsets of scaling factors from within the 0.9-1.5 range in order to create backgrounds with "primarily large" or "primarily small" objects. However, given that the subset is randomly selected and values from within the subset are also randomly selected, this step is currently omitted because the SynthDet team considered it extraneous.

Also, the "projected size" mentioned in the paper is not described in detail. The team understood it to mean "surface area in pixels," and approximates this projected size by computing the surface area of the 2D polygon formed by the 3D axis-aligned bounding box of the object after rotation.
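One rough way to compute such an approximation is sketched below: project the eight corners of the object's world-space bounding box to screen space and take the area of their 2D bounds. This further simplifies the projected polygon to its axis-aligned rectangle, which overestimates the area; the helper is an illustrative assumption, not the SynthDet code.

```csharp
using UnityEngine;

public static class ProjectedSizeUtil
{
    // Approximate projected size (in pixels squared) of a bounding volume:
    // project the 8 corners of the world-space AABB and measure the area of
    // the 2D rectangle that encloses them.
    public static float ApproximateProjectedArea(Camera cam, Bounds bounds)
    {
        Vector3 min = bounds.min, max = bounds.max;
        Vector2 lo = new Vector2(float.MaxValue, float.MaxValue);
        Vector2 hi = new Vector2(float.MinValue, float.MinValue);
        for (int i = 0; i < 8; i++)
        {
            // Enumerate the corners via the bits of i.
            Vector3 corner = new Vector3(
                (i & 1) == 0 ? min.x : max.x,
                (i & 2) == 0 ? min.y : max.y,
                (i & 4) == 0 ? min.z : max.z);
            Vector3 p = cam.WorldToScreenPoint(corner);
            lo = Vector2.Min(lo, new Vector2(p.x, p.y));
            hi = Vector2.Max(hi, new Vector2(p.x, p.y));
        }
        return Mathf.Max(0f, hi.x - lo.x) * Mathf.Max(0f, hi.y - lo.y);
    }
}
```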
### Computing clipping/occlusion percentages

When placing foreground and occluding objects, the paper often mentions determining the percentage by which one object occludes or clips another, or how much it clips with the edges of the screen area. However, the authors do not describe whether this is a pixel-level comparison or an approximation, and the thresholds they use to decide whether or not to reposition an object are all presented with a single significant digit. The SynthDet team understood this to mean that the computations are approximate, and uses the projected rectangular bounding boxes in the ForegroundObjectPlacer to do these computations, saving on complexity and compute.
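With rectangular boxes, the overlap fraction reduces to simple interval arithmetic, as in this sketch (an illustration under that assumption, not the actual ForegroundObjectPlacer code):

```csharp
using UnityEngine;

public static class OcclusionUtil
{
    // Fraction of rect a's area covered by rect b, in screen space.
    public static float OverlapFraction(Rect a, Rect b)
    {
        float w = Mathf.Min(a.xMax, b.xMax) - Mathf.Max(a.xMin, b.xMin);
        float h = Mathf.Min(a.yMax, b.yMax) - Mathf.Max(a.yMin, b.yMin);
        if (w <= 0f || h <= 0f) return 0f;      // rectangles do not intersect
        return (w * h) / (a.width * a.height);  // intersection area over a's area
    }
}
```

Passing the screen rect as `b` gives the fraction of an object that is on screen; one minus that value is the fraction clipped by the screen edges.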
### Camera projection model

The camera currently captures images using a perspective camera model. Visual inspection hints that there may be vanishing points along parallel lines, indicating that a perspective transform is in play. However, given the resolution and field of view in use, any differences between perspective and orthographic projection shouldn't be significant enough to justify the more complex model.
@ -1,7 +1,7 @@
# Background on Unity

If you are not familiar with the [Unity Engine](https://unity3d.com/unity), see the [Unity Manual](https://docs.unity3d.com/Manual/index.html) and [Tutorials page](https://unity3d.com/learn/tutorials). A good resource for learning basic Unity concepts is the [Roll-a-ball tutorial](https://unity3d.com/learn/tutorials/s/roll-ball-tutorial).

For concepts used in SynthDet, see the following Unity manual pages:
* [Editor](https://docs.unity3d.com/Manual/UsingTheEditor.html)
* [Interface](https://docs.unity3d.com/Manual/LearningtheInterface.html)
* [Scene](https://docs.unity3d.com/Manual/CreatingScenes.html)

@ -10,6 +10,6 @@ Here are some manual pages for concepts used in SynthDet:

* [Camera](https://docs.unity3d.com/Manual/Cameras.html)
* [Scripting](https://docs.unity3d.com/Manual/ScriptingSection.html)
* [Ordering of event functions](https://docs.unity3d.com/Manual/ExecutionOrder.html) (for example FixedUpdate or Update)
* [Prefabs](https://docs.unity3d.com/Manual/Prefabs.html)
* [Unity’s Package Manager](https://docs.unity3d.com/Manual/Packages.html)
@ -1,30 +1,31 @@
# Docker
This is a guide for navigating issues in Docker that can block your workflow when using the SynthDet Sample Project.

## Authentication

When setting up Docker to push to a Google Cloud project, you might not be able to push because of an authentication issue. This most likely means Docker has not been configured with Google Cloud. To fix this, run `gcloud auth configure-docker` in a command window and follow the prompts.
For more information on using Docker with Google services, see [Google advanced authentication](https://cloud.google.com/container-registry/docs/advanced-authentication).
## Docker start error on Windows

If you are running Docker on Windows, you might experience an issue where Docker is unable to start because vmcompute can’t start. There is a bug in Docker version 2.2.0.3 on Windows OS 1909 that causes issues with Docker.
Example of the error:
>Docker.Core.Backend.BackendDestroyException: Unable to stop Hyper-V VM: Service 'Hyper-V Host Compute Service (vmcompute)' cannot be started due to the following error: Cannot start service vmcompute on computer '.'.
To fix the error:
1. Go to Start and open Windows Security
2. Open App & Browser control
3. At the bottom of the App & Browser control window, click **Exploit protection settings**
4. In the Exploit protection window, click the Program settings tab
5. Find "C:\Windows\System32\vmcompute.exe" in the list and click it
6. Click **Edit** to open the Program settings: vmcompute.exe window
7. Scroll down to Control flow guard (CFG) and clear **Override system settings**
8. Open PowerShell as an administrator
9. From PowerShell, start vmcompute by running `net start vmcompute`
10. Start Docker Desktop
If this method doesn’t work, in the Docker error window, select **Factory Reset**. This should keep settings intact.
If you are still facing issues with Docker, there are some known issues with Git and the Linux subsystem that might require you to downgrade Docker to version 2.1.0.5.
@ -1,22 +1,22 @@
# Getting started with SynthDet

These workflow steps provide you with everything that you need to get started using the SynthDet project to create a synthetic dataset of grocery products and explore statistics on the dataset.
## Workflow

### Step 1: Open the SynthDet sample project

1. Start the Unity Hub
2. Click **Add** and select the (repo root)/SynthDet folder
3. Click the project to open it
4. In the Project view in the Editor, find the Scenes folder and open MainScene
<img src="images/MainScene.PNG" align="middle"/>
### Step 2: Generating data locally
1. With MainScene open, press the Play button. You should see randomized images being generated quickly in the Game view.
|
||||
<img src="images/PlayBttn.png" align="middle"/>
2. MainScene runs for about one minute before exiting. Allow the scene to run until Play mode exits.
3. To view the dataset, navigate to the following location depending on your OS:
   - OSX: `~/Library/Application Support/UnityTechnologies/SynthDet`
   - Linux: `$XDG_CONFIG_HOME/unity3d/UnityTechnologies/SynthDet`
@ -25,21 +25,21 @@ The goal of the workflow steps is to provide you with everything that you need t
<img src="images/dataset.png" align="middle"/>
### Step 3: View statistics using datasetinsights
Once the data is generated locally, you can use `datasetinsights` to show dataset statistics in a Jupyter notebook via Docker.

1. Run the `datasetinsights` Docker image from Docker Hub using the following command:
```docker run -p 8888:8888 -v "<Synthetic Data File Path>":/data -t unitytechnologies/datasetinsights:0.1.0```
Replace `<Synthetic Data File Path>` with the path to the local datasets (listed above in step 2.3).
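For example, on macOS the command might look like the following, using the dataset location from step 2.3 (your path may differ):

```docker run -p 8888:8888 -v "$HOME/Library/Application Support/UnityTechnologies/SynthDet":/data -t unitytechnologies/datasetinsights:0.1.0```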
> If you experience issues with Docker on Windows, see [Docker](Docker.md).
2. Open Jupyter by navigating to http://localhost:8888 in a web browser.
<img src="images/jupyterFolder.PNG" align="middle"/>
3. In Jupyter, navigate to `datasetinsights/notebooks`
4. Open the SynthDet_Statistics.ipynb notebook
<img src="images/theaNotebook.PNG" align="middle"/>
@ -1,47 +1,47 @@
# Prerequisites
To run SynthDet and dataset statistics, do the following setup:

1. Download and install [Unity 2019.3](https://unity3d.com/get-unity/download)
2. Clone the [SynthDet repository](https://github.com/Unity-Technologies/SynthDet) and initialize the submodules. Follow these steps to ensure a fully initialized clone:
```
git lfs install
git clone https://github.com/Unity-Technologies/SynthDet
cd SynthDet
git submodule update --init --recursive
```
>The submodule used in this repository is configured using HTTPS authentication. If you are prompted for a username and password in your shell, create a [personal authentication token](https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line).

>This repository uses Git LFS for many of the files in the project. In MainScene, in Play mode, if a message saying "Display 1 No Cameras Rendering" appears, ensure LFS is properly initialized by running `git lfs install` followed by `git lfs pull`.
3. Install [Docker Desktop](https://www.docker.com/products/docker-desktop) to run `datasetinsights` Jupyter notebooks locally
## Additional requirements for running in Unity Simulation
- Unity account
- Unity Simulation beta access and CLI
- Unity Cloud Project
- Linux build support
- Supported platforms: Windows 10+ and macOS 10.13+

### Unity account
You need a Unity account in order to submit simulations to the Unity Simulation service. If you do not have a Unity account, [sign up](https://id.unity.com) for a free account.

### Unity Simulation account and CLI
To use Unity Simulation, sign up for access on the [Unity Simulation](https://unity.com/products/simulation) page. When you have access, download the [Unity Simulation CLI](https://github.com/Unity-Technologies/Unity-Simulation-Docs/releases) and follow the instructions to set it up.

### Unity Cloud Project
To use Unity Simulation, you need a Unity Cloud Project ID. For instructions on how to enable Unity Cloud Services for a project and how to find the Cloud Project ID, see [Setting up your project for Unity Services](https://docs.unity3d.com/Manual/SettingUpProjectServices.html).

### Linux build support
To use Unity Simulation, you must install Linux build support in the Unity Editor.

To install Linux build support, open the Unity Hub and navigate to the Installs tab:
![Linux build support](images/req-2.png "Linux build support")

To the right of your project’s Unity version, click the options menu icon (...) and select **Add Component**:

![Linux build support component](images/req-3.png "Linux build support component")

In the components menu, enable **Linux Build Support** and click **Done** to begin the install.
![Linux build support fna](images/req-4.png "Linux build support component")
@ -1,14 +1,14 @@
# SynthDet documentation

## Installation and setup
* [Prerequisites](Prerequisites.md)

## Getting started
* [Getting Started with SynthDet](GettingStartedSynthDet.md)
* [Running SynthDet in Unity Simulation](RunningSynthDetCloud.md)
* [Dataset Insights](https://github.com/Unity-Technologies/dataset-insights)
* [Getting Started with SynthDet Viewer AR App](https://github.com/Unity-Technologies/perception-synthdet-demo-app)

## Additional documentation
* [Annotated Data](AnnotatedDataset.md)
* [Background: Unity](BackgroundUnity.md)

@ -1,72 +1,69 @@
# Running SynthDet in Unity Simulation
This walkthrough shows you how to generate a dataset at scale using Unity Simulation.

If you would like to use Unity Simulation, sign up for the [Unity Simulation Beta](https://unity.com/products/simulation).
## Workflow

### Step 1: Set up additional prerequisites
See the "Additional requirements for running in Unity Simulation" section in [Prerequisites](Prerequisites.md).
### Step 2: Open the SynthDet sample project
Follow steps 1 and 2 in the [Getting Started with SynthDet](GettingStartedSynthDet.md) guide.
### Step 3: Connect to Cloud Services
To access Unity Simulation in Cloud Services, the project must be connected to Cloud Services with an organization ID. To run the app on Unity Simulation, connect to Cloud Services and create a new Unity Project ID using the following steps:
1. In the top-right corner of the Editor, click the cloud button. This opens the Services tab.
<img src="images/OpenCloudServices.png" align="middle"/>
2. Ensure you are logged into your Unity account
3. Create a new Unity Project ID
<img src="images/CreateNewUnityProjectID.png" align="middle"/>
4. When creating your project ID, select the organization you want for the project
<img src="images/UnityProjectIdOrg.PNG" align="middle"/>
If you need more information, see [Setting up your project for Unity Services](https://docs.unity3d.com/Manual/SettingUpProjectServices.html) in the Unity manual.
### Step 4: Run SynthDet in Unity Simulation
1. Start a run in Unity Simulation using the Run in Unity Simulation window. When the run is executed, it takes approximately ten minutes to complete.
   1. Click **Window** > **Run in USim…**
   2. Enter a name in the Run Name field, for example "SynthDetTestRun"
   3. For more information on the parameters in this window, see the [Unity Simulation information guide](UnitySimulationHelpInformation.md).
<img src="images/USimRunWindow.PNG" align="middle"/>
3. Click **Execute on Unity Simulation**. It takes some time for the runs to complete and the Editor may seem frozen. However, the Editor is executing the run.
4. When the run is complete, check the Console log and note down the run-execution ID and build ID from the debug message:
<img src="images/NoteExecutionID.PNG" align="middle"/>
If you run into issues, see [Unity Simulation help and information](UnitySimulationHelpInformation.md).
### Step 5: Monitor status using Unity Simulation CLI
When the Unity Simulation run has been executed, its completion must be verified.
1. Check the current summary of the execution run in Unity Simulation, because the run must be complete before you continue
   1. Open a command line interface and navigate to the USim CLI for your platform
   2. Run the `usim login auth` command: this authorizes your account and logs you in
   3. In the command window, run `usim summarize run-execution <execution id>`
      1. If you receive an error about the active project, see [Unity Simulation Help](UnitySimulationHelpInformation.md)
      2. You might need to run the command a few times; do not continue until the run reports that it has completed
<img src="images/usimSumExecution.PNG" align="middle"/>
2. (Optional) Download the manifest and check the generated data
   1. Run the command `usim download manifest <execution id>`
   2. This downloads a CSV file that contains links to the generated data
   3. Verify that the data looks good before continuing
### Step 6: Run dataset statistics using the datasetinsights Jupyter notebook

1. Run the `datasetinsights` Docker image from Docker Hub using the following command:
@ -74,13 +71,13 @@ Once the Unity Simulation run has been executed, the run needs to be verified th
Replace `$HOME/data` with the path where you want the dataset to be downloaded.
> If you hit issues with Docker on Windows, see [Docker](Docker.md).
2. Open Jupyter by navigating to http://localhost:8888 in a web browser.
<img src="images/jupyterFolder.PNG" align="middle"/>
3. Navigate to `datasetinsights/notebooks` in Jupyter
4. Open the SynthDet_Statistics.ipynb notebook
<img src="images/theaNotebook.PNG" align="middle"/>
@ -1,24 +1,24 @@
# Unity Simulation help and information
This is a guide for navigating issues in Unity Simulation that can block your workflow when using the SynthDet Sample Project.
## Run in Unity Simulation window

This section explains the Run in Unity Simulation window and defines the parameters needed to create a Unity Simulation run:
<img src="images/USimRunWindow.PNG" align="middle"/>
### Parameters
* Run Name is used by Unity Simulation to label the run being executed on a node
* Path to build zip is the location of the Linux Player build of the target project; this build is unzipped and run on a Unity Simulation node
* Scale Factor Range is a graph mapping the scale factor values in the 0-1 range
* Scale Factor Steps is the number of samples between 0 and 1 that are simulated
* Another way to think of Scale Factor Range and Scale Factor Steps: the range sets the minimum and maximum values of the graph, and the steps set how many samples are taken between them
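For example, assuming the samples are spaced linearly across the range (an assumption for illustration; the spacing is not documented here), a Scale Factor Range of 0.2 to 1.0 with five steps would simulate the scale factors 0.2, 0.4, 0.6, 0.8, and 1.0.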
## Unity Simulation can’t find my project
If Unity Simulation can’t find your execution ID in the active project, you might need to activate the sample project in order to find your execution ID. This mainly applies if you have already been using Unity Simulation with other projects.
You might see an error message like this:
```
usim summarize run-execution Ojbm1n0
{"message":"Could not find project=595c36c6-a73d-4dfd-bd8e-d68f1f5f3084, executionId=Ojbm1n0"}
```
To ensure you activate the correct project, follow the steps in the Unity Simulation Quick Start Guide in the [Activate Unity Project](https://github.com/Unity-Technologies/Unity-Simulation-Docs/blob/master/doc/quickstart.md#activate-unity-project) section.