* adding wrench

* correct build path

* release branch and 6.0 target

* XmlDoc update

* addressing xml docs

* more docs

* updating the release

* test xmldoc fixes

* more xml doc fixes

* Uncompress the 3DBall sample

* Fix API documentation

* more xml doc fixes

* Revert "Uncompress the 3DBall sample"

This reverts commit d67dc94192.

* reformat MaxStep xml

* more xml doc fixes

* fix more xml doc issues

* fix summary tag

* Updated changelog for missing PRs.

* Removed tabs from .tests.json.

* Updated changelog.

* Removed tabs from CHANGELOG.

* Fix failing ci post upgrade (#6141) (#6145)

* Update PerformancProject and DevProject.

* Removed mac perf tests.

* Removing standalone tests dep from wrench packaging.

* Fixed package works issues. Updated com.unity.ml-agents.md.

* Updated com.unity.ml-agents.md.

* Updated package version in Academy.cs

* Adding back in package pack deps.

* Updated package pack testing deps.

* Regenerated wrench ymls.

* License update.

* Extensions License update.

* Another license tweak.

* Another license tweak.

* Upgraded to Sentis 2.1.0.

* Updated standalone Yamato build test to use the new ml-agents Ubuntu CI Bokken image.

* Bumped python and extensions package versions.

* Changed ci image for pytest gpu yamato test.

* Changed default cuda dtype to torch.float32.

* Updated version validation and extensions version.

* Fixed failing GPU test.

* Fixed failing GPU test.

* Updated readme table and make_readme_table.py

* Updated publish-to-PyPI GitHub Action.

---------

Co-authored-by: alexandre-ribard <alexandre.ribard@unity3d.com>
Co-authored-by: Aurimas Petrovas <>
This commit is contained in:
Miguel Alonso Jr. 2024-10-05 13:53:04 -04:00 committed by GitHub
Parent 8760552575
Commit ac576f9f6d
No key found matching this signature
GPG key ID: B5690EEEBB952194
25 changed files: 64 additions and 63 deletions

.github/workflows/publish_pypi.yaml (vendored)

@ -35,7 +35,7 @@ jobs:
python setup.py bdist_wheel
- name: Publish distribution 📦 to Test PyPI
if: startsWith(github.ref, 'refs/tags') && contains(github.ref, 'test')
uses: pypa/gh-action-pypi-publish@master
uses: pypa/gh-action-pypi-publish@release/v1
with:
password: ${{ secrets.TEST_PYPI_PASSWORD }}
repository_url: https://test.pypi.org/legacy/


@ -2,7 +2,7 @@ pytest_gpu:
name: Pytest GPU
agent:
type: Unity::VM::GPU
image: ml-agents/ml-agents-ubuntu-18.04:latest
image: ml-agents/ubuntu-ci:v1.0.0
flavor: b1.large
commands:
- |


@ -32,7 +32,7 @@
},
"source": [
"# ML-Agents Open a UnityEnvironment\n",
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
]
},
{
@ -149,7 +149,7 @@
" import mlagents\n",
" print(\"ml-agents already installed\")\n",
"except ImportError:\n",
" !python -m pip install -q mlagents==1.0.0\n",
" !python -m pip install -q mlagents==1.1.0\n",
" print(\"Installed ml-agents\")"
],
"execution_count": 1,

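For reference, a minimal sketch of what this notebook drives with the low-level `mlagents_envs` API after the install cell above; this is an illustrative assumption, not the notebook's exact code (`file_name=None` attaches to a Unity Editor waiting in Play mode).

```python
from mlagents_envs.environment import UnityEnvironment

# file_name=None attaches to a Unity Editor in Play mode; pass the path to a
# built environment instead if you have one.
env = UnityEnvironment(file_name=None)
env.reset()

behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for _ in range(10):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    # Act randomly for every agent that requested a decision this step.
    action = spec.action_spec.random_action(len(decision_steps))
    env.set_actions(behavior_name, action)
    env.step()

env.close()
```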

@ -22,7 +22,7 @@
},
"source": [
"# ML-Agents Q-Learning with GridWorld\n",
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/images/gridworld.png?raw=true\" align=\"middle\" width=\"435\"/>"
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/gridworld.png?raw=true\" align=\"middle\" width=\"435\"/>"
]
},
{
@ -152,7 +152,7 @@
" import mlagents\n",
" print(\"ml-agents already installed\")\n",
"except ImportError:\n",
" !python -m pip install -q mlagents==1.0.0\n",
" !python -m pip install -q mlagents==1.1.0\n",
" print(\"Installed ml-agents\")"
],
"execution_count": 2,
@ -190,7 +190,7 @@
"id": "pZhVRfdoyPmv"
},
"source": [
"The [GridWorld](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Examples.md#gridworld) Environment is a simple Unity visual environment. The Agent is a blue square in a 3x3 grid that is trying to reach a green __`+`__ while avoiding a red __`x`__.\n",
"The [GridWorld](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Examples.md#gridworld) Environment is a simple Unity visual environment. The Agent is a blue square in a 3x3 grid that is trying to reach a green __`+`__ while avoiding a red __`x`__.\n",
"\n",
"The observation is an image obtained by a camera on top of the grid.\n",
"\n",

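As a hedged illustration of the camera observation described in the notebook text above, assuming `env` is an open `UnityEnvironment` running GridWorld and the camera image is the first entry in the per-agent observation list:

```python
# Sketch only: observation index and layout are assumptions for GridWorld.
behavior_name = list(env.behavior_specs)[0]
decision_steps, terminal_steps = env.get_steps(behavior_name)
visual_obs = decision_steps.obs[0]  # shape: (n_agents, height, width, channels)
print(visual_obs.shape, visual_obs.min(), visual_obs.max())
```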

@ -23,7 +23,7 @@
},
"source": [
"# ML-Agents Use SideChannels\n",
"<img src=\"https://raw.githubusercontent.com/Unity-Technologies/ml-agents/release_21_docs/docs/images/3dball_big.png\" align=\"middle\" width=\"435\"/>"
"<img src=\"https://raw.githubusercontent.com/Unity-Technologies/ml-agents/release_22_docs/docs/images/3dball_big.png\" align=\"middle\" width=\"435\"/>"
]
},
{
@ -153,7 +153,7 @@
" import mlagents\n",
" print(\"ml-agents already installed\")\n",
"except ImportError:\n",
" !python -m pip install -q mlagents==1.0.0\n",
" !python -m pip install -q mlagents==1.1.0\n",
" print(\"Installed ml-agents\")"
],
"execution_count": 2,
@ -176,7 +176,7 @@
"## Side Channel\n",
"\n",
"SideChannels are objects that can be passed to the constructor of a UnityEnvironment or the `make()` method of a registry entry to send non Reinforcement Learning related data.\n",
"More information available [here](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Python-API.md#communicating-additional-information-with-the-environment)\n",
"More information available [here](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Python-API.md#communicating-additional-information-with-the-environment)\n",
"\n",
"\n",
"\n"
@ -189,7 +189,7 @@
},
"source": [
"### Engine Configuration SideChannel\n",
"The [Engine Configuration Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Python-API.md#engineconfigurationchannel) is used to configure how the Unity Engine should run.\n",
"The [Engine Configuration Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Python-API.md#engineconfigurationchannel) is used to configure how the Unity Engine should run.\n",
"We will use the GridWorld environment to demonstrate how to use the EngineConfigurationChannel."
]
},
@ -282,7 +282,7 @@
},
"source": [
"### Environment Parameters Channel\n",
"The [Environment Parameters Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Python-API.md#environmentparameters) is used to modify environment parameters during the simulation.\n",
"The [Environment Parameters Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Python-API.md#environmentparameters) is used to modify environment parameters during the simulation.\n",
"We will use the GridWorld environment to demonstrate how to use the EngineConfigurationChannel."
]
},
@ -419,7 +419,7 @@
},
"source": [
"### Creating your own Side Channels\n",
"You can send various kinds of data between a Unity Environment and Python but you will need to [create your own implementation of a Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Custom-SideChannels.md#custom-side-channels) for advanced use cases.\n"
"You can send various kinds of data between a Unity Environment and Python but you will need to [create your own implementation of a Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Custom-SideChannels.md#custom-side-channels) for advanced use cases.\n"
]
},
{

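A short sketch of the side-channel pattern the notebook above covers, using `EngineConfigurationChannel` and `EnvironmentParametersChannel` from `mlagents_envs`; the parameter name passed to `set_float_parameter` is a hypothetical example.

```python
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.engine_configuration_channel import EngineConfigurationChannel
from mlagents_envs.side_channel.environment_parameters_channel import EnvironmentParametersChannel

# Side channels must be created first and passed to the UnityEnvironment constructor.
engine_channel = EngineConfigurationChannel()
params_channel = EnvironmentParametersChannel()

# file_name=None attaches to an Editor in Play mode; point it at a GridWorld build otherwise.
env = UnityEnvironment(file_name=None, side_channels=[engine_channel, params_channel])

engine_channel.set_configuration_parameters(time_scale=20.0, width=84, height=84)
params_channel.set_float_parameter("gridSize", 5.0)  # hypothetical parameter name

env.reset()
env.close()
```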

@ -7,7 +7,7 @@
},
"source": [
"# ML-Agents run with Stable Baselines 3\n",
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
]
},
{
@ -127,7 +127,7 @@
" import mlagents\n",
" print(\"ml-agents already installed\")\n",
"except ImportError:\n",
" !python -m pip install -q mlagents==1.0.0\n",
" !python -m pip install -q mlagents==1.1.0\n",
" print(\"Installed ml-agents\")"
]
},

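A hedged sketch of the pattern this notebook demonstrates: wrapping a UnityEnvironment as a Gym environment and training it with Stable Baselines 3. The environment path, policy choice, and step count below are placeholder assumptions.

```python
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.envs.unity_gym_env import UnityToGymWrapper
from stable_baselines3 import PPO

# Placeholder path to a built single-agent environment.
unity_env = UnityEnvironment(file_name="./GridWorld")
gym_env = UnityToGymWrapper(unity_env, uint8_visual=True)

model = PPO("CnnPolicy", gym_env, verbose=1)
model.learn(total_timesteps=10_000)

gym_env.close()
```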

@ -28,24 +28,24 @@ The ML-Agents Extensions package is not currently available in the Package Manag
recommended ways to install the package:
### Local Installation
[Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
[Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/docs/Installation.md#advanced-local-installation-for-development-1)
[Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
[Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/docs/Installation.md#advanced-local-installation-for-development-1)
directions (substituting `com.unity.ml-agents.extensions` for the package name).
### Github via Package Manager
In Unity 2019.4 or later, open the Package Manager, hit the "+" button, and select "Add package from git URL".
![Package Manager git URL](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/images/unity_package_manager_git_url.png)
![Package Manager git URL](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/unity_package_manager_git_url.png)
In the dialog that appears, enter
```
git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_21
git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_22
```
You can also edit your project's `manifest.json` directly and add the following line to the `dependencies`
section:
```
"com.unity.ml-agents.extensions": "git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_21",
"com.unity.ml-agents.extensions": "git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_22",
```
See [Git dependencies](https://docs.unity3d.com/Manual/upm-git.html#subfolder) for more information. Note that this
may take several minutes to resolve the packages the first time that you add it.
@ -67,4 +67,4 @@ If using the `InputActuatorComponent`
- No way to customize the action space of the `InputActuatorComponent`
## Need Help?
The main [README](https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/README.md) contains links for contacting the team or getting support.
The main [README](https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/README.md) contains links for contacting the team or getting support.


@ -20,7 +20,7 @@ using Unity.Sentis;
* API. For more information on each of these entities, in addition to how to
* set-up a learning environment and train the behavior of characters in a
* Unity scene, please browse our documentation pages on GitHub:
* https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/docs/
* https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/docs/
*/
namespace Unity.MLAgents
@ -61,7 +61,7 @@ namespace Unity.MLAgents
/// fall back to inference or heuristic decisions. (You can also set agents to always use
/// inference or heuristics.)
/// </remarks>
[HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/" +
[HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/" +
"docs/Learning-Environment-Design.md")]
public class Academy : IDisposable
{


@ -184,7 +184,7 @@ namespace Unity.MLAgents.Actuators
///
/// See [Agents - Actions] for more information on masking actions.
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <seealso cref="IActionReceiver.OnActionReceived"/>
void WriteDiscreteActionMask(IDiscreteActionMask actionMask);


@ -16,7 +16,7 @@ namespace Unity.MLAgents.Actuators
///
/// See [Agents - Actions] for more information on masking actions.
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#masking-discrete-actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#masking-discrete-actions
/// </remarks>
/// <param name="branch">The branch for which the actions will be masked.</param>
/// <param name="actionIndex">Index of the action.</param>


@ -192,13 +192,13 @@ namespace Unity.MLAgents
/// [OnDisable()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnDisable.html]
/// [OnBeforeSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnBeforeSerialize.html
/// [OnAfterSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnAfterSerialize.html
/// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md
/// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design.md
/// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md
/// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design.md
/// [Unity ML-Agents Toolkit]: https://github.com/Unity-Technologies/ml-agents
/// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Readme.md
/// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Readme.md
///
/// </remarks>
[HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/" +
[HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/" +
"docs/Learning-Environment-Design-Agents.md")]
[Serializable]
[RequireComponent(typeof(BehaviorParameters))]
@ -728,8 +728,8 @@ namespace Unity.MLAgents
/// for information about mixing reward signals from curiosity and Generative Adversarial
/// Imitation Learning (GAIL) with rewards supplied through this method.
///
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
/// </remarks>
/// <param name="reward">The new value of the reward.</param>
public void SetReward(float reward)
@ -756,8 +756,8 @@ namespace Unity.MLAgents
/// for information about mixing reward signals from curiosity and Generative Adversarial
/// Imitation Learning (GAIL) with rewards supplied through this method.
///
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#rewards
/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
///</remarks>
/// <param name="increment">Incremental reward value.</param>
public void AddReward(float increment)
@ -945,8 +945,8 @@ namespace Unity.MLAgents
/// implementing a simple heuristic function can aid in debugging agent actions and interactions
/// with its environment.
///
/// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
/// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
/// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
/// </remarks>
/// <example>
@ -1203,7 +1203,7 @@ namespace Unity.MLAgents
/// For more information about observations, see [Observations and Sensors].
///
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
/// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
/// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
/// </remarks>
public virtual void CollectObservations(VectorSensor sensor)
{
@ -1245,7 +1245,7 @@ namespace Unity.MLAgents
///
/// See [Agents - Actions] for more information on masking actions.
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <seealso cref="IActionReceiver.OnActionReceived"/>
public virtual void WriteDiscreteActionMask(IDiscreteActionMask actionMask) { }
@ -1312,7 +1312,7 @@ namespace Unity.MLAgents
///
/// For more information about implementing agent actions see [Agents - Actions].
///
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </para>
/// </remarks>
/// <param name="actions">


@ -19,7 +19,7 @@ namespace Unity.MLAgents.Demonstrations
/// See [Imitation Learning - Recording Demonstrations] for more information.
///
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
/// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
/// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
/// </remarks>
[RequireComponent(typeof(Agent))]
[AddComponentMenu("ML Agents/Demonstration Recorder", (int)MenuGroup.Default)]


@ -123,10 +123,10 @@ commands in an Anaconda Prompt _(if you open a new prompt, be sure to activate
the ml-agents Conda environment by typing `activate ml-agents`)_:
```sh
git clone --branch release_21 https://github.com/Unity-Technologies/ml-agents.git
git clone --branch release_22 https://github.com/Unity-Technologies/ml-agents.git
```
The `--branch release_21` option will switch to the tag of the latest stable
The `--branch release_22` option will switch to the tag of the latest stable
release. Omitting that will get the `main` branch which is potentially
unstable.
@ -151,7 +151,7 @@ config files in this directory when running `mlagents-learn`. Make sure you are
connected to the Internet and then type in the Anaconda Prompt:
```console
python -m pip install mlagents==1.0.0
python -m pip install mlagents==1.1.0
```
This will complete the installation of all the required Python packages to run
@ -162,7 +162,7 @@ pip will get stuck when trying to read the cache of the package. If you see
this, you can try:
```console
python -m pip install mlagents==1.0.0 --no-cache-dir
python -m pip install mlagents==1.1.0 --no-cache-dir
```
This `--no-cache-dir` tells the pip to disable the cache.


@ -72,10 +72,10 @@ of our tutorials / guides assume you have access to our example environments).
the repository if you would like to explore more examples.
```sh
git clone --branch release_21 https://github.com/Unity-Technologies/ml-agents.git
git clone --branch release_22 https://github.com/Unity-Technologies/ml-agents.git
```
The `--branch release_21` option will switch to the tag of the latest stable
The `--branch release_22` option will switch to the tag of the latest stable
release. Omitting that will get the `develop` branch which is potentially unstable.
However, if you find that a release branch does not work, the recommendation is to use
the `develop` branch as it may have potential fixes for bugs and dependency issues.
@ -90,7 +90,7 @@ git clone --branch https://github.com/Unity-Technologies/ml-agents.git
You will need to clone the repository if you plan to modify or extend the
ML-Agents Toolkit for your purposes. If you plan to contribute those changes
back, make sure to clone the `develop` branch (by omitting `--branch release_21`
back, make sure to clone the `develop` branch (by omitting `--branch release_22`
from the command above). See our
[Contributions Guidelines](../com.unity.ml-agents/CONTRIBUTING.md) for more
information on contributing to the ML-Agents Toolkit.
@ -195,7 +195,7 @@ line parameters you can use with `mlagents-learn`.
the following command:
```shell
python -m pip install mlagents==1.0.0
python -m pip install mlagents==1.1.0
```
which will install the latest version of ML-Agents and associated dependencies available on PyPi. Note, you need to have the matching version of


@ -579,7 +579,7 @@ To allow more variety of observations that grid sensor can capture, the
`GridSensorComponent` and the underlying `GridSensorBase` also provides interfaces
that can be overridden to collect customized observation from detected objects.
See the doc on
[extending grid Sensors](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/com.unity.ml-agents.extensions/Documentation~/CustomGridSensors.md)
[extending grid Sensors](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/com.unity.ml-agents.extensions/Documentation~/CustomGridSensors.md)
for more details on custom grid sensors.
__Note__: The `GridSensor` only works in 3D environments and will not behave


@ -41,9 +41,9 @@
## Python Tutorial with Google Colab
- [Using a UnityEnvironment](https://colab.research.google.com/github/Unity-Technologies/ml-agents/blob/release_21_docs/colab/Colab_UnityEnvironment_1_Run.ipynb)
- [Q-Learning with a UnityEnvironment](https://colab.research.google.com/github/Unity-Technologies/ml-agents/blob/release_21_docs/colab/Colab_UnityEnvironment_2_Train.ipynb)
- [Using Side Channels on a UnityEnvironment](https://colab.research.google.com/github/Unity-Technologies/ml-agents/blob/release_21_docs/colab/Colab_UnityEnvironment_3_SideChannel.ipynb)
- [Using a UnityEnvironment](https://colab.research.google.com/github/Unity-Technologies/ml-agents/blob/release_22_docs/colab/Colab_UnityEnvironment_1_Run.ipynb)
- [Q-Learning with a UnityEnvironment](https://colab.research.google.com/github/Unity-Technologies/ml-agents/blob/release_22_docs/colab/Colab_UnityEnvironment_2_Train.ipynb)
- [Using Side Channels on a UnityEnvironment](https://colab.research.google.com/github/Unity-Technologies/ml-agents/blob/release_22_docs/colab/Colab_UnityEnvironment_3_SideChannel.ipynb)
## Help


@ -223,7 +223,7 @@ folder
- The Parameter Randomization feature has been merged with the Curriculum feature. It is now possible to specify a sampler
in the lesson of a Curriculum. Curriculum has been refactored and is now specified at the level of the parameter, not the
behavior. More information
[here](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Training-ML-Agents.md).(#4160)
[here](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Training-ML-Agents.md).(#4160)
### Steps to Migrate
- The configuration format for curriculum and parameter randomization has changed. To upgrade your configuration files,


@ -1,6 +1,6 @@
# Unity ML-Agents Toolkit
[![docs badge](https://img.shields.io/badge/docs-reference-blue.svg)](https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/docs/)
[![docs badge](https://img.shields.io/badge/docs-reference-blue.svg)](https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/docs/)
[![license badge](https://img.shields.io/badge/license-Apache--2.0-green.svg)](../LICENSE.md)
@ -44,7 +44,7 @@ See our [ML-Agents Overview](ML-Agents-Overview.md) page for detailed
descriptions of all these features. Or go straight to our [web docs](https://unity-technologies.github.io/ml-agents/).
## Releases & Documentation
**Our latest, stable release is `Release 21`. Click
**Our latest, stable release is `Release 22`. Click
[here](Getting-Started.md)
to get started with the latest release of ML-Agents.**
@ -67,8 +67,9 @@ under active development and may be unstable. A few helpful guidelines:
| **Version** | **Release Date** | **Source** | **Documentation** | **Download** | **Python Package** | **Unity Package** |
|:--------------------------:|:------:|:-------------:|:-------:|:------------:|:------------:|:------------:|
| **Release 22** | **October 5, 2024** | **[source](https://github.com/Unity-Technologies/ml-agents/tree/release_22)** | **[docs](https://unity-technologies.github.io/ml-agents/)** | **[download](https://github.com/Unity-Technologies/ml-agents/archive/release_22.zip)** | **[1.1.0](https://pypi.org/project/mlagents/1.1.0/)** | **[3.0.0](https://docs.unity3d.com/Packages/com.unity.ml-agents@3.0/manual/index.html)** |
| **develop (unstable)** | -- | [source](https://github.com/Unity-Technologies/ml-agents/tree/develop) | [docs](https://unity-technologies.github.io/ml-agents/) | [download](https://github.com/Unity-Technologies/ml-agents/archive/develop.zip) | -- | -- |
| **Release 21** | **October 9, 2023** | **[source](https://github.com/Unity-Technologies/ml-agents/tree/release_21)** | **[docs](https://unity-technologies.github.io/ml-agents/)** | **[download](https://github.com/Unity-Technologies/ml-agents/archive/release_21.zip)** | **[1.0.0](https://pypi.org/project/mlagents/1.0.0/)** | **[3.0.0](https://docs.unity3d.com/Packages/com.unity.ml-agents@3.0/manual/index.html)** |
If you are a researcher interested in a discussion of Unity as an AI platform,


@ -35,9 +35,9 @@ The ML-Agents Toolkit only supports the models created with our trainers. Model
loading expects certain conventions for constants and tensor names. While it is
possible to construct a model that follows these conventions, we don't provide
any additional help for this. More details can be found in
[TensorNames.cs](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/com.unity.ml-agents/Runtime/Inference/TensorNames.cs)
[TensorNames.cs](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/com.unity.ml-agents/Runtime/Inference/TensorNames.cs)
and
[SentisModelParamLoader.cs](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/com.unity.ml-agents/Runtime/Inference/SentisModelParamLoader.cs).
[SentisModelParamLoader.cs](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/com.unity.ml-agents/Runtime/Inference/SentisModelParamLoader.cs).
If you wish to run inference on an externally trained model, you should use
Sentis directly, instead of trying to run it through ML-Agents.


@ -69,7 +69,7 @@ After launching your EC2 instance using the ami and ssh into it:
2. Clone the ML-Agents repo and install the required Python packages
```sh
git clone --branch release_21 https://github.com/Unity-Technologies/ml-agents.git
git clone --branch release_22 https://github.com/Unity-Technologies/ml-agents.git
cd ml-agents/ml-agents/
pip3 install -e .
```


@ -33,7 +33,7 @@ view the documentation for doing so [here](#custom-instances).
instance, and set it as the working directory.
2. Install the required packages:
Torch: `pip3 install torch==1.7.0 -f https://download.pytorch.org/whl/torch_stable.html` and
MLAgents: `python -m pip install mlagents==1.0.0`
MLAgents: `python -m pip install mlagents==1.1.0`
## Testing


@ -17,7 +17,7 @@ communication.
Install the `mlagents_envs` package with:
```sh
python -m pip install mlagents_envs==1.0.0
python -m pip install mlagents_envs==1.1.0
```
## Usage & More Information


@ -16,7 +16,7 @@ package.
Install the `mlagents` package with:
```sh
python -m pip install mlagents==1.0.0
python -m pip install mlagents==1.1.0
```
## Usage & More Information


@ -140,7 +140,8 @@ versions = [
ReleaseInfo("release_18", "2.1.0", "0.27.0", "June 9, 2021"),
ReleaseInfo("release_19", "2.2.1", "0.28.0", "January 14, 2022"),
ReleaseInfo("release_20", "2.3.0", "0.30.0", "November 21, 2022"),
ReleaseInfo("release_21", "3.0.0", "1.0.0", "October 9, 2023"),
ReleaseInfo("release_21", "3.0.0-exp.1", "1.0.0", "October 9, 2023"),
ReleaseInfo("release_22", "3.0.0", "1.1.0", "October 5, 2024"),
# Verified releases
# ReleaseInfo("", "1.0.8", "0.16.1", "May 26, 2021", is_verified=True),
# ReleaseInfo("", "1.0.7", "0.16.1", "March 8, 2021", is_verified=True),

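For readers unfamiliar with `make_readme_table.py`, the entries above suggest a record roughly like the following; the field names here are assumptions inferred from the positional arguments, not the script's actual definition.

```python
from dataclasses import dataclass

# Hypothetical shape inferred from the call sites above; the real definition in
# make_readme_table.py may differ in field names and helpers.
@dataclass
class ReleaseInfo:
    release_tag: str      # e.g. "release_22"
    csharp_version: str   # com.unity.ml-agents package version, e.g. "3.0.0"
    python_version: str   # mlagents PyPI version, e.g. "1.1.0"
    release_date: str     # human-readable date, e.g. "October 5, 2024"
    is_verified: bool = False
```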

@ -26,7 +26,6 @@ __release_tag__ = {release_tag}
def _escape_non_none(s: Optional[str]) -> str:
"""
Returns s escaped in quotes if it is non-None, els e "None"
Returns s escaped in quotes if it is non-None, els e "None"
:param s:
:return:
"""