ml-agents/.gitignore

# Output Artifacts (Legacy)
/models
/summaries
# Output Artifacts
/results
# Output Builds
/Builds
# Training environments
/envs
# Environment logfile
*Project.log
# Custom settings asset
*.settings.asset*
# Visual Studio 2015 cache directory
/Project/.vs/
# Autogenerated VS/MD/Consulo solution and project files
/com.unity.ml-agentsExportedObj/
/com.unity.ml-agents.consulo/
*.csproj
*.unityproj
*.sln
*.suo
*.tmp
*.user
*.userprefs
*.pidb
*.booproj
*.svd
*.pdb
# Unity3D generated meta files
*.pidb.meta
# Unity3D Generated File On Crash Reports
/com.unity.ml-agents/sysinfo.txt
# Builds
*.apk
*.unitypackage
*.app
*.exe
*.x86_64
*.x86
# Plugins
/com.unity.ml-agents/VideoRecorder*
# Generated doc folders
/docs/html
# Mac hidden files
*.DS_Store
*/.ipynb_checkpoints
*/.idea
*.pyc
*.idea/misc.xml
*.idea/modules.xml
*.idea/
*.iml
*.cache
*/build/
*/dist/
*.egg-info*
*.eggs*
*.gitignore.swp
# VSCode hidden files
*.vscode/
.DS_Store
.ipynb_checkpoints
# pytest cache
*.pytest_cache/
# Ignore compiled protobuf files.
*Grpc.Tools*
# Ignore PyPi build files.
dist/
build/
# Python virtual environment
venv/
.mypy_cache/
# Code coverage report
.coverage
coverage.xml
/htmlcov/
**/UserSettings/*