
MARO

The Multi-Agent Resource Optimization (MARO) platform is an instance of Reinforcement Learning as a Service (RaaS) for real-world resource optimization. It can be applied to many important industrial domains, such as container inventory management in logistics, bike repositioning in transportation, virtual machine provisioning in data centers, and asset management in finance. Besides Reinforcement Learning (RL), it also supports other planning/decision mechanisms, such as Operations Research.

Key Components of MARO:

  • Simulation toolkit: provides predefined scenarios and reusable building blocks for constructing new scenarios.
  • RL toolkit: provides a full-stack abstraction for RL, including agent manager, agent, RL algorithms, learner, actor, and various shapers (a rough illustration follows the diagram below).
  • Distributed toolkit: provides distributed communication components, an interface for user-defined message auto-handling functions, cluster provisioning, and job orchestration.

MARO Key Components
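
As a rough illustration of how these pieces fit together, the sketch below drives the simulation toolkit's Env with a placeholder agent. The NoOpAgent class is hypothetical and only stands in for the RL toolkit's agent abstraction; it returns None, which Env.step accepts as a no-op decision (see the Quick Example further down for the plain environment loop).

from maro.simulator import Env


class NoOpAgent:
    """Hypothetical stand-in for an RL-toolkit agent; always makes a no-op decision."""

    def choose_action(self, decision_event):
        # A real agent would shape an observation from decision_event and the
        # environment snapshots, then return a scenario-specific action.
        return None


env = Env(scenario="cim", topology="toy.5p_ssddd_l0.0", start_tick=0, durations=100)
agent = NoOpAgent()

metrics, decision_event, is_done = env.step(None)
while not is_done:
    metrics, decision_event, is_done = env.step(agent.choose_action(decision_event))

print(f"environment metrics: {env.metrics}")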

Contents

File/folder   Description
maro          MARO source code.
docs          MARO documentation, hosted on readthedocs.
examples      Showcases of MARO.
notebooks     MARO quick-start notebooks.

Install MARO from PyPI

  • Mac OS / Linux

    pip install pymaro
    
  • Windows

    # Install torch first, if it is not already installed.
    pip install torch===1.6.0 torchvision===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
    
    pip install pymaro
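
Once installed on either platform, you can sanity-check the installation with a one-liner (a minimal check; it assumes only that maro.simulator is importable, which the Quick Example below also relies on):

    python -c "from maro.simulator import Env; print('MARO is ready')"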
    

Install MARO from Source (Editable Mode)

  • Prerequisites

  • Enable Virtual Environment

    • Mac OS / Linux

      # If your environment is not clean, create a virtual environment first.
      python -m venv maro_venv
      source ./maro_venv/bin/activate
      
    • Windows

      # If your environment is not clean, create a virtual environment first.
      python -m venv maro_venv
      
      # You may need this for SecurityError in PowerShell.
      Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy Unrestricted
      
      # Activate the virtual environment.
      .\maro_venv\Scripts\activate
      
  • Install MARO

    • Mac OS / Linux

      # Install MARO from source.
      bash scripts/install_maro.sh
      
    • Windows

      # Install MARO from source.
      .\scripts\install_maro.bat
      

Quick Example

from maro.simulator import Env

# Create a Container Inventory Management (CIM) environment that runs for 100 ticks.
env = Env(scenario="cim", topology="toy.5p_ssddd_l0.0", start_tick=0, durations=100)

# Passing None as the action makes a no-op decision at each decision event.
metrics, decision_event, is_done = env.step(None)

while not is_done:
    metrics, decision_event, is_done = env.step(None)

# env.metrics holds the business metrics collected during the run.
print(f"environment metrics: {env.metrics}")

Run Playground

  • Pull from Docker Hub

    # Run playground container.
    # Redis commander (GUI for redis) -> http://127.0.0.1:40009
    # Locally hosted docs -> http://127.0.0.1:40010
    # Jupyter lab with maro -> http://127.0.0.1:40011
    docker run -p 40009:40009 -p 40010:40010 -p 40011:40011 arthursjiang/maro:cpu
    
  • Build from source

    • Mac OS / Linux

      # Build playground image.
      bash ./scripts/build_playground.sh
      
      # Run playground container.
      # Redis commander (GUI for redis) -> http://127.0.0.1:40009
      # Locally hosted docs -> http://127.0.0.1:40010
      # Jupyter lab with maro -> http://127.0.0.1:40011
      docker run -p 40009:40009 -p 40010:40010 -p 40011:40011 maro/playground:cpu
      
    • Windows

      # Build playground image.
      .\scripts\build_playground.bat
      
      # Run playground container.
      # Redis commander (GUI for redis) -> http://127.0.0.1:40009
      # Locally hosted docs -> http://127.0.0.1:40010
      # Jupyter lab with maro -> http://127.0.0.1:40011
      docker run -p 40009:40009 -p 40010:40010 -p 40011:40011 maro/playground:cpu
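
If you prefer to manage the playground with Docker Compose, the snippet below is an equivalent, hypothetical docker-compose.yml; the image names and ports are taken from the commands above, everything else is standard Compose syntax.

version: "3"
services:
  playground:
    image: arthursjiang/maro:cpu   # or maro/playground:cpu if built from source
    ports:
      - "40009:40009"   # Redis commander (GUI for redis)
      - "40010:40010"   # Locally hosted docs
      - "40011:40011"   # Jupyter lab with maro

Start it with docker-compose up, and the same three URLs listed above become available.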
      

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.