MLOS is a Data Science powered infrastructure and methodology to democratize and automate Performance Engineering. MLOS enables continuous, instance-based, robust, and trackable systems optimization.

MLOS

This repository contains a stripped-down implementation of essentially just the core optimizer and config space description APIs from the original MLOS, as well as the mlos-bench module, which is intended to help automate and manage running experiments for autotuning systems with mlos-core.

It is intended to provide a simplified abstraction that is easier to consume (e.g. via pip) and has fewer dependencies, in order to:

  • describe a space of context, parameters, their ranges, constraints, etc., along with the result objectives
  • provide an "optimizer" service abstraction (e.g. register() and suggest()) so that different search implementations (e.g. random, Bayesian optimization, etc.) can easily be swapped out (a short sketch of this pattern follows below)
  • provide some helpers for automating optimization experiment runner loops and data collection

For these design requirements we intend to reuse as much as possible from existing OSS libraries and to layer policies and optimizations specifically geared towards autotuning on top.
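
To make the register()/suggest() abstraction above concrete, here is a minimal, self-contained sketch of the kind of optimization loop it enables. The RandomSearchOptimizer class, run_benchmark() function, and the cache_mb parameter are purely illustrative stand-ins, not part of the mlos_core API; see the notebooks referenced below for actual usage.

    import random

    # Purely illustrative stand-in for an optimizer behind the abstraction;
    # not the actual mlos_core API.
    class RandomSearchOptimizer:
        def __init__(self, param_ranges):
            self._param_ranges = param_ranges   # e.g. {"cache_mb": (64, 4096)}
            self._observations = []             # (config, score) pairs seen so far

        def suggest(self):
            # Propose the next configuration to try (here: uniform random search).
            return {name: random.randint(low, high)
                    for name, (low, high) in self._param_ranges.items()}

        def register(self, config, score):
            # Record the measured result so a smarter optimizer could exploit it.
            self._observations.append((config, score))

        def best(self):
            return min(self._observations, key=lambda obs: obs[1])

    def run_benchmark(config):
        # Stand-in for running the real system/benchmark with this config.
        return abs(config["cache_mb"] - 1024)

    optimizer = RandomSearchOptimizer({"cache_mb": (64, 4096)})
    for _ in range(20):
        config = optimizer.suggest()        # ask for a configuration
        score = run_benchmark(config)       # measure it
        optimizer.register(config, score)   # feed the result back
    print("best (config, score) found:", optimizer.best())

Swapping in a Bayesian optimization backend only changes the object behind suggest()/register(); the surrounding experiment loop and data collection stay the same, which is the separation of concerns mlos_core and mlos_bench aim to provide.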

Getting Started

The development environment for MLOS uses conda to ease dependency management.

Devcontainer

For a quick start, you can use the provided VSCode devcontainer configuration.

Simply open the project in VSCode and follow the prompts to build and open the devcontainer; the conda environment and additional tools will be installed automatically inside the container.

Manually

See Also: conda install instructions

Note: to support Windows we currently rely on some pre-compiled packages from conda-forge channels, which increases the conda solver time during environment create/update.

To work around this, the (currently) experimental libmamba solver can be used.

See https://github.com/conda-incubator/conda-libmamba-solver#getting-started for more details.

  1. Create the mlos Conda environment.

    conda env create -f conda-envs/mlos.yml
    

    See the conda-envs/ directory for additional conda environment files, including those used for Windows (e.g. mlos-windows.yml).

    or

    # This will also ensure the environment is kept up to date using "conda env update -f conda-envs/mlos.yml"
    make conda-env
    

    Note: the latter expects a *nix environment.

  2. Initialize the shell environment.

    conda activate mlos
    
  3. For an example of using the mlos_core optimizer APIs, run the BayesianOptimization.ipynb notebook (a rough sketch of the basic usage pattern is also included after this list).

  4. For an example of using the mlos_bench tool to run an experiment, see the mlos_bench Quickstart README.

    Here's a quick summary:

    ./scripts/generate-azure-credentials-config > global_config_azure.jsonc
    
    # run a simple experiment
    mlos_bench --config ./mlos_bench/mlos_bench/config/cli/azure-redis-1shot.jsonc
    

    See Also:
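
As mentioned in step 3 above, the BayesianOptimization.ipynb notebook demonstrates the mlos_core optimizer APIs. Below is a very rough sketch of the pattern it walks through; the notebook is the authoritative reference, and the exact class and argument names here are recalled from memory and may differ between mlos_core versions.

    # Rough sketch only -- see BayesianOptimization.ipynb for canonical usage;
    # exact names/signatures may differ across mlos_core versions.
    import pandas as pd
    import ConfigSpace as CS
    from mlos_core.optimizers import OptimizerFactory, OptimizerType

    # Describe the tunable parameter space using ConfigSpace.
    input_space = CS.ConfigurationSpace(seed=1234)
    input_space.add_hyperparameter(
        CS.UniformIntegerHyperparameter("cache_mb", lower=64, upper=4096))

    optimizer = OptimizerFactory.create(
        parameter_space=input_space,
        optimizer_type=OptimizerType.SMAC,   # or e.g. OptimizerType.FLAML
    )

    def run_benchmark(config_df: pd.DataFrame) -> float:
        # Stand-in for measuring the real system under the suggested config.
        return abs(float(config_df["cache_mb"].iloc[0]) - 1024)

    for _ in range(10):
        suggestion = optimizer.suggest()    # DataFrame of suggested config(s)
        optimizer.register(suggestion, pd.Series([run_benchmark(suggestion)]))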

Distributing

  1. Build the wheel file(s)

    make dist
    
  2. Install it (e.g. after copying it somewhere else).

    # this will install just the optimizer component with SMAC support:
    pip install dist/mlos_core-0.1.0-py3-none-any.whl[smac]

    # this will install just the optimizer component with FLAML support:
    pip install dist/mlos_core-0.1.0-py3-none-any.whl[flaml]

    # this will install just the optimizer component with both SMAC and FLAML support:
    pip install dist/mlos_core-0.1.0-py3-none-any.whl[smac,flaml]
    
    # this will install both the optimizer and the experiment runner:
    pip install dist/mlos_bench-0.1.0-py3-none-any.whl
    

    Note: exact versions may differ due to automatic versioning.

See Also