MLOS is a Data Science powered infrastructure and methodology to democratize and automate Performance Engineering. MLOS enables continuous, instance-based, robust, and trackable systems optimization.
# Pull Request

## Title

Fixes some test issues for MacOS environments and enables CI pipelines.

---

## Description

- [x] mypy fixups
- [x] Enable MacOS build pipeline
- [x] ~Enable MacOS devcontainer tests~
  - [x] ~Needs `docker` enabled (which isn't currently possible)~
- [x] ~Related: add *basic* devcontainer build test and run for Windows as well (also isn't currently possible)~
- [x] Fixes MacOS tests
  - [x] Address parallel runner issues with output dir cleanup
  - [x] Fix ssh tests (also a parallelization issue)
- [x] ~Publish multi-arch docker images (amd64 and arm64) (can't; see above)~

Closes #875 
  
---

## Type of Change

- 🛠️ Bug fix
- New feature
- 🧪 Tests

---

## Testing

Locally tested on MacOS, Windows, and Linux.

---

## Additional Notes (optional)

Leaves some currently disabled stub code for creating devcontainers on MacOS and Windows hosts in the future. Unfortunately, this isn't currently possible with the GitHub Actions runners.

Also did some cosmetic renaming of the CI pipelines to give them clearer descriptions.
This affects which tests are required to pass in the repo settings.
Will adjust that after this is approved.

---


# MLOS


MLOS is a project to enable autotuning for systems.


## Overview

MLOS currently focuses on an offline tuning approach, though we intend to add online tuning in the future.

To accomplish this, the general flow involves:

  • Running a workload (i.e., benchmark) against a system (e.g., a database, web server, or key-value store).
  • Retrieving the results of that benchmark, and perhaps some other metrics from the system.
  • Feeding that data to an optimizer (e.g., using Bayesian Optimization or other techniques).
  • Obtaining a new suggested configuration to try from the optimizer.
  • Applying that configuration to the target system.
  • Repeating until either the exploration budget is consumed or the configurations' performance appears to have converged.
*(Figure: optimization loop. Source: LlamaTune: VLDB 2022)*
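
In Python-flavored pseudocode, that loop looks roughly like the sketch below. The optimizer, apply_config, and run_benchmark names are hypothetical placeholders for illustration only; the mlos_bench and mlos_core modules described below automate and abstract these pieces for real systems.

```python
# Minimal sketch of the tuning loop described above (illustrative only).
# `optimizer`, `apply_config`, and `run_benchmark` are hypothetical placeholders,
# not actual MLOS APIs.

def tuning_loop(optimizer, apply_config, run_benchmark, budget: int):
    """Suggest, apply, benchmark, and register configurations until the budget is spent."""
    best_config, best_score = None, float("inf")
    for _ in range(budget):
        config = optimizer.suggest()        # ask the optimizer for a new config to try
        apply_config(config)                # apply it to the target system
        score = run_benchmark()             # run the workload and collect the metric(s)
        optimizer.register(config, score)   # feed the observation back to the optimizer
        if score < best_score:              # track the best (lowest) score seen so far
            best_config, best_score = config, score
    return best_config, best_score
```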

For a brief overview of some of the features and capabilities of MLOS, please see the following video:

demo video

## Organization

To do this, this repo provides three Python modules, which can be used independently or in combination:

  • mlos-bench provides a framework to help automate running benchmarks as described above.

  • mlos-viz provides some simple APIs to help automate visualizing the results of benchmark experiments and their trials.

    It provides a simple plot(experiment_data) API, where experiment_data is obtained from the mlos_bench.storage module.

  • mlos-core provides an abstraction around existing optimization frameworks (e.g., FLAML, SMAC, etc.).

    It is intended to provide a simple, easy to consume (e.g., via pip), low-dependency abstraction to:

    • describe a space of context, parameters, their ranges, constraints, etc., and the result objectives,
    • expose an "optimizer" service abstraction (e.g., register() and suggest()) so we can easily swap out different implementations or methods of searching (e.g., random, BO, LLM, etc.); a rough sketch of such an interface follows below, and
    • provide some helpers for automating optimization experiment runner loops and data collection.
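
As an illustration only (this is not the actual mlos_core class hierarchy; see the mlos_core documentation for the real classes and signatures), a minimal version of such an optimizer-service abstraction might look roughly like this:

```python
# Hypothetical sketch of the optimizer-service abstraction described above;
# the actual mlos_core classes and signatures may differ.
from abc import ABC, abstractmethod
from typing import Any, Dict


class BaseOptimizer(ABC):
    """Wraps an underlying optimization backend (e.g., random search, SMAC, FLAML)."""

    def __init__(self, parameter_space: Any) -> None:
        # parameter_space describes the tunables: their types, ranges, constraints, etc.
        self.parameter_space = parameter_space

    @abstractmethod
    def suggest(self) -> Dict[str, Any]:
        """Return the next configuration to evaluate."""

    @abstractmethod
    def register(self, config: Dict[str, Any], score: float) -> None:
        """Record the observed objective value for a previously suggested configuration."""
```

Keeping the interface down to suggest() and register() is what makes it easy to swap random search, Bayesian Optimization, or other backends behind the same experiment loop.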

For these design requirements we intend to reuse as much as possible from existing OSS libraries and layer policies and optimizations specifically geared towards autotuning systems on top.

By providing wrappers, we also aim to make it easier to experiment with replacing the underlying optimizer components as new techniques become available or appear to be a better match for certain systems.

## Contributing

See CONTRIBUTING.md for details on development environment and contributing.

## Getting Started

The development environment for MLOS uses conda and devcontainers to ease dependency management, but not all of these are required for deployment.

For instructions on setting up the development environment please try one of the following options:

  • see CONTRIBUTING.md for details on setting up a local development environment
  • launch this repository (or your fork) in a codespace, or
  • have a look at one of the autotuning example repositories like sqlite-autotuning to kick the tires in a codespace in your browser immediately :)

### conda activation

  1. Create the mlos Conda environment.

     ```sh
     conda env create -f conda-envs/mlos.yml
     ```

     See the conda-envs/ directory for additional conda environment files, including those used for Windows (e.g. mlos-windows.yml).

     or

     ```sh
     # This will also ensure the environment is up to date using "conda env update -f conda-envs/mlos.yml".
     make conda-env
     ```

     Note: the latter expects a *nix environment.

  2. Initialize the shell environment.

     ```sh
     conda activate mlos
     ```

## Usage Examples

### mlos-core

For an example of using the mlos_core optimizer APIs run the BayesianOptimization.ipynb notebook.

### mlos-bench

For an example of using the mlos_bench tool to run an experiment, see the mlos_bench Quickstart README.

Here's a quick summary:

```sh
./scripts/generate-azure-credentials-config > global_config_azure.jsonc

# run a simple experiment
mlos_bench --config ./mlos_bench/mlos_bench/config/cli/azure-redis-1shot.jsonc
```


### mlos-viz

For a simple example of using the mlos_viz module to visualize the results of an experiment, see the sqlite-autotuning repository, especially the mlos_demo_sqlite_teachers.ipynb notebook.
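
As a rough sketch of that flow (the storage config path, experiment id, and the exact from_config() parameters below are assumptions; consult the notebook above for working code), loading the experiment data and plotting it looks roughly like:

```python
# Rough sketch only: load experiment results from mlos_bench storage and plot them.
# The config path, experiment id, and exact from_config() parameters are assumptions;
# see the mlos_demo_sqlite_teachers.ipynb notebook for working code.
import mlos_viz
from mlos_bench.storage import from_config

storage = from_config(config_file="storage/sqlite.jsonc")  # assumed storage config path
exp_data = storage.experiments["my-experiment-id"]         # pick an experiment by its id
mlos_viz.plot(exp_data)                                    # visualize its results
```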

## Installation

The MLOS modules are published to PyPI when new releases are tagged.

To install the latest release, simply run:

```sh
# this will install just the optimizer component with SMAC support:
pip install -U "mlos-core[smac]"

# this will install just the optimizer component with FLAML support:
pip install -U "mlos-core[flaml]"

# this will install just the optimizer component with both SMAC and FLAML support:
pip install -U "mlos-core[smac,flaml]"

# this will install both the FLAML optimizer and the experiment runner with Azure support:
pip install -U "mlos-bench[flaml,azure]"

# this will install both the SMAC optimizer and the experiment runner with SSH support:
pip install -U "mlos-bench[smac,ssh]"

# this will install the postgres storage backend for mlos-bench
# and mlos-viz for visualizing results:
pip install -U "mlos-bench[postgres]" mlos-viz
```

Details on using a local version from git are available in CONTRIBUTING.md.

## See Also

### Examples

These can be used as starting points for new autotuning projects.

### Publications