This commit is contained in:
Brian Kroth 2024-01-19 14:13:47 -06:00 committed by GitHub
Parent 342eb5b259
Commit 08b3a94a7c
No key found matching this signature
GPG key ID: B5690EEEBB952194
5 changed files: 102 additions and 9 deletions

View file

@@ -8,11 +8,10 @@ Some notes for maintainers.
```sh
git checkout -b bump-version main
./scripts/update-version.sh --no-tag patch # or minor or major
```

> By default this would create a local tag, but we would have to overwrite it later, so we skip that step.
2. Test it!
@@ -44,12 +43,14 @@ Some notes for maintainers.
make dist-test
```
6. Update the tag remotely to the MLOS upstream repo.
```sh
git push --tags # upstream (if that's what you called your upstream git remote)
```
7. Make a "Release" on GitHub.
> Once this is done, the rules in [`.github/workflows/devcontainer.yml`](./.github/workflows/devcontainer.yml) will automatically publish the wheels to [pypi](https://pypi.org/project/mlos-core/) and tagged docker images to ACR.
> \
> Note: This may fail if the version number is already published to pypi, in which case start from the beginning with a new patch version.

View file

@@ -51,6 +51,10 @@ To do this this repo provides two Python modules, which can be used independentl
- [`mlos-bench`](./mlos_bench/) provides a framework to help automate running benchmarks as described above.
- [`mlos-viz`](./mlos_viz/) provides some simple APIs to help automate visualizing the results of benchmark experiments and their trials.
It provides a simple `plot(experiment_data)` API, where `experiment_data` is obtained from the `mlos_bench.storage` module.
- [`mlos-core`](./mlos_core/) provides an abstraction around existing optimization frameworks (e.g., [FLAML](https://github.com/microsoft/FLAML), [SMAC](https://github.com/automl/SMAC3), etc.)
It is intended to provide a simple, easy-to-consume (e.g., via `pip`), low-dependency abstraction to
@@ -127,12 +131,17 @@ See Also:
- [mlos_bench/config](./mlos_bench/mlos_bench/config/) for additional configuration details.
- [sqlite-autotuning](https://github.com/Microsoft-CISL/sqlite-autotuning) for a complete external example of using MLOS to tune `sqlite`.
#### `mlos-viz`
For a simple example of using the `mlos_viz` module to visualize the results of an experiment, see the [`sqlite-autotuning`](https://github.com/Microsoft-CISL/sqlite-autotuning) repository, especially the [mlos_demo_sqlite_teachers.ipynb](https://github.com/Microsoft-CISL/sqlite-autotuning/blob/main/mlos_demo_sqlite_teachers.ipynb) notebook.
## Installation

The MLOS modules are published to [pypi](https://pypi.org) when new releases are tagged:

- [mlos-core](https://pypi.org/project/mlos-core/)
- [mlos-bench](https://pypi.org/project/mlos-bench/)
- [mlos-viz](https://pypi.org/project/mlos-viz/)
To install the latest release, simply run:
@@ -151,14 +160,18 @@ pip install -U "mlos-bench[flaml,azure]"
# this will install both the smac optimizer and the experiment runner with ssh support:
pip install -U "mlos-bench[smac,ssh]"
# this will install the postgres storage backend for mlos-bench
# and mlos-viz for visualizing results:
pip install -U "mlos-bench[postgres]" mlos-viz
```
Details on using a local version from git are available in [CONTRIBUTING.md](./CONTRIBUTING.md).
## See Also

- API and Examples Documentation: <https://microsoft.github.io/MLOS>
- Source Code Repository: <https://github.com/microsoft/MLOS>

### Examples

View file

@@ -27,6 +27,7 @@ It's available for `pip install` via the pypi repository at [mlos-bench](https:/
- [Run the benchmark](#run-the-benchmark)
- [Optimization](#optimization)
- [Resuming interrupted experiments](#resuming-interrupted-experiments)
- [Analyzing Results](#analyzing-results)
<!-- /TOC -->
@@ -210,3 +211,33 @@ Experiments sometimes get interrupted, e.g., due to errors in automation scripts
To resume an interrupted experiment, simply run the same command as before.
As mentioned above in the [importance of the `experiment_id` config](#importance-of-the-experiment-id-config) section, the `experiment_id` is used to resume interrupted experiments, reloading prior trial data for that `experiment_id`.
## Analyzing Results
The results of the experiment are stored in the database specified in the experiment configs (see above).
After running the experiment, you can use the [`mlos-viz`](../mlos_viz/) package to analyze the results in a Jupyter notebook, for instance.
See the [`sqlite-autotuning`](https://github.com/Microsoft-CISL/sqlite-autotuning) repository for a full example.
The `mlos-viz` package uses the `ExperimentData` and `TrialData` [`mlos_bench.storage` APIs](./mlos_bench/storage/) to load the data from the database and visualize it.
For example:
```python
from mlos_bench.storage import from_config
# Specify the experiment_id used for your experiment.
experiment_id = "YourExperimentId"
trial_id = 1
# Specify the path to your storage config file.
storage = from_config(config_file="storage/sqlite.jsonc")
# Access one of the experiments' data:
experiment_data = storage.experiments[experiment_id]
# Full experiment results are accessible in a data frame:
results_data_frame = experiment_data.results
# Individual trial results are accessible via the trials dictionary:
trial_data = experiment_data.trials[trial_id]
# Tunables used for the trial are accessible via the config property:
trial_config = trial_data.config
```
See Also: <https://microsoft.github.io/MLOS> for full API documentation.
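Since `experiment_data.results` is a regular `pandas` `DataFrame`, standard `pandas` operations apply to it as well. A minimal sketch of that kind of analysis follows; the column names and values below are made up for illustration (the real ones depend on your experiment's tunables and optimization targets):

```python
import pandas as pd

# Hypothetical stand-in for experiment_data.results; real column names
# depend on the experiment's tunables and optimization targets.
results_data_frame = pd.DataFrame({
    "trial_id": [1, 2, 3],
    "config.cache_size": [1024, 2048, 4096],  # a made-up tunable column
    "score": [95.0, 87.5, 91.2],              # a made-up target metric
})

# Find the best trial, assuming here that a lower score is better:
best_trial = results_data_frame.loc[results_data_frame["score"].idxmin()]
print(int(best_trial["trial_id"]))  # the trial with the lowest score
```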

View file

@@ -0,0 +1,44 @@
# mlos-bench Storage APIs
The [`mlos_bench.storage`](./) module provides APIs for both storing and retrieving experiment results.
## Write Access APIs for Experimentation
The `mlos_bench.storage` module includes the `Storage`, `Experiment`, and `Trial` classes.
The `Storage` class is the top-level class that implements a storage backend (e.g., SQL RDBMS).
Storage config files are typically needed to configure these backends (e.g., hostname and authentication info), but a default of `storage/sqlite.jsonc` is provided for local-only storage.
The `Experiment` and `Trial` classes are used to store experiment and trial results, respectively.
See Also: <https://microsoft.github.io/MLOS> for full API details.
## Read Access APIs for Analysis
Read access to experiment results is provided via the `ExperimentData` and `TrialData` classes.
The former can be accessed through `Storage.experiments[experiment_id]` and the latter through `ExperimentData.trials[trial_id]`.
These are expected to be used in a more interactive environment such as a Jupyter notebook.
For example:
```python
from mlos_bench.storage import from_config
# Specify the experiment_id used for your experiment.
experiment_id = "YourExperimentId"
trial_id = 1
# Specify the path to your storage config file.
storage = from_config(config_file="storage/sqlite.jsonc")
# Access one of the experiments' data:
experiment_data = storage.experiments[experiment_id]
# Full experiment results are accessible in a data frame:
results_data_frame = experiment_data.results
# Individual trial results are accessible via the trials dictionary:
trial_data = experiment_data.trials[trial_id]
# Tunables used for the trial are accessible via the config property:
trial_config = trial_data.config
```
See the [`sqlite-autotuning`](https://github.com/Microsoft-CISL/sqlite-autotuning) repository for a full example.

View file

@@ -2,6 +2,10 @@
The [`mlos_viz`](./) module is an aid to visualizing experiment benchmarking and optimization results generated and stored by [`mlos_bench`](../mlos_bench/).

Its core API is `mlos_viz.plot(experiment)`, initially implemented as a wrapper around [`dabl`](https://github.com/dabl/dabl) to provide a basic visual overview of the results, where `experiment` is an [`ExperimentData`](../mlos_bench/mlos_bench/storage/base_experiment_data.py) object returned from the [`mlos_bench.storage`](../mlos_bench/mlos_bench/storage/) layer APIs.

In the future, we plan to add more automatic visualizations, interactive visualizations, feedback to the `mlos_bench` experiment trial scheduler, etc.
It's available for `pip install` via the pypi repository at [mlos-viz](https://pypi.org/project/mlos-viz/).
See Also: <https://microsoft.github.io/MLOS> for full API details.
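`mlos_viz.plot()` renders such overviews automatically; purely to illustrate the underlying idea, the sketch below computes a per-tunable-value summary of a results frame by hand. All column names and values here are made up for the example, and this is not how `dabl` itself works internally:

```python
import pandas as pd

# Hypothetical stand-in for an ExperimentData.results frame.
results = pd.DataFrame({
    "config.cache_size": [1024, 1024, 2048, 2048],  # a made-up tunable
    "score": [95.0, 93.0, 87.0, 89.0],              # a made-up metric
})

# Mean score per tunable value -- the kind of relationship an overview
# plot of the experiment results is meant to surface.
summary = results.groupby("config.cache_size")["score"].mean()
print(summary)  # 1024 -> 94.0, 2048 -> 88.0
```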