# Setup guide
This document describes how to set up all the dependencies to run the notebooks in this repository on the following platforms:
* Local (Linux, MacOS or Windows) or [DSVM](https://azure.microsoft.com/en-us/services/virtual-machines/data-science-virtual-machines/) (Linux or Windows)
* [Azure Databricks](https://azure.microsoft.com/en-us/services/databricks/)
* Docker container
## Table of Contents
* [Compute environments](#compute-environments)
* [Setup guide for Local or DSVM](#setup-guide-for-local-or-dsvm)
  * [Requirements](#requirements)
  * [Dependencies setup](#dependencies-setup)
  * [Register the conda environment as a kernel in Jupyter](#register-the-conda-environment-as-a-kernel-in-jupyter)
  * [Troubleshooting for the DSVM](#troubleshooting-for-the-dsvm)
* [Setup guide for Azure Databricks](#setup-guide-for-azure-databricks)
  * [Requirements of Azure Databricks](#requirements-of-azure-databricks)
  * [Repository installation](#repository-installation)
  * [Troubleshooting Installation on Azure Databricks](#troubleshooting-installation-on-azure-databricks)
  * [Prepare Azure Databricks for Operationalization](#prepare-azure-databricks-for-operationalization)
* [Install the utilities via PIP](#install-the-utilities-via-pip)
* [Setup guide for Docker](#setup-guide-for-docker)
## Compute environments
Depending on the type of recommender system and the notebook that needs to be run, there are different computational requirements.
Currently, this repository supports **Python CPU**, **Python GPU** and **PySpark**.
## Setup guide for Local or DSVM
### Requirements
* A machine running Linux, MacOS or Windows
* Anaconda with Python version >= 3.6
  * This is pre-installed on the Azure DSVM, so you can run the following steps directly. To set up your local machine, [Miniconda](https://docs.conda.io/en/latest/miniconda.html) is a quick way to get started.
* [Apache Spark](https://spark.apache.org/downloads.html) (this is only needed for the PySpark environment).
### Dependencies setup
As a pre-requisite to install the dependencies with Conda, make sure that Anaconda and the package manager Conda are both up to date:
```{shell}
conda update conda -n root
conda update anaconda # use 'conda install anaconda' if the package is not installed
```
We provide a script, [generate_conda_file.py](tools/generate_conda_file.py), to generate a conda environment YAML file, which you can use to create the target environment with Python 3.6 and all the correct dependencies.
**NOTE** - the `xlearn` package depends on `cmake`. If you use the `xlearn`-related notebooks or scripts, make sure `cmake` is installed on your system. Detailed instructions for installing `cmake` can be found [here](https://vitux.com/how-to-install-cmake-on-ubuntu-18-04/). The default version of `cmake` is 3.15.2; you can specify a different version by configuring the `CMAKE` argument when building the Docker image.
Assuming the repo is cloned as `Recommenders` in the local system, to install **a default (Python CPU) environment**:

```{shell}
cd Recommenders
python tools/generate_conda_file.py
conda env create -f reco_base.yaml
```

You can specify the environment name as well with the flag `-n`.
Click on the following menus to see how to install Python GPU and PySpark environments:
<details>
<summary><strong><em>Python GPU environment</em></strong></summary>

Assuming that you have a GPU machine, to install the Python GPU environment:

```{shell}
cd Recommenders
python tools/generate_conda_file.py --gpu
conda env create -f reco_gpu.yaml
```

</details>
<details>
<summary><strong><em>PySpark environment</em></strong></summary>

To install the PySpark environment:

```{shell}
cd Recommenders
python tools/generate_conda_file.py --pyspark
conda env create -f reco_pyspark.yaml
```

> Additionally, if you want to test a particular version of Spark, you may pass the `--pyspark-version` argument:
>
> ```{shell}
> python tools/generate_conda_file.py --pyspark-version 2.4.0
> ```

Then, we need to set the environment variables `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON` to point to the conda Python executable.
Click on the following menus to see details:
<details>
<summary><strong><em>Linux or MacOS</em></strong></summary>

To set these variables every time the environment is activated, we can follow the steps of this [guide](https://conda.io/docs/user-guide/tasks/manage-environments.html#macos-and-linux).

First, get the path where the environment `reco_pyspark` is installed:

```{shell}
RECO_ENV=$(conda env list | grep reco_pyspark | awk '{print $NF}')
```

Then, create the file `$RECO_ENV/etc/conda/activate.d/env_vars.sh` and add:

```{shell}
#!/bin/sh
RECO_ENV=$(conda env list | grep reco_pyspark | awk '{print $NF}')
export PYSPARK_PYTHON=$RECO_ENV/bin/python
export PYSPARK_DRIVER_PYTHON=$RECO_ENV/bin/python
export SPARK_HOME_BACKUP=$SPARK_HOME
unset SPARK_HOME
```

This will export the variables every time we do `conda activate reco_pyspark`.

To unset these variables when we deactivate the environment, create the file `$RECO_ENV/etc/conda/deactivate.d/env_vars.sh` and add:

```{shell}
#!/bin/sh
unset PYSPARK_PYTHON
unset PYSPARK_DRIVER_PYTHON
export SPARK_HOME=$SPARK_HOME_BACKUP
unset SPARK_HOME_BACKUP
```

</details>
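
For reference, the `grep reco_pyspark | awk '{print $NF}'` pipeline above simply keeps the last whitespace-separated field of the matching line, which in `conda env list` output is the environment path. A self-contained sketch against a simulated line (the path is a made-up example):

```{shell}
# Simulated line in the format printed by `conda env list` (hypothetical path).
sample="reco_pyspark          /home/user/miniconda3/envs/reco_pyspark"

# awk '{print $NF}' keeps the last field, i.e. the environment path.
RECO_ENV=$(printf '%s\n' "$sample" | grep reco_pyspark | awk '{print $NF}')
echo "$RECO_ENV"
```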
<details><summary><strong><em>Windows</em></strong></summary>

To set these variables every time the environment is activated, we can follow the steps of this [guide](https://conda.io/docs/user-guide/tasks/manage-environments.html#windows).

First, get the path where the environment `reco_pyspark` is installed:

```{shell}
for /f "delims=" %A in ('conda env list ^| grep reco_pyspark ^| awk "{print $NF}"') do set "RECO_ENV=%A"
```

Then, create the file `%RECO_ENV%\etc\conda\activate.d\env_vars.bat` and add:

```{shell}
@echo off
for /f "delims=" %%A in ('conda env list ^| grep reco_pyspark ^| awk "{print $NF}"') do set "RECO_ENV=%%A"
set PYSPARK_PYTHON=%RECO_ENV%\python.exe
set PYSPARK_DRIVER_PYTHON=%RECO_ENV%\python.exe
set SPARK_HOME_BACKUP=%SPARK_HOME%
set SPARK_HOME=
set PYTHONPATH_BACKUP=%PYTHONPATH%
set PYTHONPATH=
```

This will set the variables every time we do `conda activate reco_pyspark`.

To unset these variables when we deactivate the environment, create the file `%RECO_ENV%\etc\conda\deactivate.d\env_vars.bat` and add:

```{shell}
@echo off
set PYSPARK_PYTHON=
set PYSPARK_DRIVER_PYTHON=
set SPARK_HOME=%SPARK_HOME_BACKUP%
set SPARK_HOME_BACKUP=
set PYTHONPATH=%PYTHONPATH_BACKUP%
set PYTHONPATH_BACKUP=
```

</details>
</details>
<details>
<summary><strong><em>Full (PySpark & Python GPU) environment</em></strong></summary>

With this environment, you can run both PySpark and Python GPU notebooks in this repository. To install the environment:

```{shell}
cd Recommenders
python tools/generate_conda_file.py --gpu --pyspark
conda env create -f reco_full.yaml
```

Then, we need to set the environment variables `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON` to point to the conda Python executable. See the **PySpark environment** section above for details on how to set those variables, replacing the string `reco_pyspark` in the commands with `reco_full`.

</details>

### Register the conda environment as a kernel in Jupyter

We can register our created conda environment to appear as a kernel in the Jupyter notebooks:

```{shell}
conda activate my_env_name
python -m ipykernel install --user --name my_env_name --display-name "Python (my_env_name)"
```

If you are using the DSVM, you can [connect to JupyterHub](https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro#jupyterhub-and-jupyterlab) by browsing to `https://your-vm-ip:8000`.
### Troubleshooting for the DSVM
* We found that there can be problems if the Spark version of the machine is not the same as the one in the conda file. You can use the option `--pyspark-version` to address this issue.
* When running Spark on a single local node it is possible to run out of disk space as temporary files are written to the user's home directory. To avoid this on a DSVM, we attached an additional disk to the DSVM and made modifications to the Spark configuration. This is done by including the following lines in the file at `/dsvm/tools/spark/current/conf/spark-env.sh`.
```{shell}
SPARK_LOCAL_DIRS="/mnt"
SPARK_WORKER_DIR="/mnt"
SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.appDataTtl=3600 -Dspark.worker.cleanup.interval=300 -Dspark.storage.cleanupFilesAfterExecutorExit=true"
```
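
Note that the `-D` entries in `SPARK_WORKER_OPTS` are ordinary JVM system properties and must be separated by whitespace rather than commas (a trailing comma would become part of the preceding property's value). A small pure-shell sketch, no Spark required, that checks each token is a well-formed `-Dkey=value` property:

```{shell}
SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.appDataTtl=3600 -Dspark.worker.cleanup.interval=300 -Dspark.storage.cleanupFilesAfterExecutorExit=true"

count=0
for opt in $SPARK_WORKER_OPTS; do
  case "$opt" in
    -D*=*) count=$((count + 1)) ;;  # well-formed -Dkey=value property
    *) echo "unexpected token: $opt" ;;
  esac
done
echo "$count properties"
```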
## Setup guide for Azure Databricks
### Requirements of Azure Databricks
* Databricks Runtime version 4.3 (Apache Spark 2.3.1, Scala 2.11) or greater
* Python 3
An example of how to create an Azure Databricks workspace and an Apache Spark cluster within the workspace can be found [here](https://docs.microsoft.com/en-us/azure/azure-databricks/quickstart-create-databricks-workspace-portal). To utilize deep learning models and GPUs, you may set up a GPU-enabled cluster. For more details on this topic, please see the [Azure Databricks deep learning guide](https://docs.azuredatabricks.net/applications/deep-learning/index.html).
### Repository installation
You can set up the repository as a library on Databricks either manually or by running an [installation script](tools/databricks_install.py). Both options assume you have access to a provisioned Databricks workspace and cluster and that you have the appropriate permissions to install libraries.
<details>
<summary><strong><em>Quick install</em></strong></summary>
This option utilizes an installation script to do the setup, and it requires additional dependencies in the environment used to execute the script.
> To run the script, following **prerequisites** are required:
> * Setup CLI authentication for [Azure Databricks CLI (command-line interface)](https://docs.azuredatabricks.net/user-guide/dev-tools/databricks-cli.html#install-the-cli). Please find details about how to create a token and set authentication [here](https://docs.azuredatabricks.net/user-guide/dev-tools/databricks-cli.html#set-up-authentication). Very briefly, you can install and configure your environment with the following commands.
>
> ```{shell}
> conda activate reco_pyspark
> databricks configure --token
> ```
>
> * Get the target **cluster id** and **start** the cluster if its status is *TERMINATED*.
> * You can get the cluster id from the databricks CLI with:
> ```{shell}
> databricks clusters list
> ```
> * If required, you can start the cluster with:
> ```{shell}
> databricks clusters start --cluster-id <CLUSTER_ID>
> ```
The installation script has a number of options: it can work with different databricks-cli profiles, install a version of the mmlspark library, overwrite the libraries, or prepare the cluster for operationalization. For all options, please see:
```{shell}
python tools/databricks_install.py -h
```
Once you have confirmed the databricks cluster is *RUNNING*, install the modules within this repository with the following commands.
```{shell}
cd Recommenders
python tools/databricks_install.py <CLUSTER_ID>
```
**Note** If you are planning on running through the sample code for operationalization [here](examples/05_operationalize/als_movie_o16n.ipynb), you need to prepare the cluster for operationalization. You can do so by adding an additional option to the script run. `<CLUSTER_ID>` is the same as mentioned above and can be identified by running `databricks clusters list` and selecting the appropriate cluster.
```{shell}
python tools/databricks_install.py --prepare-o16n <CLUSTER_ID>
```
See below for details.
</details>
<details>
<summary><strong><em>Manual setup</em></strong></summary>
To install the repo manually onto Databricks, follow these steps:
1. Clone the Microsoft Recommenders repository to your local computer.
2. Zip the contents inside the Recommenders folder (Azure Databricks requires compressed folders to have the `.egg` suffix, so we don't use the standard `.zip`):
```{shell}
cd Recommenders
zip -r Recommenders.egg .
```
3. Once your cluster has started, go to the Databricks workspace, and select the `Home` button.
4. Your `Home` directory should appear in a panel. Right click within your directory, and select `Import`.
5. In the pop-up window, there is an option to import a library, where it says: `(To import a library, such as a jar or egg, click here)`. Select `click here`.
6. In the next screen, select the option `Upload Python Egg or PyPI` in the first menu.
7. Next, click on the box that contains the text `Drop library egg here to upload` and use the file selector to choose the `Recommenders.egg` file you just created, and select `Open`.
8. Click on the `Create library`. This will upload the egg and make it available in your workspace.
9. Finally, in the next menu, attach the library to your cluster.
</details>
### Confirm Installation
After installation, you can create a new notebook on Databricks and import the utilities to confirm that the import works:
```{python}
import reco_utils
```
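
If the import fails, a low-impact check from a shell attached to the same Python environment is `importlib.util.find_spec`, which locates a module without importing it. A sketch, using the standard-library module `json` as a stand-in for `reco_utils` so it runs anywhere (assumes a `python` executable on the path):

```{shell}
# Prints True if the module can be located on the current Python path.
# Substitute reco_utils for json when checking on the Databricks cluster.
found=$(python -c "import importlib.util; print(importlib.util.find_spec('json') is not None)")
echo "$found"
```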
### Troubleshooting Installation on Azure Databricks
* For the [reco_utils](reco_utils) import to work on Databricks, it is important to zip the content correctly. The zip has to be performed inside the Recommenders folder; if you zip from the directory above the Recommenders folder, the import will not work.
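
To see why the working directory matters, here is a runnable sketch over a toy directory tree (hypothetical paths, and `python -m zipfile` standing in for `zip` so it is portable). Only the archive created from inside the folder has `reco_utils` at its root:

```{shell}
tmp=$(mktemp -d)
mkdir -p "$tmp/Recommenders/reco_utils"
touch "$tmp/Recommenders/reco_utils/__init__.py"

# Correct: zip from inside the Recommenders folder -> reco_utils at archive root.
cd "$tmp/Recommenders"
python -m zipfile -c right.egg reco_utils/

# Incorrect: zip from the parent directory -> extra Recommenders/ top level.
cd "$tmp"
python -m zipfile -c wrong.egg Recommenders/

# List both archives to compare the entry paths.
python -m zipfile -l "$tmp/Recommenders/right.egg"
python -m zipfile -l "$tmp/wrong.egg"
```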
### Prepare Azure Databricks for Operationalization
This repository includes an end-to-end example notebook that uses Azure Databricks to estimate a recommendation model using matrix factorization with Alternating Least Squares, writes pre-computed recommendations to Azure Cosmos DB, and then creates a real-time scoring service that retrieves the recommendations from Cosmos DB. In order to execute that [notebook](examples/05_operationalize/als_movie_o16n.ipynb), you must install the Recommenders repository as a library (as described above), **AND** you must also install some additional dependencies. With the *Quick install* method, you just need to pass an additional option to the [installation script](tools/databricks_install.py).
<details>
<summary><strong><em>Quick install</em></strong></summary>
This option utilizes the installation script to do the setup. Just run the installation script
with an additional option. If you have already run the script once to upload and install the `Recommenders.egg` library, you can also add an `--overwrite` option:
```{shell}
2020-06-16 17:41:58 +03:00
python tools/databricks_install.py --overwrite --prepare-o16n <CLUSTER_ID>
```
This script does all of the steps described in the *Manual setup* section below.
</details>
<details>
<summary><strong><em>Manual setup</em></strong></summary>
You must install three packages as libraries from PyPI:
* `azure-cli==2.0.56`
* `azureml-sdk[databricks]==1.0.8`
* `pydocumentdb==2.3.3`
You can follow instructions [here](https://docs.azuredatabricks.net/user-guide/libraries.html#install-a-library-on-a-cluster) for details on how to install packages from PyPI.
Additionally, you must install the [spark-cosmosdb connector](https://docs.databricks.com/spark/latest/data-sources/azure/cosmosdb-connector.html) on the cluster. The easiest way to manually do that is to:
1. Download the [appropriate jar](https://search.maven.org/remotecontent?filepath=com/microsoft/azure/azure-cosmosdb-spark_2.3.0_2.11/1.2.2/azure-cosmosdb-spark_2.3.0_2.11-1.2.2-uber.jar) from MAVEN. **NOTE** This is the appropriate jar for spark versions `2.3.X`, and is the appropriate version for the recommended Azure Databricks run-time detailed above.
2. Upload and install the jar by:
1. Log into your `Azure Databricks` workspace
2. Select the `Clusters` button on the left.
3. Select the cluster on which you want to import the library.
4. Select the `Upload` and `Jar` options, and click in the box that has the text `Drop JAR here` in it.
5. Navigate to the downloaded `.jar` file, select it, and click `Open`.
6. Click on `Install`.
7. Restart the cluster.
</details>
## Install the utilities via PIP
A [setup.py](setup.py) file is provided in order to simplify the installation of the utilities in this repo from the main directory.

This still requires the conda environment to be installed as described above. Once the necessary dependencies are installed, you can use the following command to install `reco_utils` as a Python package:

```{shell}
pip install -e .
```

It is also possible to install directly from GitHub, or from a specific branch:

```{shell}
pip install -e git+https://github.com/microsoft/recommenders/#egg=pkg
pip install -e git+https://github.com/microsoft/recommenders/@staging#egg=pkg
```

**NOTE** - The pip installation does not install any of the necessary package dependencies; it is expected that conda will be used as shown above to set up the environment for the utilities being used.
## Setup guide for Docker
A [Dockerfile](tools/docker/Dockerfile) is provided to build images of the repository to simplify setup for different environments. You will need [Docker Engine](https://docs.docker.com/install/) installed on your system.
*Note: `docker` is already available on the Azure Data Science Virtual Machine.*
See the guidelines in the Docker [README](tools/docker/README.md) for detailed instructions on how to build and run images for different environments.

Example commands to build and run the Docker image with the base CPU environment:
```{shell}
DOCKER_BUILDKIT=1 docker build -t recommenders:cpu --build-arg ENV="cpu" .
docker run -p 8888:8888 -d recommenders:cpu
```
You can then open the Jupyter notebook server at http://localhost:8888.