# sarplus
Pronounced *sUrplus*, as it's simply better if not best!

# Features

* Scalable PySpark based [implementation](python/pysarplus/SARPlus.py)
* Fast C++ based [predictions](python/src/pysarplus.cpp)
* Reduced memory consumption: the similarity matrix is cached in memory once per worker and shared across Python executors
* Easy setup using [Spark Packages](https://spark-packages.org/package/eisber/sarplus)
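The per-worker cache relies on the operating system mapping a single file into many processes. A toy sketch of that mechanism in plain Python (illustrative only, not sarplus internals):

```python
import mmap
import os
import tempfile

# write a dummy 'similarity matrix' to disk once, as the first
# executor process on a worker node would
path = os.path.join(tempfile.mkdtemp(), "sim.bin")
with open(path, "wb") as f:
    f.write(b"\x01\x02\x03\x04")

# any number of executor processes can then map the same file read-only;
# the OS page cache keeps a single physical copy in memory
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    assert mm[:4] == b"\x01\x02\x03\x04"
    mm.close()
```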

# Benchmarks

| # Users | # Items | # Ratings | Runtime | Environment | Dataset |
|---------|---------|-----------|---------|-------------|---------|
| 2.5M | 35K | 100M | 1.3h | Databricks, 8 workers, [Azure Standard DS3 v2](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/) | |

# Usage

```python
import pandas as pd
from pysarplus import SARPlus

# spark dataframe with user/item/rating/optional timestamp tuples
train_df = spark.createDataFrame(
    pd.DataFrame({
        'user_id': [1, 1, 2, 3, 3],
        'item_id': [1, 2, 1, 1, 3],
        'rating': [1, 1, 1, 1, 1],
    }))

# spark dataframe with user/item tuples
test_df = spark.createDataFrame(
    pd.DataFrame({
        'user_id': [1, 3],
        'item_id': [1, 3],
        'rating': [1, 1],
    }))

model = SARPlus(spark, col_user='user_id', col_item='item_id', col_rating='rating', col_timestamp='timestamp')
model.fit(train_df, similarity_type='jaccard')

model.recommend_k_items(test_df, 'sarplus_cache', top_k=3).show()
```
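For intuition, the Jaccard item-to-item similarity that `similarity_type='jaccard'` selects can be sketched in plain pandas on the toy data above (illustrative only; the real computation runs distributed in PySpark):

```python
import pandas as pd

ratings = pd.DataFrame({
    'user_id': [1, 1, 2, 3, 3],
    'item_id': [1, 2, 1, 1, 3],
})

# set of users that interacted with each item
users_by_item = ratings.groupby('item_id')['user_id'].agg(set)

def jaccard(i, j):
    # |users(i) & users(j)| / |users(i) | users(j)|
    a, b = users_by_item[i], users_by_item[j]
    return len(a & b) / len(a | b)

print(jaccard(1, 2))  # items 1 and 2 share user 1 out of {1, 2, 3} -> 0.333...
```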

## Jupyter Notebook

Insert this cell prior to the code above:
```python
import os
SUBMIT_ARGS = "--packages eisber:sarplus:0.2.2 pyspark-shell"
os.environ["PYSPARK_SUBMIT_ARGS"] = SUBMIT_ARGS
from pyspark.sql import SparkSession
spark = (
    SparkSession.builder.appName("sample")
    .master("local[*]")
    .config("spark.driver.memory", "1G")
    .config("spark.sql.shuffle.partitions", "1")
    .config("spark.sql.crossJoin.enabled", "true")
    .config("spark.ui.enabled", "false")
    .getOrCreate()
)
```
## PySpark Shell
```bash
pip install pysarplus
pyspark --packages eisber:sarplus:0.2.2 --conf spark.sql.crossJoin.enabled=true
```

## Databricks

To enable calculation of the similarity matrix, one must set the crossJoin property in the cluster's Spark configuration (Clusters / &lt;Cluster&gt; / Configuration / Spark Config):
```
spark.sql.crossJoin.enabled true
```
1. Navigate to your workspace
2. Create library
3. Under 'Source' select 'Maven Coordinate'
4. Enter eisber:sarplus:0.2.2
5. Hit 'Create Library'
This will install C++, Python and Scala code on your cluster.
# Packaging
For [Databricks](https://databricks.com/) to properly install a [C++ extension](https://docs.python.org/3/extending/building.html), one must take a detour through [PyPI](https://pypi.org/).
Use [twine](https://github.com/pypa/twine) to upload the package to [PyPI](https://pypi.org/):
```bash
cd python
python setup.py sdist
twine upload dist/pysarplus-*.tar.gz
```
On [Spark](https://spark.apache.org/) one can install all 3 components (C++, Python, Scala) in one pass by creating a [Spark Package](https://spark-packages.org/). Documentation is rather sparse. Steps to install:
1. Package and publish the [pip package](python/setup.py) (see above)
2. Package the [Spark package](scala/build.sbt), which includes the [Scala formatter](scala/src/main/scala/eisber/sarplus) and references the [pip package](scala/python/requirements.txt) (see below)
3. Upload the zipped Scala package to [Spark Package](https://spark-packages.org/) through a browser. [sbt spPublish](https://github.com/databricks/sbt-spark-package) has a few [issues](https://github.com/databricks/sbt-spark-package/issues/31) so it always fails for me. Don't use spPublishLocal as the packages are not created properly (names don't match up, [issue](https://github.com/databricks/sbt-spark-package/issues/17)) and furthermore fail to install if published to [Spark-Packages.org](https://spark-packages.org/).
```bash
cd scala
sbt spPublish
```

# Testing

To test the Python UDF + C++ backend:

```bash
cd python
python setup.py install && pytest -s tests/
```
To test the Scala formatter:

```bash
cd scala
sbt test
```
(Use `sbt ~test` to re-run the tests automatically whenever a source file changes; changes to `build.sbt` are not picked up.)