changes in README files
This commit is contained in:
Parent 3867f2e5ce
Commit ac74b10ce7
@@ -45,6 +45,9 @@ To contributors: please add your name to the list when you submit a patch to the
* **[Aaron He](https://github.com/AaronHeee)**
  * Reco utils of NCF
  * Deep dive notebook demonstrating the use of NCF
* **[Alexandros Ioannou](https://github.com/aioannou96)**
  * Standard VAE algorithm
  * Multinomial VAE algorithm
* **[Bamdev Mishra](https://github.com/bamdevm)**
  * RLRMC algorithm
  * GeoIMC algorithm
@@ -56,6 +59,9 @@ To contributors: please add your name to the list when you submit a patch to the
  * SAR PySpark improvement
* **[Daniel Schneider](https://github.com/danielsc)**
  * FastAI notebook
* **[Evgenia Chroni](https://github.com/EvgeniaChroni)**
  * Multinomial VAE algorithm
  * Standard VAE algorithm
* **[Gianluca Campanella](https://github.com/gcampanella)**
  * Spark optimization and support
* **[Heather Spetalnick (Shapiro)](https://github.com/heatherbshapiro)**
@@ -78,6 +78,7 @@ The table below lists the recommender algorithms currently available in the repo
| LightGBM/Gradient Boosting Tree<sup>*</sup> | [Python CPU](examples/00_quick_start/lightgbm_tinycriteo.ipynb) / [PySpark](examples/02_model_content_based_filtering/mmlspark_lightgbm_criteo.ipynb) | Content-Based Filtering | Gradient Boosting Tree algorithm for fast training and low memory usage in content-based problems |
| LightGCN | [Python CPU / Python GPU](examples/02_model_collaborative_filtering/lightgcn_deep_dive.ipynb) | Collaborative Filtering | Deep learning algorithm that simplifies the design of GCN for predicting implicit feedback |
| GRU4Rec | [Python CPU / Python GPU](examples/00_quick_start/sequential_recsys_amazondataset.ipynb) | Collaborative Filtering | Sequential-based algorithm that aims to capture both long and short-term user preferences using recurrent neural networks |
| Multinomial VAE | [Python CPU / Python GPU](examples/02_model_collaborative_filtering/multi_vae_deep_dive.ipynb) | Collaborative Filtering | Generative model for predicting user/item interactions |
| Neural Recommendation with Long- and Short-term User Representations (LSTUR)<sup>*</sup> | [Python CPU / Python GPU](examples/00_quick_start/lstur_MIND.ipynb) | Content-Based Filtering | Neural recommendation algorithm with long- and short-term user interest modeling |
| Neural Recommendation with Attentive Multi-View Learning (NAML)<sup>*</sup> | [Python CPU / Python GPU](examples/00_quick_start/naml_MIND.ipynb) | Content-Based Filtering | Neural recommendation algorithm with attentive multi-view learning |
| Neural Collaborative Filtering (NCF) | [Python CPU / Python GPU](examples/00_quick_start/ncf_movielens.ipynb) | Collaborative Filtering | Deep learning algorithm with enhanced performance for implicit feedback |
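The NCF row above describes a deep learning model for implicit feedback. As a rough illustration of the underlying idea only (not the repo's implementation), here is the generalized-matrix-factorization flavor of implicit-feedback scoring in plain NumPy; the interaction data, embedding sizes, and learning rate are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 5, 8
P = rng.normal(scale=0.1, size=(n_users, dim))  # user embeddings
Q = rng.normal(scale=0.1, size=(n_items, dim))  # item embeddings

# Observed implicit feedback: (user, item, 1 = interaction / 0 = sampled negative)
data = [(0, 1, 1), (0, 2, 0), (1, 1, 1), (1, 4, 0), (2, 3, 1), (3, 0, 1)]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.5
for _ in range(200):  # plain SGD on the binary cross-entropy loss
    for u, i, y in data:
        grad = sigmoid(P[u] @ Q[i]) - y  # d(loss)/d(logit)
        P[u], Q[i] = P[u] - lr * grad * Q[i], Q[i] - lr * grad * P[u]

score = sigmoid(P[0] @ Q[1])  # trained toward 1 for an observed interaction
```

NCF proper replaces the plain dot product with learned neural interaction functions (GMF plus an MLP branch), which is where the "enhanced performance" in the table comes from.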
@@ -88,6 +89,7 @@ The table below lists the recommender algorithms currently available in the repo
| Riemannian Low-rank Matrix Completion (RLRMC)<sup>*</sup> | [Python CPU](examples/00_quick_start/rlrmc_movielens.ipynb) | Collaborative Filtering | Matrix factorization algorithm using Riemannian conjugate gradients optimization with small memory consumption |
| Simple Algorithm for Recommendation (SAR)<sup>*</sup> | [Python CPU](examples/00_quick_start/sar_movielens.ipynb) | Collaborative Filtering | Similarity-based algorithm for implicit feedback datasets |
| Short-term and Long-term preference Integrated Recommender (SLi-Rec)<sup>*</sup> | [Python CPU / Python GPU](examples/00_quick_start/sequential_recsys_amazondataset.ipynb) | Collaborative Filtering | Sequential-based algorithm that aims to capture both long and short-term user preferences using an attention mechanism, a time-aware controller and a content-aware controller |
| Standard VAE | [Python CPU / Python GPU](examples/02_model_collaborative_filtering/standard_vae_deep_dive.ipynb) | Collaborative Filtering | Generative model for predicting user/item interactions |
| Surprise/Singular Value Decomposition (SVD) | [Python CPU](examples/02_model_collaborative_filtering/surprise_svd_deep_dive.ipynb) | Collaborative Filtering | Matrix factorization algorithm for predicting explicit rating feedback in datasets that are not very large |
| Term Frequency - Inverse Document Frequency (TF-IDF) | [Python CPU](examples/00_quick_start/tfidf_covid.ipynb) | Content-Based Filtering | Simple similarity-based algorithm for content-based recommendations with text datasets |
| Vowpal Wabbit (VW)<sup>*</sup> | [Python CPU (online training)](examples/02_model_content_based_filtering/vowpal_wabbit_deep_dive.ipynb) | Content-Based Filtering | Fast online learning algorithms, great for scenarios where user features / context are constantly changing |
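The TF-IDF row describes the simplest algorithm in the table: weight each item's text by term frequency times inverse document frequency, then recommend by cosine similarity. A minimal self-contained sketch of that idea (not the notebook's code; the item texts are invented):

```python
import math
from collections import Counter

# Hypothetical item catalog; any text field per item would do
items = {
    "item_a": "deep learning for recommendation systems",
    "item_b": "gradient boosting trees for tabular data",
    "item_c": "neural networks and deep learning methods",
}

docs = {name: text.split() for name, text in items.items()}
n_docs = len(docs)
# Document frequency: in how many items each term appears
df = Counter(term for tokens in docs.values() for term in set(tokens))

def tfidf(tokens):
    """TF-IDF weight per term: raw count * log(N / df)."""
    tf = Counter(tokens)
    return {t: c * math.log(n_docs / df[t]) for t, c in tf.items()}

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vecs = {name: tfidf(tokens) for name, tokens in docs.items()}

def recommend(name, k=1):
    """Return the k items most similar to `name`, excluding itself."""
    scores = {o: cosine(vecs[name], vecs[o]) for o in vecs if o != name}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Here `recommend("item_a")` favors `item_c`, since the shared "deep learning" terms outweigh the low-IDF "for" shared with `item_b`.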
@@ -8,8 +8,10 @@ In this directory, notebooks are provided to give a deep dive of collaborative f
| [baseline_deep_dive](baseline_deep_dive.ipynb) | --- | Deep dive on baseline performance estimation.
| [cornac_bpr_deep_dive](cornac_bpr_deep_dive.ipynb) | Python CPU | Deep dive on the BPR algorithm and implementation.
| [lightgcn_deep_dive](lightgcn_deep_dive.ipynb) | Python CPU, GPU | Deep dive on the LightGCN algorithm and implementation.
| [multi_vae_deep_dive](multi_vae_deep_dive.ipynb) | Python CPU, GPU | Deep dive on the Multinomial VAE algorithm and implementation.
| [rbm_deep_dive](rbm_deep_dive.ipynb) | Python CPU, GPU | Deep dive on the RBM algorithm and its implementation.
| [sar_deep_dive](sar_deep_dive.ipynb) | Python CPU | Deep dive on the SAR algorithm and implementation.
| [standard_vae_deep_dive](standard_vae_deep_dive.ipynb) | Python CPU, GPU | Deep dive on the Standard VAE algorithm and implementation.
| [surprise_svd_deep_dive](surprise_svd_deep_dive.ipynb) | Python CPU | Deep dive on the SVD algorithm and implementation.

Details on model training are best found inside each notebook.
@@ -263,7 +263,7 @@ class StandardVAE:
         self.beta = beta
 
         # Compute total annealing steps
-        self.total_anneal_steps = (self.number_of_batches * (self.n_epochs - int(self.n_epochs * 0.2)))
+        self.total_anneal_steps = (self.number_of_batches * (self.n_epochs - int(self.n_epochs * 0.2))) // self.anneal_cap
 
         # Dropout parameters
         self.drop_encoder = drop_encoder
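The changed line scales the annealing window by `anneal_cap`. A small sketch of the linear KL warm-up this feeds into, assuming the standard `min(anneal_cap, step / total_anneal_steps)` schedule used by Mult-VAE-style models; the parameter values are illustrative, not from the repo:

```python
number_of_batches = 100   # illustrative values
n_epochs = 10
anneal_cap = 0.5          # upper bound on the KL weight beta

# The changed line from the diff, with self.* replaced by locals:
# anneal over the first 80% of the epochs, window stretched by the cap
total_anneal_steps = (number_of_batches * (n_epochs - int(n_epochs * 0.2))) // anneal_cap

def beta_at(step):
    """Assumed linear KL warm-up, clipped at anneal_cap."""
    return min(anneal_cap, step / total_anneal_steps)
```

Under this assumed schedule, dividing by `anneal_cap` stretches the ramp so that `beta` reaches the cap exactly at the end of the annealing window (here step 800, i.e. 80% of the epochs) instead of hitting it partway through.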