{{Recommender systems}}
'''Matrix factorization''' is a class of [[collaborative filtering]] algorithms used in [[recommender system]]s. Matrix factorization algorithms work by decomposing the user-item interaction [[Matrix (mathematics)|matrix]] into the product of two lower-dimensionality rectangular matrices.<ref name="Koren09">{{cite journal |last1=Koren |first1=Yehuda |last2=Bell |first2=Robert |last3=Volinsky |first3=Chris |title=Matrix Factorization Techniques for Recommender Systems |journal=Computer |date=August 2009 |volume=42 |issue=8 |pages=30–37 |doi=10.1109/MC.2009.263|citeseerx=10.1.1.147.8295 |s2cid=58370896 }}</ref> This family of methods became widely known during the [[Netflix prize]] challenge due to its effectiveness as reported by Simon Funk in his 2006 blog post,<ref name="Funkblog">{{cite web |last1=Funk |first1=Simon |title=Netflix Update: Try This at Home |url=http://sifter.org/~simon/journal/20061211.html}}</ref> where he shared his findings with the research community. The prediction results can be improved by assigning different regularization weights to the latent factors based on items' popularity and users' activeness.<ref>{{Cite journal|last1=Chen|first1=Hung-Hsuan|last2=Chen|first2=Pu|date=2019-01-09|title=Differentiating Regularization Weights -- A Simple Mechanism to Alleviate Cold Start in Recommender Systems|journal=ACM Transactions on Knowledge Discovery from Data}}</ref>
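As an illustration of the basic idea only (not any specific published implementation), the following sketch factorizes an observed rating matrix <code>R</code> into user factors <code>P</code> and item factors <code>Q</code> with stochastic gradient descent and a single L2 regularization weight. The function name <code>funk_mf</code>, the hyperparameter values, and the dense-matrix representation are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

def funk_mf(R, n_factors=10, n_epochs=20, lr=0.005, reg=0.02):
    """Sketch of SGD-based matrix factorization on a dense rating matrix R,
    where 0 marks an unobserved rating. Returns user factors P and item
    factors Q such that R is approximated by P @ Q.T."""
    n_users, n_items = R.shape
    rng = np.random.default_rng(0)
    P = 0.1 * rng.standard_normal((n_users, n_factors))  # user latent factors
    Q = 0.1 * rng.standard_normal((n_items, n_factors))  # item latent factors
    users, items = np.nonzero(R)                          # indices of observed ratings
    for _ in range(n_epochs):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]                   # prediction error on one rating
            p_u = P[u].copy()
            P[u] += lr * (err * Q[i] - reg * P[u])        # gradient step with L2 regularization
            Q[i] += lr * (err * p_u - reg * Q[i])
    return P, Q
</syntaxhighlight>

A predicted rating for user <code>u</code> and item <code>i</code> is then the dot product <code>P[u] @ Q[i]</code>; the differentiated-regularization idea cited above would replace the single <code>reg</code> value with per-user and per-item weights.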
== Techniques ==
===Deep-Learning MF===
In recent years, a number of neural and deep-learning techniques have been proposed, some of which generalize traditional matrix factorization algorithms via a non-linear neural architecture.<ref>{{cite journal |last1=He |first1=Xiangnan |last2=Liao |first2=Lizi |last3=Zhang |first3=Hanwang |last4=Nie |first4=Liqiang |last5=Hu |first5=Xia |last6=Chua |first6=Tat-Seng |title=Neural Collaborative Filtering |journal=Proceedings of the 26th International Conference on World Wide Web |date=2017 |pages=173–182 |doi=10.1145/3038912.3052569 |url=https://dl.acm.org/citation.cfm?id=3052569 |accessdate=16 October 2019 |publisher=International World Wide Web Conferences Steering Committee|isbn=9781450349130 |arxiv=1708.05031 |s2cid=13907106 }}</ref>
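As a rough sketch of how such a generalization can look (not the architecture of any particular paper), the dot product between user and item latent factors can be replaced by a learned non-linear function. The class name <code>NeuralMF</code>, the layer sizes, and all hyperparameters below are illustrative assumptions.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class NeuralMF(nn.Module):
    """Illustrative neural generalization of matrix factorization:
    the dot product of user and item factors is replaced by an MLP."""

    def __init__(self, n_users, n_items, n_factors=16):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, n_factors)  # user latent factors
        self.item_emb = nn.Embedding(n_items, n_factors)  # item latent factors
        self.mlp = nn.Sequential(                         # non-linear interaction function
            nn.Linear(2 * n_factors, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, user_ids, item_ids):
        u = self.user_emb(user_ids)                       # (batch, n_factors)
        v = self.item_emb(item_ids)                       # (batch, n_factors)
        return self.mlp(torch.cat([u, v], dim=-1)).squeeze(-1)
</syntaxhighlight>

Training such a model would proceed as in standard matrix factorization, minimizing a loss such as squared error or binary cross-entropy over observed user-item interactions.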
While deep learning has been applied to many different scenarios (context-aware, sequence-aware, social tagging, etc.), its real effectiveness when used in a simple [[collaborative filtering]] scenario has been called into question. A systematic analysis of publications applying deep learning or neural methods to the top-k recommendation problem, published in top conferences (SIGIR, KDD, WWW, RecSys, IJCAI), has shown that on average less than 40% of articles are reproducible, with as little as 14% in some conferences. Overall, these studies identified 26 articles: only 12 of them could be reproduced, and 11 of those 12 could be outperformed by much older, simpler, and properly tuned baselines. These studies also highlight a number of potential problems in today's research scholarship and call for improved scientific practices in the area.
==See also==