{{Short description|Mathematical procedure}}
{{Recommender systems}}
'''Matrix factorization''' is a class of [[collaborative filtering]] algorithms used in [[recommender system]]s. Matrix factorization algorithms work by decomposing the user-item interaction [[Matrix (mathematics)|matrix]] into the product of two lower dimensionality rectangular matrices.<ref name="Koren09">{{cite journal |last1=Koren |first1=Yehuda |last2=Bell |first2=Robert |last3=Volinsky |first3=Chris |title=Matrix Factorization Techniques for Recommender Systems |journal=Computer |date=August 2009 |volume=42 |issue=8 |pages=30–37 |doi=10.1109/MC.2009.263 |citeseerx=10.1.1.147.8295 |s2cid=58370896}}</ref> This family of methods became widely known during the [[Netflix prize]] challenge due to its effectiveness as reported by Simon Funk in his 2006 blog post,<ref name="Funkblog">{{cite web |last1=Funk |first1=Simon |title=Netflix Update: Try This at Home |url=http://sifter.org/~simon/journal/20061211.html}}</ref> where he shared his findings with the research community. The prediction results can be improved by assigning different regularization weights to the latent factors based on items' popularity and users' activeness.<ref>{{Cite journal |last1=Chen |first1=Hung-Hsuan |last2=Chen |first2=Pu |date=2019-01-09 |title=Differentiating Regularization Weights – A Simple Mechanism to Alleviate Cold Start in Recommender Systems |journal=ACM Transactions on Knowledge Discovery from Data}}</ref>
== Techniques ==
The idea behind matrix factorization is to represent users and items in a lower-dimensional [[latent space]].
=== Funk MF ===
The original algorithm proposed by Simon Funk in his blog post<ref name="Funkblog" /> factorized the user-item rating matrix as the product of two lower dimensional matrices, the first one having a row for each user and the second one a column for each item. The row or column associated with a specific user or item is referred to as ''latent factors''. Note that, despite its name, no [[singular value decomposition]] is applied in Funk MF; it is rather an SVD-like machine learning model.
The predicted ratings can be computed as <math>\tilde{R}=H W</math>, where <math>\tilde{R} \in \mathbb{R}^{\text{users} \times \text{items}}</math> is the predicted user-item rating matrix, <math>H \in \mathbb{R}^{\text{users} \times \text{latent factors}}</math> contains the users' latent factors and <math>W \in \mathbb{R}^{\text{latent factors} \times \text{items}}</math> the items' latent factors.
Specifically, the predicted rating user ''u'' will give to item ''i'' is computed as:
:<math>\tilde{r}_{ui} = \sum_{f=0}^{\text{n factors} } H_{u,f}W_{f,i}</math>
It is possible to tune the expressive power of the model by changing the number of latent factors. It has been demonstrated that a matrix factorization with one latent factor is equivalent to a ''most popular'' or ''top popular'' recommender (e.g., one that recommends the items with the most interactions, without any personalization). Increasing the number of latent factors improves personalization, and therefore recommendation quality, until the number of factors becomes too high, at which point the model starts to [[overfitting|overfit]] and the recommendation quality decreases. A common strategy to avoid overfitting is to add [[regularization (mathematics)|regularization]] terms to the objective function.
Funk MF was developed as a ''rating prediction'' problem; therefore it uses explicit numerical ratings as user-item interactions.
All things considered, Funk MF minimizes the following objective function:
:<math>\underset{H, W}{\operatorname{arg\,min}}\, \|R - \tilde{R}\|_{\rm F} + \alpha\|H\| + \beta\|W\|</math>
Where <math>\|\cdot\|_{\rm F}</math> is defined to be the [[Frobenius norm]], whereas the other norms might be either the Frobenius norm or another norm depending on the specific recommendation problem.<ref name="Paterek07">{{cite journal |last1=Paterek |first1=Arkadiusz |title=Improving regularized singular value decomposition for collaborative filtering |journal=Proceedings of KDD Cup and Workshop |date=2007 |url=https://www.mimuw.edu.pl/~paterek/ap_kdd.pdf}}</ref>
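As an illustration, the following is a minimal sketch (in Python with NumPy, not from any reference implementation) of Funk MF trained by stochastic gradient descent on the observed entries only; the toy matrix, hyperparameter values and all names are illustrative assumptions:
<syntaxhighlight lang="python">
import numpy as np

def funk_mf(R, n_factors=3, n_epochs=200, lr=0.01, reg=0.05, seed=0):
    """Learn H (users x factors) and W (factors x items) by SGD on the
    observed entries of R; zeros in R are treated as missing ratings."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    H = 0.1 * rng.standard_normal((n_users, n_factors))
    W = 0.1 * rng.standard_normal((n_factors, n_items))
    observed = np.argwhere(R > 0)            # indices of known ratings
    for _ in range(n_epochs):
        rng.shuffle(observed)
        for u, i in observed:
            h_u = H[u, :].copy()
            err = R[u, i] - h_u @ W[:, i]    # prediction error on (u, i)
            H[u, :] += lr * (err * W[:, i] - reg * h_u)   # regularized gradient step
            W[:, i] += lr * (err * h_u - reg * W[:, i])   # regularized gradient step
    return H, W

# Toy example: 4 users x 5 items, 0 marks an unknown rating.
R = np.array([[5, 3, 0, 1, 0],
              [4, 0, 0, 1, 1],
              [1, 1, 0, 5, 4],
              [0, 1, 5, 4, 0]], dtype=float)
H, W = funk_mf(R)
R_pred = H @ W   # dense matrix of predicted ratings
</syntaxhighlight>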
=== SVD++ ===
While Funk MF is able to provide very good recommendation quality, its ability to use only explicit numerical ratings as user-item interactions constitutes a limitation. Modern-day [[recommender systems]] should exploit all available interactions, both explicit (e.g. numerical ratings) and implicit (e.g. likes, purchases, skips, bookmarks). To this end, SVD++ was designed to take implicit interactions into account as well.<ref name="Cao15">{{cite book |last1=Cao |first1=Jian |last2=Hu |first2=Hengkui |last3=Luo |first3=Tianyan |last4=Wang |first4=Jia |last5=Huang |first5=May |last6=Wang |first6=Karl |last7=Wu |first7=Zhonghai |last8=Zhang |first8=Xing |title=Distributed Design and Implementation of SVD++ Algorithm for E-commerce Personalized Recommender System |volume=572 |date=2015 |pages=30–44 |doi=10.1007/978-981-10-0421-6_4 |publisher=Springer Singapore |language=en |series=Communications in Computer and Information Science |isbn=978-981-10-0420-9}}</ref><ref name="Jia14">{{cite book |last1=Jia |first1=Yancheng |chapter=Users' brands preference based on SVD++ in recommender systems |title=2014 IEEE Workshop on Advanced Research and Technology in Industry Applications (WARTIA) |date=2014 |publisher=IEEE}}</ref>
Compared to Funk MF, SVD++ also takes into account user and item bias.
The predicted rating user ''u'' will give to item ''i'' is computed as:
:<math>\tilde{r}_{ui} = \mu + b_i + b_u + \sum_{f=0}^{\text{n factors}} H_{u,f}W_{f,i}</math>
Where <math>\mu</math> refers to the overall average rating over all items, and <math>b_i</math> and <math>b_u</math> refer to the observed deviation of the item {{mvar|i}} and the user {{mvar|u}}, respectively, from the average.<ref>{{Cite journal |last1=Koren |first1=Yehuda |last2=Bell |first2=Robert |last3=Volinsky |first3=Chris |date=August 2009 |title=Matrix factorization techniques for recommender systems |url=https://datajobs.com/data-science-repo/Recommender-Systems-[Netflix].pdf |journal=Computer |pages=45}}</ref> SVD++ has however some disadvantages, the main drawback being that this method is not ''model-based''. This means that if a new user is added, the algorithm is incapable of modeling it unless the whole model is retrained. Even though the system might have gathered some interactions for that new user, its latent factors are not available and therefore no recommendations can be computed. This is an example of a [[Cold start (recommender systems)|cold-start]] problem: the recommender cannot deal efficiently with new users or items, and specific strategies should be put in place to handle this disadvantage.<ref name="Kluver14">{{cite book |last1=Kluver |first1=Daniel |last2=Konstan |first2=Joseph A. |title=Proceedings of the 8th ACM Conference on Recommender systems |chapter=Evaluating recommender behavior for new users |date=2014 |publisher=ACM}}</ref>
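For illustration, a minimal sketch (in Python with NumPy) of this biased prediction rule; the simple unshrunk averages used for the bias terms and all names are illustrative assumptions, not the exact procedure of any particular SVD++ implementation:
<syntaxhighlight lang="python">
import numpy as np

def baseline(R):
    """Global mean plus simple (unshrunk) item and user deviations.
    Zeros in R mark missing ratings."""
    mask = R > 0
    mu = R[mask].mean()
    b_item = np.array([(R[:, i][mask[:, i]] - mu).mean() if mask[:, i].any() else 0.0
                       for i in range(R.shape[1])])
    b_user = np.array([(R[u][mask[u]] - mu - b_item[mask[u]]).mean() if mask[u].any() else 0.0
                       for u in range(R.shape[0])])
    return mu, b_user, b_item

def predict(u, i, mu, b_user, b_item, H, W):
    # Biased prediction: mu + b_i + b_u + sum_f H[u, f] * W[f, i]
    return mu + b_item[i] + b_user[u] + H[u, :] @ W[:, i]
</syntaxhighlight>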
A possible way to address this cold start problem is to modify SVD++ in order for it to become a ''model-based'' algorithm, making it possible to easily manage new items and new users.
Note that this does not entirely solve the [[Cold start (recommender systems)|cold-start]] problem, since the recommender still requires some reliable interactions for new users, but at least there is no need to recompute the whole model every time. It has been demonstrated that this formulation is almost equivalent to a SLIM model,<ref name="Zheng14">{{cite book |last1=Zheng |first1=Yong |last2=Mobasher |first2=Bamshad |last3=Burke |first3=Robin |title=CSLIM: contextual SLIM recommendation algorithms |date=6 October 2014 |pages=301–304 |doi=10.1145/2645710.2645756 |publisher=ACM|chapter=CSLIM |isbn=9781450326681 |s2cid=15931532 }}</ref> which is an [[item-item recommender|item-item]] model-based recommender.
:<math>\tilde{r}_{ui} = \mu + b_i + b_u + \sum_{f=0}^{\text{n factors}} \biggl( \sum_{j=0}^{\text{n items}} r_{uj} W_{f,j} \biggr) W_{f,i}</math>
With this formulation, the equivalent [[item-item recommender]] would be <math>\tilde{R} = R S = R W^{\rm T} W</math>. Therefore the similarity matrix is symmetric.
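A short numerical check of this equivalence (names and shapes are illustrative; bias terms are omitted for brevity):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_factors = 4, 5, 2
R = rng.integers(0, 6, size=(n_users, n_items)).astype(float)  # ratings, 0 = missing
W = 0.1 * rng.standard_normal((n_factors, n_items))            # item latent factors

S = W.T @ W          # items x items similarity matrix
R_pred = R @ S       # same predictions as R @ W.T @ W
assert np.allclose(S, S.T)                 # the similarity matrix is symmetric
assert np.allclose(R_pred, (R @ W.T) @ W)  # the user factors are H = R W^T
</syntaxhighlight>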
=== Asymmetric SVD ===
Asymmetric SVD aims at combining the advantages of SVD++ while being a model-based algorithm, and is therefore able to consider new users with a few ratings without needing to retrain the whole model. As opposed to the model-based SVD, here the user latent factor matrix H is replaced by Q, which learns the user's preferences as a function of their ratings.<ref name="Pu13">{{cite book |last1=Pu |first1=Li |last2=Faltings |first2=Boi |title=Proceedings of the 7th ACM conference on Recommender systems |chapter=Understanding and improving relational matrix factorization in recommender systems |date=2013 |publisher=ACM}}</ref>
The predicted rating user ''u'' will give to item ''i'' is computed as:
:<math>\tilde{r}_{ui} = \mu + b_i + b_u + \sum_{f=0}^{\text{n factors}} \sum_{j=0}^{\text{n items}} r_{uj} Q_{j,f}W_{f,i}</math>
With this formulation, the equivalent [[item-item recommender]] would be <math>\tilde{R} = R S = R Q W</math>, with <math>Q \in \mathbb{R}^{\text{items} \times \text{latent factors}}</math> as implied by the indexing <math>Q_{j,f}</math> above. Since the matrices Q and W are different, the similarity matrix <math>S = Q W</math> is asymmetric, hence the name of the model.
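The same kind of numerical check for the asymmetric case (illustrative names; <math>Q</math> is stored as items × latent factors, consistent with the indexing above):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, n_factors = 4, 5, 2
R = rng.integers(0, 6, size=(n_users, n_items)).astype(float)  # ratings, 0 = missing
Q = 0.1 * rng.standard_normal((n_items, n_factors))  # preference factors (learned from ratings)
W = 0.1 * rng.standard_normal((n_factors, n_items))  # item latent factors

S = Q @ W        # items x items similarity matrix, asymmetric since Q != W.T
R_pred = R @ S   # a new user only needs their own row of ratings in R
assert not np.allclose(S, S.T)   # asymmetric in general
</syntaxhighlight>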
=== Group-specific SVD ===
Group-specific SVD augments the individual latent factors with group-level latent factors <math>S</math> (for groups of users) and <math>T</math> (for groups of items), which makes it possible to produce predictions even for users or items whose individual factors are not available.
The predicted rating user ''u'' will give to item ''i'' is computed as:
:<math>\tilde{r}_{ui} = \sum_{f=0}^{\text{n factors}} (H_{u,f}+S_{v_u,f})(W_{f,i}+T_{f,j_i})</math>
Here <math>v_u</math> and <math>j_i</math> represent the group labels of user ''u'' and item ''i'', respectively, which are identical across members of the same group. For a new user <math>u_{new}</math>, whose individual latent factors are not yet available, the rating can be predicted from the group effect alone:
:<math>\tilde{r}_{u_{new}i} = \sum_{f=0}^{\text{n factors}} S_{v_{u_{new}},f}(W_{f,i}+T_{f,j_i})</math>
This provides a good approximation to the unobserved ratings.
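A minimal sketch of both prediction rules (all names and shapes are illustrative assumptions):
<syntaxhighlight lang="python">
import numpy as np

# H: users x factors, W: factors x items (as in Funk MF);
# S: user-group factors (user groups x factors);
# T: item-group factors (factors x item groups);
# v[u] / j[i]: group labels of user u / item i.
def predict(u, i, H, W, S, T, v, j):
    return (H[u, :] + S[v[u], :]) @ (W[:, i] + T[:, j[i]])

def predict_new_user(group, i, W, S, T, j):
    # A new user has no individual factors yet, so only the group
    # effect S[group] contributes to the prediction.
    return S[group, :] @ (W[:, i] + T[:, j[i]])
</syntaxhighlight>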
=== Hybrid MF ===
In recent years many other matrix factorization models have been developed to exploit the ever-increasing amount and variety of available interaction data and use cases. Hybrid matrix factorization algorithms are capable of merging explicit and implicit interactions, or of combining collaborative and content data.
=== Deep-learning MF ===
In recent years a number of neural and deep-learning techniques have been proposed, some of which generalize traditional matrix factorization algorithms via a non-linear neural architecture.<ref>{{cite arXiv |last1=He |first1=Xiangnan |last2=Liao |first2=Lizi |last3=Zhang |first3=Hanwang |last4=Nie |first4=Liqiang |last5=Hu |first5=Xia |last6=Chua |first6=Tat-Seng |title=Neural Collaborative Filtering |date=2017 |eprint=1708.05031 |class=cs.IR}}</ref>
While deep learning has been applied to many different scenarios (context-aware, sequence-aware, social tagging, etc.), its real effectiveness when used in a simple collaborative filtering scenario has been put into question: systematic analyses have shown that simple, properly tuned baseline methods often outperform the proposed neural approaches.<ref name="Dacrema19">{{cite book |last1=Ferrari Dacrema |first1=Maurizio |last2=Cremonesi |first2=Paolo |last3=Jannach |first3=Dietmar |title=Proceedings of the 13th ACM Conference on Recommender Systems |chapter=Are we really making much progress? A worrying analysis of recent neural recommendation approaches |date=2019 |doi=10.1145/3298689.3347058 |arxiv=1907.06902 |publisher=ACM}}</ref>
==See also==