A post by Hannah Sansford, PhD student on the Compass programme.
Like many others, I interact with recommendation systems on a daily basis: from which toaster to buy on Amazon, to which hotel to book on booking.com, to which song to add to a playlist on Spotify. They are everywhere. But what is really going on behind the scenes?
Recommendation systems broadly fit into two main categories:
1) Content-based filtering. This approach uses the similarity between items to recommend items similar to what the user already likes. For instance, if Ed watches two hair tutorial videos, the system can recommend more hair tutorials to Ed.
2) Collaborative filtering. This approach uses the similarity between users’ past behaviour to provide recommendations. So, if Ed has watched similar videos to Ben in the past, and Ben likes a cute cat video, then the system can recommend the cute cat video to Ed (even if Ed hasn’t seen any cute cat videos).
Both systems aim to map each item and each user to an embedding vector in a common low-dimensional embedding space $\mathbb{R}^d$. That is, the dimension of the embeddings, $d$, is much smaller than the number of items or users. The hope is that the position of these embeddings captures some of the latent (hidden) structure of the items/users, and so similar items end up ‘close together’ in the embedding space. What is meant by being ‘close’ may be specified by some similarity measure.
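As a small illustration of ‘closeness’ in embedding space, here is a sketch in Python using cosine similarity as the similarity measure. The items and the 4-dimensional embedding values are made up for the example; in practice the embeddings would be learned.

```python
import numpy as np

# Hypothetical d=4 embeddings for three items (values are made up;
# in practice d is much smaller than the number of items/users).
item_embeddings = {
    "hair_tutorial_1": np.array([0.9, 0.1, 0.0, 0.2]),
    "hair_tutorial_2": np.array([0.8, 0.2, 0.1, 0.1]),
    "cat_video":       np.array([0.0, 0.9, 0.8, 0.1]),
}

def cosine_similarity(a, b):
    """One common choice of similarity measure in embedding space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_close = cosine_similarity(item_embeddings["hair_tutorial_1"],
                              item_embeddings["hair_tutorial_2"])
sim_far = cosine_similarity(item_embeddings["hair_tutorial_1"],
                            item_embeddings["cat_video"])

# Similar items ('close together' in embedding space) score higher.
print(sim_close > sim_far)
```

If the embeddings capture the latent structure well, the two hair tutorials end up far more similar to each other than either is to the cat video.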
In this blog post we will focus on the collaborative filtering system. We can break it down further depending on the type of data we have:
1) Explicit feedback data: aims to model relationships using explicit data such as user-item (numerical) ratings.
2) Implicit feedback data: analyses relationships using implicit signals such as clicks, page views, purchases, or music streaming play counts. This approach makes the assumption that if a user interacts with an item — listens to a song, for example — they must like it.
The majority of the feedback data on the web is implicit, hence there is a strong demand for recommendation systems that take this form of data as input. Furthermore, this form of data can be collected at a much larger scale and without the need for users to provide any extra input. The rest of this blog post will assume we are working with implicit feedback data.
Suppose we have a group of users $U$ and a group of items $I$. Then we let $R \in \{0, 1\}^{|U| \times |I|}$ be the ratings matrix, where position $r_{ui}$ represents whether user $u$ interacts with item $i$. Note that, in most cases, the matrix $R$ is very sparse, since most users only interact with a small subset of the full item set $I$. For any item $i$ that user $u$ does not interact with, we set $r_{ui}$ equal to zero. To be clear, a value of zero does not imply the user does not like the item, but that they have not interacted with it. The final goal of the recommendation system is to find the best recommendations for each user of items they have not yet interacted with.
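To make the setup concrete, here is a toy implicit-feedback ratings matrix in Python. The users, items, and interactions are purely illustrative.

```python
import numpy as np

# Toy ratings matrix R: rows are users, columns are items,
# r_ui = 1 if user u interacted with item i, and 0 otherwise.
# A zero means "no interaction observed", not "disliked".
R = np.array([
    [1, 0, 1, 0],   # user 0 interacted with items 0 and 2
    [0, 1, 1, 0],   # user 1 interacted with items 1 and 2
    [1, 0, 0, 0],   # user 2 interacted with item 0 only
])

# Real ratings matrices are very sparse; even this toy one is mostly zeros.
sparsity = 1 - R.sum() / R.size
print(f"sparsity: {sparsity:.2f}")
```

At web scale, $R$ would of course be stored in a sparse format (e.g. compressed sparse row) rather than as a dense array.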
Matrix Factorisation (MF)
A simple model for finding user embeddings, $W \in \mathbb{R}^{|U| \times d}$, and item embeddings, $H \in \mathbb{R}^{|I| \times d}$, is Matrix Factorisation. The idea is to find low-rank embeddings such that the product $WH^T$ is a good approximation to the ratings matrix $R$ by minimising some loss function on the known ratings.
A natural loss function to use would be the squared loss, i.e.

$$\min_{W, H} \sum_{u \in U} \sum_{i \in I} \left( r_{ui} - \langle w_u, h_i \rangle \right)^2,$$

where $w_u$ and $h_i$ denote the rows of $W$ and $H$ corresponding to user $u$ and item $i$. This corresponds to minimising the Frobenius distance between $R$ and its approximation $WH^T$, and can be solved easily using the singular value decomposition (SVD).
Once we have our embeddings $W$ and $H$, we can look at the row of $WH^T$ corresponding to user $u$ and recommend the items corresponding to the highest values (that they haven’t already interacted with).
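The whole pipeline — factorise via a truncated SVD, then recommend from the row of the reconstruction — can be sketched in a few lines of numpy. The toy ratings matrix below is illustrative, and the split of the singular values between $W$ and $H$ is one common convention.

```python
import numpy as np

# Toy ratings matrix (illustrative): rows = users, columns = items.
R = np.array([
    [1, 0, 1, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 0],
], dtype=float)

d = 2  # embedding dimension; much smaller than |U| or |I| in practice

# Truncated SVD gives the best rank-d approximation to R in Frobenius
# distance (Eckart-Young), so R ~ W H^T with the factors below.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
W = U[:, :d] * np.sqrt(s[:d])       # user embeddings, |U| x d
H = Vt[:d, :].T * np.sqrt(s[:d])    # item embeddings, |I| x d

scores = W @ H.T                    # approximate ratings matrix

# Recommend for user 2: mask items already interacted with,
# then pick the item with the highest remaining score.
user = 2
masked = np.where(R[user] > 0, -np.inf, scores[user])
recommendation = int(np.argmax(masked))
print(recommendation)  # item 2, which the similar user 0 interacted with
```

Note how collaborative filtering falls out of the factorisation: user 2 shares an interaction with user 0, so user 0’s other item gets the highest predicted score.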
Minimising the loss function in the previous section is equivalent to modelling the probability that user $u$ interacts with item $i$ as the inner product $\langle w_u, h_i \rangle$, i.e.

$$P(r_{ui} = 1) = \langle w_u, h_i \rangle,$$

and maximising the likelihood over $W$ and $H$.
Logistic Matrix Factorisation
In a research paper from Spotify [3], this relationship is instead modelled according to a logistic function parameterised by the sum of the inner product above and user and item bias terms, $\beta_u$ and $\beta_i$:

$$P(r_{ui} = 1) = \frac{\exp(\langle w_u, h_i \rangle + \beta_u + \beta_i)}{1 + \exp(\langle w_u, h_i \rangle + \beta_u + \beta_i)}.$$
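This logistic model is easy to sketch in Python: the probability is just a sigmoid of the inner product plus the two bias terms. The embedding and bias values below are made up for illustration; in the Spotify paper they are learned by maximising the likelihood of the observed interactions.

```python
import numpy as np

def interaction_probability(w_u, h_i, b_u, b_i):
    """Logistic model: P(r_ui = 1) = sigmoid(<w_u, h_i> + b_u + b_i)."""
    z = w_u @ h_i + b_u + b_i
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative d=2 embeddings and bias terms (made-up values).
w_u = np.array([0.5, -0.2])
h_i = np.array([0.3, 0.8])
p = interaction_probability(w_u, h_i, b_u=0.1, b_i=-0.3)

print(0.0 < p < 1.0)  # the logistic function always yields a valid probability
```

Unlike the plain inner-product model, whose output can fall outside $[0, 1]$, the logistic function guarantees a well-defined probability, and the bias terms absorb how active a user is and how popular an item is.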
Relation to my research
A recent influential paper [1] proved an impossibility result for modelling certain properties of networks using a low-dimensional inner product model. In my 2023 AISTATS publication [2], we show that by using a kernel, such as the logistic one in the previous section, to model probabilities, we can capture these properties with embeddings lying on a low-dimensional manifold embedded in infinite-dimensional space. This has various implications, and could explain part of the success of Spotify’s logistic kernel in producing good recommendations.
[1] Seshadhri, C., Sharma, A., Stolman, A., and Goel, A. (2020). The impossibility of low-rank representations for triangle-rich complex networks. Proceedings of the National Academy of Sciences, 117(11):5631–5637.
[2] Sansford, H., Modell, A., Whiteley, N., and Rubin-Delanchy, P. (2023). Implications of sparsity and high triangle density for graph representation learning. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:5449–5473.
[3] Johnson, C. C. (2014). Logistic matrix factorization for implicit feedback data. Advances in Neural Information Processing Systems, 27(78):1–9.