How we used a simpler re-ranker based on UserDNA to improve our recommendations
Recommending more personalized music helps users quickly and effortlessly find songs tailored to their taste and preferences. With the vast amount of content ingested daily, the uniqueness of each user is an important signal to incorporate.
There are many exciting research papers discussing, implementing, and evaluating complex re-ranking algorithms based on neural networks. Although these models can be very effective, you may be surprised to find that a simple re-ranker can be effective too.
With now over 70 million active users, 57 million songs, and 1 billion yearly streams, Anghami is the leading music streaming platform in the MENA region. As Anghami grows, so does the number of unique users, and with it the need for more personalization, and for a quick and simple re-ranking strategy that accommodates the fast growth of active users and their diverse preferences.
In this article, I will discuss our simple and efficient re-ranker, based on UserDNA and song embeddings, which improved our recommendations and increased users' listening time.
Music recommendations: are they uniquely yours?
Simply put, music recommendation entails suggesting, alongside the song a user is listening to, other similar songs the recommendation model thinks the user might enjoy. This is done by recommending the top K songs most similar to the song the user is interacting with. Models trained on millions of interactions using collaborative filtering, word2vec, or neural networks have shown improved results across much of the research. But what if there are songs you would enjoy more, or that better suit your preferences? What if some songs at the bottom of our recommendation list are ones a user would like, and we are missing the chance to push them towards the top? What if a user prefers popular over unpopular songs, mellow over not so mellow, live versions over studio versions, new over old? We don't want to miss giving users what they actually prefer. Hence the need for a personalized strategy that gives each user a unique set of songs, taking into account the many what-ifs raised above. What makes every human unique? Their DNA (the 0.1% difference; the remaining 99.9% is identical).
User Sub Genre DNA
A music UserDNA is basically a representation of a user's preferences and taste. It is built from the songs the user streamed, liked, disliked, shared, inserted into a playlist… These implicit and explicit signals mutate the user's Music DNA as it evolves and adapts with app usage.
For the User Sub Genre DNA, sub-genre metadata are the factors responsible for the mutation. Having a diversified taste and listening to a variety of genres such as Contemporary Jazz, Progressive Rock, Folk Rock, or Lebanese Indie causes the User Music DNA to mutate and produce many Sub Genre replicas.
Our music sub-genre metadata, aka "Vibes", come from our metadata library and the Auto-Tagging system we developed. These genre metadata labels are also used to easily and quickly filter your playlists. The screenshots below, from the "My Likes" tab in the Anghami app, show how a user can filter songs by genre and sub-genre.
Constructing User Sub Genre DNA
We use two types of embeddings to construct the User Sub Genre DNA: a song embedding, which is the output of our main recommendation model, and an embedding generated by our Music Information Retrieval algorithm, which outputs a set of descriptors for Mood, Tempo, Energy… of a song. These two embeddings are concatenated with certain weights to derive a final embedding. From these concatenated embeddings we proceed to construct the User Sub Genre DNA. We take users' streams for the last X days (keeping streams with more than 80% completion) along with users' likes, join the songs with their corresponding sub-genre metadata "Vibes", and aggregate by averaging these embeddings by sub-genre and by user. So now, every user has multiple specialized vectors, one for each "vibe" they interacted with. The construction of the DNA is illustrated below.
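The aggregation step above can be sketched as follows. This is a minimal illustration, not our production pipeline: the `interactions` table, its column names, and the toy two-dimensional embeddings are all hypothetical stand-ins.

```python
import numpy as np
import pandas as pd

# Hypothetical interaction table: one row per qualifying stream or like.
# "embedding" holds the concatenated (recommendation model + MIR) song vector.
interactions = pd.DataFrame({
    "user_id": [1, 1, 1, 2],
    "vibe":    ["chill", "chill", "prog_rock", "chill"],
    "embedding": [np.array([0.1, 0.9]),
                  np.array([0.3, 0.7]),
                  np.array([0.8, 0.2]),
                  np.array([0.5, 0.5])],
})

# One specialized vector per (user, vibe): the mean of the song embeddings.
user_dna = (
    interactions
    .groupby(["user_id", "vibe"])["embedding"]
    .apply(lambda vecs: np.mean(np.stack(vecs.tolist()), axis=0))
)

print(user_dna.loc[(1, "chill")])  # average of the two "chill" vectors of user 1
```

The result is one vector per (user, vibe) pair, living in the same space as the song embeddings, which is what makes the cosine-distance comparisons below possible.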
The advantages of this strategy are that the user embedding lives in the same feature space as the songs, and the math to derive it is straightforward. Additionally, we can quickly validate a user embedding by retrieving its k nearest-neighbor songs using the cosine distance between the user embedding and nearby songs. Here is the formula of how we derive a user's embedding given a certain tag m.
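As a sketch of the averaging described above (the symbol names are my own): letting $S_{u,m}$ be the set of user $u$'s qualifying streams and likes tagged $m$, and $e_s$ the concatenated embedding of song $s$, the user embedding for tag $m$ is simply

```latex
u_{u,m} = \frac{1}{\lvert S_{u,m} \rvert} \sum_{s \in S_{u,m}} e_s
```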
The re-ranking strategy is performed as the following:
- Given a seed song with a tag m, we generate a list of k song candidates
- We fetch the User’s Sub Genre DNA with the same tag
- We re-rank the songs using a distance function
In our strategy, the distance function incorporates the cosine distance between the song candidates and the UserDNA, pushing songs with the lowest distances to the top and songs with greater distances down the list. The following schematic summarizes the strategy.
The formula below shows how we derive the final score of a song by weighting the distances with α and β, given a threshold λ.
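In my own notation (writing $d_{\text{seed}}(s)$ for a candidate's cosine distance to the seed song and $d_{\text{user}}(s)$ for its cosine distance to the user's Sub Genre DNA vector), the description corresponds to:

```latex
\mathrm{score}(s) =
\begin{cases}
\alpha \, d_{\text{seed}}(s) + \beta \, d_{\text{user}}(s), & \text{if } d_{\text{user}}(s) \le \lambda \\[4pt]
d_{\text{seed}}(s), & \text{otherwise}
\end{cases}
```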
Notice that if we suspect the re-ranking is not valid (user distance greater than λ), we only consider the initial distance. We selected α=0.3 and β=0.7 to give more weight to the user re-ranking while keeping a certain amount of diversity and discoverability for the user. We set λ=0.3 as a threshold to ensure a valid re-ranking.
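Putting the pieces together, here is a minimal sketch of the scoring and sorting. The function and variable names are hypothetical; the production logic lives in our recommendation micro-service.

```python
import numpy as np

ALPHA, BETA, LAMBDA = 0.3, 0.7, 0.3  # weights and validity threshold from the article

def cosine_distance(a, b):
    """1 - cosine similarity between two vectors."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def rerank(candidates, seed_emb, user_dna_emb):
    """Re-rank candidate songs: blend seed and UserDNA distances,
    falling back to the seed distance when the user distance exceeds LAMBDA."""
    scored = []
    for song_id, emb in candidates:
        d_seed = cosine_distance(emb, seed_emb)
        d_user = cosine_distance(emb, user_dna_emb)
        score = ALPHA * d_seed + BETA * d_user if d_user <= LAMBDA else d_seed
        scored.append((song_id, score))
    return sorted(scored, key=lambda x: x[1])  # lowest blended distance first

# Toy usage: song "a" matches both the seed and the user's DNA vector,
# song "b" matches neither, so "a" is pushed to the top.
ranked = rerank(
    [("a", np.array([1.0, 0.0])), ("b", np.array([0.0, 1.0]))],
    seed_emb=np.array([1.0, 0.0]),
    user_dna_emb=np.array([0.8, 0.6]),
)
```

Note how a candidate whose user distance exceeds λ keeps its original (seed-only) distance, so an unreliable DNA match never demotes or promotes it.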
Users' Listening Time
We deployed the personalized re-ranking model to 10% of our user base and, through an A/B test, observed a 3.5% average increase in users' average seconds streamed. Now deployed to all users, the uplift remains high when comparing songs with and without re-ranking.
Given a seed song, we now show a subjective assessment of the re-ranking for personalized music recommendation. If I were to describe my music taste as a user, I would say it is a combination of Mellow/Chill, a preference for unpopular songs, and not being very much into very recent songs. This also applies to other genres I listen to, such as Progressive Rock, Indie, Jazz, and Lebanese Jazz. Strange DNA!
Taking Coldplay's The Scientist as the seed song, we show below a comparison between the recommendations generated by the model and the personalized recommendations using my DNA.
The personalized recommendations better fit my preferences: more chill, not-so-popular songs, as well as more folky songs with acoustic guitars. What's also interesting is the recommendation of live performances, which I listen to very often. So the combination of acoustic features with song features to construct my Sub Genre DNA was able to model my acoustic and mood taste along with my popularity and recency preferences.
This simple re-ranker proved very helpful in matching user preferences, and required little effort compared to more advanced models or strategies. The work mainly focused on feature processing and implementing the logic in our recommendation micro-service. Moreover, it was shown to increase users' listening time, and thus user satisfaction.
More advanced models might produce more satisfactory results, and that is always a work in progress. However, it's advantageous to start small: first, to have a baseline to compare against for future, more complex models; and second, because it's important to have something up and running quickly.
Start small, think big!
Helmi Rifai, Abdallah Moussawi, Mohamad Makkawi