One of the most important applications of machine learning in a business context is the development of recommender systems. The best-known recommender engine is Amazon's. There are two approaches to implementing a recommender engine:
- Collaborative filtering: Bases predictions on users' past behavior (items they rated, ranked, or purchased) to recommend additional items to users who made similar decisions. We don't need to know the features (content) of the users or the items
- Content-based filtering: Bases predictions on the features of the items to recommend additional items with similar features
For our example, we're going to use the collaborative filtering approach because our dataset has the format (user_id, item_id (movie), rating, timestamp): we can recommend a list of movies to a user without knowing anything about the characteristics of the users or the movies.
The motivation for collaborative filtering comes from the assumption that people often get the best recommendations from someone with tastes and opinions similar to their own. Apache Spark implements the Alternating Least Squares (ALS) algorithm as part of its MLlib library.
As the sample dataset, we're going to use the MovieLens dataset from http://grouplens.org/datasets/movielens/. In summary:
- data file contains the full dataset, one row per user rating of a movie. Each row contains (user_id, movie_id, rating, timestamp) separated by a tab
- user file contains the details of the users
- item file contains the details of the movies
- 100,000 ratings (between 1 and 5)
- 943 users
- 1682 movies (items)
The first step is to prepare the environment: read the input file path from the environment variables and start a Spark context, as shown in Listing 01.
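A minimal sketch of this setup step, assuming Spark's RDD-based Scala API, a local master, and an environment variable named MOVIELENS_DATA (the variable name, application name, and default path are assumptions for illustration):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Read the input file path from an environment variable (name assumed)
val inputPath = sys.env.getOrElse("MOVIELENS_DATA", "ml-100k/u.data")

// Start a local Spark context for the example
val conf = new SparkConf().setAppName("MovieRecommender").setMaster("local[*]")
val sc = new SparkContext(conf)
```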
The next step is to represent the external data in an internal structure, an RDD of the case class Rating, by splitting each line on the tab separator and mapping the fields to Rating. Finally, we cache the RDD in memory because the ALS algorithm is iterative and needs to access the data several times; without caching, the RDD would have to be recomputed on every ALS iteration. The code is shown in Listing 02.
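A sketch of that transformation, reusing the `sc` context and `inputPath` from the setup step; the field order follows the (user_id, movie_id, rating, timestamp) layout described above:

```scala
import org.apache.spark.mllib.recommendation.Rating

val ratings = sc.textFile(inputPath).map { line =>
  // Each row: user_id \t movie_id \t rating \t timestamp
  val Array(user, movie, rating, _) = line.split("\t")
  Rating(user.toInt, movie.toInt, rating.toDouble)
}.cache() // ALS is iterative, so keep the parsed RDD in memory
```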
The next step is to build the recommendation model.
It's worth noting that there are two types of user preferences:
- Explicit preference. Treats each entry in the user-item matrix as an explicit preference given by the user to the item, for example a rating given to items by users
- Implicit preference. Derived from implicit feedback, for example views, clicks, or purchase history
Since our ratings are explicit preferences, we use the ALS.train method, as shown in Listing 03.
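A sketch of the training call, assuming the cached `ratings` RDD from the previous step; the rank, iteration count, and regularization values below are placeholder hyperparameters, not tuned values:

```scala
import org.apache.spark.mllib.recommendation.ALS

val rank = 10          // number of latent factors (assumed)
val numIterations = 10 // ALS iterations (assumed)
val lambda = 0.01      // regularization parameter (assumed)
val model = ALS.train(ratings, rank, numIterations, lambda)
```

For implicit feedback, MLlib provides the analogous ALS.trainImplicit method.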
The next step is to evaluate the performance of the recommendation model, as shown in Listing 04.
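One common way to evaluate such a model is the mean squared error (MSE) between known and predicted ratings; a sketch under that assumption, reusing `model` and `ratings` from the previous steps:

```scala
import org.apache.spark.mllib.recommendation.Rating

// Predict a rating for every (user, movie) pair we already know
val usersMovies = ratings.map { case Rating(user, movie, _) => (user, movie) }
val predictions = model.predict(usersMovies)
  .map { case Rating(user, movie, predicted) => ((user, movie), predicted) }

// Join actual and predicted ratings, then average the squared error
val ratesAndPreds = ratings
  .map { case Rating(user, movie, actual) => ((user, movie), actual) }
  .join(predictions)
val mse = ratesAndPreds.map { case (_, (actual, predicted)) =>
  val err = actual - predicted
  err * err
}.mean()
println(s"Mean Squared Error = $mse")
```

A lower MSE means the model's predicted ratings are closer to the ratings users actually gave.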
Finally, if the performance is good enough, we can start making recommendations for our users based on the model. Suppose we want to recommend five movies to the user with id 196; then we write the code shown in Listing 05. Of course, to see the real names of the movies, we need to look up the returned movie ids in the u.item file.
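A sketch of that recommendation step, using MLlib's recommendProducts method on the trained `model`; the user id 196 and the count of five come from the text above:

```scala
import org.apache.spark.mllib.recommendation.Rating

val recommendations = model.recommendProducts(196, 5)
recommendations.foreach { case Rating(user, movieId, score) =>
  // movieId can be resolved to a title via the u.item file
  println(s"movie $movieId (predicted rating $score)")
}
```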
Now you can apply these principles, knowledge, and examples to your own solutions.