How do you build a movie recommendation system via SVD using Apache Spark?


3 Answers


Technical Lead & Architect - Azure & Databricks, Spark, Kafka, Snowflake, Scala, PySpark, AWS Cloud, NoSQL

Hi Deepak, you need to use a machine learning algorithm for the recommendation system. You can select a collaborative filtering algorithm or SVD to recommend movies.
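For context, a minimal sketch of the collaborative filtering approach using Spark MLlib's ALS is shown below; the file path and the rank/iteration values are placeholder assumptions rather than details from this answer, and the input file is assumed to already have its header removed.

from pyspark import SparkContext
from pyspark.mllib.recommendation import ALS, Rating

sc = SparkContext(appName="MovieRecommender")

# Parse userId,movieId,rating lines into Rating objects
# ("ratings.csv" is a placeholder path with the header already stripped).
ratings = sc.textFile("ratings.csv") \
    .map(lambda line: line.split(",")) \
    .map(lambda tokens: Rating(int(tokens[0]), int(tokens[1]), float(tokens[2])))

# Train a matrix factorization model with ALS; rank and iterations are example values.
model = ALS.train(ratings, rank=10, iterations=10, lambda_=0.01)

# Recommend 5 movies for user 1.
print(model.recommendProducts(1, 5))

ALS learns a low-rank factorization of the user-item rating matrix, which is the same underlying idea SVD-based recommenders rely on, so it is a natural starting point before moving to an explicit SVD.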

Big Data Architect

Some students who trained under me have done this mini project. Please reach out; I can guide you.

Getting and processing the data

In order to build an on-line movie recommender using Spark, we need to have our model data as preprocessed as possible. Parsing the dataset and building the model every time a new recommendation needs to be made is not the best of strategies. The list of tasks we can pre-compute includes: loading and parsing the dataset, persisting the resulting RDD for later use, building the recommender model using the complete dataset, and persisting the dataset for later use. This notebook explains the first of these tasks.

complete_dataset_url = 'http://files.grouplens.org/datasets/movielens/ml-latest.zip'
small_dataset_url = 'http://files.grouplens.org/datasets/movielens/ml-latest-small.zip'

We also need to define download locations.

import os

datasets_path = os.path.join('..', 'datasets')
complete_dataset_path = os.path.join(datasets_path, 'ml-latest.zip')
small_dataset_path = os.path.join(datasets_path, 'ml-latest-small.zip')

import urllib

small_f = urllib.urlretrieve(small_dataset_url, small_dataset_path)
complete_f = urllib.urlretrieve(complete_dataset_url, complete_dataset_path)

Both of them are zip files containing a folder with ratings, movies, etc. We need to extract them into their individual folders so we can use each file later on.

import zipfile

with zipfile.ZipFile(small_dataset_path, "r") as z:
    z.extractall(datasets_path)

with zipfile.ZipFile(complete_dataset_path, "r") as z:
    z.extractall(datasets_path)

Loading and parsing datasets

Now we are ready to read in each of the files and create an RDD consisting of parsed lines.

Each line in the ratings dataset (ratings.csv) is formatted as: userId,movieId,rating,timestamp

Each line in the movies dataset (movies.csv) is formatted as: movieId,title,genres, where genres has the format: Genre1|Genre2|Genre3...

The tags file (tags.csv) has the format: userId,movieId,tag,timestamp

And finally, the links.csv file has the format: movieId,imdbId,tmdbId

The format of these files is uniform and simple, so we can use Python split() to parse their lines once they are loaded into RDDs. Parsing the movies and ratings files yields two RDDs: for each line in the ratings dataset, we create a tuple of (UserID, MovieID, Rating) and drop the timestamp because we do not need it for this recommender; for each line in the movies dataset, we create a tuple of (MovieID, Title) and drop the genres because we do not use them for this recommender.

So let's load the raw ratings data. We need to filter out the header, which is included in each file.

small_ratings_file = os.path.join(datasets_path, 'ml-latest-small', 'ratings.csv')
small_ratings_raw_data = sc.textFile(small_ratings_file)
small_ratings_raw_data_header = small_ratings_raw_data.take(1)[0]

Now we can parse the raw data into a new RDD.

small_ratings_data = small_ratings_raw_data.filter(lambda line: line != small_ratings_raw_data_header)\
    .map(lambda line: line.split(",")).map(lambda tokens: (tokens[0], tokens[1], tokens[2])).cache()

For illustrative purposes, we can take the first few lines of our RDD to see the result. In the final script we don't call any Spark action (e.g. take) until needed, since actions trigger actual computations in the cluster.

small_ratings_data.take(3)
[(u'1', u'6', u'2.0'), (u'1', u'22', u'3.0'), (u'1', u'32', u'2.0')]

We proceed in a similar way with the movies.csv file.
small_movies_file = os.path.join(datasets_path, 'ml-latest-small', 'movies.csv')
small_movies_raw_data = sc.textFile(small_movies_file)
small_movies_raw_data_header = small_movies_raw_data.take(1)[0]

small_movies_data = small_movies_raw_data.filter(lambda line: line != small_movies_raw_data_header)\
    .map(lambda line: line.split(",")).map(lambda tokens: (tokens[0], tokens[1])).cache()

small_movies_data.take(3)
[(u'1', u'Toy Story (1995)'), (u'2', u'Jumanji (1995)'), (u'3', u'Grumpier Old Men (1995)')]
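Since the question asks about SVD specifically, the sketch below (an assumption, not part of the original answer) shows how the small_ratings_data RDD built above could be assembled into a user-item matrix and factorized with Spark MLlib's RowMatrix.computeSVD (available in PySpark from Spark 2.2 onward); the choice of k and the row-building logic are illustrative only.

from pyspark.mllib.linalg import Vectors
from pyspark.mllib.linalg.distributed import RowMatrix

# One column per movieId (sketch assumes movieIds are usable as column indices).
num_movies = small_ratings_data.map(lambda r: int(r[1])).max() + 1

# Build one sparse row per user: index = movieId, value = rating.
user_rows = small_ratings_data \
    .map(lambda r: (int(r[0]), (int(r[1]), float(r[2])))) \
    .groupByKey() \
    .map(lambda kv: Vectors.sparse(num_movies, sorted(kv[1])))

mat = RowMatrix(user_rows)

# Keep the top 20 latent factors; k = 20 is an example value.
svd = mat.computeSVD(20, computeU=True)
U, s, V = svd.U, svd.s, svd.V  # user factors, singular values, movie factors

Predicted ratings can then be approximated from these factors (roughly U * diag(s) * V^T), while the ALS recommender in pyspark.mllib.recommendation remains the more direct route if an explicit SVD is not required.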


Related Questions

Hello, I have completed a B.Com and an MBA (Fin & M) and have 5 years of working experience in SAP PLM: 1 - Engineering documentation management, 2 - Documentation management. Please suggest which IT course suits my career growth and has scope in the market. Thanks.
If you think you are strong in finance and costing, I would suggest an SAP FICO course, which is definitely always in demand. If you have experience as an end user of SAP PLM / documentation etc., even a course on SAP PLM DMS should be good.
Priya
Can anyone suggest about Hadoop?
Hadoop is good, but it depends on your experience. If you don't know basic Java, Linux, and shell scripting, Hadoop is not beneficial for you.
Ajay
Hi everyone, what is Hadoop / big data, and what qualifications and work experience background are required for Hadoop / big data?
Hadoop is the core platform for structuring Big Data, and solves the problem of formatting it for subsequent analytics purposes. Hadoop uses a distributed computing architecture consisting of multiple...
Priya
What are some of the best blogs for Hadoop?
DBMS2 is the best personal database and analytics blog. Hortonworks’ blog is a must-read for Hadoop users. Cloudera also maintains an important Hadoop blog.
Rahul
What are some of the big data processing frameworks one should know about?
Apache Spark, Apache Akka, Apache Flink, Hadoop
Arun


Related Lessons

HDFS and MapReduce
1. HDFS (Hadoop Distributed File System): Makes a distributed filesystem look like a regular filesystem. Breaks files down into blocks. Distributes blocks to different nodes in the cluster based on...

Linux File System
Linux File System: Right click on the Desktop and click "Open in Terminal". Log in to the Linux system and run simple commands. Check the present working directory: $pwd (prints /home/cloudera/Desktop). Change directory: $cd...

Best way to learn any software Course
Hi. First confirm whether you are learning from a real-time consultant. Get some case studies from the consultant and try to complete them with the help of Google rather than the consultant, because in real time the same situation will arise. Thank you.

Use of Piggybank and Registration in Pig
What is a Piggybank? Piggybank is a jar file containing a collection of user-contributed UDFs that is released along with Pig. These are not included in the Pig JAR, so we have to register them manually...

Sachin Patil


Big DATA Hadoop Online Training
Course Content for Hadoop Developer. This course covers 100% Developer and 40% Administration syllabus. Introduction to Big Data and Hadoop: Big Data introduction, Hadoop introduction, What is Hadoop? Why Hadoop?...

Recommended Articles

Big data is a phrase used to describe a very large amount of structured (or unstructured) data. This data is so “big” that it becomes problematic to handle using conventional database techniques and software. A Big Data Scientist is a business employee who is responsible for handling and statistically evaluating...

Read full article >

In the domain of Information Technology, there is always a lot to learn and implement. However, some technologies have relatively higher demand than others. So here are some popular IT courses for the present and the upcoming future. Cloud Computing: Cloud Computing is a computing technique which is used...

Read full article >

Hadoop is a framework which has been developed for organizing and analysing big chunks of data for a business. Suppose you have a file larger than your system’s storage capacity and you can’t store it. Hadoop helps in storing bigger files than what could be stored on one particular server. You can therefore store very,...

Read full article >

We have already discussed why and how “Big Data” is all set to revolutionize our lives, professions and the way we communicate. Data is growing by leaps and bounds. The Walmart database handles over 2.6 petabytes of massive data from several million customer transactions every hour. Facebook database, similarly handles...

Read full article >
