
How do you build a movie recommendation system via SVD using Apache Spark?


3 Answers


Technical Lead & Architect - Azure & Databricks, Spark, Kafka, Snowflake, Scala, PySpark, AWS Cloud, NoSQL

Hi Deepak, you need to use a machine learning algorithm for the recommendation system. You can use a collaborative filtering algorithm (for example, an SVD-style matrix factorization) to recommend movies.
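
To make that concrete, here is a minimal sketch of collaborative filtering with Spark's built-in ALS estimator (ALS is MLlib's matrix-factorization recommender, the closest thing it ships to an SVD-style model). The file name ratings.csv and its userId/movieId/rating columns are assumptions for illustration, matching the MovieLens layout used in the longer answer below:

# Sketch only: collaborative filtering with Spark ML's ALS.
# Assumes Spark 2.2+ and a MovieLens-style ratings.csv (userId,movieId,rating,...).
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("MovieRecommender").getOrCreate()

ratings = (spark.read.csv("ratings.csv", header=True, inferSchema=True)
                .select("userId", "movieId", "rating"))

# ALS factorizes the user-movie rating matrix into low-rank user and movie
# factors, the same low-rank idea that underlies SVD-based recommenders.
als = ALS(rank=10, maxIter=10, regParam=0.1,
          userCol="userId", itemCol="movieId", ratingCol="rating",
          coldStartStrategy="drop")
model = als.fit(ratings)

# Top 5 movie recommendations per user.
model.recommendForAllUsers(5).show(truncate=False)

Here coldStartStrategy="drop" simply excludes users or movies unseen at training time from the output rather than producing NaN predictions.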

Big Data Architect

Some students who trained under me have done this mini project. Please reach out; I can guide you.

Getting and processing the data

In order to build an online movie recommender using Spark, we need to have our model data as preprocessed as possible. Parsing the dataset and building the model every time a new recommendation needs to be made is not the best of strategies. The list of tasks we can pre-compute includes:

  • Loading and parsing the dataset.
  • Persisting the resulting RDD for later use.
  • Building the recommender model using the complete dataset.
  • Persisting the model for later use.

This notebook explains the first of these tasks. We start with the locations of the MovieLens dataset files:

complete_dataset_url = 'http://files.grouplens.org/datasets/movielens/ml-latest.zip'
small_dataset_url = 'http://files.grouplens.org/datasets/movielens/ml-latest-small.zip'

We also need to define download locations.

import os

datasets_path = os.path.join('..', 'datasets')

complete_dataset_path = os.path.join(datasets_path, 'ml-latest.zip')
small_dataset_path = os.path.join(datasets_path, 'ml-latest-small.zip')

Now we can download both files (urllib.urlretrieve is Python 2; on Python 3, use urllib.request.urlretrieve instead):

import urllib

small_f = urllib.urlretrieve(small_dataset_url, small_dataset_path)
complete_f = urllib.urlretrieve(complete_dataset_url, complete_dataset_path)

Both of them are zip files containing a folder with ratings, movies, etc. We need to extract them into their individual folders so we can use each file later on.

import zipfile

with zipfile.ZipFile(small_dataset_path, "r") as z:
    z.extractall(datasets_path)

with zipfile.ZipFile(complete_dataset_path, "r") as z:
    z.extractall(datasets_path)

Loading and parsing datasets

Now we are ready to read in each of the files and create an RDD consisting of parsed lines.

Each line in the ratings dataset (ratings.csv) is formatted as:

userId,movieId,rating,timestamp

Each line in the movies dataset (movies.csv) is formatted as:

movieId,title,genres

where genres has the format:

Genre1|Genre2|Genre3...

The tags file (tags.csv) has the format:

userId,movieId,tag,timestamp

And finally, the links.csv file has the format:

movieId,imdbId,tmdbId

The format of these files is uniform and simple, so we can use Python's split() to parse their lines once they are loaded into RDDs. Parsing the movies and ratings files yields two RDDs:

  • For each line in the ratings dataset, we create a tuple of (UserID, MovieID, Rating). We drop the timestamp because we do not need it for this recommender.
  • For each line in the movies dataset, we create a tuple of (MovieID, Title). We drop the genres because we do not use them for this recommender.

So let's load the raw ratings data. We need to filter out the header that is included in each file.

small_ratings_file = os.path.join(datasets_path, 'ml-latest-small', 'ratings.csv')

small_ratings_raw_data = sc.textFile(small_ratings_file)
small_ratings_raw_data_header = small_ratings_raw_data.take(1)[0]

Now we can parse the raw data into a new RDD:

small_ratings_data = small_ratings_raw_data\
    .filter(lambda line: line != small_ratings_raw_data_header)\
    .map(lambda line: line.split(","))\
    .map(lambda tokens: (tokens[0], tokens[1], tokens[2]))\
    .cache()

For illustrative purposes, we can take the first few lines of our RDD to see the result. In the final script we don't call any Spark action (e.g. take) until needed, since actions trigger actual computations in the cluster.

small_ratings_data.take(3)

[(u'1', u'6', u'2.0'), (u'1', u'22', u'3.0'), (u'1', u'32', u'2.0')]

We proceed in a similar way with the movies.csv file:

small_movies_file = os.path.join(datasets_path, 'ml-latest-small', 'movies.csv')

small_movies_raw_data = sc.textFile(small_movies_file)
small_movies_raw_data_header = small_movies_raw_data.take(1)[0]

small_movies_data = small_movies_raw_data\
    .filter(lambda line: line != small_movies_raw_data_header)\
    .map(lambda line: line.split(","))\
    .map(lambda tokens: (tokens[0], tokens[1]))\
    .cache()

small_movies_data.take(3)

[(u'1', u'Toy Story (1995)'), (u'2', u'Jumanji (1995)'), (u'3', u'Grumpier Old Men (1995)')]
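
Since the question asks specifically about SVD: the parsed small_ratings_data RDD above can also be fed into Spark's distributed linear algebra API to compute a truncated SVD directly. This is a sketch under the assumption of Spark 2.2+ (where RowMatrix.computeSVD is exposed in the Python API); the choice of k=10 latent factors is arbitrary, for illustration only:

# Sketch: truncated SVD of the user-movie rating matrix with MLlib's
# distributed matrices, building on small_ratings_data from above.
from pyspark.mllib.linalg.distributed import CoordinateMatrix, MatrixEntry

entries = small_ratings_data.map(
    lambda t: MatrixEntry(int(t[0]), int(t[1]), float(t[2])))
mat = CoordinateMatrix(entries).toRowMatrix()  # rows ~ users, columns ~ movies

# Keep k latent factors: A is approximated by U * diag(s) * V^T.
svd = mat.computeSVD(10, computeU=True)
U, s, V = svd.U, svd.s, svd.V  # user factors, singular values, movie factors

# A rating can then be predicted from the factors, conceptually:
#   score(user u, movie m) = sum over f of U[u, f] * s[f] * V[m, f]

In practice, most Spark recommenders use ALS (pyspark.ml.recommendation.ALS) rather than an explicit SVD, because ALS handles the many missing entries of a ratings matrix gracefully, whereas a plain SVD treats them as zeros.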


Related Questions

What is speculative execution in Hadoop?
Speculative execution in Hadoop runs duplicate copies of slow tasks on different nodes and uses the result from whichever copy completes first, so that a few straggling tasks do not hold up the whole job.
Divya
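For context, Apache Spark offers the same idea under the name "speculation". Below is a minimal sketch of enabling it from PySpark; the quantile and multiplier values shown are Spark's documented defaults, included only for illustration:

# Sketch: enabling Spark's speculative execution (the analogue of Hadoop's).
# Slow tasks get duplicate backup copies; the first copy to finish wins.
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("SpeculationDemo")
        .set("spark.speculation", "true")
        .set("spark.speculation.quantile", "0.75")    # fraction of tasks that must finish before checking
        .set("spark.speculation.multiplier", "1.5"))  # how much slower than the median counts as slow
sc = SparkContext(conf=conf)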
Hello, I have completed a B.Com and an MBA (Fin & M), and I have 5 years of working experience in SAP PLM: 1 - Engineering documentation management, 2 - Documentation management. Please suggest which IT course suits my career growth and has scope in the market. Thanks.
If you think you are strong in finance and costing, I would suggest a SAP FICO course, which is definitely always in demand. If you have experience as an end user of SAP PLM / documentation etc., even a course on SAP PLM DMS would be good.
Priya
How many nodes can there be in a single Hadoop cluster?
A single Hadoop cluster can have thousands of nodes, depending on hardware and configuration.
Tahir
Is an MBA student eligible to pursue a Hadoop course?
Yes, some institutes offer courses on big data, such as an MBA in analytics. Google it and you will find more information.
Osheen


Related Lessons

CheckPointing Process - Hadoop
Checkpointing is one of the vital processes in Hadoop. The NameNode stores the metadata information on its hard disk, and we all know that metadata is the heart of the...

Solving the issue of Namenode not starting during Single Node Hadoop installation
On firing the jps command, if you see that the NameNode is not running during a single-node Hadoop installation, here are the steps to get the NameNode running. Problem: NameNode not getting started. Solution:...
Biswanath Banerjee


HDFS And Mapreduce
1. HDFS (Hadoop Distributed File System): makes a distributed filesystem look like a regular filesystem, breaks files down into blocks, and distributes the blocks to different nodes in the cluster based on...
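
As a toy illustration of the block idea only (real HDFS does this inside the NameNode and DataNodes; this sketch is not HDFS code, and 128 MB is just a common HDFS default block size):

# Toy sketch: cutting a file into HDFS-sized blocks.
BLOCK_SIZE = 128 * 1024 * 1024  # 128 MB, a common HDFS default

def split_into_blocks(path, block_size=BLOCK_SIZE):
    """Yield (block_index, payload) chunks, the way HDFS cuts files."""
    with open(path, "rb") as f:
        index = 0
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            yield index, chunk
            index += 1

# Each block would then be replicated (3x by default) across DataNodes.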

Up, Up And Up of Hadoop's Future
The onset of Digital Architectures in enterprise businesses implies the ability to drive continuous online interactions with global consumers/customers/clients or patients. The goal is not just to provide...

REDHAT
Configuring sudo. Basic syntax:

USER MACHINE = (RUN_AS) COMMANDS

Examples:

%group  ALL = (root) /sbin/ifconfig
%wheel  ALL = (ALL) ALL
%admins ALL = (ALL) NOPASSWD: ALL

Grant user access to commands in NETWORKING...

Recommended Articles

Big data is a phrase used to describe a very large amount of structured (or unstructured) data. This data is so “big” that it becomes problematic to handle using conventional database techniques and software. A Big Data Scientist is a business employee who is responsible for handling and statistically evaluating...

Read full article >

In the domain of Information Technology, there is always a lot to learn and implement. However, some technologies are in relatively higher demand than others. So here are some popular IT courses for the present and the upcoming future. Cloud Computing: Cloud Computing is a computing technique which is used...

Read full article >

Hadoop is a framework developed for organizing and analysing big chunks of data for a business. Suppose you have a file larger than your system’s storage capacity, so you can’t store it. Hadoop helps in storing files bigger than what could be stored on one particular server. You can therefore store very,...

Read full article >

We have already discussed why and how “Big Data” is all set to revolutionize our lives, professions and the way we communicate. Data is growing by leaps and bounds. The Walmart database handles over 2.6 petabytes of data from several million customer transactions every hour. Facebook's database similarly handles...

Read full article >
