About the Course
Apache Hadoop enables organizations to analyze massive volumes of structured and unstructured data and is currently a very hot trend across the software industry. Hadoop is expected to be adopted as the default enterprise data hub by most enterprises soon; hence it is being tagged by many as one of the most desired tech skills for 2014 and the coming years.
This course will give you an excellent kick-start in building your fundamentals for developing big data solutions on the Hadoop platform and its ecosystem tools. The course is well balanced between theory and hands-on labs (more than 15 lab exercises) spread across real-world use cases such as retail data analysis, sentiment analysis, log analysis, and real-time trend analysis.
Topics Covered
Course Content:
• What is Big Data & Why Hadoop?
• Big Data Characteristics, Challenges with traditional system
• Hadoop Overview & its Ecosystem
• Anatomy of Hadoop Cluster, Installing and Configuring Hadoop
• Setting up a Hadoop cluster (Single Node)
• HDFS and YARN
• HDFS Architecture, NameNode, DataNodes and Secondary NameNode
• Understanding HDFS HA and Federation architecture
• YARN Architecture, Resource Manager, Node Manager and Application Master
• Hands-On Exercise
• MapReduce Anatomy (MR2)
• How MapReduce Works
• Writing Mapper, Reducer and Driver using Java APIs
• Understanding Hadoop Data Types, Input & Output Formats
• Hands On Exercises
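Before writing Mappers and Reducers against the Java API, the flow of a MapReduce job can be previewed with a small pure-Python sketch. This is a conceptual illustration only, not the Hadoop API itself, and the sample input is made up; in Hadoop, the shuffle/sort step shown here is performed by the framework between the map and reduce tasks.

```python
from collections import defaultdict

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in the input line.
    for word in line.split():
        yield (word.lower(), 1)

def reducer(word, counts):
    # Reduce phase: sum all counts emitted for one key.
    return (word, sum(counts))

def run_wordcount(lines):
    # Shuffle/sort phase: group intermediate pairs by key,
    # which Hadoop does automatically between map and reduce.
    groups = defaultdict(list)
    for line in lines:
        for key, value in mapper(line):
            groups[key].append(value)
    return dict(reducer(k, v) for k, v in sorted(groups.items()))

print(run_wordcount(["big data", "big hadoop data"]))
# → {'big': 2, 'data': 2, 'hadoop': 1}
```

The same three phases (map, shuffle, reduce) map one-to-one onto the Mapper, framework shuffle, and Reducer classes covered in the lab.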
• Developing MapReduce Programs
• Setting up Eclipse Development Environment, Creating MapReduce Projects, Debugging and Unit Testing
• Developing a MapReduce algorithm for a real-world scenario
• Hands On Exercises
• Advanced MapReduce Concepts
• Combiner, Partitioner, Counter, Setup and cleanup, Distributed Cache
• Passing parameters, Multiple Inputs, Chaining multiple jobs
• Applying Compression, Speculative Execution, Zero Reducers
• Handling small files and bad records, handling binary data like images, documents, etc.
• Map and Reduce Side Joins, data partitioning
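The reduce-side join listed above can also be previewed conceptually: each record is tagged with its source table in the map phase, the shuffle groups both sides by the join key, and the reducer pairs them up. The sketch below is pure Python with invented sample data, not the Hadoop API.

```python
from collections import defaultdict

def reduce_side_join(orders, customers):
    # Map phase: tag each record with its source table ("C" or "O")
    # so the reducer can tell the two sides apart after the shuffle.
    tagged = [("C", c["id"], c["name"]) for c in customers]
    tagged += [("O", o["cust_id"], o["amount"]) for o in orders]

    # Shuffle: group all tagged records by the join key.
    groups = defaultdict(list)
    for tag, key, value in tagged:
        groups[key].append((tag, value))

    # Reduce phase: pair every order with its customer record.
    joined = []
    for key, records in groups.items():
        names = [v for t, v in records if t == "C"]
        amounts = [v for t, v in records if t == "O"]
        for name in names:
            for amount in amounts:
                joined.append((key, name, amount))
    return sorted(joined)

customers = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
orders = [{"cust_id": 1, "amount": 30}, {"cust_id": 1, "amount": 12}]
print(reduce_side_join(orders, customers))
# → [(1, 'alice', 12), (1, 'alice', 30)]
```

A map-side join avoids the shuffle entirely by loading the smaller table into memory on every mapper (in Hadoop, typically via the Distributed Cache covered above).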
• Sqoop & Flume
• Importing and Exporting data from RDBMS using Sqoop
• Importing and Exporting data from non-RDBMS sources using Flume
• Hands On Exercise using Sqoop
• Structured Data Analysis using Hive
• Hive Architecture, Internal & External Tables, Partitioning, Buckets
• Writing queries – Joins, Union, Dynamic partitioning, Sampling
• Writing UDFs, reading different data formats
• Hands On Exercise
• Semi-structured or Unstructured Data Analysis using Pig
• Pig Basics, Loading data files
• Writing queries – SPLIT, FILTER, JOIN, GROUP, SAMPLE, ILLUSTRATE etc.
• Writing UDFs
• Hands On Exercise – Tweets Analysis
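The kind of pipeline built in the tweets-analysis lab (FILTER out irrelevant records, then GROUP and count) can be sketched in plain Python to show the idea behind the Pig operators. The tweet strings and the `top_hashtags` helper below are illustrative inventions, not part of Pig.

```python
from collections import Counter
import re

def top_hashtags(tweets, min_count=1):
    # FILTER-like step: keep only tweets that contain a hashtag.
    with_tags = [t for t in tweets if "#" in t]
    # GROUP/COUNT-like step: count occurrences of each hashtag.
    tags = Counter(tag.lower() for t in with_tags
                   for tag in re.findall(r"#\w+", t))
    return [(tag, n) for tag, n in tags.most_common() if n >= min_count]

tweets = ["learning #hadoop today", "#hadoop and #pig rock", "no tags here"]
print(top_hashtags(tweets))
# → [('#hadoop', 2), ('#pig', 1)]
```

In Pig Latin the same pipeline would be expressed declaratively with FILTER, GROUP and COUNT, and the engine compiles it down to MapReduce jobs.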
• Orchestrating data workflows using Oozie
• Understanding Oozie workflow definitions
• Hands On Exercise – Writing a workflow
• Hadoop Best Practices, Advanced Tips & Techniques
• Managing HDFS and YARN
• Hadoop Cluster sizing, capacity planning and optimization
• Hadoop Deployment options
Who should attend
Architects and developers who wish to write, build and maintain Apache Hadoop jobs.
Pre-requisites
The participants should have basic knowledge of Java, SQL and Linux. It is advised to refresh these skills to obtain maximum benefit from this workshop.
What you need to bring
–
Key Takeaways
The attendees will learn the topics below through lectures and hands-on exercises:
– Understand Big Data, Hadoop 2.0 architecture and its Ecosystem
– Deep Dive into HDFS and YARN Architecture
– Writing MapReduce algorithms using Java APIs
– Advanced MapReduce features & algorithms
– How to leverage Hive & Pig for structured and unstructured data analysis
– Data import and export using Sqoop and Flume, and creating workflows using Oozie
– Hadoop Best Practices, Sizing and capacity planning
– Creating reference architectures for big data solutions