Big Data Updates


Lesson Posted on 01 May IT Courses/Apache Spark IT Courses/Big Data Tuition/BTech Tuition/Big Data Analytics IT Courses/Hadoop

Loading Hive tables as a Parquet file

Silvia Priya

Experienced data warehouse professional for 10 years. Certified Big Data-Hadoop and Python trainer. I...


Hive tables are very important when it comes to Hadoop and Spark, as both can integrate with and process the tables stored in Hive.

Let's see how we can create a Hive table that internally stores its records in the Parquet format.

 

Storing a Hive table as a Parquet file with Snappy compression in a traditional Hive shell

 

1. Create a Hive table called transaction and load it with records using the load command.

 create table transaction(no int,tdate string,userno int,amt int,pro string,city string,pay string) row format delimited fields terminated by ',';

 

load data local inpath '/home/cloudera/online/hive/transactions' into table transaction;

 

2. Create another Hive table named tran_snappy with the storage type as Parquet and the compression technique as Snappy.

 create table tran_snappy(no int,tdate string,userno int,amt int,pro string,city string,pay string)  stored as parquet tblproperties('parquet.compression' = 'SNAPPY');

 

3. Insert into the second table the records from the first table.

insert into table tran_snappy select * from transaction;

 

4. Go to the /user/hive/warehouse directory to check whether the files are stored in Snappy-compressed Parquet format; a quick in-shell check is also sketched below.
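
A quick way to confirm this without browsing HDFS is describe formatted, which prints the table's SerDe and table properties; the Parquet SerDe and the parquet.compression property should both be visible. A minimal check in the hive shell (the table name comes from step 2 above):

-- run in the hive shell
describe formatted tran_snappy;
-- expect the SerDe Library entry to be the Parquet SerDe,
-- and Table Parameters to include parquet.compression = SNAPPY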

 

Storing a Hive table as a Parquet file with Snappy compression in Spark SQL

 

1. Import the Hive context in the spark shell and create the source Hive table.

scala> import org.apache.spark.sql.hive.HiveContext

scala> val sqlContext = new HiveContext(sc)

scala> sqlContext.sql("create table transaction(no int,tdate string,userno int,amt int,pro string,city string,pay string) row format delimited fields terminated by ','")

 

2. Load the created table.

scala> sqlContext.sql("load data local inpath '/home/cloudera/online/hive/transactions' into table transaction")

 

3. Create a Snappy-compressed Parquet table.

scala> sqlContext.sql("create table tran_snappy(no int,tdate string,userno int,amt int,pro string,city string,pay string) stored as parquet tblproperties('parquet.compression' = 'SNAPPY')")

 

4. Read the records from the table created in step 1 and insert them into the Snappy table.

scala> val records_tran = sqlContext.sql("select * from transaction")

 

scala> records_tran.insertInto("tran_snappy")

 

Now the records are inserted into the Snappy-compressed Hive table. Go to the /user/hive/warehouse directory to check whether the Parquet files were generated for the corresponding table; a quick sanity check is also sketched below.
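
As a quick sanity check, the row counts of the source table and the compressed table should match. The statements below can be run from the hive shell, or passed to sqlContext.sql() in the spark shell after dropping the trailing semicolons:

-- both counts should be identical if the insert copied every record
select count(*) from transaction;
select count(*) from tran_snappy;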


Answered on 27/11/2019 IT Courses/Big Data


I have an Apache Pig query. Can you give me an answer for this query: on average, which state had the highest amount of precipitation in September 2013?


URIAH Training Center

Kerala

Lesson Posted on 14/09/2019 IT Courses/Infor ERP IT Courses/Big Data

training #bigdatalab #online

JOSHUA CHARLES

I have 6+ years of IT experience, with responsibilities such as Production/Application Support...


Fully equipped Big Data lab for training and practice. Users can practice Big Data, data science and machine learning technologies. Users can access the lab through the internet and learn from anywhere. Kindly contact me for activation and subscription.



Answered on 19/08/2019 IT Courses/Big Data

Sathya Narayanan

Experienced BigData professional with 17+ years of IT experience


Go for Spark training. Spark has already started replacing Hadoop (MapReduce in particular), and since you are already familiar with SQL, it will be easier for you. Python is a much-used language in Big Data, but you don't need to learn it; it is mainly used for ML/data science projects. As for learning Spark, you need to know fundamental concepts like distributed computing, the internals of Spark, etc. All the best!


Lesson Posted on 29/03/2019 IT Courses/Big Data IT Courses/Hadoop IT Courses/Big Data/Big Data Testing Tuition/BTech Tuition/Big Data Analytics IT Courses/Hadoop/Hadoop Testing

CheckPointing Process - Hadoop

Silvia Priya

Experienced data warehouse professional for 10 years. Certified Big Data-Hadoop and Python trainer. I...


The checkpointing process is one of the important concepts/activities in Hadoop. The NameNode stores the metadata information on its hard disk.

We all know that metadata is the heart of the distributed file system; if it is lost, we cannot access any files inside the file system.

 

The metadata is physically stored on the machine in the form of two files:

1. FSIMAGE - A snapshot of the file system at a point in time.

2. EDITS FILE - Contains every transaction (creation, deletion, moving, renaming, copying, etc. of files) in the file system.

With HA (High Availability) in Hadoop v2, a backup of the NN's metadata is stored on another machine called the SNN (Standby NameNode). Since metadata is accessed very frequently by different clients reading different files, instead of keeping it only on the hard disk it is good to hold it in RAM, so that it can be accessed faster.

But stop... what happens if the machine goes down? We would lose everything in RAM. Hence taking a backup of the data stored in RAM is a viable option.

 

 

FSIMAGE0 -- Represents the fsimage file at a particular time

FSIMAGE1 -- Represents the copy of the FSIMAGE0 file, taken as a backup.

Let's imagine the backup of the file has to be taken every 6 hours. If something goes wrong in the cluster and the machine goes down before the backup is taken, i.e. before the 6 hours elapse, then we end up losing the latest fsimage.

 

To overcome this problem, a dedicated machine has to be added to the cluster to safeguard the metadata in an efficient way, and this process is called the checkpointing process.

 

 

Let's understand the process step by step.

STEP 1

A copy of the metadata (fsimage and edits file) from the NameNode is taken and placed inside the Secondary NameNode (SNN).

STEP 2

Once the copy is placed in the SNN, the edits file, which captures every single transaction happening in the file system, is merged with the fsimage file (the snapshot of the file system). The merged result gives the updated, latest file system image.

STEP 3

The latest merged fsimage is moved back to the NN's metadata location.

STEP 4

While the merge is going on, some files may still be deleted, created or copied; in other words, some transactions could have happened, and those details are stored in a new file called edits.new, because the original edits file is being used for copying into the SNN (remember the deadlock principle).

STEP 5

Now the edits.new file becomes the latest edits file, and the merged fsimage becomes the current fsimage file. This process is repeated at a specific interval.

So now, no more backups are needed to save the metadata in the NN in failover scenarios.

We will see more details and programs in the upcoming lessons.

Thank you!!


Lesson Posted on 29/03/2019 IT Courses/Big Data IT Courses/Big Data/Big Data Testing Tuition/BTech Tuition/Big Data Analytics

Big Data for Beginners

Silvia Priya

Experienced data warehouse professional for 10 years. Certified Big Data-Hadoop and Python trainer. I...


Hello Big Data Enthusiast,

Many of you would have heard the term "Big Data" being buzzed about everywhere and wondered what it could be.

OK, let's sort things out with an example.

Imagine you have a machine with 8 GB of storage, and you want to store 12 GB of data from a client and perform some analytics on it. So, think of the possible ways in which you could store the desired data.

1. Extend your hard disk capacity to around 15 GB or beyond so the data can be stored successfully.

2. Hire a cloud service and upload the data to the cloud for analysis; but if the client doesn't want to upload the data to the cloud for various reasons, then this option is ruled out.

3. Upload the data into a distributed file system, after analysing its pros and cons.

True, you can follow any one of the above-mentioned options.

This 12 GB of data, which is beyond the machine's storage capacity, is what is actually called BIG DATA.

This big data can come in any format/type: structured, unstructured or semi-structured.

Structured - RDBMS data (table data with proper rows, columns, keys, etc.)

Unstructured - Images, pictures, videos, etc.

Semi-structured - Files in formats such as HTML, XML, etc.

 

Data can be big data to you and not necessarily be big data to another person.

Seems confusing? Going back to the previous example, if you have a machine with 20 GB of storage capacity, you can conveniently store the 12 GB, and it is not big data for you in the first place.

 

Hope you have now climbed a little way up the mountain of big data!

Big data can also be defined in another way, if it satisfies the criteria below.

 

 

Volume  

If the size of the data you are planning to analyse is much bigger than the capacity of your machine, then call it big data.

Velocity

If the rate or speed at which data enters your machine increases exponentially with respect to time, then call it big data.

Variety

The data could be in any format: structured, unstructured or semi-structured, as we have seen previously.

Veracity

The data that we are going to process can contain some uncertain information or incorrect data.

Value

The data should make some sense to the business; that is, we should be able to derive some analysis out of the data.

 

Now, putting all of the above information together, you can take your first step into learning big data.

We will see more insights about this in our next lesson.

Thank you!!


Answered on 23/12/2018 IT Courses/Hadoop IT Courses/Hadoop/Hadoop Testing IT Courses/Big Data Tuition/BTech Tuition/Big Data Analytics

What is Hive on Spark?

Hemanth Reddy


Hi Priyanka,

Spark has many tools, like Spark SQL, MLlib, etc.

Spark SQL is the tool that works like Hive within Spark. We just need to log in to the spark-sql console by using the spark-sql command on the Linux box, where you can run the same SQL. Spark SQL is much, much faster than Hive, so we tend to go with Spark SQL rather than Hive. I think I have answered your question; if you still have any queries, you can reach out to me directly. A small example of the console in use is sketched below.
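
For illustration only (the table and column names below are hypothetical, loosely borrowed from the Hive lesson earlier on this page), any HiveQL you already run in the hive shell works unchanged once the spark-sql console is open:

-- console started with the spark-sql command on the Linux box
show tables;

-- an ordinary HiveQL query; transaction, city and amt are illustrative names
select city, sum(amt) as total_amt
from transaction
group by city;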

 

 

 


Answered on 23/12/2018 IT Courses/Hadoop IT Courses/Hadoop/Hadoop Testing IT Courses/Big Data Tuition/BTech Tuition/Big Data Analytics

How difficult is it to get a job on Big Data/Hadoop?

Sivaram

Nothing is difficult in Hadoop. It's very easy to learn.


Answered on 23/12/2018 IT Courses/Hadoop IT Courses/Hadoop/Hadoop Testing IT Courses/Big Data Tuition/BTech Tuition/Big Data Analytics

Will data flair be a good site to learn big data and Hadoop?

Hemanth Reddy

DataFlair and AcadGild are also good sources to study Hadoop.

About UrbanPro

UrbanPro.com helps you to connect with the best Big Data Training in India. Post Your Requirement today and get connected.


UrbanPro.com is India's largest network of most trusted tutors and institutes. Over 25 lakh students rely on UrbanPro.com to fulfill their learning requirements across 1,000+ categories. Using UrbanPro.com, parents and students can compare multiple Tutors and Institutes and choose the one that best suits their requirements. More than 6.5 lakh verified Tutors and Institutes are helping millions of students every day and growing their tutoring business on UrbanPro.com. Whether you are looking for a tutor to learn mathematics, a German language trainer to brush up your German language skills or an institute to upgrade your IT skills, we have got the best selection of Tutors and Training Institutes for you.