Loading Hive tables as a parquet file

Silvia Priya
01/05/2020

Hive tables are very important in the Hadoop ecosystem, as both Hadoop and Spark can integrate with and process the tables stored in Hive.

Let's see how we can create a hive table that internally stores its records in the parquet format.

 

Storing a hive table as a parquet file with snappy compression in the traditional hive shell

 

  1. Create a hive table called transaction and load it with records using the load command.

create table transaction(no int, tdate string, userno int, amt int, pro string, city string, pay string) row format delimited fields terminated by ',';

 

load data local inpath '/home/cloudera/online/hive/transactions' into table transaction;

 

  2. Create another hive table named tran_snappy with storage type as parquet and compression technique as snappy.

create table tran_snappy(no int, tdate string, userno int, amt int, pro string, city string, pay string) stored as parquet tblproperties('parquet.compression' = 'SNAPPY');

 

  3. Populate the second table with the records from the first table.

insert into table tran_snappy select * from transaction;

 

  4. Go to the /user/hive/warehouse directory to check whether the table's data files are snappy-compressed parquet, as sketched below.
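From the hive shell, describe formatted tran_snappy; will show the parquet storage format and the parquet.compression=SNAPPY table property. The directory can also be listed programmatically, for instance from the spark shell used in the next section. A minimal sketch, assuming Hive's default warehouse location /user/hive/warehouse and the default database (the exact data-file names vary by engine and version):

import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(sc.hadoopConfiguration)

// List the data files Hive wrote for the tran_snappy table
fs.listStatus(new Path("/user/hive/warehouse/tran_snappy")).foreach(s => println(s.getPath))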

 

Storing a hive table as a parquet file with snappy compression in Spark SQL

 

1. Import the hive context in the spark shell and create the source hive table (the parquet conversion happens in step 3).

import org.apache.spark.sql.hive.HiveContext

val sqlContext = new HiveContext(sc)

scala> sqlContext.sql("create table transaction(no int, tdate string, userno int, amt int, pro string, city string, pay string) row format delimited fields terminated by ','")

 

2. Load the created table.

scala> sqlContext.sql("load data local inpath '/home/cloudera/online/hive/transactions' into table transaction")

 

3. Create a snappy-compressed parquet table.

scala> sqlContext.sql("create table tran_snappy(no int, tdate string, userno int, amt int, pro string, city string, pay string) stored as parquet tblproperties('parquet.compression' = 'SNAPPY')")

 

4. Select the records from the table created in step 1 and insert them into the snappy table.

scala> val records_tran = sqlContext.sql("select * from transaction")

scala> records_tran.insertInto("tran_snappy")
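The same insert can also be issued as a single HiveQL statement through sqlContext, mirroring step 3 of the hive-shell section:

scala> sqlContext.sql("insert into table tran_snappy select * from transaction")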

 

Now the records are inserted into the snappy-compressed hive table. Go to the /user/hive/warehouse directory to check whether the parquet files were generated for the table; they can also be read straight back, as sketched below.
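As a quick sanity check from the same shell, the freshly written files can be read back as parquet. A minimal sketch, assuming Spark 1.4+ (where sqlContext.read is available) and the default warehouse location used above:

scala> val written = sqlContext.read.parquet("/user/hive/warehouse/tran_snappy")

scala> written.count()   // should match select count(*) from transaction

scala> written.show(5)   // eyeball a few rows and the column names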
