
Loading Hive Tables as a Parquet File

Silvia Priya
01 May

Hive tables are important to both Hadoop and Spark, since either framework can integrate with Hive and process its tables.

Let's see how to create a Hive table that internally stores its records in Parquet format.

 

Storing a Hive table as a Parquet file with Snappy compression in the traditional Hive shell

 

  1. Create a Hive table called transaction and load it with records using the load command.

 create table transaction(no int,tdate string,userno int,amt int,pro string,city string,pay string) row format delimited fields terminated by ',';

 

load data local inpath '/home/cloudera/online/hive/transactions' into table transaction;

 

  2. Create another Hive table named tran_snappy with Parquet as the storage format and Snappy as the compression codec.

 create table tran_snappy(no int,tdate string,userno int,amt int,pro string,city string,pay string)  stored as parquet tblproperties('parquet.compression' = 'SNAPPY');

 

  3. Insert records from the first table into the second.

insert into table tran_snappy select * from transaction;

 

  4. Check the table's directory under /user/hive/warehouse to verify that the files are Snappy-compressed Parquet.
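The check in step 4 can be sketched from a terminal on the cluster. This assumes the default warehouse location, a non-partitioned table, and that the parquet-tools utility is installed; the data file name (000000_0) is typical Hive output but may differ on your setup:

```shell
# List the files Hive wrote for the tran_snappy table
hdfs dfs -ls /user/hive/warehouse/tran_snappy

# Copy one data file locally and inspect its Parquet footer;
# the metadata should report SNAPPY as the compression codec
hdfs dfs -get /user/hive/warehouse/tran_snappy/000000_0 /tmp/tran_snappy.parquet
parquet-tools meta /tmp/tran_snappy.parquet
```

If parquet-tools is unavailable, the compression setting can also be confirmed from the Hive shell with `show tblproperties tran_snappy;`.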

 

Storing a Hive table as a Parquet file with Snappy compression in Spark SQL

 

1. Import HiveContext in the Spark shell and create the Hive table.

scala> import org.apache.spark.sql.hive.HiveContext

scala> val sqlContext = new HiveContext(sc)

scala> sqlContext.sql("create table transaction(no int,tdate string,userno int,amt int,pro string,city string,pay string) row format delimited fields terminated by ','")

 

2. Load the created table.

scala> sqlContext.sql("load data local inpath '/home/cloudera/online/hive/transactions' into table transaction")

 

3. Create a Snappy-compressed Parquet table.

scala> sqlContext.sql("create table tran_snappy(no int,tdate string,userno int,amt int,pro string,city string,pay string) stored as parquet tblproperties('parquet.compression' = 'SNAPPY')")

 

4. Select the records from the table created in step 1 and insert them into the Parquet table.

scala> val records_tran = sqlContext.sql("select * from transaction")

 

scala> records_tran.insertInto("tran_snappy")

 

The records are now inserted into the Snappy-compressed Hive table. Check the table's directory under /user/hive/warehouse to confirm that Parquet files were generated for it.
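On Spark 2.x and later, HiveContext is deprecated in favour of SparkSession. The same flow can be sketched as below; this is a sketch that assumes Hive support is configured on the cluster (a reachable metastore), and the session builder settings shown are illustrative:

```scala
import org.apache.spark.sql.SparkSession

// Build a session with Hive support so spark.sql can see the Hive metastore.
val spark = SparkSession.builder()
  .appName("hive-parquet-snappy")
  .enableHiveSupport()
  .getOrCreate()

// Same DDL as above, issued through spark.sql.
spark.sql("""create table if not exists tran_snappy(no int, tdate string,
  userno int, amt int, pro string, city string, pay string)
  stored as parquet tblproperties('parquet.compression' = 'SNAPPY')""")

// Copy the records from the delimited table into the Parquet table.
spark.sql("insert into table tran_snappy select * from transaction")
```

In the SparkSession API, the DataFrame-side equivalent of `insertInto` lives on the writer: `spark.table("transaction").write.insertInto("tran_snappy")`.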
