1,093 Student Reviews
Objective – As a Technical Lead and Architect with 18.5 years of experience, I am passionate about leveraging my expertise in Databricks, Data Build Tool (dbt), Spark, Confluent Kafka, Data Lake, Lakehouse and cloud solutions to drive innovation and efficiency. With a solid background in data architecture and IT infrastructure, I aim to contribute to the growth and vision of your company by providing robust technical solutions that align with strategic business goals. My goal is to enhance data-driven decision-making processes, optimize big data pipelines and implement secure, scalable cloud architectures that propel the organization forward in the ever-evolving technical landscape.

Certifications & Achievements –
1. Confluent Certified Administrator for Apache Kafka: expires July 2026.
2. Databricks Certified Data Engineer Professional: expires Aug 2026.
3. Databricks Academy Accreditation – Lakehouse Fundamentals: expires June 2025.
4. Microsoft Certified: Azure Administrator Associate (AZ-104); certification ID 1100039942; expires August 3, 2025 (Credentials: AmitRaj-8869 | Microsoft Learn).
5. Microsoft Certified: Azure Security Engineer Associate (AZ-500); certification ID 1100039942; expires August 6, 2025 (Credentials: AmitRaj-8869 | Microsoft Learn).
6. Recognition certificate from Fidelity for designing global solutions for Data Exchange.
7. Achievement medal from DIB (client), with appreciation for designing an event-based enterprise architecture and contribution (Event Hub).

SUMMARY
• 18.5+ years of overall experience in application design, development and deployment of Hadoop ecosystem/Java/J2EE systems, with good exposure to enterprise architecture.
• 9.2 years of relevant experience in Big Data technologies, working with multiple clients and domains.
• Experienced in Cassandra data modelling, cluster setup and data management.
• Experienced in working with Spark SQL, Spark Structured Streaming and MLlib to process and analyse data.
• Experienced in designing solutions using Spark Streaming and Kafka Streams for payment gateway/point-of-sale events.
• Individual contribution (Kafka Architect): delivered the UAT and PROD Kafka clusters within the timeline using Cloudera 6.x and CSP 2.0.
• Implemented a unified data platform to gather data from different sources using Kafka producers and consumers in Scala and Java.
• Solid background in object-oriented analysis and design, UML and various design patterns.
• Worked with Azure cloud (Blob, Event Hubs), Kubernetes and Docker alongside Spark, Scala, Schema Registry and Avro schemas on a home-security application for Honeywell.
• Implemented KSQL, KTable and KStream using Confluent Kafka, along with Kafka Connect.
• Hands-on Databricks: Databricks clusters, Data Lakehouse, Delta Lake, DBFS; explore, analyze, clean, transform and load data using Databricks.
• Experience with Azure: Azure Synapse Analytics, ADLS, ADF, Cosmos DB, Azure Functions, Stream Analytics, Power BI.
• Experience with SQL and NoSQL databases including MySQL, Oracle, Cassandra, PostgreSQL and Bigtable.
• Experience building and optimizing big data pipelines.
• Experience with Azure DevOps, CI/CD pipelines, Kubernetes and Docker.
• Motivated Technical Architect with 5 years of progressive experience.
• Experience with AWS (EC2, S3).
• Experience with Snowflake: designed a data lake and loaded data from multiple sources into the Snowflake database.
• Effectively manages assignments and team members.
• Dedicated to self-development to provide expectation-exceeding service. Customer-focused, successfully contributing to company profits by improving team efficiency and productivity.
• Utilizes excellent organizational skills to enhance efficiency and lead teams to outstanding delivery.

SKILLS
Database architecture, database architecture development, data architecture, Big Data, ETL, technical solution development, Azure data solutions, data insight provision, technical guidance, IT architecture, technical solutions, big data frameworks.
Technical skills: Hortonworks 2.5, Cloudera 5/6, Apache Hadoop 2/3, Spark 2/3, Apache Kafka, Confluent Kafka, Hive 2/3, Impala, Sqoop, Oozie, ZooKeeper, Snowflake, Data Build Tool (dbt), HBase, Apache Cassandra/DataStax Cassandra, Databricks, Azure Cloud, AWS Cloud, Talend, Airflow, etc.
Programming languages: Python, Scala & Java.
Other tools: Kibana, Logstash, Elasticsearch (ELK).

PROJECT UNDERTAKEN
Project: Implementation of a Data Warehouse and reporting platform
Role: Databricks Architect & Engineer
Team: 12 members
Technical skills: Azure Cloud, Azure Data Factory (ADF), ADLS, Databricks, Spark 3.x, Python, Scala 2.15, DB2, Oracle 12c, Azure SQL

My Contribution – Databricks Infrastructure Solution:
- Configured unified data access control using Unity Catalog for the E1 & BY systems: granted specific sets of permissions (e.g., read-only or write-only) to specific groups of users on one or more Delta tables, down to the row or column level for data that can contain personally identifiable information (PII). A minimal sketch of this kind of grant appears after the infrastructure section below.
- Provided centralized data governance for TAI: administering access to the data and auditing that access.
- Applied data lineage for the E1 & BY tables and their lookup tables using Unity Catalog.
- Implemented a data-sharing protocol for secure downstream data sharing using Unity Catalog.
- Designed the Unity Catalog architecture so it can be linked to multiple Databricks workspaces across the DEV, UAT and PROD environments.
- Created the metastore for Unity Catalog.
- Applied user management in Unity Catalog for the TAI Lakehouse project: users, groups and service principals, and the permissions they hold.
- Configured Databricks clusters with Spark 3.x for DEV, UAT & PROD for the TAI E1 & BY systems.
- Designed and applied the medallion architecture: set up a Data Lakehouse with Bronze, Silver and Gold storage layers using Azure Data Lake Gen2.

Azure Cloud Infra and Security:
- Installed the self-hosted integration runtime for DB2 on DEV, UAT & PROD, and for the Oracle on-prem cluster on the source system.
- Installed the Azure Virtual Network managed IR on DEV, UAT & PROD.
- Installed the Db2 connector on DEV, UAT & PROD.
- Created the linked services lnk_BY_Azure_SQL, lnk_E1_Azure_SQL and lnk_Db2_E1.
- Installed and configured Azure Key Vault; added all credentials for Azure SQL, ADLS, Databricks, users, global users and linked services to Key Vault on DEV, UAT & PROD (see the sketch after this list).
- Created a 3-node DEV cluster and a 5-node PROD cluster to migrate data.
- Set up and configured Azure Active Directory to provide team access policies for the Databricks cluster, Azure Data Factory, Azure SQL and the Azure Data Lakehouse.
- Coordinated with the TAI client and the Microsoft support team to resolve throughput issues.
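The Unity Catalog grants and Key Vault-backed credentials above can be illustrated with a short PySpark sketch. This is a minimal sketch, not the project's code: the catalog, schema, table, group, secret-scope and host names are all hypothetical, and it assumes a Databricks notebook (where `spark` and `dbutils` are predefined) with Unity Catalog and a Key Vault-backed secret scope already configured.

```python
# Minimal sketch: Unity Catalog grants plus Key Vault-backed secrets on
# Databricks. All names (tai_catalog, e1_silver, analysts, kv-scope, the
# DB2 host) are hypothetical.

# Grant a group read-only access to a Delta table governed by Unity Catalog.
spark.sql("GRANT USE CATALOG ON CATALOG tai_catalog TO `analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA tai_catalog.e1_silver TO `analysts`")
spark.sql("GRANT SELECT ON TABLE tai_catalog.e1_silver.orders TO `analysts`")

# Read a DB2 credential from an Azure Key Vault-backed secret scope rather
# than hard-coding it in the notebook.
db2_user = dbutils.secrets.get(scope="kv-scope", key="db2-username")
db2_pwd = dbutils.secrets.get(scope="kv-scope", key="db2-password")

df = (spark.read.format("jdbc")
      .option("url", "jdbc:db2://db2-host:50000/E1DB")  # hypothetical source
      .option("driver", "com.ibm.db2.jcc.DB2Driver")    # needs the DB2 JDBC jar
      .option("user", db2_user)
      .option("password", db2_pwd)
      .option("dbtable", "E1SCHEMA.ORDERS")
      .load())
```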
As Azure & Databricks Data Engineer:
- Developed the most critical data ingestion pipelines using Azure Data Factory (ADF) for E1, migrating 12.8 TB across 120 tables from Db2 to the ADLS RAW zone as Parquet files. Many of the large tables held 2-4 TB of data, containing 400 to 800 million records.
- Built initial and incremental migration pipelines for both the E1 and BY sources, with a watermark based on Julian date and time.
- Designed an Audit table (process log) and a Control table (system) to drive dynamic pipelines and capture audit information for the master and child pipelines.
- Designed the architecture to handle deletes for PKSNAPSHOT (E1 & BY).
- Built a dynamic delete pipeline using ADF (loading PKTBL) and Databricks PySpark on daily, weekly, on-demand and yearly frequencies to delete records from the target (the Analytics/Gold layer) based on the source system's delete column and delete table.
- Built transformations using Databricks Spark with Scala for E1, applying lookup tables and transforming data into the Silver layer.
- Built transformations into the Analytics (Gold) layer using Databricks Spark & Scala.
- Implemented UPSERT using Spark Structured Streaming with a 5-minute trigger on the Analytics layer (see the sketch after this section).
- Designed the pipeline architecture for master and child pipelines with distinct activity IDs, pipeline IDs, master pipeline IDs and run IDs to keep the audit trail clean across handovers.
- Built logic in PySpark on Databricks, applied on DEV, UAT & PROD, to check whether a master pipeline is still IN PROGRESS so that pipeline executions do not overlap.
- Passed pipeline parameters to insert or update the Audit/Control tables using Databricks PySpark.
- Monitored performance in DEV & PROD and worked with the team to reduce run times.
- Milestone: achieved a 10-minute SLA for the incremental load on E1 & BY (end-to-end completion time).
- Milestone: loaded 400 million records (2.3 TB) into RAW as Parquet in 1:53:45 using an ADF pipeline.
- Interacted with the Azure DevOps engineer to build a CI/CD pipeline for DEV, UAT & PROD.
- Developed a pipeline as a POC using Databricks Workflows, compared its cost with Azure Pipelines, and presented the comparison to the client.
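The streaming UPSERT described above is a standard Delta Lake pattern: a Structured Streaming job with a 5-minute trigger that merges each micro-batch into the Gold table. The sketch below is an illustration of that pattern, not the project's code; the ADLS paths and the `business_key` join column are hypothetical, and the delta-spark package is assumed.

```python
# Minimal sketch of a streaming UPSERT into a Delta (Gold) table; paths and
# the business_key column are hypothetical.
from delta.tables import DeltaTable

gold = DeltaTable.forPath(spark, "abfss://gold@lake.dfs.core.windows.net/e1/orders")

def upsert_batch(batch_df, batch_id):
    # MERGE the micro-batch: update rows whose key already exists, insert the rest.
    (gold.alias("t")
         .merge(batch_df.alias("s"), "t.business_key = s.business_key")
         .whenMatchedUpdateAll()
         .whenNotMatchedInsertAll()
         .execute())

(spark.readStream.format("delta")
      .load("abfss://silver@lake.dfs.core.windows.net/e1/orders")  # Silver source
      .writeStream
      .foreachBatch(upsert_batch)
      .option("checkpointLocation", "abfss://gold@lake.dfs.core.windows.net/_chk/orders")
      .trigger(processingTime="5 minutes")  # the 5-minute cadence mentioned above
      .start())
```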
My Contribution to a Past Project:
Project: Data Exchange (Security Framework)
Role: Technical Lead & Architect – Confluent KStream & KSQL
Clients: Fidelity & Westpac
Team: 9 members
Technical skills: Azure DevOps, JDK 19.0, Confluent Kafka, KStream, KSQL, Azure Databricks, DBFS, Delta Lake, Azure Data Factory, ADLS Gen2, Confluent Schema Registry, AES algorithm, hash algorithm, Kubernetes cluster (AKS).
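The Data Exchange stack above pairs Confluent Kafka with AES and hash algorithms. As a rough illustration of how a security framework can combine them, and not a description of the project's actual design, the sketch below hashes a payload for integrity, encrypts it, and publishes it to Kafka; the broker, topic and key handling are hypothetical, and the confluent-kafka and cryptography packages are assumed.

```python
# Rough encrypt-then-publish illustration; NOT the project's actual design.
# Broker, topic and key handling are hypothetical.
import hashlib

from confluent_kafka import Producer
from cryptography.fernet import Fernet  # AES-128-CBC plus HMAC under the hood

key = Fernet.generate_key()  # in practice the key would come from a key vault
fernet = Fernet(key)

payload = b'{"account": "12345", "amount": 99.50}'
digest = hashlib.sha256(payload).hexdigest()  # integrity hash of the plaintext
ciphertext = fernet.encrypt(payload)          # encrypted payload

producer = Producer({"bootstrap.servers": "broker:9092"})
producer.produce("data-exchange.secure", value=ciphertext,
                 headers={"sha256": digest})  # consumers verify after decrypting
producer.flush()
```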
May 2006 - Oct 2008: Java Trainer
Nov 2008 - till date: Software Analyst Programmer
I teach Angular, React, JavaScript, Node.js and MongoDB. I have 10 years of teaching experience and 14 years of working experience as an application developer. I also provide placement assistance. If you follow my classes, you will get a job easily.
Trainer with 13+ years of IT experience; training continues until you have the knowledge. For details: Hemanth, 7760033212.
I am a data scientist with 5 years of experience in SAS and Python across Banking, Retail, and Media & Sports Analytics. I can take classes online at your convenience.
I work at a startup as a Technical Specialist. Since it is a startup, I have explored a lot, especially on the database and Linux side. I have real-time exposure to a live DB project and to Linux, and have handled DB administrator work during my live projects. I have wide exposure to these concepts and would definitely like to help others grow in the same field, so that we can achieve a lot together and help others advance in their careers.
Hi, I am an experienced big data professional passionate about teaching big data and Java programming. I have experience teaching professionals big data, Hadoop and other big data ecosystem technologies. I am a certified big data professional and like to teach professionals and students. Thanks.
I have 7 years of experience teaching students: Mobile Application Development training, JavaScript training classes, and Angular.JS training.
I have 3+ years of experience in Big Data and provide training in all the modules of Hadoop, including HDFS, MapReduce, Hive, Pig, Sqoop, Flume, HBase, Hue, Oozie, Impala, etc.
I have 5 years of experience in big data technologies: Hadoop, Hive, Sqoop, Oozie, Kafka and Spark. I have worked in real time with data analytics tools. I am also currently working with Python as well as machine learning platforms in data science.
Algorithmica, founded in 2008, is a world class corporate training company that focuses on improving and expanding the engineering skills of developers and on enhancing the quality of the software they develop. Algorithmica supplies wall-to-wall solutions that include training, consulting and development services (on- and off-site).
INN QUEST IT is an established development & outsourcing company delivering development services of any complexity to clients worldwide. Having been in the IT business for over 6 years, IQIT has a strong team of 370 skilled, experienced IT experts. Our customers are companies of all sizes, ranging from start-ups to large enterprises, who realize that they need a professional solution to generate revenue streams, establish communication channels or streamline business operations.
9+ years of experience in IT across several roles and responsibilities, such as Data Quality, Data Analyst and Text Mining Analyst. Managed teams and worked with clients on requirements gathering and on fixing bugs/errors in reports. Certifications: Tableau 8.0 Certified (Developer and Admin); MapR Certified Big Data Analyst. Training experience: trained more than 400 participants on Tableau Development/Admin, PL/SQL and Hadoop (Data Analyst). Corporates served: Flipkart.com, Manthan Systems, Infogix, Inc. Online training for various institutions across the globe: 1. CDRP Technologies 2. Onlinetrainings 3. Monstercourses 4. Nxgprs
Kelly Technologies has been established with the primary objective of offering superior IT training services and support for different business organizations. Kelly Technologies provides integrated IT training services and the complete range of IT consulting support to cater to the requirements of both individual learners and corporate clients. Kelly Technologies, the best online IT training and consultancy, are thought-leaders in the domain of professional IT training services, implementing world-class infrastructure and technology. As a state-of-the-art IT training service enabler, Kelly Technologies possesses deep domain expertise and professional acumen among its key management team and training consultants.
Jigsaw Academy is the brainchild of Gaurav Vohra and Sarita Digumarti, who together have over 20 years of experience in the analytics industry. Having worked in varied industries across multiple roles, they brought their expertise and experience together to form Jigsaw. They passionately believe that the future of analytics is poised to soar and that Jigsaw Academy will play a crucial role in identifying and nurturing analytics talent worldwide. They also firmly believe that online learning is changing the face of education, and they are proud that Jigsaw’s virtual platform is a part of this revolution. Jigsaw has firmly established itself as a center for quality analytics training. Jigsaw Academy won the Brands Academy Award 2013 for ‘Best Upcoming Academy for Analytics Courses in Bangalore’. It is also featured as one of the top analytics training institutes in India by AIM magazine. Co-founder of Jigsaw Academy Gaurav Vohra says, “We strive to deliver quality analytics training to each and every one of our students and do all we can to make the Jigsaw experience for them unique and valuable”. We have a range of courses, from beginner to advanced, in Data Analytics and Big Data. Some of our most popular courses are: 1. Analytics for Beginners: learn a step-by-step approach to analysis using a case study from the game of cricket. 2. Foundation Course in Analytics: start your career in analytics and learn the analytical skills most valued in the workplace. 3. Data Scientist Certification: this course will help you learn Big Data analytics using R and Hadoop. 4. HR Analytics Course: learn HR analytics techniques to make data-driven HR decisions. 5. Web Analytics: learn how to drive growth for online businesses by becoming a data-driven decision maker. 6. Big Data Specialist: earn a globally recognized certificate from Jigsaw Academy and Wiley in Big Data tools and technologies. We have taught more than 10,000 students since our inception. Jigsaw Academy offers analytics training through an online medium. You can attend the classes from your home, office or any place suitable to you. Learn how it works.
Located in Bangalore, the IT capital of India, our goal is to provide pure hands-on training that meets industry expectations. The training is suitable for professionals who are new to the industry as well as experienced professionals who want to move their careers into data science. We ensure that professionals attending the training are empowered with recent industry trends related to Big Data and Hadoop and become ready to contribute their innovation to the industry. Our trainers possess more than 10 years of industry experience, come from product development backgrounds, are architects on Big Data projects and are well versed in industry trends. So if you are looking for training that mixes adequate theory with rigorous hands-on project experience, we welcome you to attend our training. We do not permit more than 20 participants in a batch, in order to give significant focus to each participant.
Browse hundreds of experienced Big Data tutors across Bangalore. Compare profiles, teaching styles, reviews, and class timings to find the one that fits your goals, whether it's Apache Spark, Hadoop, or Scala.
Select your preferred tutor and book a free demo session. Experience their teaching style, ask questions, and understand the class flow before you commit.
Once you're satisfied, make the payment securely through UrbanPro and start your Big Data journey! Learn at your own pace, online or in person, and track your progress easily.
Find the best Big Data Training tutors
You can browse the list of the best Big Data tutors on UrbanPro.com. You can even book a free demo class to decide which tutor to start classes with.
The fee charged varies between online and offline classes. Generally, you get the best quality at the lowest cost in online classes, as the best tutors often prefer not to travel to the student's location.
It definitely helps to join Big Data Training near you in Bagmane Tech Park, Bangalore, as a teacher gives you the motivation you need to learn. If you need personal attention and your budget allows it, select a 1-1 class. If you want peer interaction or have budget constraints, select a group class.
UrbanPro has a list of the best Big Data Training tutors.
If you know Java, that's an advantage for moving to Hadoop: in Hadoop you can also code in Java, and Hadoop itself is developed in Java.
LinkedIn leverages big data in various ways to enhance its platform and services. Here are a few examples: 1....
1. Idea/Concept: Apache Hadoop: Hadoop is based on the MapReduce programming model, which involves...
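The answer is cut off, but the MapReduce model it names is easy to sketch: a map step emits key/value pairs and a reduce step aggregates the values for each key. Below is a minimal word-count mapper/reducer pair that can run under Hadoop Streaming (Hadoop's native MapReduce API is Java; Streaming lets any executable that reads stdin and writes stdout act as a mapper or reducer). This is a sketch added for illustration, not part of the original answer.

```python
# mapper.py - emits (word, 1) for every word on stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
# reducer.py - sums counts per word; Hadoop sorts mapper output by key
# before it reaches the reducer, which this running total relies on.
import sys

current, total = None, 0
for line in sys.stdin:
    word, count = line.rsplit("\t", 1)
    if word != current:
        if current is not None:
            print(f"{current}\t{total}")
        current, total = word, 0
    total += int(count)
if current is not None:
    print(f"{current}\t{total}")
```

A typical run passes both scripts to the hadoop-streaming jar with the -files, -mapper and -reducer options, plus HDFS -input and -output paths.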
Big data primarily refers to data sets that are too large or complex to be dealt with by traditional data-processing application software.
Hi Sweta, you can learn it at home through various books and materials available online. You can also...
Software developed and manufactured by Microsoft Corporation that allows users to organize, format, and calculate data with formulas using a spreadsheet...
HDFS commands: most HDFS commands interact with the NameNode to show results; some commands, like cat and tail, also interact with the DataNodes to read the actual file data. HDFS...
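As a small illustration of those commands (a sketch that assumes an `hdfs` client on the PATH, a running cluster, and hypothetical paths), the same shell commands can be driven from Python:

```python
# Sketch: common HDFS shell commands from Python. -ls is answered from
# NameNode metadata; -cat and -tail also stream block data from DataNodes.
import subprocess

subprocess.run(["hdfs", "dfs", "-ls", "/data"], check=True)              # list a directory
subprocess.run(["hdfs", "dfs", "-cat", "/data/events.log"], check=True)  # read a whole file
subprocess.run(["hdfs", "dfs", "-tail", "/data/events.log"], check=True) # last 1 KB of the file
```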
Microsoft Word is a widely used commercial word processor designed by Microsoft. Microsoft Word is a component of the Microsoft Office suite of productivity...
Apache Spark is the most popular open source product today to work with Big Data. More and more Big Data developers are using Spark to generate solutions...
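As a small taste of what working with Spark looks like, here is a minimal PySpark sketch (it assumes pyspark is installed; the file name and columns are hypothetical):

```python
# Minimal PySpark sketch: aggregate a CSV with the DataFrame API.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("demo").getOrCreate()

df = spark.read.csv("sales.csv", header=True, inferSchema=True)  # hypothetical file
(df.groupBy("region")
   .agg(F.sum("amount").alias("total"))
   .orderBy(F.desc("total"))
   .show())

spark.stop()
```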
Dynamic HyperText Markup Language (DHTML) is a combination of Web development technologies used to create dynamically changing websites. Web pages may include...