I have completed training from IOT Analytix Corp. It was an awesome experience for me. They conducted mock interviews after my training was completed.
Objectives – With 18.5 years of experience as a Technical Lead and Architect, I am passionate about leveraging my expertise in Databricks, Data Build Tool, Spark, Confluent Kafka, Data Lake, Lakehouse and cloud solutions to drive innovation and efficiency. With a solid background in data architecture and IT infrastructure, I aim to contribute to the growth and vision of your company by providing robust technical solutions that align with strategic business goals. My goal is to enhance data-driven decision-making processes, optimize big data pipelines, and implement secure, scalable cloud architectures that propel the organization forward in the ever-evolving technical landscape.

Certification & Achievements –
1. Confluent Certified Administrator for Apache Kafka: expires July 2026. Confluent Certified Administrator for Apache Kafka • Amit Raj • Confluent
2. Databricks Certified Data Engineer Professional: expires Aug 2026. Databricks Certified Data Engineer Professional • Amit Raj • Databricks Badges
3. Databricks Accredited Lakehouse Fundamentals: expires June 2025. Academy Accreditation - Databricks Lakehouse Fundamentals • Amit Raj • Databricks Badges
4. Microsoft Certified: Azure Administrator Associate (AZ-104): Microsoft certification ID 1100039942, expires August 3, 2025. Credentials - AmitRaj-8869 | Microsoft Learn
5. Microsoft Certified: Azure Security Engineer Associate (AZ-500): Microsoft certification ID 1100039942, expires August 6, 2025. Credentials - AmitRaj-8869 | Microsoft Learn
6. Recognition certificate from Fidelity for designing global solutions for data exchange.
7. Achievement medal from DIB (client) with appreciation for designing event-based enterprise architecture & contribution – EventHub

SUMMARY
• 18.5+ years of overall experience in application design, development & deployment of Hadoop ecosystem/Java/J2EE systems, with good exposure to enterprise architecture.
• 9.2 years of relevant experience in big data technologies, working with multiple clients and domains.
• Experienced in Cassandra data modelling, cluster setup and data management.
• Experienced in working with Spark SQL, Spark Structured Streaming and MLlib to process and analyse data.
• Experienced in designing solutions using Spark Streaming and Kafka Streams for payment gateway/point-of-sale events.
• Individual contribution (Kafka Architect): delivered UAT and PROD Kafka clusters within the timeline using Cloudera 6.x and CSP 2.0.
• Implemented a unified data platform to gather data from different sources using Kafka producers and consumers in Scala and Java.
• Solid background in object-oriented analysis & design, UML and various design patterns.
• Worked with Azure cloud (Blob, Event Hubs), Kubernetes and Docker, alongside Spark, Scala, Schema Registry and Avro schemas, on a home security application for Honeywell.
• Implemented KSQL, KTable and KStream using Confluent Kafka along with Kafka Connect.
• Hands-on with Databricks: Databricks clusters, Data Lakehouse, Delta Lake, DBFS; explore, analyse, clean, transform and load data using Databricks.
• Experience with Azure: Azure Synapse Analytics, ADLS, ADF, Cosmos DB, Azure Functions, Stream Analytics, Power BI.
• Experience with SQL and NoSQL databases including MySQL, Oracle, Cassandra, PostgreSQL and Bigtable.
• Experience building and optimizing big data pipelines.
• Experience with Azure DevOps, CI/CD pipelines, Kubernetes and Docker.
• Motivated Technical Architect with 5 years of progressive experience.
• Experience with AWS (EC2, S3).
• Experience with Snowflake: designed a data lake and loaded data from multiple sources into the Snowflake database.
• Effectively manages assignments and team members.
• Dedicated to self-development to provide expectation-exceeding service. Customer-focused, successfully contributing to company profits by improving team efficiency and productivity.
• Utilizes excellent organizational skills to enhance efficiency and lead teams to achieve outstanding delivery.

SKILLS
Database architecture, database architecture development, data architecture, big data, ETL, technical solution development, Azure data solutions, data insight provision, technical guidance, IT architecture, technical solutions, big data frameworks.
Technical Skills: Hortonworks 2.5, Cloudera 5/6, Apache Hadoop 2/3, Spark 2/3, Apache Kafka, Confluent Kafka, Hive 2/3, Impala, Sqoop, Oozie, ZooKeeper, Snowflake, Data Build Tool (dbt), HBase, Apache Cassandra/DataStax Cassandra, Databricks, Azure cloud, AWS cloud, Talend, Airflow, etc.
Programming Languages: Python, Scala & Java
Other Tools: Kibana, Logstash, Elasticsearch (ELK).

PROJECT UNDERTAKEN:
Project: Implementation of a data warehouse and reporting platform
Role: Databricks Architect & Engineer
Team: 12 members
Technical Skills: Azure cloud, Azure Data Factory (ADF), ADLS, Databricks, Spark 3.x, Python, Scala 2.15, DB2, Oracle 12g, Azure SQL

My Contribution
Databricks Infrastructure Solution:
- Configured unified data access control using Unity Catalog for the E1 & BY systems: granted specific permissions (e.g. read-only or write-only) to specific user groups on selected Delta tables, down to row or column level for columns containing personally identifiable information (PII).
- Provided data governance from a centralized place (TAI): administer and audit access to the data.
- Applied data lineage for E1 & BY tables with lookup tables using Unity Catalog.
- Implemented a data-sharing protocol for secure downstream data sharing using Unity Catalog.
- Designed the Unity Catalog architecture so it can be linked to multiple Databricks workspaces across the DEV, UAT and PROD environments.
- Created the metastore for Unity Catalog.
- Applied user management in Unity Catalog for the TAI Lakehouse project: users, groups and service principals, and the permissions they hold.
- Configured Databricks clusters with Spark 3.x for DEV, UAT & PROD for the TAI E1 & BY systems.
- Designed & applied the medallion architecture: set up a data lakehouse with Bronze, Silver and Gold storage layers using Azure Data Lake Gen2.

Azure Cloud Infra and Security:
- Installed self-hosted integration runtimes for DB2 on DEV, UAT & PROD, and for the Oracle on-prem cluster on the source system.
- Installed the Azure virtual network managed IR on DEV, UAT & PROD.
- Installed the DB2 connector on DEV, UAT & PROD.
- Created linked services lnk_BY_Azure_SQL, lnk_E1_Azure_SQL, lnk_Db2_E1.
- Installed and configured Azure Key Vault; added all credentials for Azure SQL, ADLS, Databricks, users, global users and linked services to Key Vault on DEV, UAT & PROD.
- Created 3 nodes for the DEV cluster and 5 nodes for the PROD cluster to migrate data.
- Set up and configured Azure Active Directory to provide team access policies for the Databricks cluster, Azure Data Factory, Azure SQL and the Azure data lakehouse.
- Coordinated with the TAI client and the Microsoft support team to resolve throughput issues.

As Azure & Databricks Data Engineer:
- Developed the most critical data ingestion pipelines using Azure Data Factory (ADF) for E1 to migrate 12.8 TB across 120 tables from DB2 to ADLS RAW as Parquet files. Many large tables held 2-4 TB of data containing 400 to 800 million records.
- Built initial & incremental migration pipelines for both the E1 and BY sources, with a watermark based on Julian date & time.
- Designed an Audit table (process log) and a Control table (system) to achieve dynamic pipelines and audit information for master and child pipelines.
- Designed an architecture solution to achieve deletes for PKSNAPSHOT (E1 & BY).
- Built a dynamic delete pipeline using ADF (loading PKTBL) and Databricks PySpark at Daily, Weekly, On-Demand and Yearly frequencies, to delete records from the target (Analytics/Gold layer) based on the source system's delete column and delete table.
- Built transformations using Databricks Spark with Scala for E1: applied transformations with lookup tables into the Silver layer.
- Built transformations into the Analytics layer (Gold) using Databricks Spark & Scala.
- Implemented UPSERT using Spark Structured Streaming with a 5-minute trigger on the Analytics layer.
- Designed the pipeline architecture for master and child pipelines with distinct activity IDs, pipeline IDs, master pipeline IDs and run IDs to ensure a clean audit trail.
- Built logic in PySpark on Databricks (applied on DEV, UAT & PROD) to check whether a master pipeline is IN PROGRESS, so that pipeline executions do not overlap.
- Passed pipeline parameters to insert or update the Audit/Control tables using Databricks PySpark.
- Monitored performance in DEV & PROD and worked with the team to reduce run time.
- Milestone: achieved a 10-minute SLA for the incremental load on E1 & BY (end-to-end completion time).
- Milestone: achieved 1:53:45 hrs to load 400 million records (2.3 TB) into RAW as Parquet files using the ADF pipeline.
- Interacted with the Azure DevOps engineer to build a CI/CD pipeline for DEV, UAT & PROD.
- Developed a pipeline as a POC using Databricks Workflows, compared its cost with Azure Pipelines, and presented the results to the client.

My Contribution to Past Project:
Project: Data Exchange (Security Framework)
Role: Technical Lead & Architect – Confluent KStream & KSQL
Client: Fidelity & Westpac
Team: 9 members
Technical Skills: Azure DevOps, JDK 19.0, Confluent Kafka, KStream, KSQL, Azure Databricks, DBFS, Delta Lake, Azure Data Factory, ADLS Gen2, Confluent Schema Registry, AES algorithm, hash algorithm, Kubernetes cluster (AKS).
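The initial/incremental watermark pattern described above can be sketched in plain Python. This is a minimal, hypothetical illustration only: the real pipelines used ADF and Databricks, and the table and column names here (`E1.ORDERS`, `updated_at`) are invented for the sketch, not the actual E1/BY schema.

```python
# Hypothetical sketch of a watermark-based incremental load: a control table
# stores the last-loaded watermark per source table, and each run picks up
# only rows newer than that watermark, then advances it.

def incremental_load(rows, control, table):
    """Return rows newer than the stored watermark and advance the watermark."""
    wm = control.get(table, 0)  # 0 = never loaded, so the initial run takes everything
    new_rows = [r for r in rows if r["updated_at"] > wm]
    if new_rows:
        # Advance the watermark to the newest row we just loaded.
        control[table] = max(r["updated_at"] for r in new_rows)
    return new_rows

control = {}  # control table: source table name -> last watermark
src = [
    {"id": 1, "updated_at": 100},
    {"id": 2, "updated_at": 200},
]

first = incremental_load(src, control, "E1.ORDERS")   # initial load: both rows
src.append({"id": 3, "updated_at": 300})              # a new record arrives
second = incremental_load(src, control, "E1.ORDERS")  # incremental: only id 3
```

The same idea scales to the ADF/Databricks setup: the control table lives in Azure SQL, the filter becomes a watermark predicate in the copy-activity source query, and the watermark update is the last step of each child pipeline.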
I am an IT professional with 9 years of experience and passionate about teaching Big Data - Hadoop, Spark, and Scala. I am a passionate Big Data tutor who believes in practical teaching methods more than theoretical/academic coaching. I have 9 years of professional as well as teaching experience in Big Data, and my expertise includes Spark, Scala, Hadoop, Cassandra and Hive. I also have SQL and Tableau experience. By providing my students with hands-on experience in the technology, I always try my best to make them understand the concepts thoroughly with my unique method of teaching. The course consists of five important components: 1. Hadoop Ecosystem 2. Scala Basics 3. Spark in Detail 4. Small Projects 5. Interview Questions and Mock Interviews. Please attend a free demo class and then decide whether to join the course. Feel free to check the introduction video and call me for any clarifications.
We are a software company situated in Bangalore, India. We manufacture IoT products and provide solutions for big data. We use all the latest technologies in the market to provide sustainable solutions to our customers. We also provide training in the following technologies for corporates and experienced professionals: Big Data with Hadoop, Big Data with Spark, IoT using Arduino, IoT using Raspberry Pi, and kids' robotics. We also have a placement cell to support trained candidates in getting placed in software companies. The following are some of the advantages you get when you train with us: real-time projects, placement support, mock interviews, interview questions, classroom training, online training, 24/7 Wi-Fi with a technical library, IoT labs with IoT projects and components, and expert trainers.
Highly accomplished Big Data Tutor with over 12 years of experience in teaching, mentoring, and guiding students and professionals in data engineering, data analytics, and cloud-based big data solutions. Expert in designing and delivering comprehensive learning programs on Hadoop, Spark, Kafka, NoSQL databases, and modern data warehousing technologies. Skilled in simplifying complex concepts and fostering hands-on learning through real-world projects and case studies. Adept at leveraging cloud platforms such as AWS, Azure, and Google Cloud to teach scalable big data architectures and advanced analytics. Passionate about empowering learners with practical knowledge, industry best practices, and career-oriented skills to excel in the evolving data ecosystem.
Walsoul Private Limited provides Database Training classes, Big Data (Scala) Training, Big Data (Hadoop) Training, and Data Science Classes.
We at BBTH focus on training individuals from a testing/dev background, helping them understand the concepts of testing big data applications by first making them aware of how such applications work. We would be happy to collaborate and provide corporate training.
I am an expert looking for a trainer on big data, so let me know if anyone is available immediately, since this is a request from the client end.
Dbtechnosolutions offers optimized SQL Server database solutions for challenging business environments, and big data analytics using Apache Hadoop ecosystem tools such as MapReduce, Pig, Hive, HBase, Sqoop, Flume and Oozie. We provide industry-level corporate, in-person and online training in Big Data Hadoop and SQL Server technology. We also offer specialized SQL Server services, including SQL Server remote DBA support, consulting, health checks, troubleshooting and performance tuning, high availability & disaster recovery planning, and auditing. We have consulted for more than 10 customers and trained more than 100 professionals in Big Data Hadoop and SQL Server technology. We demonstrate in-depth expertise and offer world-class consulting services. We work remotely as well as travel extensively across the globe based on the requirement.
Evarcity - an online/offline platform for learning the latest courses like Hadoop, Spark and Scala, Data Science, R Programming, Machine Learning and Python. We have experienced trainers, alumni of IITs, IIMs, NITs, DTU (DCE) and BITS.
Provided training at the corporate level as well as for different institutes. I am working as a solution architect for Ericsson Ireland on a big data platform. Previously, I worked with Cisco on their big data platform. I have experience in both batch processing and real-time processing using Hadoop and Spark.
I can teach each and every topic very simply and build a strong foundation for students and professionals.
Hadoop architecture, basics, Linux, server, and networking knowledge will be shared.
I have 2 years of IT industry experience and am passionate about teaching topics like data structures, computer science fundamentals, an Android application crash course, Java programming, Hadoop installation and programming, C programming, computer networks, and data structures and algorithms - all computer science engineering subjects.
Best ServiceNow, Python and DevOps training. Expert trainers with 10+ years of experience. Our trainers have delivered multiple corporate trainings. Get industry-ready quickly with our experienced trainers. To add value, we have a huge library of archives, learning materials and cloud labs, plus a special setup for an integration lab and an ITOM lab for real-time project execution.
I can teach each and every topic very simply and build a strong foundation for students.
Informatica developer with big data technologies like Hadoop, Cassandra, Hive and HBase. Parsing structured and unstructured data; POC and case study implementation; designing data warehouse solutions; JSON parsing; data modelling; ELT solutions.
I am a software engineer at one of the top MNCs in Bangalore, India. I have more than 5 years of experience in the teaching field.
I have 5 years of experience in data visualization, data analytics and data mining, using d3.js, Raphael, Git, Mercurial, NVD3, C3, etc. I am passionate about playing around with data and getting the best out of it. I recently did an election analytics project for the Indian state election (Tamil Nadu): http://data-analytics.github.io/Election_Data/tamil_nadu.html
Hi, I am a Certified Talend Professional with more than 6 years of IT experience. I provide Talend training on Talend Open Studio, Talend Integration Suite and RDBMS.
I have worked as a Hadoop trainer and am a Cloudera Certified Hadoop Developer. I have 9+ years of total experience, including 3+ years in Hadoop, and can cover all concepts of the Hadoop ecosystem along with an introduction to Spark.
We conduct professional courses such as AWS Cloud, Java/Advanced Java, Frameworks (Spring, Hibernate, Web Services), Big Data & Hadoop, Artificial Intelligence, Blockchain, Python, Machine Learning, DevOps, PL/SQL, SQL, Projects, MEAN Stack technologies (AngularJS, React JS, Express JS, MongoDB, Node JS) and Android.

"Digital India Online Training Program": Save time, save money, and grow faster. As India moves toward Digital India, we started an online training program that works much better than offline. Live classrooms mean no time wasted on travel, no matter where you are, and no limitations; you can revise multiple times as recordings are available. Physical location (Kashmir to Kanyakumari) no longer matters; only talent matters. We serve society through this online training program in India. After this training you can pursue software developer opportunities not only in India but across the world. We provide professional, practical, programming-oriented training, so you can perform your best in your team with strong technical skills and real-time industry parameters. Our expert trainers have worked at top MNCs such as IBM, Accenture, TCS, Oracle, Wipro, HCL and GE for almost 8 years. With this rich industry experience, we started online corporate trainings. In these 9 years of training, we have trained more than 2,000 people so far, and more than 90% of trained students have been placed at various top MNCs.
I have 10 years of experience in Hadoop, Big Data, Spark, Tableau and Hive.
I have a total of 4.8 years of work experience in IT. My initial 2 years were in Mainframe-DB2; since then I have been working in Big Data - Hadoop.
We like to share our knowledge.
Browse hundreds of experienced Big Data tutors across Bangalore. Compare profiles, teaching styles, reviews and class timings to find the one that fits your goals, whether it's Apache Spark, Hadoop or Scala.
Select your preferred tutor and book a free demo session. Experience their teaching style, ask questions, and understand the class flow before you commit.
Once you're satisfied, make the payment securely through UrbanPro and start your Big Data journey! Learn at your own pace, online or in person, and track your progress easily.
Find the best Big Data Tutor Training
You can browse the list of the best Big Data tutors on UrbanPro.com. You can even book a free demo class to decide which tutor to start classes with.
The fee charged varies between online and offline classes. Generally, you get the best quality at the lowest cost in online classes, as the best tutors often prefer not to travel to the student's location.
It definitely helps to join Big Data Training near Mico Layout Police Station, Bangalore, as you get the desired motivation from a teacher to learn. If you need personal attention and your budget allows, select a 1-1 class. If you need peer interaction or have budget constraints, select a group class.
UrbanPro has a list of the best Big Data Training
Big data analytics is the process of collecting, examining, and analysing large amounts of data to discover...
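A minimal, hypothetical Python sketch of that collect-examine-analyse flow on toy data (the records, user IDs and action names below are invented for illustration; in a real big data setting the events would come from a distributed store such as HDFS and be processed with a framework like Spark):

```python
from collections import Counter

# Hypothetical sample records standing in for a large dataset:
# each record is (user_id, action).
records = [
    ("u1", "click"), ("u2", "view"), ("u1", "click"),
    ("u3", "purchase"), ("u2", "click"), ("u1", "view"),
]

# Collect: group the raw events by action type.
action_counts = Counter(action for _, action in records)

# Examine: look at the overall volume of events.
total = sum(action_counts.values())

# Analyse: derive a simple insight, the share of each action.
action_share = {a: round(c / total, 2) for a, c in action_counts.items()}

print(action_share)  # {'click': 0.5, 'view': 0.33, 'purchase': 0.17}
```

The same group-count-aggregate pattern scales up directly: in Spark, for example, the `Counter` step becomes a `groupBy`/`count` over a distributed DataFrame.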
Hello - There are many institutes that could promise you that they would give you these real-world scenarios...
You should learn both (the Dev part and the Admin part) to crack the interviews; only the Admin part may not...
Hi Shantanu - There will be a lot of opportunities across the IT industry in Hadoop Admin, as it is still in...
Hi Rajaram, you can learn data analytics and work in India. There are very good prospects in this field.
VBA code is stored and typed in the VBA Editor in what are called modules. As stated on the VBA Editor page, a collection of modules is what is called...
Microsoft Project contains project work and project groups, schedules and finances. Microsoft Project permits its users to set realistic goals for project...
R is a programming language and software environment for statistical computing and graphics supported by the R Foundation for Statistical Computing. The...
Macros are little programs that run within Excel and help automate common repetitive tasks. Macros are one of Excel's most powerful, yet underutilized...
Microsoft Access has been around for some time, yet people often still ask me what Microsoft Access is and what it does. Microsoft Access is a part...