2,327 Student Reviews
Objectives – With 18.5 years of experience as a Technical Lead and Architect, I am passionate about leveraging my expertise in Databricks, Data Build Tool (dbt), Spark, Confluent Kafka, Data Lake, Lakehouse and cloud solutions to drive innovation and efficiency. With a solid background in data architecture and IT infrastructure, I aim to contribute to the growth and vision of your company by providing robust technical solutions that align with strategic business goals. My goal is to enhance data-driven decision-making, optimize big data pipelines and implement secure, scalable cloud architectures that propel the organization forward in an ever-evolving technical landscape.
Certification & Achievements –
1. Confluent Certified Administrator for Apache Kafka: expires July 2026. (Confluent Certified Administrator for Apache Kafka • Amit Raj • Confluent)
2. Databricks Certified Data Engineer Professional: expires Aug 2026. (Databricks Certified Data Engineer Professional • Amit Raj • Databricks Badges)
3. Databricks Accredited Lakehouse Fundamentals: expires June 2025. (Academy Accreditation - Databricks Lakehouse Fundamentals • Amit Raj • Databricks Badges)
4. Microsoft Certified: Azure Administrator Associate (AZ-104): Microsoft certification ID 1100039942, expires August 3, 2025. (Credentials - AmitRaj-8869 | Microsoft Learn)
5. Microsoft Certified: Azure Security Engineer Associate (AZ-500): Microsoft certification ID 1100039942, expires August 6, 2025. (Credentials - AmitRaj-8869 | Microsoft Learn)
6. Recognition certificate from Fidelity for designing global solutions for data exchange.
7. Achievement medal from DIB (client) in appreciation for designing an event-based enterprise architecture with EventHub.
SUMMARY
• 18.5+ years of overall experience in application design, development and deployment of Hadoop ecosystem/Java/J2EE systems, with good exposure to enterprise architecture.
• 9.2 years of relevant experience in big data technologies, working with multiple clients and domains.
• Experienced in Cassandra data modelling, cluster setup and data management.
• Experienced in working with Spark SQL, Spark Structured Streaming and MLlib to process and analyse data.
• Experienced in designing solutions using Spark Streaming and Kafka Streams for payment gateway/point-of-sale events.
• Individual contribution (Kafka Architect): delivered UAT and PROD Kafka clusters within the timeline using Cloudera 6.x and CSP 2.0.
• Implemented a unified data platform to gather data from different sources using Kafka producers and consumers in Scala and Java.
• Solid background in object-oriented analysis and design, UML and various design patterns.
• Worked with Azure cloud (Blob, Event Hubs), Kubernetes and Docker alongside Spark, Scala, Schema Registry and Avro schemas on a home security application for Honeywell.
• Implemented KSQL, KTable and KStream using Confluent Kafka along with Kafka Connect.
• Hands-on with Databricks: clusters, Data Lakehouse, Delta Lake and DBFS; explore, analyze, clean, transform and load data using Databricks.
• Experience with Azure: Azure Synapse Analytics, ADLS, ADF, Cosmos DB, Azure Functions, Stream Analytics, Power BI.
• Experience with SQL and NoSQL databases including MySQL, Oracle, Cassandra, PostgreSQL and Bigtable.
• Experience building and optimizing big data pipelines.
• Experience with Azure DevOps, CI/CD pipelines, Kubernetes and Docker.
• Motivated Technical Architect with 5 years of progressive experience.
• Experience with AWS (EC2, S3).
• Experience with Snowflake: designed a data lake and loaded data from multiple sources into the Snowflake database.
• Effectively manages assignments and team members.
• Dedicated to self-development to provide expectation-exceeding service. Customer-focused, successfully contributing to company profits by improving team efficiency and productivity.
• Utilizes excellent organizational skills to enhance efficiency and lead teams to achieve outstanding delivery.
SKILLS
Database architecture • Data architecture • Big data ETL • Technical solution development • Azure data solutions • Data insight provision • Technical guidance • IT architecture • Big data frameworks
Technical Skills: Hortonworks 2.5, Cloudera 5/6, Apache Hadoop 2/3, Spark 2/3, Apache Kafka, Confluent Kafka, Hive 2/3, Impala, Sqoop, Oozie, ZooKeeper, Snowflake, Data Build Tool (dbt), HBase, Apache Cassandra/DataStax Cassandra, Databricks, Azure cloud, AWS cloud, Talend, Airflow, etc.
Programming Languages: Python, Scala & Java
Other Tools: Kibana, Logstash, Elasticsearch (ELK).
PROJECT UNDERTAKEN:
Project: Implementation of Data Warehouse and reporting platform
Role: Databricks Architect & Engineer
Team: 12 members
Technical Skills: Azure Cloud, Azure Data Factory (ADF), ADLS, Databricks, Spark 3.x, Python, Scala 2.15, DB2, Oracle 12g, Azure SQL
My Contribution
Databricks Infrastructure Solution:
- Configured unified data access control for the E1 & BY systems using Unity Catalog: granted specific permission sets (e.g. read-only or write-only) to specific user groups on selected Delta tables, down to the row or column level for columns containing personally identifiable information (PII).
- Provided centralized data governance for TAI: administering and auditing access to the data.
- Applied data lineage for E1 & BY tables and their look-up tables using Unity Catalog.
- Implemented a data sharing protocol for secure downstream data sharing using Unity Catalog.
- Designed the Unity Catalog architecture so it can be linked to multiple Databricks workspaces across the DEV, UAT and PROD environments.
- Created the metastore for Unity Catalog.
- Applied user management in Unity Catalog for the TAI Lakehouse project: users, groups and service principals, and the permissions each holds.
- Configured Databricks clusters with Spark 3.x for DEV, UAT & PROD for the TAI E1 & BY systems.
- Designed and applied the medallion architecture: set up a data lakehouse with Bronze, Silver and Gold storage layers on Azure Data Lake Storage Gen2.
Azure Cloud Infra and Security:
- Installed self-hosted integration runtimes for DB2 on DEV, UAT & PROD and for the Oracle on-prem cluster on the source system.
- Installed the Azure virtual network managed IR on DEV, UAT & PROD.
- Installed the DB2 connector on DEV, UAT & PROD.
- Created linked services lnk_BY_Azure_SQL, lnk_E1_Azure_SQL and lnk_Db2_E1.
- Installed and configured Azure Key Vault; added all credentials for Azure SQL, ADLS, Databricks, users, global users and linked services to Key Vault on DEV, UAT & PROD.
- Created a 3-node DEV cluster and a 5-node PROD cluster to migrate data.
- Set up and configured Azure Active Directory to provide team access policies for the Databricks cluster, Azure Data Factory, Azure SQL and the Azure data lakehouse.
- Coordinated with the TAI client and the Microsoft support team to resolve throughput issues.
As Azure & Databricks Data Engineer:
- Developed the most critical data ingestion pipelines using Azure Data Factory (ADF) for E1, migrating 12.8 TB across 120 tables from DB2 to the ADLS RAW zone as Parquet files; many of the large tables run 2-4 TB, containing 400 to 800 million records.
- Built initial and incremental migration pipelines for both the E1 and BY sources, with a watermark based on a Julian date-time column.
- Designed an audit table (process log) and a control table (system) to achieve dynamic pipelines and audit information for master and child pipelines.
- Designed the architecture to achieve deletes for PKSNAPSHOT in E1 & BY.
- Built a dynamic delete pipeline using ADF (loading PKTBL) and Databricks PySpark, on daily, weekly, on-demand and yearly frequencies, to delete records from the target (Analytics/Gold layer) based on the source system's delete column and delete table.
- Built transformations for E1 using Databricks Spark with Scala: applied look-up-table transformations into the Silver layer.
- Built transformations into the Analytics (Gold) layer using Databricks Spark & Scala.
- Implemented UPSERT using Spark Structured Streaming with a 5-minute trigger on the Analytics layer.
- Designed the pipeline architecture for master and child pipelines with distinct activity IDs, pipeline IDs, master pipeline IDs and run IDs to ensure a smooth audit trail.
- Built logic in PySpark on Databricks, applied on DEV, UAT & PROD, to check whether a master pipeline is IN PROGRESS so that pipeline executions do not overlap.
- Passed pipeline parameters to insert or update the audit/control tables using Databricks PySpark.
- Monitored performance in DEV & PROD and worked with the team to reduce run times.
- Milestone: achieved a 10-minute SLA for the incremental load on E1 & BY (end-to-end completion time).
- Milestone: achieved 1:53:45 hrs to load 400 million records (2.3 TB) into RAW as Parquet files using an ADF pipeline.
- Interacted with the Azure DevOps engineer to build a CI/CD pipeline for DEV, UAT & PROD.
- Developed a pipeline as a POC using Databricks Workflows, compared its cost with Azure Pipelines, and presented the results to the client.
My Contribution to Past Project:
Project: Data Exchange (Security Framework)
Role: Technical Lead & Architect – Confluent KStream & KSQL
Clients: Fidelity & Westpac
Team: 9 members
Technical Skills: Azure DevOps, JDK 19.0, Confluent Kafka, KStream, KSQL, Azure Databricks, DBFS, Delta Lake, Azure Data Factory, ADLS Gen2, Confluent Schema Registry, AES algorithm, hash algorithm, Kubernetes cluster (AKS).
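The watermark-based incremental load described in this project (a Julian date-time watermark tracked in a control table, advanced after each run) can be sketched in plain Python. The table name, column name and watermark values below are hypothetical, chosen only to illustrate the pattern; the real implementation lives in ADF pipelines and control tables.

```python
from dataclasses import dataclass

@dataclass
class ControlEntry:
    """One row of the (hypothetical) control table."""
    table: str       # source table name
    watermark: int   # last loaded Julian date-time value

def build_incremental_query(entry: ControlEntry, new_watermark: int) -> str:
    """Build the extraction query for one incremental run.

    Only rows whose Julian timestamp falls after the stored watermark
    and at or before the new one are pulled, so each run moves the
    window forward instead of re-reading history.
    """
    return (
        f"SELECT * FROM {entry.table} "
        f"WHERE JULIAN_TS > {entry.watermark} "
        f"AND JULIAN_TS <= {new_watermark}"
    )

def advance_watermark(entry: ControlEntry, new_watermark: int) -> ControlEntry:
    # Persisting the new watermark is what makes the next run incremental.
    return ControlEntry(entry.table, new_watermark)

entry = ControlEntry(table="E1_ORDERS", watermark=123100)
print(build_incremental_query(entry, 123101))
entry = advance_watermark(entry, 123101)
print(entry.watermark)  # → 123101
```

In the real pipeline the control-table read, the copy activity and the watermark update would each be ADF activities; the sketch just shows the bookkeeping that keeps initial and incremental loads consistent.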
I have an interest in teaching, and I would like to share my knowledge with anyone who is interested in learning, starting from the basics of coding.
I am an experienced data scientist specializing in Data Analysis, Machine Learning and Natural Language Processing, with more than 5 years of experience. I have built complete end-to-end machine learning systems using Natural Language Processing and MLOps, solving real-world problems with NLP and Transformers.
Qualifications:
✓ Coding skills in Python (scripting, Jupyter, Google Colab)
✓ Python data analytics libraries (scikit-learn, pandas, NumPy)
✓ Natural Language Processing tools and packages (gensim, spaCy, NLTK, Transformers)
✓ pandas, NumPy, SciPy, Seaborn and Matplotlib for data analysis and visualization
✓ Deep learning libraries and toolboxes (TensorFlow, Keras)
✓ Docker, GitHub
✓ Custom Named Entity Recognition
✓ Experience in text and document classification
✓ Topic modeling
✓ Sentiment analysis, aspect-based sentiment analysis
✓ Question answering systems
I look forward to working with you. Reach out, and let's get started!
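As a minimal illustration of the text-classification and sentiment-analysis work listed above, here is a tiny bag-of-words Naive Bayes sentiment classifier using only the Python standard library. The training sentences and labels are made up for the example; real projects would use scikit-learn, spaCy or Transformers rather than this hand-rolled sketch.

```python
from collections import Counter, defaultdict
import math

# Illustrative training data: (text, label) pairs, invented for the demo.
train = [
    ("great product loved it", "pos"),
    ("excellent service very happy", "pos"),
    ("terrible experience hated it", "neg"),
    ("awful quality very disappointed", "neg"),
]

word_counts = defaultdict(Counter)  # per-label word frequencies
label_counts = Counter()            # per-label document counts
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for c in word_counts.values() for w in c}

def predict(text: str) -> str:
    """Return the most likely label under a multinomial Naive Bayes model."""
    scores = {}
    for label in label_counts:
        # log prior + log likelihoods with add-one (Laplace) smoothing
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("loved the excellent service"))  # → pos
```

The same pipeline shape (tokenize, count, fit, predict) carries over directly to `sklearn.naive_bayes.MultinomialNB` with a `CountVectorizer`.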
I am a senior software engineer with a passion for teaching. During my 8 years of experience I have trained many engineers in multiple skills to accommodate them in projects. With that in mind, I would like to continue the same here as well.
1. VTECH INTEGRATED SOLUTIONS is a parallel education organization providing career-oriented mentoring to engineering graduates pan-India. This proposal is being sent to only a few select companies.
2. We act as a finishing school for students, providing focused, dedicated and personalized career-oriented mentoring to guide them towards the fulfillment of their desired aims in life. We excel in the IT education profession and have given a jump start to the professional life cycle of thousands of students. We treasure the trust of our students, their parents and their colleges/universities in Karnataka, Andhra Pradesh, Telangana, Tamil Nadu, Kerala, Maharashtra, Odisha and beyond.
3. Our Mission. Our mission is to motivate and transform logical and innovative young minds into agile executors and disruptive thinkers, capable of standing up to a rapidly evolving IT world order and excelling in their chosen domain.
4. In addition to an international syllabus for technical training, our services include personalized guidance in honing logical, computational and aptitude skills; building students' confidence through immersive workshops on group discussion, elocution, HR skills and personality development; and thought-provoking sessions on interview preparation.
5. Our Team. Our dedicated team of mentors, trainers and consultants is highly qualified and experienced. Our career advancement team and network of partners are always eager to turn out not merely "job needy" but "career ready" professionals as per industry and individual company QR.
6. Our Campus Connect (On-Campus Model). We work on a "Center of Excellence" model with our large network of associate colleges. This model permits additional preparation of a student from the first year of the degree until they obtain a career in a company. Students of our affiliated colleges are monitored in their degree subjects and continually given leadership training and skills for an all-round personality. This degree training is supplemented with additional technical training in programming languages and skills to meet the stringent parameters of the IT industry. We encourage open-ended exploration by providing a structured environment from the first year of the degree up to the final year, so that learners can use their core subject skills as well as technical IT skills progressively, in an authentic real-life context, for their chosen career.
7. Our Off-Campus Model. We also run a conventional off-campus training model. A large number of students join us through our incubation drives in colleges throughout the country and through referrals. They are trained, or can be trained, to industry- or company-specific requirements. These training parameters can be mutually discussed and students pre-selected before or after the desired technical training, or you can choose from our open pool of students continuously streaming out of our technical training batches after completing aptitude, language and life-skill sessions.
8. Our Professional Recruitment Model. We work on a "zero cost" recruitment model for our clients. To make your recruitment process easy, we will take you to the colleges (Campus Connect Model). You have two options: select students directly for a development job, or offer a Letter of Intent (LOI) to undertake additional technical training with us and join the job thereafter. Alternatively, we will take full ownership of presenting hundreds of trained and skilled candidates at our premises based on your business needs (our Off-Campus Model). We will coordinate all recruitment activities for any of your on/off-campus drives absolutely free of cost.
I am a Data Analytics Trainer at Prwatech, Bangalore, where I specialize in teaching Python and SQL. My role involves conducting both online and offline training sessions, guiding students through data manipulation, analysis, and visualization techniques. I have hands-on experience in NumPy, Pandas, SQL queries, and data transformation, helping learners build a strong foundation in data analytics and database management.
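A minimal, self-contained example of the kind of SQL exercise such a training session might cover, using Python's built-in sqlite3 module (the table and data are invented for the demo):

```python
import sqlite3

# In-memory database: a tiny sales table to practice GROUP BY aggregation.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("South", 100.0), ("South", 250.0), ("North", 80.0)],
)

# Total sales per region, highest first.
rows = con.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM sales GROUP BY region ORDER BY total DESC"
).fetchall()
print(rows)  # → [('South', 350.0), ('North', 80.0)]
```

The same aggregation maps one-to-one onto pandas (`df.groupby("region")["amount"].sum()`), which makes it a useful bridge exercise between the SQL and Python halves of a data analytics course.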
Learneaze is a technology education company, launched with the objective of providing an excellent learning experience in high-end technologies. At Learneaze we focus on helping people build expertise in cutting-edge technologies, and we believe in constantly innovating the way we deliver the learning experience. We have uniquely designed our offerings as modular packages to provide a customized experience. We currently offer training on the latest technologies in the following areas: + Cloud Computing + Data Science + DevOps + AI & Machine Learning + IoT.
Good day, marvelous students! I hope you’re all ready for an adventure-filled academic ride because I am thrilled to be your guide on this learning expedition. Please allow me to introduce myself. I am Madhoo, your friendly facilitator of knowledge and laughter. Picture me as your trusty sidekick, here to make learning enjoyable and accessible to all. From mind-bending math puzzles to captivating science experiments, we’ll conquer academic challenges together.
Electronics and Communication Engineering graduate with 5.5 years of industry experience at a product development company. As someone who believes in learning as an experience that helps us evolve at all levels, I would like to leverage my knowledge to help students with academics. My areas of interest are: 1. Chemistry (10th, 11th, 12th), 2. C++ programming with OOP concepts, 3. Python scripting language.
Largest training center in the world. 15.3% of software developers in the IT industry are our students. 3000+ multinational companies hire from us. 5-7 companies conduct interviews every working day. 800-1200 students attend interviews every day across companies through Jspiders. Each trainer is a certified industry expert with between 5 and 16 years of IT experience. We have trained 300,000+ students. Global presence: India, USA, UK.
Browse hundreds of experienced Python tutors across Bangalore. Compare profiles, teaching styles, reviews, and class timings to find the one that fits your goals, whether that's Automation with Python, Core Python, Data Analysis with Python, or more.
Select your preferred tutor and book a free demo session. Experience their teaching style, ask questions, and understand the class flow before you commit.
Once you're satisfied, make the payment securely through UrbanPro and start your Python journey! Learn at your own pace, online or in-person, and track your progress easily.
Find the best Python Training Tutor classes
Do you offer Python Training classes?
You can browse the list of best Python tutors on UrbanPro.com. You can even book a free demo class to decide which tutor to start classes with.
The fee charged varies between online and offline classes. Generally, you get the best quality at the lowest cost in online classes, as the best tutors prefer not to travel to the student's location.
It definitely helps to join Python Training classes near you in Bannerghatta Main Road, Bangalore, as a teacher provides the motivation you need to learn. If you need personal attention and your budget allows, select a 1-1 class. If you want peer interaction or have budget constraints, select a group class.
UrbanPro has a list of the best Python Training classes
As an experienced tutor registered on UrbanPro.com, I specialize in providing high-quality Python Training...
You can learn MongoDB on YouTube; there are many YouTubers who teach MongoDB, and you can follow their tutorials.
Jessy, it is better to take online training.
Import the file name without the extension. For example, if the file is my_function.py, write import my_function.
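A minimal sketch of the idea, assuming a hypothetical module file named greetings.py that defines a say_hello() function (here the file is created inline so the example is self-contained):

```python
# Create the hypothetical module file greetings.py for this self-contained demo.
# In practice the file would already exist next to your script.
with open("greetings.py", "w") as f:
    f.write("def say_hello(name):\n    return f'Hello, {name}!'\n")

import greetings  # note: the file name only, with no .py extension

print(greetings.say_hello("Jessy"))  # prints: Hello, Jessy!
```

Python locates greetings.py on its search path (which includes the script's own directory) and exposes its functions under the module name.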
Tons of features out of the box for web development. Quick to embrace new things.
https://vz-3ad30922-ba4.b-cdn.net/abdeb0d5-8fc9-44bf-b82f-8d7c9c19c9fb/play_480p.mp4
We take on a small task: slicing the username and domain name out of an email address. We ask the user to type their email address. We take the email address...
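A short sketch of the exercise described above (the address is hard-coded here instead of being read with input(), so the example runs on its own):

```python
def split_email(email):
    """Slice an email address into its username and domain parts."""
    at = email.index("@")      # position of the separator
    username = email[:at]      # everything before the '@'
    domain = email[at + 1:]    # everything after the '@'
    return username, domain

print(split_email("jessy@example.com"))  # prints: ('jessy', 'example.com')
```

In the actual lesson the address would come from input(), but the slicing logic is the same.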
BigData • What is BigData • Characteristics of BigData • Problems with BigData • Handling BigData • Distributed Systems • Introduction to Distributed...
Day 1: Python Basics. Objective: Understand the fundamentals of the Python programming language. Variables and Data Types (Integers, Strings, Floats,...
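A minimal illustration of the Day 1 topics listed above (the variable names and values are just examples, not part of the syllabus):

```python
# Variables and core data types in Python.
age = 25              # int: whole numbers
name = "Madhoo"       # str: text
height = 5.6          # float: decimal numbers
is_student = True     # bool: True or False

# type() reports the data type of each variable.
print(type(age).__name__)     # prints: int
print(type(name).__name__)    # prints: str
print(type(height).__name__)  # prints: float
```

Python infers each variable's type from the value assigned to it; no type declarations are needed.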
https://vz-3ad30922-ba4.b-cdn.net/328d16c1-ccf3-47ed-a237-2e5269f142bb/play_480p.mp4