
Currently a Research Assistant at Matrix ComSec, and a Stanford University certified Machine Learning developer.

I have recently completed my B.E., taken a year's break, and am looking to do some productive work.

I did my final-year project on Natural Language Processing using Artificial Intelligence; it was purely Python-based, which is my area of expertise.

I offer a special three-week course in Python, with an optional fourth week during which I can teach you Natural Language Processing, image and video processing, or the basics of Machine Learning and neural networks.


Hindi, Gujarati, English

Bachelor of Engineering (B.E.) from Sardar Vallabhbhai Patel Institute Of Technology in 2017.

Vadodara (Baroda), India - 390001.

Python Training classes
Course Duration provided: 1-3 months
Seeker background catered to: Educational Institution, Corporate company, Individual
Certification provided: No
Class Location: Student's Home, Tutor's Home
Years of Experience in Python Training classes: 1


Basics Of Machine Learning

We have all been hearing recently about the term "Artificial Intelligence", and how it will shape our future. Well, Machine Learning is nothing but a subfield of the vast field of A.I. Some...

Linear Regression Without Any Libraries

I am here to help you understand and implement Linear Regression from scratch without any libraries. Here, I will implement this code in Python, but you can implement the algorithm in any other programming...
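As a taste of what that post covers, here is a minimal from-scratch sketch (my own illustration, not the post's code): fitting a line y = w*x + b by gradient descent on the mean squared error. The data and the learning rate below are made up.

```python
def fit_line(xs, ys, lr=0.05, epochs=5000):
    # Gradient descent on mean squared error, no libraries needed.
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Partial derivatives of (1/n) * sum((w*x + b - y)^2).
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# Points drawn from the line y = 2x + 1; the fit should recover w near 2, b near 1.
w, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
```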

K-Means Clustering For Image Compression From Scratch

Hello World, This is Saumya, and I am here to help you understand and implement K-Means Clustering Algorithm from scratch without using any Machine Learning libraries. We will further use this algorithm...

"How can I find OpenCV-Python jobs?" in IT Courses/Programming Languages/Python

There is a new website called https://jobs.pyimagesearch.com/, which is quite good. You can also mention keywords like computer vision, image processing, and video processing on various job portals. Moreover, it is always better to create a job than to find one.


"Which website should I learn Python from?" in IT Courses/Programming Languages/Python

pythonprogramming.net, or the sentdex playlist on YouTube. Both are the same, and they cover not only Python but also various topics such as OpenCV, scikit-learn, TensorFlow, and OpenGL.


"How do I write a program for a Fuzzy C-means/Fuzzy K-means algorithm in C/C++ for multi-dimensional data?" in IT Courses/Programming Languages/C Language

I will help you understand and implement the K-Means clustering algorithm from scratch, without using any machine learning libraries. We will then use the algorithm to compress an image. I will implement the code in Python, but you can port it to any other programming language of your choice by writing just four or five simple functions.

First of all, what exactly is clustering, and K-Means in particular? As discussed in my blog on Machine Learning, clustering is an unsupervised learning problem in which we find groups (clusters) of similar data. K-Means is the most widely used clustering algorithm. Our task is to find the centres around which the data points gather; these centres of the clusters are called centroids, and there are K of them. Note that the centroids may or may not belong to the dataset itself. The K-Means algorithm is:

1. Randomly initialise K cluster centroids.
2. Repeat: (a) the closest-assignment step; (b) the update-centroids step.

First, we randomly choose K points from the dataset {x(1), x(2), ..., x(m)} and initialise them as the cluster centroids {µ1, µ2, ..., µK}:

    import random

    def ClusterInit(arr, K):
        # Pick K distinct data points as the initial centroids.
        print("Generating", K, "clusters from arr")
        return random.sample(arr, K)

Next is the closest-assignment step. For each point x(i) in the dataset:

1. Calculate its distance from each centroid {µ1, µ2, ..., µK}.
2. Assign the index of the closest centroid to c(i).

    from tqdm import tqdm

    def Closest(arr, Clusteroids):
        # For every point, store the index of the nearest centroid.
        print("Computing closest centroids")
        indexes = []
        for point in tqdm(arr):
            dists = [norm(point, c) for c in Clusteroids]
            indexes.append(dists.index(min(dists)))
        return indexes

The distance between a centroid and a data point is properly a norm of the difference between the two vectors, but since distance is only compared here, for simplicity's sake we calculate it as the sum of the absolute differences of their coordinates (the L1 norm):

    def norm(P1, P2):
        # L1 (Manhattan) distance between two points.
        return sum(abs(i - j) for (i, j) in zip(P1, P2))

Next is the update-centroids step. Here we take every data point associated with a particular centroid and replace that centroid with the mean of those points. For this we use the c array, which holds the centroid index assigned to each data point: for k = 1 to K, µk = mean of the points x(i) for which c(i) = k.

    def ComputeMeans(arr, indexes, Clusteroids):
        # Replace each centroid with the mean of its assigned points;
        # centroids with no assigned points are dropped.
        newClus = []
        for i in range(len(Clusteroids)):
            members = [arr[idx] for idx, j in enumerate(indexes) if j == i]
            if members:
                newClus.append(getmean(members))
        if newClus == Clusteroids:  # centroids stopped moving: converged
            return ("end K-Means", newClus)
        return (None, newClus)

We could use NumPy to calculate the column-wise mean, but its axis argument is easy to get wrong, so I devised my own function:

    def getmean(z):
        # Column-wise mean, truncated to integers (our points are RGB values).
        temp = []
        for j in range(len(z[0])):
            temp.append(int(sum(row[j] for row in z) / len(z)))
        return temp

That's it? That's all there is to K-Means clustering. The cluster centroids are the K centres of our K clusters, and the c vector holds, for each data point, the index of the centroid it is associated with. Note that the length of c equals the length of the dataset, not the number of features.

However, a question arises: what is the ideal number of clusters K? As we can see, K can take any value between 1 and m (the length of the dataset). To choose it, we first define a cost function:

    J(µ1, ..., µK, c(1), ..., c(m)) = (1/m) Σᵢ ||x(i) - µc(i)||²

That is, we look for the centroids and assignments that minimise the average squared distance between each point and its centroid. If we choose a very low value of K, say 1, the cost J is very high. If we pick a very high value, say K = m (the size of the dataset), J drops all the way to zero, but that defeats the purpose of clustering. So what should K be? If we plot J against K, we get an arm-shaped curve; we look for the elbow point of that curve, and the corresponding value of K is taken as the ideal number of clusters.

Now let's apply the algorithm to compress an image. K-Means locates the points that are the centroids of clusters; how does that help with compression? It's quite simple! We treat the image as an array of [R, G, B] values and find the particular set of values around which many other values cluster. We then replace every value in a cluster with its centroid, thereby reducing the number of colours used in the image. So let's devise a function that de-shapes the whole image into an array of RGB values:

    import cv2
    import numpy as np

    def deshape(img):
        # Flatten a Height x Width x 3 image into a (Height*Width) x 3 list.
        arr = []
        for row in img:
            for pixel in np.array(row, dtype="uint8").tolist():
                arr.append(pixel)
        return arr

What we have done is convert a Height x Width x 3 3-D array into a (Height*Width) x 3 2-D array. We read an image with OpenCV, pass it to deshape to obtain the array arr, decide the number of clusters K and the number of iterations, and initialise the random centroids:

    img = cv2.imread("112.jpeg")
    arr = deshape(img)
    K = 100
    iterations = 5
    Clusteroids = ClusterInit(arr, K)

The clustering function runs the K-Means loop for the given number of iterations, updating the centroids and the index array as it goes and stopping early if the centroids converge. When the iterations are over, it passes arr, the indexes, and the centroids to the compress function, which replaces each point with the value of its centroid:

    def Clustering(arr, Clusteroids, iterations):
        for i in range(iterations):
            print(i, "th iteration")
            indexes = Closest(arr, Clusteroids)
            print("Computing means of centroids")
            flag, Clusteroids = ComputeMeans(arr, indexes, Clusteroids)
            if flag == "end K-Means":  # converged before the iterations ran out
                break
        return Compress(arr, Clusteroids, indexes)

    def Compress(arr, Clusteroids, indexes):
        # Replace every point with the value of its centroid.
        return [Clusteroids[i] for i in indexes]

The value returned by Clustering is a 2-D array of compressed data, which we now reshape back into a 3-D array and display:

    def reshape(arr, r, c):
        # Rebuild the flat (r*c) x 3 list into r rows of c pixels.
        return [arr[i * c:(i + 1) * c] for i in range(r)]

    data = Clustering(arr, Clusteroids, iterations)
    img2 = reshape(data, img.shape[0], img.shape[1])
    cv2.imshow("Original", cv2.resize(img, (500, 500)))
    cv2.imshow("Compressed", cv2.resize(np.array(img2, dtype="uint8"), (500, 500)))
    cv2.waitKey(0)

Result: as you can see, we have reduced the number of colours used to just 100, and yet we have maintained more than 95% visibility of the image. The full code for this program is attached below; try it on your own images and tweak K and the number of iterations to your convenience. P.S. tqdm is a nice little tool for viewing iteration progress, which is quite useful since this program can take a while for higher values of K. That's it for this blog; if you have any suggestions, corrections, or doubts, feel free to mention them in the comments.

References:
- Machine Learning by Andrew Ng, Coursera.org (among the best MOOCs).
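The hand-rolled loops above are easy to follow but slow for large images. As a rough illustration (my own sketch, not part of the original post), here is a NumPy-vectorised version of the same two K-Means steps. It uses Euclidean distance rather than the L1 norm used above, and the names are my own:

```python
import numpy as np

def kmeans(points, k, iterations=10, seed=0):
    # Vectorised K-Means over an (n, d) float array instead of Python lists.
    points = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        # Closest-assignment step: distance of every point to every centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned points.
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels
```

Compression then becomes a single indexing operation, `centroids[labels]`, mirroring the per-point replacement done by the compress function in the post.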


"How do I write code in C programming that will invert a string like "Welcome to programming" to "gnimmargorp ot emocleW"?" in IT Courses/Programming Languages/C Language

First, find the position of '\0' in the string; that index is the size of the string. Now run a for loop from (size - 1) down to 0, copying the character at each index into a temporary array of chars. That's it.
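To illustrate, here is the same index loop written in Python (my own sketch; in C you would scan for '\0' to find the size, whereas Python strings already carry their length):

```python
def reverse_string(s):
    # Walk from index size-1 down to 0, copying characters into a buffer.
    out = []
    for i in range(len(s) - 1, -1, -1):
        out.append(s[i])
    return "".join(out)

# reverse_string("Welcome to programming") == "gnimmargorp ot emocleW"
```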


"How can I master the GRE Quantitative Section?" in Exam Coaching/Foreign Education Exam Coaching/GRE Coaching

I scored 170Q with just 20 days between my TOEFL (8th October) and my GRE (28th October). Here's what I did. First of all, take the Manhattan 5 lb book and a timer. There are around 30 chapters on the Quant section, and each chapter has roughly 40-50 problems. Set the timer to about 1-1.5 minutes per problem, i.e. around 60 minutes per chapter, and try to solve the whole chapter in one sitting within that time limit. Solve at least 3-4 chapters a day, even if you have plenty of months left before your GRE. This has two benefits: 1. You get accustomed to solving within the time constraint. 2. The GRE is about four hours long and you can't get up before completing it, so you also need to train yourself to solve Quant sections with a tired, worked-up brain. That's why you should do 3-4 lessons a day, devoting as much concentration as you can. At the end of each day, self-evaluate your mistakes and work on them: identify what you did wrong and what you should do to get it right in future. Finally, don't expect all the GRE Quant sections to be easy; their difficulty increases, so be prepared to face the toughest of them by the end of the day.



Machine Learning Training
Years of Experience in Machine Learning Training: 1

Data Science Classes
Years of Experience in Data Science Classes: 4

C Language Classes
Years of Experience in C Language Classes: 4

C++ Language classes
Proficiency level taught: Advanced C++, Basic C++
Class Location: Student's Home, Tutor's Home
Years of Experience in C++ Language classes: 4

Java Training Classes
Teaches: Core Java
Certification training offered: No
Class Location: Student's Home, Tutor's Home
Years of Experience in Java Training Classes: 3

BTech Tuition
BTech Computer Science subjects: Artificial Intelligence, Natural Language Processing, Machine Learning, Object Oriented Programming & Systems, Java Programming
BTech Branch: BTech Computer Science Engineering, BTech Information Science Engineering
BTech Information Science subjects: Artificial Intelligence, Object Oriented Programming, Neural Network and Fuzzy Logic, Machine Learning
Experience in School or College
Type of class: Regular Classes, Crash Course
Class strength catered to: Group Classes
Taught in School or College: No
Class Location: Student's Home, Tutor's Home

BCA Tuition
Experience in School or College
BCA Subject: Programming in C++, C Language Programming, Object Oriented Technologies, Java Programming
Type of class: Regular Classes, Crash Course
Class strength catered to: Group Classes
Taught in School or College: No
Class Location: Student's Home, Tutor's Home

MCA Coaching classes

GRE Coaching classes
Demo Class Provided: No
Name of Awards and Recognition
Background: Working Professional
Experience in taking GRE exam: Yes
Awards and Recognition: Yes
Class Location: Student's Home, Tutor's Home
Years of Experience in GRE Coaching classes: 2
Teaching Experience in detail in GRE Coaching classes: I scored.
