Take BSc Tuition from the Best Tutors
Answered on 16/03/2019 Learn BSc Computer Science
Ruhinaz
Tutor for all students who are studying and in need of help
First, revise the basics you have already covered in the 1st and 2nd years of your BSc subject. Then go through the syllabus and note down the topics you already know, so you can prepare your own notes; Google will help you with that. For the harder topics, take the help of a good IT tutor who can make you understand them better.
Lesson Posted on 23/12/2017 Learn BSc Computer Science
C++: Passing A Function As an Argument
Ashutosh Singh
Subject matter expert (Computer Science & Engineering) at Chegg India since June 2019. Teaching programming...
#include <iostream>
using namespace std;

int sum(int a, int b)
{
    return a + b;
}

// '*f' is a POINTER TO A FUNCTION whose RETURN TYPE is 'int' and whose
// argument types are (int, int). Here, 'f' is the FORMAL PARAMETER.
void f2(int (*f)(int, int), int a, int b)
{
    cout << f(a, b) << endl;   // call through the pointer, not sum() directly
}

int main()
{
    int (*p1)(int, int) = &sum;   // here, 'p1' is the ACTUAL PARAMETER
    f2(sum, 10, 20);              // a function's name itself decays to a pointer to it
    f2(p1, 20, 79);
    return 0;
}
Lesson Posted on 09/11/2017 Learn BSc Computer Science
GCC
Java 9 is here! Java 9 is a major feature release of the Java Platform, Standard Edition.
Let's see what it offers over its previous versions.
Moreover, it includes enhancements for the Microsoft Windows and macOS platforms.
Lesson Posted on 23/08/2017 Learn BSc Computer Science
SR-IT Academy
SR - IT Academy is one of the leading tutorial point providing services like tutoring and computer training...
while (left <= right)
The loop invariant is:
all items in A[low] to A[left-1] are <= the pivot
all items in A[right+1] to A[high] are >= the pivot
Each time around the loop:
left is incremented until it "points" to a value > the pivot
right is decremented until it "points" to a value < the pivot
if left and right have not crossed each other,
then swap the items they "point" to.
Lesson Posted on 07/08/2017 Learn BSc Computer Science
Deadlocks In Distributed Systems
SR-IT Academy
SR - IT Academy is one of the leading tutorial point providing services like tutoring and computer training...
Lesson Posted on 05/07/2017 Learn BSc Computer Science
Introductory Discussions On Complexity Analysis
Shiladitya Munshi
Well, I love spending time with students and to transfer whatever computing knowledge I have acquired...
What is Complexity Analysis of Algorithm?
Complexity Analysis, simply put, is a technique through which you can judge how good a particular algorithm is. Now the term “good” can mean many things at different times.
Suppose you have to go from your home to the Esplanade! There are many ways from your home that may lead to the Esplanade. Take any one, and ask whether this route is good or bad. It may so happen that this route is good as far as time of travel is concerned (that is, the route is short enough), but at the same time it may be considered bad taking comfort into consideration (this route may have many speed breakers, leading to discomfort). So the goodness (or badness) of any solution depends on the situation, and whatever is good for you right now may seem bad if the situation changes. In a nutshell, the goodness/badness, or the efficiency, of a particular solution depends on some criteria of measurement.
So what are the criteria while analyzing complexities of algorithms?
Focusing only on algorithms, the criteria are Time and Space. The criterion Time judges how fast or slow the algorithm runs when executed; the criterion Space judges how big or small an amount of memory (primary memory or disk) is required to execute the algorithm. Depending on these two measuring criteria, two types of Algorithm Analysis are done; one is called Time Complexity Analysis and the other is Space Complexity Analysis.
Which one is more important over the other?
I am sorry! I do not know the answer; rather, there is no straightforward answer to this question. Think of yourself. Recalling the previous example of the many routes you have for travelling from your home to the Esplanade, which criterion is most important? Is it Time of Travel, or is it Comfort? Or is it Financial Cost? It depends, actually. While you are in a hurry for shopping at New Market, Time Taken would probably be your choice. If you have enough time on your hands, if you are in a jolly mood and going for a delicious dinner with your friends, you would probably choose Comfort; and at the end of the month, when you are running short of pocket money, Financial Cost would be most important to you. So the most important criterion is a dynamic notion that evolves with time.
Twenty or thirty years back, when the pace of advancement of Electronics and Computer Hardware was slow, computer programs were forced to run with a smaller amount of memory. Today you may have gigantic memory even as RAM, but at that time, thinking of a very large hard disk was daydreaming! So back then, Space Complexity was much more important than Time Complexity, because we had less memory but ample time.
Now the times have changed! Nowadays we generally enjoy large memories, but sorry, we don’t have enough time with us. We need every program to run as quickly as possible! So currently, Time Complexity wins over Space Complexity. Honestly, both are equally important from a theoretical perspective, but the changing times have had an effect on their relative importance.
Lesson Posted on 05/07/2017 Learn BSc Computer Science
Getting A Bit Deeper Into Time Complexity Analysis
Shiladitya Munshi
Well, I love spending time with students and to transfer whatever computing knowledge I have acquired...
What is Time Complexity Analysis?
In the last blog, you got a rough idea about the Time Complexity and that was all about to judge how fast the algorithm can run, or putting it in another way, how much time an algorithm should require while running.
Suppose someone asks you how fast you can do a job, or how much time you require to finish it. Most probably, your answer will be one of these three: (A) at most time t, (B) at least time t, or (C) exactly time t. Whatever your answer might be, it must have been guided by identifying the number of basic operations needed to complete the job. So unless and until you have a clear-cut idea of how many basic operations need to be performed, you cannot say how much time you require to finish the job, right?
Similar is the case when answering how much time is required to execute an algorithm. You first need to identify how many basic operations (like comparisons, memory accesses etc.) the algorithm performs. Once you figure that out, you are all set to compute how fast it can run.
This computation is not an easy business. It is done by establishing a relationship between the run time (execution time) of the given algorithm and the input size of the problem which the algorithm is supposed to solve. Sounds weird? You might be wondering: what about the “number of basic operations needed”? Why did you identify all that?
Your doubts are genuine and well taken. But the thing is, the number of basic operations needed by an algorithm depends on the input size of the problem, and that is why we are really interested in a relationship between the run time and the input size. This relationship actually gives you a clear idea about the time required to run an algorithm, and hence about the time complexity.
What is the physical significance of this relationship? What are its features?
Well, a relationship between the run time and the input size gives you the nature of the growth of the time complexity. Physically, the growth indicates how the run time changes as the input size increases. This relationship has the form of a function which does not deal with any exact quantification; rather, it is expressed as a proportion called the “Order”. For example, if the run time of an algorithm increases in the same proportion as the increase in input size, then the run time is of linear order.
Why do we need to know about the run time complexity? What are the objectives?
The objective of computing run time complexity is twofold. Firstly it gives a measure of efficiency of an algorithm with respect to run/execution time and secondly, in presence of multiple algorithms for a specific problem, it helps in deciding which one to choose for implementation. That means comparison among different algorithms for the same problem is a huge benefit that we can derive from Time Complexity analysis.
How does the Time Complexity Analysis get realized?
There are mainly three classes of Time Complexity Analysis: (A) Worst Case Analysis – it estimates the upper bound on the number of basic operations needed if the algorithm is executed; (B) Best Case Analysis – it estimates the lower bound on the number of operations needed if the algorithm is executed; and lastly (C) Average Case Analysis – an estimate in between.
If not said otherwise, in the rest of all our discussions, Time Complexity Analysis will always indicate the Worst Case Analysis.
Characteristics of computing Time Complexity
While computing time complexity, the following characteristics are generally maintained –
Give me one example
Let us suppose we need to compute f(x) = 7x^4 + 6x^3 -5x^2 + 8x -5.
To compute this polynomial, we can consider two algorithms, (A) Brute Force Algorithm and (B) Horner’s Algorithm.
Brute Force Algorithm computes f(x) as 7*x*x*x*x + 6*x*x*x - 5*x*x + 8*x -5 and Horner’s Algorithm computes f(x) as (((7*x + 6)*x – 5)*x + 8)*x – 5.
Both the algorithms perform two additions and two subtractions apart from some number of multiplications which are different for the two cases. Hence we will ignore the basic operations addition and subtraction and will concentrate only on the number of multiplications.
Brute Force Algorithm, according to the example, for an input size n performs k multiplications, where k = n + (n-1) + (n-2) + …… + 2 + 1 + 0 = (n(n+1))/2.
On the other hand Horner’s Algorithm, as seen in the example, for an input size n, does k’ number of multiplications where k’ = n.
So the order of Time Complexity of the Brute Force Algorithm is expressed by the polynomial (n(n+1))/2, i.e. n^2/2 + n/2. Hence the order of time complexity for the Brute Force Algorithm will be dictated by n^2. It means the run time of the Brute Force Algorithm increases proportionally to the square of the input size.
In comparison, the time complexity of Horner’s algorithm is dictated by the term n only, which means that the increase in run time of Horner’s algorithm is linearly proportional to the input size.
It is evident from the discussion that the Horner’s Algorithm has a better run time than Brute Force Algorithm for computation of polynomial expressions.
Lesson Posted on 05/07/2017 Learn BSc Computer Science
What Are The Two Forms Of #Include?
Shiladitya Munshi
Well, I love spending time with students and to transfer whatever computing knowledge I have acquired...
Lesson Posted on 05/07/2017 Learn BSc Computer Science
Shiladitya Munshi
Well, I love spending time with students and to transfer whatever computing knowledge I have acquired...
Are references and pointers the same?
No.
I have seen this confusion cropping up among students from day one, so it is better to clear it out at the very beginning.
Pointers and reference both hold the address of other variables. Up to this they look similar, but their syntax and further consequences are totally different. Just consider the following pieces of code
Code 1:
int i;
int *p = &i;

Code 2:
int i;
int &r = i;
Here in Code 1 we have declared and defined an integer pointer p which points to the variable i; that is, p now holds the address of i.
In Code 2, we have declared and defined an integer reference r which refers to the variable i; that is, r is now bound to i, much as p is.
So where is the difference?
The first difference can be found just by looking at the code. Their syntaxes!
Secondly, the difference comes up when they are used to assign a value (say 10) to i.
If you are using a pointer, you can do it like *p = 10; but if you are using a reference, you can do it like r = 10. Just be careful to understand that when you are using pointers, the address must be dereferenced using the *, whereas, when you are using references, the address is dereferenced without using any operators at all.
This notion leaves a huge effect as a consequence. As the address of the variable is dereferenced by the * operator while using a pointer, you are free to do arithmetic operations on it. That is, you can increment the pointer p to point to the next address just by doing p++. But this is not possible with references. So a pointer can point to many different elements during its lifetime, whereas a reference can refer to only one element during its lifetime.
Does C language support references?
No. The concept of a reference was added in C++, not in C. So if you compile the following code, a C compiler will object then and there.
#include <stdio.h>
#include <conio.h>
int main(void)
{
int i;
int &r = i;
r = 10;
printf("\n Value of i assigned with reference r = %d",i);
getch();
return 0;
}
But if you are using any C++ compiler, this code will work fine as expected.
If there is no concept of reference in C language, then how come there exists C function call by reference?
Strictly speaking, there is no concept of function call by reference in the C language. C only supports call by value. Though some books (I will not name any) write that C supports call by reference, or that call by reference can be simulated through pointers, I will say firmly that the C language neither directly supports function call by reference, nor provides any other mechanism to simulate the same effect.
I know you are on your toes to argue: what about calling a C function with the address of a variable and receiving it with a pointer? The change made to that variable within the function has a global effect. How can this not be treated as an example of call by reference?
You would probably argue with code like the following:
#include <stdio.h>
#include <conio.h>
void foo(int* p)
{
*p = 5;
printf("\n Inside foo() the value of the variable: %d",*p);
}
int main(void)
{
int i = 10;
printf("\n before calling foo() the value of the variable: %d",i);
foo(&i);
printf("\n after calling foo() the value of the variable: %d",i);
getch();
return 0;
}
Your code will show the result as:
before calling foo() the value of the variable: 10
Inside foo() the value of the variable: 5
after calling foo() the value of the variable: 5
Your points are well taken. But the thing is, what you are showing is not at all a call by reference; it is just a function call by value! Here you are essentially copying the value of the address of your variable i and calling foo with that copy. It just so happens that the value being passed contains the address of another variable. Within the function, you accept this value with a pointer and change the content at the address it holds. So it is nothing but a function call by value.
Please note that to change the value of the content addressed by a pointer, you have to use *; in no way could it be thought of as a reference.
Now let me give you one example of true function call by reference
#include <stdio.h>
#include <conio.h>
void foo (int& r1)
{
r1 = 5;
printf("\n Inside foo() the value of the variable: %d", r1);
}
int main(void)
{
int i = 10;
int &r = i;
printf("\n before calling foo() the value of the variable: %d",i);
foo(r);
printf("\n after calling foo() the value of the variable: %d",i);
getch();
return 0;
}
Will this run with your C compiler? No.
Note: I have used Dev-C++ as the coding platform.
Answered on 22/03/2017 Learn BSc Computer Science
Kousalya Pappu
Tutor
The best tutors for BSc Tuition Classes are on UrbanPro