Take BTech Tuition from the Best Tutors

- Affordable fees
- 1-1 or Group class
- Flexible Timings
- Verified Tutors


Lesson Posted on 24/02/2018 Learn BTech Information Science Engineering

Infix Expression To Postfix Expression Conversion Procedure

SR-IT Academy

SR-IT Academy is one of the leading tutorial points, providing services like tutoring and computer training...


Algorithm

1. Scan the infix expression from left to right.

2. If the scanned character is an operand, output it.

3. If the scanned character is a '(', push it onto the stack.

4. If the scanned character is a ')', pop operators from the stack and output them until a '(' is encountered; discard the '('.

5. Otherwise (the scanned character is an operator),

a. If the stack is empty, or the precedence of the scanned operator is greater than the precedence of the operator on top of the stack, push it.

b. Else, pop operators from the stack and output them while the operator on top of the stack has precedence greater than or equal to that of the scanned operator; then push the scanned operator.

6. Repeat steps 2-5 until the whole infix expression has been scanned.

7. Pop and output from the stack until it is empty.


Lesson Posted on 03/01/2018 Learn BTech Information Science Engineering

Debraj Paul

I completed an MSc in Maths. I am an experienced, qualified ICSE board school teacher with over 12+...

Radiation is the heat transfer mechanism in which no medium is required. It refers to the movement of heat in waves, as it does not need molecules to travel through. The objects need not be in direct contact with one another to transmit heat. Whenever you feel heat without actually touching the object,...


Lesson Posted on 05/07/2017 Learn BTech Information Science Engineering

Shiladitya Munshi

Well, I love spending time with students and transferring whatever computing knowledge I have acquired...

Registers are faster to access than memory. Hence, when we declare a variable with the register keyword, the compiler learns that the variable can be placed in a register. Whether the variable will indeed be put into a register depends on the compiler and on the number and size of the...



Lesson Posted on 05/07/2017 Learn BTech Information Science Engineering

What Is The Difference Between Scope And Lifetime?

Shiladitya Munshi


The scope of a variable is defined as the block of code from which we can refer to or access it. The lifetime of a variable, on the other hand, is defined as the time between memory being allocated for it and that memory being relinquished. Let us have an example: void func1(void){ int x = 5; // do other stuff } void...


Lesson Posted on 05/07/2017 Learn BTech Information Science Engineering

Getting A Bit Deeper Into Time Complexity Analysis

Shiladitya Munshi



**What is Time Complexity Analysis?**

In the last blog, you got a rough idea about Time Complexity, and that was all about judging how fast an algorithm can run or, to put it another way, how much time an algorithm should require while running.

Suppose someone asks you how fast you can do a job, or how much time you require to finish it. Most probably, your answer will be one of these three: (A) at most time t, (B) at least time t, or (C) exactly time t. Whatever your answer, it must have been guided by identifying the number of basic operations needed to complete the job. So unless and until you have a clear-cut idea of how many basic operations need to be performed, you cannot say how much time you require to finish a job, right?

Similar is the case when answering how much time is required to execute an algorithm. You first need to identify how many basic operations (like comparisons, memory accesses, etc.) the algorithm performs. Once you figure that out, you are all set to compute how fast it can all be done.

This computation is not an easy business. It is done by establishing a relationship between the run time (execution time) of the given algorithm and the input size of the problem which the given algorithm is supposed to solve. Sounds weird? You might be wondering what happened to the “*number of basic operations needed*”. Why did you identify all that?

Your doubts are genuine and well taken. But the thing is, the number of basic operations needed by an algorithm depends on the input size of the problem, and that is why we are really interested in a relationship between the run time and the input size. This relationship gives you a clear idea about the time required to run an algorithm, and hence about the time complexity.

**What is the physical significance of this relationship? What are its features?**

Well, a relationship between run time and input size gives you the nature of the growth of time complexity. The growth indicates, physically, how the run time changes as the input size increases. This relationship has the form of a function which does not deal with any exact quantification; rather, it is expressed as a proportion called the “Order”. For example, if the run time of an algorithm increases in the same proportion as the input size, then the run time is of *linear* order.

**Why do we need to know about the run time complexity? What are the objectives?**

The objective of computing run time complexity is twofold. Firstly, it gives a measure of the efficiency of an algorithm with respect to run/execution time; secondly, in the presence of multiple algorithms for a specific problem, it helps in deciding which one to choose for implementation. That means comparison among different algorithms for the same problem is a huge benefit that we can derive from Time Complexity analysis.

**How does the Time Complexity Analysis get realized?**

There are mainly three classes of Time Complexity Analysis: (A) *Worst Case Analysis* – it estimates the *upper bound* on the number of basic operations needed to be performed when the algorithm is executed, (B) *Best Case Analysis* – it estimates the *lower bound* on the number of basic operations needed to be performed when the algorithm is executed, and lastly (C) *Average Case Analysis* – it estimates the number of basic operations averaged over all possible inputs.

Unless stated otherwise, in the rest of our discussions Time Complexity Analysis will always mean Worst Case Analysis.

**Characteristics of computing Time Complexity**

While computing time complexity, generally the following characteristics are maintained:

- If any number of basic operations gets identified which is independent of the input size of the problem, the time complexity for that part is of *constant* order.
- While comparing multiple algorithms, the Time Complexity analysis should ignore the tasks common to all of them.
- If the Time Complexity expression is a polynomial, then the order of time complexity is dictated by the highest-order term of the polynomial.

**Give me one example**

Let us suppose we need to compute f(x) = 7x^4 + 6x^3 -5x^2 + 8x -5.

To compute this polynomial, we can consider two algorithms, (A) Brute Force Algorithm and (B) Horner’s Algorithm.

Brute Force Algorithm computes f(x) as 7*x*x*x*x + 6*x*x*x - 5*x*x + 8*x -5 and Horner’s Algorithm computes f(x) as (((7*x + 6)*x – 5)*x + 8)*x – 5.

Both the algorithms perform two additions and two subtractions apart from some number of multiplications which are different for the two cases. Hence we will ignore the basic operations addition and subtraction and will concentrate only on the number of multiplications.

The Brute Force Algorithm, according to the example, for an input size n does k multiplications, where k = n + (n-1) + (n-2) + …… + 2 + 1 + 0 = n(n+1)/2.

On the other hand Horner’s Algorithm, as seen in the example, for an input size n, does k’ number of multiplications where k’ = n.

So the order of Time Complexity of the Brute Force Algorithm is expressed by the polynomial n(n+1)/2, i.e. n^2/2 + n/2. Hence the order of time complexity for the Brute Force Algorithm is dictated by n^2. It means the run time of the Brute Force Algorithm increases proportionally to the square of the input size.

In comparison, the time complexity of Horner’s algorithm is dictated by the term n only, which means that the increase in run time of Horner’s algorithm is linearly proportional to the input size.

It is evident from the discussion that Horner's Algorithm has a better run time than the Brute Force Algorithm for the computation of polynomial expressions.


Lesson Posted on 05/07/2017 Learn BTech Information Science Engineering

Understanding Asymptotic Notations And Big Oh(O)

Shiladitya Munshi


**What are asymptotic notations for complexity analysis?**

A problem may have numerous algorithmic solutions. In order to choose the best algorithm for a particular task, you need to be able to judge how long a particular solution will take to run. Or, more accurately, you need to be able to judge how long two solutions will take to run, and choose the better of the two. You don't need to know how many minutes and seconds they will take, but you do need some way to compare algorithms against one another.

Asymptotic notations are used for just this, i.e. they are used for comparing the growth of the functions describing two different algorithms.

Let us suppose I write f(n) = W(g(n)), where W is an asymptotic notation. This expression says how the function f(n) grows in comparison to another function g(n).

**What are the most common Asymptotic Notations used in Complexity Analysis?**

The most common asymptotic notations used for time complexity analysis are Big Oh (O), Big Theta (Θ), Big Omega (Ω), Small Oh (o) and Small Omega (ω). Let us discuss them one by one.

**How to describe Big Oh (O)?**

If run time of an algorithm is of O(g(n)), it means that the running time of the algorithm (as n gets larger) is at most proportional to g(n). Hence it helps in estimating an upper bound of the number of basic operations to be performed.

Mathematically, a function f(n) is Big Oh of another function g(n), i.e. f(n) = O(g(n)), if and only if there exist two constants C1 and C2 such that

a) C1 and C2 are always positive

b) 0 <= f(n) <= C1*g(n) for every value of n >= C2

Let us have an example, with f(n) being 2n^2 and g(n) being n^3.

Let us check whether 2n^2 = O(n^3) is true or not.

If we take C1 = 1 and C2 = 2 (both positive, maintaining the condition described in a), then for every n >= C2 we have 0 <= 2n^2 <= 1*n^3, because 2 <= n. For example, n = 3 gives

0 <= 2*3^2 = 18 <= 3^3 = 27.

Hence 2n^2 = O(n^3) is true for C1 = 1 and C2 = 2.



Lesson Posted on 05/07/2017 Learn BTech Information Science Engineering

Understanding Big Omega (Ω), Big Theta (Θ), Small Oh (o) And Small Omega (ω) Notations

Shiladitya Munshi

**How to describe Big Omega (Ω)?**

If run time of an algorithm is of Ω(g(n)), it means that the running time of the algorithm (as n gets larger) is at least proportional to g(n). Hence it helps in estimating a lower bound of the number of basic operations to be performed.

More specifically, `f(n) = Ω(g(n))` (Big Omega) means that the growth rate of `f(n)` is asymptotically greater than or equal to the growth rate of `g(n)`.

Mathematically, a function f(n) is Big Omega of another function g(n), i.e. f(n) = Ω(g(n)), if and only if there exist two constants C1 and C2 such that

a) C1 and C2 are always positive

b) 0 <= C1*g(n) <= f(n) for every value of n >= C2

**How to describe Big Theta (Θ)?**

If the run time of an algorithm is Θ(g(n)), it means that the running time of the algorithm (as n gets larger) grows at the same rate as g(n). Hence it helps in estimating a tight bound on the number of basic operations to be performed.

Hence `f(n) = Θ(g(n))` (Big Theta) means that the growth rate of `f(n)` is asymptotically equal to the growth rate of `g(n)`.

Mathematically, a function f(n) is Big Theta of another function g(n), i.e. f(n) = Θ(g(n)), if and only if there exist three constants C1, C2 and C3 such that

a) C1, C2 and C3 are always positive

b) 0 <= C1*g(n) <= f(n) <= C2*g(n) for every value of n >= C3

**What are Small Oh and Small Omega?**

`f(n) = o(g(n))` (Small Oh) means that the growth rate of `f(n)` is asymptotically strictly less than the growth rate of `g(n)`.

Mathematically, a function f(n) is Small Oh of another function g(n), i.e. f(n) = o(g(n)), if and only if for every positive constant C1 there exists a constant C2 such that

a) C1 and C2 are always positive

b) 0 <= f(n) < C1*g(n) for every value of n >= C2

Note that, unlike Big Oh, the bound must hold for *every* positive C1, not just for some C1. So this gives a strict, non-tight upper bound on the complexity of f(n).

On the other hand, `f(n) = ω(g(n))` (Small Omega) means that the growth rate of `f(n)` is asymptotically strictly greater than the growth rate of `g(n)`.

Mathematically, a function f(n) is Small Omega of another function g(n), i.e. f(n) = ω(g(n)), if and only if for every positive constant C1 there exists a constant C2 such that

a) C1 and C2 are always positive

b) 0 <= C1*g(n) < f(n) for every value of n >= C2

Again the bound must hold for every positive C1, so this gives a strict, non-tight lower bound on the complexity of f(n).


Lesson Posted on 05/07/2017 Learn BTech Information Science Engineering

An Interesting Discussion About Malloc And Calloc

Shiladitya Munshi


**What are malloc and calloc?**

Simply put, they are predefined functions in the C language.

malloc() and calloc() are two such functions, used for more or less identical purposes: to allocate memory for variables dynamically, during the execution time of the program.

**Give me a brief of malloc:**

So, as I have said, malloc is a function; like all other functions, malloc has three valid components: (a) a function name, (b) function arguments and (c) a function return type. That is, malloc must have a signature or prototype. Now what is the prototype of malloc? It is the following:

**(void*) malloc(unsigned int)** [Note: the actual prototype, `void *malloc(size_t size)`, is slightly different from this, but for the time being this simplification will work]

It shows that the function name is malloc itself, it expects an unsigned integer as argument and it returns a void pointer or generic pointer. So let us discuss a bit about them.

As per the prototype, malloc expects an unsigned integer. To assimilate this, let us recall why we use malloc at all: malloc is used to allocate a chunk of memory dynamically, at program run time. So we must let malloc know the size of that chunk of memory. Suppose we want malloc to allocate a chunk of memory where we can store 10 integers. In this case, we actually want malloc to allocate 10 * sizeof(int) bytes of memory (20 bytes on a system where an int occupies 2 bytes, 40 bytes where it occupies 4). And we do it by supplying that size as the argument. So the unsigned integer passed to malloc is actually the memory requirement in bytes; and since a memory requirement can never be negative, the data type is unsigned.

Now, what is that void pointer doing at the beginning, as the return type? To understand this, recall that a function gives feedback to its caller through its return value. So after allocating the memory, malloc has to return the base pointer, i.e. the address of the starting location of the allocated memory chunk. Through a void pointer, malloc actually returns this base address.

I am sure you will scream out “why void?” Wait a minute. Answer me: who wrote this malloc function? Is it you or someone else? Obviously it is not you; the malloc function was written by someone else. Now, the original developer of malloc had no idea for which purpose you would be using it. You can use malloc to allocate memory for storing some integers, or some floats, or some structures defined by you. There are so many options, and they are all equally likely. You as a programmer know what your malloc call is being used for, but the original programmer had no idea about that. So malloc returns a void pointer or generic pointer, a special one which can point to memory of any type generically. It is then your responsibility to properly type cast this void pointer: if you are using it to store some integers, cast the void* to int*; if you are using malloc to store some Node type structures, then cast it to struct Node*. Got it?

One last point still remains. You must have seen the malloc argument contain sizeof(), right? Why is that so?

As I have mentioned, the memory requirement is conveyed through the argument, and it is quite possible that the memory requirement changes as the OS changes. For example, on one system you may require 20 bytes to store 10 integers, while it may be 40 bytes on another system or OS. So if I hardcode my memory requirement as 20 or 40 for any specific platform, it won't suffice for the scenarios presented by other systems. To address this portability issue, which may arise later, we as programmers should not deal with the memory requirement directly. We should tell the compiler only that we need to store n elements, and let the compiler decide the size of each such element by using sizeof.

So we should call malloc with the argument (n * sizeof(int)) if we want to store n integers. By doing this, we are letting the C compiler calculate the memory requirement (under the current OS) of one element; multiplying this by n, the compiler can compute the exact memory requirement for the current system.

As a good programmer you should never hardcode your memory requirement; it may end in a serious problem if your program gets ported to another OS.

Look at the following examples for a clear understanding:

**To store 15 characters, do like:**

`char *pointerToCharArray;`

`int size = 15;`

`pointerToCharArray = (char *)malloc(size * sizeof(char));`

**To store n MyStructure type structures, do like:**

`struct MyStructure *pointerToMyStructureArray;`

`pointerToMyStructureArray = (struct MyStructure *)malloc(n * sizeof(struct MyStructure));`

**To store k integers, do like:**

`int *point;`

`point = (int *)malloc(k * sizeof(int));`

Once you are done with these, you can use your pointer (*pointerToCharArray*, *pointerToMyStructureArray* or *point*, as seen in the examples) like an array, with indexing. That means *point[5]* will give you the integer stored at the 5th index of *point*, i.e. the sixth integer stored within the memory chunk whose base address is held by *point*.

**So what is calloc then? How is it different from malloc?**

Calloc is another function which, like malloc, is used for dynamic memory allocations.

If malloc is called to allocate memory that can store 10 integers, the request is expressed as a single size of 10 * sizeof(int) bytes (say 20 bytes where an int is 2 bytes), whereas calloc, called for the same purpose, requests the same 20 bytes expressed differently: as 10 elements of 2 bytes each. Hence calloc has the following prototype:

**(void*) calloc(unsigned int, unsigned int)**

The first *unsigned int* is the number of elements you want to store, and the second *unsigned int* is the memory needed to store a single element.

This is why calloc describes the memory space as multiple elements, each of equal size. The void* serves the same purpose as in malloc.

So look at the following example for clear understanding:

**To store 15 characters, do like:**

`char *pointerToCharArray;`

`int size = 15;`

`pointerToCharArray = (char *)calloc(size, sizeof(char));`

**To store n MyStructure type structures, do like:**

`struct MyStructure *pointerToMyStructureArray;`

`pointerToMyStructureArray = (struct MyStructure *)calloc(n, sizeof(struct MyStructure));`

**To store k integers, do like:**

`int *point;`

`point = (int *)calloc(k, sizeof(int));`

So the syntactic difference is that malloc takes one argument while calloc takes two; the discussion above should tell you why.

Another point of difference is the following:

malloc allocates the memory chunk and leaves its contents uninitialized (garbage values), whereas calloc allocates memory for multiple elements and initializes all of it with 0s (zeros).

*So we come to an end of this discussion. I am open to clear your doubts if any.*


Lesson Posted on 05/07/2017 Learn BTech Information Science Engineering

What Are The Two Forms Of #Include?

Shiladitya Munshi

There are two variants of #include. One is #include &lt;file&gt; and the other is #include "file". In general, the first form, #include &lt;file&gt;, is used to include system-defined header files, which are searched for in the standard list of system directories. The second form, i.e. #include "file",...



Answered on 12/02/2017 Learn BTech Information Science Engineering

Prashanth Kannadaguli

BEST Technical Trainer & Freelancer

Can change or create new.


