UrbanPro

Learn Advanced Statistics from the Best Tutors

  • Affordable fees
  • 1-1 or Group class
  • Flexible Timings
  • Verified Tutors


Learn Advanced Statistics with Free Lessons & Tips


Lesson Posted on 02/12/2020 · Learn Advanced Statistics

Scales of Measurement

Medha K

As a Senior Business Data Analyst, I provide private training in the field of Statistics and Data Analysis....


Scales of Measurement are used to define & categorize variables.

Scales of Measurement include:

  • Nominal
  • Ordinal
  • Interval
  • Ratio

Nominal:

  • Numbers or values assigned to variables identify & classify objects.
  • It merely labels objects into different unordered categories.
  • Example: Gender, Religion, Marital Status

Ordinal:

  • Numbers indicate relative positions of objects, i.e. they establish order only.
  • But they do not give information about the magnitude of differences between objects.
  • Examples: Ranks, Socio-economic class

Interval:

  • Equal intervals between objects represent similar differences, but there is no true zero.
  • Zero does not represent the absolute lowest value.
  • Example: Temperature in Fahrenheit

Ratio:

  • Possesses all the properties of the nominal, ordinal & interval scales, and in addition there is a true zero.
  • Examples: Age, Height, Weight, Income
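
A small R sketch of the four scales (variable names and values here are illustrative): nominal data become an unordered factor, ordinal data an ordered factor, while interval and ratio data stay numeric:

    gender <- factor(c("M", "F", "F", "M"))            # nominal: unordered labels
    ses <- factor(c("low", "high", "mid"),
                  levels = c("low", "mid", "high"),
                  ordered = TRUE)                      # ordinal: order, but no distances
    temp_f <- c(98.6, 100.4, 97.9)                     # interval: differences meaningful, no true zero
    income <- c(42000, 55000, 61000)                   # ratio: true zero, so ratios are meaningful
    ses[1] < ses[2]                                    # TRUE: comparisons work on ordered factors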

Asked on 04/04/2020 · Learn Advanced Statistics


Can somebody please answer these questions? I need them urgently.


Lesson Posted on 19/05/2018 · Learn Advanced Statistics, Business Mathematics and Statistics, Statistics and Probability

Simulation of Traffic

Parth Loya

I am a professional traffic engineer working with the city of Mumbai. I have done a Master's in Traffic...


Ever wondered how statistics deals with the traffic you encounter every day while travelling?

Well, we model the arrival of vehicles as a Poisson distribution. For example, 10 vehicles arrive in the 1st hour, 14 in the 2nd hour, 7 in the 3rd hour, and so on. From such counts you can get the mean number of vehicles per hour, which is the parameter of the Poisson distribution.

Now, can we simulate traffic arrivals using this information? We know the formula for the pmf of a Poisson distribution, P(X = k) = e^(-λ) λ^k / k!, so for every hour we can draw the number of arriving vehicles and obtain the flow of traffic.
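
A minimal R sketch of this idea (the hourly counts and the 24-hour horizon are illustrative):

    observed <- c(10, 14, 7, 12, 9)   # vehicles counted in successive hours
    lambda <- mean(observed)          # mean arrivals per hour: the Poisson parameter
    set.seed(42)                      # for a reproducible run
    rpois(24, lambda)                 # simulated vehicle arrivals for each of 24 hours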

This is how advanced traffic simulators work!



Lesson Posted on 19/12/2017 · Learn Advanced Statistics, Data Science, Machine Learning

Tuning Parameters Of Decision Tree Models

Ashish R.

SAS certified analytics professional, with more than 11 years of industrial and 12 years of teaching experience....


Implementations of the decision tree algorithm usually provide a collection of parameters for tuning how the tree is built. The defaults in Rattle often provide a basically good tree. They are certainly a very good starting point, and indeed may be a satisfactory end point! However, tuning will be necessary where, for example, the target variable has very few observations of the particular class of interest. (Why?)

The following tuning parameters are quite useful to know and use in developing many tree-based classifications. For details about all the tuning parameters, type ?rpart.control in the RStudio console.

  • minsplit: the minimum number of observations that must exist in a node for a split to be attempted.
  • minbucket: the minimum number of observations in any terminal (leaf) node.

The two options “minbucket” and “minsplit” are closely related. If only one of them is specified, rpart sets minsplit = minbucket * 3 or minbucket = minsplit / 3, as appropriate. A node will always have at least “minbucket” observations, and it will be considered for splitting only if it has at least “minsplit” observations and, on splitting, each of its children would have at least “minbucket” observations. Put simply, if we specify only “minbucket”, rpart automatically takes three times minbucket as the minsplit parameter by default.

  • maxdepth: sets the maximum depth of any node of the final tree, with the root node counted as depth 0. For input values greater than 30, rpart will give nonsense results on 32-bit machines.
  • priors: sometimes the proportions of classes in a training set do not reflect their true proportions in the population. We can supply the population proportions to the Rattle package, and the resulting model will reflect them.

The so-called priors can also be used to “boost” a particularly important class, by giving it a higher prior probability, although this might best be done through the Loss Matrix.

In Rattle the priors are expressed as a list of numbers that sum up to 1. The list must be of the same length as the number of unique classes in the target variable. An example for binary classification is 0.5, 0.5.

The default priors are set to be the class proportions as found in the training dataset.

Using rpart directly, we specify the prior within an option called parms:
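
A minimal sketch of that call, assuming a data frame d with a two-class target y (both names are hypothetical):

    library(rpart)
    fit <- rpart(y ~ ., data = d, method = "class",
                 parms = list(prior = c(0.5, 0.5)))   # equal priors for the two classes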

Complexity parameter (cp): this parameter controls how splits are carried out (i.e., the number of branches in the tree). The value should be under 1; the smaller the value, the more branches in the final tree. A value of "Auto", or omitting the value, results in the "best" complexity parameter being selected based on cross-validation. The default cp value is usually taken to be 0.01.

The complexity parameter (cp) is used to control the size of the decision tree and to select the optimal tree size. If the cost of adding another variable to the decision tree from the current node is above the value of cp, then tree building does not continue. We could also say that tree construction does not continue unless it would decrease the overall lack of fit by a factor of cp. Setting this to zero will build a tree to its maximum depth (and perhaps a very, very large tree). This is useful if we want to look at the cp values for various tree sizes; this information appears in the console window. We look for the number of splits where the sum of xerror (cross-validation error, relative to the root node error) and xstd (the standard error of xerror) is minimum. This is usually early in the list.

The option to set the cp parameter in R is as follows (d and y as in the sketch above):

    control <- rpart.control(cp = 0.01, minsplit = 50)
    fit <- rpart(y ~ ., data = d, method = "class", control = control)
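
To look at the cp values for various tree sizes, as described above, rpart provides printcp() and plotcp() (a short sketch, continuing with the fitted model fit):

    printcp(fit)   # CP table with rel error, xerror (cross-validation error) and xstd
    plotcp(fit)    # plot to help pick the split count where xerror + xstd is smallest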

 


Lesson Posted on 10/08/2017 · Learn Advanced Statistics

Regression Analysis

R. K. Shukla

Ripunjai is a highly enthusiastic analytics professional with more than nine years of experience in diversified...


In statistical modeling, regression analysis is a statistical process for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables (or 'predictors').

Multiple regression is an extension of simple linear regression. It is used when we want to predict the value of a variable based on the value of two or more other variables. The variable we want to predict is called the dependent variable (or sometimes, the outcome, target or criterion variable).
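
A minimal R sketch of a multiple regression, using the built-in mtcars data (the predictors are chosen only for illustration):

    fit <- lm(mpg ~ wt + hp, data = mtcars)   # mileage predicted from weight and horsepower
    summary(fit)                              # coefficients, R-squared, p-values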


Answered on 24/05/2017 · Learn Advanced Statistics

Bestintown Analytics Private Limited

Hello Chithra, Best in Town Analytics, Bangalore is one of the best choices for learning Statistics. We will help you learn from the basic to the advanced level of conceptual understanding. Trainers have rich experience and IIM/IIT/corporate backgrounds. We would like to share a NEW BATCH of our Math and Stat Crash Course for Data Science.

Course Curriculum:

  • Probability and Statistics: Basic Probability, Discrete Random Variables, Continuous Random Variables, Discrete Probability Distribution, Continuous Probability Distribution, Normal Distribution, Descriptive Statistics, Derivation of Simple Linear Regression
  • Mathematics for Analytics: Basics of Indices and Logarithms, Sets and Functions, Progressions, Limits and Continuity, Essential Differential Calculus, Essential Integral Calculus, Basics of Linear Algebra, Eigenvalues and Eigenvectors, Applications of Linear Algebra in Analytics

Duration: 24 hours
Timings: Sat and Sun, 9 am to 4 pm, on May 27, 28 and June 3, 4
Course Fee: 7500 plus tax
Early bird discounted fee: 6000 plus tax
Group of 5 or more: 5000 per person plus tax

Thanks and best wishes.


Lesson Posted on 06/01/2017 · Learn Advanced Statistics, Data Science, Business Analytics Training, Business Analysis Training, Data Analysis, Data Visualization, Machine Learning, Data Modeling

Beware Of Trainers Of Data Science.

Data Labs Training and Consulting Services

We provide online/classroom training. Our team is qualified from the National Institute of Technology (NIT)...


Most of the trainers in the market are teaching DATA SCIENCE as:

1) Some software tools like R/Python/SAS/Hadoop, etc.

2) Spending very little time on Mathematics and Statistics (mostly 10 hrs on mathematics/statistics; most trainers teach a few algorithms and call that DATA SCIENCE without explaining the background mathematics).

If you know only the above two things, you will never become a data scientist. You may get a job, as there is a lot of demand in the data science job market, but once you get into the company, you cannot do the job.

How to evaluate a trainer and their syllabus?

1) Ask the trainer what amount of mathematics/statistics he is going to teach.

If you get the answer as 80%-90% mathematics and statistics using the paper-and-pen method, then you can choose that trainer. Once you know the mathematics and statistics, learning any software will not take more than a week. So do not ask the trainer what software tools he is going to teach; ask how much mathematics and statistics he is going to teach.

If any trainer says mathematics/statistics are not required and that you should only learn some software, then you can conclude that that particular training is not good.

2) Ask him how much Probability, Matrices, Calculus and Coordinate Geometry he is going to teach.

If he says around 30-40 hours, apart from Inferential Statistics/Predictive Analytics/Machine Learning, then you can join that particular trainer.

Once you are comfortable with Probability, Matrices, Calculus and Coordinate Geometry, learning machine learning/predictive analytics, etc. is a cakewalk.

If the above criteria are met, then learn any tool like R/Python, etc.

 

 

 


Lesson Posted on 08/12/2016 · Learn Advanced Statistics, Data Science, Business Analytics Training, Business Analysis Training

Principal Component Analysis: A Dimension Reduction Technique

Ashish R.

SAS certified analytics professional, with more than 11 years of industrial and 12 years of teaching experience....


In simple words, principal component analysis (PCA) is a method of extracting important variables (in the form of components) from a large set of variables. It extracts a low-dimensional set of features from a high-dimensional data set with the motive of capturing as much information as possible. With fewer variables, visualization also becomes much more meaningful. This is why PCA is called a dimension reduction technique. PCA is most useful when dealing with higher-dimensional data in which the variables have significant correlation among them.

Principal components analysis is one of the simplest of the multivariate methods. The objective of the analysis is to take p variables (x1, x2, x3, ..., xp) and find linear combinations of these to produce transformed variables (z1, z2, z3, ..., zp) that are uncorrelated, ordered by their importance, and together describe the overall variation in the data set.

The lack of correlation means that the indices measure different “dimensions” of the data, and the ordering is such that var(z1) ≥ var(z2) ≥ ... ≥ var(zp), where var denotes variance. The z indices are then the principal components. When doing principal components analysis, there is always the hope that the variances of most of the indices will be so low as to be negligible. In that case, most of the variation in the full data set can be adequately described by the few z variables whose variances are not negligible, and some degree of economy is achieved. For this reason PCA is also called a dimension reduction technique. Often the z variables that explain significant variance have a dominant loading factor associated with particular original x variables, and so describe a specific quantitative or qualitative aspect of those attributes; such newly formed z variables are called latent factors.

Principal components analysis does not always work, in the sense of reducing a large number of original variables to a small number of transformed variables. Indeed, if the original variables are uncorrelated, the analysis achieves nothing. The best results are obtained when the original variables are very highly correlated, positively or negatively. If that is the case, it is quite conceivable that, for example, 20 or more original variables can be adequately represented by two or three principal components. If this desirable state of affairs occurs, the important principal components will be of some interest as measures of the underlying dimensions in the data. It will also be of value to know that there is a good deal of redundancy in the original variables, with most of them measuring similar things.

Where is it used?

A multi-dimensional hyperspace is often difficult to visualize. The main objectives of unsupervised learning methods are to reduce dimensionality, to score all observations on a composite index, and to cluster similar observations together based on multivariate attributes. Summarizing multivariate attributes by two or three variables that can be displayed graphically with minimal loss of information is useful in knowledge discovery. Because it is hard to visualize a multi-dimensional space, PCA is mainly used to reduce the dimensionality of d multivariate attributes into two or three dimensions.

PCA summarizes the variation in correlated multivariate attributes into a set of uncorrelated components, each of which is a particular linear combination of the original variables. The extracted uncorrelated components are called Principal Components (PCs) and are estimated from the eigenvectors of the covariance matrix of the original variables. Therefore, the objective of PCA is to achieve parsimony and reduce dimensionality by extracting the smallest number of components that account for most of the variation in the original multivariate data, summarizing the data with little loss of information.
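
A minimal R sketch with prcomp(), using the built-in USArrests data (chosen only for illustration; scaling matters when variables are measured in different units):

    pca <- prcomp(USArrests, scale. = TRUE)   # centre and scale, then extract components
    summary(pca)          # proportion of variance explained by each component
    pca$sdev^2            # eigenvalues; components with eigenvalue > 1 are often retained
    head(pca$x[, 1:2])    # scores: each observation summarized by the first two PCs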

A few use cases where PCA is used:
Survey data: any kind of market survey data collected on a Likert scale (0-5/0-10, etc.) can be used to derive principal components that describe a specific sentiment of the customers/participants in the survey. The principal components with eigenvalue > 1 are the important ones to consider.

Market mix model: in developing a market mix model, usually 52-104 weeks of sales and marketing spend data, along with many brand image variables measured on a monthly/quarterly basis, are used to derive the contribution of marketing spend to revenue. In the overall ROI calculation a mix model is developed. Realized sales/revenue/pipeline sales are modeled with the help of many spend-related attributes and their various derived adstock values. In such a scenario, PCA is used to reduce the overall dimension of the data.

Brand image: to build a brand image from many brand variables, PCA is often used to calculate a brand value index.

NPS score calculation: in the calculation of NPS (Net Promoter Score) from customer survey data, PCA is often used to consider the overall effect of all the variables involved.

CSAT score calculation: similarly, PCA is used in CSAT score calculation.

 


Lesson Posted on 08/12/2016 · Learn Advanced Statistics, Data Science, Business Analysis Training, Business Analytics Training

What is a Logistic Regression Model?

Ashish R.

SAS certified analytics professional, with more than 11 years of industrial and 12 years of teaching experience....


Logistic regression is a form of regression used when the dependent variable is a dichotomy (yes or no) and the independent variables are of any type (continuous or categorical).

Logistic regression can be used to predict a dependent variable on the basis of continuous and/or categorical independent variables and to determine the percentage of variance in the dependent variable explained by the independent variables. The impact of predictor (independent) variables is usually explained in terms of odds ratios. This is one of the most preferred linear classifier models and is used to solve many problems in our client-services industry in India.
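
In R such a model is typically fit with glm() and a binomial family. A minimal sketch, assuming a hypothetical data frame customers with a binary churned flag and illustrative predictors:

    fit <- glm(churned ~ tenure + monthly_spend + plan_type,
               data = customers, family = binomial(link = "logit"))
    summary(fit)      # coefficients on the log-odds scale
    exp(coef(fit))    # odds ratios: the usual way to explain predictor impact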

One important point: the choice of model for a practical use case depends only on the type of the dependent variable and, often, its underlying probability distribution. It does not depend on the types of the independent/predictor variables.

Where it is used:

The logistic model is used in various settings. Among them, the following are very common:

  • Customer attrition/churn model: to predict the customers likely to attrite from a bank/financial institution/telecom service
  • Next-purchase propensity model: to predict whether a customer is likely to purchase if we target them with a promotion/campaign
  • Cross-sell model: whether a customer is likely to buy a new product or service from across all available products/services
  • Upsell model: whether a customer is likely to buy more in the next quarter than their existing purchase pattern suggests, if we target them with the right promotional offer
  • Customer conversion model: whether a prepaid customer of a telecom giant will convert to a postpaid customer if we target them suitably
  • Insurance model: whether a customer is likely to be hospitalized in the coming quarters

This model is one of the predictive models used across many industries, such as aviation, banking and finance, retail, pharma, CPG, telecom, shipping lines, online retail, e-commerce, FMCG, etc.

Aim of building a logit model:

The basic aim of building such a model is to estimate the probability that a customer/patient will respond to a defined event; the event is the outcome we are trying to predict. Based on the predicted probability, the right business steps can be taken to optimize the margin of the business. Studying the drivers responsible for an event is another, parallel objective in building such models. By building a logit model (a scoring sketch follows the list below):

  1. The right targeting list can be generated to maximize the response rate
  2. The right set of customers can be targeted to cross-sell or upsell any product/service
  3. The right business decisions can be taken in advance to get value from customers while they are active in the system
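
A sketch of such scoring and targeting, continuing the hypothetical churn model above (new_customers and the list size of 1,000 are illustrative):

    p <- predict(fit, newdata = new_customers, type = "response")          # predicted event probabilities
    target_list <- new_customers[order(p, decreasing = TRUE), ][1:1000, ]  # top 1,000 by propensity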


Lesson Posted on 29/10/2016 · Learn Advanced Statistics, Big Data, Business Analysis Training, Data Analysis, Data Modeling, Data Science, Data Visualization, Machine Learning

Approach for Mastering Data Science

Data Labs Training and Consulting Services

We provide online/classroom training. Our team is qualified from the National Institute of Technology (NIT)...


A few tips to master Data Science:

1) Do not start your learning with software like R/Python/SAS, etc.

2) Start with the very basics, like 10th-class Matrices/Coordinate Geometry.

3) Understand a little bit more about:

a) Matrices: transpose/inverse/symmetric/idempotent matrices/SVD/eigenvalues and eigenvectors/orthogonal matrices, etc.

b) Vectors: subspace/span/basis/linear combination/linear dependence/linear independence, etc.

c) Coordinate Geometry: equation of a straight line/perpendicular distance/parallel distance, etc.

If you do not understand the above, you cannot become a data scientist, as those are the very basics.

4) Start with Statistics for Management by Levin and Rubin,

then get into the books I mentioned in the previous message.

Regards

DATA LABS


About UrbanPro

UrbanPro.com helps you to connect with the best Advanced Statistics Training in India. Post Your Requirement today and get connected.



UrbanPro.com is India's largest network of most trusted tutors and institutes. Over 55 lakh students rely on UrbanPro.com to fulfill their learning requirements across 1,000+ categories. Using UrbanPro.com, parents and students can compare multiple Tutors and Institutes and choose the one that best suits their requirements. More than 7.5 lakh verified Tutors and Institutes are helping millions of students every day and growing their tutoring business on UrbanPro.com. Whether you are looking for a tutor to learn mathematics, a German language trainer to brush up your German language skills, or an institute to upgrade your IT skills, we have the best selection of Tutors and Training Institutes for you.