
QUANTITATIVE METHODS 16MBA14

Subject Code: 16MBA14 IA Marks: 20


No. of Lecture Hours / Week: 03 Exam Hours: 03
Total Number of Lecture Hours: 56 Exam Marks: 80
Practical Component: 02 Hours / Week

Unit 1 (6 Hours)
Descriptive Statistics: Measures of Dispersion – Mean deviation, Standard deviation, Variance, Coefficient of Variation.

Unit 2 (8 Hours)
Correlation and Regression: Scatter Diagram, Karl Pearson correlation, Spearman's Rank correlation (one way table only), simple and multiple regression (problems on simple regression only)

Unit 3 (6 Hours)
Probability Distribution: Concept and definition – Rules of probability – Random variables – Concept of probability distribution – Theoretical probability distributions: Binomial, Poisson, Normal and Exponential – Bayes' theorem (No derivation) (Problems only on Binomial, Poisson and Normal).

Unit 4 (6 Hours)
Decision Theory: Introduction – Steps of decision making process – Types of decision making environments – Decision making under uncertainty – Decision making under Risk – Decision tree analysis (only Theory)

Unit 5 (10 Hours)


Linear Programming: structure, advantages, disadvantages, formulation of LPP, solution using Graphical method. Transportation problem: basic feasible solution using NWCM, LCM, and VAM; unbalanced, restricted and maximization problems.


Unit 6 (10 Hours)


Project Management: Introduction – Basic difference between PERT & CPM – Network
components and precedence relationships – Critical path analysis – Project scheduling –
Project time-cost trade off – Resource allocation


CONTENTS

UNIT NO. UNIT NAME

1 DESCRIPTIVE STATISTICS

2 CORRELATION AND REGRESSION

3 PROBABILITY DISTRIBUTION

4 DECISION THEORY

5 LINEAR PROGRAMMING

6 PROJECT MANAGEMENT


Unit - 1

Descriptive Statistics

Measures of Central Tendency

Classified statistical data may sometimes be described as distributed around some value called the central value or average in some sense. It gives the most representative value of the entire data. Different methods give different central values, and these are referred to as measures of central tendency.
Thus, an important objective of statistical analysis is to determine a single value that represents the characteristics of the entire raw data. This single value representing the entire data is called the 'central value' or 'average'. This value is the point around which all other values of the data cluster. Therefore, it is known as a measure of location and, since this value is located at a central point nearest to the other values of the data, it is also called a measure of central tendency.
The common measures of central tendency are a) Mean b) Median c) Mode.
These values are very useful not only in presenting an overall picture of the entire data, but also for the purpose of making comparisons among two or more sets of data.

Mean:

"Average is a value which is typical or representative of a set of data."

- Murray R. Spiegel

"Average is an attempt to find one single figure to describe whole of figures."
- Clark & Schkade
From the above definitions it is clear that an average is a typical value of the entire data and is a measure of central tendency.

Functions of an average
 To represent complex or large data by a single value.

 It facilitates comparative study of two variables.

 Helps to study a population from sample data.

 Helps in decision making.


 Represents a single value for a series of data.

 To establish mathematical relationships.

Characteristics of a typical average


 It should be rigidly defined and easily understandable.

 It should be simple to compute and in the form of mathematical formula.

 It should be based on all the items in the data.

 It should not be unduly influenced by any single item.

 It should be capable of further mathematical treatment.

 It should have sampling stability.

Types of average
Averages or measures of central tendency are of the following types.
1. Mathematical average
a. Arithmetical mean
i. Simple mean

ii. Weighted mean


b. Geometric mean

c. Harmonic mean

2. Positional Averages
a. Median

b. Mode

Arithmetic mean
Arithmetic mean is also called the arithmetic average. It is the most commonly used measure of central tendency. The arithmetic average of a series is the value obtained by dividing the total value of the various items by their number.

Arithmetic average is of two types


a. Simple arithmetic average


b. Weighted arithmetic average

Continuous series
In a continuous frequency distribution, the individual value of each item in the distribution is not known. In a continuous series the mid-points of the various class intervals are written down to replace the class intervals. In a continuous series the mean can be calculated by any of the following methods (a short sketch of the direct method follows the list).

a. Direct method

b. Short cut method

c. Step deviation method
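
As an illustration, here is a minimal Python sketch of the direct method; the class intervals and frequencies are hypothetical values invented for the example:

```python
# Direct method for a continuous series:
#   mean = sum(f * m) / sum(f), where m is the mid-point of each class.
classes = [(0, 10), (10, 20), (20, 30), (30, 40)]   # hypothetical class intervals
freqs = [5, 8, 12, 5]                               # hypothetical frequencies

midpoints = [(lo + hi) / 2 for lo, hi in classes]
mean = sum(f * m for f, m in zip(freqs, midpoints)) / sum(freqs)
print(mean)   # about 20.67
```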

Merits of Arithmetic Mean


1. It is simple and easy to compute.

2. It is rigidly defined.

3. It can be used for further calculation.

4. It is based on all observations in the series.

5. It facilitates direct comparison.

6. It is a more stable measure of central tendency (an ideal average).

Limitations / Demerits of Mean


1. It is unduly affected by extreme items.

2. It is sometimes un-realistic.

3. It may lead to confusion.

4. Suitable only for quantitative data (for variables).

5. It cannot be located by graphical method or by observations.


Geometric Mean (GM)
The GM is the nth root of the product of the quantities of the series. It is obtained by multiplying the values of the items together and extracting the root of the product corresponding to the number of items. Thus, the square root of the product of two items and the cube root of the product of three items are the Geometric Mean.

In the field of business management, various problems often arise relating to the average percentage rate of change over a period of time. In such cases the arithmetic mean is not an appropriate average to employ, and the geometric mean is used instead. The GM is highly useful in the construction of index numbers.
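
A minimal Python sketch of the GM; the growth ratios are invented. Computing via logarithms is equivalent to taking the nth root of the product and avoids overflow on long series:

```python
import math

values = [1.05, 1.10, 0.98, 1.07]   # hypothetical year-on-year growth ratios
# GM = (v1 * v2 * ... * vn) ** (1/n) = exp(mean of the logarithms)
gm = math.exp(sum(math.log(v) for v in values) / len(values))
print(gm)   # average growth ratio per period
```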

Merits of GM
a. It is based on all the observations in the series.

b. It is rigidly defined.

c. It is best suited for averaging ratios and rates.

d. It is less affected by extreme values.

e. It is useful for studying social and economic data.

Demerits of GM
a. It is not simple to understand.

b. It requires computational skill.

c. GM cannot be computed if any of items is zero or negative.

d. It has restricted application.

Harmonic Mean
It is the total number of items divided by the sum of the reciprocals of the values of the variable. It is a specialized average which solves problems involving variables expressed in time rates that vary according to time.

Ex: speed in km/hr, min/day, and price/unit.
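
A small Python illustration with hypothetical speeds; for equal distances covered at each speed, the harmonic mean gives the true average speed:

```python
# HM = n / sum(1/x): appropriate for averaging rates such as speed.
speeds = [40, 60]   # hypothetical speeds in km/hr over two equal distances
hm = len(speeds) / sum(1 / s for s in speeds)
print(hm)   # 48.0 km/hr, not the arithmetic mean 50
```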

Merits of Harmonic Mean


1. It is based on all observations.

2. It is rigidly defined.

3. It is suitable in case of series having wide dispersion.


4. It is suitable for further mathematical treatment.

Demerits of Harmonic Mean


1. It is not easy to compute.
2. It cannot be used when one of the items is zero.

3. It cannot represent a distribution.

Median:
Median is the value of that item in a series which divides the array into two equal parts, one consisting of all the values less than it and the other consisting of all the values more than it. Median is a positional average: the number of items below it is equal to the number of items above it. It occupies a central position.
Thus, the median is defined as the mid value of the variants. If the values are arranged in ascending or descending order of magnitude, the median is the middle value if the number of variants is odd, and the average of the two middle values if the number of variants is even.
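
A minimal Python sketch of this odd/even rule, using invented observations:

```python
def median(values):
    # Middle value if n is odd; average of the two middle values if n is even.
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 == 1 else (s[mid - 1] + s[mid]) / 2

print(median([7, 1, 3]))       # 3
print(median([7, 1, 3, 10]))   # 5.0, the average of 3 and 7
```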

Merits of Median
a. It is simple, easy to compute and understand.

b. Its value is not affected by extreme variables.

c. It is capable of further algebraic treatment.

d. It can be determined by inspection for arrayed data.

e. It can be found graphically also.

f. It indicates the value of the middle item.

Demerits of Median
a. It may not be a representative value, as it ignores extreme values.

b. It cannot be determined precisely when its size falls between two values.

c. It is not useful in cases where large weights are to be given to extreme values.


Mode
It is the value which occurs with the maximum frequency. It is the most typical or common value, the one that receives the highest frequency. It represents fashion and is often used in business. Thus, it corresponds to the value of the variable which occurs most frequently. The modal class of a frequency distribution is the class with the highest frequency. Mode is denoted by Z.
Mode is the value of the variable which is repeated the greatest number of times in the series. It is the usual, and not casual, size of item in the series. It lies at the position of greatest density.
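
A short Python sketch with invented data; note that if two values tie for the highest frequency, this simple version returns whichever is encountered first:

```python
from collections import Counter

data = [3, 7, 7, 2, 7, 5, 3]        # hypothetical observations
counts = Counter(data)              # frequency of each value
mode = max(counts, key=counts.get)  # value with the maximum frequency
print(mode)   # 7
```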

Partition values
The median divides a series into two equal parts. There are other values which divide a series into a number of equal parts; these are called partition values (PV).
Just as one point divides a series into two equal parts (halves), 3 points divide it into four parts (quartiles), 9 points divide it into 10 parts (deciles) and 99 points divide it into 100 parts (percentiles). Partition values are useful to know the exact composition of a series.

Quartiles
A measure which divides an array into four equal parts is known as a quartile. Each portion contains an equal number of items. The first, second and third points are termed the first quartile (Q1), second quartile (Q2) and third quartile (Q3). The first quartile is also known as the lower quartile, as 25% of the observations of the distribution lie below it and 75% above it; the third quartile, or upper quartile, has 75% of the observations below it and 25% above it.
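
A quick illustration using the standard library (statistics.quantiles, available from Python 3.8), on hypothetical data:

```python
import statistics

data = [2, 4, 4, 5, 7, 8, 9, 10, 12, 15]      # hypothetical observations
q1, q2, q3 = statistics.quantiles(data, n=4)  # 3 cut points -> 4 equal parts
print(q1, q2, q3)   # lower quartile, median, upper quartile
```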

Measures of Dispersion
Measures of dispersion are the 'averages of the second order'. They are based on the average of the deviations of the values from a central tendency such as the mean, median or mode. Variability is a basic feature of the values of variables, and such variation or dispersion refers to the 'lack of uniformity'.

Definition: A measure of dispersion may be defined as a statistic signifying the extent of the scatteredness of items around a measure of central tendency.

Absolute and Relative Measures of Dispersion:


A measure of dispersion may be expressed in an absolute form, or in a relative form. It is said to be in absolute form when it states the actual amount by which the value of an item on an average deviates from a measure of central tendency. Absolute measures are expressed in concrete units, i.e., units in terms of which the data have been expressed, e.g. rupees, centimetres, kilograms etc., and are used to describe a frequency distribution.
A relative measure of dispersion is a quotient obtained by dividing the absolute measure by a quantity in respect of which the absolute deviation has been computed. It is as such a pure number and is usually expressed in percentage form. Relative measures are used for making comparisons between two or more distributions.
Thus, absolute measures are expressed in terms of the original units and are not suitable for comparative studies. Relative measures are expressed as ratios or percentages and are suitable for comparative studies.

Measures of Dispersion Types


Following are the common measures of dispersions.
a. The Range

b. The Quartile Deviation (QD)

c. The Mean Deviation (MD)

d. The Standard Deviation (SD)

Range
The range represents the difference between the values of the extremes: the range of a series is the difference between the highest and the lowest values in the series. The values in between the two extremes are not taken into consideration. The range is a simple indicator of the variability of a set of observations. It is denoted by R. In a frequency distribution, the range is taken to be the difference between the lower limit of the class at the lower extreme of the distribution and the upper limit of the class at the upper extreme. The range can be computed using the following equation:
Range = Largest value – Smallest value

Range Merits
i. It is the simplest to compute.

ii. It is rigidly defined.

iii. It is very much useful in Statistical Quality Control (SQC).


iv. It is useful in studying variation in price of shares and stocks.

Limitations
i. It is not a stable measure of dispersion, as it is affected by extreme values.

ii. It does not consider class intervals and is not suitable for class-interval problems.

iii. It considers only the extreme values.

Quartile Deviation
The quartiles divide the total frequency into four equal parts. The lower quartile Q1 refers to the value of the variant corresponding to the cumulative frequency N/4, and the upper quartile Q3 refers to the value of the variant corresponding to the cumulative frequency 3N/4. The quartile deviation is half the difference between them: QD = (Q3 - Q1) / 2.

Merits of Quartile Deviation


 It is very easy to compute.

 It is not affected by extreme values of the variable.

 It is not at all affected by open-end class intervals.

Demerits of Quartile Deviation


 It completely ignores the portions below the lower quartile and above the upper quartile.

 It is not capable of further mathematical treatment.

 It is only a positional average, not a mathematical average.

Mean Deviation
Mean deviation is the average of the differences of the items in a series from the mean, median or mode of that series. It is concerned with the extent to which the values are dispersed about the mean, median or mode. It is found by averaging all the deviations from the central tendency. These deviations are taken into computation without regard to sign. Theoretically, the deviations are preferably taken from the median rather than from the mean or mode.

Merits of Mean Deviation


 It is rigidly defined and easy to compute.


 It takes all items into consideration and gives equal weight to each deviation, irrespective of sign.

 It is less affected by extreme values.

 It smooths out irregularities by averaging the deviations and provides a representative measure.

Demerits of Mean Deviation


 It is not suitable for algebraic treatment.

 Ignoring the signs of the deviations is not justified mathematically.

 It is not a satisfactory measure when the deviations are taken from the mode.

 It is not suitable when class intervals are open-ended.

Standard Deviation
Standard deviation is the square root of the sum of the squared deviations from the mean divided by their number. It is also called the 'root mean square deviation' or 'mean square error deviation'. It is a second moment of dispersion. Since the sum of squares of deviations from the mean is a minimum, the deviations are taken only from the mean (not from the median or mode). The standard deviation is the root mean square (RMS) average of all the deviations from the mean. It is denoted by σ (sigma).
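
A minimal Python sketch of the population standard deviation, with the coefficient of variation from the Unit 1 syllabus (the SD expressed as a percentage of the mean) added for completeness; the observations are invented:

```python
import math

data = [4, 8, 6, 5, 3, 7]   # hypothetical observations
mean = sum(data) / len(data)
# Variance: mean of the squared deviations from the mean.
variance = sum((x - mean) ** 2 for x in data) / len(data)
sd = math.sqrt(variance)    # standard deviation (sigma)
cv = 100 * sd / mean        # coefficient of variation, in percent
print(mean, variance, sd, cv)
```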
Merits
1. It is based on all observations.

2. It can be smoothly handled algebraically.

3. It is a well defined and definite measure of dispersion.

4. It is of great importance when we are making comparisons between the variability of two series.

Demerits
1. It is difficult to calculate and understand.

2. It gives more weightage to extreme values as the deviation is squared.

3. It is not useful in economic studies.


Unit -2
Correlation and Regression

Correlation:

Correlation refers to any of a broad class of statistical relationships involving dependence.


Familiar examples of dependent phenomena include the correlation between the
physical statures of parents and their offspring, and the correlation between the demand for a
product and its price.

Correlations are useful because they can indicate a predictive relationship that can be
exploited in practice. For example, an electrical utility may produce less power on a mild day
based on the correlation between electricity demand and weather. In this example there is
a causal relationship, because extreme weather causes people to use more electricity for
heating or cooling; however, statistical dependence is not sufficient to demonstrate the
presence of such a causal relationship (i.e., correlation does not imply causation).

Formally, dependence refers to any situation in which random variables do not satisfy a
mathematical condition of probabilistic independence. In loose usage, correlation can refer to
any departure of two or more random variables from independence, but technically it refers to
any of several more specialized types of relationship between mean values. There are
several correlation coefficients, often denoted ρ or r, measuring the degree of correlation.
The most common of these is the Pearson correlation coefficient, which is sensitive only to a
linear relationship between two variables (which may exist even if one is a nonlinear function
of the other). Other correlation coefficients have been developed to be more robust than the
Pearson correlation – that is, more sensitive to nonlinear relationships. Mutual information can also be applied to measure dependence between two variables.

Karl Pearson coefficient of correlation:

The most familiar measure of dependence between two quantities is the Pearson product-
moment correlation coefficient, or "Pearson's correlation coefficient", commonly called
simply "the correlation coefficient". It is obtained by dividing the covariance of the two
variables by the product of their standard deviations. Karl Pearson developed the coefficient
from a similar but slightly different idea by Francis Galton.[4]


The population correlation coefficient ρX,Y between two random variables X and Y with expected values μX and μY and standard deviations σX and σY is defined as:

ρX,Y = corr(X, Y) = cov(X, Y) / (σX σY) = E[(X − μX)(Y − μY)] / (σX σY)

Where E is the expected value operator, cov means covariance, and corr is a widely used
alternative notation for the correlation coefficient.

The Pearson correlation is defined only if both of the standard deviations are finite and
nonzero. It is a corollary of the Cauchy–Schwarz inequality that the correlation cannot exceed
1 in absolute value. The correlation coefficient is symmetric: corr(X,Y) = corr(Y,X).

The Pearson correlation is +1 in the case of a perfect direct (increasing) linear relationship
(correlation), −1 in the case of a perfect decreasing (inverse) linear relationship
(anticorrelation),[5] and some value between −1 and 1 in all other cases, indicating the
degree of linear dependence between the variables. As it approaches zero there is less of a
relationship (closer to uncorrelated). The closer the coefficient is to either −1 or 1, the
stronger the correlation between the variables.

If the variables are independent, Pearson's correlation coefficient is 0, but the converse is not
true because the correlation coefficient detects only linear dependencies between two
variables. For example, suppose the random variable X is symmetrically distributed about
zero, and Y = X2. Then Y is completely determined by X, so that X and Y are perfectly
dependent, but their correlation is zero; they are uncorrelated. However, in the special case
when X and Y are jointly normal, uncorrelatedness is equivalent to independence.

If we have a series of n measurements of X and Y written as xi and yi where i = 1, 2, ..., n, then the sample correlation coefficient can be used to estimate the population Pearson correlation ρ between X and Y. The sample correlation coefficient is written

r = Σ (xi − x̄)(yi − ȳ) / √[ Σ (xi − x̄)² · Σ (yi − ȳ)² ]

where x̄ and ȳ are the sample means of X and Y.
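
A small Python sketch of the sample coefficient as written above; the paired observations are invented:

```python
import math

def pearson_r(x, y):
    # r = sum((xi - mx)(yi - my)) / sqrt(sum((xi - mx)^2) * sum((yi - my)^2))
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

x = [1, 2, 3, 4, 5]      # hypothetical paired observations
y = [2, 4, 5, 4, 5]
print(pearson_r(x, y))   # about 0.775
```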

Spearman’s rank correlation:


Rank correlation coefficients, such as Spearman's rank correlation coefficient and Kendall's rank correlation coefficient (τ), measure the extent to which, as one variable increases, the other variable tends to increase, without requiring that increase to be represented by a linear relationship. If, as the one variable increases, the other decreases, the rank correlation coefficients will be negative. It is common to regard these rank correlation coefficients as alternatives to Pearson's coefficient, used either to reduce the amount of calculation or to make the coefficient less sensitive to non-normality in distributions. However, this view has little mathematical basis, as rank correlation coefficients measure a different type of relationship than the Pearson product-moment correlation coefficient, and are best seen as measures of a different type of association, rather than as an alternative measure of the population correlation coefficient.

To illustrate the nature of rank correlation, and its difference from linear correlation, consider
the following four pairs of numbers (x, y):

(0, 1), (10, 100), (101, 500), (102, 2000).

As we go from each pair to the next pair x increases, and so does y. This relationship is
perfect, in the sense that an increase in x is always accompanied by an increase in y. This
means that we have a perfect rank correlation and both Spearman's and Kendall's correlation
coefficients are 1, whereas in this example Pearson product-moment correlation coefficient is
0.7544, indicating that the points are far from lying on a straight line. In the same way
if y always decreases when x increases, the rank correlation coefficients will be −1, while the
Pearson product-moment correlation coefficient may or may not be close to −1, depending on
how close the points are to a straight line. Although in the extreme cases of perfect rank
correlation the two coefficients are both equal (being both +1 or both −1) this is not in
general so, and values of the two coefficients cannot meaningfully be compared. [7] For
example, for the three pairs (1, 1) (2, 3) (3, 2) Spearman's coefficient is 1/2, while Kendall's
coefficient is 1/3.

Pvrank is a very recent R package that computes rank correlations and their p-values with various options for tied ranks. It is possible to compute exact Spearman coefficient test p-values for n ≤ 26 and exact Kendall coefficient test p-values for n ≤ 60.
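
A minimal Python sketch of Spearman's coefficient for untied ranks (the one-way-table case to which the syllabus restricts problems), using the familiar formula rho = 1 − 6 Σd² / (n(n² − 1)) on the four pairs from the text:

```python
def spearman_rho(x, y):
    # Valid for untied data: rank each series, then apply
    # rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
    n = len(x)
    rank = lambda v: [sorted(v).index(e) + 1 for e in v]   # 1-based ranks
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(rank(x), rank(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

x = [0, 10, 101, 102]        # the four pairs from the text
y = [1, 100, 500, 2000]
print(spearman_rho(x, y))    # 1.0: the ranks agree exactly
```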


Regression analysis:

Regression analysis is a statistical process for estimating the relationships among variables.
It includes many techniques for modelling and analysing several variables, when the focus is
on the relationship between a dependent variable and one or more independent variables (or
'predictors'). More specifically, regression analysis helps one understand how the typical
value of the dependent variable (or 'criterion variable') changes when any one of the
independent variables is varied, while the other independent variables are held fixed. Most
commonly, regression analysis estimates the conditional expectation of the dependent
variable given the independent variables – that is, the average value of the dependent variable
when the independent variables are fixed. Less commonly, the focus is on a quantile, or
other location parameter of the conditional distribution of the dependent variable given the
independent variables. In all cases, the estimation target is a function of the independent
variables called the regression function. In regression analysis, it is also of interest to
characterize the variation of the dependent variable around the regression function which can
be described by a probability distribution.
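
A short Python sketch of simple (one-predictor) least-squares regression, the case to which the syllabus restricts problems; the data are invented:

```python
x = [1, 2, 3, 4, 5]              # hypothetical values of the independent variable
y = [2.1, 3.9, 6.2, 7.8, 10.1]   # hypothetical values of the dependent variable

# Least-squares estimates for the line y = a + b*x.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
print(f"y = {a:.3f} + {b:.3f} x")   # the fitted regression function
print(a + b * 6)                    # estimated average value of y at x = 6
```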

Regression analysis is widely used for prediction and forecasting, where its use has
substantial overlap with the field of machine learning. Regression analysis is also used to
understand which among the independent variables are related to the dependent variable, and
to explore the forms of these relationships. In restricted circumstances, regression analysis
can be used to infer causal relationships between the independent and dependent variables.
However this can lead to illusions or false relationships, so caution is advisable; [1] for
example, correlation does not imply causation.

Many techniques for carrying out regression analysis have been developed. Familiar methods
such as linear regression and ordinary least squares regression are parametric, in that the
regression function is defined in terms of a finite number of unknown parameters that are
estimated from the data. Nonparametric regression refers to techniques that allow the
regression function to lie in a specified set of functions, which may be infinite-dimensional.

The performance of regression analysis methods in practice depends on the form of the data
generating process, and how it relates to the regression approach being used. Since the true
form of the data-generating process is generally not known, regression analysis often depends
to some extent on making assumptions about this process. These assumptions are sometimes
testable if a sufficient quantity of data is available. Regression models for prediction are often
useful even when the assumptions are moderately violated, although they may not perform
optimally. However, in many applications, especially with small effects or questions of
causality based on observational data, regression methods can give misleading results.


Unit -3

Probability Distribution

Probability:

Probability is the measure of the likelihood that an event will occur. Probability is quantified as a number between 0 and 1 (where 0 indicates impossibility and 1 indicates certainty). The higher the probability of an event, the more certain we are that the event will occur. A simple example is the toss of a fair (unbiased) coin. Since the two outcomes are equally probable, the probability of "heads" equals the probability of "tails", so there is a 1/2 (or 50%) chance of either "heads" or "tails".

These concepts have been given an axiomatic mathematical formalization in probability


theory (see probability axioms), which is used widely in such areas of
study as mathematics, statistics, finance, gambling, science (in particular physics), artificial
intelligence/machine learning, computer science, game theory, and philosophy to, for
example, draw inferences about the expected frequency of events. Probability theory is also used to describe the underlying mechanics and regularities of complex systems.

Probability theory is applied in everyday life in risk assessment and in trade on financial
markets. Governments apply probabilistic methods in environmental regulation, where it is
called pathway analysis. A good example is the effect of the perceived probability of any
widespread Middle East conflict on oil prices—which have ripple effects in the economy as a
whole. An assessment by a commodity trader that a war is more likely vs. less likely sends
prices up or down, and signals other traders of that opinion. Accordingly, the probabilities are
neither assessed independently nor necessarily very rationally. The theory of behavioural
finance emerged to describe the effect of such groupthink on pricing, on policy, and on peace
and conflict.

In addition to financial assessment, probability can be used to analyze trends in biology (e.g.
disease spread) as well as ecology (e.g. biological Punnett squares). As with finance, risk
assessment can be used as a statistical tool to calculate the likelihood of undesirable events
occurring and can assist with implementing protocols to avoid encountering such
circumstances.


The discovery of rigorous methods to assess and combine probability assessments has
changed society. It is important for most citizens to understand how probability assessments
are made, and how they contribute to decisions.

Another significant application of probability theory in everyday life is reliability. Many


consumer products, such as automobiles and consumer electronics, use reliability theory in
product design to reduce the probability of failure. Failure probability may influence a
manufacturer's decisions on a product's warranty.

The cache language model and other statistical language models that are used in natural
language processing are also examples of applications of probability theory.

Random Variable
A random variable x takes on a defined set of values with different probabilities.
For example, if you roll a die, the outcome is random (not fixed) and there are 6 possible outcomes, each of which occurs with probability one-sixth.
For example, if you poll people about their voting preferences, the percentage of the sample that responds "Yes on Proposition 100" is also a random variable (the percentage will be slightly different every time you poll).
Roughly, probability is how frequently we expect different outcomes to occur if we repeat the experiment over and over (the frequentist view).
Random variables can be discrete or continuous:

 Discrete random variables have a countable number of outcomes.

Examples: dead/alive, treatment/placebo, dice, counts, etc.
 Continuous random variables have an infinite continuum of possible values.
Examples: blood pressure, weight, the speed of a car, the real numbers from 1 to 6.

Probability functions
A probability function maps the possible values of x against their respective probabilities of occurrence, p(x).

 p(x) is a number from 0 to 1.0.

 The area under a probability function is always 1.
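
A tiny Python check of these two properties for the probability function of a fair die:

```python
# p(x) for a fair die: each of the six outcomes has probability 1/6.
p = {x: 1 / 6 for x in range(1, 7)}
assert all(0 <= v <= 1 for v in p.values())   # each p(x) is in [0, 1]
assert abs(sum(p.values()) - 1) < 1e-12       # the probabilities sum to 1
print(p[3])   # 0.1666...
```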


Independent events

If two events, A and B, are independent then the joint probability is

P(A and B) = P(A) P(B).

For example, if two coins are flipped the chance of both being heads is 1/2 × 1/2 = 1/4.

Mutually exclusive events

If either event A or event B occurs on a single performance of an experiment, this is called the union of the events A and B, denoted as A ∪ B. If two events are mutually exclusive then the probability of either occurring is

P(A or B) = P(A) + P(B).

For example, the chance of rolling a 1 or 2 on a six-sided die is 1/6 + 1/6 = 1/3.

Not mutually exclusive events

If the events are not mutually exclusive then

P(A or B) = P(A) + P(B) − P(A and B).

For example, when drawing a single card at random from a regular deck of cards, the chance of getting a heart or a face card (J, Q, K) (or one that is both) is 13/52 + 12/52 − 3/52 = 22/52, because of the 52 cards of a deck 13 are hearts, 12 are face cards, and 3 are both; the possibilities included in the "3 that are both" are included in each of the "13 hearts" and the "12 face cards" but should only be counted once.

Conditional probability

Conditional probability is the probability of some event A, given the occurrence of some other event B. Conditional probability is written P(A | B), and is read "the probability of A, given B". It is defined by

P(A | B) = P(A and B) / P(B).


If P(B) = 0 then P(A | B) is formally undefined by this expression. However, it is possible to define a conditional probability for some zero-probability events using a σ-algebra of such events (such as those arising from a continuous random variable).

For example, in a bag of 2 red balls and 2 blue balls (4 balls in total), the probability of taking a red ball is 1/2; however, when taking a second ball, the probability of its being either a red ball or a blue ball depends on the ball previously taken. If a red ball was taken, the probability of picking a red ball again would be 1/3, since only 1 red and 2 blue balls would have been remaining.
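
The ball example can be verified with exact fractions in Python:

```python
from fractions import Fraction

p_red_first = Fraction(2, 4)               # 2 of the 4 balls are red
p_red_then_red = Fraction(1, 3)            # P(red second | red first): 1 red of 3 left
p_both_red = p_red_first * p_red_then_red  # multiplication rule for dependent events
print(p_red_first, p_red_then_red, p_both_red)   # 1/2 1/3 1/6
```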

Inverse probability

In probability theory and applications, Bayes' rule relates the odds of event A1 to event A2, before (prior to) and after (posterior to) conditioning on another event B. The odds on A1 to event A2 is simply the ratio of the probabilities of the two events. When arbitrarily many events are of interest, not just two, the rule can be rephrased as posterior is proportional to prior times likelihood, where the proportionality symbol means that the left hand side is proportional to (i.e., equals a constant times) the right hand side as A1 varies, for fixed or given B.

Bayes' theorem

Describes the probability of an event, based on conditions that might be related to the event. For example, suppose one is interested in whether Addison has cancer. Furthermore, suppose that Addison is age 65. If cancer is related to age, information about Addison's age can be used to more accurately assess his or her chance of having cancer using Bayes' theorem.

When applied, the probabilities involved in Bayes' theorem may have different interpretations. In one of these interpretations, the theorem is used directly as part of a particular approach to statistical inference. In particular, with the Bayesian interpretation of probability, the theorem expresses how a subjective degree of belief should rationally change to account for evidence: this is Bayesian inference, which is fundamental to Bayesian statistics. However, Bayes' theorem has applications in a wide range of calculations involving probabilities, not just in Bayesian inference.

Bayes' theorem is stated mathematically as the following equation:[2]

P(A | B) = P(B | A) P(A) / P(B)

Where A and B are events and P(B) ≠ 0.

 P(A) and P(B) are the probabilities of A and B without regard to each other.
 P(A | B), a conditional probability, is the probability of A given that B is true.
 P(B | A) is the probability of B given that A is true.
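
A small numeric sketch of the theorem; the probabilities below are invented purely to make the cancer/age example concrete:

```python
# P(cancer | age 65) = P(age 65 | cancer) * P(cancer) / P(age 65)
p_cancer = 0.01              # hypothetical prior probability of cancer
p_age_given_cancer = 0.05    # hypothetical P(age 65 | cancer)
p_age = 0.02                 # hypothetical marginal P(age 65)

p_cancer_given_age = p_age_given_cancer * p_cancer / p_age
print(p_cancer_given_age)    # 0.025: the evidence raises the prior from 1% to 2.5%
```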

Binomial distribution

In probability theory and statistics, the binomial distribution with parameters n and p is
the discrete probability distribution of the number of successes in a sequence
of n independent yes/no experiments, each of which yields success with probability p. A
success/failure experiment is also called a Bernoulli experiment or Bernoulli trial; when n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance.

The binomial distribution is frequently used to model the number of successes in a sample of
size n drawn with replacement from a population of size N. If the sampling is carried out
without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution is a good approximation, and widely used.

In general, if the random variable X follows the binomial distribution with


parameters n and p, we write X ~ B(n, p). The probability of getting exactly k successes in n trials is given by the probability mass function:

P(X = k) = C(n, k) p^k (1 − p)^(n − k)

for k = 0, 1, 2, ..., n, where

C(n, k) = n! / (k! (n − k)!)

is the binomial coefficient, hence the name of the distribution. The formula can be understood as follows: we want exactly k successes (probability p^k) and n − k failures (probability (1 − p)^(n − k)); however, the k successes can occur anywhere among the n trials, and there are C(n, k) different ways of distributing k successes in a sequence of n trials.
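
A minimal Python sketch of this mass function, using math.comb (Python 3.8+):

```python
from math import comb

def binomial_pmf(k, n, p):
    # P(X = k) for X ~ B(n, p).
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

print(binomial_pmf(3, 5, 0.5))                          # P(3 heads in 5 fair tosses) = 0.3125
print(sum(binomial_pmf(k, 5, 0.5) for k in range(6)))   # the pmf sums to 1.0
```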

Poisson distribution:

The Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or space, if these events occur with a known average rate and independently of the time since the last event.[1] The Poisson distribution can also be used for the number of events in other specified intervals such as distance, area or volume.

For instance, an individual keeping track of the amount of mail they receive each day may
notice that they receive an average number of 4 letters per day. If receiving any particular
piece of mail doesn't affect the arrival times of future pieces of mail, i.e., if pieces of mail
from a wide range of sources arrive independently of one another, then a reasonable
assumption is that the number of pieces of mail received per day obeys a Poisson distribution.

Other examples that may follow a Poisson: the number of phone calls received by a call
centre per hour, the number of decay events per second from a radioactive source, or the
number of taxis passing a particular street corner per hour.

A discrete random variable X is said to have a Poisson distribution with parameter λ > 0 if, for k = 0, 1, 2, ..., the probability mass function of X is given by

P(X = k) = λ^k e^(−λ) / k!

Where

 e is Euler's number (e = 2.71828...)

 k! is the factorial of k.

The positive real number λ is equal to the expected value of X and also to its variance.

The Poisson distribution can be applied to systems with a large number of possible events, each of which is rare. How many such events will occur during a fixed time interval? Under the right circumstances, this is a random number with a Poisson distribution.
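
A short Python sketch of the mass function, applied to the mail example above (an average of 4 letters per day):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    # P(X = k) = lam**k * e**(-lam) / k!
    return lam ** k * exp(-lam) / factorial(k)

print(poisson_pmf(2, 4))                          # P(exactly 2 letters) ~ 0.1465
print(sum(poisson_pmf(k, 4) for k in range(3)))   # P(at most 2 letters)
```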


Normal distribution:

In probability theory, the normal (or Gaussian) distribution is a very common continuous
probability distribution. Normal distributions are important in statistics and are often used in
the natural and social sciences to represent real-valued random variables whose distributions
are not known.

The normal distribution is remarkably useful because of the central limit theorem. In its most general form, under mild conditions, it states that averages of random variables independently drawn from independent distributions are normally distributed. Physical quantities that are expected to be the sum of many independent processes (such as measurement errors) often have distributions that are nearly normal.[3] Moreover, many results and methods (such as propagation of uncertainty and least squares parameter fitting) can be derived analytically in explicit form when the relevant variables are normally distributed.

The normal distribution is sometimes informally called the bell curve. However, many other
distributions are bell-shaped (such as Cauchy's, Student's, and logistic). The terms Gaussian
function and Gaussian bell curve are also ambiguous because they sometimes refer to
multiples of the normal distribution that cannot be directly interpreted in terms of
probabilities.

The probability density of the normal distribution is:

f(x) = (1 / (σ √(2π))) e^(−(x − μ)² / (2σ²))

Here, μ is the mean or expectation of the distribution (and also its median and mode). The parameter σ is its standard deviation, and σ² is its variance. A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate.

If μ = 0 and σ = 1, the distribution is called the standard normal distribution or the unit normal distribution, denoted by N(0, 1), and a random variable with that distribution is a standard normal deviate.
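
A minimal Python sketch of the density above; the N(70, 5²) parameters in the second call are invented for illustration:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu=0.0, sigma=1.0):
    # f(x) = exp(-(x - mu)^2 / (2 sigma^2)) / (sigma * sqrt(2 pi))
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

print(normal_pdf(0))                    # standard normal density at the mean, ~0.3989
print(normal_pdf(75, mu=70, sigma=5))   # density of a hypothetical N(70, 25) at x = 75
```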

The normal distribution is the only absolutely continuous distribution whose cumulants beyond the first two (i.e., other than the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for a specified mean and variance.[4][5]


The normal distribution is a subclass of the elliptical distributions. The normal distribution
is symmetric about its mean, and is non-zero over the entire real line. As such it may not be a
suitable model for variables that are inherently positive or strongly skewed, such as
the weight of a person or the price of a share. Such variables may be better described by other
distributions, such as the log-normal distribution or the Pareto distribution.

The value of the normal distribution is practically zero when the value x lies more than a
few standard deviations away from the mean. Therefore, it may not be an appropriate model
when one expects a significant fraction of outliers — values that lie many standard deviations
away from the mean — and least squares and other inference methods that are optimal for
normally distributed variables often become highly unreliable when applied to such data. In
those cases, a more heavy-tailed distribution should be assumed and the appropriate inference
methods applied.


Unit -4

Decision theory

Decision theory:

Decision theory is a body of knowledge and related analytical techniques of different degrees
of formality designed to help a decision maker choose among a set of alternatives in light of
their possible consequences. Decision theory can apply to conditions of certainty, risk, and
uncertainty

Steps in decision making process:


Decision making environment:

1. Certainty:

In this type of decision making environment, there is only one type of event that can take place. It is very difficult to find complete certainty in most business decisions. However, in many routine types of decisions, almost complete certainty can be noticed. These decisions, generally, are of very little significance to the success of the business.

2. Uncertainty:

In the environment of uncertainty, more than one type of event can take place and the decision maker is completely in the dark regarding which event is likely to take place. The decision maker is not in a position even to assign probabilities to the happening of the events.

Such situations generally arise in cases where happening of the event is determined by
external factors. For example, demand for the product, moves of competitors, etc. are the
factors that involve uncertainty.

3. Risk:

Under the condition of risk, there is more than one possible event that can take place. However, the decision maker has adequate information to assign a probability to the happening or non-happening of each possible event. Such information is generally based on past experience.

Decision Making Under Uncertainty

In decision making under pure uncertainty, the decision-maker has no knowledge regarding any of the states of nature outcomes, and/or it is costly to obtain the needed information. In such cases, the decision making depends merely on the decision-maker's personality type.

Personality Types and Decision Making:

The examples that follow use this payoff matrix (actions: B = Bonds, S = Stocks, D = Deposit; states of nature: G, MG, NC, L), whose entries are implied by the computations below:

        G   MG   NC    L
B      12    8    7    3
S      15    9    5   -2
D       7    7    7    7

1. Pessimism, or Conservative (MaxMin).

Worst case scenario: bad things always happen to me.

a) Write the minimum payoff in each action row,
b) Choose the maximum of these and take that action.

   Row minimum
B       3
S      -2
D       7 *

2. Optimism, or Aggressive (MaxMax).

Good things always happen to me.

a) Write the maximum payoff in each action row,
b) Choose the maximum of these and take that action.

   Row maximum
B      12
S      15 *
D       7

3. Coefficient of Optimism (Hurwicz's Index).

Middle of the road: I am neither too optimistic nor too pessimistic.

a) Choose an α between 0 and 1; α = 1 means fully optimistic and α = 0 fully pessimistic,

b) Find the largest and smallest payoff for each action,

c) Multiply the largest payoff (row-wise) by α and the smallest by (1 − α),

d) Pick the action with the largest sum.

For example, for α = 0.7 we have

B (0.7 × 12) + (0.3 × 3) = 9.3

S (0.7 × 15) + (0.3 × (−2)) = 9.9 *

D (0.7 × 7) + (0.3 × 7) = 7

4. Minimize Regret (Savage's Opportunity Loss)

I hate regrets and therefore I have to minimize my regrets. My decision should be made so
that it is worth repeating. I should only do those things that I feel I could happily repeat. This
reduces the chance that the outcome will make me feel regretful, or disappointed, or that it
will be an unpleasant surprise.


Regret is the payoff on what would have been the best decision in the circumstances minus the payoff for the actual decision in the circumstances. Therefore, the first step is to set up the regret table:

a) Take the largest number in each state of nature column (say, L).

b) Subtract all the numbers in that state of nature column from it (i.e. the regret is L - Xi,j).

c) Take the maximum regret in each action row.

d) Choose the minimum number from step (c) and take that action.

The Regret Matrix

          G        MG      NC      L         Max regret
Bonds   (15-12)   (9-8)   (7-7)   (7-3)          4 *
Stocks  (15-15)   (9-9)   (7-5)   (7-(-2))       9
Deposit (15-7)    (9-7)   (7-7)   (7-7)          8
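
The four criteria can be reproduced with a short Python sketch over the payoff matrix above:

```python
# Payoff matrix from the text: rows are actions, columns are states G, MG, NC, L.
payoffs = {"B": [12, 8, 7, 3], "S": [15, 9, 5, -2], "D": [7, 7, 7, 7]}

maximin = max(payoffs, key=lambda a: min(payoffs[a]))   # pessimist: best worst case
maximax = max(payoffs, key=lambda a: max(payoffs[a]))   # optimist: best best case

alpha = 0.7   # coefficient of optimism for the Hurwicz index
hurwicz = max(payoffs,
              key=lambda a: alpha * max(payoffs[a]) + (1 - alpha) * min(payoffs[a]))

col_max = [max(col) for col in zip(*payoffs.values())]  # best payoff per state
regret = {a: max(m - x for m, x in zip(col_max, payoffs[a])) for a in payoffs}
minimax_regret = min(regret, key=regret.get)            # smallest maximum regret

print(maximin, maximax, hurwicz, minimax_regret)   # D S S B, as in the text
```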

Decision Making Under Risk

Risk implies a degree of uncertainty and an inability to fully control the outcomes or consequences of an action. Eliminating or reducing risk is an effort that managers employ. However, in some instances the elimination of one risk may increase some other risks. Effective handling of a risk requires its assessment and an evaluation of its subsequent impact on the decision process. The decision process allows the decision-maker to evaluate alternative strategies prior to making any decision. The process is as follows:

Whenever the decision maker has some knowledge regarding the states of nature, he/she may
be able to assign subjective probability estimates for the occurrence of each state. In such
cases, the problem is classified as decision making under risk. The decision-maker is able to
assign probabilities based on the occurrence of the states of nature. The decision making
under risk process is as follows:

a) Use the information you have to assign your beliefs (called subjective probabilities)
regarding each state of the nature, p(s),

b) Each action has a payoff associated with each of the states of nature X(a,s),


c) We compute the expected payoff, also called the return (R), for each action: R(a) = Σ [X(a, s) × p(s)],

d) We accept the principle that we should minimize (or maximize) the expected payoff,

e) Execute the action which minimizes (or maximizes) R(a).

Expected monetary value (EMV)

The actual outcome will not equal the expected value. What you get is not what you expect,
i.e. the "Great Expectations!"

a) For each action, multiply the probability and payoff and then,

b) Add up the results by row,

c) Choose largest number and take that action

G (0.4) MG (0.3) NC (0.2) L (0.1) Exp. Value

B 0.4(12) + 0.3(8) + 0.2(7) + 0.1(3) = 8.9

S 0.4(15) + 0.3(9) + 0.2(5) + 0.1(-2) = 9.5*

D 0.4(7) + 0.3(7) + 0.2(7) + 0.1(7) = 7

The Most Probable States of Nature (good for non-repetitive decisions)

Expected Opportunity Loss (EOL):

a) Set up a loss payoff matrix by taking the largest number in each state of nature column (say L) and subtracting all numbers in that column from it: L - Xij,

b) For each action, multiply the probability and loss then add up for each action,

c) Choose the action with smallest EOL.

Loss Payoff Matrix

    G (0.4)      MG (0.3)    NC (0.2)    L (0.1)          EOL

B   0.4(15-12) + 0.3(9-8)  + 0.2(7-7) + 0.1(7-3)     =    1.9
S   0.4(15-15) + 0.3(9-9)  + 0.2(7-5) + 0.1(7-(-2))  =    1.3 *
D   0.4(15-7)  + 0.3(9-7)  + 0.2(7-7) + 0.1(7-7)     =    3.8

Expected Value of Perfect Information (EVPI)

EVPI helps to determine the worth of an insider who possesses perfect information. Recall that EVPI = minimum EOL.

a) Take the maximum payoff for each state of nature,

b) Multiply each case by the probability for that state of nature and then add them up,

c) Subtract the expected payoff from the number obtained in step (b)

G     15(0.4) = 6.0

MG     9(0.3) = 2.7

NC     7(0.2) = 1.4

L      7(0.1) = 0.7

Total        = 10.8

Therefore, EVPI = 10.8 - Expected Payoff = 10.8 - 9.5 = 1.3. Verify that minimum EOL = EVPI.

The efficiency of the perfect information is defined as 100 × [EVPI / (Expected Payoff)].
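
A Python sketch that reproduces the EMV, EOL and EVPI figures above:

```python
payoffs = {"B": [12, 8, 7, 3], "S": [15, 9, 5, -2], "D": [7, 7, 7, 7]}
probs = [0.4, 0.3, 0.2, 0.1]   # P(G), P(MG), P(NC), P(L)

# Expected monetary value of each action; choose the largest.
emv = {a: sum(p * x for p, x in zip(probs, payoffs[a])) for a in payoffs}
best = max(emv, key=emv.get)                               # S, with EMV 9.5

# Expected opportunity loss: the expected regret of each action.
col_max = [max(col) for col in zip(*payoffs.values())]
eol = {a: sum(p * (m - x) for p, m, x in zip(probs, col_max, payoffs[a]))
       for a in payoffs}

# EVPI: expected payoff under perfect information minus the best EMV.
exp_perfect = sum(p * m for p, m in zip(probs, col_max))   # 10.8
evpi = exp_perfect - emv[best]                             # 1.3, equal to min EOL

print(emv, best)
print(eol)
print(exp_perfect, evpi)
```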

Decision Tree Approach:

A decision tree is a chronological representation of the decision process. It utilizes a network


of two types of nodes: decision (choice) nodes (represented by square shapes), and states of
nature (chance) nodes (represented by circles). Construct a decision tree utilizing the logic of
the problem. For the chance nodes, ensure that the probabilities along any outgoing branch
sum to one. Calculate the expected payoffs by rolling the tree backward (i.e., starting at the
right and working toward the left).


You may imagine driving your car; starting at the foot of the decision tree and moving to the
right along the branches. At each square you have control, to make a decision and then turn
the wheel of your car. At each circle, Lady Fortuna takes over the wheel and you are
powerless.

Here is a step-by-step description of how to build a decision tree (a small computational sketch follows the steps):

1. Draw the decision tree using squares to represent decisions and circles to represent
uncertainty,
2. Evaluate the decision tree to make sure all possible outcomes are included,
3. Calculate the tree values working from the right side back to the left,
4. Calculate the values of uncertain outcome nodes by multiplying the value of the
outcomes by their probability (i.e., expected values).
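
A minimal Python sketch of the rollback procedure; the tree and its payoffs are invented:

```python
# A node is ("payoff", value), ("chance", [(prob, child), ...]),
# or ("decision", [child, ...]).
def rollback(node):
    kind = node[0]
    if kind == "payoff":
        return node[1]
    if kind == "chance":     # expected value over the outgoing branches
        return sum(p * rollback(child) for p, child in node[1])
    if kind == "decision":   # best branch at a choice node
        return max(rollback(child) for child in node[1])

tree = ("decision", [
    ("chance", [(0.6, ("payoff", 100)), (0.4, ("payoff", -20))]),   # risky option
    ("payoff", 40),                                                 # safe option
])
print(rollback(tree))   # 52.0: the risky branch (0.6*100 + 0.4*(-20)) beats 40
```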

A Typical Decision Tree


Unit– 5

Linear Programming

Linear programming:
Linear Programming is a mathematical technique for the optimum allocation of limited or scarce resources, such as labour, material, machines, money, energy and so on, to several competing activities such as products, services, jobs and so on, on the basis of a given criterion of optimality.
The term 'Linear' is used to describe the proportionate relationship of two or more variables in a model: a given change in one variable will always cause a proportional change in another variable.
The word 'Programming' is used to specify a sort of planning that involves the economic allocation of limited resources by adopting a particular course of action or strategy among various alternative strategies to achieve the desired objective.

Structure of linear program model:


The general structure of the Linear Programming model essentially consists of three components:
i) The activities (variables) and their relationships
ii) The objective function and
iii) The constraints
The activities are represented by X1, X2, X3, ..., Xn.
These are known as decision variables.

The objective function of an LPP (Linear Programming Problem) is a mathematical representation of the objective in terms of a measurable quantity such as profit, cost, revenue, etc.:
Optimize (Maximize or Minimize) Z = C1X1 + C2X2 + ... + CnXn
where Z is the measure-of-performance variable, X1, X2, X3, ..., Xn are the decision variables, and C1, C2, ..., Cn are the parameters that give the contribution of each decision variable.


The constraints: These are the set of linear inequalities and/or equalities which impose restrictions on the use of the limited resources.

Advantages & Limitations of Linear Programming

Advantages of Linear Programming. Following are some of the advantages of the Linear Programming approach:

1. Scientific Approach to Problem Solving. Linear Programming is the application of the scientific approach to problem solving. Hence it results in a better and truer picture of the problems, which can then be minutely analyzed and solutions ascertained.
2. Evaluation of All Possible Alternatives. Most of the problems faced by present-day organisations are highly complicated and cannot be solved by the traditional approach to decision making. The technique of Linear Programming ensures that all possible solutions are generated, out of which the optimal solution can be selected.
3. Helps in Re-Evaluation. Linear Programming can also be used in the re-evaluation of a basic plan for changing conditions. Should the conditions change while the plan is only partially carried out, they can be accurately determined with the help of Linear Programming so as to adjust the remainder of the plan for best results.
4. Quality of Decision. Linear Programming provides practical and better-quality decisions that reflect very precisely the limitations of the system, i.e., the various restrictions under which the system must operate for the solution to be optimal. If it becomes necessary to deviate from the optimal path, Linear Programming can quite easily evaluate the associated costs or penalty.
5. Focus on Grey-Areas. Highlighting of grey areas or bottlenecks in the production process is the most significant merit of Linear Programming. During the periods of bottlenecks, imbalances occur in the production department: some of the machines remain idle for long periods of time, while other machines are unable to meet the demand even at peak performance.
6. Flexibility. Linear Programming is an adaptive & flexible mathematical technique and
hence can be utilized in analyzing a variety of multi-dimensional problems quite successfully.

7. Creation of Information Base. By evaluating the various possible alternatives in the light of the prevailing constraints, Linear Programming models provide an important database from which the allocation of precious resources can be done rationally and judiciously.


8. Optimal Utilization of Factors of Production. Linear Programming helps in the optimal utilization of various existing factors of production such as installed capacity, labour and raw materials.

Limitations of Linear Programming.


Although Linear Programming is a highly successful technique having wide applications in business and trade for solving optimization problems, it has certain demerits or defects. Some of the important limitations in the application of Linear Programming are as follows:

1. Linear Relationship.
Linear Programming models can be successfully applied only in those situations where a given problem can clearly be represented in the form of a linear relationship between different decision variables. Hence it is based on the implicit assumption that the objective as well as all the constraints or limiting factors can be stated in terms of linear expressions, which may not always hold good in real-life situations. In practical business problems, many objective functions and constraints cannot be expressed in this linear form.

2. Constant Value of Objective & Constraint Equations.

Before a Linear Programming technique can be applied to a given situation, the values or coefficients of the objective function as well as the constraint equations must be completely known. Further, Linear Programming assumes these values to be constant over a period of time. In other words, if the values were to change during the period of study, the technique of LP would lose its effectiveness and may fail to provide optimal solutions to the problem.
However, in real-life practical situations it is often not possible to determine the coefficients of the objective function and the constraint equations with absolute certainty. These variables may in fact lie on a probability distribution curve and hence, at best, only the likelihood of their occurrence can be predicted. Moreover, the values often change due to external as well as internal factors during the period of study. Due to this, the actual applicability of Linear Programming tools may be restricted.


3. No Scope for Fractional Value Solutions.

There is absolutely no certainty that the solution to an LP problem can always be quantified as an integer. Quite often, Linear Programming gives fractional answers, which are then rounded off to the nearest integer; the rounded solution need not be the optimal one. For example, in finding out the number of men and machines required to perform a particular job, a fractional non-integer solution would be meaningless.

4. Degree of Complexity.
Many large-scale real-life practical problems cannot be solved by employing Linear Programming techniques even with the help of a computer due to highly complex and lengthy calculations. Assumptions and approximations are required so that the given problem can be broken down into several smaller problems and then solved separately. Hence, the validity of the final result, in all such cases, may be doubtful.

5. Multiplicity of Goals.
The long-term objectives of an organisation are not confined to a single goal. An
organisation, at any point of time in its operations has a multiplicity of goals or the goals
hierarchy - all of which must be attained on a priority wise basis for its long term growth.
Some of the common goals can be Profit maximization or cost minimization, retaining
market share, maintaining leadership position and providing quality service to the consumers.
In cases where the management has conflicting, multiple goals, the Linear Programming model fails to provide an optimal solution. The reason is that under Linear Programming techniques there is only one goal, which is expressed in the objective function. Hence, in such circumstances, the given problem has to be solved with the help of a different mathematical programming technique called "Goal Programming".
6. Lack of Flexibility.
Once a problem has been properly quantified in terms of objective function and the constraint
equations and the tools of Linear Programming are applied to it, it becomes very difficult to
incorporate any changes in the system arising on account of any change in the decision
parameter. Hence, it lacks the desired operational flexibility.

Guidelines for formulation of linear programming model:


i) Identify and define the decision variables of the problem
ii) Define the objective function


iii) State the constraints subject to which the objective function is to be optimized (i.e., maximized or minimized)
iv) Add the non-negativity constraints, since negative values of the decision variables do not have any valid physical interpretation.
A hypothetical illustration of these four steps is sketched below.
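Consider a small product-mix problem (all figures invented for illustration only): a firm makes two products earning Rs. 3 and Rs. 5 per unit, subject to two scarce resources.

Step i (decision variables): x1, x2 = units of products 1 and 2 to produce
Step ii (objective function): Maximize Z = 3x1 + 5x2
Step iii (constraints): 2x1 + 3x2 <= 12 (machine hours)
                        x1 + 2x2 <= 10 (labour hours)
Step iv (non-negativity): x1 >= 0, x2 >= 0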

Duality in Linear Programming


Every LPP (called the primal) is associated with another LPP (called its dual). The original problem is called the primal problem, while the other is called its dual problem. The importance of the duality concept is due to two main reasons:
1. If the primal contains a large number of constraints and a small number of variables, the labour of computation can be considerably reduced by converting it into the dual problem and then solving it.
2. The interpretation of the dual variables from the cost or economic point of view proves extremely useful in making future decisions about the activities being programmed.
There is a symmetrical relationship between the primal and dual problems, illustrated below with hypothetical coefficients.
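Primal: Maximize Z = 3x1 + 5x2
        subject to 2x1 + 3x2 <= 12
                   x1 + 2x2 <= 10
                   x1, x2 >= 0

Dual:   Minimize W = 12y1 + 10y2
        subject to 2y1 + y2 >= 3
                   3y1 + 2y2 >= 5
                   y1, y2 >= 0

Each primal constraint gives rise to one dual variable, the primal objective coefficients become the dual constraint constants, and the optimal objective values of the two problems coincide.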

Transportation problem

The transportation problem is one of the subclasses of linear programming problems, in which the objective is to transport various quantities of a single homogeneous product, initially stored at various origins, to different destinations in such a way that the total transportation cost is minimum.
F.L. Hitchcock formulated the basic transportation problem in 1941. However, it could be solved optimally as an answer to complex business problems only in 1951, when George B. Dantzig applied the concept of Linear Programming to solving transportation models.

Methods of finding initial basic feasible solution


An initial basic feasible solution to a transportation problem can be found by any of the following methods:
i. North West Corner Rule (NWCR)
ii. Least cost Method (LCM)
iii. Vogel Approximation Method (VAM)

1. North-West corner method (NWCM)


The North-West Corner Rule is a method for computing a basic feasible solution of a transportation problem in which the basic variables are selected from the north-west (i.e., top-left) corner of the table.


Steps
1. Select the north west (upper left-hand) corner cell of the transportation table and allocate as
many units as possible equal to the minimum between available supply and demand
requirements, i.e., min (s1, d1).

2. Adjust the supply and demand numbers in the respective rows and columns after each allocation.
3. If the supply for the first row is exhausted then move down to the first cell in the second
row.
4. If the demand for the first cell is satisfied then move horizontally to the next cell in the
second column.
5. If for any cell supply equals demand, then the next allocation can be made in a cell in either the next row or the next column.
6. Continue the procedure until the total available quantity is fully allocated to the cells as required. (A minimal code sketch of this rule follows.)
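A minimal Python sketch of these six steps (the supply and demand figures are hypothetical; NWCM ignores the cost matrix when building the initial solution):

```python
def north_west_corner(supply, demand):
    supply, demand = supply[:], demand[:]          # work on copies
    allocation = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        qty = min(supply[i], demand[j])            # step 1: min(si, dj)
        allocation[i][j] = qty                     # step 2: adjust totals
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:                         # step 3: row exhausted, move down
            i += 1
        else:                                      # step 4: column satisfied, move right
            j += 1
    return allocation

# Hypothetical balanced problem: 3 origins, 4 destinations.
print(north_west_corner([250, 300, 400], [200, 225, 275, 250]))
```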

2. Least cost Method (LCM)


The least cost method, also called the matrix minimum method, computes a basic feasible solution of a transportation problem in which the basic variables are chosen according to the unit cost of transportation.
Steps
1. Identify the box having minimum unit transportation cost (cij).
2. If there are two or more cells with the minimum cost, select the one in the lower-numbered row.
3. If they appear in the same row, select the one in the lower-numbered column.
4. Choose the value of the corresponding xij as much as possible subject to the capacity and
requirement constraints.
5. If demand is satisfied, delete the column.
6. If supply is exhausted, delete the row.
7. Repeat steps 1-6 until all restrictions are satisfied. (A corresponding code sketch follows.)
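A sketch of these steps with a hypothetical unit-cost matrix; ties are broken toward the lower-numbered row, then column, as steps 2-3 prescribe:

```python
def least_cost_method(cost, supply, demand):
    supply, demand = supply[:], demand[:]
    m, n = len(supply), len(demand)
    allocation = [[0] * n for _ in range(m)]
    rows, cols = set(range(m)), set(range(n))
    while rows and cols:
        # steps 1-3: cheapest remaining cell, ties to lower row then column
        i, j = min(((r, c) for r in rows for c in cols),
                   key=lambda rc: (cost[rc[0]][rc[1]], rc[0], rc[1]))
        qty = min(supply[i], demand[j])            # step 4
        allocation[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if demand[j] == 0:                         # step 5: delete the column
            cols.discard(j)
        if supply[i] == 0:                         # step 6: delete the row
            rows.discard(i)
    return allocation

cost = [[4, 6, 8, 8],
        [6, 8, 6, 7],
        [5, 7, 6, 8]]
print(least_cost_method(cost, [250, 300, 400], [200, 225, 275, 250]))
```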

3. Vogel’s Approximation Method (VAM)


The Vogel approximation method is an iterative procedure for computing a basic feasible
solution of the transportation problem.
Steps


1. Identify the boxes having minimum and next to minimum transportation cost in each row
and write the difference (penalty) along the side of the table against the corresponding row.
2. Identify the boxes having minimum and next to minimum transportation cost in each
column and write the difference (penalty) against the corresponding column
3. Identify the maximum penalty. If it is along the side of the table, make maximum allotment
to the box having minimum cost of transportation in that row. If it is below the table, make
maximum allotment to the box having minimum cost of transportation in that column.
4. If the penalties corresponding to two or more rows or columns are equal, select the topmost row or the extreme-left column.
5. Cross out the satisfied row or column, recompute the penalties, and repeat until all supplies and demands are exhausted. (A compact code sketch of this procedure follows.)
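A compact sketch of the penalty logic (same hypothetical data as before; the helper names are our own):

```python
def vogel(cost, supply, demand):
    supply, demand = supply[:], demand[:]
    allocation = [[0] * len(demand) for _ in supply]
    rows, cols = set(range(len(supply))), set(range(len(demand)))

    def penalty(values):                 # difference of the two smallest costs
        s = sorted(values)
        return s[1] - s[0] if len(s) > 1 else s[0]

    while rows and cols:
        row_pen = {i: penalty([cost[i][j] for j in cols]) for i in rows}
        col_pen = {j: penalty([cost[i][j] for i in rows]) for j in cols}
        r = max(row_pen, key=row_pen.get)
        c = max(col_pen, key=col_pen.get)
        if row_pen[r] >= col_pen[c]:     # step 3: largest penalty wins
            i, j = r, min(cols, key=lambda jj: cost[r][jj])
        else:
            i, j = min(rows, key=lambda ii: cost[ii][c]), c
        qty = min(supply[i], demand[j])  # allot as much as possible
        allocation[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:               # step 5: cross out the line
            rows.discard(i)
        if demand[j] == 0:
            cols.discard(j)
    return allocation

cost = [[4, 6, 8, 8],
        [6, 8, 6, 7],
        [5, 7, 6, 8]]
print(vogel(cost, [250, 300, 400], [200, 225, 275, 250]))
```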

Test for Optimality


Once the initial feasible solution is reached, the next step is to check its optimality. An optimal solution is one where there is no other set of transportation routes (allocations) that will further reduce the total transportation cost. Thus, we have to evaluate each unoccupied cell (representing an unused route) in the transportation table in terms of the opportunity it offers for reducing the total transportation cost.

Modified Distribution Method (MODI)


It is a method for computing the optimum solution of a transportation problem.
STEPS
Step 1
Determine an initial basic feasible solution using any one of the three methods given below:
• North West Corner Rule
• Matrix Minimum Method
• Vogel Approximation Method
Step 2
Determine the values of dual variables, ui and vj, using ui + vj = cij
Step 3
Compute the opportunity cost using cij – ( ui + vj ).
Step 4
Check the sign of each opportunity cost. If the opportunity costs of all the unoccupied cells
are either positive or zero, the given solution is the optimum solution. On the other hand, if
one or more unoccupied cell has negative opportunity cost, the given solution is not an
optimum solution and further savings in transportation cost are possible.
Step 5

Select the unoccupied cell with the smallest negative opportunity cost as the cell to be
included in the next solution.

Step 6
Draw a closed path or loop for the unoccupied cell selected in the previous step. Note that right-angle turns in this path are permitted only at occupied cells and at the original unoccupied cell.
Step 7
Assign alternate plus and minus signs at the unoccupied cells on the corner points of the
closed path with a plus sign at the cell being evaluated.
Step 8
Determine the maximum number of units that should be shipped to this unoccupied cell. The smallest allocation among the cells marked with a minus sign on the closed path indicates the number of units that can be shipped to the entering cell. Add this quantity to all the cells on the corner points of the closed path marked with plus signs and subtract it from those marked with minus signs. In this way an unoccupied cell becomes an occupied cell.
Step 9
Repeat the whole procedure until an optimum solution is obtained. (A sketch of the dual-value computation in Steps 2-4 is given below.)
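Steps 2-3 amount to solving ui + vj = cij over the occupied cells (one dual value may be fixed freely) and then pricing the empty cells. A minimal sketch, assuming a non-degenerate initial solution with exactly m + n – 1 occupied cells (the cost matrix and NWCM allocation below are hypothetical):

```python
def modi_prices(cost, allocation):
    m, n = len(cost), len(cost[0])
    u, v = [None] * m, [None] * n
    u[0] = 0                                   # fix one dual value freely
    occupied = [(i, j) for i in range(m) for j in range(n)
                if allocation[i][j] > 0]
    while any(x is None for x in u + v):       # step 2: solve ui + vj = cij
        for i, j in occupied:
            if u[i] is not None and v[j] is None:
                v[j] = cost[i][j] - u[i]
            elif v[j] is not None and u[i] is None:
                u[i] = cost[i][j] - v[j]
    # step 3: opportunity cost cij - (ui + vj) of every unoccupied cell
    return {(i, j): cost[i][j] - (u[i] + v[j])
            for i in range(m) for j in range(n) if allocation[i][j] == 0}

cost = [[4, 6, 8, 8], [6, 8, 6, 7], [5, 7, 6, 8]]
nwcm = [[200, 50, 0, 0], [0, 175, 125, 0], [0, 0, 150, 250]]
print(modi_prices(cost, nwcm))   # any negative value means not yet optimal
```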
Degeneracy:
In a transportation problem, degeneracy occurs when the number of allocations is less than (m + n – 1), where
m = number of rows
n = number of columns
This is also called the rim condition. If the rim condition is satisfied, the solution is not degenerate; but if the number of allocations is less than (m + n – 1), the solution is degenerate. To remove degeneracy, we introduce epsilon (Є), an imaginary allocation almost equal to zero.

Assignment problem
Introduction:
In the world of trade, business organizations confront the conflicting need for optimal utilization of their limited resources among competing activities. When the information available on resources and the relationships between variables is known, LP can be used very reliably, and the course of action chosen will invariably lead to optimal or nearly optimal results. One problem which gained much importance under LP is the assignment problem, discussed below.


The assignment problem is a special case of the transportation problem in which the objective is to assign a number of origins to an equal number of destinations at minimum cost (or maximum profit). It involves the assignment of people to projects, jobs to machines, workers to jobs, teachers to classes, etc., while minimizing the total assignment cost. One of the important characteristics of the assignment problem is that only one job (or worker) is assigned to one machine (or project). Hence the number of sources equals the number of destinations, and each requirement and capacity value is exactly one unit.

Hungarian method
Step 1. Determine the cost table from the given problem.
(i) If the no. of sources is equal to no. of destinations, go to step 3.
(ii) If the no. of sources is not equal to the no. of destination, go to step2.
Step 2. Add a dummy source or dummy destination, so that the cost table becomes a square
matrix. The cost entries of the dummy source/destinations are always zero.
Step 3. Locate the smallest element in each row of the given cost matrix and then subtract the
same from each element of the row.
Step 4. In the reduced matrix obtained in the step 3, locate the smallest element of each
column and then subtract the same from each element of that column. Each row and column now has at least one zero.
Step 5. In the modified matrix obtained in step 4, search for an optimal assignment as follows:
(a) Examine the rows successively until a row with exactly one zero is found. Mark this zero by enclosing it in a square (□) and cross off (X) all other zeros in its column. Continue in this manner until all the rows have been taken care of.
(b) Repeat the procedure for each column of the reduced matrix.
(c) If a row and/or column has two or more zeros and one cannot be chosen by inspection, then choose any one of these zeros arbitrarily and cross off all other zeros of that row/column.
(d) Repeat (a) through (c) successively until the chain of assignments (□) and crosses (X) ends.
Step 6. If the number of assignments (□) is equal to n (the order of the cost matrix), an optimum solution is reached.
If the number of assignments is less than n (the order of the matrix), go to the next step.
Step7. Draw the minimum number of horizontal and/or vertical lines to cover all the zeros
of the reduced matrix.
Step 8. Develop the new revised cost matrix as follows:


(a) Find the smallest element of the reduced matrix not covered by any of the lines.
(b) Subtract this element from all uncovered elements and add the same to all the elements lying at the intersection of any two lines.

Step 9. Go to step 6 and repeat the procedure until an optimum solution is attained.

Minimization and Maximization case in Assignment Problem


Some assignment problems entail maximizing the profit, effectiveness, or payoff of an assignment of persons to tasks or of jobs to machines. The Hungarian method can also solve such problems, as it is easy to obtain an equivalent minimization problem by converting every number in the matrix to an opportunity loss. The conversion is accomplished by subtracting all the elements of the given effectiveness matrix from its highest element. Minimizing the opportunity loss produces the same assignment solution as the original maximization problem, as the sketch below illustrates.
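As a hedged illustration of this conversion (the profit figures are invented), SciPy's linear_sum_assignment solves the same minimization problem that the Hungarian method addresses:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical effectiveness matrix: profit[i][j] if person i does task j.
profit = np.array([[40, 47, 50],
                   [32, 55, 52],
                   [31, 42, 49]])

# Opportunity-loss conversion: subtract every element from the highest one.
loss = profit.max() - profit
rows, cols = linear_sum_assignment(loss)            # minimize total loss
print(list(zip(rows, cols)), profit[rows, cols].sum())

# Maximizing directly yields the same assignment and total profit (144 here).
rows2, cols2 = linear_sum_assignment(profit, maximize=True)
print(list(zip(rows2, cols2)), profit[rows2, cols2].sum())
```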


Unit-6

Project Management

Project management is the process and activity of planning, organizing, motivating, and
controlling resources, procedures and protocols to achieve specific goals in scientific or daily
problems. A project is a temporary endeavour designed to produce a unique product, service or result with a defined beginning and end (usually time-constrained, and often constrained by funding or deliverables), undertaken to meet unique goals and objectives, typically to bring about beneficial change or added value. The temporary nature of projects stands in contrast with business as usual (or operations), which consists of repetitive, permanent, or semi-permanent functional activities to produce products or services. In practice, the management of these two systems is often quite different, and as such requires the development of distinct technical skills and management strategies.
The primary challenge of project management is to achieve all of the project goals and
objectives while honouring the preconceived constraints. The primary constraints are scope,
time, quality and budget. The secondary — and more ambitious — challenge is
to optimize the allocation of necessary inputs and integrate them to meet pre-defined
objectives.

Critical path method:

The critical path method (CPM) is a project modelling technique developed in the late 1950s by Morgan R. Walker of DuPont and James E. Kelley, Jr. of Remington Rand. Kelley and Walker related their memories of the development of CPM in 1989. Kelley attributed the term "critical path" to the developers of the Program Evaluation and Review Technique (PERT), which was developed at about the same time by Booz Allen Hamilton and the U.S. Navy. The precursors of what came to be known as the critical path were developed and put into practice by DuPont between 1940 and 1943 and contributed to the success of the Manhattan Project.

CPM is commonly used with all forms of projects, including construction, aerospace and
defence, software development, research projects, product development, engineering, and
plant maintenance, among others. Any project with interdependent activities can apply this
method of mathematical analysis. Although the original CPM program and approach is no
longer used, the term is generally applied to any approach used to analyze a project network
logic diagram.


The essential technique for using CPM is to construct a model of the project that includes the
following:

1. A list of all activities required to complete the project (typically categorized within
a work breakdown structure),
2. The time (duration) that each activity will take to complete,
3. The dependencies between the activities and,
4. Logical end points such as milestones or deliverable items.
Using these values, CPM calculates the longest path of planned activities to logical end
points or to the end of the project, and the earliest and latest that each activity can start and
finish without making the project longer.
This process determines which activities are "critical" (i.e., on the longest path) and which
have "total float" (i.e., can be delayed without making the project longer). In project
management, a critical path is the sequence of project network activities which add up to the longest overall duration, regardless of whether that longest duration has float or not. This determines the shortest time possible to complete the project.
There can be 'total float' (unused time) within the critical path. For example, if a project is
testing a solar panel and task 'B' requires 'sunrise', there could be a scheduling constraint on
the testing activity so that it would not start until the scheduled time for sunrise. This might
insert dead time (total float) into the schedule on the activities on that path prior to the sunrise
due to needing to wait for this event.
This path, with the constraint-generated total float, would actually make the path longer, with total float being part of the shortest possible duration for the overall project. In other words,
individual tasks on the critical path prior to the constraint might be able to be delayed without
elongating the critical path; this is the 'total float' of that task. However, the time added to the
project duration by the constraint is actually critical path drag, the amount by which the
project's duration is extended by each critical path activity and constraint.


CPM analysis

• Draw the CPM network


• Analyze the paths through the network
• Determine the float for each activity
– Compute the activity‘s float

Float = LS - ES = LF - EF

– Float is the maximum amount of time that an activity can be delayed in its completion before it becomes a critical activity, i.e., before it delays completion of the project

• Find the critical path, which is the sequence of activities and events where there is no "slack", i.e., zero slack

– It is the longest path through the network

• Find the project duration, which is the minimum project completion time (a minimal code sketch of these computations follows)
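A minimal sketch of the forward pass (ES, EF), backward pass (LS, LF) and float computation on a hypothetical activity-on-node network (names and durations invented; activities are assumed to be listed with predecessors before successors):

```python
activities = {            # name: (duration, list of predecessors)
    'A': (3, []),
    'B': (4, ['A']),
    'C': (2, ['A']),
    'D': (5, ['B', 'C']),
    'E': (1, ['C']),
    'F': (2, ['D', 'E']),
}

ES, EF = {}, {}
for a, (dur, preds) in activities.items():        # forward pass
    ES[a] = max((EF[p] for p in preds), default=0)
    EF[a] = ES[a] + dur

project_duration = max(EF.values())               # minimum completion time

LS, LF = {}, {}
for a in reversed(list(activities)):              # backward pass
    succs = [s for s in activities if a in activities[s][1]]
    LF[a] = min((LS[s] for s in succs), default=project_duration)
    LS[a] = LF[a] - activities[a][0]

for a in activities:                              # Float = LS - ES = LF - EF
    slack = LS[a] - ES[a]
    print(a, 'float =', slack, '(critical)' if slack == 0 else '')
print('Project duration:', project_duration)      # critical path A-B-D-F = 14
```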

Activity

– A task or a certain amount of work required in the project


– Requires time to complete
– Represented by an arrow

Dummy Activity

– Indicates only precedence relationships


– Does not require any time or effort

• Event

– Signals the beginning or ending of an activity


– Designates a point in time
– Represented by a circle (node)

• Network

– Shows the sequential relationships among activities using nodes and arrows
 Activity-on-node (AON): nodes represent activities, and arrows show precedence relationships
 Activity-on-arrow (AOA): arrows represent activities, and nodes are events representing points in time


PERT

• PERT is based on the assumption that an activity's duration follows a probability distribution instead of being a single value
• Three time estimates are required to compute the parameters of an activity‘s duration
distribution:

– Pessimistic time (tp) - the time the activity would take if things did not go well
– Most likely time (tm) - the consensus best estimate of the activity's duration
– Optimistic time (to) - the time the activity would take if things did go well
The standard formulas computed from these three estimates are given below.
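Under the usual beta-distribution assumption (standard PERT practice, stated here for reference since the notes do not reproduce it):

Expected activity time: te = (to + 4tm + tp) / 6
Activity variance: σ^2 = ((tp – to) / 6)^2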

PERT analysis

 Draw the network diagram.

 Analyze the paths through the network and find the critical path.
 The length of the critical path is the mean of the project duration probability
distribution which is assumed to be normal
 The standard deviation of the project duration probability distribution is computed by
adding the variances of the critical activities (all of the activities that make up the
critical path) and taking the square root of that sum
 Probability computations can now be made using the normal distribution table (or, as sketched below, with a few lines of code).
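A short sketch of that computation using Python's standard library in place of the normal table (the three critical activities and the deadline are hypothetical):

```python
from statistics import NormalDist

# Hypothetical critical path: (optimistic, most likely, pessimistic) in weeks.
critical = [(2, 4, 6), (3, 5, 9), (4, 6, 8)]

mean = sum((o + 4 * m + p) / 6 for o, m, p in critical)      # sum of te
var = sum(((p - o) / 6) ** 2 for o, m, p in critical)        # sum of variances
sigma = var ** 0.5

deadline = 17
prob = NormalDist(mean, sigma).cdf(deadline)                 # P(duration <= 17)
print(f"mean {mean:.2f}, sigma {sigma:.2f}, P = {prob:.3f}")
```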

Cost consideration in project

• Project managers may have the option or requirement to crash the project, or accelerate the
completion of the project.
• This is accomplished by reducing the length of the critical path(s).
• The length of the critical path is reduced by reducing the duration of the activities on the
critical path.
• If each activity requires the expenditure of an amount of money to reduce its duration by
one unit of time, then the project manager selects the least cost critical activity, reduces it by
one time unit, and traces that change through the remainder of the network.
• As a result of a reduction in an activity‘s time, a new critical path may be created.
• When there is more than one critical path, each of the critical paths must be reduced.

If the length of the project needs to be reduced further, the process is repeated. The selection of the least cost critical activity is normally made by comparing cost slopes, as given below.
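As a point of reference (the notes do not write the formula out, but it is standard crashing practice), the cost slope of an activity is:

Cost slope per time unit = (Crash cost – Normal cost) / (Normal time – Crash time)

The critical activity with the smallest cost slope is crashed first.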

Project Crashing

• Crashing

– reducing project time by expending additional resources


• Crash time
– the amount of time by which an activity's duration is reduced

• Crash cost
– cost of reducing activity time

• Goal
– reduce project duration at minimum cost

Activity crashing

 Crashing costs increase as project duration decreases


 Indirect costs increase as project duration increases
 Reduce project length as long as crashing costs are less than indirect cost savings

Benefits of CPM/PERT

• Useful at many stages of project management


• Mathematically simple
• Give critical path and slack time
• Provide project documentation
• Useful in monitoring costs

Limitations to CPM/PERT

• Clearly defined, independent and stable activities


• Specified precedence relationships
• Over-emphasis on critical paths

• Activity time estimates are subjective and depend on judgment


• PERT assumes a beta distribution for these time estimates, but the actual distribution may
be different
• PERT consistently underestimates the expected project completion time due to alternate
paths becoming critical


Difference between CPM and PERT

 CPM uses an activity-oriented network, whereas PERT uses an event-oriented network.
 In CPM, durations of activities may be estimated with a fair degree of accuracy; in PERT, estimates of time for activities are not so accurate and definite.
 CPM is used extensively in construction projects; PERT is used mostly in research and development projects, particularly projects of a non-repetitive nature.
 CPM uses a deterministic concept; PERT uses a probabilistic model concept.
 CPM can control both time and cost when planning; PERT is basically a tool for planning.
 In CPM, cost optimization is given prime importance: the time for completion of the project depends upon cost optimization, the cost is not directly proportional to time, and thus cost is the controlling factor. In PERT, it is assumed that cost varies directly with time: attention is therefore given to minimizing time so that minimum cost results, and thus time is the controlling factor.
