
SHRI G.S. INSTITUTE OF TECHNOLOGY AND SCIENCE, INDORE


DEPARTMENT OF INDUSTRIAL AND PRODUCTION ENGINEERING

IP43612 – Six Sigma

Unit 1 - Introductory Concepts

Sumit Dwivedi
Assistant Professor, Department of IPE
SGSITS, Indore
CORE CONCEPTUALIZATION OF SIX SIGMA
How good is good enough?

At 99% quality level (i.e. equivalent to a sigma level of 3.8), there would be:

▪ At least 200,000 wrong drug prescriptions per year

▪ Unsafe drinking water almost 10 hours each month

▪ At least 20,000 Lost articles of mail per hour

▪ 5000 Incorrect surgeries per week

▪ 7 hours per month without electricity

CORE CONCEPTUALIZATION OF SIX SIGMA
Even 99.9% is VERY GOOD

But what could happen at a quality level of 99.9% (i.e., 1000 ppm, about a 4.6 sigma level) in our everyday lives?

▪ 4000 wrong medical prescriptions each year.

▪ More than 3000 newborns accidentally falling from the hands of nurses or doctors

each year.

▪ Two long or short landings at American airports each day.

▪ 400 letters per hour which never arrive at their destination.

CORE CONCEPTUALIZATION OF SIX SIGMA
Q. How can we improve these results?

Answer: Aim as high as Six Sigma, i.e., a quality level of 99.99966%.

▪ 13 wrong drug prescriptions per year.

▪ 1 hour per 34 years without electricity.

▪ 1.7 Incorrect surgeries per week

▪ 7 Lost articles of mail per hour

▪ 10 newborn babies dropped by doctors/nurses per year

▪ Two short or long landings per year in all the airports in the U.S.

CORE CONCEPTUALIZATION OF SIX SIGMA
What is Six Sigma?

▪ A Vision and philosophical commitment to our consumers to offer the highest quality, lowest cost products.

▪ A Metric that demonstrates quality levels at 99.9997% performance for products and processes.

▪ A Benchmark of our product and process capability for comparison to ‘best in class’.

▪ A practical application of statistical Tools and Methods to help us measure, analyze, improve, and control our processes.

CORE CONCEPTUALIZATION OF SIX SIGMA
Six Sigma is a problem-solving method and quality management approach used to improve processes, reduce errors, and ensure customer satisfaction. It focuses on making things work as perfectly as possible by identifying and fixing problems in any process. Six Sigma is sometimes described as a zero-defect approach.

In the narrow statistical sense, six sigma is a quality objective that identifies the variability of
a process in terms of the specifications of the product, so that product quality and reliability
meet and exceed today's demanding customer requirements.

Specifically, six sigma refers to a process capability that generates 3.4 defects per million opportunities (DPMO). Most organizations today operate in the three-to-four sigma range of roughly 6,200–67,000 defects per million opportunities (DPMO); moving to six sigma is a challenge.
CORE CONCEPTUALIZATION OF SIX SIGMA
Successful use of the data-driven six sigma concepts helps organizations to eliminate waste,
hidden rework and undesirable variability in their processes, resulting in quality and cost
improvements, driving continued success.

Sigma (σ) is a letter in the Greek alphabet used by statisticians to measure the variability in
any process. A company’s performance is measured by the sigma level of their business
processes.

Despite its name, Six Sigma’s magic isn’t in statistical or high-tech razzle-dazzle. Six Sigma
relies on tried and true methods that have been used for decades.

CORE CONCEPTUALIZATION OF SIX SIGMA
Six Sigma as a Philosophy

σ is a measure of how much variation exists in a process.

[Figure: cost-of-quality curves plotting Costs against Quality (sigma levels 4, 5, 6), comparing internal & external failure costs with prevention & appraisal costs.]

Old Belief: High Quality = High Cost

New Belief: High Quality = Low Cost, because better processes reduce cost.
BUILDING THE FOUNDATION FOR SIX SIGMA
Where did six sigma begin?

It was developed by Motorola and Bill Smith in the early 1980s. Six sigma started as an improvement program at Motorola in 1982. At the time, Motorola needed new analytical tools to help reduce costs and improve quality. As a result, the initial six sigma tools were developed. Bill Smith is known as the “Father of six sigma”.

In the meantime, General Electric started to use them (with some modifications) in 1995.
Since then, other companies such as Polaroid, DuPont, Crane, Ford Motor Company,
American Express, Nokia and others have followed.

BUILDING THE FOUNDATION FOR SIX SIGMA
DPMO and Sigma level

A defect need not necessarily be only in a product. It can show up in a service as well. For example, if customers expect that a part be delivered within one day of the committed delivery date, then even if it is delivered just one day after that, it is a defect for them.

What customers expect from a product or service can be considered as critical to quality or CTQ. Quality is the ability of a product or service to fulfil customers’ requirements. Joseph Juran defined quality as fitness for use; Philip Crosby defined it as conformance to specifications.

BUILDING THE FOUNDATION FOR SIX SIGMA
Practical meanings of Sigma level

Let us consider a few examples of SIX SIGMA quality level

In India, 14,444 trains run daily. Passengers expect these trains to reach on time. For the sake of simplicity, let us consider each train as an opportunity for arriving late every day at its final destination. Thus, there are, in total, 365 × 14,444 = 52,72,060 opportunities in a year. At Six Sigma quality level, there can be a maximum of 3.4 defects per million opportunities (DPMO). Hence, for 52,72,060 opportunities, the number of late trains will be (52,72,060 / 10,00,000) × 3.4 ≈ 18. Thus, if Indian Railways reaches the six sigma quality level for timeliness, there will be at most 18 trains arriving late in the whole year.

BUILDING THE FOUNDATION FOR SIX SIGMA
Here is another example. In the case of bank transactions, customers expect that they should
get a complete set of documents within three days of submitting their application to open an
account. Any customer who gets the document after three days considers this to be a defect.
A particular bank opens a total of 5000 accounts in three months. Its past three months’ data show that in 125 cases, they missed the target of 3 days. Hence, their defects per million opportunities (DPMO) can be calculated as (125 / 5000) × 10,00,000 = 25,000. This corresponds to a sigma level of 3.46 for the account opening process.
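
As a quick cross-check, the same arithmetic can be scripted. The sketch below is my own illustration (not from the course material), using SciPy's norm.ppf in place of statistical tables and applying the 1.5 sigma shift convention discussed later in this unit:

from scipy.stats import norm

# Bank example: 125 missed targets out of 5000 accounts, one opportunity each
defects, accounts = 125, 5000
dpo = defects / accounts              # 0.025
dpmo = dpo * 1_000_000                # 25,000
z_long_term = norm.ppf(1 - dpo)       # ≈ 1.96
sigma_level = z_long_term + 1.5       # ≈ 3.46 with the 1.5 sigma shift convention
print(dpmo, round(sigma_level, 2))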

BUILDING THE FOUNDATION FOR SIX SIGMA
The Six Sigma methodology

Six Sigma is a project-based approach. Projects that have a sizable impact on customer
satisfaction and significant impact on the bottom line are selected. The senior management
of an organization has a very important role to play in the selection of projects and leaders.

The projects must be clearly defined in terms of expected key deliverables. These are typically in terms of PPM levels or sigma quality levels, number of customer complaints, cycle times, warranty failures, rejection levels, employee satisfaction index, supplier delivery performance, transaction accuracy, etc.

Practical Problem → Statistical Problem → Statistical Solution → Practical Solution
BUILDING THE FOUNDATION FOR SIX SIGMA
The road map for process six sigma

This is the most popular model of six sigma. The focus is on the improvement of business processes and solving problems. Process six sigma projects go through five phases.

Define – What needs to be improved.

Measure – Current performance and the gap with respect to desired target.

Analyze – The process to find the root cause of the problem

Improve – Select a solution and implement.

Control – The process parameters to assure and sustain improvement.

IS SIX SIGMA A PROBLEM-SOLVING METHODOLOGY?
The simple answer is that six sigma is a very formal, systematic approach to solving problems. It
follows a somewhat generic pattern. The problem-solving approach that six sigma takes is
basically:

▪ Defining the problem: Listing and prioritizing problems, defining the project and the team.

▪ Diagnosing the problem: Analyzing the symptoms, formulating theories of causes, testing

these theories, and identifying root causes.

▪ Remedying the problem: Considering alternative solutions, designing solutions and controls,

addressing resistance to implementation, implementing solutions and controls.

▪ Holding the gains: Checking performance and monitoring the control system.

WHAT ARE THE GOALS OF SIX SIGMA?
Among the many goals of this methodology, six stand out:

1. Reduce defects.

2. Improve yield.

3. Improve customer satisfaction.

4. Reduce variation.

5. Ensure continual improvement.

6. Increase shareholder value.

In some organizations the concept of "defect" has many legal ramifications, therefore the term "nonconformance" may be substituted.

SETTING PRIORITIES FOR SIX SIGMA
The Define Phase

During the Define phase, the case for the project is prepared, the team is formed and the process is identified. Defining a project can be done following the steps given below.

▪ Brainstorm or ‘brain-write’ potential project ideas. Brainwriting is slightly different from brainstorming. It requires every participant to write their ideas individually and reduces the influence of others on individuals.

▪ Classify the ideas and prioritize. This can be done in many different ways; considerations could be benefits, difficulty levels, linkage to business objectives, urgency to improve, etc. Tools such as a prioritization matrix can be used to finalize the list of projects which will give maximum benefits to the company.

SETTING PRIORITIES FOR SIX SIGMA
▪ Identify the sponsor and appropriate belt for the project. Quite often, the projects are

suggested by belts and/or sponsors.

▪ Create a charter. The belt initiates a document which is usually referred to as a charter. A

charter defines the objectives and goals of the project, the areas where the participants
will focus, lays down the benefits in terms of process measures as well as financials, and
defines the scope that is the start and end points of the projects, and the schedule.

▪ Get the project charter approved by the appropriate authorities, typically the sponsor, financial controller and the champion. Once the charter is approved, the belt and the team start working on the project.

SETTING PRIORITIES FOR SIX SIGMA
The Voice of the customer

Every business exists because of its customers: someone willing to pay for the product or service finds value in it.

We will use the generic term ‘product’, which includes services. When we want to implement Six Sigma, the first question that we must ask and try to answer is: What is important to the customer in the product? What adds value?

Based on the answer, we may be able to start thinking about measures which could be
useful.

SETTING PRIORITIES FOR SIX SIGMA
Let us consider a simple product like a ball-point pen. A user will expect,

▪ Smooth running of the pen

▪ Even thickness of lines

▪ No excess ink

▪ No leaking of ink, and

▪ Smooth operation of the retracting mechanism

SETTING PRIORITIES FOR SIX SIGMA
If we take a restaurant as an example, customers would probably expect

▪ A clean and quiet environment

▪ Good choice on the menu

▪ Polite waiters

▪ Timely service

▪ Good tasty food

▪ Reasonable rates.

SETTING PRIORITIES FOR SIX SIGMA
These can be considered as critical to quality (CTQ) characteristics that help determine the
level of customer satisfaction. Any instance when a product fails to meet the requirements of
customers is a defect. Here, we should understand the difference between features and CTQs. More features usually cost more. However, fewer features do not necessarily mean
lower quality. For example, a car with automatic gears will be priced higher than the one
with manual transmission, but this does not mean that the former is of better quality.

CTQs can be identified by “walking in the customers’ shoes”, conducting customer survey
and using a tool such as quality function deployment (QFD).

RELEVANCE OF SIX SIGMA IN QUALITY ENGINEERING
What is quality?

Quality is a relative term and it is generally used with reference to the end use of the product. Quality is thus defined as “the fitness for use or purpose at the most economical level”.

It would be a mistake to think that Six Sigma is about quality in the traditional sense. Quality,
defined traditionally as conformance to internal requirements, has little to do with Six Sigma.

Six Sigma focuses on helping the organization make more money by improving customer
value and efficiency.

RELEVANCE OF SIX SIGMA IN QUALITY ENGINEERING
To link this objective of Six Sigma with quality requires a new definition of quality: the value
added by a productive endeavor. This quality may be expressed as potential quality and
actual quality.

Potential quality is the known maximum possible value added per unit of input. Actual
quality is the current value added per unit of input. The difference between potential and
actual quality is waste.

Six Sigma focuses on improving quality (i.e., reducing waste) by helping organizations
produce products and services better, faster, and cheaper.

RELEVANCE OF SIX SIGMA IN QUALITY ENGINEERING
There is a direct correspondence between quality levels and “sigma levels” of performance. For example, a process operating at Six Sigma will fail to meet requirements only about 3.4 times per million opportunities. The typical company operates at roughly four sigma, equivalent to approximately 6,210 errors per million opportunities.

Six Sigma focuses on customer requirements, defect prevention, cycle time reduction, and
cost savings.

RELEVANCE OF SIX SIGMA IN QUALITY ENGINEERING
For non-Six Sigma companies, the costs are often extremely high. Companies operating at
three or four sigma typically spend between 25 and 40 percent of the revenues fixing
problems. This is known as the cost of quality, or more accurately the cost of poor quality.
Companies operating at Six Sigma typically spend less than 5 percent of the revenues fixing
problems.

COMPANIES USING SIX SIGMA
Six Sigma is in use in virtually all industries around the world. Some of these companies are:

▪ Motorola

▪ Ericsson

▪ General Electric

▪ Sony

▪ Ford Motor Co.

▪ Citibank

BASICS OF PROBABILITY
What is Probability?

Probability is the study of chance associated with the occurrence of random or stochastic
events.

Why Study Probability?

▪ Occurrence of defects in production is random or stochastic - such events cannot be

exactly predicted.

▪ In decisions about such events, we rely on the theory of probability.

▪ When our decisions require data analysis, the methods are obtained from statistics.

BASICS OF PROBABILITY
Theory of Probability:

▪ Probability is the likelihood of a particular event happening. It’s a branch of mathematics

that deals with the uncertainty of an event happening in the future.

▪ Probability value always occurs within a range of 0 to 1.

▪ The probability of an event is given by the number of favorable occurrences divided by the number of possible occurrences:

P(E) = Number of favorable occurrences / Number of possible occurrences

BASICS OF PROBABILITY
Let’s take a simple example of tossing an unbiased coin:

If an unbiased coin is flipped, what is the probability that the result is a head? A coin has two faces, one head and one tail, so there are two possible outcomes.

The probability of a head equals the probability of a tail: 1/2, that is, 0.5 or 50%. So, we can say there is a 50% chance for the result of the toss to be a head.

Similarly, for an unbiased die with 6 faces, the probability of any number between 1 and 6 is equal, i.e., 1/6 or 16.67%.

BASICS OF PROBABILITY
Types of Probability

▪ Classical (Theoretical)

▪ Relative Frequency (Experimental)

BASICS OF PROBABILITY
Classical Probability

Rolling dice and tossing a coin are activities associated with a classical approach to
probability. In these cases, you can list (or enumerate) all the possible outcomes of an
experiment and determine the actual probabilities of each outcome.

BASICS OF PROBABILITY
Sample Space, Events and Random Variables (RVs)

▪ Sample Space is the list of all possible outcomes from a probabilistic experiment.

▪ The possible outcomes of a stochastic or random process are called events.

▪ A deterministic process has only one possible outcome.

▪ The probability of a particular event is the fraction of outcomes in which the event occurs.

The probability of event A is denoted by P(A).


▪ Random variables map events to numbers.

[Figure: a random variable X maps an event A in the sample space S to a number, e.g., 4.3.]
BASICS OF PROBABILITY
Relative Frequency Probability

▪ Uses actual experience or data to determine the likelihood of an outcome.

▪ What is the chance of scoring a B or better? Think of students taking a test…

Grade     Frequency
A         20
B         30
C         40
Below C   10
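
A minimal sketch of this relative-frequency calculation (frequencies taken from the table above): the chance of a B or better is the combined frequency of A and B divided by the total.

# Relative frequency: P(B or better) from the observed grade counts
grades = {"A": 20, "B": 30, "C": 40, "Below C": 10}
total = sum(grades.values())                        # 100 students
p_b_or_better = (grades["A"] + grades["B"]) / total
print(p_b_or_better)                                # 0.5, i.e., a 50% chance
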
BASICS OF STATISTICS
Basic terms in statistics

Population and sample

The term population refers to all data points, and the term sample represents a certain number of parts drawn from the population. A population is characterized by parameters such as the mean, standard deviation, variance, etc.

                 Sample        Population
Mean             x̄             μ
Std. deviation   s             σ
                 (Statistics)  (Parameters)

BASICS OF STATISTICS
Fractiles, quartiles and percentiles:

The median divides the data into two halves. Similarly, we can divide the data into 4 quarters. When we do that, we will get three division points: Q1, Q2, and Q3. Q1 is the value below which we have 1/4th or 25% of the observations. Q1 is also called the 25th percentile, P0.25. Fractiles can be denoted as Pf, where f is the fraction below its value. Similarly, Q2 is the median or 50th percentile P0.50, and Q3 is the 75th percentile P0.75.

The difference between the upper and lower quartiles is called the interquartile range (IQR). The 100p-th percentile is the value such that at least 100p% of the observations are at or below it and 100(1−p)% of the observations are above it.

BASICS OF STATISTICS
The Normal Distribution

In a frequency distribution, if the number of observations is increased considerably, then the number of cells will increase and the width of each cell will become smaller and smaller. The series of steps that constitutes the top line of the histogram will then approach a smooth curve. The height of the curve at any point is proportional to the frequency at that point, and the area under it between any two limits is proportional to the frequency of occurrences within these limits. Such a curve is called the “normal curve”.

BASICS OF STATISTICS
The normal curve is a special type of density curve that is bell shaped; hence, it is sometimes called the bell curve. The normal distribution is the most important probability distribution in statistics because many continuous variables in nature and psychology display this bell-shaped curve when compiled and graphed.

For example, if we randomly sampled 100 individuals we would expect to see a normal
distribution frequency curve for many continuous variables. Its shape arises from various
data such as IQ, height, weight, volume, blood pressure etc.

BASICS OF STATISTICS
Most of the continuous data values in a normal
distribution tend to cluster around the mean, and
the further a value is from the mean, the less likely
it is to occur. The tails are asymptotic, which means
that they approach but never quite meet the
horizontal axis (i.e. x-axis).

For a perfectly normal distribution, the mean, median and mode will be equal (the same value), visually represented by the peak of the curve.

BASICS OF STATISTICS
Properties of Normal Distribution

▪ The normal distribution is a continuous probability distribution that is symmetrical on both sides of the mean, so the right side of the center is a mirror image of the left side.

▪ The normal distribution is unimodal, i.e., the distribution has a single peak (one mode).

▪ The area under the normal distribution curve represents probability, and the total area under the curve sums to one (i.e., 100%).

▪ The parameter population mean (μ) characterizes the position of the normal distribution.

▪ The parameter population standard deviation (σ) characterizes the spread of the normal distribution.
BASICS OF STATISTICS
Properties of Normal Distribution

▪ The above parameters can be represented as X ~ N(μ, σ), i.e., variable X follows a Normal distribution (ND) with population mean (μ) and population standard deviation (σ).

▪ Normal distributions become more apparent (i.e., closer to perfect) the finer the level of measurement and the larger the sample from a population.

▪ Theoretically the ND curve extends from -∞ (minus infinity) to +∞ (plus infinity). However, for all practical purposes, we can consider the normal curve as extending only 3σ to the left and 3σ to the right of the mean (µ ± 3σ).

BASICS OF STATISTICS

Specification Limit   Percent of total area within specified limits
µ ± 0.6745σ           50.00
µ ± σ                 68.27
µ ± 2σ                95.45
µ ± 3σ                99.73
BASICS OF STATISTICS
Standard Normal Distribution (SND)

The standard normal distribution, also called the z-distribution, is a special normal
distribution where the mean is 0 and the standard deviation is 1. Any normal distribution can
be standardized by converting its values into z scores. Z scores tell you how many standard
deviations from the mean each value lies.

BASICS OF STATISTICS
Z value or Z score or Standard Normal Variate

A z-score describes the position of a raw score in terms of its distance from the mean, when
measured in standard deviation units. The z-score is positive if the value lies above the
mean, and negative if it lies below the mean.

It is also known as a standard score, because it allows comparison of scores on different kinds of variables by standardizing the distribution. A standard normal distribution (SND) is a normally shaped distribution with a mean of 0 and a standard deviation (SD) of 1.
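
For instance, the short sketch below (with assumed IQ-style values, mean 100 and SD 15, purely for illustration) converts raw scores into z-scores:

# z-score: distance from the mean in standard deviation units
mu, sigma = 100, 15                  # assumed population mean and SD
for x in (85, 100, 130):
    z = (x - mu) / sigma
    print(x, "->", z)                # -1.0, 0.0, 2.0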

BASICS OF STATISTICS
The central limit theorem (CLT)

The central limit theorem in statistics states that, given a sufficiently large sample size, the
sampling distribution of the mean for a variable will approximate a normal distribution
regardless of that variable’s distribution in the population.

In other words, the central limit theorem says that the sampling distribution of the mean will
always be normally distributed, as long as the sample size is large enough.

BASICS OF STATISTICS
In a population, values of a variable can follow different probability distributions. These
distributions can range from normal, left skewed, right skewed, exponential, Poisson,
binomial, uniform or any other distribution. The central limit theorem applies to almost all
types of probability distributions, but there are exceptions. For example, the population
must have a finite variance. That restriction rules out the Cauchy distribution because it has
an infinite variance.

The central limit theorem states that when you have a sufficiently large sample size, the
sampling distribution starts to approximate a normal distribution. How large does the
sample size have to be for that approximation to occur?

BASICS OF STATISTICS
It depends on the shape of the variable’s distribution in the underlying population. The more the population distribution differs from being normal, the larger the sample size must be. Typically, statisticians say that a sample size of 30 is sufficient for most distributions. However, strongly skewed distributions can require larger sample sizes.

If σ is the standard deviation of individual data points in the population, the variance of averages is σ²/n, where n is the sample size. Thus, the standard deviation of averages is σ/√n. We can write this as σx̄ = σ/√n; σx̄ is also known as the standard error of the mean.

BASICS OF STATISTICS
In statistical language, the CLT can be stated as follows: If a random variable X has mean µ and finite variance σ², then as n increases, the distribution of sample means approaches a normal distribution with mean µ and variance σ²/n, n being the number of observations on which the means are based.

This means that even if the distribution of a random variable does not follow normal
distribution, sample averages tend to follow it as the subgroup size becomes larger. The
assumption is reasonable for subgroup sizes ≥ 4. This theorem forms an important
theoretical basis of many statistical tools such as control chart, hypothesis tests, etc.
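
The small simulation below (my own sketch, not from the slides) illustrates the theorem: means of samples drawn from a strongly skewed exponential distribution still behave normally, with a standard deviation close to σ/√n.

import numpy as np

rng = np.random.default_rng(0)
n, reps = 30, 10_000
# exponential(scale=1) is right-skewed, with population mean 1 and sigma 1
means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
print(round(means.mean(), 3))        # close to the population mean, 1.0
print(round(means.std(ddof=1), 3))   # close to sigma/sqrt(n) = 1/sqrt(30) ≈ 0.183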

BASICS OF STATISTICS
DPO and DPMO

DPO stands for defects per opportunity and DPMO stands for defect per million opportunity.

Six Sigma is a performance target that applies to a single critical to quality (CTQ) characteristic, not to the total product; for example, fuel economy, service time, or time to pick up a call. If a service level agreement specifies a maximum call length of 150 seconds, any call that exceeds this length is a defect.

In rare cases, a customer may even be happy despite delayed service; for example, when he
or she gets a free pizza on account of delayed delivery, or a passenger gets a free ticket and
compensation when a flight is overbooked.

BASICS OF STATISTICS
DPO and DPMO

A very simple product may have only one CTQ characteristic. For example, a paper clip may have only one opportunity for defect: whether it holds the paper or not. Complex products have more than one CTQ characteristic and, therefore, have more opportunities for defects. A computer, being a complex product, can have many opportunities for defects. Let us consider that a computer has 10 opportunities for defects: bad drive, system crashes, display flickers, etc.

Six sigma level of performance corresponds to a maximum of 3.4 defects per million
opportunities (DPMO).

BASICS OF STATISTICS
▪ A paper clip has one opportunity for defect, and we find 7 clips defective in 1000 clips; the defects per opportunity (DPO) and DPMO can be calculated as:

DPO = Defects / Opportunities = 7 / (1 × 1000) = 0.007

DPMO = DPO × 10⁶ = 0.007 × 10⁶ = 7000

▪ There are 10 opportunities for defect in one computer and we find 7 defects in 1000 computers. The DPO and DPMO can be calculated as:

DPO = Defects / Opportunities = 7 / (10 × 1000) = 0.0007

DPMO = DPO × 10⁶ = 0.0007 × 10⁶ = 700
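
A small helper (an illustrative sketch, not part of the original slides) reproduces both calculations:

def dpmo(defects, units, opportunities_per_unit):
    # DPO = defects / total opportunities; DPMO scales it to a million
    dpo = defects / (units * opportunities_per_unit)
    return dpo * 1_000_000

print(dpmo(7, 1000, 1))     # paper clip: 7000.0
print(dpmo(7, 1000, 10))    # computer:    700.0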

BASICS OF STATISTICS
DPO is a probability of defect. It can be readily converted into an equivalent Z value with the Excel formula NORMSINV(1 − DPO). Using Excel, we can easily find that:

NORMSINV(1 – 0.007) = 2.46

NORMSINV(1 – 0.0007) = 3.19

Please note that for only one opportunity for defect, DPMO equals parts per million (PPM).

BASICS OF STATISTICS
The concept of sigma level

For a process at six sigma level, variation is reduced to such an extent that the tolerance equals 12σ. Such a process will produce no more than 3.4 defective parts per million (PPM) even if the mean shifts to either side by as much as 1.5σ.

The reason for considering a 1.5 sigma shift is based on Motorola’s experience that the mean of a process does not remain constant but varies depending on factors such as tool wear, temperature changes, material variations, drift in measuring instruments, changes in chemical concentration, etc.

BASICS OF STATISTICS

[Figure: two normal curves comparing a Six Sigma process without shift (specification limits at ±6σ) and a Six Sigma process whose mean has shifted by 1.5σ.]
BASICS OF STATISTICS
The concept of sigma level

Let us assume that the process shifts towards the USL by 1.5 sigma. Thus, the distance between the shifted mean and the USL will be 4.5 sigma. Similarly, the distance between the shifted mean and the LSL will be 7.5 sigma. The probability of defect beyond the USL can be calculated using Excel with NORMSDIST(−4.5). This equals 3.4 × 10⁻⁶. The probability of defect beyond the LSL can be calculated as (1 − NORMSDIST(7.5)), which is 3.19 × 10⁻¹⁴. This is negligible. Thus, the total probability of defect for a six sigma process with a 1.5 sigma shift is 3.4 × 10⁻⁶. If we convert this to DPMO, we get 3.4. Similarly, we can calculate DPMO values for various sigma levels.
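
Excel's NORMSDIST is the standard normal CDF, so the same check can be scripted; the sketch below uses SciPy's norm.cdf as a stand-in and reproduces the 3.4 DPMO figure:

from scipy.stats import norm

p_beyond_usl = norm.cdf(-4.5)       # ≈ 3.4e-06, same as NORMSDIST(-4.5)
p_beyond_lsl = 1 - norm.cdf(7.5)    # ≈ 3.2e-14, negligible
print(round((p_beyond_usl + p_beyond_lsl) * 1e6, 1))   # ≈ 3.4 DPMO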

BASICS OF STATISTICS
The concept of sigma level

Sigma Level Defects per Million Opportunities (DPMO)

1 691,462

2 308,538

3 66,807

4 6,210

5 233

6 3.4

BASICS OF STATISTICS
The concept of sigma level

Let us calculate the sigma levels of the paper clip and the computer; the Z values were 2.46 and 3.19 respectively. If these values are based on long-term process data, it means that the defect level is calculated after taking into consideration the 1.5 sigma shift. Thus, to convert these Z values into sigma levels we must add 1.5. The sigma levels will be 2.46 + 1.5 = 3.96 and 3.19 + 1.5 = 4.69.

BASICS OF STATISTICS
                                    Paper clip   Computer
Opportunities for defect per unit   1            10
Defects                             7            7
Sample size                         1000         1000
DPO                                 0.007        0.0007
DPMO                                7000         700
Sigma level                         3.96         4.69
BASICS OF STATISTICS
Throughput yield and Sigma level

Throughput yield (TY) is defined as the probability of producing a defect-free component. This can be calculated using the Poisson distribution:

P(x) = (e^−µ · µ^x) / x!

Here µ = DPU and x = 0, so

Y = P(0) = (e^−DPU · DPU⁰) / 0! = e^−DPU

and the probability of at least one defect is 1 − e^−DPU.
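In code, the yield calculation is a one-liner; the sketch below (with an illustrative DPU of 0.2, matching the Process 1 example that follows) also prints the complementary probability of at least one defect:

import math

dpu = 0.2                    # illustrative defects per unit
ty = math.exp(-dpu)          # P(0 defects) ≈ 0.8187
print(round(ty, 4), round(1 - ty, 4))
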
BASICS OF STATISTICS
Rolled throughput yield (RTY or YRT)

Rolled Throughput Yield (RTY) is the probability of the entire process producing zero
defects. This metric is increasingly relevant when a process has excessive rework.

Rolled Throughput Yield (RTY) is a process performance measure that provides insight into
the cumulative effects of an entire process. RTY measures the yield for several process steps
and provides the probability that a unit will come through that process defect-free.

RTY allows us to expose the "hidden factory" by providing visibility into the yield of each
process step. This helps us identify the poorest performing process steps and gives us clues
into where to look to find the most impactful process improvement opportunities.

BASICS OF STATISTICS
Calculation of RTY:

Rolled throughput yield is a multiplication of throughput yields of each process step that a
product goes through. If Y1 , Y2 ,……. Yn are yields of step 1, 2, …..n, then

YRT = Y1 × Y2 ×…….× Yn

YRT = e−DPU1 × e−DPU2 ×…….× e−DPUn

YRT = e−(DPU1+DPU2+⋯+DPU𝑛)

YRT = e−TDPU where TDPU is Total Defects per Unit

BASICS OF STATISTICS
Calculation of RTY:

Calculation from an example (assuming one defect makes a defective unit which must be
scrapped or reworked):

Process 1: There were 50 units that entered Process 1 and 40 of them were neither reworked nor scrapped. This means 40 of the 50 went through Process 1 without a defect, which = 80%.

OR an estimate can be done (works best when DPU is very small):

DPU = 10 defects among 50 units = 0.2 DPU

thus Y1 = e^−DPU1 = e^−0.2 = 0.8187 ≈ 82%

Notice this DPU is "large" and therefore this estimate of 82% is off from the true value of 80%.

BASICS OF STATISTICS
Calculation of RTY:

Process 2: There were 46 units that entered Process 2 and none were scrapped but 12 were
reworked. This means 34 of the 46 went through Process 2 without a defect = 73.91% OR an
estimate can be done.

DPU = 12 defects among 46 incoming units = 0.26087

thus Y2 = e^−DPU2 = e^−0.26087 = 0.7704 ≈ 77%

Again, this DPU is even higher than Process 1 so expect the TPY estimate to be further off
from the actual value of 73.91%

BASICS OF STATISTICS
Calculation of RTY:

Process 3: There were 46 units that entered Process 3 and 9 were scrapped and none were
reworked. This means 37 of the 46 went through without a defect = 80.43% OR an estimate
can be done.

DPU = 9 defects among 46 incoming units = 0.19565

thus Y3 = e^−DPU3 = e^−0.19565 = 0.8223 ≈ 82%

Again, this DPU estimate is off from the true value of 80.43%.

BASICS OF STATISTICS
Calculation of RTY:

Review of calculating RTY

Multiply the TPY for each process and this becomes RTY for the entire process.

Using actual values,

YRT = Y1 × Y2 × Y3 = 0.80 × 0.7391 × 0.8043 = 0.4756 = 48%

Using the DPU estimate method,

YRT = Y1 × Y2 × Y3 = 0.8187 × 0.7704 × 0.8223 = 0.5186 = 52%

Notice an approximate 4% difference.
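
Both routes can be reproduced in a few lines; this sketch feeds in the (units entering, defects) pairs from the three process steps above:

import math

steps = [(50, 10), (46, 12), (46, 9)]   # (units entering, defects) per step

rty_actual = math.prod((n - d) / n for n, d in steps)    # 0.80 × 0.7391 × 0.8043
rty_estimate = math.exp(-sum(d / n for n, d in steps))   # e^-TDPU approximation
print(round(rty_actual, 4), round(rty_estimate, 4))      # 0.4756, 0.5186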

BASICS OF STATISTICS
Calculation of RTY:

At the end of the entire process there are 37 units left of the original 50 units. The RTY is not 37/50 = 74%, because that value of 74% only accounts for scrapped units and not the reworked units.

Only once the reworked units are incorporated into the calculation at each step does the RTY become accurate. This emphasizes the importance of including the reworked units, especially if the rework is very costly or near the cost of scrapping a unit.

If the rework cost is very low relative to a scrapped unit, then the incorporation of rework
figures is reduced in its importance.

BASICS OF STATISTICS
Calculation of RTY:

Another shortcut that does not work is to add all the reworked units + scrapped units across
all the processes and divide by the starting quantity. A total of 18 units reworked + 13
scrapped = 31 and some would think that 19 must have gone through without a defect. That
does not equate to the correct Rolled Throughput Yield.

In this case it would give an answer of 19/50 = 38% which is not correct.

EACH process has its own numerator and denominator that is dependent on the previous
process so take each process in order and calculate as shown above.

BASICS OF STATISTICS
Calculation of RTY:

A 3-step process has the following yields and this is the only information you are provided:

Y1 = 98.7%, Y2 = 99.4%, Y3 = 97.8%

What is the Total Defects Per Unit (TDPU)?

This is a two-step problem. First find the RTY:

RTY = 0.987 × 0.994 × 0.978 = 0.959494

Recall that TDPU = −ln(RTY)

Therefore, the TDPU = -ln (0.959494) = 0.041
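
The same two steps in code (a sketch of the example above):

import math

rty = 0.987 * 0.994 * 0.978      # ≈ 0.959494
tdpu = -math.log(rty)            # TDPU = -ln(RTY) ≈ 0.041
print(round(rty, 6), round(tdpu, 3))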

BASICS OF STATISTICS
Normalized yield

The Normalized Throughput Yield is a Lean Six Sigma metric that may be used as a baseline
for process steps when the Rolled Throughput Yield is established for the final step of the
process. The Normalized Throughput Yield is calculated as the nth root of the Rolled
Throughput Yield.

Normalized Yield (NY) is the average yield per process step. It's the probability of a unit
passing through one process step without rework.

BASICS OF STATISTICS
Normalized yield

Normalized yield can be considered as the yield of an equivalent single step which can replace all steps in the process, resulting in the same RTY. If there are m process steps and YRT is the rolled throughput yield, then the normalized yield Ynorm and normalized DPU (DPUnorm) are given by:

Ynorm = (YRT)^(1/m), the m-th root of YRT

But Ynorm = e^−DPUnorm

Therefore, DPUnorm = −ln(Ynorm)

BASICS OF STATISTICS
Normalized yield

Consider the above example, where the YRT value is 0.5186 with three steps:

Ynorm = (YRT)^(1/3) = (0.5186)^(1/3) = 0.8034

DPUnorm = −ln(0.8034) = 0.2189
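
Equivalently, in code (a sketch of the same calculation):

import math

rty, m = 0.5186, 3
y_norm = rty ** (1 / m)          # m-th root of RTY ≈ 0.8034
dpu_norm = -math.log(y_norm)     # ≈ 0.2189
print(round(y_norm, 4), round(dpu_norm, 4))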

BASICS OF STATISTICS
Hidden factory

The hidden factory, within the context of efficiency and quality control, refers to a
percentage of processing that occurs outside the established system of measurements.

The Hidden Factory represents the sum of all non-value-adding activities in a production
process. These activities, while not contributing to the final product, consume resources like
time, labor, and materials. Examples include excessive paperwork, rework, handling of
defects, overprocessing, unproductive wait times etc.

BASICS OF STATISTICS
Statistics: It is a branch of applied mathematics where we collect, organize, analyze and interpret
numerical facts. Statistical methods are the concepts, models, and formulas of mathematics used
in the statistical analysis of data. They can be subdivided into two main categories.

Descriptive Statistics: Descriptive statistics consists of measures of central tendency and measures of dispersion. This method involves summarizing or describing a sample of data in various forms to get an overall summary of the data. Most often the results are shown in qualitative form, as the name suggests.

Inferential Statistics: Inferential statistics consists of estimation and hypothesis testing. In contrast, inferential statistics tries to make inferences about the population, given the sample, or to predict various outcomes.
DESCRIPTIVE STATISTICS
Measure of Central Tendency: A method of descriptive statistics that summarizes a distribution with a single value, generally the central position of the distribution; hence these are also known as measures of central location. The three measures of central tendency are:

(a) Mean (b) Median (c) Mode

Measures of dispersion: These describe the amount of variation within a given distribution, i.e., how the values are spread or dispersed around some central value, mostly the mean. The most commonly used measures of dispersion are:

(a) Standard deviation (b) Variance (c) Range

INFERENTIAL STATISTICS
Hypothesis testing:

Hypothesis testing is a form of statistical inference that uses data from a sample to draw
conclusions about a population parameter or a population probability distribution.

Hypothesis testing is an act in statistics whereby an analyst tests an assumption regarding a


population parameter. The methodology employed by the analyst depends on the nature of
the data used and the reason for the analysis.

HYPOTHESIS TESTING
Let's discuss a few examples of statistical hypotheses from real life:

▪ A teacher assumes that 60% of his college's students come from lower-middle-class

families.

▪ A doctor believes that 3D (Diet, Dose, and Discipline) is 90% effective for diabetic patients.

Statistical analysis validates such assumptions by collecting and evaluating a representative sample from the data set under study.

The process of hypothesis testing involves four key steps: defining the hypotheses,
developing a plan for analysis, examining the sample data, and interpreting the final results.

HYPOTHESIS TESTING
Hypothesis Testing Formula

Z = (x̄ − μ₀) / (σ/√n)

Here, x̄ is the sample mean,

μ₀ is the population mean,

σ is the standard deviation,

n is the sample size.

HYPOTHESIS TESTING
Basic terminology

In hypothesis testing, some commonly used terms are described below

Null hypothesis: The Null Hypothesis is the assumption that the event will not occur. A null
hypothesis has no bearing on the study's outcome unless it is rejected. H0 is the symbol for
it, and it is pronounced H-naught.

Alternate hypothesis: The Alternate Hypothesis is the logical opposite of the null
hypothesis. The acceptance of the alternative hypothesis follows the rejection of the null
hypothesis. H1 is the symbol for it.

HYPOTHESIS TESTING
Let's understand this with an example.

A sanitizer manufacturer claims that its product kills 95% of germs on average.

To put this company's claim to the test, create a null and alternate hypothesis.

Null Hypothesis (H0 ): Average = 95%

Alternative Hypothesis (H1 ): The average is less than 95%

HYPOTHESIS TESTING
Let's understand this with an example.

Another straightforward example for understanding this concept is determining whether a coin is fair. A coin is fair when the probabilities of head and tail are equal; if they are not, the coin is unfair.

Step 1 - The null hypothesis states that the probability of head is equal to tail.

Step 2 - The alternate hypothesis states that the probability of head and tail would be different.

Step 3 – Perform an experiment. In this case, we can toss the coin 100 times and see the results. If we get 50 heads and 50 tails, or maybe 60 heads and 40 tails, then we can say the coin is fair. But if we get 70 heads and 30 tails, then there is a chance that the coin is unfair.

HYPOTHESIS TESTING
Step 4 – Define the confidence interval (CI). A confidence interval is the mean of your estimate plus and minus the variation in that estimate. The CI is defined by a domain expert.

Confidence level = 1 − Significance level (α)

Step 5 – Based on the above experiment, we either accept or reject the null hypothesis. Rejection of the null hypothesis leads to acceptance of the alternate hypothesis, and vice versa.

HYPOTHESIS TESTING
Let's consider a hypothesis test for the average height of women in the United States. Suppose our null hypothesis is that the average height is 5'4". We gather a sample of 100 women and determine their average height to be 5'5". The population standard deviation is 2 inches.

To calculate the z-score, we would use the following formula:

Z = (x̄ − μ₀) / (σ/√n) = (5′5″ − 5′4″) / (2″/√100) = 1″ / 0.2″ = 5

We will reject the null hypothesis, as a z-score of 5 is very large, and conclude that there is evidence to suggest that the average height of women in the US is greater than 5'4".
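
The same test can be scripted. The sketch below works in inches (5'4" = 64 in, 5'5" = 65 in) and adds a one-sided p-value via SciPy as an illustration:

import math
from scipy.stats import norm

x_bar, mu0, sigma, n = 65, 64, 2, 100        # heights in inches
z = (x_bar - mu0) / (sigma / math.sqrt(n))   # = 5.0
p_one_sided = 1 - norm.cdf(z)                # ≈ 2.9e-07, far below α = 0.05
print(z, p_one_sided)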

HYPOTHESIS TESTING
Steps in Hypothesis Testing

Hypothesis testing is a statistical method to determine if there is enough evidence in a


sample of data to infer that a certain condition is true for the entire population. Here’s a
breakdown of the typical steps involved in hypothesis testing:

▪ Formulate Hypotheses

Null Hypothesis (H0): This hypothesis states that there is no effect or difference, and it is the hypothesis you attempt to reject with your test.

Alternative Hypothesis (H1 or Ha): This hypothesis is what you might believe to be true or hope to prove true. It is usually considered the opposite of the null hypothesis.

HYPOTHESIS TESTING
▪ Choose the Significance Level (α)

The significance level, often denoted by alpha (α), is the probability of rejecting the null
hypothesis when it is true. Common choices for α are 0.05 (5%), 0.01 (1%), and 0.10 (10%).

▪ Select the Appropriate Test

Choose a statistical test based on the type of data and the hypothesis. Common tests include t-
tests, chi-square tests, ANOVA, and regression analysis. The selection depends on data type,
distribution, sample size, and whether the hypothesis is one-tailed or two-tailed.

▪ Collect Data

Gather the data that will be analyzed in the test. To infer conclusions accurately, this data should
be representative of the population.

HYPOTHESIS TESTING
▪ Calculate the Test Statistic

Based on the collected data and the chosen test, calculate a test statistic that reflects how
much the observed data deviates from the null hypothesis.

▪ Determine the p-value

The p-value is the probability of observing test results at least as extreme as the results
observed, assuming the null hypothesis is correct. It helps determine the strength of the
evidence against the null hypothesis.

HYPOTHESIS TESTING
▪ Make a Decision

Compare the p-value to the chosen significance level:

o If the p-value ≤ α: Reject the null hypothesis, suggesting sufficient evidence in the data
supports the alternative hypothesis.

o If the p-value > α: Do not reject the null hypothesis, suggesting insufficient evidence to
support the alternative hypothesis.

▪ Report the Results

Present the findings from the hypothesis test, including the test statistic, p-value, and the
conclusion about the hypotheses.
HYPOTHESIS TESTING
▪ Perform Post-hoc Analysis (if necessary)

Depending on the results and the study design, further analysis may be needed to explore
the data more deeply or to address multiple comparisons if several hypotheses were tested
simultaneously.

HYPOTHESIS TESTING
Types of Hypothesis Testing

1. Z Test

2. T Test

3. Chi-Square

4. ANOVA

HYPOTHESIS TESTING
One-Tailed and Two-Tailed Hypothesis Testing

The One-Tailed test, also called a directional test, considers a critical region of data that would
result in the null hypothesis being rejected if the test sample falls into it, inevitably meaning the
acceptance of the alternate hypothesis.

In a one-tailed test, the critical distribution area is one-sided, meaning the test sample is either
greater or lesser than a specific value.

In a Two-Tailed test, the test sample is checked for being greater or less than a range of values, implying that the critical distribution area is two-sided.

If the sample falls within the critical region on either side, the null hypothesis will be rejected and the alternate hypothesis accepted.
HYPOTHESIS TESTING
Example:

Suppose H0: mean = 50 and H1: mean ≠ 50.

According to H1, the mean can be greater than or less than 50. This is an example of a Two-tailed test.

In a similar manner, if H0: mean ≥ 50, then H1: mean < 50.

Here the alternate hypothesis allows only values below 50. It is called a One-tailed test.

HYPOTHESIS TESTING
Type 1 and Type 2 Error:

A hypothesis test can result in two types of errors.

Type 1 Error: A Type-I error occurs when the sample results reject the null hypothesis despite it being true.

Type 2 Error: A Type-II error occurs when the null hypothesis is not rejected although it is false.

HYPOTHESIS TESTING
Example:

Suppose a teacher evaluates the examination paper to decide whether a student passes or
fails.

H0: Student has passed

H1: Student has failed

A Type I error will be the teacher failing the student [rejecting H0] although the student scored the passing marks [H0 was true].

A Type II error will be the case where the teacher passes the student [does not reject H0] although the student did not score the passing marks [H1 is true].
HYPOTHESIS TESTING
Question 1

A telecom service provider claims that customers spend an average of ₹400 per month, with
a standard deviation of ₹25. However, a random sample of 50 customer bills shows a mean of
₹250 and a standard deviation of ₹15. Does this sample data support the service provider’s
claim?

Solution: Let’s break this down:

Null Hypothesis (𝐇𝟎 ): The average amount spent per month is ₹400.

Alternate Hypothesis (𝐇𝟏 ): The average amount spent per month is not ₹400.

HYPOTHESIS TESTING
Given:

Population Standard Deviation (σ): ₹25

Sample Size (n): 50 and Sample Mean (x̄): ₹250

1. Calculate the z-value:

z = (x̄ − μ₀) / (σ/√n) = (250 − 400) / (25/√50) = −42.43

2. Compare with critical z-values: For a 5% significance level, the critical z-values are −1.96 and +1.96. Since −42.43 is far outside this range, we reject the null hypothesis. The sample data suggests that the average amount spent is significantly different from ₹400.

PROCESS CAPABILITY
An “in-control” process can produce bad or out-of-
specification product. Manufacturing processes must meet
or be able to achieve product specifications. Process
capability is defined as the ability of the process to meet the
design specification for a product or service.

Process capability is the repeatability and consistency of a manufacturing process relative to the customer requirements, expressed in terms of the specification limits of a product parameter. This measure is used to objectively assess the degree to which your process is or is not meeting the requirements.
PROCESS CAPABILITY (SIGMA LEVELS)
Sigma Level (Process Capability) Defects per Million Opportunities (DPMO)

1 691,462

2 308,538

3 66,807

4 6,210

5 233

6 3.4

PROCESS CAPABILITY
VOC AND VOP

Voice of customers (VOC) – Process Tolerance given by customers.

Voice of process (VOP) – Usually given by the designer.

Output data from the field tend to fit a normal distribution. A property of the normal distribution tells us that 99.73% of the data fall within ±3σ; hence this spread, equal to 6σ, may be called the VOP.

PROCESS CAPABILITY
[Figure: a normal curve centered on the nominal (base) value, with LSL and USL marked on either side.]

Nominal Value: A target for the design specification.

Tolerance: Allowance provided above or below the nominal value.

To quantify process capability, we can compare two measures:

Design specification width (VOC) = USL − LSL
Process width (VOP) = 3σ + 3σ = 6σ


PROCESS CAPABILITY
Design Specification

Customer: International Tennis Federation (ITF)

Product: Tennis ball

Requirement specification: the diameter of the ball should be 6.56 cm to 6.86 cm
PROCESS CAPABILITY
Design Specification

[Figure: distribution of ball diameters (in cm) with LSL = 6.56 and USL = 6.86 around the mean μ; balls within the limits are accepted, those outside are rejected.]
PROCESS CAPABILITY
E.g. A company has to produce a shaft with a nominal value of 250 mm. The LSL and USL decided by the customer are 240 mm and 260 mm respectively. Now the company will find the process distribution (the population distribution).

[Figure: process (population) distribution centered at 250 mm; 99.73% of products fall within the ±3σ limits of 232 mm and 268 mm.]
PROCESS CAPABILITY
Key question: Is the process capable of producing 99.73% of its products within the design specification limits?

[Figure: the process distribution (232 mm to 268 mm, i.e., ±3σ) overlaid with the specification limits LSL = 240 mm and USL = 260 mm; the tails beyond the limits are defective products.]

Since the process spread extends beyond the specification limits, the process is not capable: it produces defective products.
PROCESS CAPABILITY RATIO
Process capability ratio (Cp) is a simple process capability index that relates the allowable
spread of the specification limits (the difference between the upper specification limit, USL,
and the lower specification limit, LSL) to the measure of the variation of the process (either
actual or natural), represented by 6 sigma, where sigma is the estimated process standard
deviation.
In other words, Cp can be defined as the ratio between the design specification width and the process width. If the process is in statistical control and the process mean is centered on the target, then Cp can be calculated as follows:

Cp = VOC / VOP = Design specification width / Process width = (USL − LSL) / 6σ = Tolerance / 6σ
PROCESS CAPABILITY RATIO
E.g.
In the above case, USL = 260 mm and LSL = 240 mm.
Process width = 268 − 232 = 36 mm

Cp = (USL − LSL) / 6σ = (260 − 240) / 36 = 0.56 < 1

Thus, the process is not capable.
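
In code (a sketch; σ = 6 mm follows from the ±3σ spread of 232 mm to 268 mm):

usl, lsl, sigma = 260, 240, 6       # mm
cp = (usl - lsl) / (6 * sigma)      # design width / process width
print(round(cp, 2))                 # 0.56 -> process not capable
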
PROCESS CAPABILITY RATIO
Cp < 1, |USL – LSL| < 6σ means the process variation exceeds specification, and a
significant number of defects are being made.
PROCESS CAPABILITY RATIO
Cp = 1, |USL – LSL| = 6σ means that the process is just meeting specifications. A minimum of about 0.27% defects will be made, and more if the process is not centered.
PROCESS CAPABILITY RATIO
Cp > 1, |USL – LSL| > 6σ means that the process variation is less than the specification,
however, defects might be made if the process is not centered on the target value.

PROCESS CAPABILITY RATIO
Note:

Now, what should the Cp value be for a 6σ process?

For a process with a 6σ specification, the specification width = 12σ:

Cp = 12σ / 6σ = 2

Thus, for a 6σ process, the process capability ratio is always equal to 2.


PROCESS CAPABILITY INDEX

Cp = (USL − LSL) / 6σ

PROCESS CAPABILITY INDEX

Q. How can the process capability index (Cpk) be computed?

Cpk = Min( (USL − μ)/3σ , (μ − LSL)/3σ )

Why min? Because we want to make sure that we give importance to the specification limit which is critical (e.g., a car on a narrow road).

For a perfectly centered process, μ − LSL = USL − μ, and Cpk = Cp.

Note: Thus, Cp is not affected by a shift of the process mean, but Cpk is. This happens because Cpk is related to the centering of the process. Hence, Cpk ≤ Cp.
INTERPRETING Cpk VALUES

▪ If Cp = Cpk = 1, the process is operating at the borderline, with sigma level 3.

▪ If Cpk = 0, the process mean is overlapping with one of the specification limits.
INTERPRETING Cpk VALUES

▪ If Cpk < 0, the process mean has gone beyond one of the specification limits.

▪ If 0 < Cpk < 1, the process mean is within the specification limits but some part of the process spread is outside the limits.
INTERPRETING Cpk VALUES
If Cpk > 1, the process is well within the specification limits.

Note:
▪ The higher the Cp and Cpk values, the higher the sigma level.
▪ In general, Cp, Cpk ≥ 1.33 is considered good.
▪ Cp = 1.33 corresponds to sigma level 4.
INTERPRETING Cpk VALUES
Cp and Cpk for a centered process with a six sigma specification = 2

[Figure: a six sigma process (specification width 12σ) with the mean shifted 1.5σ toward the USL; the distance USL − μ shrinks from 6σ to 4.5σ.]
INTERPRETING Cpk VALUES
In a process where the value of σ is reduced to such an extent that the difference between the USL and LSL becomes 12σ, the process is said to have achieved six sigma level. With the mean shifted 1.5σ toward the USL:

USL − μ = 6σ − 1.5σ = 4.5σ

Cpk = Min( (USL − μ)/3σ , (μ − LSL)/3σ ) = Min( 4.5σ/3σ , 7.5σ/3σ ) = Min(1.5, 2.5)

Hence,

Cpk = 1.5, which was 2 in the case of a perfectly centered process.

But if we calculate the Cp value, we find Cp = 2 for the six sigma process.


INTERPRETING Cpk VALUES
Now talking about process capability, there are certain important considerations.

▪ The process should operate within the specification limits.

▪ We should check the proximity of the process mean to the mean of the specification that has been provided to us.

▪ Cpk helps to place the process distribution in relation to the product specification limits.

▪ It is also a measure of the manufacturability of the product with the given processes.

▪ Cpk measures not only the process variation with respect to the allowable specifications, but also considers the location of the process mean.


INTERPRETING Cpk VALUES
Q. The food served to the customer at a restaurant should be between 39 and 49°C. The process used in serving the food at the correct temperature has a process standard deviation of 2°C and the process mean is 40°C. What is the Cpk value for the process?
Solution:
USL = 49°C, LSL = 39°C, standard deviation = 2°C and mean = 40°C.

Cpk = Min( (USL − μ)/3σ , (μ − LSL)/3σ ) = Min( (49 − 40)/(3×2) , (40 − 39)/(3×2) )
Cpk = Min(1.5, 0.167) = 0.167

Thus, our LSL is critical and we need to shift the process mean towards the specification mean (i.e., we need to increase the mean temperature above 40°C).
INTERPRETING Cpk VALUES
Q. A metal fabricator produces connecting rods with an outer diameter that has a 1 ± 0.01
inch specification. A machine operator takes several sample measurements over time and
determines the sample mean outer diameter to be 1.002 inches with a standard deviation of
0.003 inch. Calculate the process capability index.
Solution:
Sample mean (x̄) = 1.002 and Sample standard deviation (s) = 0.003
LSL = 1 − 0.01 = 0.99 USL = 1 + 0.01 = 1.01

USL – x̄ x̄ − LSL
Cpk = Min ( , )
3𝑠 3𝑠
1.01 − 1.002 1.002 − 0.99
Cpk = Min ( , ) = Min (0.888, 1.333 ) = 0.888
3×0.003 3×0.003
Thus, the process capability index is 0.888
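
A small helper (an illustrative sketch) reproduces both worked examples:

def cpk(usl, lsl, mean, sd):
    # Cpk takes the worse (smaller) of the two one-sided capabilities
    return min((usl - mean) / (3 * sd), (mean - lsl) / (3 * sd))

print(round(cpk(49, 39, 40, 2), 3))              # restaurant: 0.167
print(round(cpk(1.01, 0.99, 1.002, 0.003), 3))   # connecting rod: 0.889
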
INTERPRETING Cpk VALUES
Q. A call center analyses the data of 150 calls and finds that the mean call length is 75
seconds with standard deviation of 20 seconds. Service level agreement with the company is
for a maximum call length of 120 seconds. If the data is normally distributed what is the
process capability?

Q. What is your opinion of a process where the grand mean is 25.5 mm, the standard deviation is 0.15 mm, USL = 25.5 mm and LSL = 24.5 mm for a component? Find out the values of Cp and Cpk.
PRINCIPLES OF SIX SIGMA
1. Improve Customer Satisfaction

The ultimate goal of Six Sigma is delivering business value as defined by the customer. That
means enhancing customer satisfaction by consistently delivering products or services that
meet or exceed customer expectations. To do that, teams analyze processes for potential
improvement, quantify the costs, and determine if the benefits warrant the investment.

This principle strongly emphasizes understanding customer needs and expectations. By


gathering customer requirements, preferences, and feedback data, organizations can align
their processes to deliver products or services that satisfy customers.

PRINCIPLES OF SIX SIGMA
2. Process Focus

Detailed process mapping plays a crucial role in the success of Six Sigma initiatives by providing
a comprehensive understanding of the current state of processes and facilitating targeted
improvements. Six Sigma professionals use graphs and flow charts to illustrate the details of the
process and guide them in decision-making. This visual breakdown makes identifying strengths
and weaknesses in a current process easier by pinpointing the performance of specific steps.

To create a process map, you must first define the process focus and then outline the major steps
and stages from beginning to end. Bring together people from different departments involved so
you can get the whole picture. Document each input, output, and the flow of materials or
information from one step to the next. As the team collects data.
PRINCIPLES OF SIX SIGMA
3. Remove Variation from Processes

Six Sigma looks at two types of process variation: special cause variation and common cause
(natural) variation. Common cause variation refers to the inherent variability in a process
over time, such as fluctuations in materials, environmental conditions, equipment
performance, or operator behavior.

Special cause variation refers to variability in a process caused by specific identifiable factors or events that are not part of the usual, stable operation of the process. These factors are usually external or are some anomaly or error that disrupts the normal function of a process.
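A common first screen for special cause variation is the control-chart convention of flagging points that fall more than three standard deviations from the mean of an in-control baseline. A minimal sketch (the data values are hypothetical):

    import statistics

    def beyond_limits(baseline, new_points):
        # Control limits from an in-control baseline; points outside
        # mean +/- 3 sigma are candidates for special cause variation.
        mean = statistics.mean(baseline)
        sd = statistics.stdev(baseline)
        return [x for x in new_points if abs(x - mean) > 3 * sd]

    baseline = [40.1, 39.8, 40.3, 40.0, 40.2, 39.9]
    print(beyond_limits(baseline, [40.4, 47.5]))   # [47.5]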

PRINCIPLES OF SIX SIGMA
4. Involve and Equip the People in the Process

Seasoned pros often say Six Sigma projects will only succeed if the organization has buy-in from
the top down. That means the whole team needs to be involved and trained in the Six Sigma
discipline to assume their appropriate role in each project. Just like a band of musicians needs to
be in rhythm and harmony together, every role in a Six Sigma project is crucial to its success.

Roles and Responsibilities Within a Six Sigma Project:

Executives: Establish the focus of Six Sigma within the overall organizational goals

Champion: Communicate the organization’s vision, mission, and goals to create an organizational
deployment plan and identify individual projects.

PRINCIPLES OF SIX SIGMA
Master Black Belt: Oversee an organization’s whole Six Sigma program and is the primary
internal consultant. Train and coach Black Belts and Green Belts and develop key metrics
and strategic direction.

Black Belt: Run individual projects and manage Green and Yellow Belts.

Green Belt: Assist with data collection and analysis.

Yellow Belt: Act as a support person for the project team.

White Belt: Support Six Sigma projects as needed but are not necessarily part of the project
team.

PRINCIPLES OF SIX SIGMA
5. Make Systematic Decisions Based on Data

Six Sigma uses verifiable data and statistics to make decisions that can help organizations
achieve measurable profit gains. It uses data to tangibly improve the quality of products and
services, increasing customer satisfaction while reducing costs. A Six Sigma project aims to
create a process that is 99.99966% free of defects (or to have fewer than 3.4 errors in one million
opportunities).

Both quantitative and qualitative data are crucial to a comprehensive understanding of process
performance. You can only remove variations and defects when you know the whole picture.
Quantitative analysis provides objective, statistical insights into process performance and
variation. In contrast, qualitative data analysis complements this by offering a deeper contextual
understanding and insights into human behaviors and organizational dynamics.
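To make the 3.4-DPMO target concrete, the conventional conversion from DPMO to a sigma level (with the customary 1.5-sigma shift) can be scripted. A minimal sketch, assuming SciPy is available:

    from scipy.stats import norm

    def sigma_level(dpmo):
        # Long-term defects per million opportunities to short-term
        # sigma level, using the conventional 1.5-sigma shift.
        return norm.ppf(1 - dpmo / 1_000_000) + 1.5

    print(round(sigma_level(3.4), 2))      # ~6.0, the Six Sigma target
    print(round(sigma_level(66_807), 2))   # ~3.0, where many firms operate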

PRINCIPLES OF SIX SIGMA
6. Aim for Continuous Improvement

Six Sigma remains effective nearly forty years later because it emphasizes sustained
improvement. Organizations that use Six Sigma don’t just fix a process and move on. They
continue monitoring process improvements and make small, incremental changes to ensure they
always perform at their best. Continuous improvement is especially important as technology
continues to advance rapidly, thereby constantly introducing new opportunities to increase
efficiency and quality.

To maintain momentum in Six Sigma initiatives, senior leadership has to remain fully committed
to the Six Sigma program and actively support improvement efforts. It is critical to ingrain
continuous improvement into your organizational culture, values, and practices and reward
employees who contribute to process excellence and innovation.

MAPPING THE CURRENT PROCESS
Process mapping is a technique utilized in a Six Sigma project to visualize the steps involved
in a certain activity or process. In its basic form, Six Sigma process mapping is a flowchart
that illustrates all of the inputs and outputs of an event, process, or activity in an easy-to-read,
step-by-step format.

Process mapping is a crucial step in any Six Sigma project, especially when the DMAIC roadmap is being used.

MAPPING THE CURRENT PROCESS
The following are the different ways in which we can map a process:

1. Flow chart of the process or workflow analysis.

2. Analysis of the process to develop the relationship Y = f(X1, X2, …, Xn). We will call this a relational process map (RPM).

3. Supplier-Input-Process-Output-Customers (SIPOC) diagram.

4. Value stream mapping (VSM) to identify value-added and non-value-added activities.

5. Any other specific approach that suits the process.

MAPPING THE CURRENT PROCESS
The choice of the method used to map the process depends on the type of process and our objectives in mapping it. For example, in a lean Six Sigma project where we wish to increase process speed and reduce variation, we can use VSM and also RPM. It is quite common to refer to all of these as process maps.

Therefore, when someone mentions a process map, it is best to look at the specific way of
mapping the process and its suitability to the project.

MAPPING THE CURRENT PROCESS
Flow Process Chart or Flow Charts:
Conventionally, industrial engineers are used to a tool called flow process charts. These charts are used to identify activities which do not add value so that we can eliminate or minimise them. The symbols used in flow process charts are shown in the figure and stand for the five elements: Operation, Transport, Delay, Storage, and Inspection.

While using flow charts, processes are divided into smaller elements. Each activity is then classified as one of the five elements. Other information, such as the distance moved and the waiting time, is recorded.
MAPPING THE CURRENT PROCESS
Flow Process Chart or Flow Charts:
Flow charts can be developed using these symbols to identify value-added and non-value-added activities. Operation is the only value-added (VA) element and all other elements, including inspection, are non-value-added (NVA). Flow charts are especially useful for manufacturing processes.
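The classification step is easy to mirror in data: record each activity with its element and treat only Operation as value-added. A minimal sketch (the step names are hypothetical):

    # Only Operation adds value; Transport, Delay, Storage and
    # Inspection are candidates for elimination or reduction.
    VALUE_ADDED = {"Operation"}

    steps = [
        ("Turn shaft on lathe", "Operation"),
        ("Move batch to QC", "Transport"),
        ("Wait for gauge", "Delay"),
        ("Check diameter", "Inspection"),
        ("Place in bin", "Storage"),
    ]

    nva = [name for name, element in steps if element not in VALUE_ADDED]
    print(nva)   # the non-value-added activities to attack first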
MAPPING THE CURRENT PROCESS
Relational Process Map (RPM)

One of the differences between Six Sigma and other improvement approaches is its significant dependence on a data-based approach using statistical methods. In Six Sigma, we first convert a real-life practical problem into a statistical one. This is like modelling a process. The process response is usually called the 'key process output variable' (KPOV).

Examples of KPOVs could be the yield of a process, cycle time, quality level (such as customer acceptance), productivity, health index, customer satisfaction index, repair time, reliability, downtime, inventory turns, and market share.

MAPPING THE CURRENT PROCESS
As we can see, our objective will be to maximise some of these KPOVs such as yield, market
share, customer satisfaction index, productivity, and inventory turns. On the other hand, we
would like to minimise some of the other KPOVs such as cycle time, repair time, downtime,
and rejections.

Our first task, therefore, is to decide the objective of our six sigma project, its current level
and our target. While the six sigma level of achievement corresponds to 3.4 defects per
million opportunities (DPMO), we usually cannot reach this level without a series of six
sigma projects in the same direction.

MAPPING THE CURRENT PROCESS
The target level of KPOV should be decided based on expectations of the customers and
industry benchmarks. Thus, if the current yield of the process is 85%, we may strive for 95%
as the next target. Later, we may take up projects to achieve even higher levels. Usually, the
difficulty level increases exponentially as we move closer to the six sigma level.

We first need to understand the process so that we can improve it. The initial step towards this is to map the process for KPOVs and key process input variables (KPIVs).

MAPPING THE CURRENT PROCESS
Developing a relational process map

Using this map we can start developing a model in a form that is similar to a mathematical
function.

Y = f(X1, X2, X3,….,Xn)

Where Y is the KPOV and X1, X2, X3, …, Xn are KPIVs. There can be more than one KPOV.

A simple example of a process map is shown on the next slide. We need to classify the KPIVs (Xs) as either controllable (C) or uncontrollable (U). This is shown in the column for C/U.

MAPPING THE CURRENT PROCESS
A process map for higher education in engineering is shown below.
KPIVs (Xs), with C = controllable and U = uncontrollable:
X1 Quality of students (C/U)
X2 Qualification of professors (C)
X3 Course contents (C)
X4 Method of delivery (C)
X5 Exam pattern (C)
X6 Evaluation criteria (U)
X7 Facilities (C)
X8 Teaching hours (U)
X9 Lab hours (C)
X10 Project guidance (U)

Process: Education of students

KPOVs (Ys):
Y1 Quality of results
Y2 Placements
Y3 Ranking
Y4 Students satisfaction
Y5 Quality of projects
Y6 All round development
Y7 Cost
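Held as a simple data structure, the same map makes it easy to filter for the inputs a project team can actually act on. A minimal Python sketch of the education example (the representation is our own illustration):

    # KPIVs with their classification: "C" = controllable, "U" = uncontrollable.
    kpivs = {
        "Quality of students": "C/U",
        "Qualification of professors": "C",
        "Course contents": "C",
        "Method of delivery": "C",
        "Exam pattern": "C",
        "Evaluation criteria": "U",
        "Facilities": "C",
        "Teaching hours": "U",
        "Lab hours": "C",
        "Project guidance": "U",
    }
    kpovs = ["Quality of results", "Placements", "Ranking",
             "Students satisfaction", "Quality of projects",
             "All round development", "Cost"]

    # Improvement effort goes first to the controllable Xs.
    controllable = [x for x, c in kpivs.items() if "C" in c]
    print(controllable)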
MAPPING THE CURRENT PROCESS
SIPOC Diagram

SIPOC is used at the macro or top level and stands for Supplier-Input-Process-Output-
Customer. A SIPOC diagram is useful in understanding the overall perspective of the process. Sometimes, SIPOC is called a '50,000-foot view'.

A SIPOC diagram provides a high-level view of a process by documenting its suppliers, inputs, process, outputs, and customers. It visualizes how everyone in the process receives materials or data from each other, and is often used to improve or understand processes that impact customer experience.

MAPPING THE CURRENT PROCESS
How to create a SIPOC diagram in 7 steps

Using the SIPOC model is simple, but it’s actually best practice not to follow the acronym in order. We recommend starting with the “process” section as that’s often the easiest place to begin, but you can also work backwards from “customers” to “suppliers.” For that reason, teams sometimes call this tool a COPIS diagram instead.

MAPPING THE CURRENT PROCESS
Here’s how to create a SIPOC diagram:

1. Choose a process

Select the process you want to visualize with your SIPOC diagram. This can be a new business
process you want to implement or an existing process you want to optimize. Creating a SIPOC
diagram can help you understand the process, brainstorm ideas for improvement, and provide a
high-level overview of the process to help stakeholders make decisions.

For example, imagine you want to improve shipping and delivery of your product. A SIPOC
diagram can help you identify inefficiencies, ensure you’re managing suppliers in the best
possible way, and determine whether you’re delivering a quality product to customers.

MAPPING THE CURRENT PROCESS
2. Define the process: P

Instead of completing your SIPOC diagram in order, it’s often easiest to start with the “P”
section and define your process first. Break the process down into 4-5 high-level steps, each
with its own action and subject. If you want, you can organize these steps as a flow chart, with
each one feeding into the other.

To continue our product shipping example, the process can be broken down into the
following steps:

▪ Customer checks out

▪ Invoice sent to warehouse

MAPPING THE CURRENT PROCESS
▪ Warehouse team prepares shipment

▪ Distribution company picks up shipment

▪ Distribution company carries shipment to destination

If your process is long and contains many different steps, try to group batches of smaller
steps together. For example, you could use the broader step “Invoice sent to warehouse” to
stand in for all the details of how information is transferred from your ecommerce platform
to the shipping warehouse. Remember that the purpose of a SIPOC diagram is to provide a
high-level overview, not a detailed view.

MAPPING THE CURRENT PROCESS
3. List the outputs: O

Identify the outputs of the process. This helps you understand what you get from the
resources you invest in the process, and what customers are actually receiving. Outputs can
be things like materials, products, services, or information - essentially anything you,
internal team members, or customers get out of the process. Ideally, outcomes should
correspond with customer requirements.

In the product shipping SIPOC example above, the outputs are:

▪ Customers get the product within a certain time frame

▪ Your company receives money for the product

MAPPING THE CURRENT PROCESS
4. Identify the customers: C

Customers are the people who receive the outputs or benefit from the process. Keep in mind
that customers don’t have to be external - they can also include co-workers and internal
stakeholders. Let's use a different example instead of the shipping scenario. Say you're
preparing your company's annual retreat. In this scenario, your customers and stakeholders
would be the team members attending the event.

For the shipping example, you could list the following customers: online shoppers (who
receive the product), and your company (which receives money for the product).

MAPPING THE CURRENT PROCESS
5. List the inputs: I

Inputs are the resources you need for the process to function properly. Similar to outputs,
these can be things like materials, products, services, or information. Listing the inputs helps you understand resource requirements for the process and determine whether you’re getting the materials you need from your suppliers.

For your product shipping process, this could include customer shipping and payment
information, online payment services, packaging services, packaging materials, warehouse
space, and delivery trucks.

MAPPING THE CURRENT PROCESS
6. Identify suppliers: S

Suppliers are where you get each of the inputs of the process. This step helps you understand how
many suppliers you’re working with and whether you’re managing them in the most efficient way.

In our product shipping example, that could include the following:

▪ Customers: Provide shipping and payment information

▪ Warehouse team members: Offer packaging services

▪ Packaging manufacturer: Create packaging materials

▪ Warehouse leasing company: Provide warehouse space

▪ Delivery services: Provide delivery trucks

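Once all five sections are filled in, the diagram is just structured data. A minimal sketch capturing the shipping example above (the representation is our own, not a standard SIPOC format):

    sipoc = {
        "Suppliers": ["Customers", "Warehouse team members",
                      "Packaging manufacturer", "Warehouse leasing company",
                      "Delivery services"],
        "Inputs": ["Shipping and payment information", "Online payment services",
                   "Packaging services", "Packaging materials",
                   "Warehouse space", "Delivery trucks"],
        "Process": ["Customer checks out", "Invoice sent to warehouse",
                    "Warehouse team prepares shipment",
                    "Distribution company picks up shipment",
                    "Distribution company carries shipment to destination"],
        "Outputs": ["Product delivered within a certain time frame",
                    "Payment received for the product"],
        "Customers": ["Online shoppers", "Your company"],
    }

    # Print the diagram one section at a time, in SIPOC order.
    for section, items in sipoc.items():
        print(section, "->", ", ".join(items))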
MAPPING THE CURRENT PROCESS
7. Share your diagram

A SIPOC diagram is meant to be shared. It’s most valuable as a tool to help you, your team,
and stakeholders understand how a business process works. That means to get the full
benefit of your SIPOC map, you should not only share it, but also make sure it’s easily
accessible.

One of the best ways to share information is with a project management tool, because it lets
you organize project information and tasks in one central place. That means instead of
sending a dozen separate emails, you can share a single version of your diagram with each
stakeholder, then communicate with everyone on one thread.

MAPPING THE CURRENT PROCESS
Value Stream Mapping (VSM)

Value stream mapping helps to understand and streamline processes. It analyses the current flow of materials and information and identifies opportunities for improvement in the process. VSM is used in lean Six Sigma projects.

