Measurement System Analysis: Subhransu Sekhar Mohanty

The document outlines the fundamentals of Measurement System Analysis (MSA), emphasizing its significance in quality control and the various components involved, such as measurement variation, gage studies, and statistical tools. It details key properties of effective measurement systems, the types of measurement instruments used in industries, and the factors influencing measurement accuracy. Additionally, it discusses the impact of measurement variation on data interpretation and decision-making.


MSA

Measurement
System Analysis
Subhransu Sekhar Mohanty
Contents

Module 1: Introduction to MSA
• Understanding the Effect of Measurement Variation
• Importance of MSA in Quality Control
• Overview of Two Categories of MSA
• Key Terminology in Measurement Systems

Module 2: Fundamental Concepts of MSA
• True vs. Reference Value
• Resolution or Discrimination
• Accuracy vs. Precision
• Bias and Linearity
• Measurement Variation and Its Impact

Module 3: Precision and Its Components
• Precision - Repeatability
• Precision - Reproducibility
• Understanding Normal Distribution
• Precision to Tolerance Ratio (PTR)
• Measurement and True Value Relationship

Module 4: Gage Studies and Analysis
• Introduction to Gage R&R Studies
• Gage, Measurement System, and Gage R&R
• Reference Value, Discrimination, and Bias
• Precision, Linearity, and Sensitivity
• Type 1 Gage Study
• Nested Gage R&R Study and Its Interpretation

Module 5: Statistical Tools for MSA
• Descriptive Statistics Overview
• Run Charts and Data Visualization
• Consistency, Uniformity, and Capability Analysis
• Measurement Uncertainty and its Significance

Module 6: Calibration and Measurement System Performance
• Calibration System Fundamentals
• Measurement Systems Performance Assessment
• ME&T (Measuring Equipment and Techniques) and Reference Standard
• Calibration, Transfer, and Master Standards
• Working and Check Standards
• Measurement Systems Traceability

Module 7: Variations and Effects in Measurement Systems
• Properties of Measurement Systems
• Variations in Measurement Systems
• Effects of Measurement System Variability

Module 8: Attribute Agreement Analysis (AAA)
• Introduction to Attribute Agreement Analysis
• AAA - Agreement Within Themselves (Consistency)
• AAA - Agreement with Standard (Correctness)
• AAA - Agreement with Each Other
• AAA - All Operators vs. Standard
• Consistency and Correctness Plots

Module 9: Advanced Statistical Measures in MSA
• Understanding Kappa Value
• What is Kendall's Coefficient of Concordance?
• Kappa and Kendall's Coefficient for Consistency
• Kappa and Kendall's Coefficient for Correctness
• Kappa and Kendall's Coefficient - Amongst Themselves
Module 1: Introduction to MSA
• Understanding the Effect of Measurement Variation
• Importance of MSA in Quality Control
• Overview of Two Categories of MSA
• Key Terminology in Measurement Systems


Introduction to MSA (Measurement
System Analysis)
What is Measurement?
• Measurement is the process of assigning numbers or values to objects to represent their properties and relationships. This concept was first defined by C. Eisenhart in 1963.
• The act of assigning numbers is called the measurement process, and the resulting number is the measurement value.

What is a Measurement System?

A measurement system includes all the tools, methods, standards, personnel, and conditions used to measure a feature or characteristic. It is the complete process of obtaining measurements.

What is Measurement System Analysis (MSA)?

Measurement System Analysis (MSA) is a critical tool in quality control that helps ensure the accuracy and reliability of data collected from measurement systems. This module provides an overview of the fundamental aspects of MSA, explaining its significance, key categories, and essential terminologies.



Observed vs. Actual Process Variation

Measurement system variability can affect decisions about the stability, target, and variation of a process. The basic relationship between the actual and the observed process variation is:

σ²obs = σ²actual + σ²msa

where:
• σ²obs (Observed Process Variance) is the total variance observed in the process, which includes both the actual process variation and the measurement system variation.
• σ²actual (Actual Process Variance) represents the inherent variability of the process itself, excluding measurement errors.
• σ²msa (Measurement System Variance) accounts for variation introduced by the measurement system, including factors like repeatability and reproducibility (R&R).
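Because independent variances add, the relationship above lets us back out the actual process spread once the measurement system spread is known. A minimal sketch, using illustrative numbers of my own (not from the document):

```python
import math

# sigma_obs^2 = sigma_actual^2 + sigma_msa^2, so the actual process
# spread can be recovered once a Gage R&R study has sized sigma_msa.
sigma_obs = 0.050   # mm, standard deviation seen in SPC data (illustrative)
sigma_msa = 0.020   # mm, standard deviation from the measurement system (illustrative)

sigma_actual = math.sqrt(sigma_obs**2 - sigma_msa**2)
print(f"actual process std dev = {sigma_actual:.4f} mm")
```

Note that the observed spread (0.050 mm) overstates the true process spread (about 0.046 mm here); the larger the measurement system variance, the larger the overstatement.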



Key Properties of a Good Measurement System

1️⃣ High Sensitivity & Discrimination
• The instrument should measure in small increments compared to the process variation or tolerance limits.
• 10-to-1 Rule: The instrument should divide the tolerance into at least 10 parts for better precision.
2️⃣ Statistical Stability
• The measurement system should be consistent over time.
• Any variation should come from common causes (normal fluctuations) and not special causes (unexpected errors).
3️⃣ Small Measurement Variation Compared to Tolerance
• For product control, measurement variation should be much smaller than the specification limits.
4️⃣ Small Measurement Variation Compared to Process Variation
• For process control, the measurement system's variation should be much smaller than the total process variation (6-sigma range) from the MSA study.
✅ Example: If a micrometer is used to measure a part with a tolerance of ±0.1 mm, it should have a resolution of at least 0.01 mm (10-to-1 Rule).
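The 10-to-1 rule is easy to check programmatically. A small sketch (the helper name is my own, not from the document):

```python
def divides_tolerance(resolution: float, tolerance_span: float, parts: int = 10) -> bool:
    """10-to-1 rule: the instrument's resolution should divide the
    tolerance span into at least `parts` increments."""
    return tolerance_span / resolution >= parts

# Tolerance of ±0.1 mm gives a total span of 0.2 mm.
print(divides_tolerance(resolution=0.01, tolerance_span=0.2))  # 0.2/0.01 = 20 divisions
print(divides_tolerance(resolution=0.05, tolerance_span=0.2))  # 0.2/0.05 = 4 divisions
```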



Instruments vs. Gauges
In Measurement System Analysis (MSA), the terms "Instrument" and "Gauge" are often used interchangeably, but they have distinct meanings based on their application and functionality.

1. Instrument:
Definition: A device or system used to measure a physical quantity with higher accuracy, resolution, and sometimes digital processing.
Purpose: Typically used for precise measurement and analysis of a parameter.
Examples: Digital Vernier Caliper, Coordinate Measuring Machine (CMM), Oscilloscope, Spectrophotometer, Micrometer with digital readout.

2. Gauge:
Definition: A tool or device designed to check, compare, or verify whether a dimension or characteristic falls within a specified tolerance range.
Purpose: Primarily used for Go/No-Go decisions or limit checking rather than detailed measurement.
Examples: Plug Gauge (Go/No-Go), Snap Gauge, Thread Gauge, Dial Gauge, Bore Gauge.


SWIPE
The acronym S.W.I.P.E. represents five critical factors that influence the accuracy and reliability of a measurement system. These elements help in identifying and minimizing measurement errors, making SWIPE a useful error model for MSA (Measurement System Analysis).

Standard (S)
• Refers to the reference or calibration standard used to verify measurement accuracy.
• Ensures that instruments are properly calibrated and traceable to national/international standards.
✅ Example: A gauge block used to calibrate a micrometer to ensure correct readings.

Workman (W)
• The operator using the instrument.
• Errors can arise due to technique, experience, fatigue, or interpretation of results.
✅ Example: Two inspectors measuring the same part with a micrometer may get slightly different readings due to varying pressure applied.

Instrument (I)
• The measuring device used to take readings.
• Errors can come from wear, misalignment, poor resolution, or incorrect calibration.
✅ Example: A weighing scale that has not been calibrated may give inaccurate weight measurements.

Procedure (P)
• The measurement method or process used.
• Errors occur when procedures are inconsistent, unclear, or not followed correctly.
✅ Example: If an operator measures coating thickness without following a standardized procedure, results may vary.

Environment (E)
• The surroundings where measurements are taken.
• Factors like temperature, humidity, vibrations, lighting, and cleanliness affect accuracy.
✅ Example: A CMM (Coordinate Measuring Machine) in a factory without temperature control may give different readings due to material expansion/contraction.


Cause-and-Effect (Fishbone) Diagram for Measurement System Variability

Part (Workpiece):
• Material differences
• Surface roughness
• Shape & geometry variations
• Dimensional changes due to temperature
• Coating or contamination on the surface

Instrument (Gauge Error):
• Calibration issues
• Wear & tear of the measuring device
• Low resolution or precision
• Incorrect settings or zeroing
• Probe misalignment or improper contact

Environment (External Factors):
• Temperature changes affecting materials
• Humidity variations
• Vibrations from machines or surroundings
• Lighting conditions affecting visual inspections
• Dust & contaminants interfering with readings

Person (Operator Influence):
• Skill level and experience
• Inconsistent measurement technique
• Fatigue or stress affecting judgment
• Subjective interpretation of results
• Handling errors or improper positioning

All four branches feed into the central effect: Measurement System Variability.


Type of Measurement Instruments used in Industries

Instrument | Best Use | Least Count | Limitations
Vernier Caliper | Measuring external, internal dimensions, depth | 0.01 mm (Digital) / 0.02 mm (Analog) | Errors due to misalignment, parallax, and jaw wear
Micrometer | Precision measurement of small dimensions | 0.001 mm (Digital) / 0.01 mm (Analog) | Limited range (usually 25 mm per micrometer), requires skilled handling
Dial Gauge (Indicator) | Checking runout, flatness, and alignment | 0.01 mm / 0.001 mm (High precision) | Sensitive to shocks and incorrect readings due to stylus pressure
Height Gauge | Measuring vertical dimensions and marking | 0.01 mm (Digital) / 0.02 mm (Analog) | Limited to surface plates, requires stable setup
Depth Gauge | Measuring depth of slots, holes | 0.01 mm (Digital) / 0.02 mm (Analog) | Requires precise perpendicular alignment for accuracy
Bore Gauge | Measuring internal diameters of holes and bores | 0.01 mm / 0.001 mm | Requires skilled use, can be affected by bore surface roughness
CMM (Coordinate Measuring Machine) | 3D measurement of complex parts | 0.001 mm | Expensive, needs a controlled environment
Profile Projector | Checking shape, contour, and thread measurements | 0.01 mm | Requires skilled interpretation and fixture setup


Type of Measurement Instruments used in Industries (continued)

Instrument | Best Use | Least Count | Limitations
Optical Comparator | Measuring angles, contours, and profiles | 0.001 mm | Susceptible to optical errors, needs a dark environment
Roughness Tester | Surface finish measurement | 0.001 µm (Ra) | Limited range, affected by probe wear and surface cleanliness
Weighing Scale | Measuring weight | 0.001 g - 1 g (based on type) | Affected by environmental conditions like air currents and vibrations
Thermocouple | Measuring temperature | 0.1°C - 1°C | Requires calibration, affected by ambient conditions
Infrared Thermometer | Non-contact temperature measurement | 0.1°C - 1°C | Accuracy affected by material emissivity and ambient light
Tachometer | Measuring rotational speed | 1 RPM / 0.1 RPM (laser type) | Optical types need a clear target; mechanical types cause friction losses
Ultrasonic Thickness Gauge | Measuring thickness of materials | 0.01 mm / 0.001 mm | Affected by material properties and couplant used
Force Gauge | Measuring force, tension, and compression | 0.01 N / 0.1 N | Load cell degradation over time
Torque Wrench | Measuring applied torque on fasteners | 0.1 Nm / 0.01 Nm | Requires calibration, affected by operator handling

Note: µm (micrometre) = 10⁻⁶ m; Ra = Roughness Average.
Type of Measurement Instruments used in Industries (continued)

Instrument | Best Use | Least Count | Limitations
Laser Distance Meter | Measuring large distances | 0.1 mm | Affected by reflection and atmospheric conditions
Hardness Tester (Brinell, Rockwell, Vickers, Mohs) | Material hardness measurement | Varies with method | Requires sample preparation, indentation errors possible
Sound Level Meter | Measuring noise levels | 0.1 dB | Affected by ambient noise and environmental factors
pH Meter | Measuring acidity/alkalinity of solutions | 0.01 pH | Needs regular calibration and correct electrode maintenance
Densitometer | Measuring optical or material density | 0.001 g/cm³ | Needs calibration and proper sample preparation
Flow Meter (Ultrasonic, Magnetic, Turbine, etc.) | Measuring fluid flow rate | Varies by type | Affected by viscosity, temperature, and impurities
Colorimeter | Measuring color intensity | 0.01 Absorbance | Sensitive to lighting conditions, requires calibration
Coating Thickness Gauge | Measuring paint/plating thickness | 0.1 µm | Affected by substrate material and coating uniformity
Leak Tester (Pressure Decay, Helium, Bubble Test, etc.) | Detecting leaks in containers, pipes, etc. | Varies by method | Requires controlled conditions, affected by environmental pressure

Note: pH (Potential of Hydrogen) is a 0-14 scale: pH < 7 is acidic (e.g., lemon juice, vinegar), pH = 7 is neutral (e.g., pure water), pH > 7 is alkaline (e.g., baking soda, soap).
Type of Measurement Instruments used in Industries (continued)

Instrument | Best Use | Least Count | Limitations
Oxygen Meter | Measuring oxygen concentration in air/liquids | 0.1% | Affected by temperature, pressure, and sensor degradation
Magnetic Particle Tester | Detecting surface/subsurface defects in metals | Visual | Requires skilled interpretation and surface preparation
Radiographic Testing (X-ray, Gamma-ray) | Internal defect detection in materials | High-resolution | Expensive, requires safety precautions
Eddy Current Tester | Detecting cracks and material inconsistencies | 0.01 mm | Affected by material conductivity and probe positioning


1. Understanding the Effect of
Measurement Variation

Measurement variation refers to the inconsistencies that arise in measurement systems due to factors such as equipment, operators,
environment, and method variations. These variations can lead to incorrect data interpretation, resulting in poor decision-making
and process inefficiencies.

Key Sources of Measurement Variation:


• Bias – A systematic deviation from the true value.
Note:
The True Value is the actual, ideal, or theoretically correct measurement of a characteristic, free from all measurement errors. In practice, the
true value is often unknown and can only be estimated using highly precise instruments or reference standards.
• Repeatability (Equipment Variation) – Variation when the same operator measures the same item multiple times with the same
instrument.
• Reproducibility (Appraiser Variation) – Variation when different operators measure the same item using the same instrument.
• Stability – The consistency of measurements over time.
• Linearity – The change in bias over the measurement range.
Understanding these variations is crucial for assessing whether a measurement system is reliable and capable of supporting quality
control efforts.
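Repeatability and reproducibility, the two precision components listed above, can be illustrated with a simplified variance split (a full Gage R&R uses ANOVA; this sketch, with made-up readings, only shows the idea):

```python
import statistics as st

# Hypothetical data: two operators each measure the same part five times
# with the same instrument.
operator_a = [10.02, 10.01, 10.03, 10.02, 10.02]
operator_b = [10.05, 10.06, 10.05, 10.04, 10.05]

# Repeatability (equipment variation): spread of repeated readings
# within each operator, pooled across operators.
repeatability_var = st.mean([st.variance(operator_a), st.variance(operator_b)])

# Reproducibility (appraiser variation): spread between operator averages.
reproducibility_var = st.variance([st.mean(operator_a), st.mean(operator_b)])

print(f"repeatability variance   = {repeatability_var:.6f}")
print(f"reproducibility variance = {reproducibility_var:.6f}")
```

Here each operator is individually consistent (small within-operator variance), but their averages differ, so reproducibility dominates the measurement error.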



2. Importance of MSA in Quality Control
MSA plays a vital role in ensuring that data collected for process control and decision-making is accurate and reliable. Without a
proper MSA, measurement errors may lead to incorrect conclusions, affecting product quality and customer satisfaction.

✅ Improves the reliability of measurement data


Example: In an automotive assembly plant, an MSA study ensures that the calipers used to measure brake pad thickness give
consistent and accurate readings. This prevents incorrect rejections or approvals of parts.

✅ Helps identify and reduce sources of variation in measurements


Example: A lead-acid battery manufacturer notices variations in voltage measurements between different operators. MSA helps
identify that inconsistent probe placement is causing the variation, leading to better training and standardized procedures.

✅ Enhances process capability analysis and statistical process control (SPC)


Example: In an electronics manufacturing unit, an MSA study on PCB thickness measurement ensures that SPC charts reflect true
process variations rather than errors from faulty micrometers.

✅ Ensures compliance with industry standards (such as IATF 16949 in automotive manufacturing)
Example: An automotive parts supplier undergoes an MSA study to validate torque wrenches used in tightening bolts. This ensures
compliance with IATF 16949 and prevents audit non-conformities.

✅ Aids in reducing rework, scrap, and overall costs due to measurement errors
Example: A precision machining company finds that incorrect roughness tester readings cause unnecessary rejection of good parts.
MSA helps detect calibration issues, reducing scrap and saving costs.
3. Overview of Two Categories of MSA
MSA is categorized based on the type of data being analyzed: Variable MSA for continuous data and Attribute MSA for discrete data.

1. Variable MSA (Continuous Data Analysis)
When to Use:
• When measurements are numerical (e.g., length, weight, voltage, pressure).
• Common in manufacturing, machining, and lab testing.
Key Analysis Method: Gauge Repeatability & Reproducibility (GR&R)
• Repeatability → Variation when the same person measures the same part with the same tool multiple times.
• Reproducibility → Variation when different people measure the same part using the same tool.
• Bias, Stability, Linearity → Other factors affecting measurement accuracy over time.
✅ Example: In a battery manufacturing plant, engineers perform a GR&R study on a micrometer used to measure electrode thickness. If the GR&R variation is too high, they may need to calibrate the micrometer or retrain operators.

2. Attribute MSA (Discrete Data Analysis)
When to Use:
• When measurements are categorical (e.g., Pass/Fail, Go/No-Go, Accept/Reject).
• Common in visual inspections, defect grading, and quality audits.
Key Analysis Methods:
• Kappa Studies → Evaluate agreement among inspectors beyond chance (e.g., Cohen's Kappa).
• Attribute Agreement Analysis (AAA) → Checks consistency in inspectors' judgments.
• Signal Detection Analysis → Determines whether inspectors correctly identify defects vs. false alarms.
✅ Example: In an automotive parts factory, inspectors visually classify painted components as "OK" or "Defective." An Attribute Agreement Analysis (AAA) helps verify whether inspectors consistently agree on defect classification.
4. Key Terminology in Measurement Systems
Understanding the basic terminologies used in MSA is essential for effective analysis and interpretation of results.
• Accuracy – The closeness of a measured value to the true value.
• Precision – The degree to which repeated measurements under unchanged conditions show the same results.
• Bias – A consistent deviation from the true value.
• Repeatability – The variation in measurements taken under identical conditions by the same operator.
• Reproducibility – The variation in measurements when different operators use the same equipment.
• Resolution – The smallest measurement increment that an instrument can detect.
• Stability – The ability of a measurement system to produce consistent results over time.
• Linearity – The consistency of bias across the measurement range.
• GR&R Study (Gauge Repeatability & Reproducibility) – A statistical method to analyze the variability in
measurement systems.
• Tolerance % in MSA – The percentage of the tolerance (specification width) consumed by measurement variation.



Module 2: Fundamental Concepts of MSA
• True vs. Reference Value
• Resolution or Discrimination
• Accuracy vs. Precision
• Bias and Linearity
• Measurement Variation and Its Impact


Fundamental Concepts of MSA
(Measurement System Analysis)
This module explores the core principles of MSA, emphasizing the importance of accurate and precise measurement systems. It covers
key concepts such as true vs. reference value, resolution, accuracy vs. precision, bias, linearity, and measurement variation.



1. True vs. Reference Value
A fundamental aspect of any measurement system is the distinction between the true value and the reference value.
• True Value: The actual value of a measured characteristic in an ideal condition. It is often unknown because all measurement
systems have some level of imperfection.
• Reference Value: An accepted value determined from a reliable source, such as a calibrated standard or master gauge, which serves as a benchmark for measurement accuracy.
A high-quality measurement system should have minimal deviation from the reference value, ensuring consistency and reliability in
data collection.

2. Resolution or Discrimination
Resolution (or Discrimination) refers to the smallest unit of measurement that a system can detect and display. It is crucial for
ensuring that measurements capture meaningful variations in a process.

Key Aspects of Resolution:


• A measuring instrument should have a resolution that is at least one-tenth (10%) of the process variation to be effective.
• If resolution is too low, small changes in measurements will go undetected, leading to poor decision-making.
• Excessive resolution without process capability analysis may lead to unnecessary complexity in data interpretation.
For example, if a micrometer has a resolution of 0.001 mm, but the process requires measurement accuracy up to 0.0001 mm, the
micrometer lacks the required resolution for proper measurement.



3. Accuracy vs. Precision
A common misconception in measurement is that accuracy and precision are the same. While both are essential for a reliable
measurement system, they represent different characteristics.
Accuracy
• The closeness of a measured value to the actual or reference value.
• A measurement system is considered accurate if it consistently provides values close to the reference standard.
• Example:
1️⃣ Weighing Scale:
A scale is used to weigh a 1.00 kg standard weight.
Readings: 1.01 kg, 0.99 kg, 1.00 kg, 1.02 kg, 0.98 kg
Interpretation: The readings are close to the true value (1.00 kg), so the scale is accurate, even though minor variation exists.

2️⃣ Shooting at a Target:


A shooter aims at the bullseye and hits near the center consistently.
Interpretation: The shots are close to the intended target, meaning high accuracy.

3️⃣ Digital Thermometer for Body Temperature:


The normal human body temperature is 37°C.
A thermometer gives readings: 37.1°C, 37.0°C, 36.9°C
Interpretation: The thermometer is accurate because it gives values very close to 37°C.



3. Accuracy vs. Precision
Precision
• The repeatability of measurements—how close multiple measurements are to each other, regardless of their closeness to the true
value.
• A system can be precise but not accurate, meaning it produces consistent readings that are incorrect.
• Example:
1️⃣ Manufacturing a Metal Rod:
• The required length is 50.00 mm.
• A machine produces rods measuring 49.80 mm, 49.81 mm, 49.79 mm, 49.80 mm.
• Interpretation: The values are consistent (high precision), but not accurate because they are all slightly off from the true value (50.00 mm).
2️⃣ Dart Throwing on a Target:
• A player throws darts, and all land in the same corner of the board, far from the bullseye.
• Interpretation: The throws are precise (close to each other) but not accurate (not near the bullseye).
3️⃣ Blood Pressure Monitor:
• A monitor gives repeated readings of 130/85 mmHg when the actual blood pressure is 120/80 mmHg.
• Interpretation: The device is precise (consistent readings) but not accurate because the readings are higher than the actual value.

Ideal Case: High Accuracy & High Precision


✔ Measurements are both close to the true value and consistent.
✔ Example: A machine produces parts exactly at 50.00 mm every time with minimal variation (e.g., 50.01, 49.99, 50.00, 50.00, 50.00 mm).
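The metal-rod example above can be quantified: bias (mean offset from the true value) captures accuracy, while the standard deviation of repeated readings captures precision. A short sketch:

```python
import statistics as st

# The metal-rod example: required length is 50.00 mm.
true_value = 50.00
readings = [49.80, 49.81, 49.79, 49.80]

bias = st.mean(readings) - true_value   # accuracy error (systematic offset)
spread = st.stdev(readings)             # precision (closeness of repeats)

print(f"bias   = {bias:+.2f} mm")   # consistently ~0.20 mm short: not accurate
print(f"spread = {spread:.4f} mm")  # tiny spread: very precise
```

A large bias with a small spread is exactly the "precise but not accurate" case: consistent readings that are consistently wrong.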



3. Accuracy vs. Precision (Summary)

Condition | Description | Example
Accurate & Precise | Close to true value and consistent | A properly calibrated digital balance
Accurate but Not Precise | On target but inconsistent | A scale that gives fluctuating readings around the true value
Precise but Not Accurate | Consistent but incorrect | A stopwatch that is consistently 5 seconds slow
Neither Accurate Nor Precise | Off-target and inconsistent | A malfunctioning instrument with high variation

(In the original quadrant chart, accuracy runs low to high on the vertical axis and precision low to high on the horizontal axis.)
4. Bias, Linearity and Stability
Bias
Bias is the systematic deviation of a measurement from the true or reference value. It represents the difference between the observed measurement and the actual standard.
Example of Bias:
• A voltmeter that consistently reads 9.8V when measuring a 10V reference source has a bias of -0.2V.
To correct bias:
• Calibration adjustments can be made.
• The cause of systematic errors should be identified (e.g., instrument wear, incorrect zeroing).

Linearity
Linearity refers to how bias varies across the measurement range. A measurement system should ideally have consistent bias throughout its operational range.
Example of Linearity Issue:
• A temperature sensor may read 0.5°C too high at low temperatures, but 1.5°C too high at high temperatures, indicating a nonlinear bias.
Linearity problems suggest that the measurement system does not respond uniformly across its entire range, leading to unreliable results.

Stability
Stability refers to how consistent a measurement system remains over time. It checks whether bias (systematic error) changes over a period. A stable system means measurements stay statistically controlled without unexpected shifts.
✅ Example: If a weighing scale gives consistent readings for the same object over weeks, it is stable. If the readings drift, the scale lacks stability.
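A linearity study typically fits bias against the reference value; a nonzero slope signals a linearity error. A sketch with invented data echoing the temperature-sensor example (bias growing from +0.5°C to +1.5°C across the range):

```python
# Hypothetical linearity-study data (not from the document).
refs = [0.0, 25.0, 50.0, 75.0, 100.0]   # reference temperatures, °C
bias = [0.5, 0.75, 1.0, 1.25, 1.5]      # observed bias at each reference

# Ordinary least-squares fit of bias vs. reference value.
n = len(refs)
mx = sum(refs) / n
my = sum(bias) / n
slope = sum((x - mx) * (y - my) for x, y in zip(refs, bias)) / \
        sum((x - mx) ** 2 for x in refs)
intercept = my - slope * mx

# A slope significantly different from zero indicates linearity error;
# a flat line at a nonzero intercept would indicate pure bias.
print(f"bias ≈ {intercept:.2f} + {slope:.3f} × reference")
```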
4. Bias, Linearity and Stability (Possible Causes)

Possible Causes of Excessive Bias:
1. Calibration Issues – Instrument needs calibration, improper calibration, or error in the master.
2. Instrument Wear & Damage – Worn or damaged instrument, equipment, fixture, or master.
3. Instrument Quality – Poor design or conformance, linearity error.
4. Incorrect Measurement Setup – Wrong gage, improper method (setup, loading, clamping, technique).
5. Measurement Errors – Measuring the wrong characteristic, distortion (gage or part).
6. Environmental Factors – Temperature, humidity, vibration, cleanliness.
7. Application & Human Factors – Part size, position, operator skill, fatigue, observation errors (readability, parallax).
8. Theoretical or Systematic Errors – Violation of assumptions, errors in applied constants.

Possible Causes of Linearity Error:
1. Calibration Issues – Instrument needs calibration, calibration interval too long, improper calibration (not covering full operating range), or incorrect master use.
2. Instrument Wear & Damage – Worn or damaged instrument, equipment, fixture, or master (minimum/maximum error).
3. Poor Maintenance – Issues with air, power, hydraulics, filters, corrosion, rust, or cleanliness.
4. Instrument Quality & Design – Poor design, lack of robustness in instrument or measurement method.
5. Incorrect Measurement Setup – Wrong gage for the application, improper setup (loading, clamping, technique).
6. Size-Dependent Distortion – Measurement distortion changes with part size.
7. Environmental Factors – Temperature, humidity, vibration, cleanliness.
8. Application & Human Factors – Part size, position, operator skill, fatigue, observation errors (readability, parallax).
9. Theoretical or Systematic Errors – Violation of assumptions, errors in applied constants.

Possible Causes of Instability:
1. Calibration Issues – Instrument needs calibration, calibration interval too long, improper calibration or master use.
2. Instrument Wear & Aging – Worn or damaged instrument, equipment, fixture, normal aging, or obsolescence.
3. Poor Maintenance – Issues with air, power, hydraulics, filters, corrosion, rust, or cleanliness.
4. Instrument Quality & Design – Poor design, lack of robustness in instrument or measurement method.
5. Incorrect Measurement Setup – Wrong technique (setup, loading, clamping), distortion (gage or part).
6. Environmental Factors – Temperature, humidity, vibration, cleanliness, environmental drift.
7. Application & Human Factors – Part size, position, operator skill, fatigue, observation errors (readability, parallax).
8. Theoretical or Systematic Errors – Violation of assumptions, errors in applied constants.
5. Measurement Variation and Its Impact
Measurement variation refers to the difference observed when measuring the same characteristic multiple
times. It arises from several sources and can significantly impact quality control and decision-making.
Types of Measurement Variation
Equipment Variation (Repeatability)
o The variation in measurements taken by the same person, using the same instrument, on the same part.
o Affects the ability to obtain consistent results under identical conditions.
Appraiser Variation (Reproducibility)
o The variation when different operators measure the same part with the same instrument.
o A lack of standard measurement techniques can increase reproducibility errors.
Part-to-Part Variation
o The natural variation in the parts being measured, due to manufacturing differences.
Stability (Drift over Time)
o The consistency of measurements over extended periods. A drifting measurement system indicates instrument wear or
environmental influences (e.g., humidity, temperature changes).
Linearity (Non-Uniform Bias Across Range)
o As discussed earlier, linearity variation can distort results at different points in the measurement range.



Impact of Measurement Variation
1. Inaccurate Data Leads to Incorrect Quality Decisions
• Measurement errors can cause false acceptance or rejection of products.
• Operators may incorrectly adjust processes based on unreliable data, leading to further deviations.
2. Risk of Rejecting Good Parts (Type I Error) or Accepting Defective Parts (Type II Error)
• Type I Error (Producer’s Risk): A good part is incorrectly rejected, leading to unnecessary scrap, rework, or increased costs.
• Type II Error (Consumer’s Risk): A defective part is mistakenly accepted, causing field failures, customer complaints, and warranty claims.
3. Process Capability (Cp, Cpk) Depends on Accurate Measurements
• Process capability indices Cp & Cpk measure how well a process meets specifications.
• If measurement variation is high, even a capable process may appear unstable, leading to incorrect process improvements.
4. Increased Costs Due to Scrap, Rework, or Unnecessary Adjustments
• An unstable measurement system can lead to unnecessary process corrections, wasting time, materials, and resources.
• Frequent recalibration of instruments is needed if measurement errors are high, adding operational costs.
5. Impact on Root Cause Analysis and Problem-Solving
• If measurement data is unreliable, root cause analysis for defects becomes ineffective.
• Teams may chase wrong corrective actions, leading to extended problem resolution times.
6. Compliance and Regulatory Impact
• Industries like automotive, aerospace, and medical devices require strict measurement controls to meet ISO, IATF 16949, and FDA standards.
• Measurement errors can cause non-conformance issues, leading to audits, penalties, and loss of certifications.
7. Supplier and Customer Disputes
• Measurement inconsistency between supplier and customer can cause rejection of shipments.
Module 3: Precision and Its Components
• Precision - Repeatability
• Precision - Reproducibility
• Understanding Normal Distribution
• Precision to Tolerance Ratio (PTR)
• Measurement and True Value Relationship
Precision and Its Components
Precision in a measurement system refers to its ability to provide consistent and repeatable results. It is a crucial factor
in ensuring the reliability of quality control data. This module explores the different components of precision, the role of
normal distribution in measurement analysis, the importance of the Precision-to-Tolerance Ratio (PTR), and the
relationship between measurement and true values.

Repeatability
Repeatability, also known as "within-appraiser" variability, refers to the variation in measurements when the same appraiser uses the
same instrument multiple times on an identical characteristic of the same part. It represents the inherent variation of the equipment
under fixed and defined measurement conditions (part, instrument, standard, method, operator, environment, and assumptions).
Though often called equipment variation (EV), repeatability actually includes all sources of within-system variation, making it a form
of common cause (random) error.
Possible Causes of Poor Repeatability:
1. Sample Variability – Form, position, surface finish, taper, inconsistency.
2. Instrument Issues – Wear, repairs, fixture failure, poor quality or maintenance.
3. Standard Quality – Class, wear, or inconsistency in reference standards.
4. Measurement Method – Setup, technique, zeroing, holding, clamping variations.
5. Appraiser Factors – Technique, position, experience, skill, fatigue.
6. Environmental Factors – Temperature, humidity, vibration, lighting, cleanliness fluctuations.
7. Systematic Errors – Assumption violations, improper operation.
8. Equipment & Design Issues – Lack of robustness, poor uniformity, wrong gage selection.
9. Distortion & Rigidity – Gage or part distortion, lack of rigidity.
10. Application & Observation Errors – Part size, position, readability, parallax issues.
Reproducibility
Reproducibility, also known as "between-appraisers" variability, refers to the variation in measurement averages when different
appraisers use the same instrument to measure the same characteristic on the same part. It is typically relevant for manual instruments
where operator skill influences results but less so for automated systems. More broadly, reproducibility represents variation between
different measurement conditions, including appraisers, gages, labs, and environments (temperature, humidity).
Potential Sources of Reproducibility Error:
1. Part Variation – Differences when measuring different part types (A, B, C) with
the same instrument and method.
2. Instrument Variation – Differences when using different instruments (A, B, C) for
the same parts and operators.
3. Standard Variation – Influence of different setting standards in measurement.
4. Method Variation – Differences due to manual vs. automated systems, zeroing,
holding, or clamping methods.
5. Operator Variation – Differences between appraisers due to training, technique,
skill, or experience.
6. Environmental Factors – Differences caused by temperature, humidity, or other
environmental changes over time.
7. Systematic Errors – Violations of study assumptions, lack of robustness in
instrument or method.
8. Operator Training – Effectiveness of training on measurement consistency.
9. Application Issues – Part size, position, readability, or parallax errors.
1. Precision – Repeatability
Repeatability is the ability of a measurement system to produce consistent results when the same operator measures the same part
using the same equipment under the same conditions. It is also known as equipment variation.
Characteristics of Repeatability:
It reflects the variation due to the measurement instrument.
A system with high repeatability will yield nearly identical results for multiple measurements of the same object.
Low repeatability indicates that the measuring instrument or method is inconsistent.
Example of Repeatability:
A micrometer is used to measure the thickness of a metal sheet five times, yielding results of 2.01 mm, 2.02 mm, 2.01 mm, 2.00 mm,
and 2.02 mm. Since the values are close to each other, the system has good repeatability. However, if the readings varied significantly
(e.g., 2.01 mm, 2.05 mm, 2.10 mm), the instrument would have poor repeatability.

Good Repeatability: 2.01 mm, 2.02 mm, 2.01 mm, 2.00 mm, 2.02 mm
Poor Repeatability: 2.01 mm, 2.05 mm, 2.10 mm
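The two reading sets above can be compared numerically; a minimal sketch using only the standard library, with the readings taken from the example:

```python
import statistics

# Readings from the micrometer example above (mm)
good = [2.01, 2.02, 2.01, 2.00, 2.02]   # good repeatability
poor = [2.01, 2.05, 2.10]               # poor repeatability

for name, data in (("good", good), ("poor", poor)):
    rng = max(data) - min(data)          # spread of the readings
    sigma = statistics.pstdev(data)      # population standard deviation
    print(f"{name}: range = {rng:.2f} mm, sigma = {sigma:.3f} mm")
```

The much tighter range and standard deviation of the first set is what "good repeatability" means quantitatively.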

2. Precision – Reproducibility
Reproducibility is the variation in measurement results when different operators use the same instrument to
measure the same part. It is also known as appraiser variation.
Characteristics of Reproducibility:
• It reflects the variation due to the operator, method, or environmental conditions.
• A measurement system with high reproducibility produces consistent results regardless of who performs the measurement.
• Low reproducibility suggests differences in operator technique, training, or environmental influences.
Example of Reproducibility:
Three inspectors measure the diameter of the same cylindrical part using the same caliper. If their results are
50.02 mm, 50.03 mm, and 50.02 mm, the system has high reproducibility. However, if one inspector records
50.02 mm, another records 50.10 mm, and the third records 49.98 mm, the system has poor reproducibility,
likely due to operator technique differences or misinterpretation of measurement readings.

High reproducibility: 50.02 mm, 50.03 mm, 50.02 mm → Min 50.02 mm, Max 50.03 mm, Range 0.01 mm
Poor reproducibility: 50.02 mm, 50.10 mm, 49.98 mm → Min 49.98 mm, Max 50.10 mm, Range 0.12 mm


3. Understanding Normal Distribution
Normal distribution is a statistical concept used to describe how measurement data is spread around the mean
(average) value. It is also known as the bell curve due to its characteristic shape.
Key Features of Normal Distribution:
• Symmetrical shape with most values clustering around the mean.
• Mean (µ) represents the central value.
• Standard deviation (σ) measures the spread of data points.
• 68-95-99.7 Rule:
o 68% of values fall within ±1σ of the mean.
o 95% fall within ±2σ.
o 99.7% fall within ±3σ.
Importance in MSA:
• Helps in evaluating measurement system consistency.
• A well-calibrated measurement system should follow a normal distribution with minimal bias.
• Large variations or skewed distributions indicate potential measurement errors.
Example of Normal Distribution in Measurements:
If a measurement system is used to check the weight of a product with an average weight of 500g, most measurements should fall within
495g to 505g (assuming a small standard deviation). If there are too many extreme values (e.g., 480g or 520g), it suggests poor
measurement precision.
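The 68-95-99.7 percentages can be recovered numerically from the normal distribution; a minimal sketch using only the standard library (`math.erf`):

```python
import math

def normal_within(k: float) -> float:
    """P(|x - mu| <= k * sigma) for a normal distribution, via the error function."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"P(within ±{k}σ) = {normal_within(k):.4f}")
# -> 0.6827, 0.9545, 0.9973 : the 68-95-99.7 rule
```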

4. Precision-to-Tolerance Ratio (PTR)
Precision-to-Tolerance Ratio (PTR) is a metric used to assess whether a measurement system is precise enough
relative to the required product tolerances.
Formula for PTR:
PTR = (6 × σ_MS) / Tolerance
Where:
• σ_MS = Measurement system standard deviation
• Tolerance = The allowable range of variation in the product specification
Interpreting PTR Values:
• PTR < 10% → Excellent measurement system (high precision).
• PTR between 10-30% → Acceptable but may need improvement.
• PTR > 30% → Poor measurement system (high variability), requiring improvement or replacement.

Example of PTR Calculation:


A diameter measurement system has a standard deviation of 0.02 mm, and the tolerance for the product is ±0.10 mm (total range:
0.20 mm).
PTR = (6 × 0.02) / 0.20 = 0.12 / 0.20 = 0.60 = 60%
Since PTR > 30%, this measurement system is not precise enough and requires improvement (e.g., better instrument, stricter
measurement procedures).
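The PTR calculation is easy to wrap in a small helper; a sketch, with the interpretation thresholds taken from the guideline above:

```python
def ptr(sigma_ms: float, tolerance: float) -> float:
    """Precision-to-Tolerance Ratio: PTR = 6 * sigma_MS / tolerance."""
    return 6 * sigma_ms / tolerance

def interpret_ptr(value: float) -> str:
    """Thresholds from the guideline above."""
    if value < 0.10:
        return "excellent"
    if value <= 0.30:
        return "acceptable"
    return "poor"

# Worked example: sigma_MS = 0.02 mm, total tolerance = 0.20 mm
value = ptr(0.02, 0.20)
print(f"PTR = {value:.0%} -> {interpret_ptr(value)}")  # PTR = 60% -> poor
```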

5. Measurement and True Value Relationship
A true value is the actual, correct value of a measured characteristic, but in practical applications, it is often
unknown due to measurement limitations. Instead, measurement systems aim to provide values that are as close as
possible to the true value.

Key Relationships:
1. Ideal Case: The measured value equals the true value (rare in real-world applications).
2. Bias Present: A consistent deviation from the true value, leading to incorrect data interpretation.
3. Random Variation Present: Fluctuations in measurements due to repeatability and reproducibility issues.

Example:
A 100g standard weight is measured multiple times using a weighing scale.
• If the results are 100.1g, 100.0g, 100.2g, the system is close to the true value.
• If the results are 98.5g, 98.4g, 98.6g, there is a bias in the system.
• If the results vary widely (97g to 103g), both repeatability and reproducibility issues exist.
Impact of Poor Measurement Systems on True Value:
• Incorrect acceptance/rejection of products.
• False alarms in quality control (detecting issues that do not exist).
• Unnecessary process adjustments due to unreliable data.

Module 4: Gage Studies and Analysis
• Introduction to Gage R&R Studies
• Gage, Measurement System, and Gage R&R
• Reference Value, Discrimination, and Bias
• Precision, Linearity, and Sensitivity
• Type 1 Gage Study


Gage Studies and Analysis
Gage Studies are essential tools in Measurement System Analysis (MSA) used to evaluate the capability and performance of
measurement systems. They help determine whether a measurement system is reliable enough for quality control and process
improvement. This module covers Gage R&R studies, key measurement concepts, and the Type 1 Gage Study for assessing
measurement accuracy.

1. Introduction to Gage R&R Studies
Gage Repeatability & Reproducibility (Gage R&R) is a statistical method used to evaluate the amount of variation introduced by a
measurement system. It quantifies how much of the observed variation in a process is due to the measurement system rather than the
process itself.
Objectives of Gage R&R Studies:
• Determine if a measurement system is suitable for use.
• Identify whether variation comes from the equipment (repeatability) or the operator (reproducibility).
• Ensure measurement errors do not significantly affect process control.
Common Applications of Gage R&R:
• Evaluating the reliability of calipers, micrometers, CMMs, pressure gauges, and other measuring devices.
• Ensuring consistency among multiple inspectors measuring the same product.
• Validating measurement systems before conducting SPC (Statistical Process Control) or capability analysis.
2. Gage, Measurement System, and Gage R&R
A gage is any instrument used to measure a characteristic of a product, such as length, weight, voltage, or pressure. However, a
complete measurement system consists of:
1. The Gage (Instrument) – The physical measuring device.
2. The Operator (Appraiser) – The person taking measurements.
3. The Measurement Method – The defined procedure for measuring.
4. The Environment – Factors like temperature, humidity, and vibration that may affect measurements.

Gage R&R
Gage Repeatability & Reproducibility (Gage R&R or GRR) is a method used in Measurement Systems Analysis (MSA) to assess the
variation in a measurement system. It combines:
Repeatability – Variation when the same operator measures the same part multiple times using the same instrument.
Reproducibility – Variation when different operators measure the same part using the same instrument.
Mathematically, GRR variance is expressed as:
σ²_GRR = σ²_repeatability + σ²_reproducibility

Example of Gage R&R Study:


Scenario:
A manufacturing company wants to check the reliability of its calipers for measuring a metal rod’s diameter.
Study Design:
Parts: 10 metal rods (randomly selected).
Operators: 3 operators.
Measurements: Each operator measures each part 3 times using the same caliper.
Data Collection & Analysis:
Repeatability Check: If the same operator measuring the same part gives significantly different readings, the instrument (caliper)
may have poor repeatability.
Reproducibility Check: If different operators measuring the same part get significantly different readings, operator technique or
training may be an issue.
Conclusion:
If the total Gage R&R variation is low (<10% of total variation), the measurement system is acceptable.
If GRR variation is high (>30%), the measurement system needs improvement—possible recalibration, training, or a better instrument.
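The variance combination σ²_GRR = σ²_repeatability + σ²_reproducibility can be sketched as follows. The component values here are made-up illustrative numbers, not results from the caliper study above:

```python
import math

def grr_sigma(sigma_repeatability: float, sigma_reproducibility: float) -> float:
    """sigma_GRR from sigma_GRR^2 = sigma_repeatability^2 + sigma_reproducibility^2."""
    return math.hypot(sigma_repeatability, sigma_reproducibility)

def percent_grr(sigma_grr: float, sigma_total: float) -> float:
    """%GRR: measurement-system spread as a share of total observed spread."""
    return 100 * sigma_grr / sigma_total

# Illustrative component values (assumed, in mm)
sigma_rep, sigma_repro, sigma_part = 0.012, 0.009, 0.060
s_grr = grr_sigma(sigma_rep, sigma_repro)
s_total = math.hypot(s_grr, sigma_part)   # total = GRR plus part-to-part
print(f"sigma_GRR = {s_grr:.4f} mm, %GRR = {percent_grr(s_grr, s_total):.1f}%")
```

With these numbers the %GRR lands between 10% and 30%, i.e. in the "needs improvement may be warranted" band of the guideline above.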

3. Reference Value, Discrimination, and Bias
These concepts are critical for understanding measurement accuracy and capability.
Reference Value:
• The known or true value of a measurement, usually obtained from a calibrated standard.
• Used as a benchmark for assessing the accuracy of a gage.
Discrimination (Resolution):
• The smallest unit of measurement a gage can detect.
• Ideally, the resolution should be at least 1/10th of the process tolerance for meaningful measurement.
• Example: A vernier caliper with a resolution of 0.01 mm may not be sufficient for a process that requires 0.001 mm precision.
Bias:
• The systematic difference between measured values and the reference value.
• A gage with high bias consistently over- or under-measures compared to the true value.
• Bias can be corrected through calibration adjustments.
Example of Bias:
A thermometer is used to measure the temperature of 100°C boiling water, but it consistently reads 98.5°C. This indicates a bias of -1.5°C.

Subhransu Sekhar Mohanty


4. Precision, Linearity, and Sensitivity
These factors help in evaluating the overall performance of a measurement system.
Precision
• The ability of a gage to produce consistent results.
• Includes repeatability (equipment variation) and reproducibility (operator variation).
Linearity
• The variation in bias over the measurement range.
• If a measurement system is accurate at low values but inaccurate at high values, it has a linearity problem.
Example of Linearity Issue:
A pressure sensor reads 0.5% lower at 10 psi but 2% higher at 100 psi, indicating non-uniform bias across its range.
Sensitivity
• The ability of a gage to detect small measurement differences.
• A measurement system with low sensitivity fails to capture minor variations, leading to poor quality control decisions.
Example of Sensitivity Issue:
A weighing scale that only detects changes of 0.1g may not be suitable for measuring pharmaceutical tablets, which require accuracy
up to 0.001g.

5. Type 1 Gage Study
A Type 1 Gage Study is a quick and simple method used to evaluate the accuracy and repeatability of a measurement system. It
involves:
• A single operator measuring a single part multiple times using the same gage.
• Comparing the measurements to the reference value to determine bias.
• Assessing repeatability by analyzing variation in repeated measurements.

Steps to Perform a Type 1 Gage Study:


1. Select a reference standard (a calibrated master part).
2. Use one operator and one gage to measure the part at least 25 times.
3. Calculate the average measurement and compare it to the reference value to determine bias.
4. Calculate standard deviation (σ) to evaluate repeatability.
5. Compute the Gage Capability Ratio (Cg): Cg = Tolerance / (6σ)
6. If Cg > 1.33, the gage is acceptable; if Cg < 1.33, the gage needs improvement (recalibration, better resolution, or operator training).

Example of a Type 1 Gage Study:


A micrometer is used to measure a 10.00 mm standard block. The measurements are:
• 9.98 mm, 9.99 mm, 10.01 mm, 10.00 mm, 9.99 mm
• The average is 9.994 mm → Bias = -0.006 mm (small but present).
• The sample standard deviation of these readings is σ ≈ 0.0114 mm.
• If the tolerance for the process is 0.10 mm, then:
Cg = 0.10 / (6 × 0.0114) ≈ 0.10 / 0.068 ≈ 1.46. Since Cg > 1.33, this gage is acceptable for use.
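The same Type 1 study arithmetic as a small sketch over the five example readings (a real study would use at least 25):

```python
import statistics

def type1_gage_study(readings, reference, tolerance):
    """Return mean, bias, repeatability (sample sigma) and Cg = tolerance / (6*sigma)."""
    mean = statistics.mean(readings)
    bias = mean - reference
    sigma = statistics.stdev(readings)   # sample standard deviation
    cg = tolerance / (6 * sigma)
    return mean, bias, sigma, cg

# The five micrometer readings from the example (mm)
readings = [9.98, 9.99, 10.01, 10.00, 9.99]
mean, bias, sigma, cg = type1_gage_study(readings, reference=10.00, tolerance=0.10)
print(f"mean = {mean:.3f} mm, bias = {bias:+.3f} mm, sigma = {sigma:.4f} mm, Cg = {cg:.2f}")
print("acceptable" if cg > 1.33 else "needs improvement")
```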

Module 5: Statistical Tools for MSA
• Descriptive Statistics Overview
• Run Charts and Data Visualization
• Consistency, Uniformity, and Capability Analysis
• Measurement Uncertainty and its Significance


Statistical Tools for MSA
(Measurement System Analysis)
Statistical tools are essential for analyzing measurement system performance and identifying sources of variation. This module
covers descriptive statistics, data visualization techniques like run charts, capability analysis, and measurement uncertainty,
all of which are crucial for assessing the reliability of a measurement system.

1. Descriptive Statistics Overview
Descriptive statistics summarize data collected from measurement systems and provide insights into variation, central tendency, and
distribution patterns. These statistics help determine whether a measurement system is stable and capable.

Key Descriptive Statistics Used in MSA:


Statistic | Description | Importance in MSA
Mean (μ) | Average of all measured values. | Shows the central tendency of the measurement data.
Median | Middle value when data is arranged in order. | Helps identify skewed data.
Range (R) | Difference between the highest and lowest value. | Indicates spread and possible variation issues.
Standard Deviation (σ) | Measure of how much data deviates from the mean. | Determines precision and consistency.
Variance (σ²) | Square of the standard deviation. | Quantifies overall variability in measurements.
Coefficient of Variation (CV) | Ratio of standard deviation to mean, expressed as a percentage. | Useful for comparing variation between different measurement systems.

Example of Descriptive Statistics in MSA:


A digital scale measures a 100g standard weight five times, yielding: 100.1g, 99.9g, 100.2g, 100.0g, 99.8g.
• Mean (𝜇) = 100.0g (indicating accuracy).
• Range = 100.2g - 99.8g = 0.4g (small variation, showing good precision).
• Standard deviation (σ) = 0.14g, meaning the system has low spread and high repeatability.
Descriptive statistics provide a quick assessment of measurement system performance, helping identify potential bias, instability,
or excessive variation.
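These summary statistics for the digital-scale example can be computed with the standard `statistics` module; the population standard deviation is used here to match the ≈0.14 g figure quoted above:

```python
import statistics

readings = [100.1, 99.9, 100.2, 100.0, 99.8]   # digital-scale example (grams)

mean = statistics.mean(readings)
median = statistics.median(readings)
rng = max(readings) - min(readings)
sigma = statistics.pstdev(readings)            # population standard deviation
cv = 100 * sigma / mean                        # coefficient of variation, %

print(f"mean = {mean:.1f} g, median = {median:.1f} g, range = {rng:.1f} g")
print(f"sigma = {sigma:.2f} g, CV = {cv:.2f}%")
```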

Example – Stability
To assess the stability of a new measurement
instrument, the team:
• Selected a part near the middle of the production
range.
• Sent it to the lab to determine the reference value (6.01).
• Measured the part 5 times per shift for 4 weeks (20 subgroups).
• Collected data and analyzed it using X̄ & R charts
for stability assessment.

Analysis of the control charts indicates that the measurement process is stable since there are no obvious special cause effects
visible.
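A sketch of the X̄ & R limit arithmetic behind such a stability chart. The subgroup readings below are made up for illustration (a real study would use all 20 subgroups), and A2, D3, D4 are the standard Shewhart constants for subgroup size 5:

```python
# Shewhart constants for subgroup size n = 5 (standard SPC tables)
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """Center lines and control limits for X-bar and R charts."""
    xbars = [sum(g) / len(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    xbarbar = sum(xbars) / len(xbars)   # grand average
    rbar = sum(ranges) / len(ranges)    # average range
    return {
        "xbar": (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar),
        "R": (D3 * rbar, rbar, D4 * rbar),
    }

# Made-up subgroups of 5 readings around the 6.01 reference value
subgroups = [
    [6.00, 6.02, 6.01, 6.01, 6.00],
    [6.01, 6.03, 6.00, 6.02, 6.01],
    [6.02, 6.01, 6.01, 6.00, 6.02],
]
for name, (lcl, center, ucl) in xbar_r_limits(subgroups).items():
    print(f"{name}: LCL = {lcl:.4f}, center = {center:.4f}, UCL = {ucl:.4f}")
```

Points falling inside these limits with no runs or trends is what "no obvious special cause effects" means on the chart.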

2. Run Charts and Data Visualization
Run charts and other visualization tools help analyze measurement data over time, making it easier to detect trends, shifts, and
anomalies in measurement systems.

Run Chart (Trend Analysis Over Time):


A run chart plots measured values sequentially to detect patterns or drifts in the measurement process.
Uses in MSA:
• Identifies trends, cycles, or sudden shifts in measurement data.
• Detects potential stability issues in measurement systems.
• Helps visualize repeatability and reproducibility problems.

Other Data Visualization Tools in MSA:


• Histogram: Shows frequency distribution of measurements, revealing normality and variation patterns.
• Box Plot: Highlights median, quartiles, and outliers in measurement data.
• Scatter Plot: Shows the relationship between two variables, useful for analyzing linearity and bias.

Example of Run Chart in MSA:


A temperature sensor measuring boiling water should consistently read around 100°C. A run chart with fluctuations like 100.1°C,
99.9°C, 100.0°C, 100.5°C, 98.0°C suggests a possible drift or a calibration issue in the sensor.

3. Consistency, Uniformity, and Capability Analysis
A reliable measurement system must exhibit consistency (repeatable results), uniformity (minimal variation across its range), and
capability (suitability for its intended purpose).

Consistency Analysis:
• Measured using standard deviation and repeatability studies.
• A consistent measurement system will have low variation and stable results over time.
Uniformity Analysis:
• Ensures a measurement system provides consistent accuracy across different values.
• Evaluated through linearity tests (checking if bias remains constant across measurement ranges).

Capability Analysis (Gage Capability Index, Cg & Cgk):


Capability analysis determines if the measurement system is suitable for process control by evaluating the ratio of measurement
variation to product tolerances.
Gage Capability Ratio (Cg): Measures repeatability of a gage.
Cg = Tolerance / (6σ)
Cg > 1.33 → Acceptable, Cg < 1.33 → Needs improvement.
Gage Performance Index (Cgk): Considers both bias and repeatability.
Cgk = (Tolerance − 2|Bias|) / (6σ)
Cgk > 1.33 means the system is accurate and precise.

Example
Capability Analysis in MSA
Given Data:
• Measurement Tool: Height Gauge
• Standard Block Size: 50 mm
• Tolerance: ±0.20 mm (Total Tolerance = 0.20 × 2 = 0.40 mm)
• Standard Deviation (σ): 0.03 mm
Step 1: Calculate Gage Capability Ratio (Cg)
Cg = Tolerance / (6σ) = 0.40 / (6 × 0.03) = 0.40 / 0.18 ≈ 2.22
Step 2: Interpretation
• Since Cg = 2.22 is greater than 1.33, the gage meets the repeatability acceptance criterion: its measurement spread (6σ = 0.18 mm) occupies well under half of the 0.40 mm tolerance band.
• If Cg had come out below 1.33, the gage variation would be too high, and recalibration, maintenance, or replacement would be needed to ensure more consistent measurements.

If Bias is Considered: Calculate Cgk

Let’s assume the bias (systematic error) is 0.05 mm.
Cgk = (Tolerance − 2|Bias|) / (6σ) = (0.40 − 2 × 0.05) / 0.18 = (0.40 − 0.10) / 0.18 = 0.30 / 0.18 ≈ 1.67
• Since Cgk = 1.67 > 1.33, the system remains acceptable even after accounting for bias.
• If Cgk < 1.33, it would indicate repeatability and/or accuracy issues.
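The Cg and Cgk formulas can be wrapped in small helpers; a sketch using the height-gauge numbers (total tolerance 0.40 mm, σ = 0.03 mm, assumed bias 0.05 mm):

```python
def cg(tolerance: float, sigma: float) -> float:
    """Gage Capability Ratio: Cg = tolerance / (6 * sigma)."""
    return tolerance / (6 * sigma)

def cgk(tolerance: float, bias: float, sigma: float) -> float:
    """Gage Performance Index: Cgk = (tolerance - 2*|bias|) / (6 * sigma)."""
    return (tolerance - 2 * abs(bias)) / (6 * sigma)

# Height-gauge example: total tolerance 0.40 mm, sigma 0.03 mm, bias 0.05 mm
print(f"Cg  = {cg(0.40, 0.03):.2f}")          # 0.40 / 0.18
print(f"Cgk = {cgk(0.40, 0.05, 0.03):.2f}")   # (0.40 - 0.10) / 0.18
```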
4. Measurement Uncertainty and Its Significance
Measurement uncertainty quantifies the doubt in a measurement result, acknowledging that no measurement is perfect.

Key Factors Contributing to Measurement Uncertainty:


• Instrument resolution (discrimination).
• Environmental factors (temperature, humidity, vibrations).
• Operator technique (skill and consistency).
• Gage calibration errors.

Methods to Estimate Measurement Uncertainty:


1. Type A Evaluation: Uses statistical analysis (standard deviation, repeatability tests).
2. Type B Evaluation: Uses expert judgment and historical data (e.g., manufacturer specifications).
Formula for Expanded Uncertainty (U):
U = k × σ
Where:
• k = Coverage factor (typically 2 for 95% confidence).
• σ = Standard deviation of measurement data.

4. Measurement Uncertainty and Its Significance
Example of Measurement Uncertainty Calculation:
A weighing scale measuring a 10g object has:
• Standard deviation σ = 0.02g
• Coverage factor k = 2 (for 95% confidence)
U = 2 × 0.02 g = 0.04 g
So, the weight is reported as 10.00g ± 0.04g at 95% confidence.
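The expanded-uncertainty calculation as a one-line helper, using the weighing-scale numbers:

```python
def expanded_uncertainty(sigma: float, k: float = 2.0) -> float:
    """Expanded uncertainty U = k * sigma (k = 2 gives ~95% confidence)."""
    return k * sigma

u = expanded_uncertainty(0.02)   # sigma from the weighing-scale example (g)
print(f"Reported value: 10.00 g ± {u:.2f} g at 95% confidence")
```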

Significance of Measurement Uncertainty:


• Helps determine if measurements are reliable for decision-making.
• Ensures compliance with ISO 17025 (Calibration and Testing Laboratories).
• Prevents false acceptance/rejection of parts in quality control.

Module 6: Calibration and Measurement System Performance
• Calibration System Fundamentals
• Measurement Systems Performance Assessment
• ME&T (Measuring Equipment and Techniques) and Reference Standard
• Calibration, Transfer, and Master Standards
• Working and Check Standards
• Measurement Systems Traceability


Calibration and Measurement System Performance
Calibration and measurement system performance assessment are critical to ensuring reliable and accurate measurements in quality
control. This module explores calibration system fundamentals, performance assessment methods, measurement equipment
and techniques (ME&T), standards, and traceability—all essential for maintaining a robust measurement system.

1. Calibration System Fundamentals
Calibration is the process of comparing a measurement instrument’s output to a known reference standard to ensure its accuracy and
reliability. If deviations are found, adjustments or corrections are made to align the instrument with the standard.

Key Objectives of Calibration:


• Ensure measurement accuracy and consistency over time.
• Detect and correct bias and drift in measurement instruments.
• Maintain compliance with ISO 9001, ISO 17025, and IATF 16949 standards.

Calibration Process Steps:


1. Select a Reference Standard → The standard should have higher accuracy than the instrument being calibrated.
2. Compare the Instrument’s Readings → Measure a known reference value and record deviations.
3. Adjust the Instrument (if required) → Apply corrections to bring measurements in line with the standard.
4. Document Calibration Results → Maintain records for traceability and audits.
5. Assign Calibration Intervals → Define how often calibration should be performed based on instrument usage and wear.

Example of Calibration:
A digital weighing scale used in a pharmaceutical lab is calibrated using a certified 500g weight. If the scale reads 499.7g, an adjustment
is made to correct the deviation and bring it closer to 500.0g.

2. Measurement Systems Performance Assessment
To determine if a measurement system is performing adequately, various assessment methods are used to analyze accuracy, precision,
and stability.
Key Performance Assessment Metrics:
• Accuracy: How close a measurement is to the true value.
• Precision (Repeatability & Reproducibility): How consistently measurements are repeated.
• Bias: The systematic difference between measured and actual values.
• Stability: Consistency of measurements over time.
• Linearity: Consistency of bias across the entire measurement range.

Common Methods for Performance Assessment:


• Gage R&R Studies → Evaluate repeatability & reproducibility.
• Stability Testing → Measures long-term measurement variations.
• Control Charts (X̄-R, X̄-S) → Detect trends or drifts in measurements.
• Process Capability Index (Cp, Cpk) → Determines whether the system meets specification limits.

Example of Performance Assessment:


A micrometer measuring 50 mm parts shows an average reading of 50.02 mm with a standard deviation of 0.01 mm. The bias is small,
and variation is low, indicating good performance. However, if repeated measurements show a drift (e.g., 50.02 mm → 50.04 mm → 50.07 mm), this suggests a stability issue requiring recalibration.

3. ME&T (Measuring Equipment and Techniques) and Reference Standard
ME&T (Measuring Equipment and Techniques) refers to the selection and proper use of measurement instruments, methods, and
environmental conditions to ensure reliable measurements.

Best Practices for Effective Measurement:


• Choose equipment with sufficient resolution for the application.
• Use standardized techniques to minimize operator-induced errors.
• Control environmental factors like temperature, humidity, and vibrations.
• Train operators to use equipment correctly and consistently.

Reference Standards:
• A reference standard is a highly accurate measurement device or artifact used as a benchmark for calibrating other instruments.
• Reference standards must be traceable to national or international standards (e.g., NIST, NABL, ISO/IEC 17025).

Example of ME&T in Action:


A CMM (Coordinate Measuring Machine) is used to measure automotive engine parts. To ensure accurate measurements, the machine
is placed in a temperature-controlled environment and calibrated using traceable reference blocks.

4. Calibration, Transfer, and Master Standards
Calibration involves different types of standards that form a hierarchy, ensuring traceability across the measurement system.

Types of Standards in Calibration:

Standard Type | Definition | Application
Master Standard | The most precise reference used in a laboratory | Used to calibrate transfer and working standards
Transfer Standard | Intermediate standard linking master and working standards | Used when master standards cannot be directly applied
Working Standard | Used for daily measurement activities in production or testing | Calibrated periodically using transfer or master standards

Example of Standard Transfer:


• A National Lab (NABL, NIST, etc.) calibrates a Master Standard gauge block.
• The Master Standard is then used to calibrate a Transfer Standard in the factory’s calibration lab.
• The Transfer Standard is used to check Working Standards in production.
This process ensures traceability and maintains measurement accuracy across different levels of measurement systems.

5. Working and Check Standards
Working Standards and Check Standards are practical tools used in daily operations to ensure measurement accuracy between
scheduled calibrations.

Working Standards:
• Used in routine measurement processes.
• Typically less precise than master or transfer standards.
• Regularly verified to maintain accuracy.
Check Standards:
• Used to validate the performance of measurement equipment between calibration intervals.
• Helps detect drift or wear in working standards before they cause measurement errors.
• Often measured against reference standards at set intervals.

Example of Check Standards:


A digital caliper in a production line is checked daily using a certified gauge block to verify it still measures correctly before use in
actual production.

6. Measurement Systems Traceability
Traceability ensures that all measurement results can be linked to recognized national or international standards through an unbroken
chain of calibrations.

Why is Traceability Important?


• Ensures global consistency in measurement systems.
• Provides legal and regulatory compliance (ISO 17025, IATF 16949).
• Helps organizations resolve disputes in quality control and customer complaints.
Elements of a Traceable Measurement System:
1. National/International Standards (e.g., NIST, NABL, BIPM)
2. Primary Reference Standards (calibrated at accredited labs)
3. Transfer and Working Standards (calibrated using higher-level standards)
4. Measurement Instruments (used in production and quality control)
5. Calibration Records and Certificates (traceability documentation)

Example of Traceability:
A torque wrench used in an automobile assembly plant is calibrated against a working standard, which was previously calibrated
against a nationally certified master standard. This traceability ensures compliance with automotive industry standards.



Module 7: Variations and Effects in Measurement Systems
• Properties of Measurement Systems
• Variations in Measurement Systems
• Effects of Measurement System Variability
• Nested Gage R&R Study and Its Interpretation



Variations and Effects in Measurement Systems
Measurement systems are subject to various types of variations that can impact the accuracy, reliability, and consistency of quality
control data. This module explores the properties of measurement systems, sources of variation, effects of variability, and Nested
Gage R&R studies used to analyze complex measurement processes.



1. Properties of Measurement Systems
A measurement system consists of the measuring instrument, the operator, the method, and the environment. To be effective, it
must exhibit key properties that ensure reliability in data collection.
Key Properties of a Good Measurement System:

Property | Description | Importance
Accuracy | Closeness of measured values to the true value | Ensures minimal bias in measurements
Precision | Repeatability and reproducibility of results | Reduces measurement variation
Stability | Consistency of measurements over time | Ensures long-term reliability
Linearity | Consistent accuracy across the entire measurement range | Prevents deviations at different points of measurement
Resolution | Smallest detectable measurement difference | Avoids missing small variations
Sensitivity | Ability to detect meaningful changes in the characteristic being measured | Helps identify process variations effectively

Example of Measurement System Properties:


A digital micrometer measuring a 10.000 mm reference block should:
• Consistently measure 10.000 mm (high accuracy).
• Show minimal variation in repeated measurements (high precision).
• Maintain accuracy across different sizes (linearity).
• Detect 0.001 mm changes (high resolution).
If the micrometer shows drift over time, it lacks stability and requires recalibration.



2. Variations in Measurement Systems
Variations in measurement systems arise from equipment, operators, environmental factors, and part differences. Understanding
these variations is crucial for improving system performance.
Sources of Measurement Variation:
Repeatability (Equipment Variation)
o Variation observed when the same operator measures the same part using the same instrument multiple times.
o Affected by gage quality, instrument wear, and environmental conditions.
Reproducibility (Operator Variation)
o Differences in measurement results when multiple operators use the same instrument.
o Affected by operator skill, technique, or measurement interpretation.
Part-to-Part Variation
o Natural variation between different parts being measured.
o A well-controlled manufacturing process should have minimal variation.
Stability (Drift Over Time)
o Long-term variation in measurement results due to instrument wear or external influences.
o Detected through control charts and periodic calibration checks.
Linearity (Variation Across Measurement Range)
o Bias that changes across different values of measurement.
o Example: A weighing scale reading 0.5% lower for 10g but 2% higher for 500g, indicating linearity issues.



2. Variations in Measurement Systems
Example of Measurement System Variation:
A pressure gauge measures 100 psi in a test environment. Over multiple readings:
• One operator records 98.5 psi, while another records 101 psi (reproducibility issue).
• The gauge reads 99.8 psi, 100.1 psi, 99.9 psi in repeated trials (good repeatability).
• After 6 months, the same gauge reads 103 psi for the same pressure (stability issue).
This variation highlights operator, equipment, and stability-related errors in measurement.

3. Effects of Measurement System Variability


Measurement variability can lead to incorrect decisions in quality control, causing higher costs, customer dissatisfaction, and process
inefficiencies.
Consequences of High Measurement Variability:
• False Rejection (Type I Error): Rejecting a good part due to measurement errors.
• False Acceptance (Type II Error): Accepting a defective part as good.
• Unreliable SPC Data: Statistical Process Control (SPC) charts may show false trends.
• Increased Calibration Costs: Frequent rework and recalibration may be required.
• Process Instability: If measurement errors are mistaken for process issues, unnecessary adjustments may be made.
Example of Measurement Variability Impact:
A diameter measurement system with a ±0.02 mm tolerance should reliably detect variations within this range. If the measurement
variation itself is ±0.03 mm, it becomes larger than the tolerance, making the measurement system unfit for use.
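The fitness-for-use check in this example can be sketched in a few lines of code. This is a minimal illustration, assuming a simple ratio of measurement half-width to tolerance half-width; the function name and the 10%/30% acceptance bands are illustrative conventions, not a prescribed procedure:

```python
def tolerance_fitness(meas_half_width: float, tol_half_width: float) -> str:
    """Compare measurement-system variation (± half-width) to the part
    tolerance (± half-width) and return a rough verdict."""
    ratio = meas_half_width / tol_half_width
    if ratio <= 0.10:
        return "acceptable"
    if ratio <= 0.30:
        return "marginal"
    return "unfit"

# The example above: ±0.03 mm of measurement variation vs. a ±0.02 mm tolerance.
print(tolerance_fitness(0.03, 0.02))  # → unfit (variation exceeds the tolerance itself)
```

A ratio above 1.0, as here, means the gage cannot distinguish conforming from non-conforming parts at all.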
4. Nested Gage R&R Study and Its Interpretation
A Nested Gage R&R study is used when parts cannot be measured multiple times under identical conditions—
common in destructive testing (e.g., tensile strength, chemical analysis). Unlike a traditional Gage R&R, where
each part is measured multiple times, a nested study assumes each operator measures a unique set of parts.
When to Use a Nested Gage R&R Study:
• When parts are destroyed during measurement (e.g., crash testing, material analysis).
• When only one measurement per part is possible.
• When evaluating operator effects in non-repetitive tests.
Steps for Conducting a Nested Gage R&R Study:
1. Select multiple operators and assign them unique sets of parts.
2. Each operator measures their own set of parts, ensuring no overlap.
3. Analyze measurement variation using statistical tools (ANOVA method).
Analysis of Variance (ANOVA) in MSA
The Analysis of Variance (ANOVA) method is used in Gage R&R analysis to statistically determine the sources of variation in a
measurement system. It helps in quantifying and isolating measurement system errors such as repeatability, reproducibility, and
interaction effects.
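For the nested (destructive) case described above, where each operator measures a unique set of parts exactly once, operator variation and part-within-operator variation are the only separable components, and the ANOVA reduces to a one-way decomposition. A stdlib-only sketch with hypothetical breaking-force data (function name and numbers are illustrative):

```python
from statistics import mean

def oneway_anova_f(data: dict[str, list[float]]) -> float:
    """F-ratio for the operator effect in a one-way (nested) layout.
    data maps each operator to the measurements of their own unique parts."""
    groups = list(data.values())
    all_vals = [v for g in groups for v in g]
    grand = mean(all_vals)
    # between-operator and within-operator (parts + repeatability) sums of squares
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Each part is tested once, then destroyed; each operator has their own parts.
f = oneway_anova_f({"op1": [10, 11, 12], "op2": [12, 13, 14], "op3": [14, 15, 16]})
print(f)  # a large F-ratio suggests a real operator effect
```

Comparing the F-ratio to the appropriate F-distribution critical value then decides whether the operator effect is statistically significant.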
Key Components of the ANOVA Table
The ANOVA table consists of six main columns that provide a structured way to analyze variation:
1. Source – Identifies the cause of variation (e.g., parts, appraisers, interaction, repeatability).
2. DF (Degrees of Freedom) – Represents the number of independent values available for estimating variation.
3. SS (Sum of Squares) – Measures total variation within each source by calculating the squared deviations from the mean.
4. MS (Mean Square) – Found by dividing SS by DF; represents the variance estimate for each source.
5. EMS (Expected Mean Square) – Defines the expected variance components for each MS term in the table.
6. F-ratio – Used for statistical significance testing.
A high F-ratio indicates that the corresponding source of variation is statistically significant; for example, a significant interaction F-ratio means the measurement system may be influenced by operator-part interaction.

Decomposition of Variation in ANOVA


The ANOVA method breaks down the total variation into four key components:
1. Parts (Product Variation) – Reflects natural differences in part measurements.
2. Appraisers (Operator Variation or Reproducibility) – Accounts for variation due to different operators measuring the same part.
3. Interaction (Parts × Appraisers) – Identifies if some appraisers measure certain parts differently, indicating an operator bias issue.
4. Repeatability (Gage/Equipment Error) – Measures variation when the same appraiser repeatedly measures the same part.

Advantages of ANOVA over Classical Gage R&R Methods


✔ More Accurate Estimates – ANOVA separates interaction effects, providing better insight into variation sources.
✔ Identifies Operator-Part Interaction – Helps detect bias in specific operators measuring certain parts.
✔ Statistical Significance Testing – The F-ratio test determines if variation sources are statistically significant.
✔ Improved Decision-Making – Helps determine whether the measurement system needs calibration, training, or redesign.



The ANOVA table decomposes variation into four components:
✔ Parts
✔ Appraisers
✔ Interaction (Appraisers & Parts)
✔ Repeatability (Gage/Equipment Error)

Interpreting Nested Gage R&R Results:

Component | Meaning | Interpretation
Repeatability | Variation due to the measurement instrument | High repeatability variation suggests gage issues
Reproducibility | Variation due to different operators | High reproducibility variation suggests operator inconsistency
Part-to-Part Variation | True variation in the product being measured | If part-to-part variation is low, measurement errors dominate

4. Nested Gage R&R Study and Its Interpretation


Example of Nested Gage R&R Study in Destructive Testing:
A metal fatigue test involves measuring the breaking force of samples under stress. Since each sample breaks during testing, the
same part cannot be measured twice.
• Three operators each test 10 different samples.
• Results show high reproducibility variation, meaning operator technique differences are a major issue.
• Solution: Provide better training and standardize testing procedures.



Module 8: Attribute Agreement Analysis (AAA)
• Introduction to Attribute Agreement Analysis
• AAA - Agreement Within Themselves (Consistency)
• AAA - Agreement with Standard (Correctness)
• AAA - Agreement with Each Other
• AAA - All Operators vs. Standard
• Consistency and Correctness Plots



Attribute Agreement Analysis (AAA)
Attribute Agreement Analysis (AAA) is a statistical method used to evaluate the consistency
and accuracy of inspectors, appraisers, or operators when making categorical (attribute-
based) decisions. It is commonly applied in visual inspection, go/no-go testing, pass/fail
decisions, and subjective quality assessments.



1. Introduction to Attribute Agreement Analysis (AAA)
Unlike Variable Gage R&R, which deals with numerical (continuous) measurements, AAA evaluates categorical data where
measurements involve subjective or predefined classifications.

Objectives of AAA:
• Determine how consistently operators classify attributes (repeatability).
• Assess agreement between operators and a known standard (accuracy).
• Identify discrepancies in subjective decision-making.
• Improve training and standardization in manual inspection processes.
Common Applications of AAA:
• Visual Inspection: Surface defects, color variation, scratch detection.
• Go/No-Go Gauging: Pass/fail, OK/reject decisions.
• Subjective Judgments: Weld quality, paint finish evaluation.
• Medical Diagnosis: Radiology, pathology assessments.

Example of AAA Usage:


Three inspectors examine 100 automotive parts for defects (pass/fail decision).
• If an inspector classifies the same parts differently over multiple trials, they lack consistency.
• If inspectors frequently disagree with the standard, they need better training.



2. AAA – Agreement Within Themselves (Consistency)
This aspect of AAA evaluates whether each individual operator consistently classifies the same attribute when evaluating
multiple times. It measures repeatability in attribute data.

Key Factors Affecting Consistency:


• Subjectivity in visual inspections.
• Inconsistent application of decision criteria.
• Operator fatigue or distractions.

How to Measure Consistency:


• Operators assess the same set of items multiple times without knowing their previous responses.
• The percentage of times an operator gives the same response is calculated.

Example of Consistency Issue:


An inspector visually checks paint defects on a metal panel. If they classify a scratch as "minor defect" in one round but "major
defect" in another, consistency is low.
Solution: Improve inspection guidelines and operator training.



3. AAA – Agreement with Standard (Correctness)
This measures how well an operator’s decisions align with the known correct standard. It evaluates accuracy
rather than just repeatability.
How to Measure Correctness:
• Compare each operator’s decisions against a master standard or expert judgment.
• Calculate the percentage of decisions that match the correct classification.
Example of Correctness Issue:
A quality inspector evaluates 50 electronic circuit boards for soldering defects.
• If the inspector correctly identifies defects in 90% of cases, their correctness is high.
• If they misclassify 30% of good boards as defective, corrective action is needed.
Solution: Conduct training and improve decision criteria definitions.
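Computed directly, correctness is just the match rate against the reference calls. A minimal sketch with hypothetical pass/fail data (the function name and labels are illustrative):

```python
def correctness_rate(operator_calls: list[str], standard: list[str]) -> float:
    """Fraction of an operator's classifications that match the reference standard."""
    matches = sum(call == ref for call, ref in zip(operator_calls, standard))
    return matches / len(standard)

# Hypothetical inspection of 5 boards, compared against expert judgments
print(correctness_rate(["OK", "Defect", "OK", "OK", "Defect"],
                       ["OK", "Defect", "Defect", "OK", "Defect"]))  # → 0.8
```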



4. AAA – Agreement with Each Other
This evaluates how well multiple operators agree with one another when making attribute classifications. It measures
reproducibility in attribute data.

Factors Affecting Operator Agreement:


• Variations in personal judgment and experience levels.
• Differences in interpretation of quality standards.
• Lack of clear decision criteria or training.

How to Measure Operator Agreement:


• All operators inspect the same set of items.
• The number of times they agree with each other is recorded.

Example of Poor Operator Agreement:


Three inspectors classify textile fabric quality:
• Inspector A: 80% "Pass", 20% "Fail".
• Inspector B: 65% "Pass", 35% "Fail".
• Inspector C: 90% "Pass", 10% "Fail".
If their decisions vary significantly, operator agreement is low, requiring standardization of criteria.
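Between-operator agreement can be quantified either as the share of items on which everyone agrees, or as the average pairwise match rate. A sketch with hypothetical pass/fail data (both helper names are illustrative):

```python
from itertools import combinations

def all_agree_rate(ratings: list[list[str]]) -> float:
    """Share of items on which every operator gives the same classification.
    ratings: one equal-length list of calls per operator."""
    n_items = len(ratings[0])
    return sum(len(set(item)) == 1 for item in zip(*ratings)) / n_items

def mean_pairwise_agreement(ratings: list[list[str]]) -> float:
    """Average match rate over all operator pairs."""
    rates = [sum(a == b for a, b in zip(x, y)) / len(x)
             for x, y in combinations(ratings, 2)]
    return sum(rates) / len(rates)

ratings = [["P", "P", "F", "P"],   # Inspector A
           ["P", "F", "F", "P"],   # Inspector B
           ["P", "P", "F", "F"]]   # Inspector C
print(all_agree_rate(ratings), mean_pairwise_agreement(ratings))
```

Low values on either metric point to the same remedy as above: standardize the decision criteria.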



5. AAA – All Operators vs. Standard
This analysis determines how well all operators collectively match the correct standard. It provides an overall assessment of operator
performance, system reliability, and training effectiveness.

How to Conduct This Analysis:


• Compare each operator's classifications with the correct standard.
• Determine the overall percentage of agreement across all operators.
• Identify which operators perform better or worse than others.

Example of All Operators vs. Standard Analysis:


A company inspecting automobile windshields for cracks uses four inspectors.
• If Inspector 1 agrees 98% with the standard but Inspector 4 agrees only 70%, additional training is needed for Inspector 4.
• If none of the operators reach 90% agreement, the inspection method may need improvement.
Solution: Use reference samples, provide better defect examples, and develop clearer inspection criteria.



6. Consistency and Correctness Plots
Graphical representation of AAA results helps identify patterns and problem areas in operator performance and measurement
system consistency.

Key Plots Used in AAA Analysis:


Kappa Agreement Chart:
o Measures how well the operators agree beyond random chance.
o A Kappa score > 0.75 indicates good agreement.
o A Kappa score < 0.40 suggests poor agreement requiring improvement.
Correctness vs. Consistency Plot:
o X-axis: Consistency (How well an operator agrees with themselves).
o Y-axis: Correctness (How well an operator agrees with the standard).
o Operators in the top-right quadrant (high consistency, high correctness) are performing well.
o Operators in the bottom-left quadrant (low consistency, low correctness) need training.
Confusion Matrix:
o A table showing true positives, false positives, false negatives, and true negatives.
o Helps determine if errors are due to false defect detection (Type I error) or missed defects (Type II error).
Example of Consistency and Correctness Plot Use:
• A visual inspector consistently classifies small scratches as "minor defects", but the standard defines them as "major defects".
• The plot shows high consistency but low correctness, indicating a need for better standard alignment.
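The confusion-matrix tallies described above can be produced with a short helper. This is a sketch: `defect` marks the positive class, and the FP/FN comments map to the Type I/Type II errors mentioned:

```python
def confusion_counts(operator_calls: list[str], standard: list[str],
                     defect: str = "Fail") -> dict[str, int]:
    """Tally TP/FP/FN/TN against the reference standard."""
    pairs = list(zip(operator_calls, standard))
    return {
        "TP": sum(c == defect and s == defect for c, s in pairs),
        "FP": sum(c == defect and s != defect for c, s in pairs),  # false defect detection (Type I)
        "FN": sum(c != defect and s == defect for c, s in pairs),  # missed defect (Type II)
        "TN": sum(c != defect and s != defect for c, s in pairs),
    }

print(confusion_counts(["Fail", "Pass", "Fail", "Pass"],
                       ["Fail", "Fail", "Pass", "Pass"]))
```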
Module 9: Advanced Statistical Measures in MSA
• Understanding Kappa Value
• What is Kendall’s Coefficient of Concordance?
• Kappa and Kendall’s Coefficient for Consistency
• Kappa and Kendall’s Coefficient for Correctness
• Kappa and Kendall’s Coefficient - Amongst Themselves



Advanced Statistical Measures in MSA
(Measurement System Analysis)
Advanced statistical measures such as Kappa Value and Kendall’s Coefficient of Concordance
are used to evaluate the reliability of attribute-based measurement systems. These metrics help
assess consistency, correctness, and agreement among multiple operators in subjective quality
assessments.



1. Understanding Kappa Value
The Kappa statistic (κ) is a measure of inter-rater agreement that quantifies how well operators agree beyond what is expected by
random chance. It is commonly used in attribute agreement analysis (AAA) when inspectors classify parts into categories (e.g.,
Pass/Fail, Defective/Non-Defective).

Kappa (κ) Statistic Formula in MSA


The Kappa (κ) statistic is used in Measurement System Analysis (MSA) to assess the agreement between inspectors (appraisers)
beyond what would be expected by random chance. It is particularly useful in Attribute Agreement Analysis (AAA) when evaluating
pass/fail or categorical decisions.

Formula:
κ = (Po − Pe) / (1 − Pe)
Where:
• Po (Observed Agreement): The actual percentage of times inspectors agree on classifications.
• Pe (Expected Agreement): The probability that agreement occurred by chance.

Interpreting Kappa (κ) Values:

Kappa (κ) Value | Interpretation
< 0.00 | Poor agreement (worse than random)
0.00 - 0.20 | Slight agreement
0.21 - 0.40 | Fair agreement
0.41 - 0.60 | Moderate agreement
0.61 - 0.80 | Substantial agreement
0.81 - 1.00 | Almost perfect agreement
1. Understanding Kappa Value
Example Calculation:
Let's assume two inspectors classify 100 parts as "OK" or "Defective" with the following data:
Classification | Inspector 1: OK | Inspector 1: Defective | Total
Inspector 2: OK | 50 | 10 | 60
Inspector 2: Defective | 5 | 35 | 40
Total | 55 | 45 | 100

1. Observed Agreement (Po):


o Agreements occur in OK-OK (50 times) and Defective-Defective (35 times).
o So, Po=(50+35)/100=0.85
2. Expected Agreement (Pe):
Probability of both choosing "OK": P(OK)={(55/100)×(60/100)}=0.33
Probability of both choosing "Defective": P(Defective)={(45/100)×(40/100)}=0.18
Total expected agreement: Pe=0.33+0.18=0.51
3. Calculate Cohen’s Kappa (κ): κ = (Po − Pe) / (1 − Pe) = (0.85 − 0.51) / (1 − 0.51) = 0.34 / 0.49 = 0.69
Interpretation: κ=0.69 indicates substantial agreement between inspectors.
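The calculation above can be verified with a few lines of code. This is a sketch; the argument order follows the 2×2 table, with the agreement counts on the diagonal:

```python
def cohens_kappa(both_ok: int, i1def_i2ok: int, i1ok_i2def: int, both_def: int) -> float:
    """Cohen's kappa for two raters classifying items as OK / Defective."""
    n = both_ok + i1def_i2ok + i1ok_i2def + both_def
    po = (both_ok + both_def) / n                    # observed agreement
    i1_ok = (both_ok + i1ok_i2def) / n               # Inspector 1 marginal for "OK"
    i2_ok = (both_ok + i1def_i2ok) / n               # Inspector 2 marginal for "OK"
    pe = i1_ok * i2_ok + (1 - i1_ok) * (1 - i2_ok)   # chance agreement
    return (po - pe) / (1 - pe)

print(round(cohens_kappa(50, 10, 5, 35), 2))  # → 0.69, matching the worked example
```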



2. What is Kendall’s Coefficient of Concordance?
Kendall’s Coefficient of Concordance (W) in MSA
Kendall’s W is used in Measurement System Analysis (MSA) when multiple raters (more than two) rank or score a set of items. It
helps evaluate the consistency of subjective assessments, commonly used in visual inspections, defect grading, and quality
evaluations.
Formula for Kendall’s W:
W = (12 ∑Ri² − 3k²N(N+1)²) / (k²(N³ − N))
Where:
• W = Kendall’s Coefficient of Concordance (0 ≤ W ≤ 1)
• N = Number of items being ranked
• k = Number of raters (appraisers)
• Ri = Sum of ranks for item i

Interpreting Kendall’s W Values:

Kendall’s W Value Level of Agreement


1.00 Perfect Agreement
0.80 – 0.99 Strong Agreement
0.60 – 0.79 Moderate Agreement
0.40 – 0.59 Weak Agreement
< 0.40 Poor Agreement (high variability)



2. What is Kendall’s Coefficient of Concordance?
Example: Kendall’s Coefficient of Concordance (W) in MSA
Scenario: A quality control team of 4 inspectors evaluates the surface finish of 5 machined parts on a scale from 1 (best) to
5 (worst). The rankings given by each inspector are as follows:
Part | Inspector 1 | Inspector 2 | Inspector 3 | Inspector 4 | Sum of Ranks (Ri)
A | 1 | 2 | 1 | 3 | 7
B | 2 | 1 | 3 | 2 | 8
C | 3 | 3 | 2 | 1 | 9
D | 4 | 4 | 4 | 4 | 16
E | 5 | 5 | 5 | 5 | 20

Answer:
Step 1: Compute ∑Ri²
∑Ri² = 7² + 8² + 9² + 16² + 20² = 49 + 64 + 81 + 256 + 400 = 850
Step 2: Compute Kendall’s W using the formula
W = (12 ∑Ri² − 3k²N(N+1)²) / (k²(N³ − N)), where N = 5 (number of items) and k = 4 (number of raters)
So, W = {(12 × 850) − 3 × 4² × 5 × (5 + 1)²} / {4² × (5³ − 5)} = (10200 − 8640) / 1920 = 0.81
Step 3: Interpretation
• W = 0.81 suggests strong agreement among the inspectors.
• A value close to 1 indicates perfect agreement, while 0 indicates no agreement.

Key Applications of Kendall’s W in MSA
• Visual Inspection: Used when multiple inspectors assess the appearance of defects in products.
• Defect Grading: Helps analyze consistency in scoring defects in materials (e.g., scratches, color variations).
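The rank-sum form of Kendall's W is easy to compute directly. A minimal sketch applying it to the per-part rank sums from the table above (the function name is illustrative, and ties are not handled):

```python
def kendalls_w(rank_sums: list[float], k: int) -> float:
    """Kendall's W (no ties) from the per-item rank sums Ri and k raters."""
    n = len(rank_sums)
    numerator = 12 * sum(r * r for r in rank_sums) - 3 * k**2 * n * (n + 1)**2
    return numerator / (k**2 * (n**3 - n))

# Rank sums for parts A-E, ranked by k = 4 inspectors
w = kendalls_w([7, 8, 9, 16, 20], k=4)
print(w)  # → 0.8125
```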
3. Kappa and Kendall’s Coefficient for Consistency
Consistency measures how reliably each operator applies the same criteria over multiple trials.
• Kappa for Consistency: Checks if an operator reaches the same decision when evaluating the same item multiple times.
• Kendall’s W for Consistency: Evaluates if multiple raters assign the same ranking pattern over different trials.
Example of Consistency Analysis:
An inspector examines the same batch of 50 components twice.
• If Kappa κ = 0.85, the inspector is highly consistent.
• If Kendall’s W = 0.90 among three inspectors evaluating the same samples, their consistency is strong.
If values are low, inspectors may need better training or clearer inspection guidelines.

4. Kappa and Kendall’s Coefficient for Correctness


Correctness evaluates how well operators match a known reference standard.
• Kappa for Correctness: Measures how accurately an inspector classifies parts compared to a master standard.
• Kendall’s W for Correctness: Checks if multiple raters rank items similarly to a pre-defined standard.
Example of Correctness Analysis:
A quality inspector examines 100 parts, comparing results to a master reference.
• If Kappa κ = 0.75, the inspector is reasonably accurate but needs improvement.
• If Kendall’s W = 0.85 among five inspectors, their rankings are close to the standard.
Solution for Low Correctness: Use training with reference samples to improve alignment with standards.
5. Kappa and Kendall’s Coefficient - Amongst Themselves
This assesses agreement between multiple inspectors without comparing to a reference standard.
• Kappa (κ) Amongst Inspectors: Measures agreement when different operators classify parts.
• Kendall’s W Amongst Inspectors: Evaluates if inspectors rank items in a similar order.

Example of Operator Agreement Analysis:


Five inspectors rate paint quality on 20 car doors.
• If Kappa κ = 0.65, there is moderate agreement, suggesting some subjectivity in evaluation.
• If Kendall’s W = 0.92, their ranking of defects is highly consistent.
Solution for Low Agreement:
• Establish standardized quality criteria.
• Use reference images for defect classification.
• Conduct team training sessions to improve alignment.

Note:
Kappa and Kendall’s Coefficient are powerful statistical tools for analyzing agreement and consistency in attribute
measurements. They help identify subjectivity, training needs, and system weaknesses in visual inspections and go/no-go tests.



Software Tools & Applications in MSA
Modern Measurement System Analysis (MSA) relies on statistical software tools to analyze data efficiently, identify variation sources, and ensure the reliability of measurement systems. This module covers popular MSA software, data collection and analysis methods, real-world applications in manufacturing, and case studies with hands-on exercises.
1. Overview of Software Tools Used in MSA
Several statistical software tools are used for conducting Gage R&R studies, Attribute Agreement Analysis, Capability Analysis,
and Measurement Uncertainty Estimation.

Popular MSA Software Tools:


Software | Key Features | Common Use in MSA
Minitab | User-friendly interface, built-in MSA templates | Gage R&R, AAA, Capability Analysis
SigmaXL | Excel add-in for statistical analysis | Gage R&R, Run Charts, SPC
JMP (by SAS) | Advanced data visualization, powerful analytics | MSA, DOE, Statistical Modeling
Excel (with VBA/macros) | Customizable, widely available | Basic Gage R&R, SPC Charts
QI Macros | Excel-based statistical toolkit | Quick MSA analysis, Lean Six Sigma tools
Statgraphics | Powerful statistical modeling | Measurement system validation

Choosing the Right MSA Software:


• For beginners: Excel or SigmaXL (easy to use, Excel-based).
• For advanced users: Minitab or JMP (powerful statistical analysis).
• For real-time SPC integration: QI Macros or Statgraphics.



2. Data Collection and Analysis Using
Minitab / SigmaXL / JMP
The effectiveness of MSA depends on accurate data collection and statistical analysis. Different software tools
streamline this process.
Steps for MSA Using Minitab (Example: Gage R&R Study)
1. Collect measurement data (e.g., three operators measure ten parts three times each).
2. Enter data in Minitab (Structured in rows for parts, operators, and trials).
3. Select “Stat” → “Quality Tools” → “Gage Study” → “Gage R&R (Crossed)”.
4. Analyze results (Look at % Contribution, % Study Variation).
5. Interpret findings (Identify sources of variation and improvement areas).
Example Output in Minitab:
• % Gage R&R < 10% of study variation → Acceptable measurement system.
• % Gage R&R > 30% of study variation → Poor measurement system, needs improvement.
• Part-to-Part Variation should be the dominant source of variation (> 80%).
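A tiny helper capturing the common 10%/30% acceptance bands (a sketch: AIAG guidance typically applies these bands to %Study Variation, and results in the middle band are treated as conditionally acceptable depending on cost and application):

```python
def gage_rr_verdict(pct_gage_rr: float) -> str:
    """Classify a Gage R&R result using the common 10%/30% bands."""
    if pct_gage_rr < 10:
        return "acceptable"
    if pct_gage_rr <= 30:
        return "conditional"
    return "unacceptable"

print(gage_rr_verdict(8.5), gage_rr_verdict(22.0), gage_rr_verdict(41.0))
```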
Using SigmaXL for Attribute Agreement Analysis:
1. Go to SigmaXL > Statistical Tools > Attribute Agreement Analysis.
2. Input Pass/Fail or categorical data from multiple operators.
3. Calculate Kappa statistics for operator agreement.
4. Generate Plots (e.g., Consistency vs. Correctness plots).



2. Data Collection and Analysis Using
Minitab/SigmaXL/JMP
Using JMP for Capability Analysis in MSA:
1. Import measurement data into JMP.
2. Run Control Charts and Histograms to check variation patterns.
3. Perform Measurement Uncertainty Analysis using built-in models.
4. Visualize trends with interactive graphs.

3. Real-world Applications of MSA


MSA is widely used in industries like automotive, aerospace, electronics, pharmaceuticals, and healthcare to ensure measurement
accuracy.
Example 1: Automotive Industry – Gage R&R Study for Engine Components
• Challenge: Inconsistent bore diameter measurements in an engine assembly line.
• Solution: Conducted Gage R&R Study using Minitab.
• Outcome: Found that operator technique contributed 25% to total variation. Standardized training reduced variation to <10%,
improving part acceptance rates.
Example 2: Electronics Industry – Attribute Agreement Analysis for PCB Inspection
• Challenge: Different inspectors had low agreement on defect classification for printed circuit boards (PCBs).
• Solution: Used SigmaXL to perform Attribute Agreement Analysis (AAA).
• Outcome: Identified operator training gaps and improved standard defect classification, increasing agreement from 60% to 90%.



Example 3: Pharmaceutical Industry – Measurement System Traceability
• Challenge: Ensuring weighing balances meet ISO 17025 calibration requirements.
• Solution: Used JMP for measurement uncertainty analysis and Minitab for control charts.
• Outcome: Improved weighing consistency, ensuring traceability to national standards.



Thank You !
Subhransu Sekhar Mohanty
