Meas ch1
Uploaded by Sol Temesgen


Chapter One

CONCEPTS OF MEASURING SYSTEMS


Units
• The result of a measurement of a physical quantity
must be defined both in kind and magnitude.
• The standard measure of each kind of physical
quantity is called a unit.
• Magnitude of a physical quantity = (Numerical ratio) × (Unit)
Absolute Units
• An absolute system of units is defined as a system in
which the various units are all expressed in terms of
a small number of fundamental units.
Fundamental and Derived Units
• In Science and Technology two kinds of units are
used. Fundamental units and derived units
Fundamental units
• The fundamental units in mechanics are measures of
length, mass and time. The sizes of fundamental
units, whether centimeter or meter or foot, gram, or
kilogram or pound, second or hour are quite arbitrary
and can be selected to fit a certain set of
circumstances.
• Since length, mass and time are fundamental to most
other physical quantities besides those in mechanics;
they are called the Primary fundamental units.
Derived units
• All other units which can be expressed in terms
fundamental units with the help of physical
quantities are called Derived Units.
• Every derived unit originates from some physical
law or equation which defines that unit.
• The volume V of a room is equal to the product of
its length (l), width (b), and height (h) therefore
V=lbh
Cont.
• If the meter is chosen as the unit of length, then the volume of a room 6 m × 4 m × 5 m is equal to 120 m³.
• The numerical measures (6 × 4 × 5 = 120) as well as the units (m × m × m = m³) are multiplied. The derived unit for volume is thus m³.
Some fundamental units

S. No  Name                Unit      Symbol
1      Length              meter     m
2      Mass                kilogram  kg
3      Time                second    s
4      Electric Current    ampere    A
5      Temperature         kelvin    K
6      Luminous Intensity  candela   cd
Cont.
Supplementary Units

S. No  Name         Unit       Symbol
1      Plane angle  radian     rad
2      Solid angle  steradian  sr

Derived Units

S. No  Name                     Unit
1      Area                     m²
2      Volume                   m³
3      Density                  kg/m³
4      Angular velocity         rad/s
5      Angular acceleration     rad/s²
6      Pressure, Stress         N/m²
7      Energy                   joule (N·m)
8      Charge                   coulomb (A·s)
9      Electric Field Strength  V/m
10     Capacitance              farad (A·s/V)
11     Frequency                hertz (1/s)
12     Velocity                 m/s
13     Acceleration             m/s²
14     Force                    newton (kg·m/s²)
15     Power                    watt (J/s)
Dimensions
• Dimensions are physical quantities that can be measured, whereas units are arbitrary names that correspond to particular dimensions.
• Example: length is a dimension, whereas the meter is a unit that describes length.
• A dimension is written in a characteristic notation, e.g. [L] for length, [T] for time.
Cont.
• A derived unit is always recognized by its dimensions, which can be defined as the complete algebraic formula for the derived unit.
• Thus when a quantity such as the area A of a rectangle is measured in terms of other quantities, i.e. length l and width b, the relationship is expressed mathematically as
Area A = l × b
• Since l and b each have the dimension of length, [L], the dimension of area is
[A] = [L] × [L] = [L²]
Cont.
• If the meter (m) is the unit of length, then the square meter (m²) can be used as the unit of area.
• Mechanics has three fundamental units: length, mass and time.
• The dimensional symbols are [L] for length, [M] for mass and [T] for time.
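The bookkeeping above — multiplying quantities adds the exponents of their dimensional symbols — can be sketched in a few lines. The dictionary representation and the function name are illustrative, not from the text.

```python
# A minimal sketch of dimensional bookkeeping: a dimension is a mapping
# from base symbols L, M, T to exponents, and multiplying two physical
# quantities adds the exponents of their dimensions.
from collections import Counter

def dim_mul(a, b):
    """Multiply two dimensions by adding their exponents."""
    out = Counter(a)
    out.update(b)
    return dict(out)

length = {"L": 1}
area = dim_mul(length, length)   # [A] = [L] × [L] = [L²]
print(area)                      # {'L': 2}
volume = dim_mul(area, length)   # [V] = [L²] × [L] = [L³]
print(volume)                    # {'L': 3}
```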
Conversions
1 ft = 30.48 cm= 12 inches
1 m = 3.28 ft
1 kg = 2.2 pounds
1 hp = 746 W
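The conversion factors listed above can be wrapped as small helper functions. The constant and function names here are illustrative, not part of the text.

```python
# A minimal sketch of the unit conversions above.
FT_TO_CM = 30.48   # 1 ft = 30.48 cm
FT_TO_IN = 12      # 1 ft = 12 inches
M_TO_FT = 3.28     # 1 m ≈ 3.28 ft
KG_TO_LB = 2.2     # 1 kg ≈ 2.2 pounds
HP_TO_W = 746      # 1 hp = 746 W

def meters_to_feet(m):
    """Convert a length in meters to feet (approximate factor)."""
    return m * M_TO_FT

def hp_to_watts(hp):
    """Convert mechanical horsepower to watts."""
    return hp * HP_TO_W

print(meters_to_feet(2))   # ≈ 6.56
print(hp_to_watts(0.5))    # 373.0
```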
Standards and Classification of Measurement
• A standard is a physical representation of a unit of measurement.
• The term standard is applied to a piece of equipment having a known measure of a physical quantity.
• Standards are used for obtaining the values of the physical properties of other equipment by comparison methods.
• The classification of standards is based on the
function and the application of the standards.
(a) International Standards
• Defined based on international agreement.
• Represent units of measurement closest to
achievable accuracy with current technology and
scientific methods.
• Checked and evaluated regularly against absolute
measurements using fundamental units.
• Maintained at the International Bureau of Weights
and Measures (BIPM).
• Not accessible to ordinary users of measuring
instruments for calibration or comparison purposes.
(b) Primary Standards
• Absolute standards of extremely high accuracy,
serving as ultimate reference standards.
• Maintained by national standards laboratories
worldwide.
• Represent fundamental units and some derived
electrical and mechanical units.
• Independently calibrated by absolute
measurements at each national laboratory.
(c) Secondary Standards
• Basic reference standards in industrial
measurement laboratories.
• Maintenance and calibration responsibility lies with
the specific industry.
• Checked locally against reference standards
available in the area.
• Periodically sent to national standards laboratories
for calibration and comparison against primary
standards.
(d) Working Standards
• Major tools of a measurement laboratory.
• Used to check and calibrate general laboratory
instruments for accuracy and performance.
• Example: A manufacturer of precision resistances
may use a Standard Resistance (a working
standard) in the quality control department to
check the values of manufactured resistors,
ensuring measurement setups perform within
specified accuracy limits.
Measurements and Measuring Systems
Measurements
• The measurement of a given quantity is essentially
an act or the result of comparison between the
quantity and a predefined standard.
• Since two quantities are compared the result is
expressed in numerical values.
Methods of Measurement
• The methods of measurement may be broadly classified into two categories: (i) direct methods and (ii) indirect methods.
(i) Direct Method
• In these methods, the unknown quantity is directly compared against a standard.
• The result is expressed as a numerical value and a unit.
• Direct methods are quite common for the measurement of physical quantities like length, mass and time.
(ii) Indirect Method
• In these methods, the unknown quantity is determined by measuring other quantities that are related to it through a known physical relationship.
Instrument
• An instrument may be defined as a device for determining the value or magnitude of a quantity or variable.
• The basic types of measuring instruments are:
(i) Mechanical measuring instruments
(ii) Electrical measuring instruments
(iii) Electronic measuring instruments
Classification of Instruments
• There are many ways in which instruments can be classified. Broadly, instruments are classified into two categories:
– Absolute Instruments
– Secondary Instruments
• Absolute Instruments:
– Provide magnitude of measured quantity in terms of
physical constants.
– Examples: Tangent Galvanometer, Rayleigh’s Current
Balance.
• Secondary Instruments:
– Measurement of quantity observed through output
indicated by instrument.
– Calibrated by comparison with absolute or previously
calibrated secondary instrument.
– Examples: Voltmeter, Glass Thermometer, Pressure
Gauge.
Functions of Instruments and Measurement Systems
• Instruments may be classified based on their function.
• The three main functions are:
i. Indicating Function
ii. Recording Function
iii. Controlling Function
(i) Indicating Function
• These instruments provide information regarding the variable quantity under measurement; most of the time this information is provided by the deflection of a pointer.
• This kind of function is known as the indicating function of the instruments.
(ii) Recording Function
• Recording instruments give a continuous record of the
quantity being measured over a specified period.
(iii) Controlling Function
• This function is widely used in the industrial world. These instruments control processes.
Characteristics of Instruments and Measurement
Systems
• Static Characteristics:
– Criteria defined for measurements of quantities that
are slowly varying or almost constant.
– Do not vary with time.
• Dynamic Characteristics:
– Relation between input and output expressed using
differential equations.
– Applicable when quantity under measurement changes
rapidly with time.
Dynamic Characteristics

Desirable           Undesirable
Speed of Response   Lag
Fidelity            Dynamic Error
Cont..
Calibration
• The various performance characteristics are obtained in one
form or another by a process called “Calibration”.
• It is the process of making an adjustment or marking a scale
so that the readings of an instrument agree with the accepted
and the certified standard.
Some important definitions
1. Static Error: It is the difference between the measured value and the true value of the quantity.
Mathematically,
δA = Am − At ----------- eq (1.1)
Where δA: absolute error or static error
Am: measured value of the quantity
At: true value of the quantity
Cont..
2. Static Correction: It is the difference between the true value and the measured value of the quantity. Mathematically,
δC = (−δA) = (At − Am)
Limiting error or Relative error:
εr = δA/At = (Am − At)/At
Percentage relative error:
% εr = (δA/At) × 100
From the relative error, accuracy is expressed as
A = 1 − |εr|
Where A: relative accuracy
and
a = A × 100 %
Where a: percentage accuracy
Error can also be expressed as a percentage of Full Scale Deflection (FSD) as,
% error = [(Am − At)/F.S.D.] × 100
Example: The expected value of the voltage to be measured is 150 V. However, the measurement gives a value of 149 V. Calculate (i) absolute error, (ii) percentage error, (iii) relative accuracy, (iv) percentage accuracy, (v) the error expressed as a percentage of full scale reading if the scale range is 0 – 200 V.

Solution: Expected value implies true value
At = 150 V
Am = 149 V
(i) Absolute error δA = Am − At = −1 V
(ii) % εr = [(Am − At)/At] × 100 = (−1/150) × 100 = −0.67 %
(iii) A = 1 − |εr| = 1 − |−1/150| = 0.9933
(iv) % a = A × 100 = 99.33 %
(v) % error (F.S.D.) = [(Am − At)/F.S.D.] × 100 = (−1/200) × 100 = −0.5 %
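The static-error formulas used in this example can be collected into a few helper functions. This is a minimal sketch; the variable names Am and At follow the text, while the function names are illustrative.

```python
# A minimal sketch of the static-error formulas (eq. 1.1 onward).
def absolute_error(Am, At):
    """Static (absolute) error: δA = Am − At."""
    return Am - At

def relative_error(Am, At):
    """Relative error: εr = (Am − At)/At."""
    return (Am - At) / At

def percentage_accuracy(Am, At):
    """Percentage accuracy: a = (1 − |εr|) × 100."""
    return (1 - abs(relative_error(Am, At))) * 100

def fsd_error_percent(Am, At, fsd):
    """Error expressed as a percentage of full scale deflection."""
    return (Am - At) / fsd * 100

# Reproduce the 150 V / 149 V example with a 0–200 V scale:
print(absolute_error(149, 150))                   # -1
print(round(relative_error(149, 150) * 100, 2))   # -0.67
print(round(percentage_accuracy(149, 150), 2))    # 99.33
print(fsd_error_percent(149, 150, 200))           # -0.5
```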


Example:
A voltage has a true value of 1.50 V. An analog indicating instrument with a scale range of 0 – 2.50 V shows a voltage of 1.46 V. What are the values of the absolute error and the correction? Express the error as a fraction of the true value and of the full scale deflection (F.S.D.).

Solution:
Absolute error δA = Am − At = 1.46 − 1.50 = −0.04 V
Absolute correction δC = −δA = +0.04 V
Relative error % εr = (δA/At) × 100 = (−0.04/1.50) × 100 = −2.67 %
Error as a percentage of F.S.D. = (−0.04/2.50) × 100 = −1.60 %
Where F.S.D. is the Full Scale Deflection.
Example: A meter reads 127.50 V and the true value of the voltage is 127.43 V. Determine (a) the static error and (b) the static correction for this instrument.
Solution:
From eq. 1.1, the error is
δA = Am − At = 127.50 − 127.43 = +0.07 V
Static correction δC = −δA = −0.07 V
Example: A thermometer reads 95.45 °C and the static correction given in the correction curve is −0.08 °C. Determine the true value of the temperature.
Solution:
True value of the temperature At = Am + δC = 95.45 − 0.08 = 95.37 °C
3. Accuracy
• Refers to the closeness of an instrument's reading to the true value of the measured quantity.
• Expression of Accuracy
– Percentage of Full-Scale Reading:
  – Used for instruments with uniform scales.
  – Accuracy is specified as a percentage of the instrument's full-scale value.
• Example
– Full-scale reading: 50 units
– Accuracy: ±0.1 % of full scale
– Error allowance: ±0.1 % × 50 = ±0.05 units
• The measurement can deviate by up to 0.05 units from the true value, regardless of the actual reading.
Example: A wattmeter having a range of 1000 W has an error of ±1 % of full scale deflection. If the true power is 100 W, what would be the range of readings? Suppose the error is specified as a percentage of the true value; what would be the range of the readings?
Solution:
When the error is specified as a percentage of full scale deflection, the magnitude of the limiting error at full scale = ±(1/100) × 1000 = ±10 W
Thus the wattmeter reading when the true reading is 100 W may be 100 ± 10 W, i.e. between 90 and 110 W
Relative error = ±(10/100) × 100 = ±10 %
Now suppose the error is specified as a percentage of the true value.
The magnitude of the error = ±(1/100) × 100 = ±1 W
Therefore the meter may read 100 ± 1 W, or between 99 and 101 W
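The contrast between the two error specifications can be sketched as two small functions. The function names are illustrative, not from the text.

```python
# A minimal sketch contrasting the two error specifications in the
# wattmeter example: % of full scale deflection vs % of true value.
def reading_range_fsd(true_value, fsd, error_percent):
    """Reading range when error is given as % of full scale deflection."""
    limit = error_percent / 100 * fsd
    return (true_value - limit, true_value + limit)

def reading_range_true(true_value, error_percent):
    """Reading range when error is given as % of the true value."""
    limit = error_percent / 100 * true_value
    return (true_value - limit, true_value + limit)

print(reading_range_fsd(100, 1000, 1))   # (90.0, 110.0)
print(reading_range_true(100, 1))        # (99.0, 101.0)
```

Note how the full-scale specification gives a much wider range at readings far below full scale, which is why the relative error there grows to ±10 %.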
• Accuracy can also be defined in terms of static error.
4. Precision in Measurements
• Definition: The degree of agreement within a group of measurements.
• Key Components:
– Conformity: Consistency of repeated measurements.
– Significant Figures: Level of detail in a measurement (more significant figures
= higher precision).
• High Precision ≠ Accuracy: Precise measurements may not be accurate
(close to the true value).
• Precision Error: Error caused by limitations of the measuring
instrument (e.g., scale constraints).
• Example:
True Value: 2,385,692 ohms (2.39 MΩ)
Measured Value: 2.4 MΩ (due to ohmmeter scale limitations)
• Precision Error: Measurement is consistent but inaccurate due to tool
limitations.
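The precision example above — an ohmmeter whose scale resolves only two significant figures — can be mimicked by rounding to significant figures. This is a sketch; the helper name is illustrative.

```python
# A minimal sketch of a scale-limited reading: the true resistance is
# 2,385,692 Ω, but a two-significant-figure scale consistently shows 2.4 MΩ.
def to_sig_figs(x, n):
    """Round x to n significant figures via the %g format."""
    return float(f"{x:.{n}g}")

true_ohms = 2_385_692
reading = to_sig_figs(true_ohms, 2)
print(reading)   # 2400000.0  — consistent (precise) but inaccurate
```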
5. Sensitivity in Measurements
• Definition: Sensitivity refers to the smallest change in the measured variable that an instrument can detect and respond to.
• Sensitivity is a measure of how much the instrument’s output
changes when there is a small change in the quantity being
measured.
• The more sensitive an instrument is, the smaller the change it can
detect and respond to accurately.
• Example:
• If an instrument measures temperature, its sensitivity would be the
smallest temperature change that it can detect and reflect in its reading.
• For example, if a thermometer is sensitive to 0.1°C, a change of 0.1°C in
temperature will cause a noticeable change in the thermometer's output
reading.
6. Hysteresis in Instruments
• Definition: Hysteresis occurs when an instrument shows different output values for the same input depending on whether the input is increasing or decreasing.
• Key Points:
– The output differs for increasing vs decreasing inputs.
– The maximum variation in output, and hence the hysteresis error, typically occurs around 50 % of the full scale of the instrument.
• Example:
A pressure sensor may show a different reading when the
pressure increases to 50 units vs when it decreases back to
50 units, even if the actual pressure is the same.
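The pressure-sensor example can be sketched with a toy model in which the reading is biased by the direction of approach. The fixed-offset model is purely illustrative, not a physical sensor model.

```python
# A minimal sketch of hysteresis: the indicated value depends on whether
# the input arrived from below (increasing) or from above (decreasing).
def indicated(pressure, previous, offset=0.2):
    """Return a reading biased by the direction of approach."""
    if pressure > previous:      # input increasing: reads low
        return pressure - offset
    elif pressure < previous:    # input decreasing: reads high
        return pressure + offset
    return pressure

print(indicated(50, 40))   # ≈ 49.8  (pressure rose to 50)
print(indicated(50, 60))   # ≈ 50.2  (pressure fell to 50)
```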
Threshold in Measurements
• Definition: The minimum value at which an instrument can reliably detect a change in the measured variable.
• Below this threshold, no detectable response occurs.
• Determines the sensitivity of the instrument.
• High threshold = less sensitive; Low threshold = more
sensitive, but may introduce noise.
• Example: A temperature sensor with a 0.5°C
threshold will only register temperature changes ≥
0.5°C.
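The 0.5 °C temperature-sensor example can be sketched as a sensor that only registers a new reading when the input moves by at least the threshold. The class name and interface are illustrative.

```python
# A minimal sketch of threshold behaviour: changes smaller than the
# threshold leave the registered reading unchanged.
class ThresholdSensor:
    def __init__(self, threshold):
        self.threshold = threshold
        self.reading = None

    def update(self, value):
        """Register value only if it differs from the last reading
        by at least the threshold; return the current reading."""
        if self.reading is None or abs(value - self.reading) >= self.threshold:
            self.reading = value
        return self.reading

s = ThresholdSensor(0.5)
print(s.update(20.0))   # 20.0
print(s.update(20.3))   # 20.0  (0.3 °C change is below threshold)
print(s.update(20.6))   # 20.6  (0.6 °C change is registered)
```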
Ended