
CALIBRATION OF INSTRUMENTS

1.1 INTRODUCTION

It is presumed that most instruments exhibit linearity in their range of operation. An instrument is said to be linear if the input physical quantity and the resulting output reading of the instrument have a linear relationship. The output of the instrument could be a linear deflection, an angular deflection, a voltage, etc. This output has to be displayed in terms of the physical quantity that the instrument was meant to measure in the first place. The manufacturer calibrates this display by comparing the output of the instrument against a standard input. Having obtained such an instrument, marked and calibrated by the manufacturer, the user has to calibrate it periodically to verify that it is working within the prescribed limits. The main aim of most experiments in the measurement laboratory is to carry out such a calibration. In order to calibrate an instrument, we need a standard input that can be measured ten times more accurately than with the instrument under calibration. The standard input is varied over the measurement range of the instrument to be calibrated, and the instrument is calibrated from the standard input values and the corresponding instrument readings. Since the readings obtained from the instrument carry uncertainties, we use the readings recorded against the standard input to determine whether the true value of the input can be predicted within statistical limits.

Table 1.1: Sample table

    No.    qi    qo (Increasing)    qo (Decreasing)
    1
    2
    3

1.2 PROCEDURE

In order to calibrate an instrument, a standard input that can be measured ten times more accurately than the given instrument is used as the input. The output reading qo of the instrument is noted against each value of the standard input, repeating the experiment for various values of the standard input. The data recorded during the calibration of an instrument is typically entered in Table 1.1. A word of caution while using these data: the output values recorded during increasing and decreasing values of the input should not be averaged, but must be treated as separate sets. Therefore, in the above example, there will be 6 readings to be used in further calculations. Most practical instruments are designed to exhibit linearity because it maximizes the convenience of their use. One can fit a straight line through the output values obtained from such instruments. The most widely accepted method of fitting a straight line is least-squares minimization, which minimizes the vertical deviations between the data points and the straight line. The slope m and the intercept b of the best-fit straight line obtained by least-squares minimization are respectively given by

m = \frac{N \sum q_i q_o - \left(\sum q_i\right)\left(\sum q_o\right)}{N \sum q_i^2 - \left(\sum q_i\right)^2}    (1.1)

and

b = \frac{\left(\sum q_o\right)\left(\sum q_i^2\right) - \left(\sum q_i q_o\right)\left(\sum q_i\right)}{N \sum q_i^2 - \left(\sum q_i\right)^2}    (1.2)

where N represents the total number of data points. The slope and intercept determined above relate the input q_i and output q_o as follows:

q_o = m q_i + b    (1.3)
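As an illustration, the following Python sketch applies Equations 1.1 and 1.2 to hypothetical calibration data and reports the fitted line of Equation 1.3. The function name and all numerical values are assumptions for demonstration only, not measurements from an actual instrument.

```python
# A minimal sketch of Equations 1.1 and 1.2, assuming hypothetical
# calibration data. The function name and numbers are illustrative only.

def least_squares_fit(qi, qo):
    """Return slope m and intercept b of the best-fit line qo = m*qi + b."""
    N = len(qi)
    sum_qi = sum(qi)
    sum_qo = sum(qo)
    sum_qiqo = sum(x * y for x, y in zip(qi, qo))
    sum_qi2 = sum(x * x for x in qi)
    denom = N * sum_qi2 - sum_qi ** 2
    m = (N * sum_qiqo - sum_qi * sum_qo) / denom        # Equation 1.1
    b = (sum_qo * sum_qi2 - sum_qiqo * sum_qi) / denom  # Equation 1.2
    return m, b

# Hypothetical Table 1.1 data: three standard inputs read twice each,
# once while increasing and once while decreasing the input (N = 6).
qi = [1.0, 2.0, 3.0, 3.0, 2.0, 1.0]
qo = [1.1, 2.0, 3.2, 3.1, 2.1, 0.9]

m, b = least_squares_fit(qi, qo)
print(f"qo = {m:.4f} * qi + {b:.4f}")  # fitted line, Equation 1.3
```

Note that the increasing and decreasing readings enter as six separate points, as cautioned above, rather than as three averaged pairs.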

One can determine the corrected value of the input, denoted q_i', by rewriting Equation 1.3 in the following form:

q_i' = \frac{q_o - b}{m}    (1.4)

Equation 1.4 effectively provides a correction for the bias portion (systematic error) of the total inaccuracy of the instrument. In order to obtain a statistical estimate of the random error, the standard deviation of the input values obtained using Equation 1.4 with respect to the actual input values has to be computed. It is given by the following equation:

S_{q_i}^2 = \frac{1}{N} \sum \left(q_i' - q_i\right)^2 = \frac{S_{q_o}^2}{m^2}    (1.5)

where S_{q_o} is the standard deviation of the output readings about the fitted straight line.
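Continuing the sketch above, the following lines compute the bias-corrected inputs of Equation 1.4 and the standard deviation S_{q_i} of Equation 1.5. Since q_i' - q_i = (q_o - m q_i - b)/m, this S_{q_i} is exactly the RMS residual of the fit divided by the slope m, which is the second form in Equation 1.5. The helper name is illustrative.

```python
# A sketch of Equations 1.4 and 1.5, reusing qi, qo, m and b from the
# previous sketch.

def input_std_dev(qi, qo, m, b):
    """Return S_qi, the standard deviation of the corrected input values."""
    N = len(qi)
    qi_corr = [(y - b) / m for y in qo]  # Equation 1.4 for each reading
    variance = sum((xc - x) ** 2 for xc, x in zip(qi_corr, qi)) / N
    return variance ** 0.5               # Equation 1.5

S_qi = input_std_dev(qi, qo, m, b)
print(f"S_qi = {S_qi:.4f}")
```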

Once the standard deviation of the input value S_{q_i} is determined, the instrument is ready for actual measurements without the help of the standard input. The instrument calibration can be summarized as follows.

1. The output value q_o, together with m and b, is used in Equation 1.4 to compute the input value q_i' corrected for bias errors.

2. The true input value, corrected for both bias and random errors, is expressed as an uncertainty: q_i' ± 3 S_{q_i}.

3. The above results, for the example shown in Table 1.1, can be presented in Table 1.2, which can be used to determine the true value of the input after correcting for the bias, together with the possible range of the true value accounting for random errors. The q_o column of this table can have a smaller discrete step than that used in the experiment, since the table can be generated using a computer, as in the sketch below.
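As a rough illustration of generating such a Table 1.2 by computer, the following sketch tabulates the bias-corrected input and its ±3 S_{q_i} band over a fine grid of q_o values, reusing m, b and S_{q_i} from the sketches above. The grid limits, step size and column layout are assumptions, since the source does not specify the exact format of Table 1.2.

```python
# A sketch of a Table 1.2-style lookup table: for each output reading qo,
# tabulate the bias-corrected input (Equation 1.4) and its +/- 3*S_qi band.
# Grid limits and step are hypothetical.

qo_min, qo_max, step = 1.0, 3.2, 0.1
n = int(round((qo_max - qo_min) / step))
print(f"{'qo':>6}  {'qi (corrected)':>14}  {'qi - 3S':>8}  {'qi + 3S':>8}")
for k in range(n + 1):
    qo_val = qo_min + k * step
    qi_val = (qo_val - b) / m  # bias-corrected input, Equation 1.4
    print(f"{qo_val:6.2f}  {qi_val:14.4f}  "
          f"{qi_val - 3 * S_qi:8.4f}  {qi_val + 3 * S_qi:8.4f}")
```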
