INSTRUMENTATION
Presented by: Group 3
An important part of a research study is the instrument used to gather data, because the quality of the research output depends to a large extent on the quality of the research instruments used. "Instrument" is the generic term researchers use for a measurement device, such as a survey, test, or questionnaire. To distinguish the two terms: the instrument is the device, while instrumentation is the course of action, that is, the process of developing, testing, and using the device.
Researchers can choose the type of instrument to use based on their research questions or objectives. There are two broad categories of instruments: 1) researcher-completed instruments and 2) subject-completed instruments.
Examples:
Researcher-completed instruments       Subject-completed instruments
Rating scales                          Questionnaires
Interview schedules/guides             Self-checklists
Tally sheets                           Attitude scales
Flowcharts                             Personality inventories
Performance checklists                 Achievement/aptitude tests
Time-and-motion logs                   Projective devices
Observation forms                      Sociometric devices
A critical part of the research study is the instrument used to gather data. The validity of the findings and conclusions drawn from the gathered data will depend greatly on the characteristics of your instruments. We will discuss the general criteria of good research instruments: validity and reliability.
VALIDITY
Validity refers to the extent to which the instrument measures what it intends to measure and performs as it is designed to perform. It is unusual, and nearly impossible, for an instrument to be 100% valid, which is why validity is generally measured in degrees. As a process, validation involves collecting and analyzing data to assess the accuracy of an instrument.
TYPES OF VALIDITY
Content validity: the extent to which a research instrument accurately measures all aspects of a construct.
Construct validity: the extent to which a research instrument (or tool) measures the intended construct.
Criterion validity: the extent to which a research instrument is related to other instruments that measure the same variables.
Content validity looks at whether the instrument adequately covers all the content that it should with respect to the variable. In other words, it refers to the appropriateness of the content of an instrument. It answers the question "Do the measures (questions, observation logs, etc.) accurately assess what you want to know?" or "Does the instrument cover the entire domain related to the variable, or construct, it was designed to measure?"
Construct validity refers to whether you can draw valid inferences from test scores about the concept being studied.
There are three types of evidence that can be used to demonstrate that a research instrument has construct validity:
1. Homogeneity: the instrument measures a single construct.
2. Convergence: the instrument correlates with other instruments that measure similar concepts. If no similar instruments are available, however, this evidence cannot be gathered.
3. Theory evidence: observed behavior is consistent with the theoretical propositions of the construct the instrument measures.
The final measure of validity is criterion validity. A criterion is any other instrument that measures the same variable. Correlations can be computed to determine the extent to which different instruments measure the same variable.
Criterion validity is measured in three ways (a minimal sketch follows this list):
1. Convergent validity: shows that an instrument is highly correlated with instruments measuring similar variables. Example: a measure of geriatric suicidal ideation correlated significantly and positively with depression, loneliness, and hopelessness.
2. Divergent validity: shows that an instrument is weakly correlated with instruments that measure different variables. Example: there should be a low correlation between an instrument that measures motivation and one that measures self-efficacy.
3. Predictive validity: means that the instrument should correlate highly with future criteria. Example: a high self-efficacy score for performing a task should predict the likelihood that a participant will complete the task.
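To make the three checks concrete, here is a minimal Python sketch using simulated data. The variable names, sample size, and effect sizes are illustrative assumptions, not from the source; any correlation routine would do.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 150

# Hypothetical scores: a new self-efficacy scale, an established scale for
# the same construct, and an unrelated motivation scale.
self_efficacy = rng.normal(size=n)
established = self_efficacy + rng.normal(scale=0.5, size=n)   # similar construct
motivation = rng.normal(size=n)                               # different construct
completed = (self_efficacy + rng.normal(size=n)) > 0          # future criterion

r_conv, _ = pearsonr(self_efficacy, established)              # expect high r
r_div, _ = pearsonr(self_efficacy, motivation)                # expect low r
r_pred, _ = pearsonr(self_efficacy, completed.astype(float))  # predictive check

print(f"convergent r={r_conv:.2f}, divergent r={r_div:.2f}, "
      f"predictive r={r_pred:.2f}")
```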
 RELIABILITY
Reliability relates to the extent to which the instrument is consistent. The instrument should obtain approximately the same responses when applied to respondents who are similarly situated. Likewise, when the instrument is applied at two different points in time, the responses should correlate highly with one another. Hence, reliability can be measured by correlating the responses of subjects exposed to the instrument at two different times, or by correlating the responses of subjects who are similarly situated.
3 Attributes of Reliability in Quantitative Research
1. Internal consistency or homogeneity: the extent to which all the items on a scale measure one construct.
2. Stability or test-retest correlation: the consistency of results using an instrument with repeated testing.
3. Equivalence: consistency among responses of multiple users of an instrument, or among alternate forms of an instrument.
1. Internal consistency or homogeneity means that an instrument measures a single concept. The concept is measured through questions or indicators, and each question must correlate highly with the total score for that dimension. For example, if teaching effectiveness is measured with seven questions, the score on each question must correlate highly with the total teaching-effectiveness score.
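The text does not name a statistic for this, but a standard summary of internal consistency is Cronbach's alpha. A minimal sketch, assuming a respondents-by-items score matrix; the seven-question teaching-effectiveness data below are simulated and purely hypothetical:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each question
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical: 100 students rate teaching effectiveness on seven questions.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))                      # shared construct
scores = latent + rng.normal(scale=0.7, size=(100, 7))  # seven correlated items
print(f"alpha = {cronbach_alpha(scores):.2f}")          # near 1 = homogeneous
```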
There are three ways to check the internal consistency or homogeneity of an index (illustrated in the sketch after this list).
a) Split-half correlation. We could split a four-question "exposure to televised news" index in half, so that there are two groups of two questions, and see if the two sub-scales are highly correlated. That is, do people who score high on the first half also score high on the second half?
b) Average inter-item correlation. We can also determine the internal consistency for each question on the index. If the index is homogeneous, each question should be highly correlated with the other three questions.
c) Average item-total correlation. We can correlate each question with the total score of the TV news exposure index to examine the internal consistency of items. This gives us an idea of the contribution of each item to the reliability of the index.
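A minimal sketch of all three checks on a hypothetical four-question index; the simulated data, sample size, and the particular split into halves are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
latent = rng.normal(size=200)  # hypothetical "exposure to televised news"
items = np.column_stack(
    [latent + rng.normal(scale=0.8, size=200) for _ in range(4)])

# a) Split-half: correlate the two two-question sub-scale totals.
half1, half2 = items[:, :2].sum(axis=1), items[:, 2:].sum(axis=1)
split_half_r = np.corrcoef(half1, half2)[0, 1]

# b) Average inter-item correlation: mean of the off-diagonal correlations.
corr = np.corrcoef(items, rowvar=False)
avg_inter_item = corr[np.triu_indices(4, k=1)].mean()

# c) Average item-total correlation: each question against the index total.
total = items.sum(axis=1)
avg_item_total = np.mean(
    [np.corrcoef(items[:, j], total)[0, 1] for j in range(4)])

print(f"split-half={split_half_r:.2f}, inter-item={avg_inter_item:.2f}, "
      f"item-total={avg_item_total:.2f}")
```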
2. Stability or test-retest correlation. A highly reliable test is stable over time. Test-retest correlation provides an indication of this stability: it is the extent to which scores on a test are essentially invariant when the test is administered at two different points in time. This definition focuses on the measurement instrument and the obtained test scores in terms of test-retest stability.
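A minimal sketch, assuming the same respondents are tested twice; the scores below are made-up illustrative data:

```python
from scipy.stats import pearsonr

# Hypothetical: ten respondents take the same test two weeks apart.
time1 = [72, 65, 88, 54, 91, 77, 60, 83, 69, 75]
time2 = [70, 68, 85, 57, 93, 74, 63, 80, 71, 78]

r, _ = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f}")  # r near 1 indicates stability over time
```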
3. Equivalence. Equivalence reliability is measured by the correlation of scores between different versions of the same instrument, or between instruments that measure the same or similar constructs, such that one instrument can be reproduced by the other. It also covers inter-rater consistency: the extent to which different investigators using the same instrument to measure the same individuals at the same time obtain consistent results.
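A minimal sketch, assuming two alternate forms of the same test given to the same students; the scores are made-up data, and correlating two raters' scores of the same individuals works the same way:

```python
from scipy.stats import pearsonr

# Hypothetical: eight students take Form A and alternate Form B of a test.
form_a = [34, 28, 41, 22, 37, 30, 45, 26]
form_b = [32, 30, 43, 20, 38, 29, 44, 27]

r, _ = pearsonr(form_a, form_b)
print(f"alternate-forms r = {r:.2f}")  # high r suggests equivalent forms
```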
When you gather data, consider the readability of the instrument. Readability refers to the level of difficulty of the instrument relative to the intended users.
THANK YOU