
Chapter 8 Quantitative Methods

We turn now from the introduction, the purpose, and the questions and hypotheses to the method section of
a proposal. This chapter presents essential steps in designing quantitative methods for a research proposal or
study, with specific focus on survey and experimental designs. These designs reflect postpositivist
philosophical assumptions, as discussed in Chapter 1. For example, determinism suggests that examining the
relationships between and among variables is central to answering questions and hypotheses through surveys
and experiments. In one case, a researcher might be interested in evaluating whether playing violent video
games is associated with higher rates of playground aggression in kids, which is a correlational hypothesis that
could be evaluated in a survey design. In another case, a researcher might be interested in evaluating whether
violent video game playing causes aggressive behavior, which is a causal hypothesis that is best evaluated by a
true experiment. In each case, these quantitative approaches focus on carefully measuring (or experimentally
manipulating) a parsimonious set of variables to answer theory-guided research questions and hypotheses. In
this chapter, the focus is on the essential components of a method section in proposals for a survey or
experimental study.

Defining Surveys and Experiments
A survey design provides a quantitative description of trends, attitudes, and opinions of a population, or tests
for associations among variables of a population, by studying a sample of that population. Survey designs help
researchers answer three types of questions: (a) descriptive questions (e.g., What percentage of practicing
nurses support the provision of hospital abortion services?); (b) questions about the relationships between
variables (e.g., Is there a positive association between endorsement of hospital abortion services and support
for implementing hospice care among nurses?); and (c) in cases where a survey design is repeated over time in a longitudinal study, questions about predictive relationships between variables over time (e.g., Does Time 1 endorsement of support for hospital abortion services predict greater Time 2 burnout in nurses?).

An experimental design systematically manipulates one or more variables in order to evaluate how this
manipulation impacts an outcome (or outcomes) of interest. Importantly, an experiment isolates the effects of
this manipulation by holding all other variables constant. When one group receives a treatment and the other
group does not (the treatment condition being the manipulated variable of interest), the experimenter can isolate whether it is the treatment, and not other factors, that influences the outcome. For example, a sample of nurses could be randomly
assigned to a 3-week expressive writing program (where they write about their deepest thoughts and feelings)
or a matched 3-week control writing program (writing about the facts of their daily morning routine) to
evaluate whether this expressive writing manipulation reduces job burnout in the months following the
program (i.e., the writing condition is the manipulated variable of interest, and job burnout is the outcome of
interest). Whether a quantitative study employs a survey or experimental design, both approaches share a
common goal of helping the researcher make inferences about relationships among variables, and how the
sample results may generalize to a broader population of interest (e.g., all nurses in the community).

Components of a Survey Study Method Plan
The design of a survey method plan follows a standard format. Numerous examples of this format appear in
scholarly journals, and these examples provide useful models. The following sections detail typical
components. In preparing to design these components into a proposal, consider the questions on the checklist
shown in Table 8.1 as a general guide.

The Survey Design
The first parts of the survey method plan section can introduce readers to the basic purpose and rationale for
survey research. Begin the section by describing the rationale for the design. Specifically:

Identify the purpose of survey research. The primary purpose is to answer a question (or questions)
about variables of interest to you. A sample purpose statement could read: “The primary purpose of this
study is to empirically evaluate whether the number of overtime hours worked predicts subsequent
burnout symptoms in a sample of emergency room nurses.”
Indicate why a survey method is the preferred type of approach for this study. In this rationale, it can be beneficial to acknowledge the advantages of survey designs, such as the economy of the design and the rapid turnaround in data collection, as well as any constraints that preclude you from pursuing other designs (e.g., “An experimental design was not adopted to look at the relationship between overtime hours worked and burnout symptoms because it would be prohibitively difficult, and potentially unethical, to randomly assign nurses to work different amounts of overtime hours.”).
Indicate whether the survey will be cross-sectional—with the data collected at one point in time—or
whether it will be longitudinal—with data collected over time.
Specify the form of data collection. Fowler (2014) identified the following types: mail, telephone, the
Internet, personal interviews, or group administration (see also Fink, 2016; Krueger & Casey, 2014).
Using an Internet survey and administering it online has been discussed extensively in the literature
(Nesbary, 2000; Sue & Ritter, 2012). Regardless of the form of data collection, provide a rationale for
the procedure, using arguments based on its strengths and weaknesses, costs, data availability, and
convenience.

The Population and Sample
In the method section, follow the type of design with characteristics of the population and the sampling
procedure. Methodologists have written excellent discussions about the underlying logic of sampling theory
(e.g., Babbie, 2015; Fowler, 2014). Here are essential aspects of the population and sample to describe in a
research plan:

The population. Identify the population in the study. Also state the size of this population, if size can be
determined, and the means of identifying individuals in the population. Questions of access arise here,
and the researcher might refer to availability of sampling frames—mail or published lists—of potential
respondents in the population.
Sampling design. Identify whether the sampling design for this population is single stage or multistage
(called clustering). Cluster sampling is ideal when it is impossible or impractical to compile a list of the
elements composing the population (Babbie, 2015). A single-stage sampling procedure is one in which
the researcher has access to names in the population and can sample the people (or other elements)
directly. In a multistage or clustering procedure, the researcher first identifies clusters (groups or
organizations), obtains names of individuals within those clusters, and then samples within them.
Type of sampling. Identify and discuss the selection process for participants in your sample. Ideally you
aim to draw a random sample, in which each individual in the population has an equal probability of
being selected (a systematic or probabilistic sample). But in many cases it may be quite difficult (or
impossible) to get a random sample of participants. Alternatively, a systematic sample with a random start can have precision equivalent to random sampling (Fowler, 2014). In this approach, you choose a random start on a list and select every Xth person on the list, where X is determined by dividing the number of people on the list by the number to be selected (e.g., every 80th person). Finally, less desirable, but often used, is a nonprobability sample (or convenience sample), in which respondents are chosen based on their convenience and availability.
Stratification. Identify whether the study will involve stratification of the population before selecting the
sample. This requires that characteristics of the population members be known so that the population
can be stratified first before selecting the sample (Fowler, 2014). Stratification means that specific
characteristics of individuals (e.g., gender—females and males) are represented in the sample and the
sample reflects the true proportion in the population of individuals with certain characteristics. When
randomly selecting people from a population, these characteristics may or may not be present in the
sample in the same proportions as in the population; stratification ensures their representation. Also
identify the characteristics used in stratifying the population (e.g., gender, income levels, education).
Within each stratum, identify whether the sample contains individuals with the characteristic in the
same proportion as the characteristic appears in the entire population.
Sample size determination. Indicate the number of people in the sample and the procedures used to
compute this number. Sample size determination is at its core a tradeoff: A larger sample will provide
more accuracy in the inferences made, but recruiting more participants is time consuming and costly. In survey research, investigators sometimes choose a sample size based on selecting a fraction of the
population (say, 10%) or selecting a sample size that is typical based on past studies. These approaches
are not optimal; instead sample size determination should be based on your analysis plans (Fowler,
2014).
Power analysis. If your analysis plan consists of detecting a significant association between variables of
interest, a power analysis can help you estimate a target sample size. Many free online and commercially
available power analysis calculators are available (e.g., G*Power; Faul, Erdfelder, Lang, & Buchner,
2007; Faul, Erdfelder, Buchner, & Lang 2009). The input values for a formal power analysis will
depend on the questions you aim to address in your survey design study (for a helpful resource, see
Kraemer & Blasey, 2016). As one example, if you aim to conduct a cross-sectional study measuring the
correlation between the number of overtime hours worked and burnout symptoms in a sample of
emergency room nurses, you can estimate the sample size required to determine whether your
correlation significantly differs from zero (e.g., one possible hypothesis is that there will be a significant
positive association between number of hours worked and emotional exhaustion burnout symptoms).
This power analysis requires just three pieces of information:
1. An estimate of the size of correlation (r). A common approach for generating this estimate is to
find similar studies that have reported the size of the correlation between hours worked and
burnout symptoms. This seemingly simple task can often be difficult, either because there are no published studies looking at this association or because suitable published studies do not report a correlation coefficient. One tip: in cases where a published report measures variables of interest to you but does not report their correlation, one option is to contact the study authors and ask them to kindly provide the correlation analysis result from their dataset for your power analysis.
2. A two-tailed alpha value (α). This value is called the Type I error rate and refers to the risk we
want to take in saying we have a real non-zero correlation when in fact the effect is not real (and is due to chance), that is, a false positive effect. A commonly accepted alpha value is .05,
which refers to a 5% probability (5/100) that we are comfortable making a Type I error, such that
5% of the time we will say that there’s a significant (non-zero) relationship between number of
hours worked and burnout symptoms when in fact this effect occurred by chance and is not real.
3. A beta value (β). This value is called the Type II error rate and refers to the risk we want to take in
saying we do not have a significant effect when in fact there is a significant association, that is, a
false negative effect. Researchers commonly try to balance the risks of making Type I versus Type
II errors, with a commonly accepted beta value being .20. Power analysis calculators will
commonly ask for estimated power, which refers to 1 − beta (1 − .20 = .80).
You can then plug these numbers into a power analysis calculator to determine the sample size needed.
If you assume that the estimated association is r = .25, with a two-tailed alpha value of .05 and a beta
value of .20, the power analysis calculation indicates that you need at least 123 participants in the study
you aim to conduct.
To get some practice, try conducting this sample size determination power analysis. We used the
G*Power software program (Faul et al., 2007; Faul et al., 2009), with the following input parameters:
Test family: Exact
Statistical test: Correlation: Bivariate normal model
Type of power analysis: A priori: Compute required sample size
Tails: Two
Correlation ρ H1: .25
α err prob: .05
Power (1 – β err prob): .8
Correlation ρ H0: 0
This power analysis for sample size determination should be done during study planning prior to
enrolling any participants. Many scientific journals now require researchers to report a power analysis for
sample size determination in the Method section.
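
If you prefer to script this calculation, the required sample size can also be approximated in a few lines of code. The following is a minimal sketch, not taken from G*Power, that uses the standard Fisher z approximation for the power of a correlation test; the function name and inputs are illustrative, and the approximation lands within a participant or two of G*Power's exact bivariate-normal result.

    import math
    from scipy.stats import norm

    def n_for_correlation(r, alpha=0.05, power=0.80):
        """Approximate N needed to detect a correlation r with a two-tailed test."""
        z_alpha = norm.ppf(1 - alpha / 2)      # critical z for the two-tailed alpha
        z_beta = norm.ppf(power)               # z corresponding to power = 1 - beta
        c = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z transform of r
        return math.ceil(((z_alpha + z_beta) / c) ** 2 + 3)

    print(n_for_correlation(0.25))  # 124 by this approximation; G*Power's exact test gives 123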

Instrumentation
As part of rigorous data collection, the proposal developer also provides detailed information about the actual
survey instruments to be used in the study. Consider the following:

Name the survey instruments used to collect data. Discuss whether you used an instrument designed for this
research, a modified instrument, or an instrument developed by someone else. For example, if you aim
to measure perceptions of stress over the last month, you could use the 10-item Perceived Stress Scale
(PSS) (Cohen, Kamarck, & Mermelstein, 1983) as your stress perceptions instrument in your survey
design. Many survey instruments, including the PSS, can be acquired and used for free as long as you
cite the original source of the instrument. But in some cases, researchers have made the use of their
instruments proprietary, requiring a fee for use. Instruments are increasingly being delivered through a
multitude of online survey products now available (e.g., Qualtrics, Survey Monkey). Although these
products can be costly, they also can be quite helpful for accelerating and improving the survey research
process. For example, researchers can create their own surveys quickly using custom templates and post
them on websites or e-mail them to participants to complete. These software programs facilitate data
collection into organized spreadsheets for data analysis, reducing data entry errors and accelerating
hypothesis testing.
Validity of scores using the instrument. To use an existing instrument, describe the established validity of
scores obtained from past use of the instrument. This means reporting efforts by authors to establish
validity in quantitative research—whether you can draw meaningful and useful inferences from scores
on the instruments. The three traditional forms of validity to look for are (a) content validity (Do the
items measure the content they were intended to measure?), (b) predictive or concurrent validity (Do
scores predict a criterion measure? Do results correlate with other results?), and (c) construct validity
(Do items measure hypothetical constructs or concepts?). In more recent studies, construct validity has
become the overriding objective in validity, and it has focused on whether the scores serve a useful
purpose and have positive consequences when they are used in practice (Humbley & Zumbo, 1996).
Establishing the validity of the scores in a survey helps researchers to identify whether an instrument
might be a good one to use in survey research. This form of validity is different from identifying the
threats to validity in experimental research, as discussed later in this chapter.
Reliability of scores on the instrument. Also mention whether scores resulting from past use of the
instrument demonstrate acceptable reliability. Reliability in this context refers to the consistency or
repeatability of an instrument. The most important form of reliability for multi-item instruments is the
instrument’s internal consistency—which is the degree to which sets of items on an instrument behave
in the same way. This is important because your instrument scale items should be assessing the same
underlying construct, so these items should have suitable intercorrelations. A scale’s internal consistency
is quantified by a Cronbach’s alpha (α) value that ranges between 0 and 1, with optimal values ranging between .7 and .9 (a short computational sketch appears after this list). For example, the 10-item PSS has excellent internal consistency across many published reports, with the original source publication reporting internal consistency values of α = .84–.86 in three studies (Cohen, Kamarck, & Mermelstein, 1983). It can also be helpful to evaluate a
second form of instrument reliability, its test-retest reliability. This form of reliability concerns whether
the scale is reasonably stable over time with repeated administrations. When you modify an instrument
or combine instruments in a study, the original validity and reliability may not hold for the new
instrument, and it becomes important to establish validity and reliability during data analysis.
Sample items. Include sample items from the instrument so that readers can see the actual items used. In
an appendix to the proposal, attach sample items or the entire instrument (or instruments) used.
Content of instrument. Indicate the major content sections in the instrument, such as the cover letter
(Dillman, 2007, provides a useful list of items to include in cover letters), the items (e.g., demographics,
attitudinal items, behavioral items, factual items), and the closing instructions. Also mention the type of
scales used to measure the items on the instrument, such as continuous scales (e.g., strongly agree to
strongly disagree) and categorical scales (e.g., yes/no, rank from highest to lowest importance).
Pilot testing. Discuss plans for pilot testing or field-testing the survey and provide a rationale for these
plans. This testing is important to establish the content validity of scores on an instrument; to provide
an initial evaluation of the internal consistency of the items; and to improve questions, format, and
instructions. Pilot testing all study materials also provides an opportunity to assess how long the study
will take (and to identify potential concerns with participant fatigue). Indicate the number of people
who will test the instrument and the plans to incorporate their comments into final instrument
revisions.
Administering the survey. For a mailed survey, identify steps for administering the survey and for
following up to ensure a high response rate. Salant and Dillman (1994) suggested a four-phase
administration process (see Dillman, 2007, for a similar three-phase process). The first mail-out is a
short advance-notice letter to all members of the sample, and the second mail-out is the actual mail
survey, distributed about 1 week after the advance-notice letter. The third mail-out consists of a
postcard follow-up sent to all members of the sample 4 to 8 days after the initial questionnaire. The
fourth mail-out, sent to all nonrespondents, consists of a personalized cover letter with a handwritten
signature, the questionnaire, and a preaddressed return envelope with postage. Researchers send this
fourth mail-out 3 weeks after the second mail-out. Thus, in total, the researcher concludes the
administration period 4 weeks after its start, provided the returns meet project objectives.
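
As referenced in the reliability discussion above, internal consistency can be checked directly from raw item scores. The following is a minimal sketch, with hypothetical respondents and items, of the standard Cronbach's alpha formula; it is not tied to any particular instrument.

    import numpy as np

    def cronbach_alpha(items):
        """items: 2-D array with rows = respondents and columns = scale items."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                         # number of items on the scale
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical responses from five respondents to a four-item scale
    scores = [[3, 4, 3, 4], [2, 2, 3, 2], [4, 5, 4, 5], [1, 2, 1, 2], [3, 3, 4, 3]]
    print(round(cronbach_alpha(scores), 2))  # values between .7 and .9 are typically optimal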

Variables in the Study
Although readers of a proposal learn about the variables in purpose statements and research
questions/hypotheses sections, it is useful in the method section to relate the variables to the specific questions
or hypotheses on the instrument. One technique is to relate the variables, the research questions or
hypotheses, and sample items on the survey instrument so that a reader can easily determine how the data
collection connects to the variables and questions/hypotheses. Plan to include a table and a discussion that
cross-reference the variables, the questions or hypotheses, and specific survey items. This procedure is
especially helpful in dissertations in which investigators test large-scale models or multiple hypotheses. Table
8.2 illustrates such a table using hypothetical data.

Data Analysis
In the proposal, present information about the computer programs used and the steps involved in analyzing
the data. Websites contain detailed information about the various statistical analysis computer programs
available. Some of the more frequently used programs are the following:

IBM SPSS Statistics 24 for Windows and Mac (www.spss.com). The SPSS Grad Pack is an affordable,
professional analysis program for students based on the professional version of the program, available
from IBM.
JMP (www.jmp.com). This is a popular software program available from SAS.
Minitab Statistical Software 17 (minitab.com). This is an interactive software statistical package available
from Minitab Inc.
SYSTAT 13 (systatsoftware.com). This is a comprehensive interactive statistical package available from
Systat Software, Inc.
SAS/STAT (sas.com). This is a statistical program with tools as an integral component of the SAS system
of products available from SAS Institute, Inc.
Stata, release 14 (stata.com). This is a data analysis and statistics program available from StataCorp.

Online programs useful in simulating statistical concepts for statistical instruction can also be used, such as the
Rice Virtual Lab in Statistics found at http://onlinestatbook.com/rvls.html, or SAS Simulation Studio for
JMP (www.jmp.com), which harnesses the power of simulation to model and analyze critical operational
systems in such areas as health care, manufacturing, and transportation. The graphical user interface in SAS
Simulation Studio for JMP requires no programming and provides a full set of tools for building, executing,
and analyzing results of simulation models (Creswell & Guetterman, in press).

We recommend the following research tip—presenting data analysis plans as a series of steps so that a reader
can see how one step leads to another:

Step 1. Report information about the number of participants in the sample who did and did not return the
survey. A table with numbers and percentages describing respondents and nonrespondents is a useful tool to
present this information.

Step 2. Discuss the method by which response bias will be determined. Response bias is the effect of
nonresponses on survey estimates (Fowler, 2014). Bias means that if nonrespondents had responded, their
responses would have substantially changed the overall results. Mention the procedures used to check for
response bias, such as wave analysis or a respondent/nonrespondent analysis. In wave analysis, the researcher
examines returns on select items week by week to determine if average responses change (Leslie, 1972). Based
on the assumption that those who return surveys in the final weeks of the response period are nearly all
nonrespondents, if the responses begin to change, a potential exists for response bias. An alternative check for
response bias is to contact a few nonrespondents by phone and determine if their responses differ substantially
from respondents. This constitutes a respondent-nonrespondent check for response bias.
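
To make Step 2 concrete, a wave analysis can be as simple as comparing mean responses across return waves. The following is a minimal sketch with hypothetical data; the variable names and the choice of a t test for the early-versus-late comparison are illustrative, not prescribed by Fowler (2014) or Leslie (1972).

    import numpy as np
    from scipy import stats

    wave = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3])   # week in which each survey was returned
    item = np.array([4.2, 3.9, 4.1, 4.0, 3.8, 4.1, 3.2, 3.0, 3.4])  # responses to a key item

    # Compare the earliest and latest waves; late responders are assumed to
    # resemble nonrespondents, so a large shift suggests potential response bias.
    early, late = item[wave == 1], item[wave == wave.max()]
    t, p = stats.ttest_ind(early, late)
    print(f"early mean = {early.mean():.2f}, late mean = {late.mean():.2f}, p = {p:.3f}")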

Step 3. Discuss a plan to provide a descriptive analysis of data for all independent and dependent variables in
the study. This analysis should indicate the means, standard deviations, and range of scores for these variables.
Identify whether there is missing data (e.g., some participants may not provide responses to some items or
whole scales), and develop plans to report how much missing data is present and whether a strategy will be
implemented to replace missing data (for a review, see Schafer & Graham, 2002).
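
A descriptive analysis of this kind takes only a few lines with standard data analysis tools. The following is a minimal sketch using pandas, with hypothetical variable names and values:

    import numpy as np
    import pandas as pd

    # Hypothetical survey data; NaN marks item nonresponse
    df = pd.DataFrame({
        "overtime_hours": [5, 12, np.nan, 20, 15],
        "burnout_score": [2.1, 3.4, 2.8, np.nan, 3.9],
    })

    # Means, standard deviations, and range of scores for each variable
    print(df.agg(["mean", "std", "min", "max"]))

    # How much data is missing, per variable and overall
    print(df.isna().sum())
    print(f"Rows with any missing value: {df.isna().any(axis=1).mean():.0%}")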

Step 4. If the proposal contains an instrument with multi-item scales or a plan to develop scales, first evaluate
whether it will be necessary to reverse-score items, and then how total scale scores will be calculated. Also
mention reliability checks for the internal consistency of the scales (i.e., the Cronbach alpha statistic).
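
The following is a minimal sketch of Step 4 with hypothetical item names, reverse-scoring a negatively keyed item on a 1-to-5 Likert scale and then computing total scale scores:

    import pandas as pd

    # Hypothetical responses; item3_rev is a negatively keyed (reverse-scored) item
    df = pd.DataFrame({
        "item1": [4, 2, 5],
        "item2": [3, 2, 4],
        "item3_rev": [2, 4, 1],
    })
    scale_min, scale_max = 1, 5
    df["item3"] = (scale_max + scale_min) - df["item3_rev"]  # reverse-score: 5->1, 4->2, ...
    df["scale_total"] = df[["item1", "item2", "item3"]].sum(axis=1)
    print(df)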

Step 5. Identify the statistics and the statistical computer program for testing the major inferential research
questions or hypotheses in the proposed study. The inferential questions or hypotheses relate variables or
compare groups in terms of variables so that inferences can be drawn from the sample to a population. Provide
a rationale for the choice of statistical test and mention the assumptions associated with the statistic. As
shown in Table 8.3, base this choice on the nature of the research question (e.g., relating variables or
comparing groups as the most popular), the number of independent and dependent variables, and the
variables used as covariates (e.g., see Rudestam & Newton, 2014). Further, consider whether the variables will
be measured on an instrument as a continuous score (e.g., age from 18 to 36) or as a categorical score (e.g.,
women = 1, men = 2). Finally, consider whether the scores from the sample, when plotted on a graph, would be normally distributed in a bell-shaped curve or non-normally distributed. There are additional ways to
determine if the scores are normally distributed (see Creswell, 2012). These factors, in combination, enable a
researcher to determine what statistical test will be suited for answering the research question or hypothesis.
In Table 8.3, we show how the factors, in combination, lead to the selection of a number of common
statistical tests. For additional types of statistical tests, readers are referred to statistics methods books, such as
Gravetter and Wallnau (2012).
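
As an illustration of how these factors map onto specific tests, the following minimal sketch (hypothetical data; the test choices follow the general logic of Table 8.3 rather than reproducing it) runs a Pearson correlation for relating two continuous variables and an independent-samples t test for comparing two groups on a continuous outcome:

    import numpy as np
    from scipy import stats

    overtime = np.array([5, 12, 8, 20, 15, 3, 18, 10])               # continuous variable
    burnout = np.array([2.1, 3.4, 2.8, 4.5, 3.9, 1.8, 4.2, 3.0])     # continuous outcome
    r, p = stats.pearsonr(overtime, burnout)                          # relating variables
    print(f"r = {r:.2f}, p = {p:.4f}")

    group = np.array([1, 1, 1, 1, 2, 2, 2, 2])                        # categorical variable
    t, p = stats.ttest_ind(burnout[group == 1], burnout[group == 2])  # comparing groups
    print(f"t = {t:.2f}, p = {p:.4f}")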

Step 6. A final step in the data analysis is to present the results in tables or figures and interpret the results
from the statistical test, discussed in the next section.

Interpreting Results and Writing a Discussion Section
An interpretation in quantitative research means that the researcher draws conclusions from the results for
the research questions, hypotheses, and the larger meaning of the results. This interpretation involves several
steps:

Report how the results addressed the research question or hypothesis. The Publication Manual of the
American Psychological Association (American Psychological Association [APA], 2010) suggests that the
most complete meaning of the results comes from reporting extensive description, statistical significance testing, confidence intervals, and effect sizes. Thus, it is important to clarify the meaning of these last three reports of the results. Statistical significance testing assesses whether the observed scores reflect a pattern other than chance. A statistical test is considered to be significant if the results are unlikely to have occurred by chance, and the null hypothesis of “no effect” can be rejected.
The researcher sets a rejection level of “no effect,” such as p = 0.001, and then assesses whether the test
statistic falls into this level of rejection. Typically results will be summarized as “the analysis of variance
revealed a statistically significant difference between men and women in terms of attitudes toward
banning smoking in restaurants, F(2, 6) = 8.55, p = 0.001.”
Two forms of practical evidence of the results should also be reported: (a) the effect size and (b) the confidence interval. A confidence interval is a range of values (an interval) that describes a level of uncertainty around an estimated observed score; it shows how precise that estimate is likely to be. A confidence interval of 95%, for example, indicates that 95 out of 100 times the observed score will fall within the range of values. An effect size identifies the strength of the conclusions about group differences or the relationships among variables in quantitative studies. It is a descriptive statistic that does not depend on inferences from the data to the true population. The calculation of effect size varies for different statistical tests: it can be used to explain the variance shared between two or more variables or the differences among means for groups. It shows the practical significance of the results apart from inferences being applied to the population (a brief computational sketch of these quantities appears after these interpretation steps).

The final step is to draft a discussion section where you discuss the implications of the results in terms of how they are consistent with, refute, or extend previous related studies in the scientific literature. How do your research findings address gaps in our knowledge base on the topic? It is also important to acknowledge the implications of the findings for practice and for future research in the area, which may involve discussing theoretical and practical consequences of the results. Finally, it is helpful to briefly acknowledge potential limitations of the study and potential alternative explanations for the study findings.
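
The following is a minimal sketch, with hypothetical attitude scores, of computing the three kinds of statistical evidence just described: a significance test, an effect size (Cohen's d, one common choice), and a 95% confidence interval for a mean difference.

    import numpy as np
    from scipy import stats

    group_a = np.array([4.1, 3.8, 4.5, 4.0, 3.9, 4.3])  # hypothetical attitude scores
    group_b = np.array([3.2, 3.5, 3.0, 3.6, 3.3, 3.1])

    t, p = stats.ttest_ind(group_a, group_b)             # statistical significance test
    pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
    d = (group_a.mean() - group_b.mean()) / pooled_sd    # effect size (Cohen's d)

    diff = group_a.mean() - group_b.mean()               # 95% CI for the mean difference
    se = pooled_sd * np.sqrt(1 / len(group_a) + 1 / len(group_b))
    ci = stats.t.interval(0.95, df=len(group_a) + len(group_b) - 2, loc=diff, scale=se)
    print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")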

Example 8.1 is a survey method plan section that illustrates many of the steps just mentioned. This excerpt
(used with permission) comes from a journal article reporting a study of factors affecting student attrition in
one small liberal arts college (Bean & Creswell, 1980, pp. 321–322).

Example 8.1 A Survey Method Plan
