MARK PAGELL
University College Dublin
BRIAN FUGATE
University of Arkansas
Survey research in supply chain management has been and will continue to be an important methodology in advancing theory and practice. However, supply chain scholars have multiple, divergent views regarding what is acceptable in terms of survey design, especially regarding respondents. We build on insights and commentaries provided by JSCM associate editors to develop and share general guidelines we will use during our tenure as editors to consider the rigor of survey research designs. We also outline ways that survey designs for supply chain research can be strengthened. The aim of this editorial was to clearly communicate expectations to the JSCM community, so that authors and reviewers can be more successful in advancing the theory and practice of supply chain management.
range of opinions and a diversity of backgrounds were represented. The four commentaries that accompany this editorial are the outcome of that process and were instrumental in informing our conclusions.

The following sections outline the problem, especially in a supply chain management setting, detail how we will handle survey manuscripts for the remainder of our tenure as editors, and provide suggestions for strengthening survey designs for supply chain research. The aim of this editorial was to clearly communicate our expectations to the JSCM community.

THE PROBLEM(S)

Discussions of survey design often focus on single-respondent bias and the inability of common tests to detect it. This is a serious problem and one that Montabon, Daugherty and Chen (2018) and Krause, Luzzini and Lawson (2018) address in some detail in their commentaries. The potential for bias is an issue for all single-respondent survey research, regardless of research domain, particularly for perceptual measures.

However, supply chain survey researchers frequently use perceptual measures, for several reasons. First, supply chain management research questions often focus on organizations, groups of organizations, or functional areas within an organization, rather than individuals. Latent constructs that are central to the domain, such as power and trust, may be best measured using perceptual reports (Boyer & Swink, 2008; Ernst & Teichert, 1998; Kumar, Stern & Anderson, 1993) of facts, beliefs, motives and activities associated with organizational events and decisions (Huber & Power, 1985).

However, there are many issues associated with the use of perceptual measures of organizational phenomena. As Ketchen, Craighead and Cheng (2018) describe, reporting on organizational phenomena requires respondents to engage in high-level cognitive processes that require them to work at a high level of abstraction, weigh inferences, and engage in prediction, interpretation, and evaluation (Podsakoff & Organ, 1986). Even the most competent respondents can experience perceptual and cognitive limitations that result in response inaccuracies (Huber & Power, 1985), particularly for retrospective reports (Golden, 1992), including imperfect recall of past events and coloring of recollections by their implicit theories and biases (Kahneman & Tversky, 1979). Surveys of respondents' perceptions are thus useful, but potentially seriously flawed (Podsakoff & Organ, 1986).

Although the use of single respondents presents serious issues for survey research at any unit of analysis due to the significant risk of common method bias, it is a particular problem for many supply chain surveys for three related reasons. First, many constructs that are central to supply chain management, such as integration and coordination, are by nature polyadic; they can only be assessed through responses from the multiple sources that are integrating or coordinating. Second, supply chain research often makes inferences about organizations, rather than individuals. An organization is a more complex unit of analysis than an individual; thus, it cannot be assessed by asking individuals about their personal feelings, opinions, or behavior (Phillips, 1981). Because organizations have characteristics that are distinct from the characteristics of individuals, different research methods are required for learning about their behavior (Phillips, 1981). Third, these issues are further exacerbated when the research question focuses on relationships between multiple supply chain members within or across firms. As Roh, Whipple and Boyer (2013) noted, asking a supplier about its customers' perceptions of trust or power is akin to asking women to describe men's perceptions of their health issues. Thus, the single-respondent issue is especially salient for research questions that focus on the perspectives of more than one functional area or firm, as many important supply chain research questions do.

However, Kaufmann and Saw (2013) reported that 87.8% of the survey research published in five leading supply chain management journals¹ between 2006 and 2012 used single respondents to provide perceptual reports of organizational constructs, and Montabon et al. (2018) found that only 23.8% of the articles recently published in the four leading empirical research journals on the SCM Journal List² used multiple sources to report on polyadic constructs.

¹ Journal of Operations Management, Journal of Business Logistics, Journal of Supply Chain Management, Journal of Purchasing and Supply Management, and International Journal of Physical Distribution and Logistics Management.
² Decision Sciences, Journal of Business Logistics, Journal of Operations Management, and Journal of Supply Chain Management.

FIGURE 1
Four Generic Survey Research Designs
(A 2 x 2 layout; the vertical axis is the potential for common method bias, from high at the top to low at the bottom.)
Type 1 (high potential for common method bias): single respondent for all items.
Type 2 (high potential for common method bias): single respondent for all items; some or all polyadic constructs are addressed by a single respondent.
Type 3 (low potential for common method bias): multiple respondents, with independent and dependent variables addressed by different respondents; some or all polyadic constructs are addressed by a single respondent.
Type 4 (low potential for common method bias): multiple respondents, with independent and dependent variables addressed by different respondents; some or all polyadic constructs are addressed by the best respondents.

Figure 1 details the four primary generic survey research designs we see in manuscripts submitted to JSCM. Type 1 survey designs employ a single respondent who provides responses for all items, including both the independent and dependent variables. The constructs in this type of study are monadic, meaning that they focus on a single perspective, such as that of a firm or a department within a firm. For example, a firm's defect rate or the strength of a department's lean practices is a monadic construct. Type 1 research designs are likely to suffer from common method bias, as described below. However, this type of research may be acceptable in certain situations, such as behavioral operations studies that focus on individual decision-makers, research on small and medium enterprises (SMEs) that are supply chain members, or research questions targeting the perceptions of an individual, such as a survey of Fortune 500 CEOs about what they think about when deciding whether to encourage purchasing from China. There may be other situations where a Type 1 research design may be appropriate, as well.
Type 2 survey research designs also employ a single respondent who provides responses for all items, including both the independent and dependent variables. The difference between Type 1 and Type 2 research designs, however, is that Type 2 designs contain some polyadic constructs. A polyadic construct includes relationships, multiple entities, or attributes that cannot be characterized by a single objective description, such as culture, relationship strength, or integration. For example, culture is a set of collective values and beliefs (Hofstede, 1991; Schein, 2010); one person's perception is not adequate to assess a firm's culture because of its collective nature. Similarly, the quality of a buyer–supplier relationship cannot be adequately addressed by just the buyer or the supplier. We use the broad term "polyadic" to encompass a continuum of relationships from dyadic (for example, buyer–supplier relationships) to triadic (for example, the buyer–supplier–supplier triads described by Choi and Wu [2009]) to network relationships. The Type 2 design is the most flawed of the research designs that we see, because it uses single sources to address polyadic constructs. Thus, Type 2 designs are likely to suffer from both common method bias and respondent bias.

Type 3 survey designs use multiple respondents, with the dependent variable responses provided by respondents who are different from those who provide responses to the independent variables. Further, each respondent may address different independent variables. Thus, a Type 3 survey research design reduces the risk of common method bias. However, some or all of the constructs in a Type 3 design are polyadic, but are assessed by a single source. For example, a Type 3 design for a research question related to supplier relationships would have multiple respondents within the buying firm addressing questions about relational trust and its outcomes; however, it would not include responses from the suppliers who constitute the other half of the buyer–supplier relationship. Thus, a Type 3 design is likely to suffer from respondent bias.

Finally, a Type 4 design employs multiple respondents, with the independent and dependent variables addressed by different respondents. It contains some polyadic constructs, which are addressed by appropriate respondents from different sources. For example, Dong, Ju and Fang (2016) selected 100 buyers per industry at each of four national trade shows (400 buyers). Each was asked to identify their counterpart in one of their firm's major suppliers. A total of 191 suppliers were identified and contacted by telephone, with 156 agreeing to participate in an interview. After removal of incomplete responses, there were 141 pairs of responses. This allowed the authors to assess polyadic constructs, including supply chain performance, information sharing, and dynamic adaptation, by obtaining information from both buyers and their paired suppliers. Other independent variables, including role ambiguity, role conflict, and buyer opportunism, were monadic and were thus assessed by only the buyer (role ambiguity and role conflict) or the supplier (buyer opportunism). Thus, because it employs multiple sources, every item in a Type 4 design is addressed by the best informant(s) for that item. This design should have the lowest likelihood of suffering from common method bias or respondent bias problems. However, the practical implications of this type of research are laden with significant costs to both the researchers and the respondents, making it challenging to undertake.
Prior to presenting guidelines and recommendations, we first provide a detailed discussion of issues related to using single respondents to provide perceptual information and a single source to address polyadic constructs, addressing issues of both common method bias and respondent bias. We then outline our expectations for papers that are submitted to JSCM, especially for those that do not use a Type 4 design. Finally, we conclude by describing potential ways to address these issues, providing opportunities to improve research designs when a Type 4 design is not possible.

PROBLEM 1: COMMON METHOD BIAS

True score theory (Lord & Novick, 1968) is based on the premise that every measure has a true, objective score, in contrast to the observed score, which is contaminated (Ketokivi & Schroeder, 2004). There are two sources of contamination. Random error is variability around the "true" mean (Ernst & Teichert, 1998), which can occur when respondents encounter difficulties in making complex organizational judgments (Van Bruggen, Lilien & Kacker, 2002). Although it has an expected value of zero, random error attenuates relationships between variables; thus, it can deflate parameter estimates and lead to errors in inference (Bagozzi, Yi & Phillips, 1991; Ketokivi & Schroeder, 2004; Van Bruggen et al., 2002).

The other source of observed score contamination is systematic error, which affects the measurement of different variables in a similar way (Roh et al., 2013; Van Bruggen et al., 2002). It is the stable variance component of a score, due to the idiosyncratic perspective of individual respondents (Anderson, Zerrillo & Wang, 2016). For example, a respondent who is fundamentally optimistic about the future may consistently select higher scores across variables than one who is fundamentally pessimistic. Sources of systematic error include both individual respondent characteristics and organizational characteristics (see Table 1). Individual sources of bias include the respondent's background and experiences, need for social desirability, implicit theories, halo effects, leniency or acquiescence biases, positive or negative affectivity, and transient mood states (Ernst & Teichert, 1998; MacKenzie & Podsakoff, 2012; Podsakoff, MacKenzie, Lee & Podsakoff, 2003). Organizational sources include the respondent's hierarchical or functional position, length of tenure in the organization, organization size and complexity, breadth of information sources available, and volatility of the internal and external environment (Bagozzi et al., 1991).

Although researchers are interested in the relationship between the true scores for the variables of interest, what they observe is the relationship between measured scores, which includes random and systematic error (Van Bruggen et al., 2002). This error component can be substantial, estimated at comprising well over 50% of the measured score in many cases (Cote & Buckley, 1987; MacKenzie & Podsakoff, 2012; Van Bruggen et al., 2002). Thus, systematic error provides a plausible alternative explanation for an observed relationship between constructs because of its potential correlation with them; the presence of systematic error in single-respondent research can lead to misleading conclusions (Podsakoff et al., 2003). Because systematic error can either inflate or deflate an observed relationship between constructs, it can cause both Type I and Type II errors (MacKenzie & Podsakoff, 2012).

Systematic error is a form of common method bias; one way of looking at single respondents is as the "method" in common method bias (MacKenzie & Podsakoff, 2012). When the same respondent provides ratings of multiple variables, especially both independent and dependent variables, there is a method effect that produces a rival explanation for observed relationships (Podsakoff et al., 2003). The expected value of the correlation between systematic errors increases if the scores for the measured values come from the same respondent (Van Bruggen et al., 2002). Thus, using a single-respondent approach contributes to "insidious errors" that are not usually detectable, but which can nonetheless lead to incorrect inferences (Roh et al., 2013).

In addition, it is impossible to assess the convergent and discriminant validity of a measure when it only has a single respondent (Phillips, 1981). This prevents partitioning the variance between the true score, systematic error, and random error, in order to assess the extent to which they are correlated (Jones, Johnson, Butler & Main, 1983; Ketokivi & Schroeder, 2004). Thus, there is no way to determine the validity of a measure that is only assessed by single respondents; measured scores that are aggregated across multiple respondents will be closer to the true score.

The best way to reduce the effect of systematic error is through using multiple respondents (Ketokivi & Guide, 2015), because they have different personal characteristics, organizational perspectives, and experiences. Thus, Type 1 and 2 research designs should generally be avoided, although Kull, Kotlar and Spring (2018) provide an example of when a Type 1 design might be acceptable.
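The mechanism behind Problem 1 can be summarized in a stylized measurement model. The notation below is ours, offered purely as an illustration of the true-score logic described above, under the classical assumption that random errors are uncorrelated with the true scores and with each other. Each observed score is the sum of a true score, a systematic (method) component, and a random component:

\[
x = \tau_x + \varepsilon^{s}_{x} + \varepsilon^{r}_{x}, \qquad
y = \tau_y + \varepsilon^{s}_{y} + \varepsilon^{r}_{y},
\]
\[
\operatorname{Cov}(x,y) = \operatorname{Cov}(\tau_x,\tau_y) + \operatorname{Cov}(\varepsilon^{s}_{x},\varepsilon^{s}_{y}).
\]

When one respondent rates both x and y, the second term is generally nonzero, so an observed association can appear even when the true scores are unrelated (a Type I error), and a method covariance of the opposite sign can mask a real relationship (a Type II error).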
TABLE 1
Sources of Systematic Error and Respondent Inaccuracy (columns: Description, Effect; rows cover individual respondent characteristics and organizational characteristics, including uncertainty of the external and internal environments and the size of the organization)

PROBLEM 2: RESPONDENT BIAS

Key Informants
A respondent reports on his or her own perceptions, attitudes, and behaviors, whereas an informant reports on the organization, rather than on himself or herself (Anderson, 1987; Van Weele & Van Raaij, 2014). While this may seem like a semantic distinction, it is important to the many supply chain research questions that focus on phenomena that exist at the level of a department, organization, relationship, or network. Informants are asked to report on their perceptions of observed or expected organizational relationships, generalizing to organizational patterns of behavior (Seidler, 1974). Because informants are asked to aggregate and summarize, they do not need to represent all members; it is their perceptions and expertise that are sampled (Seidler, 1974). Informants are chosen because they occupy a role that is expected to make them knowledgeable about the issues of interest (Seidler, 1974).
In its original form, the key informant approach assumed that the researcher developed a personal relationship with informants, working with them for a relatively long period of time (Seidler, 1974). It can be a very reliable approach to this type of research if the researcher is able to find key informants who are representative, reflective, articulate, and personable; the researcher's skills can correct for any informant biases (Seidler, 1974). In survey research, however, researchers do not enter into a long-term personal relationship with key informants, making it challenging to ensure representativeness and unbiased reporting of perceptual assessments.

This raises the important question of whether any single informant can effectively report on a large organization (John & Reve, 1982) or on a polyadic construct, for several reasons. First, key informants are often asked to perform complex tasks involving social judgment (John & Reve, 1982; Phillips, 1981). Even selecting informants at a higher hierarchical level does not necessarily ensure more reliable and valid responses than lower-level managers would provide (Kumar et al., 1993), as described in Krause et al.'s (2018) commentary.

Second, because there is typically only one key informant, their reports are subject to the same types of systematic bias as all single respondents' reports are, including systematically under-reporting or over-reporting certain phenomena due to informants' position, job satisfaction, or other characteristics (Kumar et al., 1993; Phillips, 1981). For example, a CEO would perceive a supply chain issue differently than a second-level executive or line manager (Kumar et al., 1993). A single key informant is still a single respondent (Jones et al., 1983), with all the associated baggage. Even for a construct that is seemingly objective, such as practices, there are always individual biases. Although practices are a core element of supply chain management research (Carter, Kosmol & Kaufmann, 2017), almost all SCM research treats them as though they were objective (Pagell, Klassen, Johnston, Shevchenko & Sharma, 2015); if two respondents disagree on their rating of a practice, it is treated as random error. Yet the cognitive component of the theory of routines suggests that such disagreements can reflect actual differences in practices, in that each respondent interprets what is to be done through his or her own cognitive filters and then proceeds to perform the practice accordingly (e.g., Feldman & Rafaeli, 2002; Parmigiani & Howard-Grenville, 2011). Thus, respondents' cognitive theories are reflected in responses that are systematically biased by their individual interpretation of a practice and how they, themselves, would perform it, which is not necessarily how it is performed by others.

Third, it is unreasonable to expect that any single informant is able to observe the operations of an entire firm and provide crucial information about a broad range of practices and outcomes, even a CEO (Huber & Power, 1985). No one respondent can provide the perspective of a large organization (Bou-Llusar et al., 2016; Boyer & Pagell, 2000). Different managers have access to different information about practices and performance, as well as making different assumptions about the co-occurrence of events (Bou-Llusar et al., 2016). For example, asking a single key informant to provide organization-wide measures for a multidivisional, multinational organization that employs over 40,000 people (Huselid & Becker, 2000) is fraught with the potential for serious reliability problems. The quality of key informant reports is also affected by the types of constructs they are asked to address. As Krause et al. (2018) describe in their commentary, the right key informant can address monadic constructs in their own area of expertise, but no single key informant can provide an unbiased assessment of a polyadic construct, such as integration between functions. Thus, we view the notion of an omniscient, all-knowing key informant as a myth in all but a few very specific situations. As Kull et al. (2018) note, a key informant may be appropriate for a small firm with, say, 43 employees, where the president/owner makes virtually all decisions, and there are no alternative knowledgeable informants (John & Reve, 1982).

Many manuscripts submitted to JSCM justify using a single key informant by citing Kumar et al. (1993). However, they overlook Kumar et al.'s (1993) primary recommendation, which is to use multiple respondents and do so in a manner that does not assume that all respondents perceive the constructs of interest in the same way. Although the decision to use a single informant is sometimes made based on the difficulty of finding multiple competent informants, few researchers have formally evaluated whether this is actually the case (Kumar et al., 1993). We agree with Kumar et al.'s (1993) position, cautioning potential authors to read this article carefully before citing it.

The best solution to these problems is to use multiple respondents, as aggregation of responses into a single composite score helps address systematic error; the group judgment will have a smaller variance than the individual estimates (Van Bruggen et al., 2002). Having more respondents also reduces random error through averaging (Jones et al., 1983) and provides the opportunity to analyze the impact of various sources of error, in order to determine how to correct for it (Van Bruggen et al., 2002). However, the use of multiple respondents is not without its costs and challenges, as described by Montabon et al. (2018) and Krause et al. (2018), and single-respondent designs still have a place in JSCM.
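The variance-reduction argument can be sketched formally; the expression below is our illustration, based on standard results for the variance of a mean rather than a formula taken from the papers cited above. If k informants rate the same construct and their errors have common variance \(\sigma^{2}_{\varepsilon}\) and pairwise correlation \(\rho\), the error variance of the averaged score is

\[
\operatorname{Var}\!\left(\frac{1}{k}\sum_{j=1}^{k}\varepsilon_{j}\right)
= \frac{\sigma^{2}_{\varepsilon}}{k}\bigl(1 + (k-1)\rho\bigr).
\]

Purely idiosyncratic error (\(\rho = 0\)) shrinks at the rate 1/k, whereas error that is shared across informants (\(\rho > 0\)), for example because they occupy similar roles, never averages away completely. This is one reason for seeking respondents with different personal characteristics, organizational perspectives, and experiences rather than simply more respondents of the same kind.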
Source-Spanning Research
As research interest in supply chain management has increased, so has the need to do research that crosses functional and organizational borders (Roh et al., 2013), in order to get a more complete view of complex phenomena (Kaufmann & Saw, 2013). The assumption that a single source (Ketchen et al., 2018) can provide valid responses to items assessing aspects of a polyadic construct, such as a supply chain relationship, is another serious and often overlooked problem, in addition to the serious problems associated with using single respondents. Yet, "the vast majority of empirical supply chain management research examines multi-stakeholder constructs using data from only one side of the supply chain relationship" (Roh et al., 2013, p. 712).

The problems with single-source research are well known, as described by Ketchen et al. (2018). Most supply chain management researchers are painfully aware of the challenge of effectively executing research that avoids respondent biases, for several reasons. First are the "extraordinarily difficult" (p. 712) challenges associated with collecting valid, reliable data from multiple sources (Roh et al., 2013), such as all three members of a triad or from a network. For example, Montabon et al. (2018) described the challenges of finding equally qualified respondents on both sides of a dyadic relationship. The challenges grow exponentially when we extend this from a dyad to a triad or a network. Further, as Krause et al. (2018) point out, multiple source data that are contradictory add ambiguity to the data and uncertainty to the findings. Other challenges include what to do with multiple source data with missing informants (Bou-Llusar et al., 2016). For example, Palmatier, Scheer and Steenkamp's (2007) study of loyalty found that some of the suppliers' salespeople left their organization prior to completion of the longitudinal study. Partial dyadic data cannot be included in matched pairs for data analysis, yet it seems inappropriate and potentially biasing to systematically discard unmatched dyadic data (Svensson, 2006).

Despite the real and present challenges of collecting multiple sources, however, many SCM research questions cross internal functional or organizational boundaries to explore how multiple actors interact or how their individual practices integrate into a supply chain outcome. Using a single source to capture these phenomena would mean ignoring their cognitive element or assuming that respondents from both sides of a dyad would interpret phenomena in the same way (Kaufmann & Saw, 2013). Ketchen et al.'s commentary (2018) likens using a single respondent to study supply chain relationships to a marriage counselor asking questions about a marriage to only one spouse. Asking purchasing managers how decisions are made in the operations function, or asking a buying firm about its supplier's perceptions and how they affect outcomes, is incomplete and provides a distorted view of the relationship (Anderson et al., 2016). "Presuming that one party mirrors the other is potentially erroneous" (Roh et al., 2013, p. 713).

We posit that many single-source studies, which implicitly assume that the cognitions of the respondents are objective and stable across actors, have limited our understanding of supply chain management. This is a missed opportunity. Kaplan (2011) notes that an increased emphasis on the cognitive element in strategy research has significantly improved the field's understanding. In the supply chain management domain, recent research has studied buyer–supplier relationship asymmetries (e.g., Villena & Craighead, 2017) and used these differences to explore relationships across firms, allowing the development of important new insights.

Thus, in addition to having different respondents address independent and dependent variables, the use of multiple sources is also an important part of effective research design for research involving polyadic constructs, in order to avoid respondent bias. For example, supply chain trust depends on both the buyers' and sellers' perspective of their dyadic relationship (Svensson, 2006). Even a key executive in charge of a portfolio of relationships will only be well informed about the trust perceptions of his or her organization's closest collaborators (Svensson, 2006). Thus, to truly understand polyadic constructs, multiple sources are needed.

John and Reve (1982) found that the key informants from both sides of a dyad were able to provide reliable and valid data about structural characteristics of the relationship. However, data on what they called sentiments variables were not comparable; there were real differences in perceptions across the dyad, which they attributed to key informants' inability to make the complex social judgments needed to estimate attitudinal scores at the organization level. Anderson et al. (2016) reported similar findings, noting significant agreement across dyads about structural relationship properties, such as formalization and centralization of decision-making, but lack of agreement on scores for dyadic sentiments, including domain consensus and accomplishments from the relationship. Based on their empirical findings in a similar study, Roh et al. (2013) recommended that research that uses a single source must either be positioned so that the research question is targeted at only one side of a relationship or provide an explicit theoretical, practical, and empirical rationale for the validity of using this design to explain a polyadic construct.
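One practical implication of these findings is that researchers who collect paired reports can check how far the two sides actually agree before aggregating them. As a simple illustration (the notation and the uniform-response benchmark are ours, not a procedure prescribed by the studies cited above), for an item rated on an A-point scale by both members of a dyad, compare the observed variance of the paired ratings with the variance expected under purely random responding:

\[
r_{d} = 1 - \frac{s_{d}^{2}}{\sigma_{EU}^{2}}, \qquad \sigma_{EU}^{2} = \frac{A^{2}-1}{12},
\]

where \(s_{d}^{2}\) is the variance of the two informants' ratings on the item. Values near 1 indicate agreement; values near 0 indicate no more agreement than chance. Low agreement on sentiment items such as trust signals that the construct should be modeled from both perspectives rather than averaged into a single score.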
Although there have been numerous calls for research that examines both sides of a dyadic relationship, there is "a paucity of such research" (Roh et al., 2013). Worse, much of what is presented as dyadic supply chain management research actually asks a single respondent about the other side's perceptions. Items such as "Our customers value their relationship with our firm" or "We provide sufficient information to our suppliers" are far too common. Even in SCM research that claims to capture a dyad, the emphasis is often primarily on either the buyer's or seller's perspective (Svensson, 2006).

Despite the limitations of single-source survey research (Ketchen et al., 2018), we fully recognize the serious challenges faced when executing survey research that captures multiple sources of rich, appropriate data (Krause et al., 2018; Montabon et al., 2018), especially when the phenomena of interest are polyadic in nature. Yet we believe that empirical survey research in SCM should evolve toward the next stage of maturity and empirical rigor. Accordingly, we present guidelines for researchers, authors, and reviewers to consider in both the design and communication of empirical SCM research.

GUIDELINES

The following sections detail the general guidelines we will use during our tenure as editors and outline ways that survey designs for supply chain research can be strengthened. We position these as guidelines, not rules, because of our belief that design is both situational and a blend of art and science.

In developing our perspective, we built upon the thoughtful commentaries provided by our AEs. In inviting pairs of AEs (who were invited to add a third person to their team), we intentionally selected a broad range of opinions about these issues. The results are interesting and informative, as well as representative of the range of perspectives of authors who submit to JSCM. Even the commentaries that are most supportive of using a single key informant (e.g., Krause et al., 2018 and Montabon et al., 2018) suggest this design is problematic, and Ketchen et al. (2018) state that using a single source to assess a polyadic construct is never appropriate. Using the theoretical foundation and research question to drive the research design is a key component of Ketchen et al.'s (2018) theoretical calibration, Krause et al.'s (2018) focus on alignment, and Montabon et al.'s (2018) standards. Kull et al.'s (2018) commentary on SMEs provides an example of the sort of justification we expect of authors who are making the point that a single respondent or source is the best choice for a certain context.

In general, however, the best way to deal with common method bias is to design the research so that it includes multiple respondents (Ketokivi & Schroeder, 2004; MacKenzie & Podsakoff, 2012). Similarly, the best way to understand a polyadic construct is to collect data from all of the involved sources. However, practical considerations sometimes preclude these approaches. We provide a brief overview of some alternatives that may help researchers to address these issues. In each case, however, the researchers are responsible for justifying how their approaches adequately address the problems described above.

Use of an Appropriate Design
Consistent with Ketchen et al.'s and Krause et al.'s (2018) commentaries, we believe that theory should ultimately drive empirical research and that the specific research question should determine the best design for a study. For example, consider research on supply chain trust. At the individual level of analysis, behavioral research might focus on an individual buyer's trust in a supplier and how it impacts the buyer's decision-making. A Type 1 design would be the best choice, as both the independent and dependent variables are a single manager's perceptions. In another example, at the level of an individual firm, consider a research question that focuses on trust between the marketing and manufacturing functions within a firm and how it impacts internal integration. For this example, a Type 4 design, using respondents from both the marketing and manufacturing functions, would be appropriate. Similarly, trust is often a critical construct in research on relationships that cross organizational boundaries (typically buyers and suppliers); this type of research also requires a Type 4 design, with responses from both buyers and suppliers.

Type 1 designs assume that a key informant can address all items knowledgeably, but are still vulnerable to common method bias. Therefore, we argue that a Type 1 design is rarely appropriate. However, there are some exceptions, including research that explores individual decision-making within a supply chain or settings where only monadic constructs that are under the respondents' control are addressed.

Kull et al.'s (2018) commentary argues for the acceptability of a Type 1 design when studying small and medium enterprises (SMEs). They make the point that it is much more likely that a single key informant exists in a very small firm, where top managers are asked to wear many hats. In addition, constructs such as internal integration or coordination across functions, which would be polyadic in a larger firm, are likely monadic in a small firm, where a single decision-maker is responsible for many supply chain functions. In addition, secondary data on small firms are unlikely to exist. Kull et al. (2018) also note the prevalence of SMEs in supply chains, making the point that, if we ignore SMEs because of our inability to obtain multiple respondents or sources, we are ignoring much of what goes on in real supply chains. These are compelling arguments for the use of a Type 1 research design. We highlight this as the sort of justification that we expect from authors submitting papers employing a Type 1 design. Although it is the authors' responsibility to justify the use of this design, we are open to strong arguments and logical support for their choice of design.
Moving forward, we will generally anticipate a Type 4 design for research questions that address polyadic constructs. If the research question crosses functional or organizational boundaries, a multiple source approach will generally be needed. However, Type 1, 2, and 3 designs may still sometimes be appropriate, with some qualifications. Although it is impossible to assess the validity of responses using these designs, researchers may be able to justify their use. Montabon et al. (2018), while acknowledging that single respondents or sources are sometimes unavoidable, describe ways to minimize the associated biases through mixed methods, triangulation with secondary data, and multiple source subsamples. Similarly, Krause et al. (2018) recommend pretests of all types and additional sources of data for polyadic constructs. They note the importance of establishing measurement equivalence for constructs that cross boundaries in their use.

Further, for initial research on a groundbreaking topic, a Type 1 or 2 approach may be justifiable as being the only way to take a first step. We are more likely to make exceptions to the need for multiple respondents or sources when the research question is novel and important, the level of analysis is local to the respondent, or the dependent and a majority of the independent variables are monadic constructs that are measured objectively. Consistent with the recommendations of Krause et al.'s (2018) and Ketchen et al.'s (2018) commentaries and with our focus on being flexible about unusual situations, we are more likely to make exceptions for research in especially novel contexts or hard-to-reach organizations.

In conclusion, a Type 4 design is the most appropriate design for many supply chain management research questions. Type 1 designs have a limited place, mainly for behavioral questions with dependent variables that measure an individual's perceptions or in contexts like SMEs, where getting additional data would be difficult or impossible. Type 2 designs, while the most common in our experience, are also the least appropriate.

Finally, we remind authors of the necessity to justify their research design, as described by Montabon et al. (2018). It is incumbent upon researchers who employ single respondents to thoroughly justify this choice, based on theory, empirical evidence, and context. Authors need to show why theirs is a special case and that the issues we have outlined should be overlooked. This must be based on theory and the state of knowledge in the literature, not only by citing other papers that have used a similar design. Table 2 outlines the sort of thought process that we, as co-editors, go through when evaluating submissions, providing a guideline to when researchers will need to put substantial effort into justification of their research design.

TABLE 2
Editorial Thought Process About Using Single Respondents or Sources

Design, Not Control
There are two approaches to dealing with the method bias that is inherent in the use of single respondents (MacKenzie & Podsakoff, 2012). The first is to minimize the effect through careful research design, while the second is to control for common method bias after the data have been collected. The best approach is generally to avoid the use of single respondents by dealing with these issues during the design phase of the research. However, if the use of single respondents is unavoidable, Harman's test is weak and not especially useful. Other statistical methods to control common method bias, such as partial correlation (Podsakoff & Organ, 1986), are also generally problematic and only indicate if a problem exists; they do nothing to fix it (Ketokivi & Guide, 2015). Rather, design approaches that obtain the independent and dependent variable measures from different sources, or other types of statistical approaches, are preferable. However, the use of multiple respondents is almost always superior (MacKenzie & Podsakoff, 2012), as "no simple statistical procedure adequately eliminates the problem of same source variance" (Podsakoff & Organ, 1986, p. 538).
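To see why such post hoc statistical controls are, at best, diagnostic, consider the partial-correlation approach in a stylized form; the notation is ours and this is an illustration rather than the exact procedure specified by the cited authors. A presumed method factor m, such as a social desirability or marker scale, is partialled out of the focal correlation:

\[
r_{xy\cdot m} = \frac{r_{xy} - r_{xm}\,r_{ym}}{\sqrt{\bigl(1-r_{xm}^{2}\bigr)\bigl(1-r_{ym}^{2}\bigr)}}.
\]

The adjustment is only as good as the proxy: if m captures little of the true method effect, bias remains; if m also shares substantive variance with x or y, real effects are partialled away. Either way, the correction cannot recover the unbiased relationship, which is why separating sources at the design stage is preferable.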
If a single key informant design cannot be avoided, then careful selection of the key informants is a must (Montabon et al., 2018). Some biases and sources of respondent inaccuracies are related to individual roles held by respondents in their organization, including both perceptual biases and knowledge of specific information (Huber & Power, 1985). There may also be cognitive biases related to position. In the human resource management arena, Huselid and Becker (2000) describe the difference in expected responses for a survey asking for assessment of organization-wide HR practices sent to vice presidents of HR versus a group whose titles include vice presidents for training and development, employee staffing, labor relations, or compensation and benefits. Given that managerial experiences and roles influence their cognitions, mixing managers with many different roles or from many different industries will conflate random and systematic bias. Single-industry studies, or studies which can control for industry via large subsamples, where all of the respondents have the same responsibilities, are preferable to multiple-industry studies with a mix of managerial responsibilities. Similarly, all respondents should have about the same level of experience.

Podsakoff et al. (2003) suggest several alternative measurement design approaches to help minimize biases associated with a single respondent. Temporal, proximal, or psychological separation of measurement of the dependent and independent variables can help separate the independent and dependent variables in the minds of the respondents. Temporal separation inserts a time lag between measurement of independent and dependent variables from the same respondent. With proximal separation, respondents assess the independent and dependent variables under different conditions, for example using different media (face-to-face interview, paper-and-pencil survey, and online survey) or in different locations (Podsakoff & Organ, 1986). Psychological separation can be achieved through creation of a cover story that makes it appear that the measurement of the independent variables is not related to measurement of the dependent variables. Separation through one or more of these means reduces biases in the retrieval stage of the response process, eliminating the saliency of any contextually provided retrieval cues (Podsakoff et al., 2003). Thus, it reduces respondents' ability and motivation to use their previous responses to fill in gaps or infer missing details. Further, it reduces biases in the response reporting and editing stages through reducing the consistency motif and demand characteristics (Table 1). While separation approaches will not eliminate the common method bias associated with the use of a single respondent, they may help to reduce it.

If there is no alternative to using a single respondent, then it is important to ask questions that will reliably obtain the required information, including alignment of the level of questions with the level that the respondents can understand, ensuring that respondents have appropriate experience to link key terms to relevant concepts, refraining from asking respondents about their motives, avoiding complex or abstract questions without clear examples, using clear, concise language, and only asking about information that is within respondents' span of control (MacKenzie & Podsakoff, 2012).
REFERENCES

Blindenbach-Driessen, F., van Dalen, J., & van den Ende, J. (2010). Subjective performance assessment of innovation projects. Journal of Product Innovation Management, 27, 572–592.
Bou-Llusar, J. C., Beltrán-Martín, I., Roca-Puig, V., & Escrig-Tena, A. B. (2016). Single- and multiple-informant research designs to examine the human resource management–performance relationship. British Journal of Management, 27, 646–668.
Boyer, K. K., & Pagell, M. (2000). Measurement issues in empirical research: Improving measures of operations strategy and advanced manufacturing technology. Journal of Operations Management, 18, 361–374.
Boyer, K. K., & Swink, M. L. (2008). Empirical elephants — Why multiple methods are essential to quality research in operations and supply management. Journal of Operations Management, 26, 337–348.
Carter, C. R., Kosmol, T., & Kaufmann, L. (2017). Toward a supply chain practice view. Journal of Supply Chain Management, 53, 114–122.
Choi, T. Y., & Wu, Z. (2009). Triads in supply networks: Theorizing buyer–supplier–supplier relationships. Journal of Supply Chain Management, 45, 8–25.
Cote, J. A., & Buckley, M. R. (1987). Estimating trait, method and error variance: Generalizing across 70 construct validation studies. Journal of Marketing Research, 24, 315–318.
Dong, M. C., Ju, M., & Fang, Y. (2016). Role hazard between supply chain partners in an institutionally fragmented market. Journal of Operations Management, 46, 5–18.
Ernst, H., & Teichert, T. (1998). The R&D/marketing interface and single informant bias in new product development research: An illustration of a benchmarking case study. Technovation, 18, 721–739.
Feldman, M. S., & Rafaeli, A. (2002). Organizational routines as sources of connections and understandings. Journal of Management Studies, 39, 309–331.
From the Editors (2011). Publishing in Academy of Management Journal – Part II: Research design. Academy of Management Journal, 54, 657–666.
Golden, B. R. (1992). The past is the past – or is it? The use of retrospective accounts as indicators of past strategy. Academy of Management Journal, 35, 848–860.
Hofstede, G. (1991). Cultures and organizations: Software of the mind. Maidenhead, UK: McGraw-Hill.
Huber, G. P., & Power, D. J. (1985). Retrospective reports of strategic-level managers: Guidelines for increasing their accuracy. Strategic Management Journal, 6, 171–185.
Huselid, M. A., & Becker, B. E. (2000). Comment on "Measurement error in research on human resources and firm performance: How much error is there and how does it influence effect size estimates?" by Gerhart, Wright, McMahan and Snell. Personnel Psychology, 53, 835–854.
John, G., & Reve, T. (1982). The reliability and validity of key informant data from dyadic relationships in a marketing channel. Journal of Marketing Research, 19, 517–524.
Jones, A. P., Johnson, L. A., Butler, M. C., & Main, D. (1983). Apples and oranges: An empirical comparison of commonly used indices of interrater agreement. Academy of Management Journal, 26, 507–517.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291.
Kaplan, S. (2011). Research on cognition and strategy: Reflections on two decades of progress and a look to the future. Journal of Management Studies, 48, 665–695.
Kaufmann, L., & Saw, A. A. (2013). Use of a multiple-informant approach in supply chain management research. International Journal of Physical Distribution & Logistics Management, 44, 511–527.
Ketchen, D., Craighead, C., & Cheng, L. (2018). Achieving research design excellence through the pursuit of perfection: Toward strong theoretical calibration. Journal of Supply Chain Management, 54, 16–22.
Ketokivi, M., & Guide, D. (2015). Notes from the editors: Redefining some methodological criteria for the journal. Journal of Operations Management, 37, v–viii.
Ketokivi, M., & Schroeder, R. G. (2004). Perceptual measures of performance: Fact or fiction? Journal of Operations Management, 22, 247–264.
Krause, D., Luzzini, D., & Lawson, B. (2018). Building the case for a single informant in supply chain management survey research. Journal of Supply Chain Management, 54, 42–50.
Kull, T. J., Kotlar, J., & Spring, M. (2018). Small and medium enterprise research in supply chain management: The case for single-respondent research designs. Journal of Supply Chain Management, 54, 23–34.
Kumar, N., Stern, L. W., & Anderson, J. C. (1993). Conducting interorganizational research using key informants. Academy of Management Journal, 36, 1633–1651.
Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Reading, MA: Addison-Wesley.
MacKenzie, S. B., & Podsakoff, P. M. (2012). Common method bias in marketing: Causes, mechanisms and procedural remedies. Journal of Retailing, 88, 542–555.
McGrath, J. E. (1981). Dilemmatics: The study of research choices and dilemmas. American Behavioral Scientist, 25, 179–210.
Montabon, F., Daugherty, P., & Chen, H. (2018). Setting standards for single respondent survey design. Journal of Supply Chain Management, 54, 35–41.
Pagell, M., Klassen, R., Johnston, D., Shevchenko, A., & Sharma, S. (2015). Are safety and operational effectiveness contradictory requirements: The roles of routines and relational coordination. Journal of Operations Management, 36, 1–14.
Palmatier, R. W., Scheer, L. K., & Steenkamp, J.-B. E. M. (2007). Customer loyalty to whom? Managing the benefits and risks of salesperson-owned loyalty. Journal of Marketing Research, 44, 185–199.
Parmigiani, A., & Howard-Grenville, J. (2011). Routines revisited: Exploring the capabilities and practice perspectives. The Academy of Management Annals, 5, 413–453.
Phillips, L. W. (1981). Assessing measurement error in key informant reports: A methodological note on organizational analysis in marketing. Journal of Marketing Research, 18, 395–415.
Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88, 879–903.
Podsakoff, P. M., & Organ, D. W. (1986). Self-reports in organizational research: Problems and prospects. Journal of Management, 12, 531–544.
Roh, J. A., Whipple, J. M., & Boyer, K. K. (2013). The effect of single rater bias in multi-stakeholder research: A methodological evaluation of buyer–supplier relationships. Production and Operations Management, 22, 711–725.
Schein, E. H. (2010). Organizational culture and leadership. San Francisco, CA: Jossey-Bass.
Seidler, J. (1974). On using informants: A technique for collecting quantitative data and controlling measurement error in organization analysis. American Sociological Review, 39, 816–831.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.
Svensson, G. (2006). Multiple informants and asymmetric interactions of mutual trust in dyadic business relationships. European Business Review, 18, 132–152.
Van Bruggen, G. H., Lilien, G. L., & Kacker, M. (2002). Informants in organizational marketing research: Why use multiple informants and how to aggregate respondents. Journal of Marketing Research, 39, 469–478.
Van Weele, A. J., & Van Raaij, E. M. (2014). The future of purchasing and supply management research: About relevance and rigor. Journal of Supply Chain Management, 50, 56–72.
Villena, V. H., & Craighead, C. W. (2017). On the same page? How asymmetric buyer–supplier relationships affect opportunism and performance. Production and Operations Management, 26, 491–506.