
MIAMI: development and validation of a revised measure of academic misconduct

Abstract

Academic misconduct remains a perennial concern in tertiary education around the globe. Research intended to explain this phenomenon has been conducted for almost 100 years. One of the most cited researchers is Donald McCabe, whose work was rooted in a survey instrument he developed in the late 1980s and distributed to more than 100,000 students over the following two decades. Recognizing the need to continue to understand academic misconduct in the 21st-century context, the present study updated and validated the original McCabe instrument as an inventory of academic misconduct behaviors and derived psychometrically sound factors from it, enabling researchers to examine the predictors of these behaviors. Specifically, our updated instrument (named MIAMI: McCabe/ICAI Academic Misconduct Inventory) was shown to have construct validity through associations with the following predictors in a survey of tertiary students (n = 2329): academic integrity climate, peer norms, moral attitudes, and achievement goal structures. The updated survey will facilitate further research that will advance our understanding of academic misconduct and test the efficacy of interventions designed to promote academic integrity.

Introduction

While studies of dishonesty and character date back to the early 20th century (e.g., Hartshorne and May 1928), large-scale, multi-institutional research on the prevalence of academic misconduct at the college or university level did not emerge until the 1960s. Goldsen et al. (1960) surveyed students enrolled in 11 U.S. universities about specific academic misconduct behaviors, and that study influenced Bowers (1964), who then studied the integrity values, attitudes and behaviors of 5000 students enrolled at 99 different U.S. colleges and universities. Bowers (1964) found that at least half of the students admitted to engaging in some type of cheating behavior while in college despite the fact that the majority of them agreed that cheating was “wrong on moral grounds” (p. 194). After Bowers’ study in 1964, there was limited multi-institutional research until the 1990s when Davis et al. (1992) and Davis and Ludvigson (1995) simply asked students if they had “cheated” rather than asking about specific behaviors. Their work nevertheless illuminated possible determinants of academic misconduct, such as grades, pressure, stress, and a lack of internal moral fortitude.

Donald McCabe was the most prolific researcher to use survey methods to study self-reported cheating and its determinants. McCabe and Treviño (1993) adapted Bowers’ survey to assess 6,096 students enrolled at 31 colleges and universities, and McCabe continued this line of research for 25 years, work that was summarized in McCabe et al. (2012). McCabe and colleagues surveyed over 100,000 students about academic misconduct in college, and McCabe’s survey has been the most cited instrument among academic integrity researchers in North America and globally (with over 5000 citations in Google Scholar). It has been widely adapted and modernized to fit the research questions being investigated (e.g., Stephens and Gehlbach 2007). However, previous adaptations have generally been made on an ad-hoc basis to suit the needs of particular researchers (e.g., Harris et al. 2020). Thirty years of survey research inspired by McCabe et al. (2012) has enhanced our knowledge and understanding of academic misconduct; however, McCabe’s large-scale surveys did not employ advanced statistical analysis or undergo empirical validation. Furthermore, the development of those instruments was based on practical observation rather than grounded in behavioral science theories.

In the present study, we updated, standardized, and validated the MIAMI (McCabe/ICAI Academic Misconduct Inventory) for use by researchers and practitioners. This project was conducted as a function of the ICAI (International Center for Academic Integrity) Research Committee with the support of ICAI leaders and membership to serve ICAI’s mission of supporting academic integrity worldwide. McCabe, as ICAI’s founder, requested that his research agenda continue after his passing, and this revision serves to keep his legacy alive. McCabe’s version of the survey was not named. MIAMI is a new name, referring to the misconduct scale described in the present study, which was named to recognize both Don McCabe and the collaboration among members of the ICAI.

As detailed below, we established construct validity by measuring concurrent relationships with theoretically relevant contextual and individual variables investigated by McCabe’s research team. In the following literature review, we present empirical findings regarding the key behaviors associated with academic misconduct and present hypotheses that we test to replicate these findings.

Academic misconduct behaviors

Researchers have used many different strategies to study academic misconduct. Most commonly, they create an inventory of cheating behaviors and ask students to indicate any in which they have engaged. Based on Bowers (1964), McCabe and Treviño (1993) identified and measured twelve academic misconduct behaviors. The original items were treated as a single factor and assessed for reliability using only Cronbach’s alpha (.79); McCabe treated these items as an inventory rather than exploring or validating their factor structure. The response scales have varied across studies (e.g., Jordan 2001; Rettinger and Kramer 2009), and the responses are often aggregated into a binary “cheater”/“non-cheater” variable or a simple sum of the categories endorsed (Jordan 2001).

While previous studies have been useful in both testing theory and conducting institutional assessments, they generally make certain underlying assumptions that have not been tested. For example, all behaviors are assumed to be psychologically associated and derived from similar motivations, so that a simple sum reflects a single underlying construct. Previous research using McCabe’s inventory has therefore taken two different psychometric approaches: first, that all misconduct behavior is fundamentally associated with a single underlying factor (Jordan 2001), and second, that there are distinct categories of behavior based on variables such as context (exam, assignment, project), direction of information exchange (e.g., giving vs. receiving information, as in Rettinger and Kramer 2009), or type of behavior (cheating, plagiarism, facilitation, etc.; Pavela 1997), and that those categories, while related, are fundamentally independent. We propose to test these assumptions by examining the factor structure of the academic misconduct behavior items. Both single-factor and multiple-factor models have been suggested in the literature (Rettinger and Kramer 2009); in the present study we provide evidence to distinguish these theoretical possibilities.

Concurrent validity measures

In any effort to validate a novel measure of a psychological construct, it is both necessary and logical to include constructs that are theoretically related to the focal construct to provide evidence of concurrent validity. In our study, we select four well-researched covariates of academic misconduct: perceptions of academic integrity climate, peer norms, moral attitudes, and academic motivation. These variables represent a subset of possible predictors, emphasizing well-studied, stable psychological constructs that can be influenced by institutional actions. We provide a brief literature review on each in the following paragraphs.

Academic integrity climate

Academic integrity climate is the perception that an institutional culture supports and upholds academic integrity standards. Climate also reflects the extent to which students believe that their institution has a clearly communicated and fair process for addressing violations of those standards. Previous research has shown that a variety of formal and informal climate factors can either encourage or discourage academic integrity (McCabe et al. 2012). Informal student factors include students’ knowledge and understanding of academic integrity, their acceptance and understanding of academic integrity policy, their attitudes towards cheating, and their beliefs about peers’ behavior (Jordan 2001; Bertram Gallant and Drinan 2006; Young et al. 2018). Informal faculty factors include perceived faculty support for and communication of academic integrity, as well as faculty knowledge of and support for the academic integrity policy (McCabe and Trevino 1993; Beasley 2014; Young et al. 2018). Formal factors include the presence of an honor code or academic integrity policy and moral reminders/pledges of integrity. Together, these research findings suggest that self-reported academic misconduct is less likely when there is an honor code, when students understand and are reminded about it, and when faculty support it (Bing et al. 2012; McCabe and Trevino 1993; Whitley 1998; McCabe and Pavela 2004; Burrus et al. 2007; O’Neill and Pfeiffer 2012; Tatum 2022). Students are also less likely to engage in academic misconduct when they believe faculty will report them, that they will face sanctions, and that the penalties are severe (McCabe and Trevino 1993; Boehm et al. 2009; O’Neill and Pfeiffer 2012; Young et al. 2018). Therefore, we expected that a stronger academic integrity climate would be associated with fewer reported misconduct behaviors.

Perception of peer norms

Students’ misconduct behavior is influenced by their beliefs about their peers’ attitudes and behaviors. We propose that peer attitudes and behaviors are separate but related constructs that can have independent and powerful effects on students’ own attitudes and behavior. Regarding attitudes, Bowers (1964) showed that students who believe their peers strongly disapprove of cheating are almost three times less likely to report cheating themselves (26%) than students who believe that peer disapproval is very weak (71%). McCabe et al. (2012) reported a 50% decrease between the same groups in their own data.

Student behaviors are also influenced by their peers' behavior (Daumiller and Janke 2020; Zhao et al. 2022). Students who report witnessing more misconduct amongst their peers engage in more misconduct themselves (Haines et al. 1986; Jordan 2001; McCabe and Treviño 1993). Longitudinal evidence suggests that the mere presence of peers who have cheated in the past increases an individual student’s chances of cheating in the future (Carrell et al. 2008). Experimental evidence also indicates that students believe that seeing others cheat increases the likelihood of cheating (Rettinger and Kramer 2009). This research refers to actual observation of cheating, rather than diffuse beliefs about what others do (O’Rourke et al. 2010), and is measured by asking participants about their “direct knowledge” of academic misconduct. Therefore, we expected that a valid measure of academic misconduct would be positively associated with knowledge that one’s peers engage in misconduct and negatively associated with an attitude that one’s peers would disapprove of the behavior.

Moral attitudes

Students’ attitudes toward academic misconduct have been widely studied over the past six decades (Bowers 1964; DeVries and Ajzen 1971). To date, two types of attitudes have been the subject of much of the theoretical and empirical literature: students’ judgments or beliefs concerning the valence or morality of cheating behavior, and their tendency to disengage or “neutralize” personal responsibility for engaging in it. Numerous studies have demonstrated that participants’ attitudes toward cheating are reliable (and often strong) predictors of their intentions to cheat and self-reported cheating (Beck and Ajzen 1991; Harding et al. 2007; Hendy and Montargot 2019). Results have also consistently shown a significant negative correlation between students’ belief that cheating is bad (serious, wrong, etc.) and their self-reported cheating behavior (Bushway and Nash 1977; Whitley 1998).

Equally consistent are the positive relations between academic misconduct and the tendency to deploy “mechanisms of moral disengagement” (Bandura 2011) or “techniques of neutralization” (Sykes and Matza 1957) to reduce or forestall self-recriminations (or blame from others) for engaging in dishonest behavior. Regardless of the terminology and instrument employed, studies have consistently demonstrated strong positive associations between moral disengagement/neutralization and academic misconduct among secondary (Evans and Craig 1990; Stephens 2018; Stephens and Gehlbach 2007; Zito and McQuillan 2010) and tertiary students (Diekhoff et al. 1996; Farnese et al. 2011; Haines et al. 1986; Michaels and Miethe 1989; O’Rourke et al. 2010; Pulvers and Diekhoff 1999).

In the present study, we anticipated that students who view misconduct more positively would be more likely to self-report engaging in such conduct. Further, we expected neutralizing attitudes to become stronger as self-reported misconduct increased. Finally, we treated these variables as two separate measures as in Stephens (2018). Given the importance and specificity of these predictions, associations between moral judgment variables and the MIAMI serve as a strong validation for this updated measure of academic misconduct.

Academic motivation

Goal structures represent students’ perceptions of the motivation-focused goals that instructors emphasize during instruction (Bardach et al. 2020). Students perceive a mastery goal structure when they believe their instructors allow them the time to truly master the content and value effort and improvement over time (Meece et al. 2006). Research consistently indicates that a mastery goal structure decreases the likelihood of students engaging in academic misconduct behaviors (Anderman and Midgley 2004; Bong 2008; Anderman et al. 2022). When instructors are perceived as emphasizing grades and performance (an extrinsic goal structure), students are more likely to engage in cheating behaviors (e.g., Anderman and Midgley 2004). We expected perceived mastery goal structures to be associated with less self-reported misconduct and perceived extrinsic goal structures to be associated with more. This association can serve to support claims of concurrent validity of the revised misconduct measure (MIAMI).

Summary & hypotheses

In the present study, we validated an updated version of McCabe’s measure of academic misconduct. Because the literature contains examples of both single and multiple factor conceptions of academic misconduct without testing them psychometrically, we considered single factor, bifactor, and multiple factor models. Based on an unpublished pilot study briefly described in the procedure section, we predicted that while a single factor model of academic misconduct behavior would be adequate, a model containing factors for different types of behavior, such as plagiarism and other misuse of resources, collusion, and fraud would provide an even better fit to the data (H1). We further predicted that the MIAMI (and the other measures used for validation) would show acceptable reliability (H2). Finally, by simultaneously measuring students’ self-reported academic misconduct, their perceptions of the academic climate at their institutions, and other key psychological variables, we aimed to demonstrate the concurrent validity of the MIAMI (H3). If the MIAMI is valid, patterns of association between the MIAMI and these key associated constructs would be consistent with results found in the literature. In particular,

  • H3a. Students who perceive a stronger academic integrity climate (higher scores) will report fewer misconduct behaviors (lower MIAMI scores).

  • H3b. Students who perceive higher levels of peer disapproval of misconduct behaviors will report engaging in fewer misconduct behaviors (lower MIAMI scores).

  • H3c. Students who report higher frequency of misconduct behaviors by peers will report engaging in more misconduct behaviors (higher MIAMI scores).

  • H3d. Students who hold a more positive moral stance toward misconduct will report more of that misconduct (higher MIAMI scores).

  • H3e. Students who have stronger neutralizing attitudes will report engaging in more misconduct behaviors (higher MIAMI scores).

  • H3f. Students who perceive stronger extrinsic goal structures will report engaging in more misconduct behaviors (higher MIAMI scores).

  • H3g. Students who perceive stronger mastery goal structures will report engaging in fewer misconduct behaviors (lower MIAMI scores).

Method

Participants

Respondents were recruited from four institutions in the United States and Canada: a research university in the Southeast (n = 1216); two public liberal arts universities, one in the Mid-Atlantic (n = 548) and one in the Southwest (n = 192); and a community college in Ontario (n = 418). A total of 2374 students began the survey, but 47 were removed prior to analyses because they did not complete the survey, withdrew, were under 18 years old, or provided careless or dishonest responses (see Note 1).

Participants (n = 2329) ranged in age from 18 to 81 years old with an average age of 24.2 (SD = 7.6); 65% of participants identified as female, 29% as male, 2% as non-binary, 1% as more than one gender, and 3% declined to respond. In terms of race/ethnicity, 57.5% of participants identified as White, 15.2% as Asian, 9.6% as Multiracial, 6.5% as African American/Black, 6.3% as Hispanic, 2.7% declined to respond, 2% preferred to self-identify, 0.3% identified as American Indian or Alaska Native, and 0.04% identified as Native Hawaiian or Pacific Islander. Most participants indicated they were enrolled in Bachelor’s programs (60.2%), followed by those enrolled in Doctoral (11.9%), Master’s (11.1%), non-degree (10.3%), and Associate’s (5.2%) programs, with 1.3% declining to provide an answer. Thirty-three percent of students were in their first year at their current institution, 27% in their second year, 19% in their third year, 13% in their fourth year, and 8% in their fifth year or later. With respect to field of study, participants were fairly equally distributed among the biological/physical sciences (17%), social sciences (15%), and business (16%); 10% were in education, 7% in computer science/mathematics, and the rest were distributed among a wide range of disciplines. The majority of participants completed most of their secondary education in the United States (76%) or Canada (8%), with the remainder most commonly completing theirs in India (4%) or China (1%). Relatedly, 82% of participants reported studying in their first language. Finally, 15% of the participants reported being the first in their immediate family to participate in post-secondary education.

Survey design and procedure

To create and validate the MIAMI, the McCabe et al. (2012) misconduct measure was updated to reflect new forms of misconduct and contemporary language, ensuring that the entirety of the construct was still captured even as cheating behaviors have changed over time. The research team reached a consensus about which items to include and the initial wording of each one, with generalizability and clarity as the driving goals. The climate measure was revised using the same process, based on McCabe et al. (2012). Measures of peer norms, moral attitudes toward misconduct, and learning goals were adapted to create the full survey instrument.

Following the initial generation phase, the instrument was pretested and piloted to ensure face and content validity as well as the psychometric soundness of the instruments. First, a draft was reviewed by a panel (n = 15) of academic integrity scholars and professionals recruited through the International Center for Academic Integrity (ICAI). The group was asked to comment on the items in the instrument and any missing items. Revisions were made based on this feedback. Next, focus groups were conducted with undergraduate students to evaluate the clarity and specificity of the climate and behavior scales. These groups consisted of upper-level psychology students in a small, liberal arts college with an honor code. The groups were asked to nominate and paraphrase ambiguous items to ensure that the intent of the item had been communicated. Revisions were made in cases of confusion or misinterpretation by the students.

Pilot data were then collected from a sample of undergraduates using the retained set of items. Using a random 50% of the pilot sample (n = 1004), an exploratory factor analysis (EFA; with direct oblimin rotation and maximum likelihood extraction) revealed that minor revisions were necessary for the MIAMI and the climate scale. The results of the EFAs during this piloting phase informed the hypothesized model structures fit via confirmatory factor analysis (CFA) in the current study.
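To make this analytic step concrete, the following is a minimal sketch of an EFA of this kind (maximum likelihood extraction with direct oblimin rotation), written in Python with the factor_analyzer package. The paper does not report the software used, so the package choice, file name, and item names are illustrative assumptions.

```python
# Sketch of the pilot-phase EFA: maximum likelihood extraction with
# direct oblimin rotation on a random half of the pilot sample.
# File and column names are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer

pilot = pd.read_csv("pilot_responses.csv")     # hypothetical scored item data
half = pilot.sample(frac=0.5, random_state=1)  # random 50% split, as described above

fa = FactorAnalyzer(n_factors=3, rotation="oblimin", method="ml")
fa.fit(half.dropna())

# Inspect loadings to flag weak or cross-loading items for revision.
loadings = pd.DataFrame(fa.loadings_, index=half.columns)
print(loadings.round(2))
print(fa.get_factor_variance())  # variance explained by each factor
```

Here n_factors is set to three only because the pilot EFAs informed the three-factor CFA reported below; in practice one would compare several candidate solutions.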

For the validation data reported here, all survey participants were recruited via email by their institution. Three of the institutions offered a raffle entry for varying prizes as compensation. Participants were treated according to APA ethical guidelines, and no identifying data were collected. Three institutions invited their complete student populations, while one invited a randomly selected subset of its student body. Response rates varied by institution, ranging from 3% to 10%.

The survey was administered online. All consenting participants received the survey sections in randomized order: half received the demographic section first, while half received it at the end of the survey. Following (or preceding) the demographic section, the other sections were presented in random order, and items within each section were also randomly ordered. Attention-check items such as “Humans eat food” were inserted in the academic misconduct and integrity climate scales to assess participant attention. Participants were asked, in a single item, to self-report how honest they were in answering the questions on the survey and were then debriefed. This question has been shown to effectively reduce underreporting of sensitive behavior in other contexts (Zimmerman and Langer 1995). Any respondent who indicated they were either “not at all honest” or “not very honest” was removed from the dataset prior to analyses (n = 29).
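The screening rules described above translate into a simple filtering step. The sketch below is a hypothetical illustration; the column names and response codings are our assumptions, not the authors’ actual data export.

```python
# Hypothetical screening step: drop respondents who failed an attention
# check or self-reported low honesty, mirroring the exclusions above.
import pandas as pd

df = pd.read_csv("raw_survey.csv")  # hypothetical raw survey export

# Attention check: "Humans eat food" should be endorsed (assumed coding).
passed_attention = df["attn_humans_eat_food"] == "Strongly Agree"

# Honesty screen: remove "not at all honest" / "not very honest" responders.
honest = ~df["self_honesty"].isin(["Not at all honest", "Not very honest"])

clean = df[passed_attention & honest].copy()
print(f"Retained {len(clean)} of {len(df)} respondents")
```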

Materials

The survey questions were designed to elicit participants’ self-reports of their behavior, attitudes, beliefs, and experiences regarding academic integrity at their current institution. In addition to the survey items described below, participants were asked about a range of demographic and academic attributes. All items in the survey are available upon request from the corresponding author or ICAI.

Academic misconduct behaviors

Academic misconduct was assessed with the MIAMI, an original measure derived (in part) from the list of behaviors used by McCabe and Treviño (1993). McCabe’s survey consisted of 25 substantive items and one “other” item. The content focused on exams, assignments, and lab/research data, with an emphasis on in-person communication and behavior.

We revised three main aspects of the McCabe survey: language, online courses, and technology. We updated the language not to change the substance of an item, but to modernize it (e.g., replacing the term “chums” with “classmates”). Other updates were made to account for the large increase in online and non-traditional students in higher education over the last three decades (Gray 2014). For example, items regarding technology use were made more specific, and references to classrooms were removed. Finally, references to technologies had to be updated to reflect changes since the McCabe survey was created; the updated survey reflects technologies such as translation websites, study-helper websites, smartphones, and websites that enable the outsourcing of work to third parties. Of note is that the updated items were developed prior to the launch of ChatGPT in November 2022; therefore, no items in the present version specifically address ChatGPT or other forms of artificial intelligence (AI) use. A semi-open-ended item labeled “other” was included as a response option, a continuation of McCabe’s method. Only 72 participants (3.1%) provided an open-ended response, so these responses were not analyzed for this manuscript. The final list of 24 behaviors used in the present study is in Table 1. The revision does not distinguish between in-person and online behaviors, and it includes a broad range of dishonest activities not previously considered (e.g., bribery and impersonation). AI-specific items are in development and are available from the author.

Table 1 MIAMI complete item list

The response scale for the behaviors section was also updated. The new scale allowed participants to indicate how often they had engaged in each behavior: Never; Once; 2–4 times; 5–10 times; 11 or more times; or to indicate that the behavior is “not applicable to my program.” The added “not applicable to my program” option was important for questions on activities that not all students encounter, such as the opportunity to falsify lab data. The questionnaire specifically asked students to consider the last 12 months at their current institution. This focus on a recent period increases the likelihood of students remembering and answering accurately, and it removes the language around a traditional academic unit such as a term, acknowledging the growth in non-traditional students for whom such units are not necessarily relevant.
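Because the scale mixes ordered frequency categories with a “not applicable” option, scoring requires a decision about how to treat that option. The paper does not prescribe a numeric coding, so the mapping below is an assumption for illustration, treating “not applicable” as missing rather than as zero.

```python
# Hypothetical numeric coding of the MIAMI response scale.
import numpy as np
import pandas as pd

SCALE = {
    "Never": 0,
    "Once": 1,
    "2-4 times": 2,
    "5-10 times": 3,
    "11 or more times": 4,
    "Not applicable to my program": np.nan,  # missing, not zero (our choice)
}

def score_items(raw: pd.DataFrame) -> pd.DataFrame:
    """Map response labels to ordinal scores, leaving N/A as missing."""
    return raw.apply(lambda col: col.map(SCALE))

# One hypothetical respondent across three illustrative items:
raw = pd.DataFrame({
    "collusion_1": ["Never"],
    "misuse_1": ["2-4 times"],
    "fraud_1": ["Not applicable to my program"],
})
print(score_items(raw))
```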

Academic integrity climate

Items for academic integrity climate were based on McCabe’s original survey questions assessing student and faculty knowledge of and attitudes towards the academic integrity policy at their institution. The original items assessing the construct were presented in varying response formats, including Likert response scales as well as dichotomous options (i.e. yes/no), and were written for both students and faculty. We developed 22 self-report items that measure college students’ perceptions of the academic integrity climate at their institution. Respondents indicated their agreement with statements measuring the following dimensions of climate: understanding of what actions constitute cheating, perception of their peers’ attitudes and behaviors, perception of student and faculty knowledge and support of the academic integrity policy, and perception of the effectiveness of the academic integrity policy in deterring cheating, including the likelihood and severity of facing consequences. Participants responded on a 5-point Likert scale from Strongly Disagree (1) to Strongly Agree (5). Items were both positively and negatively worded and a higher score indicates a stronger climate of support for academic integrity. Sample items include “Most students here ignore the academic integrity policy” (reverse item) and “If I witnessed another student cheating, I know what steps to follow to report the incident.”

After an analysis of reliability, we retained 19 items that demonstrated strong internal consistency reliability (α = .86, ω = .87). Based on the VALUE rubric of the American Association of Colleges and Universities (AAC&U 2021), three items were written to assess students’ perceptions of their own ethical development, such as “My experience at this college has helped me act more ethically as a student.” These additional items (α = .86, ω = .86) were presented to participants but not included in the climate scale data analyses.

Perception of peer norms

Two aspects of peer norms—attitudes and behaviors—related to academic misconduct were assessed with measures adapted from McCabe and Treviño (1993). Perceptions of peer disapproval of misconduct were assessed using a 5-point Likert scale from Strongly Disagree (1) to Strongly Agree (5) with five items (e.g., “If I cheated on a test or exam, my friends would be really disappointed in me”). The 5-item subscale measuring peer disapproval of cheating demonstrated strong internal consistency reliability (α = .84, ω = .85).

Perceptions of peer cheating behavior were assessed by asking participants to indicate how often they had “observed or had direct knowledge of students” at their institutions engaging in academic misconduct using five items (e.g., “Using unauthorized notes or sources during a test or exam”). The 5-item peer behavior scale demonstrated strong internal consistency reliability (α = .86, ω = .87).

Moral attitudes (moral unacceptability and disengagement)

Students’ beliefs about the moral unacceptability of academic misconduct were assessed with an adapted version of the McCabe et al. (2012) measure. For five of the most common behaviors assessed in McCabe’s misconduct behavior scale (described previously), participants were prompted to indicate how “serious” they believe each transgression is using a four-point scale (where 1 = Not cheating, 2 = Trivial cheating, 3 = Moderate cheating, 4 = Serious cheating). In the updated survey, participants were asked to use a five-point scale (where 0 = Not at all morally/ethically wrong, 1 = Slightly, 2 = Moderately, 3 = Very, 4 = Completely morally/ethically wrong) to respond to the following prompt: “In your opinion, for each of the behaviors described below, please indicate the extent that you personally think the behavior is morally/ethically wrong.” The 5-item measure of moral unacceptability demonstrated strong internal consistency reliability (α = .89, ω = .89).

An adapted version of the Shu et al. (2011) measure of moral disengagement was used to assess participants’ tendency to displace or otherwise minimize personal responsibility for cheating. Specifically, participants used a 5-point Likert scale from Strongly Disagree (1) to Strongly Agree (5) to indicate the extent to which they agreed with seven items, such as “It is OK to cheat to help one’s friends.” In the present study, the 7-item moral disengagement scale demonstrated strong internal consistency reliability (α = .89, ω = .90).

Academic motivation

We measured the extent to which students perceived that their instructors emphasize mastery using four items and their emphasis on the importance of grades and test scores with three items. These measures are adapted from items developed by Midgley and colleagues (Midgley et al. 1998). Each subscale uses a 5-point Likert-type scale, from Not at all true (1) to Very true (5). The items assessing perceptions of a mastery goal structure included statements such as “My professors believe that it’s important to understand the work, not just to memorize it.” The 4-item mastery goal structure subscale demonstrated strong internal consistency reliability (α = .85, ω = .85). The items assessing perceptions of an extrinsic goal structure include items such as “My professors emphasize the importance of test scores.” The 3-item extrinsic goal structure subscale showed strong internal consistency reliability (α = .81, ω = .81).

Results

MIAMI factor structure

The theorized factor structure for the MIAMI included three subscales: collusion, misuse of resources, and fraud/contract cheating. To confirm that the theorized model fit the data collected, a confirmatory factor analysis (CFA) was performed with maximum likelihood estimation. The model was specified such that six items loaded onto the collusion subfactor, nine items loaded onto the misuse of resources subfactor, and nine items loaded onto the fraud/contract cheating subfactor. Initial results indicated that two items in the collusion factor, three items in the misuse of resources factor, and one item from the fraud/contract factor had low factor loadings or cross-loaded onto other factors. These items were removed. One additional item from the fraud/contract factor was found to have very little variation in the sample; this item was also removed. The final composition of the three subfactors is reported in Table 2 and includes four items for collusion, six items for misuse of resources, and seven items for fraud/contract. We evaluated model fit using the root mean square error of approximation (RMSEA), comparative fit index (CFI), and standardized root mean square residual (SRMR), with cutoff values recommended by Hu and Bentler (1999): RMSEA values less than .05, CFI values greater than .90, and SRMR values less than .08 were considered indicative of acceptable model fit (see Table 2). Chi-square tests of model fit are also presented in Table 2.
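As an illustration of this specification, the sketch below fits the hypothesized three-factor model in Python with the semopy package (lavaan-style syntax). The software and item names are our assumptions; the paper does not name its CFA package, and semopy’s calc_stats reports CFI and RMSEA while SRMR would need to be computed separately.

```python
# Sketch of the three-factor CFA with maximum likelihood estimation.
# Item names are hypothetical placeholders for the items in Table 2.
import pandas as pd
import semopy

MODEL = """
collusion =~ col1 + col2 + col3 + col4
misuse    =~ mis1 + mis2 + mis3 + mis4 + mis5 + mis6
fraud     =~ fr1 + fr2 + fr3 + fr4 + fr5 + fr6 + fr7
"""

data = pd.read_csv("miami_scored.csv")  # hypothetical scored item data

model = semopy.Model(MODEL)
model.fit(data)  # maximum likelihood estimation (semopy's default objective)

stats = semopy.calc_stats(model)
print(stats[["chi2", "DoF", "CFI", "RMSEA"]])
```

A competing single-factor model can be fit the same way by loading all items onto one latent variable and comparing fit indices, which is the comparison reported below.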

Table 2 Results of confirmatory factor analysis for MIAMI

Results of the CFA confirmed that the three-factor structure of the MIAMI items is a good fit to the data (RMSEA = .03, CFI = .90, SRMR = .05). All items loaded significantly onto their respective factors and, as expected, the three subfactors were significantly correlated with one another (r = .73–.86). Two single-factor versions of the MIAMI were also analyzed: one with all 24 items (minus the one item with no variance) loading onto a single factor (i.e., “cheating”), and one with just the 17 items retained from the three-factor solution loading onto a single factor. The results indicated poor fit to the data for both the 24-item (RMSEA = .04, CFI = .75, SRMR = .07) and 17-item (RMSEA = .04, CFI = .87, SRMR = .05) single-factor versions of the model. Only the three-factor structure of the MIAMI had good model fit according to the CFAs. Therefore, H1 was supported.

MIAMI scale reliability

Results for internal consistency of the final version of the academic misconduct behavior scale using both Cronbach’s alpha and McDonald’s omega show that the scales are reliable. The collusion subscale (α = .79, ω = .83) and misuse of resources subscale (α = .72, ω = .76) also showed strong reliability. The fraud/contract cheating subscale (α = .64, ω = .63) lacked similar strength because few participants reported engaging in this type of behavior. For those wishing to include more possible behaviors as part of an institutional assessment, internal consistency for the original 24 items was very good (α = .89, ω = .91). The 17-item version loading onto a single factor (α = .86, ω = .89) was also found to be reliable. Overall, H2 was supported.
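For reference, the two reliability coefficients reported throughout this paper can be computed as in the sketch below: Cronbach’s alpha directly from item scores, and McDonald’s omega from the standardized loadings of a single-factor maximum likelihood solution. The formulas are standard, but the package choice and file/column names are illustrative assumptions.

```python
# Reliability sketch: Cronbach's alpha and McDonald's omega for one subscale.
import pandas as pd
from factor_analyzer import FactorAnalyzer

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def mcdonald_omega(items: pd.DataFrame) -> float:
    """omega = (sum lambda)^2 / ((sum lambda)^2 + sum of uniquenesses),
    from a single-factor maximum likelihood solution."""
    fa = FactorAnalyzer(n_factors=1, rotation=None, method="ml")
    fa.fit(items)
    lam = fa.loadings_[:, 0]
    return lam.sum() ** 2 / (lam.sum() ** 2 + fa.get_uniquenesses().sum())

subscale = pd.read_csv("collusion_items.csv").dropna()  # hypothetical 4-item subscale
print(f"alpha = {cronbach_alpha(subscale):.2f}, omega = {mcdonald_omega(subscale):.2f}")
```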

Concurrent validity

The core logic of this validation study is that the MIAMI, an update to McCabe’s measure of academic misconduct behavior (McCabe and Trevino 1997), will show the same pattern of associations with criterion variables as previous versions. Correlation coefficients describing the associations between misconduct (as a 17-item single factor and as its three subcomponents) and the criterion variables are presented in Tables 3 and 4. Associations between the variables are generally consistent and in the predicted direction, illustrating the consistency of the replication across measures.

Table 3 Correlations between single-factor MIAMI and other criterion variables
Table 4 Correlations between three-factor MIAMI and other criterion variables

Subtypes of cheating behaviors were positively correlated with each other (see Table 4). As Table 4 shows, collusion and misuse of resources, collusion and fraud/contract, and misuse of resources and fraud/contract were all significantly related to one another. Further, cheating behaviors were positively related to observed peer behaviors and moral disengagement. Additionally, as predicted, cheating behaviors were negatively related to perceptions of academic integrity climate, peer disapproval, mastery goal structures, and moral unacceptability. This pattern is consistent with previous research and serves as a strong validation of the new measures.

For those wishing to use a single-factor version of misconduct, the single-factor (17-item) construct shows a strong association with the other variables. Cheating behavior is negatively correlated with students’ perception of the academic integrity climate. As expected, the peer attitudes subscale is negatively correlated with cheating behavior, while the observed peer behavior subscale is positively correlated with it. Interestingly, the subscale for extrinsic goal structures lacks a significant correlation with misconduct, but the subscale for mastery goal structures has a negative association. Finding misconduct morally unacceptable was correlated with lower rates of self-reported cheating behavior; moral disengagement was related to increased cheating behavior.

Based on updated effect size guidelines and our sample size (Gignac and Szodorai 2016), we interpret the magnitudes of the correlations as medium to strong, with the strongest relationships existing between cheating behaviors and the subscales for observed peer behavior and moral disengagement. All variables included in the correlation matrix were related to each other in the expected ways, based on the previous literature (see Tables 3 and 4). Overall, with the exception of H3f (extrinsic goal structures), all predictions related to H3 were supported.
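A sketch of how such a correlation matrix might be produced: scale scores are formed as item means (skipping missing responses) and then correlated. The scale and column names below are hypothetical.

```python
# Sketch of the concurrent-validity correlations between scale scores.
import pandas as pd

df = pd.read_csv("survey_scored.csv")  # hypothetical scored survey data

scales = {
    "miami_17":         [f"miami_{i}" for i in range(1, 18)],
    "climate":          [f"clim_{i}" for i in range(1, 20)],
    "peer_disapproval": [f"pd_{i}" for i in range(1, 6)],
    "peer_behavior":    [f"pb_{i}" for i in range(1, 6)],
    "moral_diseng":     [f"md_{i}" for i in range(1, 8)],
}

# Item-mean scale scores, then a Pearson correlation matrix.
scores = pd.DataFrame({name: df[cols].mean(axis=1) for name, cols in scales.items()})
print(scores.corr(method="pearson").round(2))
```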

Discussion

In the present study, we update and validate an instrument assessing academic misconduct behavior for use by researchers and practitioners. Furthermore, we establish a psychometrically sound instrument for scholarly purposes by demonstrating a statistically valid factor structure, reliability, and concurrent validity. The MIAMI demonstrates strong internal consistency reliability, both as a full scale and for the subscales of collusion, misuse of resources, and fraud/contract cheating behaviors. The confirmatory factor analysis further supports the hypothesized three-factor structure, offering an empirically derived way to model different types of misconduct.

We have presented both a 17-item and a 24-item measure of academic misconduct. Institutions and researchers can use the full 24-item scale to assess the frequency of academic misconduct behaviors; the items are reliable and valid. From a psychometric perspective, however, some items load on more than one factor, while other items, such as bribery, have little to no variance because they are extremely rare behaviors that students seldom, if ever, report. In our sample, no student reported using bribery, so including the bribery item in a confirmatory factor analysis creates problems because the item does not load on any of the factors. With that said, researchers may still want to measure bribery because such behaviors may emerge in a larger sample or in different contexts. Therefore, the 24-item scale can be used to measure the frequency of the full list of misconduct behaviors. However, if a researcher wishes to create total or subscale composite scores, or to use latent variable approaches, the 17-item scale with three factors is psychometrically sound for that purpose.

Importantly, the MIAMI shows the predicted patterns of correlations with other validated constructs, providing evidence of concurrent validity. Consistent with prior research, academic misconduct is negatively associated with perceptions of a strong academic integrity climate and positively related to observing peer misconduct and to holding neutralizing attitudes that disengage personal responsibility. The findings also replicate previous work demonstrating that students who view cheating as more morally unacceptable tend to engage in less self-reported misconduct. The lack of association between an extrinsic goal structure and cheating aligns with some prior work but contradicts other studies (Anderman et al. 2022). Thus, these findings support all of the subparts of H3 except H3f, which connected extrinsic goal structures to academic misconduct.

We establish content validity for the MIAMI using the most appropriate methods available for revising an established measure. Modern approaches, such as those documented by Colquitt et al. (2019), were not available when McCabe’s original scale was created, and maintaining continuity with previous versions precluded their use here; we consider this a necessary tradeoff. Our process utilized experts in the field of academic integrity who reviewed a revised and updated set of items that included changes to wording and language and the addition of new forms of academic misconduct, particularly those facilitated by the broader use of technology. We establish face validity using both expert review and undergraduate focus groups. We present a pattern of relationships between the MIAMI and other variables that supports previous empirical research and provides compelling evidence for the construct validity of the MIAMI. The correlations between academic misconduct and the contextual variables measured in the present study (academic integrity climate, peer behaviors, moral disengagement, peer disapproval, and motivation structures) also replicate previous findings obtained with the original McCabe measure.

In addition to updating the original McCabe instrument as an inventory of cheating and misconduct behaviors, we derive psychometrically sound subscales, or factors, of academic misconduct behaviors, which enables researchers to examine the predictors of these sub-categories or clusters of behaviors. The three-factor model that we present is a psychometrically sound approach to measuring and studying the distinct factors of collusion, misuse of resources/plagiarism, and fraud. The three-factor structure allows future researchers to examine antecedents and correlates of these distinct cheating behaviors, as well as to evaluate interventions that address each in unique ways. For example, future research will need to disentangle which constructs related to academic misconduct (e.g., climate, peer norms, moral attitudes, and academic motivation) are the most predictive of self-reported cheating behaviors (Perry, A. H., Rettinger, D. A., Stephens, J. M., Anderman, E. M., McTernan, M. L., Tatum, H., Gallant, T. B., McNally, D., & Cullen, C.: Comparing theoretical models of academic integrity: A psychometric approach, in preparation). One of the most important contributions of our research is making available a standardized measure of academic misconduct that will allow for more effective comparisons across research studies spanning the globe.

One might criticize the use of self-report measures in our study. While we acknowledge their inherent limitations, the overwhelming majority of research on academic dishonesty is based on self-report measures. Although self-reported cheating behaviors may be correlated with social desirability, empirical evidence suggests that they are good estimates of actual behavior; experimental work shows that people who self-report lying more often are also more likely to cheat in a behavioral task (Halevy et al. 2014). The alternatives to self-report, such as inducing dishonest behavior in a laboratory setting or conducting a natural experiment, present their own ethical problems, particularly concerning the incentives for and consequences of cheating. We therefore included established measurement strategies to reduce social desirability bias, random responding, and lying, and we are confident these strategies were effective. The primary goal of the current study was to present a reliable and valid self-report instrument; further validation of the academic misconduct measure should include predictors that are measured using other methods. Although our sample came from a small number of institutions, it was more than sufficient in size for the analyses we conducted and diverse in both geography and participant demographics.

An important limitation of the present research is the rapid change in academic misconduct caused by generative artificial intelligence. The current research will provide a fundamental theoretical and practical basis for revisions that include misconduct behaviors related to the improper use of artificial intelligence technology. While fundamental principles of academic integrity are anticipated to remain the same, future research should determine whether AI-based academic misconduct is fundamentally similar to or different from previously studied behaviors.

We present a reliable and valid measure of academic misconduct that can be used by institutions, practitioners, and researchers. Scholars now have access to a validated inventory of academic misconduct behaviors for use as a criterion or predictor variable in further research. If the measure is as widely used as McCabe’s original, the MIAMI can support more consistent comparisons of misconduct behavior across studies. Practitioners can use the measure to track changes in students’ (self-reported) misconduct, targeting specific areas for academic integrity interventions to reduce misconduct. Further, institutional researchers now have a validated self-report measure to use when assessing the effect of academic integrity programs on misconduct. The updated MIAMI will be useful for institutions working with ICAI to improve the academic integrity climate among their students and it can facilitate further research advancing our understanding of this critical issue in higher education and testing interventions to promote academic integrity.

Data availability

No datasets were generated or analysed during the current study.

Notes

  1. Careless responders were identified through the use of four data quality questions that required participants to provide logical and accurate responses (e.g., “How often have you worked more than 25 hours in a single day?”), a self-reported honesty question (i.e., “Overall, how honest would you say you were in answering this questionnaire?”), and an examination of straight-lined responses on multiple survey scales. Dishonest responders were identified using the honesty question described in the Procedure section.

Abbreviations

ICAI: International Center for Academic Integrity

MIAMI: McCabe/ICAI Academic Misconduct Inventory

EFA: Exploratory factor analysis

CFA: Confirmatory factor analysis

APA: American Psychological Association

AAC&U: American Association of Colleges and Universities

CFI: Comparative fit index

RMSEA: Root mean square error of approximation

SRMR: Standardized root mean square residual


Acknowledgements

The authors wish to acknowledge funding from the office of the Dean of Arts and Sciences at the University of Mary Washington and logistical and financial support from the International Center for Academic Integrity. Full materials will be made available for use by scholars. In memory of Don McCabe.

Author information


Contributions

Authors’ contributions follow the CRediT taxonomy. Conceptualization—DR, JS, HT, TBG, EA. Methodology—MMcT, AP, JS, HT, DR, DMcN. Investigation—EA, DMcN, DR, HT. Formal analysis and investigation—AP, MMcT. Writing—original draft preparation—AP, DR, HT, CC, TBG, MMcT, DMcN, JS, EA. Writing—review and editing—DR, HT, CC, DMcN, AP, TBG, JS, EA, MMcT. Supervision—EA, DR.

Corresponding author

Correspondence to David A. Rettinger.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Rettinger, D.A., Tatum, H., Perry, A.H. et al. MIAMI: development and validation of a revised measure of academic misconduct. Int J Educ Integr 20, 19 (2024). https://doi.org/10.1007/s40979-024-00167-2

