

NASA Technical Memorandum 110442 DOT/FAA/AR-96/126


LOFT Debriefings: An Analysis
of Instructor Techniques and Crew
Participation
R. Key Dismukes, Ames Research Center, Moffett Field, California

Kimberly K. Jobe, San Jose State University/Ames Research Center, Moffett Field, California

Lori K. McDonnell, San Jose State University/Ames Research Center, Moffett Field, California

March 1997

PREFACE
This study originated from requests from several airline training departments for
help in analyzing the effectiveness of LOFT debriefings. Doug Daniel and Steve
Gregorich helped identify crucial issues and ways to study them.

The study could not have been conducted without the generous willingness of
instructors and line crews to allow us to observe their debriefings. We are
impressed with their high standards of professionalism. Training department
managers from each of the airlines that participated in the study provided a
wealth of background information and made valuable suggestions on early drafts
of this manuscript.

This study was funded by the FAA's Office of the Chief Scientist and Technical
Advisor for Human Factors (AAR-100). Eleana Edens, the program manager,
provided support, encouragement and helpful suggestions.

TABLE OF CONTENTS

Preface
List of Figures and Tables
1.0 OVERVIEW
2.0 INTRODUCTION
    2.1 Background
    2.2 What is Facilitation and Why Use It?
    2.3 Techniques for Facilitation
        2.3.1 Introductions
        2.3.2 Active listening
        2.3.3 Questions
        2.3.4 Silence
        2.3.5 Videos
    2.4 Research Questions
3.0 METHODS
    3.1 Participants
    3.2 Procedures
    3.3 Measures
        3.3.1 Descriptive measures
        3.3.2 Debriefing Assessment Battery
    3.4 Statistical Analyses
4.0 RESULTS
    4.1 General Observations
    4.2 Descriptive Data
        4.2.1 Participation
        4.2.2 Content of discussion
        4.2.3 Instructor questions
        4.2.4 Interruptions
        4.2.5 Videos
        4.2.6 Crew participation
    4.3 Debriefing Assessment Battery
        4.3.1 Scores
        4.3.2 Correlations
        4.3.3 Effect of introductions
    4.4 Correlations Between Battery and Descriptive Variables
        4.4.1 Instructor battery with instructor descriptive
        4.4.2 Instructor battery with crew descriptive
        4.4.3 Crew battery with crew descriptive
        4.4.4 Instructor descriptive with crew battery and descriptive
    4.5 Instructor Differences
5.0 DISCUSSION
    5.1 Descriptive Variables
        5.1.1 Duration
        5.1.2 Content
        5.1.3 Instructor characteristics
        5.1.4 Crew characteristics
    5.2 Debriefing Assessment Battery
        5.2.1 Battery characteristics
        5.2.2 Scores and correlations
    5.3 Facilitation Techniques and Common Mistakes
    5.4 Implications for Training
6.0 CONCLUSIONS AND RECOMMENDATIONS
Figures and Tables
References
Appendix A. Coding
Appendix B. Calculation of utterance variables
Appendix C. Debriefing Assessment Battery
Appendix D. Anchoring of the Debriefing Assessment Battery
Appendix E. Spearman Correlation Coefficients

FIGURES AND TABLES

Figure 1. Crew interaction chart
Figure 2. Effect of instructor facilitation on crew analysis and evaluation
Figure 3. Distribution of instructor scores on the Debriefing Assessment Battery

Table 1. Number of Debriefings Observed and Analyzed
Table 2. Interrater Reliabilities for the Debriefing Assessment Battery
Table 3. Average Duration of Debriefings
Table 4. Participation in Debriefings
Table 5. Content of Debriefings
Table 6. Discussion of Crew Performance
Table 7a. Correlations Between Instructor and Crew Topics
Table 7b. Correlations Between Instructor and Crew Emphasis on Aspects of Crew Performance
Table 8a. Instructor Questions: Two-person Crews
Table 8b. Crew Responses to Non-directed Questions: Two-person Crews
Table 9a. Instructor Questions: Three-person Crews
Table 9b. Crew Responses to Non-directed Questions: Three-person Crews
Table 10. Percent of Total Crew Words & Utterances Coded R, S1, S & Q
Table 11. Distribution of Crew Questions
Table 12. Average Number of Proactive Questions Per Hour
Table 13. Additional Measures of Crew Participation
Table 14. Debriefing Assessment Battery Scores
Table 15. Frequencies of Rating Scores on the Debriefing Assessment Battery
Table 16. Spearman Correlations Between IP and Crew Variables on the Debriefing Assessment Battery
Table 17. Spearman Intercorrelations Among Instructor Variables: Debriefing Assessment Battery
Table 18. Relationship of High and Low Introduction Scores to Crew Analysis & Evaluation and Depth of Activity
Table 19. Correlations Between Instructor Battery and Descriptive Variables
Table 20. Correlations Between Instructor Battery Variables and Crew Descriptive Variables
Table 21. Correlations Between Crew Battery and Descriptive Variables
Table 22. Correlations Between Instructor Descriptive Variables and Crew Battery and Descriptive Variables
Table 23. Variability Within and Across Instructors

LOFT DEBRIEFINGS: AN ANALYSIS OF INSTRUCTOR TECHNIQUES AND CREW PARTICIPATION

R. Key Dismukes, Kimberly K. Jobe, and Lori K. McDonnell

SUMMARY
This study analyzes techniques instructors use to facilitate crew analysis and
evaluation of their LOFT performance. A rating instrument called the Debriefing
Assessment Battery (DAB) was developed which enables raters to reliably
assess instructor facilitation techniques and characterize crew participation.
Thirty-six debriefing sessions conducted at five U.S. airlines were analyzed to
determine the nature of instructor facilitation and crew participation. Ratings
obtained using the DAB corresponded closely with descriptive measures of
instructor and crew performance. The data provide empirical evidence that
facilitation can be an effective tool for increasing the depth of crew participation
and self-analysis of CRM performance. Instructor facilitation skill varied
dramatically, suggesting a need for more concrete hands-on training in facilitation
techniques. Crews were responsive but fell short of actively leading their own
debriefings. Ways to improve debriefing effectiveness are suggested.

1.0 OVERVIEW
How much crews learn in Line-Oriented Flight Training (LOFT) and take back to
the line depends on the effectiveness of the debriefing that follows the LOFT.
The Crew Resource Management (CRM) literature and the Federal Aviation
Administration's (FAA) advisory circular (AC) 120-35C recommend that in the
debriefing instructors should facilitate self-discovery and self-critique by the crew
rather than lecture on what they did right and wrong. Self-discovery by the crew
is believed to provide deeper learning and better retention. Also, crews are more
likely to enhance their performance of CRM in line operations if they develop
their ability to analyze flight operations in terms of CRM and debrief themselves
after line flights.

In this study 36 LOFT debriefings conducted at five major U.S. airlines were
analyzed. Audiotape recordings of each session were made with the permission
of instructors and crews. The recordings were subsequently deidentified, coded,
and analyzed for more than 70 variables. The Debriefing Assessment Battery
was developed to systematically characterize instructor effectiveness at
facilitation and the nature of crew participation in debriefings. The data indicate
that the Debriefing Assessment Battery is a reliable and valid instrument for
assessing instructors' skill in facilitation and for analyzing crew participation. The
battery was designed to be used by researchers; however, a short form of the
battery that can be used by training departments to evaluate debriefings in real
time is currently being developed and evaluated.

Most instructors at all five airlines followed a similar general format for debriefing.
However, within each airline both instructors and crews varied widely on many of
the specific variables observed. There were also substantial differences among
airlines on several variables for both instructors and crews, though most of these
differences were not statistically significant due to the large variability within each
airline.
The debriefings lasted an average of 31 minutes, with a range of 8 to 82 minutes.
However, 31 minutes may not allow adequate time for crews to analyze their
performance thoroughly or learn and practice the skills of self-debriefing. This
study provides no data on the optimal length for debriefings; however, an hour
may be a useful rough target, with adjustments for the needs of individual crews.
This suggestion must, of course, be considered in the context of other demands
on instructors' time.

Most instructors appropriately emphasized crew performance in the LOFT and achieved a balance between CRM and technical issues, although the range of
instructor scores on these variables was very large. Instructors typically
emphasized the things crews did well, but said little about things done not so well
and spent little time suggesting ways to improve. Likewise, crews' discussions of
their performance tended to be factual descriptions of events and crew actions,
with limited evaluation of performance or discussion of ways to improve.

The content of the debriefings was driven almost exclusively by the instructors;
crew members rarely brought up topics on their own initiative. Also, discussions
revolved around the instructor, even when the instructor succeeded in getting the
crew to do most of the talking: there was little back-and-forth discussion directly
between crew members. The data indicate that crews were responsive but not
very proactive. This may be in part because few of the instructors explicitly told
crews they should take a proactive role and perform their own analysis without
depending on the instructor to lead them step by step. It may also be that
instructors themselves either do not fully accept or understand the concept of
crews taking initiative and responsibility for the content of the debriefing.

On average, instructors asked a large number of questions to elicit crew participation, directing their questions evenly among crew members. Participation
by captains and first officers was quite similar. Participation by flight engineers (in
three-person crews) was lower, but this difference was only marginally significant.

Most instructors appeared to be highly competent and conscientious in the traditional roles of instructors, and most attempted to facilitate crew participation
to some degree; however, their success in facilitation ranged from very good to
poor. Instructors who were effective in facilitation tended to use a combination of
techniques, such as careful phrasing of questions to encourage crew self-
analysis, strategic silence, active listening, and follow-up on crew-initiated topics.
Probably more important than the use of any particular technique is the
instructor's underlying focus on encouraging the crew to analyze for themselves
the situations that confronted them in the LOFT and how well they managed
those situations.

Many instructors unwittingly did things counterproductive to their own attempts to facilitate crew participation. In addition to failing to explicitly state expectations for
crew participation and allowing the discussions to revolve around themselves
instead of encouraging crew interaction, some instructors failed to allow crew
members enough time to formulate thoughtful responses to questions. Also,
some instructors engaged in long monologues, gave their own evaluations before
eliciting crew self-evaluation, failed to push the crew to go beyond superficial
description of their actions, and/or failed to encourage crews to analyze why
things went well when they did.

The wide range of instructor effectiveness in facilitation indicates that the airlines
face an issue of standardization of this aspect of debriefing. The distribution of
facilitation scores was distinctly bimodal, with one group of instructors scoring in
the good to very good range and another group of instructors scoring in the
marginal range. Also, instructors who did well in one aspect of facilitation typically
did well in all aspects (except stating expectations for crew participation), and
those who did poorly in one aspect tended to do poorly in all aspects. These data
suggest instructors' ability to use various techniques is determined at least in part
at the conceptual level: Do they grasp the underlying concept of facilitation? Do
they accept the concept? Is facilitation the type of approach for which they have
ability?

The CRM literature states that debriefings should be led by the crews
themselves, using the instructor as a resource. Our data suggest that this goal,
although worthwhile, is rather idealistic. Instructors become discouraged when,
after a brief and rather abstract course in facilitation, they attempt to facilitate
debriefings and discover that crews often do not immediately respond. We
suggest that it would be more effective to teach instructors that facilitation should
be adapted to the level at which the particular crew is able to respond.
Facilitation can be conducted at levels ranging from high, which approaches the
ideal of the debriefing being led by the crew, to low, in which the instructor leads
the crew substantially, but in all cases debriefings should emphasize as much
self-discovery by the crew as possible.

Instructors are encouraged to attempt to facilitate at the highest level possible for
a particular crew. Realistically, however, most crews do not yet have the skills
and motivation needed to lead their own debriefings without substantial
assistance from the instructor. It may be possible to change this situation over
time if LOFT instructors consistently encourage crews to take a proactive role in
debriefing their own training.

Instructors sometimes mistakenly assume that using facilitation requires giving up their role as teachers in the debriefing. On the contrary, good facilitation in no
way precludes the instructor from adding his or her own perspective to the
discussion or from teaching specific points about CRM and technical issues as
appropriate. Effective facilitators can integrate their teaching points into a group
discussion in which the crew members are full participants.
The study provides empirical evidence that facilitation can be used to
substantially increase crew self-discovery and the depth of crew participation.
Instructors, however, need additional training in facilitation. Facilitation training
should emphasize hands-on practice in which instructors encounter the kinds of
obstacles they are likely to face in actual debriefings. Initial training should be
followed by mentoring by senior instructors who are themselves expert
facilitators. A training manual that provides detailed suggestions for how to
facilitate debriefings is forthcoming as a companion to this technical report.

2.0 INTRODUCTION
2.1 Background
Line Operational Simulation (LOS) is widely used to provide opportunities for
crews to practice CRM concepts in realistic and challenging simulated flight
situations. As indicated in the FAA's AC 120-35C (1995), LOS includes LOFT,
Line Operational Evaluation (LOE), and Special Purpose Operational Training
(SPOT). LOFT is the original "non-jeopardy" form of simulation training in which
crews are not graded on their performance. Like LOFT, SPOT is used for training
rather than evaluative purposes. In LOE crews are graded, which is required in
those airlines that participate in the FAA's Advanced Qualification Program
(AQP). Both LOFT and LOE are full-mission simulations that include all phases
of flight, whereas SPOT may be full-mission or only a segment of a flight tailored
to focus on a particular training point.

How much crews learn in LOFT and take back to the line depends on the
effectiveness of the debriefing that follows the LOFT (Helmreich & Foushee,
1993). The simulation itself is a busy, intense experience, and thoughtful
discussion afterward is necessary for the crew to sort out and interpret what
happened and why. Instructors are expected to lead debriefings in a way that
encourages crew members to analyze their LOFT performance for themselves.
Rather than lecturing to the crew on what they did right and wrong, the instructor
is expected to facilitate self-discovery and self-critique by the crew (Butler, 1993;
Hawkins, 1987; Smith, 1994).

CRM and LOFT programs have developed considerably since their inception
almost twenty years ago. The concepts and the value of CRM are now generally
accepted by both airline managers and pilots. However, it is not clear whether
crews consistently think about and practice CRM in line operations (see
discussion in Helmreich & Foushee, 1993). AQP is bringing to the fore the issue of
how well crews are actually able to practice CRM, because poor CRM can cause
crews to fail a LOE (Birnbach & Longridge, 1993; FAA, 1991). In order for LOE
programs to be effective and accepted, pilots must believe they are being graded
on performance dimensions they understand and by criteria that seem
appropriate and achievable. The ability of crews to analyze and evaluate their
own performance in LOFT may predict their acceptance of LOE grading.

2.2 What is Facilitation and Why Use It?


The FAA's AC 120-35C on Line Operational Simulations (1995) describes the
general concept of facilitated debriefings:

The facilitator should not handle the debrief in a "teacher tell" manner but,
instead, operate as a resource to crew members by highlighting different portions
of the LOS that may be suitable for review, critique, and discussion. The
discussion should be led by the crew themselves, using the facilitator and the
videotape as resources for use during their critique...Self-criticism and self-
examination are almost always present in these situations, and in many cases
they are much more effective than facilitator criticism...Thus, the facilitator should
do everything possible to foster this sort of self-analysis, while at the same time
keep the debrief at a constructive level. In the role of moderator, the facilitator
can guide the discussion to areas that he or she has noted...However, unless
absolutely necessary, the facilitator should avoid "lectures" about what is right
and wrong.

The concept of facilitated debriefings appears to have been part of the early
inception of LOFT (Lauber & Foushee, 1981). The origin of this concept is not
clear, but it appears to have been derived from the use of facilitation in other
business settings, such as retreats in which managers discuss their
organizational goals and issues (e.g., Gibb, 1982; Mills & Roberts, 1981).

The primary rationale for facilitating rather than lecturing is that crews can learn
and remember much more when they participate actively and make their own
analyses than when they listen passively to the instructor (Duvall & Wicklund,
1972; Smith, 1994). Another potential benefit of crew-centered LOFT debriefings
is that they can help crews develop the habits of analyzing their own CRM
performance on the line and conducting their own crew debriefings following line
operations (Butler, 1993). In practice, crew debriefings on the line in civil
operations are as yet rare, although military crews often debrief their missions.
Thus, the LOFT debriefing is an important tool for showing crews how to debrief
and for illustrating the benefits of self-debriefing.

Continental Airlines' (1992) handbook on LOFT facilitation techniques outlines a useful hierarchy of facilitation based on the concepts of discovery and ownership.
According to this handbook, the goal of facilitation is to have crews recognize
what they did well and what they need to improve (discovery), and to have crews
make a commitment to continue or begin using desired behaviors and stop using
undesirable ones (ownership). At the top of the hierarchy is "they see it, they say
it." This is the ideal in which crews recognize and analyze their own performance.
In the middle is "you help them see it, they say it." If crews are not able to
recognize what they did well and what they can improve, the facilitator can lead
them to self-analysis through questioning. Finally, at the bottom of the hierarchy
is "you help them see it, you help them say it." When crews are unable to
recognize or analyze their performance the facilitator must evaluate for them to
ensure that they understand what went well or poorly, and why.

A literature search conducted as part of this study revealed no studies that analyzed the specific needs and issues of LOFT debriefings in order to adapt the
general concept of facilitation to this specialized setting, which differs
substantially from most business settings. The training departments of many
airlines provide their instructors written guidelines; however, these guidelines
tend to be rather sketchy and most do not provide a detailed exposition of how to
use facilitation.

The general literature on facilitation in settings other than LOFT is also rather
sketchy. This is a trade literature rather than a scientific literature, and very little
empirical evidence is provided to support assertions, validate specific techniques,
or qualify the range of settings in which advocated techniques may be effective.
However, the general concept of facilitation has considerable face validity as a
way to encourage self-discovery by crew members. Both the adult learning
literature and the cognitive research literature suggest that self-discovery
improves learning, retention, and the ability to apply knowledge in diverse
settings.

According to the facilitation literature, adult learning is typically self-directed (Cornwell, 1979). In general, adults dislike long lectures; they learn best from discussions with peers; they need to integrate new knowledge with what they already know as professionals; they want to be told up front what is expected of them; and their self-esteem is directly affected by classroom discussion (Zemke & Zemke, 1981).

Active participation requires crew members to process information more deeply than listening passively to an instructor's critique does (see, for example,
Slamecka & Graf, 1978). Deeper processing leads to elaboration of the
information in memory and enables better retrieval from memory when it is
needed (Baddeley, 1990).

Facilitation can help individuals develop problem solving and critical thinking
skills (Gow & Kember, 1993). Research in several areas of expertise suggests
that individuals are better at solving problems and applying their knowledge in
diverse situations if they have a good metacognitive perspective of their technical
skills (see Metcalfe & Shimamura, 1994). Metacognition refers to knowledge of
one's own thought processes and the ability to keep track of what one is doing
while analyzing problems and managing tasks. Debriefings that emphasize self-
analysis and self-discovery help crews develop metacognitive skills for managing
cockpit situations. One could argue that the concept of metacognition is implicit in
the philosophy of CRM; for example, CRM teaches crews to establish priorities
and keep track of how they are managing their priorities during abnormal line
situations.

2.3 Techniques for Facilitation


Most of the techniques for facilitating group participation that are suggested in
the literature concern the use of introductions, active listening, questions, and
silence. The use of video recordings to enhance discussion is also discussed.

2.3.1 Introductions. An explicit introduction is necessary to clarify the role of the facilitator and the nature of the participation expected of the group (Casey,
Roberts, & Salaman, 1992; Nelson-Jones, 1992; Gibb, 1982). A good
introduction can also motivate the group to participate by providing a rationale for
the session.

2.3.2 Active listening. Good listening skills enable the facilitator to work with
what the participants are saying and to encourage further participation. Active
listening shows that the facilitator is attending to the speaker, understands what
is being said, and wants to hear more. Active listening can range from a simple
"uh-huh" or "okay" to echoing or reflecting in one's own words what a speaker is
trying to communicate.

2.3.3 Questions. According to the Socratic method, learning is facilitated by questioning, encouraging exploration, and pushing for explanation; not by
lecturing and telling the answers (Casey et al., 1992). "Can you give me a
specific example?" "How did you and the other person actually behave?" and
"What were your thoughts in the situation?" are examples of questions that can
aid self-assessment (Nelson-Jones, 1992). Mills and Roberts (1981) assert that,
ideally, questions should be brief; open (i.e., non-restrictive, don't imply opinion
or judgment); and begin with who, where, and when for factual responses or
what, how, and why for more in-depth and detailed answers.

The use of probing questions encourages active and in-depth participation. Probing questions that ask participants to explain and justify their responses
have been reported to be particularly effective (Jacobsen, Eggen, & Kauchak,
1989). Mills and Roberts (1981) identified seven types of probes that encourage
continued participation: non-verbal (e.g., a nod); short verbal ("Uh, huh?"); "W"
words (especially what, how, and why); statements such as "Tell me more.";
echoing of participant words; reflection of what the participant said with different
words but the same meaning; and specialized reflections that imply more than
stated by the participant. (Also, see Eitington, 1986.)

2.3.4 Silence. Sometimes group participants do not respond immediately to a leader's question. Most people find silence in a group setting uncomfortable, and leaders often allow no more than a one-second pause before rephrasing a
question or answering it for the group. However, one second may not be long
enough for participants to formulate a thoughtful response. Studies show that
waiting three to four seconds substantially improves both the number and quality
of responses (Rowe, 1986; Jacobsen et al., 1989). The longer pause elicits
longer, more confident responses from the group, as well as more numerous
voluntary observations, participant interactions, and participant questions.
Furthermore, responses from slower participants increase, speculative
responses and evidence-inference statements increase, and failures to respond
decline (Ornstein, 1990; Rowe, 1974).

2.3.5 Videos. Most airlines videotape the LOFT. Although the use of video is not
a facilitation technique per se, it can aid facilitation. Instructors select segments
of the videotape to show during the debriefing to help the crew observe and
discuss their performance. The video can help the crew view their performance
from a third-party perspective (FAA, 1995); it may also help the crew remember
what happened.

The literature cited above provides examples of facilitation techniques and a rationale for using them, but unfortunately provides little in the way of detailed,
practical guidance for using these techniques in particular group settings and
integrating the techniques into the overall management of a session. In order for
these techniques to be used effectively in LOFT debriefings, they must be
adapted to the particular characteristics and demands of these debriefings.

2.4 Research Questions


Although the concept of facilitated debriefings is widely espoused in the CRM
literature, little empirical research has examined what actually happens in
debriefings. This study attempts to answer five major questions:

1) To what extent do instructors attempt to facilitate crew participation and self-discovery in LOFT debriefings?

2) What techniques do instructors use to facilitate and how effective are these
techniques?

3) Is facilitation a viable approach to encouraging crew participation and self-discovery?

4) What is the character of crew participation, especially in terms of analyzing and evaluating their own performance?

5) How much variation occurs among instructors and among airlines in the
conduct of debriefings?
3.0 METHODS
3.1 Participants
Thirty-nine LOFT debriefings conducted at five major U.S. airlines between June
1994 and May 1995 were observed. All five airlines are large, well-established
national companies; four are passenger airlines and one is a cargo company. At
each of the airlines the first author observed four to eleven debriefings. (At the
first company visited, a second research observer was also present at the
debriefings and interviews.) The training department managers who arranged the
observations were asked not to preselect which instructors and crews would be
observed; rather, the selection was driven by the schedules of who was
instructing during the three to five days each airline was visited. The observed
debriefings represented all or most of the fleets operated by each airline, and at
least one LOFT simulation of each scenario flown in each fleet was observed.
Generally, one debriefing was observed per instructor and crew; however, four of
the instructors were observed debriefing a second crew for the purpose of
comparison.

Permission to attend the debriefing and to audio tape the session was obtained
from each instructor and each crew member, and assurance was provided that
all data collected would be completely deidentified to assure anonymity for all
participants.

3.2 Procedures
Prior to observation of the debriefings, the written scenarios for each LOFT were
reviewed and managers in the CRM departments were interviewed. After each
debriefing the instructor was interviewed and asked to rate the crew's CRM
performance and technical performance on separate five-point Likert scales
ranging from poor (1) to exemplary (5). Instructors were also asked for comments
about the debriefing process.

The audio recordings of 36 of the 39 debriefings were transcribed into text in their
entirety and all references to individuals and organizations were deleted. (Two of
the recordings were not sufficiently intelligible for transcribing and the tape
recorder failed during another debriefing.) Of the 36 debriefings that were
transcribed, 25 were from two-person crews, and 11 were from three-person
crews (Table 1).

3.3 Measures
3.3.1 Descriptive measures. Each instructor and crew utterance was coded for
nine factors and the coding was checked during data entry. (The factors and the
coding rules are described in Appendix A.) From these nine factors 72 utterance
variables were calculated (see Appendix B). Data were also extracted on the
instructors' use of videotapes to illustrate the crews' performance in the LOFT,
including the number of video segments played for crew discussion, the length of
the segments played, and the extent to which the segments were discussed. The
above data will be referred to as "descriptive" to distinguish them from the data
generated using the Debriefing Assessment Battery described below.

3.3.2 Debriefing Assessment Battery. The Debriefing Assessment Battery was developed to systematically characterize instructor effectiveness at facilitation
and the nature of crew participation in debriefings (Appendix C). This battery
provides subjective rating scales on several dimensions, with appropriate
anchoring (Appendix D), and can be used by raters who have experience in
CRM. McDonnell (1995) provides a detailed description of the development and
validation of the battery. The battery was based on the adult learning and
facilitation literature, existing rating scales by M. M. Connors (1995) and R. H.
Moos (1994), face valid assumptions of what constitutes good facilitation, and the
airline industry's guidance to their instructors on how to facilitate LOFT
debriefings. The battery incorporates a seven-point Likert scale ranging from
poor (1) to outstanding (7).

The battery contains 28 items grouped into seven composite categories consisting of four items each. Five of the categories rate the instructor while the
remaining two rate the crew. The five instructor categories are Introduction
(letting the crew members know what is expected), Questions (to focus on topics
and elicit crew participation), Encouragement (the degree to which the instructor
encourages and enables the crew to participate actively and deeply), Focus on
Crew Analysis & Evaluation (getting the crew to analyze and evaluate their own
performance), and Use of Videos (to remind the crew of what happened in the
LOFT and provide a springboard for discussion). The video is not part of
facilitation per se but its use is an important part of the overall structure of the
debriefing. Items in the two crew categories (Crew Analysis & Evaluation and Depth of Crew Activity) were designed to correspond closely with items in the
instructor categories.

Two of the authors independently rated the instructors and crews from each of
the debriefing sessions after listening to the audiotape of each session while
reading the verbatim transcript. For each of the first 10 debriefings, the ratings on
the individual battery items were compared and discussed before rating the next
debriefing. During each discussion, if either believed any ratings needed to be
changed based on issues raised by the other, the scores were revised
accordingly, although no effort was made to reach consensus on each item. For
the remaining 26 debriefings, ratings were not systematically discussed.

Interrater reliability was determined by calculating Pearson correlation coefficients for the two raters' initial scores for each of the seven battery
categories before discussion or any revision of scores. Pearson interrater
reliability coefficients ranged from .73 to .91 for the seven categories of the
battery (Table 2).

Aside from reliability coefficients, data from the battery are based on the average
of the two raters' scores for each item. Composite scores for each of the five
instructor and two crew categories were calculated by averaging the scores for
the four items in each category.
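
As an illustration of these two calculations, the short Python sketch below computes a Pearson interrater reliability coefficient for each battery category and then forms rater-averaged item scores and a composite category score. The data layout, names, and values are hypothetical; only the procedure follows the description above.

# Illustrative sketch only; the ratings shown are invented placeholder values.
from scipy.stats import pearsonr

# Initial category scores from each rater, one value per debriefing,
# before any discussion or revision of scores.
rater_a = {"Introduction": [1.0, 2.5, 1.5, 3.0], "Questions": [5.0, 3.5, 2.0, 4.5]}
rater_b = {"Introduction": [1.5, 2.0, 1.0, 3.5], "Questions": [5.5, 3.0, 2.5, 4.0]}

# Interrater reliability: Pearson correlation of the two raters' initial
# scores, computed separately for each battery category.
for category in rater_a:
    r, _ = pearsonr(rater_a[category], rater_b[category])
    print(f"{category}: interrater r = {r:.2f}")

# For all other analyses, each item score is the average of the two raters'
# scores, and a category score is the average of its four item scores.
items_rater_a = [5.0, 4.5, 6.0, 5.5]   # one category, four items
items_rater_b = [4.0, 5.5, 5.0, 6.5]
averaged_items = [(a + b) / 2 for a, b in zip(items_rater_a, items_rater_b)]
composite = sum(averaged_items) / len(averaged_items)
print(f"composite category score = {composite:.2f}")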

3.4 Statistical Analyses


Differences among airlines were examined by one-way analysis of variance
(ANOVA). In cases in which the ANOVA showed significant differences among
the group of airlines, a Bonferroni post-hoc test was used to determine which
airlines differed significantly from the others. Differences between two and three-
person crews were examined by a t-test. Differences between crew members
(captain, first officer, and flight engineer) were examined by a Wilcoxon matched-
pairs test. Statistical calculations were based on the full set of 36 debriefings,
unless otherwise stated in the tables. For all tests significance was computed by
the two-tailed method, using an alpha of .05. Spearman rank-correlation
coefficients were calculated for all pairs of variables. Correlation coefficients are
referred to as "statistically significant" if p < .05. These findings should be
interpreted cautiously, however, because a large number of correlations were run, and roughly five percent of them would be expected to reach significance by chance (type I error) at the .05 alpha level.
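
For readers who wish to reproduce this style of analysis, the Python sketch below illustrates the tests named above (one-way ANOVA across airlines, a t-test between crew sizes, a Wilcoxon matched-pairs test between crew positions, and a Spearman rank correlation). All arrays contain invented placeholder values, not the study's data.

# Illustrative sketch only; numbers are hypothetical placeholders.
import numpy as np
from scipy.stats import f_oneway, ttest_ind, wilcoxon, spearmanr

# One-way ANOVA: does a variable (e.g., debriefing duration in minutes)
# differ among the five airlines?  One array per airline.
airlines = [np.array([25, 31, 40]), np.array([18, 22]), np.array([45, 60, 35]),
            np.array([28, 30, 33]), np.array([50, 41])]
F, p = f_oneway(*airlines)          # follow with a Bonferroni post hoc test if p < .05

# t-test: two-person versus three-person crews on the same variable.
t, p_t = ttest_ind(np.array([25, 31, 40, 28]), np.array([45, 60, 35]))

# Wilcoxon matched-pairs test: e.g., captain versus first officer percent
# participation, paired within each debriefing.
captain = np.array([21, 24, 18, 22, 20, 19])
first_officer = np.array([18, 20, 17, 21, 19, 16])
w, p_w = wilcoxon(captain, first_officer)

# Spearman rank correlation between a pair of variables; two-tailed,
# "statistically significant" if p < .05.
rho, p_rho = spearmanr(np.array([31, 45, 20, 60]), np.array([3.2, 2.5, 4.0, 2.0]))

print(F, p, t, p_t, w, p_w, rho, p_rho)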

Four instructors conducted two debriefings; thus, each of these four instructors
received two measurements for each of the variables associated with their
performance. These two measurements were averaged to obtain a single data
point (n = 32) for (i) calculation of means and standard deviations, and (ii) the
analysis described below. The means with duplicate instructors' scores averaged
(n = 32) are reported for scores on the Debriefing Assessment Battery. However,
since differences between the two methods of calculating the means were minor
for the descriptive variables, these means are reported for the full data set (n =
36).

Data from these four instructors were used to explore the question of whether the
large variability observed among instructors reflected stable differences among
the instructors. Five variables were selected for this analysis: session duration,
percent of group words uttered by the instructor, percent of instructor words
addressing CRM, percent of instructor words addressing crew performance, and
instructor scores on a composite QEF variable created by combining the
Questions, Encouragement, and Focus categories of the assessment battery.
For each of these variables the difference between the values for the two
debriefings given by the same instructors was obtained, providing a delta score.
The average of the delta scores for these four instructors was compared to delta
scores obtained by 448 random pairings among instructors who gave only one
debriefing.
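
The delta-score comparison can be illustrated with the following Python sketch. The instructor labels and values are invented, and only the logic of differencing repeat sessions and random pairings follows the description above.

# Illustrative sketch only; names and values are hypothetical.
from itertools import combinations
import numpy as np

# Value of one variable (e.g., session duration in minutes) for each debriefing.
repeat_instructors = {"A": (35, 28), "B": (22, 25), "C": (40, 44), "D": (30, 27)}
single_instructors = {"E": 31, "F": 18, "G": 55, "H": 26, "I": 42}

# Delta score for an instructor who gave two debriefings: the absolute
# difference between that instructor's two sessions on this variable.
repeat_deltas = [abs(x - y) for x, y in repeat_instructors.values()]

# Baseline: delta scores for pairings of instructors who each gave only
# one debriefing (the study used 448 random pairings).
paired_deltas = [abs(a - b) for a, b in combinations(single_instructors.values(), 2)]

print("mean delta, same instructor:", np.mean(repeat_deltas))
print("mean delta, random pairings:", np.mean(paired_deltas))
# A much smaller same-instructor delta (as found for the QEF composite)
# suggests the variable is a stable characteristic of the individual instructor.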

4.0 RESULTS
4.1 General Observations
At all five airlines most debriefings were not conducted immediately after the
LOFT. Instead, after a short break, the instructor and crew first returned to the
simulator to conduct about two hours of "batting practice" as rehearsal for the
proficiency check that would follow the next day. A few instructors, apparently on
their own initiative when scheduling allowed, reversed the order so they could
debrief the LOFT before batting practice.

At all airlines most debriefings followed the same general format. The instructor
would either give a very short introduction or no introduction at all, and then lead
discussion of segments of the LOFT in the chronological order in which they
occurred. Rarely did the instructor engage the crew in setting an agenda for
discussion, although some instructors invited general comments on the LOFT
before starting the discussion of specific segments. In the four airlines with video
equipment, the instructor generally used a video segment to begin the discussion
of related portions of the LOFT. A few instructors varied this general format; for
example, one instructor systematically went through the CRM categories
displayed on a wall poster, asking the crew to identify places in the LOFT in
which they had employed each category.

For most variables large differences occurred among debriefings within each
airline. For some variables substantial differences also occurred in the average
values between airlines, although in most cases the within-airline variability
prevented the differences between airlines from being statistically significant.

4.2 Descriptive Data


The average duration of the debriefings was 30.7 minutes (Table 3), with a range
of 8 to 82 minutes. Duration was negatively correlated with instructors' ratings of
crews' CRM performance (r = -.49, p < .01) and technical performance (r = -.39,
p < .05) and positively correlated with the proportion of instructors' words directed
to negative aspects of crew performance or ways to improve (r = .51, p < .01).
This suggests that instructors spent somewhat more time with crews that had
more problems.

Across airlines, instructors' ratings of crew performance averaged 3.6 (SD = .90)
for CRM and 3.5 (SD = .89) for technical on a 1 to 5 scale in which 1 = poor, 3 =
average, and 5 = exemplary. No statistically significant differences were found
among airlines.

4.2.1 Participation. With two-person crews instructors (IPs) did an average of 61% of the talking, captains (CAs) 21%, and first officers (FOs) 18% (Table 4).
Instructors participated significantly more than any of the crew members and the
difference in participation between captains and first officers, though small, was
also statistically significant. With three-person crews instructors did 49% of the
talking, captains 20%, first officers 19%, and flight engineers (FEs) 13%. As with
two-person crews, the amount of participation by instructors was significantly
greater than any of the crew members. Though there were no significant
differences in participation between captains and first officers in the three-person
crews, the difference between first officers and flight engineers was statistically
significant. While the percentage of participation was much higher for instructors
than for crew members on average, the percentage of participation varied
substantially among instructors; for example, the percentage of talking by
instructors with two-person crews ranged from 35 to 85%.

The percentage of the talking done by instructors was negatively correlated (p <
.01) with the percentage of the talking done by each category of crew member
(CA: r = -.62; FO: r = -.83; FE: r = -.77). In contrast, the percentage of talking by
captains was not significantly correlated with the percentage of talking by first
officers or flight engineers, but the percentage of talking by first officers was
positively correlated with the percentage of talking by flight engineers (r = .68, p <
.05).

4.2.2 Content of discussion. The average percentage of words directed to CRM topics by instructors varied from 19% to 64% among the five airlines (Table 5). The percentage directed to CRM by crews varied from 25% to 68%. The average
percentage of crew discussion directed to CRM mirrored the percentage of
instructor discussion directed to CRM at each airline. At most of the airlines,
CRM topics occupied substantially more of the discussion than did technical
topics.

On average, 41% of instructor words and 52% of crew words were directed to the
performance of the crew in the LOFT (Table 6). Instructors emphasized positive
aspects of crew performance (18%) over negative aspects (3%) and ways to
improve performance (4%). Most of the crews' words concerning performance
were neutral descriptions of what they did (33%), compared to positive aspects
(8%), negative aspects (6%), and ways to improve (5%).

The content of the crews' remarks mirrored the content of the instructors'
remarks. The percentages of crew words directed to discussion of CRM,
technical, positive performance, negative performance, and ways to improve
performance were all significantly positively correlated with the percentages of
instructor words directed to these topics (Tables 7a and 7b).
4.2.3 Instructor questions. Most instructors asked a large number of questions,
averaging 48 per hour among two-person crews (Table 8a). Among two-person
crews, 60% of these questions were directed to specific crew members. Similar
results were observed with three-person crews (Table 9a). No significant
differences were found in either the proportion of questions directed to each crew
member or in the proportion of non-directed questions answered by each crew
member (Tables 8b & 9b), although the proportion answered by the flight
engineer was substantially lower, falling just short of statistical significance (p <
.06).

4.2.4 Interruptions. Instructors frequently interrupted crew comments. The average number of interruptions per hour by instructors was 26 (SD = 16).
(Active listening interjections were not counted as interruptions. See Appendix A
for coding rules.) Twenty-one percent (SD = 13%) of all crew utterances
(excluding S statements, defined below) were interrupted by the instructors, and
12% (SD = 8.7%) of all crew utterances were interrupted and never completed.
No statistically significant differences in these variables were found among the
airlines. Neither variable (percent of utterances interrupted or percent of utterances interrupted and not completed) was significantly correlated with descriptive
measures of crew participation (percent crew participation, number of crew
analyzing utterances per hour, number of crew words per response, and number
of crew S1 words/hour) or crew variables measured by the Debriefing
Assessment Battery.

4.2.5 Videos. On average, instructors showed 8.8 (SD = 5.0) video segments per
hour, each averaging 150 (SD = 113) seconds in duration. No significant
differences were found among airlines.

4.2.6 Crew participation. Crew utterances were categorized as questions (Q); responses to instructor or crew questions (R); statements that add content to the
discussion (S1); or other statements (S), most of which were concerned with
maintenance of discourse (e.g., "I see what you mean"). Responses accounted
for 44% of all crew words and S1 statements accounted for 45% (Table 10). The
distribution of the number of utterances among these four categories differed
from the distribution of number of words because S statements were typically
much shorter than the other three categories. The pattern of distributions among
categories was similar among airlines.
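
The word and utterance shares reported in Table 10 reduce to simple counting over the coded transcript. The Python sketch below illustrates the calculation on an invented sample of coded crew utterances.

# Illustrative sketch only; the coded utterances are invented examples.
from collections import Counter

# Each crew utterance is represented as (code, word count).
utterances = [("R", 42), ("S", 4), ("S1", 55), ("Q", 9), ("R", 30), ("S", 3)]

word_totals = Counter()
utterance_totals = Counter()
for code, n_words in utterances:
    word_totals[code] += n_words
    utterance_totals[code] += 1

total_words = sum(word_totals.values())
total_utts = len(utterances)
for code in ("R", "S1", "S", "Q"):
    print(f"{code}: {100 * word_totals[code] / total_words:.0f}% of words, "
          f"{100 * utterance_totals[code] / total_utts:.0f}% of utterances")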

On average, individual crew members asked about six questions per hour. To
analyze the character of crew questions, the set of all crew questions from
airlines Y and Z (n = 98) was divided into three categories. Proactive questions
address the content of the debriefing, raising new issues or bringing new
information into the discussion (e.g., Did you realize I had not finished the
checklist?). Reactive questions respond to a prompt without adding new
information, usually to disambiguate what was said or meant (e.g., Do you mean
the taxi checklist or the predeparture checklist?). Miscellaneous questions are
generally extraneous (e.g., "Do I have time for a coke?") or meta-conversational
(e.g., "You know what I mean?").

Thirty-five percent of crew questions were proactive, 34% were reactive, and
30% were miscellaneous (Table 11). Sixty percent of the proactive questions addressed CRM, technical, or mixed topics, compared with only 12% of the reactive questions and 7% of the miscellaneous questions.

A few significant differences occurred among airlines in the number of proactive questions asked, but at all five airlines the number of proactive questions by crew
members was small (Table 12). No significant differences were found in the
number of proactive questions asked by captains, first officers, and flight
engineers.

Three other measures of crew participation were also examined: the number of
analyzing utterances per hour, the number of words per utterance, and the
number of words per response to the instructor's questions (Table 13). Analyzing
utterances were defined as those that go beyond simple description of events
and actions to examine underlying factors and how those factors influenced the
outcome (see coding rules in Appendix A). The number of analyzing utterances
per hour averaged 6.2 (SD = 4.7), with no significant differences among airlines
or among the three crew member positions. The number of words per utterance
and the number of words per response averaged 22 (SD = 10) and 30 (SD = 17),
respectively, with no significant differences among airlines or among the crew
member positions.

In general, discussion in the debriefings revolved around the instructor, even when the instructor got the crew to do most of the talking. Direct back-and-forth
discussion between crew members was infrequent. To explore this aspect
quantitatively, sequences of utterances by crew members were examined (Figure
1). Debriefings were analyzed in terms of blocks of crew utterances, each block
beginning with the first crew utterance after an instructor utterance and
continuing until the instructor spoke again. These blocks were mostly very short;
80% of them consisted of only one utterance by a crew member before the
instructor spoke again; thus, in these blocks there was no crew interaction at all.
Only 5% of the blocks contained four or more utterances by crew members.
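
This block analysis amounts to counting runs of consecutive crew utterances between instructor utterances. The Python sketch below illustrates the idea on an invented speaker sequence.

# Illustrative sketch only; the speaker sequence is invented.
from collections import Counter

# Sequence of speakers in one debriefing (IP = instructor).
speakers = ["IP", "CA", "IP", "FO", "CA", "IP", "FO", "IP", "CA", "FO", "CA", "IP"]

block_lengths = []
current = 0
for speaker in speakers:
    if speaker == "IP":
        if current:                 # an instructor utterance closes a crew block
            block_lengths.append(current)
        current = 0
    else:
        current += 1                # a crew utterance extends the current block
if current:
    block_lengths.append(current)

distribution = Counter(block_lengths)
total_blocks = len(block_lengths)
for length, count in sorted(distribution.items()):
    print(f"blocks of {length} crew utterance(s): {100 * count / total_blocks:.0f}%")
# In the study, 80% of blocks contained a single crew utterance, i.e., no
# direct crew-to-crew exchange before the instructor spoke again.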

4.3 Debriefing Assessment Battery


4.3.1 Scores. Average scores for instructor Questions, Encouragement, Focus,
and Use of Videos and for crew Analysis & Evaluation and Depth of Activity fell
close to 4, or adequate (Table 14). Scores for instructor Introduction were much
lower, averaging 1.6, which falls between poor and marginal. No significant
differences were found among airlines in any category.
The instructors' battery scores on use of Questions, Encouragement, and Focus
were distinctly bimodal, with one mode peaking around 2 (marginal) and the
other between 5 (good) and 6 (very good). Table 15 and Figure 3 show these data
for the five airlines combined. The separate data for four of the five airlines
showed the same general bimodal pattern. In contrast, airline Y scores were all
distributed around the higher mode and showed substantially less variance than
did the scores of the other four airlines on these three variables. Scores for the
two categories of crew participation at each airline also showed bimodal
distributions similar to the distributions of instructor scores.

4.3.2 Correlations. Crew scores on Analysis & Evaluation and Depth of Activity
were significantly positively correlated with instructor Questions, Encouragement,
and Focus, with coefficients ranging from .51 to .78 (Table 16 and Figure 2).
Instructor Introduction and Use of Videos were not significantly correlated with
crew scores on the battery. However, the third item in the Introduction category
was significantly positively correlated with Crew Analysis & Evaluation (r = .45, p
< .006), and the third item in the Use of Videos category was significantly
positively correlated with Crew Analysis & Evaluation (r = .45, p < .02) and fell
just short of significant positive correlation with Depth of Activity (r = .38, p <
.055).

The five instructor categories were significantly positively intercorrelated with each other (Table 17). In particular, use of Questions, Encouragement, and
Focus were highly intercorrelated. The two crew categories were also
significantly positively intercorrelated (r = .87, p < .01).

4.3.3 Effect of introductions. The ten debriefings for which the instructor
Introduction scores were 1.0 (the lowest possible score) and the nine debriefings
for which the Introduction scores were the highest (ranging from 1.8 to 4.9) were
analyzed further. Crew Analysis & Evaluation scores for the latter group were
significantly higher than for the former group (Table 18). No significant difference
between the two groups was found for Depth of Activity.

4.4 Correlations Between Battery and Descriptive Variables
4.4.1 Instructor battery with instructor descriptive. The correlations between
the five instructor battery variables and seven instructor descriptive variables
pertaining to how the instructor conducted the debriefing were examined (Table
19). The Introduction category was significantly positively correlated with number
of directed questions, total number of questions, and percent of instructor words
addressing CRM. The Questions category was significantly positively correlated
with number of directed questions, total number of questions, and percent of
instructor words addressing CRM and was significantly negatively correlated with
percent participation by instructor and instructor words per utterance.
Encouragement and Focus showed a pattern of correlation similar to that of
Questions. Use of Videos was significantly positively correlated with percent of
instructor words addressing CRM.

4.4.2 Instructor battery with crew descriptive. The correlations between the
five instructor battery variables and seven crew descriptive variables involving
the nature of crew participation were examined (Table 20). The Introduction
category was significantly positively correlated with crew words per utterance,
words per response, and percent CRM. Encouragement was significantly
positively correlated with crew percent participation, words per utterance, words
per response, self-initiated words, analyzing utterances, and percent CRM.
Questions and Focus showed a pattern of correlations similar to that of
Encouragement, except that the correlations with words per response and self-
initiated words were smaller and not statistically significant. The Use of Videos
category was significantly positively correlated with percent CRM only.

4.4.3 Crew battery with crew descriptive. Table 21 displays the correlations
between the two crew battery categories and the seven crew descriptive
variables. Both Analysis & Evaluation and Depth of Activity were significantly
positively correlated with all seven descriptive variables except proactive
questions.

4.4.4 Instructor descriptive with crew battery and descriptive. The correlations between six instructor descriptive variables and a number of crew
descriptive and battery variables were examined (Table 22). The percent of all
speakers' words uttered by the instructor (i.e., percent instructor participation)
was significantly negatively correlated with the crew variables: percent
participation, words per utterance, S1 statements, analyzing utterances,
proactive questions, Depth of Activity, and Analysis & Evaluation. Instructor
words per utterance showed the same pattern of negative correlations, except
there was no significant correlation with crew words per utterance. Number of
directed questions per hour was significantly positively correlated only with
percent of crew words addressing performance, and number of non-directed
questions was not significantly correlated with any of these crew variables. The
percent of instructor words addressing performance was significantly positively
correlated with percent of crew words addressing performance and significantly
negatively correlated with crew proactive questions. The percent of instructor
words addressing CRM was significantly positively correlated with crew words
per utterance, words per response, percent of crew words addressing CRM, and
Crew Analysis & Evaluation. For most variables with which a significant
correlation occurred for the crew as a whole, significant correlations also
occurred for each crew member position separately (Appendix E lists the
intercorrelations among all variables).

4.5 Instructor Differences


The delta score is a measure of how much two debriefings differ on a given
variable. The delta scores for the four instructors who gave two debriefings were
not significantly different from the delta scores for randomly-paired instructors for
duration, percent CRM, or percent performance (Table 23). Instructor scores on
the battery's Questions, Encouragement, and Focus categories were combined
to create a QEF variable. For the QEF variable, the delta score of instructors who
gave two debriefings was 34% of the delta score of randomly-paired instructors (t
= -4.14, p < .005).

5.0 DISCUSSION
The five companies studied appear to be representative of large, well-established
U.S. airlines. Although some differences occur, debriefings at these five
companies show many common patterns. These findings, however, may not be
representative of smaller, regional, or newly-started airlines, some of which have
not developed CRM and LOFT programs to the extent that major airlines have.

The large variability observed among instructors at each airline has important
implications. For some variables the average values differed substantially among
some of the airlines, although given the large variability, few of these differences
were statistically significant. At airlines W and X, only four and five debriefings,
respectively, were observed because not many LOFT sessions were run during
our visits. With this small sample size and the variance observed, the standard
errors for some of the mean values are large; thus, especially for these two
airlines, the representativeness of these mean values is uncertain.

For the reasons discussed above, one cannot conclude from these data whether
real differences exist among the airlines on most dimensions (one major
exception is emphasis on CRM, discussed below). What is clear is that individual
instructors at each airline differed enormously in their effectiveness as facilitators
and in their emphasis on CRM topics and crew participation. This large variability
within all five airlines overshadows any differences that might exist among the
airlines. This finding reveals an urgent need for additional training and
standardization within each airline (see section 5.4).

Some of the apparent variability among instructors may actually be within-instructor
variability. For three descriptive variables that might seem
characteristic of an instructor's approach (duration of debriefing, percent
participation by instructor, and percent instructor words directed to CRM), as much
variability was found between the two sessions given by the same instructor as
between randomly-paired sessions given by different instructors. These results
should be interpreted with great caution because of the small sample size (only
four instructors conducted two debriefings), but they suggest that individual
instructors may vary on these dimensions as a function of crew performance,
external constraints on time, or unidentified factors. In contrast to the descriptive
variable results, a direct measure of facilitation (combined scores for Questions,
Encouragement, and Focus) showed much less variability between sessions
given by the same instructor. Thus facilitation effectiveness may be a fairly
consistent characteristic of the individual instructor.

On several occasions crew members spontaneously volunteered that they had
trouble remembering relevant aspects from the LOFT. The common practice of
delaying the debriefing two hours or more until after the batting practice may
have contributed to this memory difficulty. Performing the batting practice
maneuvers, in the same cab as the LOFT and under similar conditions, is likely
to interfere with the memory of the preceding LOFT. Unfortunately, we have no
data addressing how much this practice interferes with the crews' memory, but
we suspect it is not trivial and suggest that the issue be studied empirically.

No industry standards exist with which to compare our observations on
descriptive variables such as duration of sessions, percent discussion devoted to
CRM and crew performance, how much of the talking is done by the instructor,
etc. However, we discuss these variables below in terms of our own subjective
impressions of how consistent the observed values are with objectives stated in
the airlines' internal publications and with guidelines such as AC 120-35C (Line
Operational Simulations).

5.1 Descriptive Variables


5.1.1 Duration. Most debriefings were fairly short: 31 minutes on average,
including time spent watching videos (typically about 1/3 of the total session was
spent watching video segments). It was clear that a half-hour session allowed the
group to discuss only a few examples of the crew's performance, and often did
not provide adequate time for in-depth analysis. Given all that occurs in a typical
LOFT lasting over two hours and the importance of deep analysis of what
happened and how the crew managed the situations confronting them, it seems
highly desirable to spend more than 31 minutes on debriefing. Although these
data do not indicate what duration would be optimal, a thorough discussion was
often accomplished in debriefings lasting about an hour. Instructors do need to
vary the length of the session according to the training needs of the crew, but the
10-fold range of duration observed in this study is clearly problematic.

Instructors who rated the crews' LOFT performance as high tended to conduct
shorter debriefings. In post-debriefing interviews, some instructors indicated that
they feel there is less to discuss with a crew that has performed well and that
they wanted to avoid "nit-picking" good performance. We suspect this attitude
may shortchange high-performing crews. It
is important for these crews to analyze why things went well in order to help them
make explicit the factors and behaviors that led to success. These behaviors may
have been intuitive and may have depended on the compatibility of the particular
two or three crew members involved. In order to take the lessons learned back to
the line and apply then in situations in which the crew may not be so compatible,
it would be helpful for the crew members to explicitly discuss what makes certain
behaviors effective. Also, even high-performing crews need a chance to practice
the as yet infrequently used skill of self-debriefing.

5.1.2 Content. Substantial, statistically significant differences occurred among
the airlines in the percent of discussion devoted to CRM, which may reflect
differences in company training philosophy. At all but one of the five airlines,
CRM topics occupied more of the discussion than technical topics. This
emphasis is appropriate to the goals of LOFT. Very large differences also
occurred among instructors within each airline; at one airline, for example, CRM
ranged from 6 to 75% of instructor words. It is not clear whether these
differences reflect different attitudes among the instructors toward CRM or
indicate that individual instructors spent more time on technical topics when they
perceived a crew to be deficient in technical knowledge or skills. However, the
fact that the instructors' relative emphasis on technical topics was not correlated
with their ratings of the crews' technical performance argues against the latter
interpretation, or at least suggests that it is not the dominant factor. Regardless,
a debriefing in which CRM topics plus mixed (CRM and technical combined)
topics occupy less than a third of the discussion seems inappropriate.

Discussion of the crews' LOFT performance was appropriately emphasized in the
debriefings, accounting for roughly half of instructor and crew words, on average.
This figure was fairly consistent across airlines. A good portion of the instructors'
comments on performance was positive, which is consistent with the objective
of reinforcing the crews with positive feedback. In contrast, only a very small
percentage of the discussion by instructors and crews was directed to
problematic aspects of crew performance or ways to improve performance, even
though instructors tended to hold longer sessions for crews whose LOFT
performance they rated as lower. This lack of emphasis seems inconsistent with
the objectives of LOFT.

The content of the instructors' utterances and the content of the crews'
utterances were highly correlated along most dimensions examined. Although
correlation does not necessarily imply causality, our subjective impression is that
the general content and emphasis of the debriefings was driven almost
exclusively by the instructors. This impression is supported by the pattern of
discourse, discussed below.

5.1.3 Instructor characteristics. Instructors generally talked substantially more
than any of the crew members, averaging 61% of the words in debriefings of two-
member crews and 49% of the words in debriefings of three-member crews.
(However, the range of this variable was striking: among debriefings of three-
member crews, one instructor did 17% of the talking and another instructor did
87% of the talking.) The total amount of talking by all crew members combined is,
by definition, the amount not done by the instructor and thus the two variables
are forced into perfect negative correlation. However, the fact that the amount of
talking done by the instructor is also significantly negatively correlated with the
amount done by each crew member separately suggests that too much talking by
the instructor discourages participation by the crew members. Consistent with
this inference, the amount of talking done by the instructors was significantly
negatively correlated with other measures of crew participation: words per
utterance, number of S1 statements, number of analyzing utterances, number of
proactive questions, depth of crew activity, and extent of analysis and evaluation
by the crew. (Number of S1 statements, number of analyzing utterances, and
number of proactive questions contribute to the percent crew participation and
thus inherently have some degree of correlation. These results should be
interpreted cautiously.) The average length of utterances by the instructors
showed a similar pattern of negative correlation with measures of crew
participation, suggesting that long monologues by the instructor discourage crew
participation.

One might wonder if the percent of participation by the instructor might be driven
by the crew; an instructor might be forced to do more of the talking if he or she
tried unsuccessfully to induce the crew to participate substantially. However, the
data suggest otherwise: the battery variable Encouragement was strongly
negatively correlated with percent instructor participation, which is not consistent
with instructors resorting to lecturing only after seriously attempting to facilitate
crew participation. Also, our subjective impression is that instructors seemed
predisposed to whatever level of facilitation they used.

The large number of questions asked by most instructors suggests that they are
attempting to elicit crew participation. The number of questions asked by
instructors was not significantly correlated with any measures of crew
participation, but this might reflect a limitation of the across-subjects design of
this study. An instructor might increase the participation of a given crew by
asking more questions, but this may be confounded by the possibility that
instructors increase the number of questions they ask when they encounter a
crew that participates inadequately. The crew prone to low participation may
increase its activity in response to questions but still may remain below average.

The battery category Questions, which addresses the way in which instructors
ask questions and takes into account the crew with which the instructor is
confronted, appears to be a much more useful measure than the simple number
of questions the instructors ask. Instructors' scores on the battery category were
significantly positively correlated with several descriptive measures of crew
participation and both battery categories of crew participation.

In all debriefings observed, the discussion revolved primarily around the
instructor, even when the instructor encouraged the crew to do most of the
talking. Direct back-and-forth discussion among crew members was rare; most of
the time the pattern was instructor utterance, crew member utterance, instructor
utterance.
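
One way to make this pattern concrete is to count runs of consecutive crew utterances between instructor utterances, as in the minimal sketch below (our illustration, not the study's coding software; speaker codes follow the transcripts). A run of two or more crew utterances indicates direct crew-to-crew exchange; this is the measure charted in Figure 1.

# Count runs of consecutive crew (non-IP) utterances between IP utterances.
from itertools import groupby

def crew_runs(speakers):
    """speakers: ordered speaker codes for one debriefing, e.g. ['IP', 'CA', 'FO', ...]."""
    return [len(list(g)) for is_crew, g in groupby(speakers, key=lambda s: s != "IP") if is_crew]

runs = crew_runs(["IP", "CA", "IP", "FO", "CA", "FO", "IP", "CA"])
print(runs, sum(1 for r in runs if r >= 2))   # [1, 3, 1] 1 -> one crew-to-crew exchange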

Many instructors frequently interrupted crew utterances, and in many cases the
crew members never completed their comment after the interruption.
Surprisingly, the frequency of interruption was not correlated with any of the
descriptive or battery measures of crew participation. Nevertheless, it is hard to
believe that crew members find frequent interruptions encouraging.

5.1.4 Crew characteristics. Two important dimensions of crew participation are
proactivity and analysis of LOFT performance. The descriptive variables do not
directly measure these dimensions but do shed some light on them. One might
expect a proactive participant to ask a lot of questions and to initiate topics and
issues. However, crew members asked very few proactive questions. On the
other hand, crew members' words were evenly divided among direct responses
to the instructor and S1 statements (i.e., crew-initiated utterances that add
substantively to the conversation). Upon further examination, though, it was
found that even these S1 statements mainly address topics initially raised by the
instructor. In general, most crew members were willing participants who
responded readily to the instructor but showed little evidence of proactivity in the
sense of taking responsibility for the direction of the debriefing.

On average, individual crew members made only about six utterances per hour
that were characterized as "analyzing". For coding purposes the definition of
"analyzing" was necessarily arbitrary, and other definitions might have yielded
numbers substantially larger or smaller. Nevertheless, this rough characterization
suggests substantial room for improvement toward one of the major goals of the
debriefing.

Participation by captains and first officers was very similar, as measured by
percent participation, number of non-directed questions answered, number of
proactive questions asked, words per utterance, words per response, number of
S1 words, and number of analyzing utterances. (However, among two-person
crews the percent participation by captains was slightly but significantly greater
than that by first officers.) On the same variables, flight engineers were generally
lower than either captains or first officers, although the only difference that
reached statistical significance was that between first officers and flight engineers
on percent participation.

5.2 Debriefing Assessment Battery


5.2.1 Battery characteristics. The descriptive variables provide useful
information about debriefings but are not by themselves adequate to characterize
instructor use of facilitation or the nature of crew participation. The Debriefing
Assessment Battery was developed to provide a deeper characterization of
instructor and crew performance. It is designed to be used by raters with a
substantial background in CRM and a general understanding of the principles of
facilitation. High interrater reliability was obtained on all categories of this battery
with only a moderate amount of practice.
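
For illustration, interrater reliability of the kind reported in Table 2 can be computed as a simple Pearson correlation between the two raters' scores on the same debriefings. The sketch below uses invented scores and is not the analysis code used in the study.

# Pearson's r between two raters' scores on the same set of debriefings.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rater_1 = [4.0, 2.5, 5.0, 3.0, 6.0, 4.5]   # e.g., Encouragement scores from rater 1 (invented)
rater_2 = [4.5, 2.0, 5.0, 3.5, 5.5, 4.0]   # the same debriefings scored by rater 2 (invented)
print(round(pearson_r(rater_1, rater_2), 2))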

In contrast to reliability, it is difficult to establish the validity of the battery because
no standard exists with which to compare it. However, the battery does have a
certain amount of face validity in that the items address behaviors generally
agreed upon as necessary for facilitation. Also, the items were worded explicitly
in terms of the general objectives commonly stated for LOFT debriefings. The
results discussed below suggest that, in general, the battery does measure what
was intended.

5.2.2 Scores and correlations. Scores on three of the instructor categories
(Questions, Encouragement, and Focus) were highly predictive of scores on the
two categories of crew participation. The ability to explore the predictive power of
the Introduction category was severely limited because of the small variation of
instructor scores on this variable; most scores fell on the lowest value possible.
However, crews scored significantly higher on Analysis & Evaluation in those few
debriefings in which instructors gave at least a minimal introduction. Also,
Introduction scores were significantly positively correlated with crew words per
utterance, words per response, and percent CRM. These data plus the reasons
discussed in the beginning of this paper suggest that a thorough and explicit
introduction is likely to have a substantial effect, although this issue requires
further study.

Strictly speaking, the use of the video of the crews' LOFT performance is not a
component of facilitation, but it is widely regarded as an important
tool that can help the crews understand their performance. The nature of the data
(transcribed audio tapes of the debriefing) limited the types of items that could be
used to assess the instructors' Use of Videos. For example, what may be one of
the most important aspects of the video clips, their content, could not be
measured. The items in Use of Videos showed little predictive power for any
aspect of crew performance except percent CRM, and this correlation may only
reflect the fact that instructor scores on Use of Videos were fairly strongly
correlated with instructor percent CRM. Thus we are inclined to delete this
category from the battery.

Instructor scores on Questions, Encouragement, and Focus were moderately
correlated with various descriptive measures of crew participation. Similarly,
instructor scores on the battery were correlated with some descriptive measures
of instructor behavior, and crew scores on the battery were correlated with most
of the descriptive measures of crew behavior that seemed pertinent. The
descriptive measures themselves provide at best a partial and largely indirect
characterization of instructor and crew participation, so the most one could say is
that the patterns of correlations are consistent with the battery measuring what is
intended. For example, as would be expected, crew Depth of Activity was
somewhat more strongly correlated with percent crew participation than Analysis
& Evaluation was. Conversely, crew Analysis & Evaluation was more strongly
correlated with percent crew CRM than Depth of Activity was.

The battery appears to provide a more meaningful appraisal of instructor
facilitation and crew participation than most of the descriptive variables do. Also,
the descriptive variables require a tedious amount of data reduction and can be
measured only in a research setting. In contrast, the battery could, in principle,
be used in real time to evaluate debriefings. We are currently developing a short
form of the battery that can be used by airline training department personnel to
rate instructors and crews during observations of their debriefings (McDonnell,
Dismukes, & Jobe, in preparation).

Intercorrelations among Questions, Encouragement, and Focus were high, as
was the intercorrelation between crew Analysis & Evaluation and Depth of
Activity, thus precluding a meaningful factor analysis. Also, the individual items
within each category were highly intercorrelated. Two possibilities may account
for these high intercorrelations: (i) individual items may overlap and/or entire
categories may overlap substantially in what they measure, and (ii) in this
particular data set the independent variables measured by the battery items and
categories may covary. The latter might occur, for example, if instructors tended
either to grasp and accept the fundamental concepts underlying facilitation or to
fail to do so. Both possibilities may have been
operating (see discussion of bimodality in section 5.4). In the short form of the
battery mentioned above, the number of items will be reduced substantially:
related items will be combined into one, and the content of separate items will be
segregated more distinctly.

5.3 Facilitation Techniques and Common Mistakes


To facilitate debriefings, instructors used various specific techniques in the broad
categories of introductions, questions, active listening, and silence. Many
instructors showed considerable skill in using these techniques; other instructors
were markedly less effective, or made little attempt to facilitate. Even effective
instructors sometimes did things that undercut their efforts at facilitation.

The most common problem, failing to state explicitly the expectation for crew
participation, is discussed above. Twenty-eight percent of instructors made no
statement at all about expectations and only one instructor gave an explicit
rationale for why the crew should take an active role. Other common mistakes
included failing to pause when the crew did not respond immediately to
questions, keeping the discussion centered on the instructor instead of
encouraging the crew to interact with each other, making long soliloquies,
evaluating crew performance before eliciting crew self-evaluation, failing to push
beyond superficial description of events, and not getting the crew to analyze why
things went well.
A companion to this report describes in detail specific techniques instructors
used and suggests ways to integrate these techniques for effective facilitation
(McDonnell, Jobe, & Dismukes, in press). This companion report, written as a
training manual for instructors, also suggests ways to avoid common facilitation
mistakes.

5.4 Implications for Training


The fact that instructors' scores on Introduction were uniformly low, much lower
than on other categories of facilitation, indicates that this is an area in which
instructors have not been adequately trained. It seems a matter of common
sense that if one wants crews to participate in a certain way, particularly if that
way differs substantially from traditional practice, it is necessary to tell crews
explicitly what is expected of them. It may be that instructors are so accustomed
to the idea that crews should be participating proactively that they overlook the
fact that this expectation has not been stated explicitly to the crews. Alternatively,
some instructors may have reservations about the concept that it is preferable for
the debriefing to revolve around the crew, and thus they do not explain this
concept to the crews. Regardless, a good introduction is easy to provide once
instructors recognize its importance; thus, training departments may be able to
improve crew participation with relatively little effort by emphasizing this topic to
instructors. Ideally, the introduction should describe how the debriefing will be
conducted, explain how the crew is expected to participate and what the
instructor's role is, and provide an explicit rationale for the benefits of crew-
centered debriefings.

The fact that instructor scores on Questions, Encouragement, and Focus were
distinctly bimodal and highly intercorrelated suggests that the instructors either
grasped the concept of facilitation and were able to put it into practice or did not
grasp the concept and were therefore unable to practice it effectively. Alternatively,
the instructors who were not effective facilitators may not have "bought into" the
concept of facilitation or might simply have lacked the ability for this type of
approach.

These findings suggest that the airlines face an issue of standardization and
quality control of debriefings. Although no attempt was made to measure these
characteristics, it was clear that the great majority of instructors were highly
competent technically, were conscientious, and displayed strong interpersonal
skills. All seemed comfortable with and committed to the concepts of CRM. Thus,
the variability may reflect inadequate training of instructors in the techniques of
facilitation. When interviewed, several instructors spontaneously volunteered that
they did not feel adequately trained to facilitate. To date, in most airlines with
which we are familiar, training in facilitation is vague, consisting mainly of general
concepts and adages (e.g., "Don't insist on closure"). However, facilitation,
especially because it departs radically from the instructional techniques
traditionally used in aviation, requires hands-on training in which instructors
observe expert facilitators, practice facilitating, and receive feedback.

As this report is being written, several airlines are expanding their training in
facilitation, and this can be expected to improve the conduct of debriefings.
Currently, an industry group, the ATA AQP LOFT/Instructor Focus Group, is
preparing a paper that will provide guidance on training instructors in facilitation,
evaluation of crew performance, and related topics.

These findings also suggest that the concept of crews debriefing themselves
using the instructor as a resource (a concept expressed frequently in the CRM
literature and in AC 120-35C), though a worthwhile goal, is rather idealistic. Only
one of the instructors observed attempted to have the crew lead their own
debriefing. Though that debriefing was one of the better ones in terms of the level
of crew participation, the crew only partially understood what constituted a good
debriefing and needed considerable help. In order for crews to take greater
responsibility for the debriefing they must first be told how to conduct one. It
would also help if crews could observe another crew debriefing themselves
effectively; this could be the subject of classroom training that precedes the
LOFT. Crews may need to practice self-debriefing of several LOFTs before they
become proficient.

At the current state of industry practice, instructors who attempt to encourage
crews to self-debrief, or to at least take greater responsibility for the direction of
the debriefing, will encounter widely varying levels of crew responsiveness.
McDonnell et al. (in press), drawing upon a concept expressed by Continental
Airlines (1992), suggest that facilitation can be conducted at a high, medium, or
low level, depending on the level of initiative and the self-debriefing skill of the
particular crew. In high-level facilitation the instructor approaches the ideal of
assisting the crew in their own analysis. In low-level facilitation the instructor
leads the debriefing, directs the crew's attention to critical issues, and may need
to lecture to ensure points are understood, but the instructor still attempts to foster
as much self-discovery as possible.

Instructors are encouraged to attempt to facilitate at the highest level possible for
each crew. Realistically, however, most crews do not yet have the skills and
motivation needed to lead their own debriefings without substantial assistance
from the instructor. It may be possible to change this situation over time if LOFT
instructors consistently encourage crews to take a proactive role in debriefing
their own training and to consider the benefits of debriefing line operations.

Instructors sometimes mistakenly assume that using facilitation requires giving
up their role as teacher in the debriefing. On the contrary, good facilitation in no
way precludes the instructor from adding his or her own perspective to the
discussion or from teaching specific points about CRM and technical issues as
appropriate. Effective facilitators can integrate their teaching points into a group
discussion in which the crew members are full participants.

With the exception of Introduction, instructors' scores on the facilitation
categories averaged around 4 (adequate), as did crews' scores on Analysis &
Evaluation and Depth of Activity. These values have little absolute meaning
because they depend on the necessarily arbitrary anchoring of the scales. Each
training department must establish its own standards for satisfactory
performance and anchor their ratings accordingly. What the Debriefing
Assessment Battery provides is a tool for evaluating the relative performance of
instructors and of crews in LOFT debriefings.

It has been a matter of faith among training departments that facilitation is an
effective tool to encourage crews to analyze their performance in LOFT along
CRM dimensions in a way that will benefit them in line operations. This study
provides empirical evidence that this faith is correct.

6.0 CONCLUSIONS AND RECOMMENDATIONS
These data provide a portrait of how debriefings were being conducted in major
U.S. airlines during the period of mid-1994 to mid-1995. This sample seems
representative of large U.S. carriers, although, as this report was being written,
many airlines were upgrading their training in facilitation, which can be
expected to improve the effectiveness of debriefings. The following conclusions
and recommendations reflect both the objective data and our subjective
impressions:

1. Most instructors attempted to facilitate crew participation, but their success
ranged from very good to poor. The bimodal distribution of instructors' battery
scores suggests that at least half of the instructors grasped and utilized the
concept of facilitation effectively; however, a substantial minority of instructors
were consistently ineffective on all measures of facilitation. Almost all instructors
appeared to be highly competent and conscientious in the traditional role of
instructors; thus, this variability seems to reflect differences in how well instructors
comprehend or buy in to the concept of facilitation.

2. Instructors effectively used a range of specific techniques to facilitate crew
participation (described in detail in McDonnell et al., in press). Perhaps
unwittingly, many instructors also did things that appeared to inhibit crew
participation. The most striking shortcoming was that most instructors made little
effort to convey to the crew that they should be proactive, and it is not clear
whether instructors themselves grasped this concept. It appears that instructors
could substantially improve crew participation by explicitly explaining the relative
roles of the crew and the instructor at the beginning of the debriefing.

3. This study provides empirical evidence that facilitation, when used effectively,
substantially increases the depth of crew participation and the quality of crew
analysis and evaluation of their performance.

4. Crews were generally responsive but showed limited proactivity. Typically,
instructors did most of the talking and the discussion invariably centered around
the instructor's questions, comments, and choice of topics, even when the crew
did most of the talking. Most, but not all, debriefings emphasized CRM and LOFT
performance appropriately. Most debriefings would have been improved by
greater depth of analysis and more attention to ways to improve performance.

5. Within each of the five airlines, instructors varied widely in their conduct of
debriefings, especially in terms of emphasis on CRM, emphasis on crew
participation, and effectiveness in facilitation. Not surprisingly, the character of
crew participation varied similarly; consequently, how much the crews learned
from the LOFT experience probably also varied considerably. This suggests a
need for better standardization within companies.
The great variability within individual airlines obscured the statistical significance
of differences observed among the airlines.

6. These findings suggest that instructors need better training in facilitation. One
way to enhance training would be to emphasize hands-on practice and to follow
up with mentoring by instructors who are themselves expert facilitators. The
current literature on facilitation is rather idealistic, and instructors may become
discouraged when they discover that crews sometimes do not immediately
respond as desired. Instructor training should address obstacles to effective
facilitation and should provide specific techniques to use when crews do not
initially respond. Training should explain to instructors that facilitation can be
conducted at different levels ranging from predominantly crew-led, with instructor
assistance, to predominantly instructor-led, but still emphasizing self-discovery
by the crew as much as possible. Instructors should adapt their level of
facilitation in response to the skill and responsiveness of the particular crew.

7. The average session length of about 31 minutes appeared to limit the
thoroughness and depth of the debriefings. Longer sessions would allow
coverage of more issues and greater depth of discussion. We have no data on
what duration would be optimal, but suggest that an hour might be a useful rough
target, with adjustments for the needs of individual crews. However, this is a
policy issue and each airline will have to make its own cost-benefit analysis.

8. Although we collected no data to assess the effect of the common practice of
conducting maneuver practice between the LOFT and the debriefing, we suspect
that it appreciably impairs the ability of the crew to remember and learn from
what happened in the LOFT. We recommend that this issue be investigated
empirically.

Figure 1. Crew interaction chart.

Note: Crew interaction is measured by counting the number of crew utterances
between IP utterances. Two or more sequential crew utterances indicate
interaction occurred, while single crew utterances indicate that there was no
interaction.

Figure 2. Effect of instructor facilitation on crew analysis and evaluation.

Note. Instructor Facilitation is a combined measure of Questions, Encouragement, and Focus.

Instructor scores: 1 = Poor; 4 = Adequate; 7 = Outstanding

Figure 3. Distribution of instructor scores on the Debriefing Assessment Battery.

Table 1. Number of Debriefings Observed and Analyzed

Airline V Airline W Airline X Airline Y Airline Z Total


2-person 6 0 5 5 9 25
3-person 2 4 0 4 1 11
Table 2. Interrater Reliabilities for the Debriefing Assessment Battery
Battery variables N Pearson's r

IP
Introduction 35a .91
Questions 36 .78
Encouragement 36 .80
Focus 36 .84
Use of Videosb 18c .77

Crew
Analysis & Evaluation 36 .78
Depth of Activity 36 .73

a The audio recording began late for one session.

b Reported reliability for Videos is for crews Y and Z only. Reliability could not be
calculated for all crews because one item was changed after scoring was
completed, and that item was recoded by only one rater.

c The video equipment was not working for one of the 19 crews in Airlines Y and Z.

Table 3. Average Duration of Debriefings (minutes)

Mean (SD)

Airline V     Airline W    Airline X     Airline Y     Airline Z    Combined Airlines
28.1 (14.8)   29.2 (2.9)   40.3 (25.5)   36.9 (15.6)   23.1 (7.3)   30.7 (15.2)

Note. Differences among airlines were not statistically significant.

Table 4. Participation in Debriefings (percent of instructor and crew words)

Mean (SD)

                      Airline V   Airline W   Airline X   Airline Y   Airline Z   Combined Airlines
Instructor:
  2-person crews      58(15)      --          61(18)      54(16)      67(14)      61(15)a
  3-person crews      50(3.5)     58(27)      --          40(16)      41          49(20)a
Captain:
  2-person crews      19(6.9)     --          24(8.2)     22(8.1)     19(8.6)     21(7.8)b
  3-person crews      23(17)      16(8.9)     --          22(7.9)     21          20(9.4)
First Officer:
  2-person crews      23(9.4)     --          15(10)      23(13)      14(7.0)     18(9.7)
  3-person crews      16(12)      13(9.2)     --          27(14)      23          19(13)c
Flight Engineer:
  3-person crews      12(2.8)     14(11)      --          12(7.9)     15          13(7.8)
Note: Differences among airlines were not statistically significant. Significant differences among
participants:

a Instructor > captain, first officer, flight engineer (p<.01); b captain > first officer
(p<.01); c first officer > flight engineer (p<.03).

Table 5. Content of Debriefings (percent of instructor and crew words)

Mean (SD)

                 Airline V   Airline W   Airline X   Airline Y   Airline Z   Combined Airlines
Instructor
  CRM            32(25)      19(15)      27(13)      56(13)      64(17)      45(24)a
  Technical      22(14)      13(11)      38(10)      8.1(8.7)    10(15)      16(15)b
  Mixed          24(8.6)     33(13)      9.8(16)     5.6(5.3)    6.2(8.3)    14(14)c
  Non-specific   22(11)      34(12)      26(7.6)     30(6.8)     20(10)      25(10)d
Crew
  CRM            25(12)      25(17)      36(20)      68(13)      68(19)      49(25)e
  Technical      21(11)      10(4.2)     23(8.6)     5.6(5.3)    6.9(10)     12(11)f
  Mixed          38(13)      46(12)      8.8(10)     11(10)      14(12)      21(18)g
  Non-specific   16(11)      18(4.6)     32(14)      16(7.4)     12(13)      17(12)h

Note. Statistically significant differences were found among airlines: a Y>W; Z>V,W,X. b X>Y,Z. c
V>Y,Z; W>X,Y,Z. d not statistically different. e Y>V,W,X; Z>V,W,X. f V>Y,Z; X>Y,Z. g V>X,Y,Z;
W>X,Y,Z. h X>Z.

Table 6. Discussion of Crew Performance

Mean (SD)

                       Airline V   Airline W   Airline X   Airline Y   Airline Z   Combined Airlines
Positive aspects
  % of IP words        19(11)      5.8(5.1)    15(9.3)     16(13)      24(12)      18(12)
  % of crew words      6.5(7.3)    3.8(5.6)    7.4(13)     9.9(8.9)    9.5(12)     8.0(9.6)
Negative aspects
  % of IP words        3.8(2.7)    3.3(2.5)    9.4(13)     1.1(2.1)    1.6(2.6)    3.2(5.5)
  % of crew words      6.6(4.1)    8.0(7.9)    9.8(12)     5.1(3.8)    3.4(7.2)    5.9(6.7)
Ways to improve
  % of IP words        5.0(4.4)    4.5(5.3)    6.8(6.7)    3.0(3.2)    2.7(4.4)    4.1(4.6)
  % of crew words      3.6(4.3)    5.0(8.7)    5.6(4.0)    4.6(5.1)    5.6(8.6)    4.8(6.1)
Neutral description
  % of IP words        18(14)      17(9.6)     9.4(4.5)    21(7.0)     15(8.1)     17(9.5)
  % of crew words      40(15)      36(15)      25(18)      28(15)      33(26)      33(19)
Performance total
  % of IP words        46(21)      30(14)      41(15)      41(13)      43(13)      41(15)
  % of crew words      56(22)      53(19)      47(17)      48(21)      56(27)      52(21)

Note. Differences among airlines were not statistically significant.

Table 7a. Correlations Between Instructor and Crew Topics


Instructor variables
Crew Variables % words CRM % words technical
% words CRM .76*** -.71***
% words technical -.69*** .85***

*p < .05. **p < .01. ***p < .001.


Table 7b. Correlations Between Instructor and Crew Emphasis

on Aspects of Crew Performance


Instructor variables
Crew Variables positive aspects negative aspects ways to improve
positive aspects .35* -.30 -.32
negative aspects -.28 .61*** .53**
ways to improve -.04 .35* .67***

*p < .05. **p < .01. ***p < .001.

Table 8a. Instructor Questions: Two-person Crews

Mean (SD)

Airline V   Airline W   Airline X   Airline Y   Airline Z   Combined Airlines

Number of directed questions per hr:


to CA 18(21) -- 21(7.6) 25(17) 9.3(12) 17(15)

to FO 8.6(6.6) -- 13(7.6) 20(10) 9.0(7.3) 12(8.5)

Number of non-directed questions per hr:


32(19) -- 12(17) 14(3.6) 19(12) 20(15)

Total number of questions per hr:


59(27) -- 46(26) 58(27) 37(14) 48(23)
Table 8b. Crew Responses to Non-directed Questions: Two-person Crews

Mean (SD)

Airline V   Airline W   Airline X   Airline Y   Airline Z   Combined Airlines

Percent non-directed questions answered:


by CA 63(32) -- 31(29) 77(15) 58(19) 58(27)

by FO 53(13) -- 35(32) 60(35) 51(21) 50(25)


Note. Significant differences were found among airlines in percent of non-directed
questions answered by CA: Y>X.
Table 9a. Instructor Questions: Three-person Crews

Mean (SD)

Airline V   Airline W   Airline X   Airline Y   Airline Z   Combined Airlines

Number of directed questions per hr:


to CA 43(31) 4.5(6.4) -- 7.6(7.1) 9.3 13(20)a

to FO 20(11) 4.7(2.9) -- 6.6(5.8) 2.3 8.5(8.1)

to FE 27(2.1) 5.6(1.4) -- 6.4(9.2) 12 10(10)b


Number of non-directed questions per hr:
82(55) 12(5.2) -- 15(9.5) 16 27(35)

Total number of questions per hr:


171(70) 27(14) -- 35(22) 39 59(65)c
Note. Significant differences were found among airlines in a questions directed to CA: V>W;
b questions directed to FE: V>W,Y; c total number of questions per hour: V>W,Y.

Table 9b. Crew Responses to Non-directed Questions: Three-person Crews

Mean (SD)

Airline V   Airline W   Airline X   Airline Y   Airline Z   Combined Airlines

Percent non-directed questions answered:


by CA 51(16) 68(28) -- 69(28) 14 65(25)

by FO 38(28) 35(47) -- 48(36) 43 41(36)

by FE 26(5.7) 18(21) -- 26(18) 14 23(17)


Note. Percent of non-directed questions answered by FE fell just short of being
significantly lower than CA and FO answers (p < 0.06; Wilcoxon matched-pairs test).
Other differences among crew members were not significant.
Table 10. Percent of Total Crew Words & Utterances Coded R, S1, S & Q1
Percent of total words Percent of utterances
Crew R S1 S Q R S1 S Q
V 41 48 7 4 35 28 30 7
W 35 51 8 6 23 32 36 10
X 39 48 9 4 26 30 37 7
Y 45 44 7 4 32 29 31 8
Z 54 38 5 3 40 32 22 6
All 44 45 7 4 33 30 30 7
1 Response = first responsive utterance by each crew member following a Question. S1 = all self-initiated, substantive crew statements that raise issues, introduce topics, or add information to an existing topic. Statement = all utterances that do not fit the criteria for R, S1, or Q. Question = any utterance that explicitly asks a question.
Table 11. Distribution of Crew Questions (number per category)
                CRM   Technical   Mixed   Non-specific   Total
Proactive 7 11 3 14 35

Reactive 4 3 0 26 33

Miscellaneous 0 2 1 27 30
Total 11 16 4 67 98

Table 12. Average Number of Proactive Questions Per Hour

Mean (SD)

Airline V   Airline W   Airline X   Airline Y   Airline Z   Combined Airlines
CA 4.9(3.6) 1.7(2.1) 7.5(8.5) 1.5(1.7) 1.2(1.7) 3.0(4.3)

FO 5.4(4.0) 3.8(3.2) 1.1(1.7) 2.5(3.2) 2.1(3.7) 3.0(3.5)

FE 8.1(2.0) 1.1(1.2) -- 1.3(1.4) 0 2.5(3.2)

Note. No statistically significant differences were observed between two- and three-person crews.
Statistically significant differences were found among airlines: CA: X>Z; FE: V>W,Y.

Table 13. Additional Measures of Crew Participation

Mean (SD)

                                 Captain     First Officer   Flight Engineer   Crew Average
Analyzing utterances per hour    7.0 (6.2)   6.4 (6.1)       3.4 (2.8)         6.2 (4.7)
Words per utterance              21 (10)     24 (13)         17 (9.2)          22 (10)
Words per response               29 (17)     35 (29)         21 (9.8)          30 (17)

Note. No statistically significant differences were found between airlines or crew positions.

Table 14. Debriefing Assessment Battery Scores

Mean (SD)

                      Airline V   Airline W   Airline X   Airline Y   Airline Z   Combined Airlines
Instructor Profile:
  Introduction        1.5(.65)    1.4(.73)    1.1(.13)    2.1(1.3)    1.4(.42)    1.6(.83)
  Questions           3.9(1.7)    3.1(1.9)    3.4(1.5)    5.0(.66)    4.2(2.0)    4.1(1.6)
  Encouragement       3.8(1.7)    3.5(2.4)    3.3(1.7)    5.1(.66)    3.9(2.0)    4.1(1.7)
  Focus               3.2(1.8)    2.9(1.0)    3.0(1.3)    5.0(.69)    4.0(1.7)    3.8(1.6)
  Use of Videos       --          4.3(.85)    2.9(.62)    4.5(1.4)    5.1(1.0)    4.4(1.2)
Crew Profile:
  Analysis & Eval.    3.3(1.3)    3.4(1.2)    3.3(1.1)    4.8(.87)    4.2(1.8)    3.9(1.4)
  Depth of Activity   4.0(1.0)    4.2(1.5)    4.0(1.5)    5.1(1.1)    4.4(1.9)    4.4(1.4)

Note: Numbers are average scores of two independent raters (except Video scores for airlines W & X, which were coded by only one rater) on a 7-point Likert scale: 1 = poor, 2 = marginal, 3 = needs improvement, 4 = adequate, 5 = good, 6 = very good, 7 = outstanding.

No differences between airline average scores were statistically significant.

Table 15. Frequencies of Rating Scores on the Debriefing Assessment Battery


Rating Scores (Average of the two raters)

Subjective variables   N    Poor   Marginal   Needs Improvement   Adequate   Good   Very Good   Outstanding
IP
  Introduction         35   23     8          3                   0          1      0           0
  Questions            36   2      7          4                   3          9      11          0
  Encouragement        36   2      9          2                   4          9      9           1
  Focus                36   2      7          4                   6          10     7           0
  Use of Videos        26   0      3          4                   6          5      6           2
Crew
  Analysis & Eval.     36   1      6          8                   4          13     3           1
  Depth of Activity    36   1      2          8                   5          11     7           2

Table 16. Spearman Correlations Between IP and Crew Variables on the Debriefing Assessment Battery


Instructor variablesa
Crew variablesa Introduction Questions Encourage Focus Videos
Analysis & Evaluation .28 .75 *** .78 *** .75 *** .33
Depth of Activity .13 .59 *** .78 *** .51 *** .26

a See Debriefing Assessment Battery (Appendix C)

*p < .05. **p < .01. ***p < .001.


Table 17. Spearman Intercorrelations Among Instructor Variables: Debriefing Assessment Battery


Subscales Questions Encouragement Focus Use of Videos
Introduction .55*** .44** .49** .29
Questions -- .90*** .89*** .51**
Encouragement -- .78*** .45*
Focus -- .36
Use of Videos --

*p < .05. **p < .01. ***p < .001.

Table 18. Relationship of High and Low Introduction Scores to Crew Analysis & Evaluation and Depth of Activity

Mean (SD)

Introduction Scores   N    Analysis & Evaluation   Depth of Activity
1.0                   10   3.2 (1.3)*              4.1 (1.4)
1.8 - 4.9             9    4.4 (.63)*              4.6 (1.0)

Note. The ten debriefings for which instructor Introduction scores were lowest were compared
with the nine debriefings for which Introduction scores were highest.

*p < .025, t-test

Table 19. Correlations Between Instructor Batterya and Descriptiveb Variables

                    Descriptive variables

Battery variables   % total         Words per   # directed   # non-directed   Total #     % words addressing   % words addressing
                    participation   utterance   questions    questions        questions   performance          CRM

Introduction        -.07            .12         .41*         -.20             .42*        .05                  .35*
Questions           -.49**          -.38*       .56***       .10              .60***      .05                  .35*
Encourage           -.75***         -.58***     .38*         .15              .43**       -.04                 .25
Focus               -.40*           -.31        .50**        .08              .52***      .12                  .45**
Use of Videos       -.06            .09         .24          .17              .38         .25                  .69***
a See Debriefing Assessment Battery (Appendix C)

b See Appendix E

*p < .05. **p < .01. ***p < .001.

Table 20. Correlations Between Instructor Battery Variables and Crew Descriptive Variables

                    Crew descriptive variables

Instructor          Percent         Words per   Words per   Self-initiated   Analyzing    Proactive   Percent
battery variables   participation   utterance   response    words            utterances   questions   CRM

Introduction        .07             .52***      .35*        -.06             .12          -.08        .45**
Questions           .49**           .42*        .28         .18              .56***       -.07        .56***
Encourage           .74***          .50**       .34*        .47**            .70***       .10         .40*
Focus               .40*            .39*        .28         .09              .53**        -.16        .63***
Videos              .05             .31         .11         -.02             .14          -.21        .67***

*p < .05. **p < .01. ***p < .001.

Table 21. Correlations Between Crew Battery and Descriptive Variables

                        Descriptive variables

Battery variables       Percent         Words per   Words per   Self-initiated   Analyzing    Proactive   Percent
                        participation   utterance   response    words            utterances   questions   CRM

Analysis & Evaluation   .67***          .58***      .50**       .51***           .80***       -.14        .56**
Depth of Activity       .84***          .57***      .45**       .76***           .80***       .10         .34*

*p < .05. **p < .01. ***p < .001.

Table 22. Correlations Between Instructor Descriptive Variables and Crew Battery and Descriptive Variables

                                 Instructor variables

Crew variables                   % participation   Words per   # directed     # non-directed   % words addressing   % words addressing
                                                   utterance   questions/hr   questions/hr     performance          CRM

% participation                  -.99a             -.82***     .08            .23              -.06                 -.05
Words per utterance              -.38*             .07         .07            -.16             .17                  .39*
Words per response               -.19              .20         -.06           -.24             .14                  .36*
S1 statements (# words/hr)       -.79***           -.62***     -.07           .20              -.07                 -.06
# analyzing utterances/hr        -.65***           -.35*       .19            .09              .08                  .23
# proactive questions/hr         -.31*             -.47**      .07            .24              -.41*                -.27
% words addressing CRM           -.04              .17         .17            -.28             .24                  .76***
% words addressing performance   .08               .18         .37*           -.10             .41*                 .08
Analysis & Evaluation            -.67***           -.39*       .23            .05              .12                  .40*
Depth of Activity                -.84***           -.55**      -.001          .09              .02                  .28

a Forced correlation; see discussion.

*p < .05. **p < .01. ***p < .001.

Table 23. Variability Within and Across Instructors

Mean (SD)

                                        Delta scores
                   Average value      Same instructor   Different instructor
Variables          of variable        (n=4)             (448 random pairings)   t-value   p-value

Duration           30.7 (15.2)        18.2 (1.3)        13.7 (12)               0.67      n.s.
IP % CRM           45 (24)            22.8 (7.0)        26.9 (18)               -1.12     n.s.
IP % performance   41 (15)            21.8 (9.0)        18.1 (12)               0.84      n.s.
IP QEF             4.0 (1.6)          0.73 (0.48)       1.75 (1.3)              -4.14     < .005

REFERENCES
Baddeley, A. (1990). Human memory: Theory and practice. Massachusetts: Simon & Schuster, Inc.

Birnbach, R. A., & Longridge, T. M. (1993). The regulatory perspective. In E. L. Wiener, B. G. Kanki, & R. L. Helmreich (Eds.), Cockpit resource management (pp. 263-280). San Diego: Academic Press.

Butler, R. E. (1993). LOFT: Full-mission simulation as crew resource management training. In E. L. Wiener, R. L. Helmreich, & B. G. Kanki (Eds.), Cockpit resource management (pp. 231-259). San Diego: Academic Press.

Casey, D., Roberts, P., & Salaman, G. (1992). Facilitating learning in groups. Leadership and Organization Development Journal, 13(4), 8-13.

Connors, M. M. (1995). Macro-analysis of LOFT debriefing. Manuscript in preparation, NASA-Ames Research Center.

Continental Airlines, Flight Operations Human Factors Group. (1992, January). LOFT facilitation techniques: Practical strategies and techniques for LOFT facilitators. Unpublished manuscript.

Cornwell, J. B. (1979). Stimulating and managing participation in class. In P. G. Jones (Ed.) (1982), Adult learning in your classroom: The best of training magazine's strategies and techniques for managers and trainers. Minneapolis, MN: Lakewood Books.

Duval, S., & Wicklund, R. A. (1972). A theory of objective self awareness. New York: Academic Press.

Eitington, J. E. (1989). The winning trainer. Houston: Gulf Publishing Company.

Federal Aviation Administration. (1991). Advanced qualification program (Advisory Circular 120-54). Washington, DC: Author.

Federal Aviation Administration. (1995). Line operational simulations (Advisory Circular 120-35C). Washington, DC: Author.

Gibb, P. (1982, July). The facilitative trainer. Training and Development Journal, 14-19.

Gow, L., & Kember, D. (1993). Conceptions of teaching and their relationship to student learning. British Journal of Educational Psychology, 63, 20-33.

Hawkins, F. H. (1987). Human factors in flight. Brookfield, VT: Gower Publishing Company.

Helmreich, R. L., & Foushee, H. C. (1993). Why crew resource management? Empirical and theoretical bases of human factors training in aviation. In E. L. Wiener, R. L. Helmreich, & B. G. Kanki (Eds.), Cockpit resource management (pp. 3-41). San Diego: Academic Press.

Jacobsen, D., Eggen, P., & Kauchak, D. (1989). Methods for teaching: A skills approach (3rd ed.). Columbus: Merrill Publishing Company.

Lauber, J. K., & Foushee, H. C. (1981). Guidelines for line-oriented flight training (NASA Conference Publication 2184). Moffett Field, CA: NASA-Ames Research Center.

McDonnell, L. K. (1996). Facilitation techniques as predictors of crew participation in LOFT debriefings (NASA Contractor Report 196701). Moffett Field, CA: NASA-Ames Research Center.

McDonnell, L. K., Dismukes, R. K., & Jobe, K. K. (In preparation). A short form battery for assessing LOS debriefings.

McDonnell, L. K., Jobe, K. K., & Dismukes, R. K. (In press). Facilitating LOS debriefings: A training manual.

Metcalfe, J., & Shimamura, A. P. (1994). Metacognition: Knowing about knowing. Cambridge, MA: MIT Press.

Mills, P., & Roberts, B. (1981, March). Learn to guide and control discussion. Successful Meetings, 96-97.

Moos, R. H. (1994). Group environment scale manual: Development, applications, research. Palo Alto, CA: Consulting Psychologists Press, Inc.

Nelson-Jones, R. (1992). Group leadership: A training approach. Pacific Grove, CA: Brooks/Cole Publishing Co.

Ornstein, A. C. (1990). Strategies for effective teaching. New York: Harper & Row.

Rowe, M. B. (1974, February). Wait-time and reward as instructional variables. Journal of Research in Science Teaching, 81-97.

Rowe, M. (1986, January/February). Wait-time: Slowing down may be a way of speeding up. Journal of Teacher Education, 43-50.

Slamecka, N. J., & Graf, P. (1978). The generation effect: Delineation of a phenomenon. Journal of Experimental Psychology: Human Learning and Memory, 4(6), 592-604.

Smith, G. M. (1994). Evaluating self-analysis as a strategy for learning crew resource management (CRM) in undergraduate flight training. Ann Arbor, MI: Dissertation Abstracts.

Zemke, R., & Zemke, S. (1981). 30 things we know for sure about adult learning. In P. G. Jones (Ed.) (1982), Adult learning in your classroom: The best of training magazine's strategies and techniques for managers and trainers. Minneapolis, MN: Lakewood Books.

Appendix A. Coding
Utterance factors coded

Utterance length: number of words

Speaker: Instructor (IP), 2nd Instructor in role of Flight Engineer (FEI), Captain (CA), First Officer (FO), or Flight Engineer (FE)

Interruptions/Interjections: Completed (C), Unfinished (U), Interrupted (I), Interrupted and Unfinished (I/U), Active listening interjection (I/AL)

Utterance type: Question, Command, Response, or Statement (Statements self-initiated by crew further coded as S1)

Target of Question (if clearly directed to a particular crew member): Captain (CA), First Officer (FO), or Flight Engineer (FE)

Crew Proactive Questions: "P" if crew question is proactive, "O" (Other) if it is a reactive or miscellaneous question

Topic type: CRM, Technical, Mixed (CRM & Technical), or Non-Specific

Analysis: "A" if crew analyzes situation/performance, "O" (Other) if not

Evaluation of crew performance: Positive, Negative, Improve, or Neutral

Video factors coded

ON ( ): All video segments are coded by indicating segment number with duration in parentheses [e.g., ON #1 (:45)]

OFF: Code end of video segments by indicating (OFF)

SEARCH ( ): Time spent searching in silence [e.g., SEARCH (:30)]
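
The factors listed above map naturally onto a per-utterance record. The sketch below shows one possible representation; the data structure and field names are our illustration, while the codes themselves follow Appendix A.

# One possible per-utterance record for the coding scheme above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Utterance:
    speaker: str                            # IP, FEI, CA, FO, or FE
    length: int                             # number of words
    interruption: str                       # C, U, I, I/U, or I/AL
    utype: str                              # Question, Command, Response, Statement, or S1
    question_target: Optional[str] = None   # CA, FO, or FE for directed questions
    proactive: Optional[str] = None         # P or O, coded for crew questions only
    topic: str = "Non-Specific"             # CRM, Technical, Mixed, or Non-Specific
    analysis: str = "O"                     # A if the utterance analyzes, else O
    evaluation: str = "O"                   # Pos, Neg, Improve, Neut, or O

example = Utterance(speaker="CA", length=23, interruption="C", utype="S1",
                    topic="CRM", analysis="A", evaluation="Improve")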

CODING RULES

Utterance Length (LENGTH)

1. Fill in a word count for every utterance for which a speaker and content are
identified. Do not count utterances in which the speaker is identified but the words
are unintelligible, or in which words are transcribed but the speaker cannot be identified.

2. Count repeated words (i.e., stuttering) as one word only.

Speaker (SPKR)

Identify the speaker of each utterance using one of the following: IP, CA, FO, FE, or FEI.

Transcribing Utterances (UTTERANCE)

1. Transcribe the audiotape verbatim.

2. Record all pauses 3 seconds or longer in bold type.

3. Type titles in parentheses [e.g., (CA) or (FO)] in place of spoken names and
type (XX) in place of spoken name of airline.

4. If an utterance is phrased as a statement but is intended to evoke a response,
end the utterance with a "(?)" so it can be coded as a command.
5. If a speaker is interrupted (interjections of active listening or brief interruptions
which do not change the flow of the original speaker's utterance) or is talked over
but clearly continues on to complete the sentence or thought, transcribe and
code the continuation(s) as part of the initial utterance with "(x)" where the
interruption or interjection occurs, and type and code each interrupting utterance
separately below ("I" in the INT column).

6. If speaker is interrupted by a substantial utterance and continues, but the topic
or flow is slightly altered, code the initial utterance as unfinished ("U" in the INT
column), and transcribe and code the continuation as a separate utterance after
the interrupting utterance.

7. If a speaker makes a statement and then asks a question during a single
speaker turn, break it into two separate utterances where the question begins.

8. If a speaker clearly changes topics in the middle of a single speaker turn,
transcribe and code the topic change as a separate utterance.

9. Record length of video silent search time (no one speaks while IP tries to find
a specific video segment) in bold type.

Interruptions / Interjections (INT)

1. Code all utterances that are not completed (whether the speaker is interrupted
or trails off) as "U" and code all completed utterances as "C".

2. Code all utterances that interrupt or interject the preceding speaker as "I"
(code as "I/U" if the interruption is not completed, either because the preceding
speaker keeps talking or another speaker interrupts the interruption).

3. Code all active listening as "AL" (code interjections of active listening as "I/AL").

Utterance Type (TYPE)

Question = Any utterance that explicitly asks a question.

Command = Any IP utterance that commands a response but is not phrased in
question form.

Response = First utterance by any or all crew members following a Question or
Command, unless content of utterance makes it obvious that it is non-responsive.

S1 (crew) = All self-initiated, substantive crew statements that raise issues,
introduce topics, or add information to an existing topic.
Statement = All utterances that do not fit the criteria for Q, C, R, or S1, unless
content makes it obvious that the utterance is responsive (R) to the preceding Q
or C (e.g., when separated by an intervening utterance).

Question Target (Q TRGT)

1. Code target of IP question if clearly directed to a particular crew member (e.g., "CA").

2. For non-directed IP questions, code the crew member(s) who respond in
parentheses [e.g., "(CA)" or "(FO,CA)"] or code as "( )" if no one responds.

Crew Proactive Questions (PAQ)

1. Record a "P" in the crew PAQ column if crew question is proactive, or an "O"
(other) if the question is not proactive (i.e., reactive or misc.)

Proactive questions include clarification/verification questions used to raise new
issues or bring new information into the conversation (e.g., "You wanted help?")
and questions designed to gather information (e.g., "Did we have runway three?").

Topic Type (TYPE)

CRM = Pertains to the coordination and interaction of the crew and specifically
relates to one or more CRM issues or topics.

Technical = Pertains to specific techniques of flying and navigating the airplane
and/or managing the systems, without reference to coordination, planning,
communication, judgment, or decision making among crew members.

Mixed = Has between 1/3 and 2/3 of both CRM and technical.

Non-Specific = Does not refer specifically to either CRM or technical topics.
Includes undetermined, extraneous, procedural, and maintenance of discourse.

(ANALYSIS)

Code all utterances that indicate the speakers are Analyzing the situation &/or
their performance in the LOFT by considering any of the following issues (both
explicit and implicit) as A (Analyzes). Code all utterances which are not analytical
as O (Other).

Generally, analyzing utterances are those that go beyond just describing what
happened to discussing why it happened and identifying what factors contributed
to the situation and/or how these factors influenced the outcome. Analyzing
utterances include:

- explanations of why something was done and/or done a certain way, or what
could have been done differently. Key words include because, should have,
could have, and might have (e.g., "I think we could have performed faster in
holding because we had to take a couple of turns in holding just to make sure we
got set up." and "I felt a little disorganized pushing off and taxiing out and doing
all of that and then having to de-ice; that breaks your flow because you don't put
the flaps down.")

- how and why factors influenced decisions, actions, and outcomes (e.g., "The
reason this influenced my decision/actions was ..." and "I was thinking this, so I
did this.")

- contingencies (e.g., "It might have been a lot different if we had asked for more
time before we took that turn. Maybe I should have asked for one more minute.")

(EVALUATION)

Code all utterances which indicate Evaluation of Crew Performance as follows:

Pos = positive evaluation of crew performance

Neg = negative evaluation of crew performance

Improve = suggestions for ways to improve

Neut = neutral evaluation of crew performance

Code all utterances which do not fit into the above categories as O (other)

(VIDEO)

Code all video segments by indicating segment number with duration in
parentheses [e.g., ON #1 (:45)], when segment ends (OFF), and time spent
searching in silence [e.g., SEARCH (:30)].

(COMMENTS)

1. Indicate any pauses IP uses to allow crew to formulate responses to
questions, or pauses after crew statements which encourage crew to say more.

2. Indicate use of probing questions to encourage crew to analyze in more depth.

3. Indicate when IP follows up on topics initiated by crew.

4. Note any noticeably good or poor IP techniques.


5. Record any revelations and/or any specific references to video. Also indicate
any difficulty using video equipment.
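
The coding scheme above can be captured as a simple record per transcribed utterance. The following sketch (in Python) is offered only as an illustration of how the coded fields fit together; the field names, data types, and example values are introduced here for illustration and are not part of the coding forms used in the study.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Utterance:
        """One coded utterance from a debriefing transcript."""
        speaker: str                     # "IP", "CA", "FO", or "FE"
        text: str                        # verbatim transcription
        words: int                       # word count, used for the Appendix B measures
        int_code: str                    # INT: "C", "U", "I", "I/U", "AL", or "I/AL"
        utype: str                       # TYPE: Question, Command, Response, S1, or Statement
        q_target: Optional[str] = None   # Q TRGT: e.g., "CA" (directed) or "(FO)" (non-directed)
        paq: Optional[str] = None        # PAQ: "P" or "O", crew questions only
        topic: str = "Non-Specific"      # topic TYPE: CRM, Technical, Mixed, or Non-Specific
        analysis: str = "O"              # ANALYSIS: "A" (analyzes) or "O" (other)
        evaluation: str = "O"            # EVALUATION: Pos, Neg, Improve, Neut, or "O"

    # Example: a self-initiated, analyzing, negative CRM statement by the captain
    example = Utterance(
        speaker="CA",
        text="I felt disorganized taxiing out because the de-icing broke our flow.",
        words=11,
        int_code="C",
        utype="S1",
        topic="CRM",
        analysis="A",
        evaluation="Neg",
    )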

Appendix B.

Calculation of utterance variables


# of words for IP, CA, FO, FE, Crew, total: number of words spoken by each; add CA, FO, and FE totals together for crew total
% participation: # of words per speaker ÷ total # of words for the debriefing
# of analyzing utterances per hour for CA, FO, FE, Crew: (# of analyzing utterances ÷ duration) x 60
# of questions per hour for CA, FO, FE, Crew: (# of questions ÷ duration) x 60
# of proactive questions per hour for CA, FO, FE, Crew: (# of proactive questions ÷ duration) x 60
# S1 words per hour for CA, FO, FE, Crew: (# of S1 words ÷ duration) x 60
# of words per response for CA, FO, FE, Crew: # of response words ÷ # of responses
% crew words positive: # of crew words positive ÷ total # of crew words
% crew words negative + improve: # of crew words negative and improve ÷ total # of crew words
% crew words improve: # of crew words improve ÷ total # of crew words
% crew words negative: # of crew words negative ÷ total # of crew words
% crew words positive + negative + improve: # of crew words positive, negative, and improve ÷ total # of crew words
% crew words neutral: # of crew words neutral ÷ total # of crew words
% crew words performance: # of crew words performance (positive, negative, improve, and neutral) ÷ total # of crew words
% IP words CRM: # of IP words CRM ÷ total # of IP words
% IP words technical: # of IP words technical ÷ total # of IP words
% IP words mixed: # of IP words mixed ÷ total # of IP words
% IP words non-specific: # of IP words non-specific ÷ total # of IP words
% IP words CRM + half of mixed: # of IP words CRM + half of mixed ÷ total # of IP words
% IP words technical + half of mixed: # of IP words technical + half of mixed ÷ total # of IP words
% IP words positive: # of IP words positive ÷ total # of IP words
% IP words negative + improve: # of IP words negative and improve ÷ total # of IP words
% IP words improve: # of IP words improve ÷ total # of IP words
% IP words negative: # of IP words negative ÷ total # of IP words
% IP words positive + negative + improve: # of IP words positive, negative, and improve ÷ total # of IP words
% IP words neutral: # of IP words neutral ÷ total # of IP words
% crew words CRM: # of crew words CRM ÷ total # of crew words
% crew words technical: # of crew words technical ÷ total # of crew words
% crew words mixed: # of crew words mixed ÷ total # of crew words
% crew words non-specific: # of crew words non-specific ÷ total # of crew words
% of crew words CRM + half of mixed: # of crew words CRM + half of mixed ÷ total # of crew words
% of crew words technical + half of mixed: # of crew words technical + half of mixed ÷ total # of crew words
# of questions directed to CA, FO, FE per hour: (# of questions directed to each ÷ duration) x 60
% of non-directed questions answered by CA, FO, FE, no one: # of non-directed questions answered by each ÷ total # of non-directed questions
# of directed questions per hour: (# of directed questions ÷ duration) x 60
# of non-directed questions per hour: (# of non-directed questions ÷ duration) x 60
total # of questions per hour: [(total # of directed questions + total # of non-directed questions) ÷ duration] x 60
number of video segments shown per hour: (# of segments shown ÷ duration) x 60
average duration of video segments shown: total duration of all segments shown ÷ # of segments shown
# of times IP interrupts crew per hour: (total # of IP interruptions ÷ duration) x 60
% of crew utterances interrupted: total # of crew utterances interrupted by IP ÷ total # of crew Q, R, and S1 utterances
% of crew utterances interrupted and unfinished: # of crew utterances interrupted and unfinished ÷ total # of crew Q, R, and S1 utterances
% of crew utterances interrupted and completed: # of crew utterances interrupted and completed ÷ total # of crew Q, R, and S1 utterances
# of crew (question, response, and S1) utterances per hour: [# of crew (Q, R, and S1) utterances ÷ duration] x 60
# of words per utterance for IP, CA, FO, FE, crew: total # of words for each ÷ total # of utterances for each
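
For readers who wish to reproduce these measures, the sketch below illustrates the two recurring patterns in the table: rates per hour, (count ÷ duration) x 60, and word-based percentages. It assumes each coded utterance is stored as a simple record with speaker, word count, and the coded fields described in Appendix A; the function and field names are introduced here for illustration and are not the study's own tabulation software.

    CREW = ("CA", "FO", "FE")

    def per_hour(count, duration_minutes):
        """Convert a raw count into a rate per hour: (count / duration) x 60."""
        return (count / duration_minutes) * 60.0

    def pct_words(utterances, speakers, predicate=lambda u: True):
        """Share of words spoken by `speakers` whose utterances satisfy `predicate`."""
        pool = [u for u in utterances if u["speaker"] in speakers]
        total = sum(u["words"] for u in pool)
        hits = sum(u["words"] for u in pool if predicate(u))
        return hits / total if total else 0.0

    # Toy example: two coded utterances from a 30-minute debriefing
    transcript = [
        {"speaker": "IP", "words": 9, "type": "Question", "topic": "CRM", "evaluation": "O"},
        {"speaker": "CA", "words": 14, "type": "Response", "topic": "CRM", "evaluation": "Pos"},
    ]
    duration = 30.0

    pct_crew_words_positive = pct_words(transcript, CREW, lambda u: u["evaluation"] == "Pos")
    ip_questions_per_hour = per_hour(
        sum(1 for u in transcript if u["speaker"] == "IP" and u["type"] == "Question"),
        duration)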

Appendix C.

DEBRIEFING ASSESSMENT BATTERY

INSTRUCTOR PROFILE
The Instructor Profile is a summary of the strategies and techniques IPs use to assist crews in
conducting their own debriefings while giving direction and focus as necessary. The two main
goals of the debriefing are to 1) get the crew to perform an in-depth analysis of the situation that
confronted them, how they understood and managed the situation, the outcome, and ways to
improve, and 2) get the crew to participate in a proactive, rather than reactive, manner in which
they initiate discussion and elaborate beyond minimal responses. These goals are based on the
assumption that active participation by the crew will result in a higher level of learning and
increased likelihood of transfer to the line.

Directions:

Use the scale below to rate the instructors on each of the following elements, then total the
scores to get the overall rating for each category

1 = Poor   2 = Marginal   3 = Needs Improvement   4 = Adequate   5 = Good   6 = Very Good   7 = Outstanding

Introduction
One purpose of the introduction is to let the crew know that participation and self-evaluation are
expected of them, and why it is important.

Makes clear that his role is guide/facilitator and that crew should do most of the talking

Clearly conveys that crew should take an active role, initiating discussion rather than just
responding to him

Clearly conveys that he wants crew to dig deep, critically analyzing the LOFT and their
performance

Gives a persuasive rationale for the crew to participate actively and make their own analysis

Overall rating of Introduction

Questions
The purpose of asking questions is to get the crew to participate, focus the discussion on
important topics, and enlist the crew in discussing the topics in depth.

Asks an appropriate number of questions to get crew talking & lead them to issues

Avoids answering for the crew when they do not respond immediately or correctly and uses a
pattern of questioning that keeps the focus on the crew

Uses probing and follow-up questions to get crew to analyze in depth and to go beyond yes/no
and brief factual answers

Uses questioning techniques to encourage interaction and sharing of perspectives among crew
members

Overall rating of Questions

Encouragement
Encouragement refers to the degree to which the instructor encourages and enables the crew to
actively and deeply participate in the debriefing.

Conveys sense of interest in crew views and works to get them to do most of the talking

Encourages continued discussion through active listening, strategic pauses, avoiding disruptive
interruptions, and/or following up on crew-initiated topics

Encourages all members to participate fully, drawing out quiet members if necessary

Refrains from giving long soliloquies or giving his own analysis before crew has fully analyzed
Overall rating of Encouragement

Focus on Crew Analysis and Evaluation


The goal of the debriefing session is to get the crew to evaluate and analyze their own CRM
performance so they will learn more deeply and can gain practice in debriefing themselves, a skill
they can then begin to use on the line.

Encourages crew to analyze along CRM dimensions the situation that confronted them, what they
did to manage the situation, and why they did it

Encourages crew to evaluate their performance and/or ways they might improve

Encourages crew to explore CRM issues and how they specifically affect LOFT performance and
line operations

Encourages crew to analyze issues, factors, and outcomes in depth, going beyond simply
describing what happened and what they did

Overall rating of Focus on Crew Analysis & Evaluation

Use of Videos
One stated purpose of showing videotaped segments of the LOFT is to enable the crew members
to see how they performed from an objective viewpoint so they can better evaluate their
performance. More realistically, perhaps, the video reminds the crew of the situation, aiding their
memory and providing a focus for discussion.

Shows an appropriate number of videos of appropriate duration to illustrate/introduce topics

Uses video equipment efficiently: is able to find desired segment without wasting time and pauses
the video if substantial talk begins while playing

Consistently discusses video segments, using them as a springboard for discussion of specific
topics

Has a point to make and uses the video to make that point.

Overall rating of Use of Videos

CREW PROFILE
The crew profile measures the degree and depth of participation by the crew.

Directions:
Use the scale below to rate the crew on each of the following elements, then total the scores to
get the overall rating for each category

1 = Poor   2 = Marginal   3 = Needs Improvement   4 = Adequate   5 = Good   6 = Very Good   7 = Outstanding

Crew Analysis and Evaluation


Crew analysis and evaluation refers to the depth to which the crew members analyze the LOFT
situation and evaluate their performance.

Analyze along CRM dimensions the situation that confronted them, what they did to manage the
situation, and why they did it

Evaluate their performance and ways they might improve

Explore CRM issues and how they affect LOFT performance and line operations

Analyze issues, factors, and outcomes in depth, going beyond simply describing what happened
and what they did

Overall rating of Crew Analysis & Evaluation

Depth of Crew Activity


Activity refers to how actively, versus passively, and deeply the crew participates in and initiates
discussion.

Go beyond minimal responses to IP questions

Participate deeply and thoughtfully

Initiate dialogue rather than just responding to questions, and/or interact with each other rather
than only with the IP

Behave in a predominantly proactive rather than reactive manner, being actively involved rather
than just passing through the training

Overall rating of Depth of Crew Activity
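
Both profiles are scored the same way: each element is rated on the 1 (Poor) to 7 (Outstanding) scale, and the element ratings within a category are summed to give the overall category rating. The short sketch below simply illustrates that arithmetic; the ratings shown are made-up values, not data from the study.

    # Each category has four elements rated 1-7, so each category total can
    # range from 4 to 28. The ratings below are illustrative only.
    instructor_ratings = {
        "Introduction": [5, 4, 4, 3],
        "Questions": [6, 5, 5, 4],
        "Encouragement": [5, 5, 4, 4],
        "Focus on Crew Analysis and Evaluation": [4, 4, 3, 3],
        "Use of Videos": [5, 4, 4, 4],
    }

    category_totals = {category: sum(scores) for category, scores in instructor_ratings.items()}
    # e.g., category_totals["Questions"] == 20 out of a possible 28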

Appendix D.

ANCHORING OF THE DEBRIEFING ASSESSMENT BATTERY
IP Introduction
Outstanding:

- Very specifically and thoroughly explains that his role is guide/facilitator and that crew should do
most of the talking and lead the discussion

- Sets strong expectations for proactive crew participation, explicitly stating they should initiate
discussion rather than just responding to IP questions

- Explicitly and emphatically states that crew should dig deep, critically analyzing the LOFT and
their performance

- Gives a persuasive rationale for the crew to participate actively and make their own analysis and
makes a strong case for why it is important to do it this way.

Very Good:

- Clearly conveys that his role is guide/facilitator and that crew should do most of the talking and
lead the discussion

- Clearly conveys that crew should take an active role, initiating discussion rather than just
responding to IP

- Clearly conveys that crew should dig deep, critically analyzing the LOFT and their performance

- Clearly conveys the general rationale for the crew to participate actively and make their own
analysis

Good:

- Conveys that his role is guide/facilitator and that crew should do most of the talking, but not
specifically that they should lead their own discussion.

- Conveys that crew should take an active role, initiating discussion rather than just responding to
IP

- Conveys that crew should dig deep, critically analyzing the LOFT and their performance

- Makes a general statement of the rationale for the crew to participate actively and make their
own analysis

Adequate:

- Conveys that his role is guide/facilitator and that crew should do most of the talking, but does
not emphasize strongly

- Conveys that crew should take an active role and initiate discussion

- Conveys that crew should analyze the LOFT and their performance
- Gives a clear, though implicit rationale for the crew to participate actively and make their own
analysis

Needs Improvement:

- Implies that his role is guide/facilitator and that crew should do most of the talking, but does not
emphasize strongly

- Implies that crew should take an active role and initiate discussion

- Implies that crew should analyze the LOFT and their performance

- Gives a vague, implicit rationale for the crew to participate actively and make their own analysis

Marginal:

- Implies that his role is guide/facilitator and that the crew should talk, but does not emphasize

- Implies that crew should take an active role, but does not specify what they should do.

- Implies that crew should discuss the LOFT and their performance

- Gives vague impression of why crew should participate actively

Poor:

- Does not make clear that his role is guide/facilitator or that crew should do most of the talking

- Does not make clear that crew should take an active role or initiate discussion

- Does not make clear that crew should dig deep or critically analyze the LOFT and their
performance

- Does not give rationale for the crew to participate actively and make their own analysis

IP Questions
Outstanding:

- Consistently asks questions as appropriate to get crew talking & lead them to issues

- Consistently rewords questions or otherwise avoids answering for the crew when they do not
respond immediately or correctly, and consistently uses a pattern of questioning that keeps the
focus on the crew

- Consistently uses probing and follow-up questions as a tool to evoke in-depth discussion and
optimize crew self-discovery, while forcing crew to go beyond yes/no and brief factual answers

- Consistently uses questioning techniques to encourage substantial interaction and sharing of
perspectives among crew members

Very Good:

- Frequently asks questions when appropriate to get crew talking & lead them to issues

- Predominantly rewords questions or otherwise avoids answering for the crew when they do not
respond immediately or correctly and predominantly uses a pattern of questioning that keeps the
focus on the crew

- Frequently uses probing and follow-up questions as a tool to evoke in-depth discussion and
optimize crew self-discovery, pushing crew to go beyond yes/no and brief factual answers

- Frequently uses questioning techniques to encourage interaction and sharing of perspectives
among crew members

Good:

- Generally asks questions as necessary to get crew talking & lead them to issues

- Generally rewords questions or otherwise avoids answering for the crew when they do not
respond immediately or correctly and generally uses a pattern of questioning that keeps the focus
on the crew

- Generally uses probing and follow-up questions to get crew to analyze in depth and to go
beyond yes/no and brief factual answers but may steer crew to predetermined answers while
emphasizing self-discovery.

- Generally uses questioning techniques to encourage interaction and sharing of perspectives
among crew members

Adequate:

- About half of the time asks questions when necessary to get crew talking & lead them to issues

- Generally avoids answering for the crew when they do not respond immediately or correctly, but
may not reword the questions. On average uses a pattern of questioning that keeps the focus on
the crew

- On average uses probing and follow-up questions to get crew to analyze in depth and to go
beyond yes/no and brief factual answers but steers crew to predetermined answers as much as
emphasizes self-discovery.

- On average uses questioning techniques to encourage interaction among crew members

Needs Improvement:

- Sometimes asks questions when necessary to get crew talking & lead them to issues

- To some extent avoids answering for the crew when they do not respond immediately or
correctly and uses a pattern of questioning that keeps the focus on the crew
- Sometimes uses probing and follow-up questions to get crew to analyze in depth and to go
beyond yes/no and brief factual answers but steers crew to predetermined answers more than
emphasizes self-discovery.

- Sometimes uses questioning techniques to encourage interaction among crew members

Marginal:

- Occasionally asks questions to get crew talking & lead them to issues

- Occasionally avoids answering for the crew when they do not respond immediately or correctly
but generally answers for them rather than keeping focus on the crew.

- Occasionally uses probing and follow-up questions to get crew to analyze in depth but generally
settles for yes/no and brief factual answers

- Occasionally uses questioning techniques to encourage interaction among crew members

Poor:

- Rarely asks questions to get crew talking or lead them to issues

- Usually answers for the crew when they do not respond immediately or correctly.

- Rarely uses probing and follow-up questions to get crew to analyze in depth. Usually settles for
yes/no and brief factual answers

- Rarely uses questioning techniques to encourage interaction among crew members

IP Encouragement
Outstanding:

- Consistently communicates an interest in crew views and actively strives to get them to do most
of the talking and lead their own discussion.

- Consistently uses active listening and pauses, avoids interrupting, and follows up on crew
topics.

- Consistently encourages all members to participate and draws out quiet members as necessary.

- Consistently refrains from lecturing and giving own analysis before crew.

Very Good:

- Clearly communicates to the crew that their views are important and works to get them to do
most of the talking and to lead their own discussion.

- Frequently uses techniques such as active listening and pauses, avoids interrupting, and follows
up on crew topics to encourage continued discussion.
- Frequently encourages all members to participate and attempts to draw out quiet members as
necessary.

- Usually refrains from lecturing and giving own analysis before crew.

Good:

- Shows a clear interest in crew views and attempts to get them to do most of the talking. Makes
an effort to get crew to lead their own discussion.

- Often uses active listening and pauses, avoids interrupting, and follows up on crew topics.

- Generally encourages all members to participate, drawing out quiet members as necessary.

- Sometimes lectures, but generally gets crew to analyze situation before giving own analysis.

Adequate:

- On average demonstrates a desire to have crew participate and discuss their views.

- Uses some facilitation techniques to encourage crew discussion and generally avoids
interrupting them. Acknowledges crew topics but may not follow up on them thoroughly.

- Attempts to get all crew members involved.

- On average gets the crew to analyze the situation themselves before evaluating and lecturing to
them.

Needs Improvement:

- Shows interest in crew views but does not push them to do most of the talking.

- Sometimes uses active listening and pauses, and follows up on crew topics, but also sometimes
interrupts.

- Expresses a desire for crew to participate but does not put a lot of effort into getting all members
actively involved.

- Sometimes lectures rather than letting crew do the talking.

Marginal:

- Exhibits only modest interest in crew views.

- Only occasionally uses active listening, pauses, and/or follows up on crew topics, and often
interrupts.

- Expresses a desire for crew to participate but puts minimal effort into actively encouraging them
to do so.
- Tends to lecture and analyze for crew without encouraging them to discuss what happened
themselves.

Poor:

- Gives the impression that crew views are not valued.

- Frequently hinders rather than encourages crew talk and does not follow up on topics initiated
by crew.

- Makes little attempt to get crew members to participate.

- Frequently lectures to crew about what they did and how to improve.

IP Focus on Crew Analysis and Evaluation


Outstanding:

- Continually encourages and pushes crew to analyze along CRM dimensions the situation that
confronted them, what they did to manage the situation, and why they did it.

- Consistently encourages and pushes crew to evaluate their performance and/or ways they
might improve.

- Consistently encourages crew to explore CRM issues and how they specifically affect LOFT
performance and line operations.

- Continually encourages crew to analyze issues, factors, and outcomes in depth, going beyond
simply describing what happened and what they did.

Very Good:

- Frequently encourages and pushes crew to analyze along CRM dimensions the situation that
confronted them, what they did to manage the situation, and why they did it.

- Frequently encourages crew to evaluate their performance and/or ways they might improve.

- Frequently encourages crew to explore CRM issues and how they specifically affect LOFT
performance and line operations.

- Frequently encourages crew to analyze issues, factors, and outcomes in depth, going beyond
simply describing what happened and what they did

Good:

- Generally encourages crew to analyze along CRM dimensions the situation that confronted
them, what they did to manage the situation, and why they did what they did, but may settle for
less than extensive discussion.

- Generally encourages crew to evaluate their performance and/or ways they might improve.
- Generally encourages crew to explore CRM issues, and attempts to get crew to discuss how
they specifically affect LOFT performance and line operations.

- Generally encourages crew to analyze issues, factors, and outcomes in depth. Generally
encourages crew to go beyond simply describing what happened and what they did.

Adequate:

- On average encourages crew to analyze along CRM dimensions the situation that confronted
them and what they did to manage the situation. Encourages but does not push crew to analyze
why they did what they did.

- Tends to encourage crew to evaluate their performance and/or ways they might improve, but
may not pursue thoroughly.

- On average encourages crew to explore CRM issues but tends not to get crew to discuss how
they specifically affect both LOFT performance and line operations.

- Generally encourages crew to analyze issues, factors, and outcomes, but settles for moderate
depth, sometimes letting crew simply describe what happened and what they did.

Needs Improvement:

- Sometimes encourages crew to analyze along CRM dimensions the situation that confronted
them and what they did to manage the situation but does not push crew to discuss why they did
what they did.

- Verbally requests but does not pursue getting the crew to evaluate their performance and/or
ways they might improve.

- Encourages crew to explore CRM issues but does not ask crew to discuss how they specifically
affect LOFT performance and line operations.

- Tends not to push crew to analyze issues, factors, and outcomes in depth. Often settles for
letting the crew simply describe what happened and what they did.

Marginal:

- Only minimally encourages crew to analyze along CRM dimensions the situation that confronted
them and/or what they did to manage it. Does not push crew to discuss why they did what they
did.

- Only occasionally encourages crew to evaluate their performance and/or ways they might
improve.

- Occasionally encourages crew to explore CRM issues, and does not encourage crew to discuss
how they affect LOFT performance or line operations.

- Only occasionally encourages crew to analyze issues, factors, and outcomes in depth. Is content
to let crew simply describe what happened and what they did.

Poor:
- Does not encourage crew to analyze along CRM dimensions the situation that confronted
them, what they did to manage the situation, or why they did it.

- Rarely encourages crew to evaluate their performance or ways they might improve.

- Rarely encourages crew to explore CRM issues.

- Rarely encourages crew to analyze issues, factors, and outcomes in depth.

IP Use of Videos
Outstanding:

- Consistently shows an appropriate number of videos of appropriate duration to
illustrate/introduce topics.

- Consistently uses video equipment efficiently: is able to find desired segment without wasting
time and pauses the video if talk begins while playing.

- Actively evokes and consistently pursues thorough crew discussion of each video segment or
topic.

- Consistently has a point to make and uses the video to make that point.

Very Good:

- Usually shows an appropriate number of videos of appropriate duration to illustrate/introduce
topics.

- Usually uses video equipment efficiently: is able to find desired segment without wasting much
time and pauses the video if substantial talk begins while playing.

- Works to get crew to discuss most of the video segments or topics in detail.

- Usually has a point to make and uses the video to make that point.

Good:

- Generally shows an appropriate number of videos of appropriate duration to illustrate/introduce
topics.

- Tends to use video equipment efficiently: is generally able to find desired segment without
wasting much time and generally pauses the video if substantial talk begins.

- Encourages crew to discuss most video segments or topics and refrains from lecturing to crew
or hindering their discussion.

- Generally has a point to make and usually uses the video to make a point.

Adequate:
- On average shows an appropriate number of videos, usually of appropriate duration, to illustrate
and introduce topics.

- On average uses video equipment somewhat efficiently, finding desired segment without
wasting too much time and generally pausing the video if substantial talk begins while playing.

- Generally encourages crew to discuss video segments or topics, but may also lecture to crew,
thereby somewhat discouraging thorough crew discussion.

- Generally has a point to make, but the point is not always clearly tied to the video.

Needs Improvement:

- Shows somewhat too few or too many videos. Sometimes shows very short and/or very long
segments while trying to illustrate/introduce topics.

- Tends to use video equipment inefficiently: tends to waste some time trying to find desired
segments and is slow to pause the video if substantial talk begins while playing.

- Sometimes encourages crew to discuss video segment or topic, but may lecture, interrupt crew
discussion, and/or not consistently pursue crew discussion.

- Sometimes has a predetermined point to make, and sometimes uses the video to make a point.

Marginal:

- Clearly shows too few or too many videos, sometimes of much too long and/or short a duration.
Many videos not used to illustrate/introduce topics.

- Uses video equipment inefficiently, wasting significant time trying to find desired segments while
rarely pausing the video if substantial talk begins while playing.

- Tends not to discuss video segments, and when they are discussed tends to lecture to crew
about what occurred, only minimally encouraging crew to participate in a discussion.

- Only occasionally has a point to make or uses the video to make a point.

Poor:

- Shows far too few or too many videos, which are often much too long and/or short. Does not
use videos to illustrate/introduce topics.

- Uses video equipment very inefficiently: wastes substantial time trying to find desired segments
and fails to pause the video if substantial talk begins while playing.

- Usually does not discuss video segments, and when discussed usually lectures to crew without
encouraging (and often hindering) crew participation.

- Rarely has a point to make or uses the video to make a point.

Crew Analysis and Evaluation


Outstanding:

- Consistently analyze along CRM dimensions the situation that confronted them, what they did to
manage the situation, and why they did it.

- Consistently evaluate their performance and ways they might improve.

- Consistently explore CRM issues and how they affect LOFT performance and line operations.

- Consistently analyze issues, factors, and outcomes in depth, going beyond simply describing
what happened and what they did.

Very Good:

- Frequently analyze along CRM dimensions the situation that confronted them, what they did to
manage the situation, and why they did it.

- Frequently evaluate their performance and ways they might improve.

- Often explore CRM issues and how they affect LOFT performance and line operations.

- Frequently analyze issues, factors, and outcomes in depth, going beyond simply describing
what happened and what they did.

Good:

- Generally analyze along CRM dimensions the situation that confronted them and what they did
to manage the situation. Briefly discuss why they did what they did.

- Generally evaluate their performance and ways they might improve.

- Generally explore CRM issues and how they affect LOFT performance and/or line operations.

- Generally analyze issues, factors, and outcomes in moderate depth, usually going beyond
simply describing what happened and what they did.

Adequate:

- On average analyze along CRM dimensions the situation that confronted them and what they
did to manage the situation. Briefly discuss why they did what they did.

- On average evaluate their performance and/or ways they might improve.

- On average explore CRM issues and how they affect LOFT performance and/or line operations.

- Analyze some issues, factors, and outcomes in some depth, often going beyond simply
describing what happened and what they did.

Needs Improvement:
- Only part of the time analyze along CRM dimensions the situation that confronted them, what
they did to manage the situation, or why they did it.

- Only sometimes evaluate their performance and ways they might improve.

- Sometimes explore CRM issues but give little discussion of how they affect LOFT performance
or line operations.

- Analyze only a few issues, factors, and outcomes in any depth, sometimes going beyond simply
describing what happened and what they did.

Marginal:

- Occasionally analyze along CRM dimensions the situation that confronted them. Occasionally
discuss what they did to manage the situation or why they did it.

- Only occasionally evaluate their performance and do not discuss ways they might improve.

- Only occasionally explore CRM issues and do not discuss how they affect LOFT performance
and line operations.

- Analyze issues, factors, and outcomes in very little depth, rarely going beyond simply describing
what happened and what they did.

Poor:

- Do little to analyze along CRM dimensions the situation that confronted them, what they did to
manage the situation, or why they did it.

- Rarely evaluate their performance or ways they might improve.

- Rarely explore CRM issues and how they affect LOFT performance and line operations.

- Do not analyze issues, factors, and outcomes in depth; only briefly describe what happened.

Depth of Crew Activity


Outstanding:

- Consistently go substantially beyond minimal responses to IP questions.

- Consistently participate deeply and thoughtfully.

- Continually initiate dialogue and pursue issues to completion rather than just responding to
questions, and consistently interact with each other rather than only with the IP.

- Behave in a consistently proactive rather than reactive manner, being actively involved rather
than just passing through the training.

Very Good:
- Frequently go substantially beyond minimal responses to IP questions.

- Usually participate deeply and thoughtfully.

- Frequently initiate dialogue rather than just responding to questions, and often interact with each
other rather than only with the IP.

- Usually behave in a proactive rather than reactive manner, being actively involved rather than
just passing through the training.

Good:

- Generally go well beyond minimal responses to IP questions.

- Generally participate deeply and thoughtfully.

- Tend to initiate dialogue rather than just responding to questions and generally interact with
each other rather than only with the IP.

- Generally behave in a proactive rather than reactive manner, being actively involved rather than
just passing through the training.

Adequate:

- On average go somewhat beyond minimal responses to IP questions.

- On average participate somewhat deeply and thoughtfully.

- On average initiate dialogue rather than just responding to questions and interact with each
other rather than only with the IP.

- On average behave in a proactive rather than reactive manner, being actively involved rather
than just passing through the training.

Needs Improvement:

- Tend to give slightly more than minimal responses to IP questions.

- Sometimes participate deeply and thoughtfully.

- Tend to just respond to questions rather than initiate dialogue. Tend to interact with the IP more
than with each other.

- Sometimes behave in a more reactive than proactive manner.

Marginal:

- Frequently give only minimal responses to IP questions.

- Only occasionally participate deeply or thoughtfully.


- Tend to just respond to questions rather than initiate dialogue. Only occasionally interact with
each other; tend to interact only with IP.

- Behave in a generally reactive rather than proactive manner.

Poor:

- Consistently give only minimal responses to IP questions.

- Rarely participate deeply or thoughtfully.

- Rarely initiate dialogue; usually just respond to IP. Rarely interact with each other.

- Behave in a consistently reactive rather than proactive manner. Appear to just pass through the
training rather than being actively involved.

Appendix E. Spearman Correlation Coefficients

* - Signif. LE .05    ** - Signif. LE .01    *** - Signif. LE .001    (2-tailed)
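
The coefficients below are Spearman rank correlations between pairs of debriefing variables, flagged at the two-tailed significance levels shown above. For reference only, the sketch below shows how such a coefficient and its flag can be computed with a standard statistics library; SciPy is an assumption introduced here and is not the software used in the original analysis.

    from scipy.stats import spearmanr

    def flag(p):
        """Return the significance marker used in the table."""
        if p <= .001:
            return "***"
        if p <= .01:
            return "**"
        if p <= .05:
            return "*"
        return ""

    # Toy paired observations (illustrative values, not study data)
    x = [30, 25, 40, 28, 35, 22]
    y = [4, 5, 3, 5, 4, 6]

    rho, p = spearmanr(x, y)  # two-tailed p-value by default
    print(f"{rho:.4f}{flag(p)}")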

CRMPERF -.4881**

TECHPERF -.3875* .5561**

SI_INTRO .0922 -.0319 .0368

SI_QUEST .1341 .2708 .1329 .5469**

SI_ENCRG .1491 .1784 -.0535 .4362** .9043**

SI_CONT .2205 .1336 .1702 .4880** .8861** .7763**

SI_QEC .1841 .1873 .0560 .5003** .9667** .9419**

SI_VIDEO -.3847 .3863 .5632** .3948 .5093** .4529*

IPPART -.2131 -.0360 .2846 -.0691 -.4929** -.7481**

IPPOS -.4259** .3741* .4785** -.1384 -.0571 -.1429

IPNEGIMP .5050** -.4711** -.3449 -.1584 -.1394 -.0209

IPNEG .4006* -.2601 -.3723* -.1188 -.1359 -.0207

IPIMP .4224* -.4433* -.1792 -.1607 -.0825 .0109

IPNEUT -.0772 .2733 .4842** .2620 .3599* .2829

IPPERF -.1579 .1401 .4721** .0457 .0479 -.0364

IPCRM -.1831 .1786 .4882** .3525* .3522* .2478

IPTECH .0413 -.0891 -.2728 -.3922* -.4376** -.3613*

IPMIXED .0730 .0108 -.2949 -.2089 -.2899 -.1831


IPNS .2524 -.1205 -.3642* -.0455 .2627 .3428*

IPCRM2 -.1495 .2058 .4801** .3883* .3232 .2038

IPTECH2 .0794 -.1455 -.3799* -.4225* -.4994** -.4055*

IPWPERUT -.2826 -.0506 .2554 .1152 -.3839* -.5794**

DIRQPHR .1133 .0762 .0816 .4122* .5555** .3776*

IPDQ_CA .1390 -.1000 -.0128 .3908* .4512** .3018

IPDQ_FO .0935 .1192 .0872 .4221* .6051** .4546**

IPDQ_FE .0182 .4846 .1014 .1161 .5182 .2278

NONDQPHR .0612 .2254 .1162 -.1990 .1040 .1467

TOTQPHR .1025 .1962 .1610 .4208* .6005** .4342**

INTERUPH .0681 -.0535 -.1407 -.3407* .0384 .1641

INTERRUP -.0379 -.2019 .0242 -.2407 -.1315 -.2084

INTER_UN .0452 -.1619 -.0012 -.1411 .1234 .0533

VSEGPERH .2297 -.1332 -.0404 -.0243 -.0809 -.0413

AVSEGDUR -.2558 .0209 .0703 -.3113 -.1606 -.1687

SC_ACTIV .1616 -.0129 -.1347 .1338 .5926** .7798**

SC_CONT .2223 .0537 -.0501 .2776 .7509** .7830**

CAPART .1350 -.1540 -.3791* .2469 .4096* .5412**

FOPART .1505 .0100 -.0899 .1221 .3557* .5847**

FEPART -.3636 .5224 -.1584 -.1639 .2091 .5467

CREWPART .2198 .0269 -.2937 .0661 .4888** .7443**

CREWPOS -.0170 .3598* .1648 .2579 .4267** .3442*

CREWNEIM .4817** -.3829* -.0423 -.1356 -.0482 .0567

CREWNEG .4983** -.3784* -.1956 -.2538 -.1229 -.0079

CREWIMPR .4069* -.2460 .1401 -.0146 .0838 .1386

CREWNEUT -.1443 .0758 .0513 .2106 .2612 .2205

CREWPERF -.0674 .1056 .2349 .1543 .2847 .2063

CREWCRM -.0390 .0998 .3983* .4463** .5629** .4044*

CREWTECH .1193 -.0381 -.3174 -.3605* -.3820* -.2782

CREWMIX -.1820 .1848 -.1196 -.1849 -.1566 -.0765


CREWNS .3989* -.3928* -.4842** -.3296 -.3924* -.2692

CREWCRM2 -.1646 .2015 .4667** .4331** .5778** .4123*

CREWTEC2 -.0287 .0318 -.3552* -.3900* -.4374** -.3043

CAWPERES -.0144 .0045 -.1159 .3224 .3499* .4077*

FOWPERES -.0769 -.1506 .0587 .3114 .1115 .1377

FEWPERES .1644 -.0884 -.0286 .3881 .2648 .2082

CREWPERE .0098 -.0712 -.0620 .3501* .2789 .3400*

CAWPERUT -.0530 .0254 -.1600 .3890* .3780* .4511**

FOWPERUT -.1424 -.0452 .0163 .4651** .2832 .3569*

FEWPERUT -.2466 -.1805 -.0382 .1609 .0046 .2151

CREWPERU -.1298 .0552 -.0052 .5212** .4160* .4927**

DURATION CRMPERF TECHPERF SI_INTRO SI_QUEST SI_ENCRG

CASIUTPH .0427 -.1121 -.2770 -.1161 .1925 .3937*

FOSIUTPH .0815 -.0038 -.1711 -.0087 .2200 .4783**

FESIUTPH -.0909 .1636 -.3801 -.2868 .0727 .4510

CREWSIUT .1095 -.0342 -.2893 -.1216 .2326 .5146**

CANALUTT -.1476 .1638 -.0313 .1808 .4922** .6006**

FOANALUT -.0394 .0016 .1167 .0247 .3895* .4894**

FEANALUT .1777 .2336 -.2002 .1233 .3645 .3676

CREWANUT -.0263 .1598 -.0224 .0699 .5590** .7086**

CREWPAQP .1906 -.1825 -.4826** -.1131 -.0720 .1048

FOPAQPH .3335* -.1568 -.2807 .0997 -.0364 .1480

FEPAQPH -.1864 .3065 -.0102 -.4594 .0621 .1126

CAPAQPH .0735 -.2203 -.4341* -.1646 -.1361 -.0379

NONDQ_CA -.1144 .0641 .0003 -.0144 -.0658 .0390

NONDQ_FO -.0939 -.0471 .0329 -.0577 .0513 .0876

NONDQ_FE -.2023 .6371* .0000 -.0721 .2575 .5023

NONDQ_NO .3770* .0467 -.0227 -.0299 .0828 .0491

DURATION CRMPERF TECHPERF SI_INTRO SI_QUEST SI_ENCRG


SI_QEC .9164**

SI_VIDEO .3614 .4699*

IPPART -.4036* -.5776** -.0551

IPPOS .0401 -.0920 .1281 .1329

IPNEGIMP -.0948 -.0407 -.3417 -.1460 -.1979

IPNEG -.2323 -.0944 -.2766 -.1609 -.2192 .7942**

IPIMP .0588 .0300 -.2193 -.1672 -.1198 .8565**

IPNEUT .3165 .3478* .5350** -.1390 -.0179 -.1339

IPPERF .1208 .0371 .2545 .0520 .7291** .1576

IPCRM .4533** .3859* .6864** .0486 .2782 -.3150

IPTECH -.5236** -.4760** -.6398** .0637 -.1579 .2108

IPMIXED -.3117 -.2663 -.1770 -.0544 .0230 .3470*

IPNS .2112 .2538 -.1706 -.3976* -.3283 -.0397

IPCRM2 .4469** .3648* .6551** .0977 .2982 -.2635

IPTECH2 -.5883** -.5246** -.6997** .0790 -.1677 .3398*

IPWPERUT -.3072 -.4411** .0853 .8200** .2448 -.2443

DIRQPHR .4956** .4743** .2442 -.0886 -.0663 .0767

IPDQ_CA .4159* .3781* .0835 -.0231 -.0579 .1045

IPDQ_FO .4971** .5308** .2816 -.1241 -.1704 .1013

IPDQ_FE .2551 .5182 .9048** .2785 .3455 .5577

NONDQPHR .0848 .1064 .1720 -.2333 .0415 .0841

TOTQPHR .5150** .5120** .3819 -.1382 -.0659 .0703

INTERUPH -.0021 .0739 -.1735 -.4436** -.0304 .4258**

INTERRUP -.1414 -.1605 -.1458 .1207 .0092 .1507

INTER_UN .1483 .1082 -.1044 -.1133 -.0534 .0781

VSEGPERH -.0538 -.0498 -.0142 -.0713 -.1088 .1043

AVSEGDUR -.2481 -.1864 .0560 .1889 -.1141 .0327

SC_ACTIV .5137** .6813** .2614 -.8441** -.0264 .1025


SC_CONT .7487** .8242** .3279 -.6702** .0565 .0264

CAPART .3631* .4907** .1150 -.6180** -.1737 .2339

FOPART .2210 .3880* .0499 -.8275** -.0451 .0223

FEPART -.0137 .2909 .0238 -.7671** .2091 -.1521

CREWPART .4007* .5741** .0457 -.9998** -.1413 .1482

CREWPOS .4954** .4052* .0876 -.1581 .3549* -.3315*

CREWNEIM .0430 .0597 -.2126 -.0520 -.1627 .7532**

CREWNEG -.0717 -.0332 -.3366 -.0888 -.2845 .7322**

CREWIMPR .2078 .1729 -.0315 -.1093 -.0441 .6022**

CREWNEUT .1688 .2116 .2330 -.1397 .0418 -.0771

SI_CONT SI_QEC SI_VIDEO IPPART IPPOS IPNEGIMP

CREWPERF .2539 .2632 .2464 .0800 .2325 .1183

CREWCRM .6310** .5550** .6691** -.0367 .2144 -.2777

CREWTECH -.4985** -.4228* -.6131** -.0084 -.1464 .3302*

CREWMIX -.1967 -.1489 .2032 -.0925 -.0351 .0401

CREWNS -.3631* -.3257 -.7452** -.0038 -.3572* .3686*

CREWCRM2 .6446** .5660** .7317** -.0260 .2643 -.3642*

CREWTEC2 -.5404** -.4560** -.4977** -.0166 -.1219 .2831

CAWPERES .3596* .4064* .1363 -.3094 .1671 -.1853

FOWPERES .1587 .1353 .1428 .0188 .1993 -.2362

FEWPERES -.0183 .0639 .2515 .0482 -.0274 -.3449

CREWPERE .2876 .3270 .1069 -.1876 .1909 -.1563

CAWPERUT .4171* .4573** .2877 -.3675* .0824 -.1926

FOWPERUT .2232 .2909 .3186 -.2833 .1362 -.2122

FEWPERUT -.0297 .0000 .1205 -.2638 .2603 -.3079

CREWPERU .3862* .4503** .3259 -.3764* .1707 -.2239

CASIUTPH .1348 .2854 .0654 -.6366** -.1033 .3896*

FOSIUTPH .0860 .2697 .0265 -.8281** -.0040 .1751

FESIUTPH .1412 .1909 -.2143 -.7808** .0455 -.1106

CREWSIUT .1510 .3278 .0383 -.8718** -.1005 .3051


CANALUTT .3916* .5358** .1723 -.5119** .1219 .1005

FOANALUT .4202* .4424** .1758 -.4703** .1881 -.0955

FEANALUT .1461 .2597 .2635 -.4531 -.2369 -.1478

CREWANUT .5355** .6371** .1606 -.7068** .1051 -.0047

CREWPAQP -.1514 -.0290 -.1895 -.3358* -.3882* .4817**

FOPAQPH -.0756 .0058 -.1203 -.2486 -.4140* .3691*

FEPAQPH .0240 .2198 .3546 .0576 .4062 .5645

CAPAQPH -.1990 -.1045 -.2513 -.1733 -.2573 .4151*

NONDQ_CA .0527 .0037 .2257 -.0877 -.0766 -.1307

NONDQ_FO .0231 .0076 -.0214 -.1894 .0892 -.1900

NONDQ_FE .1889 .3357 -.0976 -.8037** .0736 -.2774

NONDQ_NO .0259 .0592 -.2385 .0337 -.1362 .2710

SI_CONT SI_QEC SI_VIDEO IPPART IPPOS IPNEGIMP

IPIMP .4395**

IPNEUT -.1263 -.0604

IPPERF .0732 .1660 .4550**

IPCRM -.3486* -.1656 .2497 .1764

IPTECH .2897 .1049 -.3892* -.1794 -.7359**

IPMIXED .4055* .1320 -.0745 .1651 -.5774** .2114

IPNS -.1367 .0274 .0956 -.3378* -.3054 -.0883

IPCRM2 -.2945 -.1405 .2466 .2199 .9621** -.7851**

IPTECH2 .4272** .1715 -.3803* -.1254 -.8594** .9402**

IPWPERUT -.2118 -.3080 -.1835 .1260 .2371 -.1639

DIRQPHR -.0880 .1356 .2457 .1434 -.0436 -.0249

IPDQ_CA -.0630 .1456 .1282 .1504 -.1201 .0768

IPDQ_FO -.0355 .1501 .1986 -.0379 .1637 -.1558

IPDQ_FE .3506 .4360 .1187 .3736 -.0820 .0959

NONDQPHR .1600 .1628 -.1312 -.0597 -.0942 .2405

TOTQPHR -.0302 .1692 .1724 .1029 -.0537 .0841

INTERUPH .3257 .4298** -.1852 -.0743 -.2117 .2476


INTERRUP .0506 .1151 -.2439 -.1019 -.0470 -.0046

INTER_UN -.0074 .0921 -.1480 -.1501 -.0310 -.0788

VSEGPERH .0507 .2162 .0905 -.0326 -.1260 .2894

AVSEGDUR .1583 -.1726 -.0639 -.1172 -.0763 -.1026

SC_ACTIV .0467 .1567 .1422 .0242 .2755 -.3878*

SC_CONT -.0499 .0790 .2158 .1230 .4037* -.5309**

CAPART .1443 .2902 .0240 -.0362 .0443 -.0162

FOPART .0592 .0677 .1382 .0076 -.0174 -.0313

FEPART -.2618 -.1054 .1142 .0182 .1412 -.3379

CREWPART .1645 .1670 .1349 -.0616 -.0522 -.0629

CREWPOS -.2978 -.3226 .1063 .2957 .1132 -.2555

CREWNEIM .5743** .6183** -.1603 .1062 -.0697 .1107

CREWNEG .6063** .5263** -.1302 -.0311 -.2695 .1575

CREWIMPR .3514* .6740** -.0301 .2173 .1116 .0363

CREWNEUT .0030 -.0853 .2206 .1072 -.0032 -.2108

CREWPERF .1291 .0171 .1861 .4086* .0802 -.2408

CREWCRM -.3434* -.1837 .3301* .2506 .7550** -.7112**

CREWTECH .4456** .1990 -.4269** -.2004 -.6894** .8469**

CREWMIX .1209 .0252 .1592 .0579 -.3808* .0852

CREWNS .2893 .2581 -.4427** -.4304** -.3637* .4517**

CREWCRM2 -.4316** -.2435 .4476** .3158 .7509** -.7826**

CREWTEC2 .3902* .1923 -.2852 -.1345 -.7706** .7046**

CAWPERES -.1898 -.2329 .0284 .0914 .3749* -.4673**

FOWPERES -.2904 -.2100 .0190 .1654 .2522 -.3212

FEWPERES -.1385 -.2792 -.0528 .0732 .0343 -.1147

CREWPERE -.1736 -.2203 -.0144 .1294 .3556* -.4511**

CAWPERUT -.1825 -.2214 .1167 .0388 .4711** -.5919**

FOWPERUT -.1642 -.1838 .1196 .1408 .1950 -.3330*

FEWPERUT -.1949 -.2479 .1858 .3158 .3959 -.6009

CREWPERU -.1943 -.2226 .1757 .1725 .4020* -.5442**


CASIUTPH .3112 .4112* -.0579 -.0578 .0304 .0227

FOSIUTPH .1909 .1946 .0049 .0172 -.0755 .1043

FESIUTPH -.2524 -.0240 .2466 -.0683 .1503 -.4566

CREWSIUT .2899 .3056 .0082 -.0568 -.0630 .0079

CANALUTT .1006 .0301 .0460 .1463 .1254 -.1739

FOANALUT -.1163 -.0149 .0155 .0914 .3638* -.4239**

FEANALUT -.3655 .0528 .0938 -.1553 -.0868 -.0892

CREWANUT -.0475 .0216 .0819 .0735 .2296 -.3679*

CREWPAQP .5110** .4420** -.2524 -.4011* -.2781 .2315

FOPAQPH .4622** .3138 -.0706 -.3189 -.2032 .0537

FEPAQPH .1892 .4937 -.0216 .1868 -.0287 .0384

CAPAQPH .4025* .3833* -.2920 -.3244 -.1971 .2943

NONDQ_CA -.1721 -.0135 .2349 .0080 .0505 -.1806

NONDQ_FO -.1098 -.0767 -.0011 -.0309 -.0619 .0266

NONDQ_FE -.1702 -.3078 .1940 -.0714 .1083 -.1316

NONDQ_NO .3704* .0548 -.2307 -.0898 -.2099 .2261

IPNEG IPIMP IPNEUT IPPERF IPCRM IPTECH

IPNS -.1320

IPCRM2 -.3955* -.4022*

IPTECH2 .4839** -.1150 -.8383**

IPWPERUT .0483 -.5308** .3110 -.1053

DIRQPHR -.1527 .2048 -.0724 -.0789 -.1914

IPDQ_CA -.1337 .1482 -.1473 .0226 -.1217 .9393**

IPDQ_FO -.3157 .2046 .0936 -.2262 -.1922 .8039**

IPDQ_FE .2182 -.2182 .0364 .1636 -.0137 .8929**

NONDQPHR .2117 -.2320 -.0490 .2333 -.3348* -.0853

TOTQPHR -.0651 .0085 -.0470 .0236 -.2784 .8323**

INTERUPH .1828 .1166 -.2362 .2571 -.4455** .1886

INTERRUP .1444 -.1206 -.0039 .0826 .2730 -.0434

INTER_UN .0721 .0834 .0104 -.0265 .0591 .0306


VSEGPERH -.1160 .1997 -.2259 .1696 -.2053 -.0892

AVSEGDUR .1682 -.1189 -.0347 .0438 .1527 -.2248

SC_ACTIV -.0751 .2566 .2384 -.3834* -.5495** -.0005

SC_CONT -.1208 .1880 .4032* -.5208** -.3985* .2264

CAPART -.0852 .1396 -.0011 -.0567 -.5008** .2575

FOPART .0128 .2593 -.0713 -.0540 -.6163** .0007

FEPART -.1182 .6455* -.0455 -.1818 -.5890 -.4146

CREWPART .0548 .4037* -.1013 -.0775 -.8224** .0844

CREWPOS -.0097 .1653 .1335 -.2730 -.0214 .2161

CREWNEIM .1506 -.1754 -.0361 .1650 -.0854 .0673

CREWNEG .4016* -.0005 -.2212 .2685 -.1441 .0793

CREWIMPR -.1407 -.1784 .1138 .0098 -.1738 .1094

CREWNEUT .1253 .0982 .0338 -.1251 -.0099 .3613*

CREWPERF .1364 -.0906 .1282 -.1516 .1759 .3697*

CREWCRM -.5138** -.0223 .7012** -.8040** .1719 .1765

CREWTECH .3043 -.0330 -.6964** .8428** -.2321 -.1464

CREWMIX .6835** -.0656 -.2421 .2794 -.1159 -.0233

CREWNS -.0173 .2175 -.4008* .4073* -.1866 -.1524

CREWCRM2 -.4270** -.0340 .7229** -.8393** .1910 .2341

CREWTEC2 .6020** -.0504 -.6989** .8218** -.1901 -.1454

CAWPERES -.0462 -.0077 .3974* -.4277** .0782 -.0207

FOWPERES .1025 -.1842 .3211 -.2528 .3306* -.0323

FEWPERES .0776 -.0183 -.1005 -.1553 .6284* -.0023

CREWPERE .0866 -.1152 .4146* -.3739* .2045 -.0503

CAWPERUT -.0023 -.0213 .5040** -.5465** .0549 -.0512

FOWPERUT .1364 -.1145 .2585 -.2514 .1146 .1092

FEWPERUT -.1279 .1370 .2420 -.5160 .2729 -.4005

CREWPERU .1137 -.0985 .4534** -.4710** .0778 .0692

CASIUTPH .0401 .1305 -.0304 .0098 -.5240** .0266

FOSIUTPH .0974 .1331 -.1201 .0841 -.6328** -.0225


FESIUTPH -.1273 .6182* .0818 -.2909 -.6986* -.4966

CREWSIUT .1340 .2529 -.1110 .0185 -.6909** -.0195

CANALUTT .0683 .1104 .0992 -.1648 -.2805 .3096

FOANALUT -.1979 .1203 .3227 -.4267** -.1776 -.0723

FEANALUT .0592 .2323 -.0820 -.0501 -.1602 -.1096

CREWANUT -.0271 .2712 .1981 -.3562* -.4140* .1566

CREWPAQP .1144 .2621 -.3015 .2486 -.4976** .0693

FOPAQPH .1815 .2507 -.1826 .0942 -.3522* -.0899

FEPAQPH .0621 .2246 -.0669 .1243 -.4465 .4599

CAPAQPH .0439 .0596 -.2226 .2924 -.3005 .0843

NONDQ_CA .1291 .0712 .0551 -.1368 .0440 -.0204

NONDQ_FO -.2275 .3705* -.1630 -.0709 -.2660 -.0180

NONDQ_FE -.3357 .5977 -.0230 -.1241 -.8291** -.4032

NONDQ_NO .3969* -.2624 -.0928 .3077 -.0043 -.0843

IPMIXED IPNS IPCRM2 IPTECH2 IPWPERUT DIRQPHR

IPDQ_FO .6965**

IPDQ_FE .7000* .6758*

NONDQPHR -.0822 -.0429 .5182

TOTQPHR .8065** .7103** .7882** .4033*

INTERUPH .1526 .2137 .3326 .2721 .1747

INTERRUP -.0574 .0392 -.3158 -.1675 -.1740 .5489**

INTER_UN -.0085 .0012 -.1545 -.1207 -.0769 .4477**

VSEGPERH -.1754 -.0553 .1905 .0223 -.1664 -.0990

AVSEGDUR -.1589 -.3120 -.4048 .0580 -.1140 -.0009

SC_ACTIV -.0505 .1382 -.3052 .0945 .0226 .3627*

SC_CONT .1390 .3184 .1327 .0543 .2088 .2207

CAPART .2603 .2887 .2096 .0612 .2807 .3854*

FOPART -.0130 .1481 -.3387 .2553 .1047 .3439*

FEPART -.4091 -.3014 -.1000 -.1545 -.3554 -.2688

CREWPART .0190 .1232 -.3098 .2332 .1337 .4461**


CREWPOS .1835 .0875 .2870 -.0472 .2252 -.2244

CREWNEIM .1062 .0649 .2648 .1392 .0643 .3196

CREWNEG .1130 .0891 .0412 .1321 .0476 .5059**

CREWIMPR .1148 .0868 .3494 .1930 .1586 .1754

CREWNEUT .3054 .1948 .3273 -.1349 .1995 .1746

CREWPERF .3569* .1768 .5740 -.1032 .2523 -.0524

CREWCRM .1049 .3269 -.3091 -.2851 .0848 -.3296*

CREWTECH -.0692 -.1674 .2700 .3752* .0605 .3402*

CREWMIX -.0359 -.2937 .3059 .3474* .0995 .1304

CREWNS -.0874 -.1375 -.2597 -.1177 -.2723 .3067

CREWCRM2 .1440 .3307* -.2091 -.2543 .1343 -.3424*

CREWTEC2 -.0868 -.3055 .3091 .4108* .0484 .3145

CAWPERES -.0904 .1120 -.2415 -.3855* -.1958 .0102

FOWPERES .0128 -.0950 -.2727 -.1095 -.1053 -.1297

FEWPERES -.1096 -.0734 -.0548 -.3607 -.1190 -.3959

CREWPERE -.0751 .0597 -.1913 -.2408 -.1703 -.0074

CAWPERUT -.1298 .0889 -.2055 -.3247 -.1865 -.0418

FOWPERUT .0859 .0815 -.2014 -.0636 .0695 .0593

FEWPERUT -.4429 -.3991 -.2877 -.7078* -.6110* -.3730

CREWPERU .0121 .1741 -.1868 -.1617 .0113 .0181

CASIUTPH -.0330 .1082 .2273 .1537 .0264 .7105**

FOSIUTPH -.0399 .0873 -.2455 .2911 .0864 .6107**

FESIUTPH -.4273 -.2968 -.3818 -.2455 -.4510 -.1048

CREWSIUT -.0781 .0410 -.3182 .2012 -.0106 .6982**

CANALUTT .2972 .2892 .2364 -.0207 .2105 .4750**

FOANALUT -.0647 .0747 -.5182 .0869 -.0507 .1089

FEANALUT -.2642 -.1442 -.0410 -.1321 -.0799 -.6027*

CREWANUT .1295 .1897 -.2460 .0678 .1004 .3381*

CREWPAQP .0207 .1285 .3781 .2390 .1102 .5028**

FOPAQPH -.0733 -.0120 .1535 .3106 .0572 .1058


FEPAQPH .5592 .4441 .5879 .5448 .4551 .7712**

CAPAQPH .0607 .1576 .3964 .0933 .0813 .6262**

NONDQ_CA -.0318 -.0236 -.2460 -.0244 -.0816 -.0307

NONDQ_FO .0554 -.0543 -.1150 .1036 .0222 .0410

NONDQ_FE -.3265 -.0762 -.2437 .1058 -.1429 -.1313

NONDQ_NO -.0608 -.1619 .1169 .3255 .1145 .1569

IPDQ_CA IPDQ_FO IPDQ_FE NONDQPHR TOTQPHR INTERUPH

INTER_UN .8014**

VSEGPERH -.2676 -.1850

AVSEGDUR .2395 .2375 -.7074**

SC_ACTIV .1082 .2726 -.0797 -.0789

SC_CONT .0924 .2679 -.1552 -.1150 .8721**

CAPART -.0580 -.0335 -.0108 -.2827 .6260** .5476**

FOPART -.0394 .0800 .0895 -.2717 .7127** .5029**

FEPART -.6133* -.4000 .3571 -.0476 .5923 .1785

CREWPART -.1159 .1164 .0570 -.1767 .8434** .6696**

CREWPOS -.2554 -.1824 -.1432 -.1782 .1422 .3660*

CREWNEIM .1363 .1139 -.0139 .1367 .1125 .1332

CREWNEG .3070 .2169 -.1471 .2419 .0835 .0752

CREWIMPR -.0601 .0180 .1798 -.0492 .1539 .1793

CREWNEUT .1775 .2952 -.2334 .2762 .1257 .2030

CREWPERF -.0168 .0460 -.1963 .2906 .0176 .2342

CREWCRM -.0435 .0938 -.0303 -.1744 .3440* .5645**

CREWTECH -.0010 -.1514 .2469 -.0627 -.2718 -.4695**

CREWMIX -.0572 .0134 -.3908* .5376** -.0786 -.1672

CREWNS .2006 .0991 .0099 .0089 -.1148 -.2475

CREWCRM2 -.0612 .0874 -.1198 -.0889 .3181 .5539**

CREWTEC2 -.0070 -.1106 .0533 .1903 -.2868 -.4932**

CAWPERES .2194 .2232 -.1251 -.3597 .5312** .5860**

FOWPERES .1552 .1904 .0432 -.1932 .1740 .2191


FEWPERES .3218 .4703 .6946 -.5030 .3364 .2828

CREWPERE .2365 .2198 -.0769 -.3119 .4472** .5037**

CAWPERUT .1056 .1748 -.1736 -.2671 .5783** .5990**

FOWPERUT .1680 .2622 -.1310 -.1743 .3940* .3836*

FEWPERUT .0115 .2466 .8796** -.6506 .5904 .2276

CREWPERU .1161 .1681 -.1923 -.2460 .5690** .5787**

CASIUTPH .2080 .1783 .0469 -.0321 .6359** .4261**

FOSIUTPH .1314 .1444 -.0409 -.1349 .6923** .4471**

FESIUTPH -.5584 -.5818 .3333 -.0476 .6560* .3524

CREWSIUT .1439 .1883 .0041 -.0496 .7628** .5138**

CANALUTT .1757 .1688 -.1895 .0407 .6313** .6498**

FOANALUT .0978 .3084 -.3010 .2060 .6382** .5990**

FEANALUT -.3601 -.1185 .4671 -.1198 .4886 .4060

CREWANUT .0731 .2291 -.2420 .1195 .8035** .8045**

CREWPAQP -.0599 -.0245 .2674 -.1392 .1068 -.1192

FOPAQPH -.3475* -.1933 .3556 -.1745 .0619 -.1502

FEPAQPH -.3032 -.3776 -.1909 -.0273 -.2922 -.1227

CAPAQPH .2510 .1216 .0721 -.1327 .0326 -.1494

NONDQ_CA -.1616 -.1305 .1983 -.3035 .0102 .0021

NONDQ_FO -.1619 -.0294 -.0322 .0055 .0931 -.0482

NONDQ_FE -.6644* -.7587** -.2684 .2684 .4263 .1528

NONDQ_NO .2396 .2121 -.1900 .3059 .0276 .0508

INTERRUP INTER_UN VSEGPERH AVSEGDUR SC_ACTIV SC_CONT

FOPART .3255

FEPART -.0228 .6819*

CREWPART .6175** .8274** .7563**

CREWPOS .1541 .0063 .1139 .1535

CREWNEIM .0034 .0062 -.7854** .0529 -.2434

CREWNEG -.0284 .0652 -.4623 .0955 -.3064 .8433**

CREWIMPR .0794 .0510 -.3632 .1046 -.1589 .8195**


CREWNEUT -.1238 .1549 .2182 .1374 .0137 .0198

CREWPERF -.2213 -.1263 -.0319 -.0871 .2930 .3932*

CREWCRM .0976 .0121 .1091 .0323 .3686* -.0665

CREWTECH .0902 -.0127 -.2838 .0107 -.1637 .1391

CREWMIX -.1222 .0279 .0913 .0919 -.1653 -.0859

CREWNS .0341 -.0528 .0000 .0139 -.3955* .2562

CREWCRM2 .0365 .0124 .2091 .0203 .3909* -.1425

CREWTEC2 -.0055 -.0161 -.1455 .0179 -.2244 .0368

CAWPERES .3525* .2718 -.0228 .3074 .3896* -.1088

FOWPERES -.2118 .2142 -.1091 -.0232 .0705 .0747

FEWPERES -.1465 .2046 .0274 -.0801 .0000 .0734

CREWPERE .1102 .3000 -.0774 .1855 .2418 .0634

CAWPERUT .4544** .2247 .2146 .3662* .3275 -.2040

FOWPERUT .0939 .4415** -.1236 .2793 .1550 -.0184

FEWPERUT -.1281 .3793 .4429 .2265 -.0435 -.2890

CREWPERU .3031 .4420** .1230 .3730* .3090 -.1214

CASIUTPH .7339** .3639* -.0727 .6380** -.1266 .2069

FOSIUTPH .4726** .8953** .4000 .8284** -.0610 .1006

FESIUTPH .3007 .5401 .7909** .7882** .1503 -.6667*

CREWSIUT .6350** .6784** .3545 .8730** -.0416 .1731

CANALUTT .5414** .3437* .1727 .5082** .2672 .2190

FOANALUT .0162 .5989** .3182 .4700** .0817 .1647

FEANALUT .1826 .2156 .5421 .4384 .2283 -.3936

CREWANUT .3919* .5782** .5148 .7048** .3030 .1725

CREWPAQP .3276 .1419 .2369 .3396* -.3869* .1669

FOPAQPH .0912 .1993 .5397 .2512 -.2738 .2214

FEPAQPH .0719 -.0048 .1434 -.0575 .0024 .1032

CAPAQPH .3555* .0349 -.2192 .1782 -.3845* .0684

NONDQ_CA .0156 .1164 .0182 .0824 .0364 -.0861

NONDQ_FO -.0820 .3578* .6069* .1909 -.0781 -.2676


NONDQ_FE .2742 .6412* .6299* .8341** .3433 -.8430**

NONDQ_NO -.0800 -.1155 -.2150 -.0312 .0158 .4074*

CAPART FOPART FEPART CREWPART CREWPOS CREWNEIM

CREWIMPR .4525**

CREWNEUT .0920 -.0034

CREWPERF .2573 .3405* .6928**

CREWCRM -.2761 .0930 .0133 .2389

CREWTECH .1986 .0660 -.2862 -.2879 -.7663**

CREWMIX .1354 -.1806 .3615* .1431 -.5470** .1328

CREWNS .3938* .0780 -.2160 -.3571* -.5080** .3962*

CREWCRM2 -.3159 .0181 .1339 .3166 .9637** -.8524**

CREWTEC2 .1955 -.0665 -.0144 -.1558 -.9191** .8407**

CAWPERES -.1432 -.1435 -.0413 -.0362 .4524** -.4058*

FOWPERES .0000 -.0111 .2657 .2887 .2003 -.4171*

FEWPERES -.1333 .0300 .6347* .5606 .3196 -.2874

CREWPERE .0296 -.0659 .1153 .1541 .3461* -.4188*

CAWPERUT -.1893 -.2439 .0172 -.0827 .5069** -.5256**

FOWPERUT -.0444 -.0855 .5080** .3274 .2085 -.3552*

FEWPERUT -.3034 -.1640 .4338 .2265 .4977 -.7655**

CREWPERU -.0992 -.1938 .3139 .1777 .4087* -.4951**

CASIUTPH .2600 .1634 .1051 -.1070 -.0966 .1821

FOSIUTPH .1781 .1182 .1292 -.1628 -.1221 .1704

FESIUTPH -.3844 -.3265 .0091 -.1913 .2364 -.3341

CREWSIUT .2564 .1454 .1904 -.0842 -.1183 .1497

CANALUTT .2187 .1127 .4015* .4444** .1691 -.1179

FOANALUT .0722 .2119 .2646 .2536 .3994* -.4353**

FEANALUT -.5138 .1613 .0911 .1963 .1913 -.0642

CREWANUT .1444 .1754 .3957* .3909* .3037 -.3204

CREWPAQP .2777 .0720 .0184 -.2349 -.4420** .4059*

FOPAQPH .2768 .1393 .0295 -.1080 -.3247 .2516


FEPAQPH .2960 .0387 .0812 .1485 -.3393 .1059

CAPAQPH .2531 -.0825 -.0182 -.3302* -.4381** .4343**

NONDQ_CA -.0677 -.0230 -.1597 -.1131 .0859 -.2083

NONDQ_FO -.1979 -.1318 -.0403 -.2170 -.0811 -.0183

NONDQ_FE -.5116 -.4163 -.1333 -.4608 .0782 .1782

NONDQ_NO .4496** .2232 .0068 .1625 -.2793 .3743*

Column variables for the coefficients above:  CREWNEG  CREWIMPR  CREWNEUT  CREWPERF  CREWCRM  CREWTECH

CREWNS -.1818

CREWCRM2 -.3582* -.6272**

CREWTEC2 .6201** .2307 -.8805**

CAWPERES -.3014 -.1795 .4477** -.4447**

FOWPERES .0687 -.3072 .2841 -.2644 .4144*

FEWPERES -.3211 -.3227 .2329 -.3196 .3570 .7489**

CREWPERE -.1563 -.2041 .3762* -.3872* .8480** .7899**

CAWPERUT -.1737 -.2041 .5203** -.4900** .8735** .3000

FOWPERUT .1697 -.4285** .2988 -.1594 .5123** .7613**

FEWPERUT -.2523 -.4302 .5708 -.5525 .5881 .6210*

CREWPERU .0220 -.4185* .4775** -.3633* .7810** .6157**

CASIUTPH .0425 .1557 -.1445 .1454 .1370 -.1717

FOSIUTPH .0749 .0789 -.1431 .1476 .2505 .0304

FESIUTPH -.0411 -.1048 .3273 -.2000 .0683 -.2818

CREWSIUT .1412 .1161 -.1404 .1646 .2152 -.0971

CANALUTT .0204 -.1672 .1857 -.1102 .3701* .2277

FOANALUT -.0517 -.2862 .4288** -.3829* .3310* .4301**

FEANALUT -.0664 -.2557 .1822 -.1367 .0183 -.0410

CREWANUT .0495 -.2219 .3398* -.2571 .3776* .2591

CREWPAQP .1798 .3765* -.4867** .4156* -.2774 -.3370*

FOPAQPH .2110 .1988 -.3534* .2961 -.3330* -.1627

FEPAQPH .3145 .1269 -.2246 .2915 -.4407 -.4588

CAPAQPH .0559 .4428** -.4932** .3926* -.1299 -.2885

NONDQ_CA .1978 -.3828* .1735 -.0355 .1863 .2070

NONDQ_FO -.0074 .0982 -.0724 -.0115 -.1763 -.0918

NONDQ_FE -.0901 .2097 .0782 -.0460 -.1244 -.5472

NONDQ_NO .2194 .2285 -.3275 .3317* -.0552 -.0090

Column variables for the coefficients above:  CREWMIX  CREWNS  CREWCRM2  CREWTEC2  CAWPERES  FOWPERES

CREWPERE .7368**

CAWPERUT .4381 .7063**

FOWPERUT .7218* .7192** .4735**

FEWPERUT .6858* .7643** .6560* .6345*

CREWPERU .6087* .8377** .8217** .8458** .7895**

CASIUTPH -.0594 .0372 .2816 .0924 -.0046 .2033

FOSIUTPH -.0365 .1993 .2065 .3442* .1096 .3530*

FESIUTPH -.1872 -.1913 .2648 -.2746 .2831 .0501

CREWSIUT -.0137 .1073 .3054 .2351 .2146 .3018

CANALUTT .1187 .3895* .3482* .3942* .1096 .4741**

FOANALUT .2648 .4293** .2822 .4693** .4566 .4557**

FEANALUT .3638 -.0753 .2449 -.1376 .2494 -.0365

CREWANUT .2014 .3763* .3896* .4001* .2700 .4802**

CREWPAQP -.4348 -.2973 -.1309 -.1339 -.1190 -.1717

FOPAQPH -.1122 -.2273 -.1310 -.0217 .2173 -.0631

FEPAQPH -.5497 -.4527 -.4705 -.4692 -.3433 -.3736

CAPAQPH -.5247 -.1877 -.0598 -.1218 -.3115 -.1341

NONDQ_CA -.2769 .1373 .1553 .1743 .1808 .1770

NONDQ_FO -.3995 -.2019 -.2266 -.1606 .0554 -.1739

NONDQ_FE -.4411 -.4539 .0485 -.4074 -.1940 -.1797

NONDQ_NO .3780 .0693 -.1094 -.0776 -.1385 -.0717

Column variables for the coefficients above:  FEWPERES  CREWPERE  CAWPERUT  FOWPERUT  FEWPERUT  CREWPERU

FOSIUTPH .5963**

FESIUTPH .2455 .3545

CREWSIUT .8774** .8464** .7000*

CANALUTT .5834** .4245** .3727 .5705**

FOANALUT .1434 .4445** .2000 .3451* .3581*

FEANALUT -.0911 -.0456 .4875 .1777 .1458 .0319

CREWANUT .4684** .5389** .6150* .6349** .8049** .7468**

CREWPAQP .5562** .2674 .1822 .4825** .0457 -.1936

FOPAQPH .2074 .1205 .4001 .2401 -.0698 -.0980

FEPAQPH .2820 .1051 .0526 -.0239 .2198 -.1816

CAPAQPH .6022** .2624 -.1772 .4296** .0423 -.2631

NONDQ_CA -.1254 .0434 .3645 .0161 -.0218 .1483

NONDQ_FO -.1009 .2204 .3724 .0338 -.0807 .3146

NONDQ_FE .1012 .6575* .6851* .5472 .1517 .1379

NONDQ_NO .0208 -.0185 -.3693 .0053 .1047 -.1515

Column variables for the coefficients above:  CASIUTPH  FOSIUTPH  FESIUTPH  CREWSIUT  CANALUTT  FOANALUT

CREWANUT .4429

CREWPAQP -.3881 -.0540

FOPAQPH -.0676 -.0662 .7407**

FEPAQPH -.4096 -.0862 .7329* .4329

CAPAQPH -.7455** -.1564 .8281** .3531* .7159*

NONDQ_CA -.1461 .1017 -.0536 .0445 .2251 -.1750

NONDQ_FO -.0207 .0985 .1169 .1758 .4230 -.0020

NONDQ_FE .2765 .4194 .1290 .2871 .1039 -.1297

NONDQ_NO .3397 -.0197 -.0156 .0743 -.4866 .0003

Column variables for the coefficients above:  FEANALUT  CREWANUT  CREWPAQP  FOPAQPH  FEPAQPH  CAPAQPH

NONDQ_FO .1322

NONDQ_FE -.1590 .4884

NONDQ_NO -.4557** -.3257 -.2601

Column variables for the coefficients above:  NONDQ_CA  NONDQ_FO  NONDQ_FE

* - Significant at the .05 level    ** - Significant at the .01 level    (2-tailed)

A " . " is printed if a coefficient cannot be computed.
