Department of the Army TRADOC Pamphlet 11-21

Headquarters, U.S. Army
Training and Doctrine Command
Fort Eustis, Virginia 23604-5700

29 March 2024

Army Programs

Army Quality Assurance Program Procedures

FOR THE COMMANDER:

MARIA R. GERVAIS
Lieutenant General, U.S. Army
Deputy Commanding General/
Chief of Staff
OFFICIAL:

WILLIAM T. LASHER
Deputy Chief of Staff, G-6

History. This publication is a new U.S. Army Training and Doctrine Command pamphlet.

Summary. This pamphlet provides guidance on the concepts and processes used to implement
the Army Quality Assurance Program and its five major functions: Army accreditation, oversight
and governance, proponent assessment, internal evaluation, and external evaluation. It contains
specific guidance on quality assurance evaluator development, to include quality assurance
evaluator competencies, professional development, and certification; the Army Enterprise
Accreditation Standards; conducting Army accreditations; and managing learning institution
quality assurance programs, to include master evaluation planning, conducting internal and
external evaluations, preparing self-studies and self-assessments, conducting proponent
assessments, and conducting instructor actions review. This pamphlet supports the policies set
forth in TRADOC Regulation 11-21.

Applicability. This pamphlet applies to all Army learning institutions that are part of The Army
School System, made up of Regular Army, Army National Guard, U.S. Army Reserve, and
Army civilian institutional training systems, except for the U.S. Military Academy.

Proponent and exception authority. The proponent for this pamphlet is the Director, Army
Quality Assurance Program, U.S. Army Training and Doctrine Command. The proponent has the
authority to approve exceptions or waivers to this pamphlet that are consistent with controlling
law and regulations.

Suggested improvements. Users are invited to send comments and suggested improvements on
DA Form 2028 (Recommended Changes to Publications and Blank Forms) directly to the
Director, Army Quality Assurance Program, U.S. Army Training and Doctrine Command, 950
Jefferson Avenue, Fort Eustis, VA 23604-5710.

Distribution. This pamphlet is available in electronic media only at the U.S. Army Training and
Doctrine Command Administrative Publications website at https://adminpubs.tradoc.army.mil.


Contents
Chapter 1 Introduction .................................................................................................................... 9
1-1. Purpose ................................................................................................................................ 9
1-2. References ........................................................................................................................... 9
1-3. Explanation of abbreviations and terms .............................................................................. 9
1-4. Records management requirements ..................................................................................... 9
1-5. Information Collections ....................................................................................................... 9
Chapter 2 Army Quality Assurance Program ................................................................................. 9
2-1. Introduction ......................................................................................................................... 9
2-2. Core values ........................................................................................................................ 10
2-3. Mission, vision, and strategic goals ................................................................................... 10
2-4. Five major functions .......................................................................................................... 11
2-5. Commander’s organizational inspection program............................................................. 13
2-6. Impact issues ..................................................................................................................... 14
2-7. Value-added practices ....................................................................................................... 14
2-8. Quality assurance versus quality control ........................................................................... 15
2-9. Evaluation versus inspection ............................................................................................. 16
2-10. Quality assurance office, officer, and non-commissioned officer ................................... 17
2-11. Collaboration platforms ................................................................................................... 17
2-12. Marketing the Army Quality Assurance Program ........................................................... 18
2-13. External accrediting agencies .......................................................................................... 18
Chapter 3 Quality Assurance Evaluator Development Program .................................................. 20
3-1. Introduction ....................................................................................................................... 20
3-2. Essential competencies for quality assurance evaluators .................................................. 21
3-3. Quality assurance evaluator courses .................................................................................. 23
3-4. Forums ............................................................................................................................... 25
3-5. Lunch-and-learn training sessions ..................................................................................... 25
3-6. Reading list ........................................................................................................................ 25
3-7. Continuous learning points ................................................................................................ 26
3-8. Quality assurance evaluator certification .......................................................................... 26
Chapter 4 Army Enterprise Accreditation Standards.................................................................... 30
4-1. Introduction ....................................................................................................................... 30
4-2. Standards subject areas ...................................................................................................... 31
4-3. Criteria applicability .......................................................................................................... 32
4-4. Evaluation report and rubrics ............................................................................................ 33
4-5. Changes to the standards, report, and rubrics .................................................................... 34
Chapter 5 Army Accreditation ...................................................................................................... 35
Section I Overview of Army Accreditation .................................................................................. 35
5-1. Introduction ....................................................................................................................... 36
5-2. Army accreditation core purposes ..................................................................................... 36
5-3. Army accreditation classifications .................................................................................... 37
5-4. Army accreditation methodology ...................................................................................... 38
5-5. Proponent assessment and Army accreditation. ................................................................ 38
Section II Army Accreditation Administration............................................................................. 39
5-6. Army accreditation funding ............................................................................................... 39
5-7. Army accreditation schedule ............................................................................................. 39


Section III Army Accreditation Team .......................................................................................... 41


5-8. Forming the Army accreditation team ............................................................................... 41
5-9. Army accreditation team member qualifications .............................................................. 41
5-10. Army accreditation team member roles and responsibilities .......................................... 41
5-11. Expanded team effort ...................................................................................................... 42
5-12. Army accreditation observers .......................................................................................... 43
5-13. Confidentiality and non-disclosure.................................................................................. 44
5-14. Proponent as Army accreditation team lead .................................................................... 44
Section IV Army Accreditation Timeline and Process ................................................................. 44
5-15. Army accreditation timeline ............................................................................................ 44
5-16. Army accreditation process ............................................................................................. 45
5-17. On-the-spot corrections ................................................................................................... 47
5-18. Impact issues and value-added practices ......................................................................... 48
5-19. Army accreditation follow-up actions ............................................................................. 48
Section V Army Accreditation Staff Assistance Visit .................................................................. 48
5-20. Introduction ..................................................................................................................... 48
5-21. Requesting a staff assistance visit ................................................................................... 48
5-22. Staff assistance visit funding ........................................................................................... 49
5-23. Conducting a staff assistance visit ................................................................................... 49
5-24. Staff assistance visit after action review summary.......................................................... 50
Chapter 6 Learning Institution Quality Assurance Program Management ................................... 50
Section I Quality assurance function applicability ....................................................................... 50
6-1. Accredited (center/proponent) ........................................................................................... 51
6-2. Accredited (non-proponent) .............................................................................................. 51
6-3. Assessed (non-proponent) ................................................................................................. 51
Section II Master Evaluation Plan ................................................................................................ 52
6-4. Introduction ....................................................................................................................... 52
6-5. Frequency .......................................................................................................................... 52
6-6. Master evaluation plan elements ....................................................................................... 52
Section III Internal Evaluation ...................................................................................................... 53
6-7. Introduction ....................................................................................................................... 53
6-8. Applicability ...................................................................................................................... 53
6-9. Course evaluations............................................................................................................. 54
6-10. Non-course evaluations ................................................................................................... 55
6-11. Impact issues and value-added practices ......................................................................... 55
Section IV Self-Study and Self-Assessment ................................................................................. 55
6-12. Introduction ..................................................................................................................... 55
6-13. Applicability and frequency ............................................................................................ 56
6-14. Process ............................................................................................................................. 56
6-15. Impact issues and value-added practices ......................................................................... 56
Section V External Evaluation ...................................................................................................... 56
6-16. Introduction ..................................................................................................................... 56
6-17. External surveys .............................................................................................................. 56
6-18. Other types of external evaluation ................................................................................... 58
Section VI Corrective Action Plan ............................................................................................... 58
6-19. Introduction ..................................................................................................................... 58


6-20. Applicability .................................................................................................................... 58


6-21. Process ............................................................................................................................. 58
6-22. Corrective action plans for internal evaluation ............................................................... 59
Section VII Proponent Assessment ............................................................................................... 59
6-23. Introduction ..................................................................................................................... 59
6-24. Applicability and frequency ............................................................................................ 59
6-25. Timing ............................................................................................................................. 59
6-26. Process ............................................................................................................................. 60
6-27. Impact issues and value-added practices ......................................................................... 60
6-28. Proponent assistance ........................................................................................................ 60
Section VIII Accreditation and Assessment Coordination and Preparation ................................. 61
6-29. Introduction ..................................................................................................................... 61
6-30. Coordinating with accreditation or assessment team ...................................................... 61
6-31. Preparing institutions for accreditation or assessment .................................................... 61
6-32. Operations center for on-site visits .................................................................................. 62
Section IX Quarterly quality assurance review ............................................................................ 63
6-33. Introduction ..................................................................................................................... 63
6-34. Applicability .................................................................................................................... 63
Section X Instructor Actions Review ........................................................................................... 63
6-35. Introduction ..................................................................................................................... 63
6-36. Applicability and scope ................................................................................................... 63
6-37. Process ............................................................................................................................. 64
Chapter 7 Quality Assurance Evaluation Concepts and Methods ................................................ 64
Section I Data Collection .............................................................................................................. 64
7-1. Data collection planning .................................................................................................... 64
7-2. Surveys .............................................................................................................................. 67
7-3. Document reviews ............................................................................................................. 73
7-4. Observations ...................................................................................................................... 74
7-5. Interviews .......................................................................................................................... 75
7-6. Focus groups ...................................................................................................................... 78
Section II Data Analysis ............................................................................................................... 83
7-7. Types of data ..................................................................................................................... 83
7-8. Organizing data ................................................................................................................. 83
7-9. Storing and protecting data ................................................................................................ 84
7-10. Cleaning data ................................................................................................................... 84
7-11. Quantitative data analysis ................................................................................................ 85
7-12. Qualitative data analysis .................................................................................................. 86
7-13. Interpreting results ........................................................................................................... 87
7-14. Quality assurance evaluation rubrics ............................................................................... 88
7-15. Root cause analysis.......................................................................................................... 88
Section III Communicating Evaluation Results ............................................................................ 89
7-16. Briefing evaluation results ............................................................................................... 89
7-17. Writing evaluation reports ............................................................................................... 91
Section IV Project Management for Evaluators ........................................................................... 94
7-18. Project management process for evaluators .................................................................... 94
7-19. Project management skills for evaluators ........................................................................ 95


Section V Standards of Professional Evaluation Practice............................................................. 96


7-20. Integrity and fairness ....................................................................................................... 96
7-21. Systematic inquiry and rigor ........................................................................................... 96
7-22. Competence, learning, and credibility ............................................................................. 96
7-23. Collaboration and transparency ....................................................................................... 97
7-24. Prevention of harm .......................................................................................................... 97
7-25. Anonymity versus confidentiality ................................................................................... 97
Appendix A References ................................................................................................................ 99
Appendix B Impact Issue and Value-Added Practice Processes ................................................ 101
Appendix C Marketing the Army Quality Assurance Program .................................................. 106
Appendix D Essential Competencies for Quality Assurance Evaluators ................................... 108
Appendix E Continuous Learning Points Guidelines ................................................................. 110
Appendix F Evaluator Certification Nomination Process .......................................................... 114
Appendix G Army Accreditation Timeline ................................................................................ 116
Appendix H Example Army Accreditation Process ................................................................... 118
Appendix I Initial Impressions Out-Brief Presentation Guidelines ............................................ 138
Appendix J Example Course Evaluation Timeline and Process ................................................. 142
Appendix K External Survey Process, Reporting, and Questions .............................................. 149
Appendix L Example Self-Study and Self-Assessment Process ................................................ 153
Appendix M Self-Study Guidelines ............................................................................................ 154
Appendix N Example Proponent Assessment Timeline and Process ......................................... 161
Appendix O Proponent Assessment Travel ................................................................................ 170
Appendix P Instructor Actions Review Evidence Guidelines .................................................... 173
Glossary ...................................................................................................................................... 175

Table List
Table 2-1 Differences between quality assurance and quality control ......................................... 16
Table 4-1 Underlying meaning of rubric scale items.................................................................... 34
Table 4-2 Adjudicating recommended changes to Army Enterprise Accreditation Standards .... 35
Table 5-1 Army accreditation team roles and major responsibilities ........................................... 42
Table 6-1 Learning institution quality assurance program applicability chart ............................. 51
Table D-1 Essential competencies for quality assurance evaluators .......................................... 108
Table E-1 Continuous learning points credit guide .................................................................... 111
Table G-1 Expected timeline of critical Army accreditation tasks ............................................. 117
Table H-1 Critical accreditation tasks before 120-day Army accreditation period .................... 118
Table H-2 Accreditation mission analysis and planning process ............................................... 119
Table H-3 Critical accreditation tasks days 1 to 30 of 120-day Army accreditation period ...... 124
Table H-4 Critical accreditation tasks days 31 to 90 of 120-day Army accreditation period .... 128
Table H-5 Critical accreditation tasks days 91 to 120 of 120-day Army accreditation period .. 130
Table H-6 Critical accreditation tasks after the 120-day Army accreditation period ................. 134
Table J-1 Example course evaluation milestone timeline........................................................... 142
Table N-1 Example proponent assessment milestone timeline .................................................. 162
Table O-1 Defense Travel System travel authorization checklist for proponent assessments ... 171
Table O-2 Defense Travel System travel voucher checklist for proponent assessments ........... 172


Figure List
Figure 2-1. The Army Quality Assurance Program’s core values ................................................ 10
Figure 2-2. The Army Quality Assurance Program’s five major functions ................................. 12
Figure 3-1. Six domains of essential competencies for quality assurance evaluators .................. 22
Figure 3-2. Quality assurance evaluator courses .......................................................................... 23
Figure 3-3. Levels of quality assurance evaluator certification .................................................... 27
Figure 3-4. Requirements for apprentice recognition ................................................................... 27
Figure 3-5. Requirements for evaluator certification .................................................................... 28
Figure 3-6. Requirements for senior evaluator certification ......................................................... 29
Figure 3-7. Requirements for master evaluator certification ........................................................ 29
Figure 5-1. Core purposes of Army accreditation ........................................................................ 37
Figure 5-2. Army accreditation team member qualifications ....................................................... 41
Figure 5-3. Phases of the Army accreditation process .................................................................. 45
Figure 5-4. Sub-phases of the 120-day Army accreditation period .............................................. 46
Figure 5-5. Reasons for requesting an Army accreditation staff assistance visit ......................... 48
Figure 5-6. Accreditation staff assistance visit request timeline .................................................. 49
Figure 6-1. Possible course evaluation activities .......................................................................... 54
Figure 6-2. Accreditation and assessment coordination activities ................................................ 61
Figure 6-3. Accreditation and assessment preparation activities .................................................. 62
Figure 7-1. Ways to increase survey response rates ..................................................................... 73
Figure 7-2. Ways to increase survey completion rates ................................................................. 73
Figure 7-3. Example focus group ground rules ............................................................................ 81
Figure 7-4. Effective briefing skills .............................................................................................. 91
Figure 7-5. Characteristics of an effective briefer ........................................................................ 91
Figure B-1. Criteria for a valid impact issue............................................................................... 101
Figure B-2. Criteria for a valid value-added practice ................................................................. 103
Figure C-1. Army Quality Assurance Program logo .................................................................. 106
Figure C-2. Example e-mail signature block .............................................................................. 106
Figure I-1. Example initial impressions out-brief title slide ....................................................... 138
Figure I-2. Example initial impressions out-brief agenda slide .................................................. 139
Figure I-3. Example initial impressions out-brief purpose slide................................................. 140
Figure I-4. Example initial impressions out-brief findings legend ............................................. 140
Figure K-1. Example graduate survey items............................................................................... 150
Figure K-2. Example leader survey item .................................................................................... 151
Figure K-3. Summarized external data report format ................................................................. 151
Figure M-1. Example documentary evidence for standard 1 ...................................................... 157
Figure M-2. Example documentary evidence for standard 2 ...................................................... 157
Figure M-3. Example documentary evidence for standard 3 ...................................................... 158
Figure M-4. Example documentary evidence for standard 4 ...................................................... 158
Figure M-5. Example documentary evidence for standard 5 ...................................................... 159
Figure M-6. Example documentary evidence for standard 6 ...................................................... 159
Figure M-7. Example documentary evidence for standard 7 ...................................................... 160
Figure M-8. Self-study appendix C contents .............................................................................. 160
Figure N-1. Example proponent assessment backwards planning.............................................. 163
Figure O-1. Example proponent assessment travel funding submission .................................... 170


Chapter 1
Introduction

1-1. Purpose
This pamphlet provides guidance on the concepts and processes used to implement the Army
Quality Assurance Program (AQAP) and the AQAP’s five major functions: Army accreditation,
AQAP oversight and governance, proponent assessment, internal evaluation, and external
evaluation. Unless otherwise specified, the concepts and processes described in this pamphlet
apply to all organizations and individuals involved in implementing, managing, and/or
supporting the AQAP, to include all Army learning institutions that are part of The Army School
System, made up of Regular Army (RA), Army National Guard (ARNG), U.S. Army Reserve
(USAR), and Army civilian institutional training systems, except for the U.S. Military Academy
(USMA). This pamphlet supports the policies set forth in TRADOC Regulation (TR) 11-21.

1-2. References
See appendix A.

1-3. Explanation of abbreviations and terms


See the glossary.

1-4. Records management requirements


The records management requirements for all record numbers, associated forms, and reports
required by this publication are addressed in the Records Retention Schedule–Army (RRS–A).
Detailed information for all related record numbers, forms, and reports is in Army Records
Information Management System (ARIMS)/RRS–A at https://www.arims.army.mil. If any
record numbers, forms, and reports are not current, addressed, and/or published correctly in
ARIMS/RRS–A, see DA Pamphlet (DA PAM) 25-403 for guidance.

1-5. Information Collections


Surveys referred to in paragraph 2-17 and appendix K are assigned survey control number (SCN)
AAHS-RDR-PR-21-190.

Chapter 2
Army Quality Assurance Program
This chapter provides an overview of the AQAP, to include its core values, mission, vision,
strategic goals, and major functions. This chapter also briefly describes impact issues; value-
added practices; quality assurance versus quality control; evaluation versus inspection; the role
of the quality assurance office (QAO), officer, and non-commissioned officer (NCO); AQAP
collaboration platforms; and AQAP marketing.

2-1. Introduction

a. The AQAP defines responsibility for accrediting all Army learning institutions across all
Army components, except for the USMA. Through Army accreditation and its related processes,
the AQAP assures learning institutions achieve Army standards in the development, education,
and training of Soldiers and Army Civilians while strengthening the Army’s ability to learn,
adapt, and innovate, and its readiness to deploy, fight, and win decisively against any adversary
anytime and anywhere.

b. In accordance with AR 10-87, AR 350-1, TR 10-5, TR 10-5-1, and TR 11-21 (see app A
for required publications), the Army’s lead agency for the AQAP is Headquarters (HQ) U.S.
Army Training and Doctrine Command (TRADOC) QAO, which has the responsibility of
establishing AQAP objectives, policies, procedures, and processes.

2-2. Core values


The AQAP’s core values, as shown in figure 2-1, provide the foundation for the AQAP’s mission,
vision, and goals.

• Integrity. We do what is right and act honestly and ethically; we are devoted to keeping our word and honoring our commitment to maintaining standards.

• Innovation. We overcome challenges with creativity and create value through new ideas.

• Accountability. We take ownership and deliver on our commitments.

• Collaboration. We work as a team and take an active role to help shape the environment.

• Service. We are committed to selflessly and humbly serving our nation, our Army, and others in all that we do.

Figure 2-1. The Army Quality Assurance Program’s core values

2-3. Mission, vision, and strategic goals

a. Mission. The AQAP’s mission is to execute the AQAP across the RA, ARNG, and USAR
through accreditations and proponent assessments to assure Army Enterprise Accreditation
Standards (AEAS) are achieved in the development, education, and training of Soldiers and
Army Civilians while strengthening the Army's readiness and ensuring the Army’s ability to
learn, adapt, and innovate.

b. Vision. The AQAP’s vision is to be the Army's organization that assesses and enhances the
quality of Army training, education, and development by fostering innovation and collaboration
through the enforcement of the AEAS.


c. Strategic goals. The AQAP’s strategic goals are nested with the TRADOC campaign plan
and correlate with the three core purposes of Army accreditation: accountability, improvement,
and compliance.

(1) Optimize functions (accountability). Optimizing AQAP functions assures processes, procedures, methods, and functions operate most efficiently and effectively. This goal addresses
required resources, regulatory guidance, reporting processes, and examination of the quality
assurance governance process. It largely addresses resource issues to optimize AQAP function
and ensure the AQAP is truly responsible for holding the Army’s learning institutions
accountable for providing the greatest return on investment.

(2) Enhance rigor (improvement). The goal of enhancing AQAP rigor stems from the
recognition that some standard criteria, training, and Army accreditation processes are perishable
and may become outdated and require improvement to keep pace with current Army training,
education, and development issues. Enhancing AQAP rigor requires continual adjustment to the
AEAS criteria, a greater investment in further professionalizing the AQAP community, and
collaboration with external accrediting agencies (see para 2-13) to identify and modify functions
that may have the potential of impeding improvement of the Army’s education and training
mission.

(3) Enforce standards across doctrine, organization, training, materiel, leadership and education, personnel, facilities, and policy (DOTMLPF-P) domains (compliance). The goal of enforcing
standards across DOTMLPF-P domains focuses on the importance of communicating the
processes, findings, and value that the quality assurance program provides across the Army
enterprise. This goal addresses objectives to effectively conduct internal and external
evaluations, providing credible data to the Army and TRADOC campaign plans from Army
accreditation findings; and to establish the conditions for learning institutions to highlight their
analysis, design, development, implementation, and evaluation (ADDIE) processes through self-
assessments and self-studies.

2-4. Five major functions


As shown in figure 2-2, the AQAP’s primary portfolio comprises five major quality assurance
functions: accreditation, oversight and governance, proponent assessment, internal evaluation, and
external evaluation. Authorities used as prioritizing criteria for these functions include AR 10-87,
AR 350-1, TR 10-5, TR 10-5-1, TR 11-21, TR 350-18, and TR 350-70 (see app A for required
publications).


[Figure: the AQAP at the center of its five major functions: accreditation, oversight/governance, proponent assessment, internal evaluation, and external evaluation]

Figure 2-2. The Army Quality Assurance Program’s five major functions

a. Accreditation. Army accreditation is a disciplined approach to quality assurance across Army learning institutions. It assures that institutions meet accepted standards and follow
regulatory and command guidance. It is the result of an evaluative process assuring that a
learning institution meets the required percentage of accreditation standards with a focus on
quality, currency, and relevant training and education that meets the needs of the Army. It is also
the voluntary process of evaluating institutions or programs to assure acceptable levels of
quality, including recognition by the U.S. Secretary of Education. HQ TRADOC QAO is the
lead agency for Army accreditation. For more information about Army accreditation, see chapter
5.

b. Oversight and governance. HQ TRADOC QAO, as the lead agency for the AQAP,
exercises staff management of all Army learning institution QAOs to ensure effective
implementation of the AQAP’s other major functions: accreditation, internal evaluation, external
evaluation, and proponent assessment. HQ TRADOC QAO provides staff support to learning
institution QAOs to ensure efficient and effective quality assurance business practices and
operations, and to support institutions with achieving and maintaining AEAS. This support
includes developing and publishing AQAP policies, procedures, and processes; publishing and
maintaining the AEAS; managing the quality assurance survey process and survey system
licenses; managing the quality assurance evaluator development program (QAEDP); and
identifying and sharing trends, value-added practices, and efficiencies across Army learning
institutions. For information about the AEAS, see chapter 4. For information about the QAEDP,
see chapter 3. For information about internal and external evaluation and proponent assessment,
see chapter 6. For information about value-added practices, see paragraph 2-7.


c. Proponent assessment. Proponent assessment is the quality assurance process of assuring all functionally aligned reserve component (RC) learning institutions and outlying subordinate
schools meet accepted accreditation standards and follow regulatory and command guidance. The
proponent assessment process involves proponent QAOs evaluating training, education, and
organizational processes; providing feedback to the evaluated institution; making
recommendations for improvement; providing written proponent assessment reports to the
assessed learning institution and HQ TRADOC QAO; and following up on the assessed
institutions’ corrective actions. HQ TRADOC QAO, as the lead agency for the AQAP, exercises
staff management of Army learning institution QAOs to ensure implementation of proponent
assessment in support of Army accreditation. For more information about proponent assessment,
see chapter 6, section VII.

d. Internal evaluation. Internal evaluation is an Army learning institution’s quality assurance review of its own processes and functions. In the AQAP, internal evaluation primarily involves
learning institution QAOs evaluating their institution’s courses against course-related AEAS
criteria. This includes evaluating the learning institution’s courses taught at offsite locations via
mobile training teams. This does not include evaluating functionally aligned RC learning
institutions or outlying subordinate schools; those are evaluated during proponent assessment (see
para 2-4c). Internal evaluation also involves learning institutions assessing themselves against all
AEAS criteria. HQ TRADOC QAO, as the lead agency for the AQAP, exercises staff
management of learning institution QAOs to ensure implementation of internal evaluation. For
more information about internal evaluation, see chapter 6, section III.

e. External evaluation. External evaluation is a quality assurance process that provides the
means to determine if Army training and education meet the needs of the operational Army. It
helps assure that the Army’s training and education system continues to efficiently and
effectively produce graduates who meet established job performance requirements. It also helps
assure that Soldiers and Army Civilians receive all the training they need, that they need all the
training they receive, and that they can perform individual critical tasks and learning outcomes to
prescribed standards. External evaluation involves collecting data from a variety of external
sources, including but not limited to the Center for Army Lessons Learned, combat training
centers, conferences, studies, active and reserve collection and analysis teams, unit commanders
and other Army leaders, Soldier and Army Civilian graduates, and graduates’ supervisors. HQ
TRADOC QAO, as the lead agency for the AQAP, exercises staff management of learning
institution QAOs to ensure implementation of external evaluation. For more information about
external evaluation, see chapter 6, section V.

2-5. Commander’s organizational inspection program


The AQAP’s second portfolio and sixth major function is the organizational inspection program
(OIP). This function is specific to HQ TRADOC QAO as the TRADOC OIP policy proponent.

a. The OIP is a commander’s program that integrates and coordinates command inspections,
staff inspections, and Inspector General (IG) inspections within the command. An OIP is not an
inspection itself, but an overall comprehensive program comprised of inspections. Commanders
designate an OIP coordinator to coordinate and manage the OIP, preferably from within the staff
agency that has tasking authority and direct access to the master calendar.


b. All Army organizations have an OIP, including HQ Department of the Army staff
agencies, Army programs, garrisons and installations, and various other non-standard Army
organizations and agencies with staffs that can conduct inspections on the organization’s behalf.
The battalion is the lowest level organization in which a commander has a staff to perform
internal inspections on subordinate units.

c. OIP is separate and distinct from Army accreditation. OIP involves a quality control
inspection process that uses checklists and applies to all Army organizations. Army accreditation
involves a quality assurance evaluation process that uses evaluative rubrics and applies to Army
learning institutions. Army accreditation is not an inspection.

d. The HQ TRADOC QAO director serves as the HQ TRADOC OIP coordinator, working
closely with the TRADOC IG. The HQ TRADOC QAO director aligns TRADOC staff
inspections with Army accreditation scheduling to the greatest extent possible to reduce
disruption to learning institutions’ training and operational tempo.

e. For more information on the commander’s OIP, see AR 1-201 and TRADOC Supplement
1-201 (see app A for related publications).

2-6. Impact issues

a. An impact issue is a situation or circumstance that impedes an Army learning institution’s mission, is beyond the learning institution’s ability to resolve, has an audit trail documenting
how the learning institution tried to resolve the issue, and causes the learning institution to fail
one or more AEAS criteria.

b. Quality assurance evaluators at all levels identify impact issues through a variety of
evaluations, such as Army accreditation, proponent assessment, internal and external evaluation,
and self-assessment.

c. In accordance with TR 10-5, TR 10-5-1, and TR 11-21, HQ TRADOC QAO manages the
identification and resolution of accreditation impact issues for all Army learning institutions,
both active component (AC) and RC.

d. HQ TRADOC QAO does not publish individual impact issues due to their potentially
sensitive nature; however, it does report impact issues in the aggregate when sharing Army
quality assurance trends with the AQAP community.

e. For information about processing impact issues, see appendix B, section I.

2-7. Value-added practices

a. A value-added practice is an Army learning institution’s practice that enhances the value
of an AEAS criterion, is or can be effective and applicable across Army learning institutions, can
be supported within the limits of regulatory guidance and resources, and promotes the institution
as a learning organization.

b. Army quality assurance evaluators at all levels identify value-added practices through a
variety of evaluations, such as Army accreditation, proponent assessment, internal evaluation,
and self-assessment.

c. In accordance with TR 10-5, TR 10-5-1, and TR 11-21, HQ TRADOC QAO identifies and
shares value-added practices across Army learning institutions. HQ TRADOC QAO reports
recognized value-added practices when sharing Army quality assurance trends with the AQAP
community.

d. For information about processing value-added practices, see appendix B, section II.

2-8. Quality assurance versus quality control

a. Within the context of Army learning, quality is the ability of a learning product or service
to satisfy the needs of the internal and external stakeholders who use or otherwise benefit from
the product or service. Managing quality involves two separate but interrelated components:
quality assurance and quality control.

(1) Quality assurance is a function that provides leaders assurance that an organization is
efficiently and effectively meeting its mission requirements, also assuring that controls are in
place to effect quality performance across the organization.

(2) Quality control is the day-to-day actions taken to ensure a product or service meets
applicable specifications and standards. Quality control has three objectives: find defects, correct
defects, and validate the deliverable.

b. Although quality assurance and quality control are defined differently, some of their activities are interrelated, which can make their differences challenging to distinguish. Table 2-1 describes some of the differences between quality assurance and quality control.


Table 2-1
Differences between quality assurance and quality control
Assure versus ensure
  Quality assurance: Provides internal and external stakeholders the confidence, or assures, that quality requirements will be met.
  Quality control: Actually fulfills those quality requirements; ensures the requirements are met.
Process versus product
  Quality assurance: Concerned with processes, how they are performed, and how products are developed.
  Quality control: Concerned with inspecting products throughout the process, validating that the products meet requirements.
Not executing versus executing the process
  Quality assurance: Is not part of the process; is on the outside looking in at the process.
  Quality control: Is part of the process; is on the inside executing the process.
Prevention versus reaction
  Quality assurance: Is concerned with preventing defects by looking at the processes and systems in place.
  Quality control: Is a reactive approach that concerns itself with identifying and correcting defects.
Not controlling versus controlling quality
  Quality assurance: Never involves controlling product quality.
  Quality control: Always involves controlling product quality.
Recommending versus making corrective actions
  Quality assurance: Recommends corrective actions but does not make them.
  Quality control: Makes or directs corrective actions.
Functions
  Quality assurance: Evaluation and auditing.
  Quality control: Inspection and testing.
Examples
  Quality assurance: Evaluator reviews a product and its audit trail documents to examine the process and controls used to develop the product; recommends improvements to process and/or controls based on product and audit trail evidence.
  Quality control: Training manager reviews a product for accuracy and completeness; either makes corrections or sends the product back to the developer for corrections.

c. Although quality assurance and quality control are different, they complement one another,
and both are part of a quality management system focused on meeting quality requirements.

2-9. Evaluation versus inspection

a. Evaluation systematically examines a system or process to determine its value or merit using standards and evaluative criteria. It involves collecting, analyzing, and interpreting data;
gaining insights; and making judgments to determine the degree of the system’s or process’
value or merit, inform decisions, and improve future performance. Evaluation is part of the
quality assurance function.

b. Inspection closely examines, measures, or tests a product or service’s characteristics and compares results with specific requirements to establish whether the product or service is correct
and in compliance. Inspection usually follows a checklist based on product or service
specifications. Inspection is part of the quality control function.


c. Although evaluation and inspection are not interchangeable, an evaluation may use inspection techniques as an evaluation tool. However, an evaluation should not include verification activities that lead to the actual acceptance or rejection of a product or service.

2-10. Quality assurance office, officer, and non-commissioned officer

a. In accordance with AR 350-1 and TR 11-21, all Army learning institutions have a QAO, or
equivalent, that reports directly to and serves as the “eyes and ears” of their learning institution’s
commander, deputy commander, commandant, assistant commandant, or civilian or military
equivalent, as appropriate. QAOs implement and manage a variety of quality assurance functions
for their learning institutions, to include internal and external evaluation, institutional self-study
and self-assessment, quality assurance reviews, and in some cases proponent assessment.

b. Depending on the mission and size of the learning institution, a QAO may be a fully
staffed organization with a director and several evaluators assigned full time; or, on the other
hand, it may be an individual with other primary duties but assigned additional duties as the
learning institution’s quality assurance officer or NCO. In the latter case, the quality assurance
officer or NCO may report directly to one leader for their primary duties, and report directly to
their learning institution’s senior leader for their quality assurance duties.

c. No matter how a QAO is staffed, the mission is the same – to execute the commander’s
quality assurance program, assuring the quality of the institution’s training and education and the
institution’s achievement of the AEAS.

d. Throughout this pamphlet, whenever the term “quality assurance office” or “QAO” is
used, it refers to both fully staffed quality assurance organizations and individual quality
assurance officers and NCOs.

2-11. Collaboration platforms


The AQAP provides several platforms for the AQAP community to network, share information,
and collaborate. Those platforms include the AQAP portal, the AQAP’s Army 365 site, and the
AQAP milBook on milSuite.

a. AQAP portal. The AQAP portal is the AQAP’s common access card-enabled website,
which hosts the most-current information regarding the AQAP’s mission and vision, policy and
guidance, standards, Army accreditation schedule, evaluator professional development, learning
institution portals, community points of contact, and other information and resources described
in this pamphlet.

(1) The AQAP portal is available via this link: https://armyeitaas.sharepoint-mil.us/sites/tr-hq-aqap.

(2) When visiting the AQAP portal for the first time, users are prompted to request
permission to enter the site and should follow the instructions on the screen. Users receive an
email notification when their requests for access are approved. Users may contact the AQAP
Portal Administrator with any problems obtaining access or with any questions about the site.


b. AQAP’s Army 365 site. The AQAP hosts an Army 365 site for all members of the AQAP
community to share information, ask questions, and work collaboratively on various efforts.

(1) The AQAP’s Army 365 site includes the following capabilities: threaded chat, virtual
teleconferencing, video teleconferencing, and a host of project collaboration tools. The AQAP
uses the platform to conduct accreditation and other evaluation-related activities, and to host
various AQAP community learning activities.

(2) To join the AQAP’s Army 365 site, search Army 365 for “Army Quality Assurance
Program” or “AQAP” and request to join.

c. AQAP milBook on milSuite. The AQAP hosts a milBook group on milSuite for all
members of the AQAP community to share information and ask questions. The AQAP milBook
group is available via this link: https://www.milsuite.mil/book/groups/tradoc-quality-assurance-office.

2-12. Marketing the Army Quality Assurance Program

a. Marketing the AQAP is an effective way to engage stakeholders and inform them about
the AQAP and how it supports their missions. It is also an effective way to keep stakeholders
informed about changes across TRADOC and the Army that impact their missions, and changes
to the AQAP’s processes and services in support of their missions.

b. Marketing the AQAP is important, but equally important is providing effective quality
assurance services that complement the marketing effort. Marketing helps build relationships of
trust and understanding with stakeholders. Through marketing, combined with effective quality
assurance services, stakeholders can come to understand and trust the value of the AQAP.

c. For example methods of marketing the AQAP, see appendix C.

2-13. External accrediting agencies


The AQAP continually engages and develops collaborative relationships and partnerships with
external accrediting agencies. This enables continued currency and the benchmarking of relevant
best practices in the field of accreditation.

a. Commonalities among external accrediting agencies. Regardless of the accrediting agency, one commonality is that they all have a quality assurance process for determining whether an
institution meets established standards for function, structure, and performance. For example,
according to the U.S. Department of Education, accreditation involves the following activities:

(1) Accrediting agencies establish accreditation standards in collaboration with learning institutions.

(2) Learning institutions seeking accreditation prepare a self-study of their performance against the accreditation standards.


(3) Accreditation teams evaluate learning institutions against the accreditation standards.

(4) Accrediting agencies grant accreditation status to learning institutions meeting the
accreditation standards.

(5) Accrediting agencies monitor accredited learning institutions throughout the
accreditation cycle to assure they continue to meet the accreditation standards.

(6) Accrediting agencies periodically reevaluate accredited learning institutions to
determine continued accredited status.

b. Types of external accreditation. There are two types of external accreditation: institutional
and specialized or programmatic.

(1) Institutional. Institutional accreditation reviews the academic and organizational
structures of a college, university, center, or independent school, and the entire learning
institution is accredited. Army accreditation is an institutional accreditation. When Army
learning institutions seek additional institutional accreditation from a civilian accrediting agency,
it is important to ensure it is recognized by the U.S. Department of Education.

(2) Specialized or programmatic. Specialized or programmatic accreditation reviews
individual programs within a college, university, center, or school, and only that program is
accredited, independent of the parent organization. Programs include academic units, specialties,
disciplinary offerings, departments, or schools within a larger learning institution. This type of
accreditation assures that the program meets the standards of a specific field of study and
provides the education and experiences required for success in that field. Professional
associations related to specific fields of study provide programmatic accreditation. Examples
include the American Psychological Association, the Commission on Accreditation for
Respiratory Care, the Joint Review Committee on Education in Radiologic Technology, and the
National Architectural Accrediting Board.

c. Types of external accrediting agencies. There are two types of external accrediting
agencies: regional and national.

(1) Regional. The most-widely recognized type of external accrediting agency is regional.
Regional accrediting agencies accredit schools, colleges, and universities in certain regions of the
U.S., with some serving international regions. Most colleges in the U.S. are regionally
accredited. Regionally accredited schools are typically academically oriented, state-owned, or
nonprofit. There are six regional accrediting agencies that operate in the U.S.

(a) Middle States Commission on Higher Education.

(b) New England Association of Schools and Colleges.

(c) Northwest Commission on Colleges and Universities.

(d) Higher Learning Commission.

(e) Southern Association of Colleges and Schools.

(f) Western Association of Schools and Colleges.

(2) National. National accrediting agencies focus on accrediting trade schools, vocational
schools, and career programs offering degrees and certifications. These accrediting agencies are
not limited to geographic locations. Nationally accredited schools are typically for-profit schools
that offer technical, vocational, or career programs. Examples of national accrediting agencies
include Accrediting Commission of Career Schools and Colleges, Council on Occupational
Education, and Distance Education Accrediting Commission.

d. Certifying the quality of external accrediting agencies. The Council for Higher Education
Accreditation reviews U.S. institutional and programmatic accrediting agencies for their
effectiveness and quality based on the standards and requirements of the most-recent
Recognition Policy and Procedures. Recognition affirms that accrediting agencies’ standards,
structures, and practices promote academic quality, improvement, accountability, flexibility, and
innovation in the institutions and programs they accredit. The Council for Higher Education
Accreditation is the only private-sector organization in the U.S. that reviews accrediting
agencies.

Chapter 3
Quality Assurance Evaluator Development Program
This chapter provides an overview of the AQAP’s QAEDP and its various offerings.

3-1. Introduction

a. In accordance with AR 350-1 and TR 11-21, one of the AQAP’s goals is to provide a
sound, viable, and flexible quality assurance program that meets the needs of the Army.
Accomplishment of this goal is made possible in great part through the AQAP’s QAEDP. The
QAEDP is one element of the AQAP’s second major function – oversight and governance.

b. The AQAP is committed to helping all quality assurance evaluators develop the
knowledge, skills, and experience necessary for executing efficient and effective quality
assurance business practices and operations. The QAEDP’s broad range of developmental
opportunities is designed to help new and inexperienced evaluators transition to competent
evaluators, and to help competent evaluators transition to leaders and experts in the field of
Army quality assurance.

c. The QAEDP is a multi-faceted developmental program designed to professionalize quality
assurance evaluation practice. It is also designed to ensure quality assurance evaluators execute
the AQAP mission consistently across all Army learning institutions.

d. Foundational to the QAEDP are essential competencies for Army quality assurance
evaluators. The QAEDP’s professional development programs include but are not limited to four
sequential quality assurance evaluator courses (QAEC), a variety of professional development
forums, lunch-and-learn training sessions, and an AQAP reading list. Processes that support and
reward continuous professional development include but are not limited to continuous learning
points (CLP) and evaluator certification.

e. The QAEDP is not all-inclusive or all-encompassing; one size does not fit all. It can adapt
to meet constantly emerging and changing policies and directives, and it recognizes that not all
Army learning institutions are the same. Learning institutions have the flexibility to expand
evaluator development beyond the program to meet their institutions’ specific needs.

3-2. Essential competencies for quality assurance evaluators

a. Competencies are the knowledge, skills, and abilities required for successful human
performance. The AQAP’s essential competencies for quality assurance evaluators identify
specific competencies that quality assurance evaluators are expected to possess and exercise in
their evaluation practice. Successful attainment of these essential competencies helps ensure a
capable and professional team of evaluators across the Army enterprise.

b. The essential competencies for Army quality assurance evaluators are what drive every
element of the QAEDP, to include the QAECs, professional development forums, lunch-and-
learn training sessions, and the AQAP reading list. Evaluators and their supervisors should refer
to the competencies when planning other professional development activities. The competencies
support standardized evaluation practice, with all Army quality assurance evaluators exercising
the same competencies regardless of job title or organization.

c. As shown in figure 3-1, the evaluator essential competencies consist of six domains:
professional practice, systematic inquiry, situational analysis, project management, reflective
practice, and interpersonal competence.

Figure 3-1. Six domains of essential competencies for quality assurance evaluators

(1) The professional practice domain refers to exercising the fundamental values and
norms of quality assurance evaluator practice. This includes ethical practice and contributing to
the evaluation knowledge base.

(2) The systematic inquiry domain refers to exercising systematic, data-based inquiry to
ensure accurate and credible evaluation results. This includes exercising technical evaluation
skills related to evaluation planning, data collection, analysis, interpretation, and reporting.

(3) The situational analysis domain refers to understanding and responding to the unique
interests, issues, and contextual circumstances of evaluation. This includes examining the
organizational context of the evaluation, adapting to organizational variance, and being open to
input from others.

(4) The project management domain refers to executing processes required for effective
evaluation project management. This includes identifying needed resources, managing processes
and people, and presenting results in a timely manner.

(5) The reflective practice domain refers to being aware, through reflection, of one’s own
evaluation expertise and areas needing development. This includes pursuing professional
development and building professional relationships to enhance evaluation practice.

(6) The interpersonal competence domain refers to exercising the following skills
necessary for competent evaluation practice: writing, speaking, listening, conflict resolution,
teamwork, cultural understanding, and critical thinking.

d. Distributed across the six domains are 62 competencies. For a full list of the 62
competencies, see appendix D. Once published, performance measures for each competency will
be available in the QAEDP section of the AQAP portal: https://armyeitaas.sharepoint-
mil.us/sites/tr-hq-aqap.

3-3. Quality assurance evaluator courses

a. As shown in figure 3-2, there are four sequential QAECs. The courses are aligned with the
essential competencies for quality assurance evaluators and are designed to help move an
evaluator from a fundamental to an expert level of evaluator competence. They also correspond
with and are prerequisites for the QAEDP’s four levels of evaluator certification (see para 3-8).
Evaluators successfully complete each course before enrolling in the next.


Figure 3-2. Quality assurance evaluator courses

(1) Quality Assurance Evaluator Familiarization Course.

(a) The objective of the Quality Assurance Evaluator Familiarization Course is for newly
assigned evaluators to gain an introductory-level understanding of the AQAP’s purpose,
concepts, applicable references, the AEAS, Army accreditation, and the QAEDP.

(b) The Familiarization Course consists of seven lessons and a post-test and is delivered
online only.

(c) The recommended timeframe for completing the Familiarization Course is within the
first two months of being assigned quality assurance evaluator duties.

(d) The Familiarization Course is a pre-requisite for apprentice recognition.

(e) Evaluators may self-enroll in the Familiarization Course through the Army Training
Support Center’s learning management system at https://atsc.ellc.learn.army.mil. This course is
expected to also be available in the future through the Army Training Requirements and
Resources System (ATRRS) with the searchable name, “QAE Familiarization.”

(2) Quality Assurance Evaluator Basic Course.

(a) The objective of the Quality Assurance Evaluator Basic Course is for evaluators to
gain an intermediate understanding of the AQAP, the AEAS, evaluation processes and tools, and
the QAEDP.

(b) The Basic Course consists of 23 lessons and a culminating capstone exercise, and it is
roughly 37.5 academic hours, or 5 days. This course is delivered both in residence and by virtual
methods.

(c) The recommended timeframe for completing the Basic Course is after one to six
months of quality assurance evaluator experience on the job.

(d) The Basic Course is a pre-requisite for evaluator certification.

(e) Evaluators may register for the Basic Course through ATRRS at
https://www.atrrs.army.mil under the searchable name, “QAE Basic.”

(3) Quality Assurance Evaluator Senior Course.

(a) The objective of the Quality Assurance Evaluator Senior Course is for evaluators to
gain a fully comprehensive understanding of the AQAP, the AEAS, evaluation processes and
tools, and the QAEDP.

(b) The Senior Course consists of 15 lessons and a culminating capstone exercise, and it
is roughly 29.0 academic hours, or 4 days. This course is delivered both in residence and by virtual
methods.

(c) The recommended timeframe for completing the Senior Course is after 12 to 18
months of quality assurance evaluator experience on the job.

(d) The Senior Course is a pre-requisite for senior evaluator certification.

(e) Evaluators may register for the Senior Course through ATRRS at
https://www.atrrs.army.mil under the searchable name, “QAE Senior.”

(4) Quality Assurance Evaluator Master Course.

(a) The objective of the Quality Assurance Evaluator Master Course is for evaluators to
gain a strategic or expert understanding of the AQAP, the AEAS, evaluation processes and tools,
the QAEDP, coaching and mentoring, and conflict management.

(b) The Master Course consists of six lessons and is roughly 12.0 academic hours, or 2
days. HQ TRADOC QAO conducts this course in residence.

(c) The recommended timeframe for completing the Master Course is after 36 months of
quality assurance evaluator experience on the job.

(d) The Master Course is a pre-requisite for master evaluator certification.

(e) Evaluators may register for the Master Course through ATRRS at
https://www.atrrs.army.mil under the searchable name, “QAE Master.”

b. For more information about the quality assurance courses, contact the AQAP Quality
Assurance Course Manager. For more information about how to register for the courses, visit the
QAEDP section of the AQAP portal: https://armyeitaas.sharepoint-mil.us/sites/tr-hq-aqap.

3-4. Forums
HQ TRADOC QAO periodically schedules and hosts AQAP professional development forums
covering a variety of AQAP topics. Examples of AQAP forum events are the AQAP Annual
Forum for the entire AQAP community, the Quality Assurance Director’s Workshop, the USAR
Quality Assurance Workshop, the ARNG Quality Assurance Workshop, and the Soldier Quality
Assurance Workshop, among others. AQAP forums provide the AQAP community with
continuous development and professionalization opportunities, as well as the latest information
about AQAP initiatives and programmatic updates. The actual AQAP forum events offered each
year are subject to change. For more information about AQAP forum events and schedule, visit
the AQAP portal: https://armyeitaas.sharepoint-mil.us/sites/tr-hq-aqap.

3-5. Lunch-and-learn training sessions
HQ TRADOC QAO periodically schedules and hosts AQAP lunch-and-learn training sessions
covering a wide variety of topics of interest to the AQAP community. These sessions are hosted
in the lunch-and-learn channel of the AQAP’s Army 365 site. For more information about
AQAP’s lunch-and-learn training events and schedule, visit the QAEDP section of the AQAP
portal: https://armyeitaas.sharepoint-mil.us/sites/tr-hq-aqap. To recommend or request lunch-
and-learn topics or volunteer to be a lunch-and-learn presenter, contact the AQAP Deputy
Director.

3-6. Reading list
Reading is a powerful professional development strategy that equips quality assurance evaluators
with a solid knowledge base and effective tools for professional practice. The AQAP reading list
provides quality assurance professionals with required and recommended readings for their
continuous professional development and lifelong learning. Required readings, which are aligned
with the four QAECs and include official Army and TRADOC publications related to Army
accreditation and the AEAS, are essential for effective quality assurance practice. Recommended
readings include popular, professional, and academic articles and books covering a wide variety
of topics of interest and value to quality assurance professionals. For the current AQAP reading
list, visit the QAEDP section of the AQAP portal: https://armyeitaas.sharepoint-mil.us/sites/tr-
hq-aqap.

3-7. Continuous learning points

a. Continuous learning is the constant expansion of knowledge and skills needed to perform
more effectively and adapt more readily to ever-changing environments. It involves a wide range
of activities that increase performance capabilities.

b. Continuous learning points (CLP) are a method of ensuring quality assurance evaluators
develop and enhance their professional skills, remain current in their professions, and are flexible
and adaptable to ever-changing environments.

c. CLPs may be awarded for completion of academic courses, training courses, professional
activities, and professional experience. To remain active in the QAEDP, quality assurance
evaluators achieve at least 80 CLPs every two years and are encouraged to achieve at least 40
CLPs every year. All activities earn points only in the year they are accomplished or completed.
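
The two-year CLP requirement lends itself to a simple arithmetic check. The following sketch is illustrative only and is not an official AQAP tool; the function name, the data layout, and the assumption that the 80-point requirement is measured over the current and previous calendar years are the author’s own.

# Illustrative sketch only; not an official AQAP tool. Assumes CLPs are
# tallied by calendar year and that the 80-point requirement is measured
# over the current and previous calendar years.
def clp_status(points_by_year, current_year):
    """Check an evaluator's CLP totals against the QAEDP guidelines."""
    this_year = points_by_year.get(current_year, 0)
    last_year = points_by_year.get(current_year - 1, 0)
    return {
        "meets_two_year_minimum": (this_year + last_year) >= 80,  # required
        "meets_annual_goal": this_year >= 40,                     # encouraged
    }

# Example: 45 CLPs earned last year and 38 earned this year.
print(clp_status({2023: 45, 2024: 38}, current_year=2024))
# {'meets_two_year_minimum': True, 'meets_annual_goal': False}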

d. Quality assurance evaluators’ supervisors play a key role in continuous learning. They
ensure, within organizational workload and funding constraints, that their evaluators are allowed
duty time to participate in continuous learning activities. As appropriate, they allow telework for
virtual continuous learning. They also ensure that evaluators’ individual development plans
include continuous learning activities, and they document and validate their evaluators’ records
for completion of CLP requirements.

e. Quality assurance evaluators identify and discuss with their supervisors the types of
continuous learning activities to pursue and achieve. They also verify their records to ensure
their CLPs are recorded.

f. For information on how to determine points to be credited for continuous learning
activities, see the CLP guidelines in appendix E.

3-8. Quality assurance evaluator certification
The QAEDP recognizes evaluators who have acquired the education, skills, and experience
needed for successful evaluation practice.

a. Evaluator certification levels. The QAEDP offers four progressively increasing levels of
certification as shown in figure 3-3. To progress between the levels, evaluators meet established
minimum time, experience, and education requirements.

Figure 3-3. Levels of quality assurance evaluator certification

(1) Apprentice.

(a) The apprentice level recognizes evaluators who have achieved fundamental evaluator
knowledge and skills and have met the requirements shown in figure 3-4.

Figure 3-4. Requirements for apprentice recognition

(b) The recommended timeline for achieving apprentice level is within the first two
months of being assigned quality assurance evaluator duties. This is only a guideline; the
timeline may vary depending on each evaluator’s effort, pace of onboarding, and previous
knowledge, skills, abilities, and experience. The QAEDP recognizes the apprentice level with an
AQAP Director-signed nomination document. A certificate is not awarded for this level.

(2) Evaluator.

(a) The evaluator level recognizes evaluators who have achieved intermediate evaluator
knowledge and skills and have met the requirements shown in figure 3-5.

Figure 3-5. Requirements for evaluator certification

Note. An Army accreditation right-seat-ride is an opportunity for an evaluator to practice using
their evaluation skills under the supervision of an experienced evaluator. Conducting a right-
seat-ride is a very structured activity, with the team lead planning and coordinating the
evaluator’s right-seat-ride activities throughout the accreditation. HQ TRADOC QAO funds
accreditation right-seat-rides as part of the QAEDP.

(b) The recommended timeline for achieving evaluator level is after six months of
quality assurance evaluator experience on the job. This is only a guideline; the timeline may vary
depending on each evaluator’s effort, pace of onboarding, and previous knowledge, skills,
abilities, and experience. The QAEDP recognizes the evaluator level with an AQAP Director-
signed certificate.

(3) Senior Evaluator.

(a) The senior evaluator level recognizes evaluators who have achieved advanced
evaluator knowledge and skills and have met the requirements shown in figure 3-6.

Figure 3-6. Requirements for senior evaluator certification

(b) The recommended timeline for achieving senior evaluator level is after 12 to 18
months of quality assurance evaluator experience on the job. This is only a guideline; the
timeline may vary depending on each evaluator’s effort, pace of onboarding, and previous
knowledge, skills, abilities, and experience. The QAEDP recognizes the senior evaluator level
with an AQAP Director-signed certificate.

(4) Master Evaluator.

(a) The master evaluator level recognizes evaluators who have achieved expert evaluator
knowledge and skills and have met the requirements shown in figure 3-7.

Figure 3-7. Requirements for master evaluator certification

(b) The recommended timeline for achieving master evaluator level is after 36 months of
quality assurance evaluator experience on the job. This is only a guideline; the timeline may vary
depending on each evaluator’s effort, pace of onboarding, and previous knowledge, skills,
abilities, and experience. The QAEDP recognizes the master evaluator level with an AQAP
Director-signed certificate.

b. e-Portfolio.

(1) An e-Portfolio is an electronic evaluator record maintained in the QAEDP section of
the AQAP portal. All Army quality assurance evaluators are required to have an e-Portfolio.

(2) Supervisors use the e-Portfolio system to enroll their evaluators into the QAEDP and
manage their QAEDP records. Evaluators whose supervisors are not actively associated with the
AQAP, such as may occur with evaluators assigned additional quality assurance duties and not
assigned to a formal QAO, enroll themselves into the QAEDP and notify their supervisors of this
action.

(3) QAEDP records include training certificates, CLP trackers, nominations for evaluator
certification, other evidence of having met evaluator certification requirements, and evaluator-
through-master certificates.

(4) Users may access e-Portfolio from their learning institutions’ individual AQAP portal
sites.

c. Evaluator certification nomination. For a quality assurance evaluator whose supervisor is
actively associated with the AQAP, the supervisor prepares and submits the evaluator’s
nomination packet for progression to each of the evaluator levels. Evaluators whose supervisors
are not actively associated with the AQAP, such as may occur with evaluators assigned
additional quality assurance duties and not assigned to a formal QAO, keep their supervisors
informed of their activities and performance in the QAEDP, but prepare and submit their own
nomination packets. For information about the evaluator certification nomination process, see
appendix F.

Chapter 4
Army Enterprise Accreditation Standards
This chapter provides an overview of the AEAS and the AEAS evaluation report and rubrics.

4-1. Introduction

a. The Army’s accepted standards for accreditation are the AEAS. The AEAS establish
criteria for institutional quality and provide the Army the means to assess and improve all Army
learning institutions across active and reserve components, and across the DOTMLPF-P
domains. The AEAS are aligned with the TRADOC core competencies, which are driven by the
TRADOC and Army campaign plans. Also aligned with well-established accreditation theory
and practice, the AEAS enhance rigor and help organizations learn and grow.

b. HQ TRADOC QAO develops and maintains the AEAS in collaboration with the AQAP
community and subject matter experts (SME) across the Army. This collaborative effort helps
ensure the AEAS remain current with respect to Army requirements and changing circumstances
in the operational environment. The Commanding General (CG), TRADOC is the AEAS
approval authority.

c. The most current version of the AEAS is available on the AQAP portal:
https://armyeitaas.sharepoint-mil.us/sites/tr-hq-aqap.

4-2. Standards subject areas

a. The AEAS comprise seven enduring standards addressing the following topic
areas:

(1) AEAS 1: Mission, purpose, and functions. Learning institutions have clearly defined,
aligned, and communicated mission, vision, goals, objectives, and organizational performance
criteria. They effectively manage and execute doctrine, knowledge management, and lessons
learned programs and processes. They also actively seek academic and vocational credentials to
enhance their training and education programs.

(2) AEAS 2: Governance and administration. Learning institutions have governance and
administrative structures that promote and support effective leadership and collaborative
processes, and their leaders execute the principles of mission command. Learning institutions
effectively manage academic testing and assessment materials, student records, and course
records. They also effectively manage and execute requirements of the AQAP and the TRADOC
Safety and Occupational Health Program.

(3) AEAS 3: Learning programs – analysis, design, development, and implementation.
Learning institutions effectively analyze, design, develop, and implement their learning
programs, to include their distributed learning courses and products. They develop and distribute
equivalent learning to AC and RC Soldiers under the One Army School System. They also
incorporate mandatory and command-directed learning, and digital mission command systems
training into their learning programs.

(4) AEAS 4: Institutional training and education mission management. Learning
institutions effectively execute processes for obtaining, managing, and maintaining required
resources, to include equipment, facilities, physical training areas, ranges, ammunition, and
library. They participate in the Structure Manning Decision Review and provide documentation
needed to validate resources. They utilize the Army’s systems of record to manage course and
student data. They effectively manage environmental-related issues impacting training. They also
work closely with other organizations for inter-service, foreign, RC, and new systems training.

(5) AEAS 5: Assessment, evaluation, and effectiveness. Learning institutions measure
how effectively they achieve their missions and goals. They use formative and summative
evaluation methods to evaluate their learning programs. They conduct AQAP internal and
external evaluations. They have and execute policies and procedures for supporting Soldier for
Life requirements. They also assess and evaluate applicable occupational standards.

(6) AEAS 6: Faculty and staff. Learning institutions effectively execute requirements of
the Army Civilian Career Program and the TRADOC Civilian Leader Development Program.
They proactively recruit, select, assign, and develop faculty and staff with requisite skills. They
manage and execute faculty and staff development programs and processes that support skill
development and sustainment. Faculty are qualified, certified, and current in the subjects they
teach. Additionally, effective instructor and developer recognition programs are in place.

(7) AEAS 7: Leadership and leader development. Learning institutions place high priority
on developing future leaders who can effectively exercise mission command and operate in
complex and decentralized operational environments. They develop, maintain, and execute
effective leader development programs. Leaders at all levels establish and maintain a positive
climate and culture in support of leader development across the institution. Additionally, the
institution deliberately and progressively integrates leader development into its learning
programs.

b. Each overarching standard has multiple criteria supporting an acceptable outcome of the
standard. Those criteria may contain more-specific sub-criteria supporting achievement of the
criteria, and those sub-criteria may contain even more-specific sub-criteria.

4-3. Criteria applicability

a. All AEAS (for example, 1, 2, and 3) apply to all Army learning institutions; however, not
all criteria (for example, 1a, 2b, and 3c) apply to all Army learning institutions. Criteria
applicability varies based on each learning institution’s mission, complexity, and functions. To
help address questions about applicability, published AEAS criteria are annotated with the letters
“C,” “P,” or “ID” to indicate the most-common type of organization(s) to which each criterion is
likely to apply.

(1) Center “C” refers to those learning institutions with authority over proponents and/or
instructional delivery institutions; for example, Cyber Center of Excellence (CoE), and
Maneuver Support CoE. Some centers may also be considered proponents; for example, Aviation
CoE and Intelligence CoE. Some centers that are also proponents may also provide instructional
delivery.

(2) Proponent “P” refers to those learning institutions that have proponent
responsibilities. Some proponents may also be considered centers; for example, Aviation CoE
and Intelligence CoE. Some proponents may also provide instructional delivery.

(3) Instructional Delivery “ID” refers to those institutions that only provide instructional
delivery of, or implement, proponents’ learning materials; for example, most RC learning
institutions, noncommissioned officer academies (NCOA), regional training sites – maintenance
(RTSM), ARNG aviation training sites (AATS), and troop schools.

b. Criteria applicability annotations are estimated starting points and are not necessarily true
for all Army learning institutions. Learning institutions and HQ TRADOC QAO work together
to assess and determine criteria and sub-criteria applicability as it relates to each institution’s
unique requirements.

4-4. Evaluation report and rubrics
Army accreditation evaluations provide leaders with accurate and timely feedback and a written
record of the results. Although initial feedback may be verbal, written reports are necessary
because they establish a historical record that informs corrective actions, follow-up evaluations,
and trend analysis. Written reports are narrative in form to provide context and clearly articulate
the evidence and analysis behind the evaluation results.

a. AEAS evaluation report tool.

(1) Learning institution QAOs and Army quality assurance evaluators may use the AEAS
evaluation report tool with associated rubrics for reporting the results of Army accreditations,
proponent assessments, and self-assessments. Learning institution QAOs may also use the AEAS
evaluation report tool for reporting the results of their internal evaluations.

(2) The AEAS evaluation report tool is a multiple-page Portable Document Format (PDF)
file. The tool’s first page includes fields for administrative information about the evaluation and
the evaluated institution, automatically calculated overall rating and individual standard ratings,
and a narrative field for the executive summary. The tool’s second page includes narrative fields
for any impact issues and value-add practices, and fields for administrative information about the
team lead and reviewing official. The tool’s subsequent pages include, by AEAS, fields for
manually inputting criteria ratings; automatically calculated standard ratings; narrative fields for
summary comments and recommendations, impact issues, and value-added practices; and fields
for administrative information about the AEAS evaluator. The AEAS rubrics are attached to the
report with the PDF paperclip function. No other attachments are included with the report
without the AQAP Director’s approval.

(3) For information about writing effective evaluation reports, see paragraph 7-17.

b. AEAS rubrics and ratings.

(1) Each AEAS criterion (for example, 1a, 2b, and 3c) has a corresponding rubric. Each
rubric’s heading contains administrative fields for the name of the evaluated learning institution,
the date or date range that the criterion was evaluated, the name of the criterion’s primary
evaluator, and an automatically calculated overall rubric rating. The heading also contains a
description of the criterion. In the body of the rubric are the criterion’s sub-criteria, associated
rubric scales for the sub-criteria, and fields for entering sub-criteria ratings.

(2) Although the rubric scales are written specifically for each criterion, table 4-1 shows
the underlying meaning of each scale item.

Table 4-1
Underlying meaning of rubric scale items
Scale Underlying Meaning
100 The learning institution meets the criterion with sustainable continuous quality improvements.
75 The learning institution would proficiently meet the criterion with minor corrections.
50 The learning institution is developing competencies toward meeting the criterion.
25 The learning institution is aware of the criterion but has done little toward meeting the criterion.
0 The learning institution is unaware of the criterion, or the learning institution has done nothing toward meeting the criterion.
N/A The criterion does not apply to the learning institution.
N/O Evaluators did not observe the criterion.

(3) Also included in the body of the rubric are narrative fields for supporting comments,
impact issues, and value-added practices.

(4) For more information on quality assurance evaluation rubrics, see paragraph 7-14.
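
Because the report tool calculates rubric and standard ratings automatically, evaluators do not normally compute them by hand; the sketch below is included only to illustrate how sub-criteria ratings on the table 4-1 scale might roll up. It assumes a simple average that excludes N/A and N/O entries, and the sub-criterion identifiers in the example are hypothetical; the actual aggregation is built into the AEAS evaluation report tool and rubrics and may differ.

# Illustrative sketch only. Assumes the overall rubric rating is the simple
# average of applicable sub-criteria ratings, with N/A and N/O excluded; the
# AEAS evaluation report tool performs the official calculation.
VALID_RATINGS = {0, 25, 50, 75, 100}

def rubric_rating(sub_criteria):
    """Average the numeric sub-criteria ratings, skipping N/A and N/O."""
    numeric = []
    for name, rating in sub_criteria.items():
        if rating in ("N/A", "N/O"):
            continue
        if rating not in VALID_RATINGS:
            raise ValueError(f"{name}: {rating!r} is not a valid scale value")
        numeric.append(rating)
    return round(sum(numeric) / len(numeric), 1) if numeric else None

# Example: four hypothetical sub-criteria, one of which does not apply.
print(rubric_rating({"2b(1)": 100, "2b(2)": 75, "2b(3)": "N/A", "2b(4)": 50}))
# 75.0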

4-5. Changes to the standards, report, and rubrics

a. Recommending changes to the AEAS, report, and rubrics.

(1) Learning institution QAO directors (or equivalent) submit recommended changes
using the process posted on the AQAP portal for requesting changes to the AEAS, report, and
rubrics: https://armyeitaas.sharepoint-mil.us/sites/tr-hq-aqap.

(a) Recommendations for AEAS changes clearly and concisely identify the issue or error,
describe the recommended change, and explain why the recommended change is important.

(b) If the recommendation for AEAS change is related to a policy change, the
recommendation lists the publication number, date, and paragraph that provides the updated
policy.

(2) Learning institution-level quality assurance evaluators and other learning institution-
level stakeholders recommend changes to the AEAS, report, and rubrics through their respective
QAO directors (or equivalent).

(3) AEAS SMEs not associated with a learning institution-level QAO submit
recommended changes following the same procedure as QAO directors.

b. Making changes to the AEAS, report, and rubrics.

(1) Although the AEAS are enduring, it is critical that they continually adapt to the ever-
evolving environment in which they operate. HQ TRADOC QAO periodically reviews the
AEAS for accuracy, currency, and relevance; decides courses of action; and makes all needed
changes. Table 4-2 outlines the general process the AQAP AEAS Manager uses to adjudicate
recommended changes.

Table 4-2
Adjudicating recommended changes to Army Enterprise Accreditation Standards
Step AEAS Manager Process
1 Receive all recommended changes for the AEAS review period.
2 Evaluate each recommended change for clarity and completeness; determine the need
for further coordination with submitting agent and/or subject-matter expert(s).
3 Coordinate and discuss each recommended change with the submitting agent for
clarification as needed.
4 Coordinate and discuss each recommended change with subject-matter expert(s) as
needed.
5 Discuss recommended changes with the AQAP Director for final adjudication*.
6 Post adjudication results to the AQAP portal.
7 Update draft AEAS and brief accepted AEAS changes at the next AQAP forum.
8 Publish accepted changes in the next AEAS version.
*When necessary, the AQAP Director staffs recommended changes with the CG, TRADOC.

(2) When determining and deciding changes to the AEAS, HQ TRADOC QAO reviews
new and updated policies and publications, and works collaboratively with leaders and subject-
matter experts across TRADOC and the Army. This collaboration includes inviting feedback and
recommendations from all members of the AQAP community.

(3) Because changes to the AEAS drive changes to the AEAS evaluation report tool and
rubrics, HQ TRADOC QAO updates the products concurrently on the AQAP portal. They also
update the AEAS evaluation report tool and rubrics to correct any significant errors in
functionality as needed.

Chapter 5
Army Accreditation
This chapter provides an overview of Army accreditation and its administration, to include
scheduling and funding. It describes the Army accreditation team, timeline, and process, and it
provides guidance on the Army accreditation staff assistance visit.

Section I
Overview of Army Accreditation
This section provides an overview of Army accreditation and its associated concepts and
describes Army accreditation ratings, Army accreditation methodology, and the role of
proponent assessment in Army accreditation.

5-1. Introduction

a. Army accreditation, one of the AQAP’s five major functions, assures Army leaders that
learning institutions meet and sustain accepted quality standards. It assesses whether a learning
institution’s processes are working within established limits. It looks at various aspects of a process,
including conformance to policy, regulation, and other guiding directives; resources, such as
personnel and equipment; methods; environment; process controls, such as standard operating
procedures (SOPs) and training; and metrics for tracking process performance. Accreditation also
assesses whether a learning institution’s products conform to requirements, such as those described
in the AEAS.

b. The accreditation process is designed to be transparent and collaborative so that learning
institutions feel that it is fair, credible, and yields accurate results. Accreditation results identify
gaps, help drive change, and help improve organizational effectiveness. Results help focus
commanders’ attention on the state of their learning institutions’ programs and processes across
DOTMLPF-P domains.

c. Army accreditation is a system of self-regulation developed by quality assurance
professionals across commands to evaluate overall institutional quality and encourage continuous
improvement. The Army values accreditation as a mark of quality.

d. The CG, TRADOC is the accrediting authority for all Army learning institutions, except
for the USMA. As the lead agency for AQAP and Army accreditations, HQ TRADOC QAO
plans, organizes, coordinates, schedules, and leads matrixed evaluation teams on Army
accreditations.

e. All Army learning institutions are on a three-year Army accreditation cycle. The three-year
Army accreditation cycle is driven by enterprise system updates, military personnel turnover, and
the pace of change in military operations. Exceptions to the three-year cycle are combat training
centers, which are on a two-year cycle; Command and General Staff College, which is on a six-
year cycle; and Army War College, which is on a six-year cycle.

5-2. Army accreditation core purposes

a. Army accreditation has three core purposes: compliance, improvement, and accountability.
As shown in figure 5-1, each core purpose consists of three different actions: scope of review,
level of judgment, and reporting.

Compliance
• Scope of review: Evaluate against the standards to assure adherence to Army-wide training and education standards.
• Level of judgment: Institution demonstrates standards are met (qualitative and quantitative).
• Reporting: Announcement of accreditation internally and externally (in the aggregate).

Improvement
• Scope of review: Key areas (standards, focus courses) selected and evaluated, supported by self-assessment.
• Level of judgment: Provides feedback in the form of coaching, counseling, and mentoring (snapshot in time), corrective actions.
• Reporting: Reports and trends internally circulated for improvement; accrediting action publicly reported (aggregate).

Accountability
• Scope of review: Specific areas (standards) identified as part of all reviews to address common policy issues.
• Level of judgment: External reference points (policies, regulations, statutes) reviewed and evaluated (with standards).
• Reporting: Meaningful and appropriate information about institutional performance and actions reported (impact issues, value-added, trends).

Figure 5-1. Core purposes of Army accreditation

b. Army accreditation’s core purposes serve as a comprehensive framework underlying the
AEAS and evaluation rubrics; underscoring a multifactorial versus a binary process, or
accreditation versus inspection; and facilitating a culture of evidence across Army learning
institutions. Army accreditation’s focus is on helping all Army learning institutions continuously
improve, and its aim is to move all learning institutions to their highest levels of performance.

5-3. Army accreditation classifications

a. Army learning institutions undergoing accreditation earn one of two Army accreditation
classifications: level I, accredited or reaffirmed; or level II, non-accredited.

(1) Level I, accredited or reaffirmed. This classification is awarded to Army learning
institutions achieving an overall accreditation rating of 80 or above. “Accredited” is the
classification awarded to learning institutions achieving Army accreditation for the first time.
“Reaffirmed” is the classification awarded to learning institutions having previously achieved
and sustained Army accreditation: It is a renewal of their accreditation status.

(2) Level II, non-accredited. This classification is assigned to Army learning institutions
receiving an overall accreditation rating below 80. The non-accredited classification consists of
two sub-classifications: conditional accreditation, and candidate for accreditation.

(a) Conditional accreditation. This non-accredited sub-classification is assigned to
learning institutions receiving an overall accreditation rating of 60 or greater and less than 80.
Within 12 months, an Army accreditation team reevaluates all AEAS criteria rated 75 or less.

(b) Candidate for accreditation. This non-accredited sub-classification is assigned to
learning institutions receiving an overall accreditation rating of less than 60. Within 12 months,
an Army accreditation team conducts another complete accreditation.

b. All Army learning institutions, regardless of classification or sub-classification, prepare a
corrective action plan for all AEAS criteria and sub-criteria rated below 100.
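
The classification thresholds above reduce to a simple decision rule, restated in the sketch below for illustration. This is not an official AQAP tool; the “first_accreditation” flag is a hypothetical parameter used only to choose between the “accredited” and “reaffirmed” labels within level I.

# Illustrative restatement of the paragraph 5-3 thresholds; not an official tool.
# "first_accreditation" is a hypothetical flag used only to choose between the
# "accredited" and "reaffirmed" labels within level I.
def classify(overall_rating, first_accreditation=False):
    """Map an overall accreditation rating to its classification."""
    if overall_rating >= 80:
        return "Level I: accredited" if first_accreditation else "Level I: reaffirmed"
    if overall_rating >= 60:
        return "Level II: non-accredited (conditional accreditation)"
    return "Level II: non-accredited (candidate for accreditation)"

print(classify(86.5))  # Level I: reaffirmed
print(classify(72))    # Level II: non-accredited (conditional accreditation)
print(classify(55))    # Level II: non-accredited (candidate for accreditation)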

5-4. Army accreditation methodology

a. All Army accreditations use virtual methods. Based on the results of each accreditation
team lead’s mission analysis and planning, an accreditation may be conducted using primarily
virtual methods plus an approximately one-week in-person on-site visit at or toward the end of
the 120-day Army accreditation period, or using entirely virtual methods with no on-site visit.
The team lead determines whether a site visit is required to verify data.

b. The accreditation team lead, considering input from the learning institution and the
accreditation team, establishes the most-appropriate virtual methods to use during the
accreditation, and communicates specific requirements to the learning institution and the
accreditation team. The team lead typically uses virtual methods to conduct the in-brief and
initial impressions out-brief, as well as all internal and external meetings associated with the
accreditation.

c. Accreditation evaluators use virtual methods to the greatest extent possible to conduct
interviews, focus groups, student and instructor record reviews, test-control procedure reviews,
training observations, and facility walk-throughs. Evaluators, coordinating through the team lead,
may also request the learning institution take photographs or make recorded videos of various
facilities or activities. Any requirements for recorded video should be communicated to learning
institutions as early as possible to allow adequate time for coordinating resources and executing
recording processes.

d. Most virtual methods use the Army 365 digital platform.

5-5. Proponent assessment and Army accreditation
Proponent learning institutions assess their outlying subordinate schools and functionally aligned
RC learning institutions against the AEAS. The resulting proponent assessment reports become
part of the higher-level accredited institutions’ accreditation reports. For more information about
proponent assessment, see chapter 6, section VII. For an example proponent assessment timeline
and process, see appendix N.

Section II
Army Accreditation Administration
This section provides guidance on the Army accreditation schedule and Army accreditation
funding.

5-6. Army accreditation funding
HQ TRADOC QAO seeks funding using the program objective memorandum (POM) process to
support accreditation functions. Base-level temporary duty (TDY) funding for Army
accreditation team members to conduct accreditation on-site visits is contingent on availability of
TDY funds provided in the POM process. For more information on the Army accreditation
funding process, or on the availability of TDY funds for conducting accreditation evaluations,
contact the Chief, HQ TRADOC QAO Plans and Operations Division.

5-7. Army accreditation schedule

a. Developing and maintaining the Army accreditation schedule.

(1) HQ TRADOC QAO develops and maintains a three-year Army accreditation
schedule based on the three-year Army accreditation cycle, organizational realignments, and as
directed by CG, TRADOC. Specific factors informing the accreditation schedule include the
dates of each learning institution’s last actual accreditation period, each institution’s ATRRS
course schedule for the upcoming accreditation period, the availability of each institution’s
personnel and other resources, and the availability of the Army accreditation team’s personnel
and other resources.

(2) HQ TRADOC QAO reviews and updates the Army accreditation schedule
approximately every 120 days and as changes occur. Each learning institution should review the
published accreditation schedule periodically to ensure continued awareness of their institution’s
scheduled accreditation period.

b. Publishing the Army accreditation schedule.

(1) HQ TRADOC QAO publishes the most-current three-year Army accreditation
schedule to the AQAP portal: https://armyeitaas.sharepoint-mil.us/sites/tr-hq-aqap.

(2) The accreditation schedule’s entries list the evaluated learning institution’s name, the
assigned accreditation team lead’s name, and the inclusive dates of the 120-day Army
accreditation period.

c. Rescheduling Army accreditation.

(1) Army learning institutions may request to reschedule their 120-day Army accreditation
period for good cause, such as when no training will be in session during the scheduled
accreditation period or when a significant interruption to the learning institution’s operations is
anticipated. The need to reschedule should be significant and justifiable, not merely a matter of
convenience. Every effort should be made to maintain a three-year accreditation cycle.

(2) To request a change to their scheduled accreditation period, the learning institution
submits an official memorandum from their institution’s commander, commandant, or civilian or
military equivalent addressed to the AQAP Director. The memorandum provides justification for
the requested change and recommends primary and alternate 120-day Army accreditation
periods.

(3) If a learning institution wishes to request a change to its accreditation period, it should
do so as soon as possible to support accreditation resource planning and allocation. If a learning
institution requests a change too late in the accreditation scheduling process, HQ TRADOC
QAO may not be able to re-allocate resources to support the requested change.

(4) This process does not apply to RC learning institutions needing to reschedule their
Army accreditation as a result of scheduling outcomes from the RC’s institutional
training/schedule workshops held for the year of accreditation execution (see para 5-7d).

d. Rescheduling reserve component Army accreditation.

(1) RC learning institutions build pre-execution-year course schedules during the RC’s
annual institutional training/schedule workshop. Because HQ TRADOC QAO develops the
Army accreditation schedule three years in advance based on estimated future RC course
schedules, RC learning institutions’ actual course schedules in the year of accreditation
execution may not align with their scheduled accreditation period.

(2) Immediately following the RC’s institutional training/schedule workshop held for the
year of accreditation execution, RC learning institutions compare their course scheduling
outcomes with the Army accreditation schedule to ensure classes will be in session at some point
during their 120-day Army accreditation period. If no classes will be in session during that
period, RC learning institutions contact the HQ TRADOC QAO Accreditation Program
Specialist as soon as possible to coordinate adjustments to the Army accreditation schedule.

(3) RC learning institutions wishing to reschedule their Army accreditation for any reason
other than the results of the RC’s institutional training/schedule workshops held for the year of
accreditation execution may follow the process described in paragraph 5-7c.

Section III
Army Accreditation Team
This section provides guidance on Army accreditation team roles, responsibilities, and
qualifications; accreditation observers; and confidentiality and non-disclosure.

5-8. Forming the Army accreditation team

a. HQ TRADOC QAO identifies, assigns, leads, and funds matrixed Army accreditation
teams to conduct Army accreditations. The AQAP Director assigns experienced evaluators from
HQ TRADOC QAO to serve as Army accreditation team leads. Other team members, whom HQ
TRADOC QAO selects based on their professional experience and expertise, serve as AEAS
leads and/or criterion evaluators. An accreditation team’s size and composition may vary
depending on the accredited learning institution’s size and complexity.

b. Depending on the expertise required for any given Army accreditation, AEAS leads and
criterion evaluators may be selected from various organizations across the Army, resulting in a
cross-functional or matrixed team of evaluators. Using a matrixed evaluation team facilitates a
rigorous and unbiased accreditation effort.

5-9. Army accreditation team member qualifications
To serve on an Army accreditation team, team members meet the qualifications shown in figure 5-2.

Figure 5-2. Army accreditation team member qualifications

5-10. Army accreditation team member roles and responsibilities

a. As shown in table 5-1, Army accreditation team members’ responsibilities vary depending
on their assigned roles.

Table 5-1
Army accreditation team roles and major responsibilities
Role: Team Lead
• Lead and manage entire accreditation effort, from mission analysis and planning through final report, and review the institution’s corrective action plan.
• Lead and manage all accreditation team members.
• Communicate and coordinate with associated proponent assessment team leads.
• Arbitrate and adjudicate issues and findings from AEAS leads.
• Review impact issues and value-added practices, and process as required.
• Coordinate, analyze, and synthesize AEAS reports into final accreditation report; write executive summary and summary impact issues and value-added practices.
• Serve as AEAS lead for at least one AEAS.
Role: AEAS Lead
• Arbitrate and adjudicate issues and findings from AEAS criterion evaluators.
• Review impact issues and value-added practices within AEAS, and process through the team lead.
• Coordinate, analyze, and synthesize AEAS rubrics into AEAS report; write AEAS summaries and recommendations, impact issues, and value-added practices.
• Serve as criterion evaluator, as assigned.
Role: Criterion Evaluator
• Evaluate all assigned criteria by collecting and polyangulating data.
• Identify any impact issues and value-added practices, and process through the AEAS lead.
• Complete rubrics for all assigned criteria.

b. Sometimes multiple Army accreditation teams work together on one learning institution’s
associated accreditations; for example, there may be one team assigned to a CoE, and another
team assigned to the CoE’s functionally aligned NCOA. In these cases, each team has its own
team lead responsible for leading and managing the separate accreditation events, evaluators, and
reports; however, the CoE accreditation team lead normally assumes the overall lead for
combined events and resources, such as in-brief, action officer meeting, virtual and on-site visit
coordination, initial impressions out-brief, and most shared evaluator resources.

5-11. Expanded team effort
Army accreditation is a team effort, and not just for members of the formal accreditation team,
but for learning institutions as well. The learning institution QAO supports the Army
accreditation team by, for example, providing all requested information in a timely manner,
coordinating and scheduling accreditation events for their institution, and working closely with
the team to address any issues or concerns.

5-12. Army accreditation observers

a. Opportunities sometimes exist for AQAP community members to observe another learning
institution’s Army accreditation. The primary purpose of observing is to watch and learn about
accreditation and its multitude of associated processes. Because its primary purpose is learning,
observation is normally best-suited for newer evaluators or QAO directors with little-to-no
accreditation experience.

b. All requests to observe another learning institution’s Army accreditation go through the
accreditation team lead. The team lead coordinates the request between the requestor and the
learning institution’s QAO.

(1) The learning institution may approve the request, but also has the first right of refusal
and may disapprove the request for any reason: Learning institutions have no obligation to allow
observers.

(2) If the learning institution does approve the request, the team lead has the second right
of refusal and may disapprove the request after considering factors such as the number of
observers already approved, the size and complexity of the accreditation, and other factors
related to effective execution of the accreditation. The team lead may also consider the
developmental needs of the requestor when deciding between multiple requestors.

c. If the learning institution and the accreditation team lead approve a request to observe an
accreditation:

(1) The observer obtains funding from their own organization and makes their own travel
and lodging arrangements if observing during an on-site visit.

(2) The observer coordinates all observation activities through the team lead so that the
team lead is situationally aware of the observer’s activities at all times.

(3) The observer may observe most accreditation activities, depending on the
circumstances; however, they should not observe focus groups or interviews addressing sensitive
topics, interviews with the command team, or evaluator huddles.

(4) Observers should not ask evaluative questions of any member of the accredited
institution, and they should not interrupt or interfere in accreditation events or activities in any
way. Observers exercise the mindset of being a “fly on the wall” as they observe the
accreditation process.

(5) For learning purposes and for enhancing the observation experience, the accreditation
team lead and other team members provide observers with situational coaching and mentoring,
as appropriate.

5-13. Confidentiality and non-disclosure

a. Army accreditation team members, which include evaluators conducting right-seat-rides
and accreditation observers, commit to being good stewards of the information they are exposed
to during every accreditation. Team members and observers do not breach the confidentiality of
the evaluated learning institution, the institution’s individual participants, or the accreditation
team. This means that team members and observers do not disclose:

(1) The institution’s information, to include findings and report information, with anyone
outside of the accreditation team.

(2) The identities of individual participants who provide accreditation information (for
example, during focus groups) whenever assurances of confidentiality are stated, implied, or
assumed.

(3) Information about the accreditation team’s discussions and deliberations regarding
any aspect of the institution’s accreditation performance without the AQAP Director’s explicit
permission.

b. To ensure full understanding of confidentiality and non-disclosure guidelines, all
accreditation team members and observers enter into a non-disclosure agreement with the AQAP
Director for all Army accreditations in which they participate. The AQAP Director reviews and
approves any requests for disclosure falling outside the scope of the non-disclosure agreement.

5-14. Proponent as Army accreditation team lead


Some proponent QAOs provide the accreditation team lead for accreditations of certain other
learning institutions, such as RTSMs, AATS, Army troop schools, the Marksmanship Training
Center, the Recruiting and Retention Battalion, and others. In these cases, a HQ TRADOC QAO
evaluator assists with evaluation and serves as the accrediting agency’s representative. The
proponent QAO team lead coordinates all accreditation events and works collaboratively with
the HQ TRADOC QAO evaluator to prepare and submit the accreditation report to the AQAP
Director, who approves and forwards the report through appropriate staffing channels for final
CG, TRADOC approval.

Section IV
Army Accreditation Timeline and Process
This section provides guidance on the Army accreditation timeline and process from mission
planning and analysis through corrective action plan and follow up.

5-15. Army accreditation timeline

a. Within the three-year Army accreditation cycle, the formal accreditation period is 120
days, or four months; however, the full accreditation activity timeline spans a period of
approximately eight months.


(1) The timeline begins with the Army accreditation team lead conducting accreditation
mission analysis and planning at least two months before the first day of the 120-day Army
accreditation period.

(2) The timeline ends with the accredited learning institution submitting a corrective
action plan approximately two months after the end of the 120-day Army accreditation period.

b. For more-detailed information about the Army accreditation timeline, see appendix G.
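
c. The basic timeline arithmetic described above can be illustrated with a short calculation. The
following sketch is illustrative only; the start date is hypothetical, and “two months” is
approximated as 60 days. It is not an official planning tool or format.

# Illustrative sketch of the accreditation timeline arithmetic described in paragraph 5-15.
# The start date is hypothetical, and "two months" is approximated as 60 days.
from datetime import date, timedelta

day_1 = date(2025, 1, 6)                             # hypothetical day 1 of the 120-day period

mission_analysis_start = day_1 - timedelta(days=60)  # mission analysis/planning begins at least ~2 months prior
day_120 = day_1 + timedelta(days=119)                # last day of the 120-day accreditation period
cap_due = day_120 + timedelta(days=60)               # corrective action plan ~2 months after the period ends

print("Begin mission analysis not later than:", mission_analysis_start)
print("Day 120 of the accreditation period:  ", day_120)
print("Corrective action plan due (approx.): ", cap_due)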

5-16. Army accreditation process


As shown in figure 5-3, the Army accreditation process can be divided into three high-level
phases. Phase one occurs before the first day of the scheduled 120-day Army accreditation
period; phase two occurs during the 120-day Army accreditation period; and phase three occurs
after the end of the 120-day Army accreditation period. For an example Army accreditation
process, see appendix H.

Phase 1 (before the 120-day period):
• Mission analysis and planning
• Self-study and self-assessment
• List of courses in session
• Proponent assessment reports (as applicable)
• Letter of notification

Phase 2 (during the 120-day period):
• Stakeholder meeting and in-brief
• Event scheduling and coordination
• Data collection events
• Initial analysis
• Initial impressions out-brief

Phase 3 (after the 120-day period):
• Final draft report
• Memorandum of acceptance or rebuttal
• Final report and certificate of accreditation

Figure 5-3. Phases of the Army accreditation process

a. Phase 1: Before the 120-day Army accreditation period.

(1) The accreditation team lead completes mission analysis and planning, which includes
selecting and notifying the accreditation team. The team lead also determines focus courses.

(2) The learning institution posts their self-study (or a link to their self-study) with their
self-assessment and corrective action plan to the AQAP portal and provides the team lead a list
of their courses that will be in session during the scheduled accreditation period.

(3) Associated proponent assessment team leads post their proponent assessment reports
to the AQAP portal (also see para 6-25).


(4) The AQAP Director provides the learning institution a letter of notification (LON).

b. Phase 2: During the 120-day Army accreditation period.

(1) As shown in figure 5-4, the 120-day Army accreditation period can be divided into
three sub-phases. Sub-phase one includes days 1 through 30 (30 days); sub-phase 2 includes days
31 through 90 (60 days); and sub-phase 3 includes days 91 through 120 (30 days).

Sub-phase 1 (days 1-30 of the 120-day period):
• Upload remaining documentary evidence
• Review documents
• Action officer meeting
• In-brief
• Event scheduling

Sub-phase 2 (days 31-90 of the 120-day period):
• Ongoing data collection and evaluation
• Schedule on-site visit if needed
• 75+ percent of evaluative work complete

Sub-phase 3 (days 91-120 of the 120-day period):
• Conclude any final data collection and evaluation
• Conduct on-site visit if needed
• Initial impressions out-brief
• 100 percent of evaluative work complete

Figure 5-4. Sub-phases of the 120-day Army accreditation period

(2) Sub-phase 1: During the first 30 days of the 120-day Army accreditation period.

(a) The accreditation team lead conducts an action officer meeting and formal in-brief
with the learning institution and determines data collection event requirements based on initial
analysis and evaluator input.

(b) Accreditation evaluators review the learning institution’s self-study and all
documentary evidence. After reviewing the self-study, evaluators coordinate their data collection
event requirements with the team lead.

(c) The learning institution’s QAO uploads any remaining documentary evidence and
assigns a scheduling coordinator to coordinate and schedule all virtual and any on-site data
collection events.

(3) Sub-phase 2: From day 31 through day 90 of the 120-day Army accreditation period.

(a) The accreditation team lead oversees execution of virtual data collection events.

(b) Accreditation evaluators execute virtual data collection events.


(c) The learning institution’s QAO continues coordination of the accreditation execution
schedule as needed, and stakeholders participate in virtual data collection events.

(d) The aim is for at least 75 percent of the evaluative work to be completed by the end of
this 60-day sub-phase.

(4) Sub-phase 3: From day 91 through day 120 of the 120-day Army accreditation period.

(a) The accreditation team lead oversees any virtual or on-site data collection events
needed for data polyangulation. They conclude this phase by conducting a formal initial
impressions out-brief with the learning institution’s senior leader and key stakeholders.

(b) Accreditation evaluators execute any virtual or on-site data collection events needed
for data polyangulation. They begin completing and submitting rubrics to the AEAS leads and
AEAS reports to the accreditation team lead as they complete data collection and analysis. They
also help prepare for and participate in a formal initial impressions out-brief with the learning
institution’s senior leader and key stakeholders.

(c) The learning institution QAO coordinates and hosts any on-site visit, if needed. They
also receive the formal initial impressions out-brief.

(d) The aim is for 100 percent of the evaluative work to be completed by the end of this
30-day sub-phase.

c. Phase 3: After the 120-day Army accreditation period.

(1) After the end of the 120-day scheduled Army accreditation period or initial
impressions out-brief, accreditation evaluators submit any outstanding AEAS reports with
completed rubrics to the accreditation team lead, who develops a final draft Army accreditation
report.

(2) The AQAP Director provides the final draft report to the learning institution, who
reviews the report and returns a memorandum of acceptance or rebuttal and any official requests
for clarification. The final report goes through appropriate staffing channels for final CG,
TRADOC approval. The AQAP Director provides the learning institution with the approved
final report and certificate of accreditation.

(3) The learning institution submits a corrective action plan for all AEAS criteria and sub-
criteria rated below 100. Completion of this phase may take up to approximately 60 days.

5-17. On-the-spot corrections


The learning institution is encouraged to make immediate or “on-the-spot” corrections whenever
feasible; however, the accreditation report still reflects all issues observed, whether or not
corrected during the accreditation period. The report reflects the condition at the time of initial
observation.


5-18. Impact issues and value-added practices


The accreditation team lead processes any impact issues and value-added practices identified
during accreditation following the processes described in appendix B.

5-19. Army accreditation follow-up actions


Army accreditation follow-up actions may include reevaluation for learning institutions receiving
a non-accredited rating; review of updated self-studies and new self-assessment reports; or
telephone calls or e-mails to discuss the progress and status of any impact issues.

Section V
Army Accreditation Staff Assistance Visit
This section provides an overview of the Army accreditation staff assistance visit (SAV), to
include requesting a SAV, funding a SAV, conducting a SAV, and conducting an after action
review (AAR) of the SAV.

5-20. Introduction

a. An Army accreditation SAV is a formalized event in which a team or an evaluator


representing the accrediting agency helps a learning institution be better able to identify its own
strengths and weaknesses in relation to the AEAS and recommends ways for the institution to
improve its processes.

b. The purpose of an Army accreditation SAV is to assist, coach, counsel, and mentor Army
learning institutions on the AEAS and Army accreditation process. A SAV is not an
accreditation or pre-accreditation.

5-21. Requesting a staff assistance visit

a. Some of the reasons Army learning institutions request accreditation SAVs are shown in
figure 5-5.

Figure 5-5. Reasons for requesting an Army accreditation staff assistance visit

b. Learning institutions requesting an Army accreditation SAV are responsible for knowing
and clearly communicating their needs and expectations for a SAV. One reason for this is to
effectively inform the AQAP Director’s SAV-approval decision. Another reason is to enable the
team or evaluator conducting the SAV to focus their efforts and provide the most-needed
assistance. Some learning institutions, particularly those that have never been accredited, may
not yet know what they need or what their expectations should be. Such learning institutions
should, before submitting a request for a SAV, contact and coordinate with the AQAP Director
for guidance.

c. A learning institution requests an Army accreditation SAV by submitting a formal
memorandum to the AQAP Director from their learning institution’s commander, commandant,
or civilian or military equivalent. The request describes the learning institution’s specific needs
and expectations for the SAV and the requested SAV dates.

d. An Army accreditation SAV is ideally conducted 12 months before the learning
institution’s 120-day Army accreditation period. This timeframe allows the institution adequate
time to apply what they learn from the SAV to prepare themselves for accreditation. As shown in
figure 5-6, the learning institution submits a SAV request not later than 18 months before their
scheduled accreditation period. This timeframe allows HQ TRADOC QAO adequate time to
plan, allocate resources, and otherwise prepare for conducting the SAV, if approved, 12 months
before the learning institution’s accreditation period begins.

Request SAV: not later than 18 months before day 1 of the Army accreditation period.
Conduct SAV: 12 months before day 1 of the Army accreditation period.

Figure 5-6. Accreditation staff assistance visit request timeline

e. After reviewing the Army accreditation SAV request and the learning institution’s last
accreditation report, if applicable, the AQAP Director determines if the need for a SAV is
indicated. If the need for a SAV is indicated, the schedule permits, and personnel resources are
available, the director approves the request and decides if the SAV will be conducted using
virtual methods or on-site. Most SAVs can and should be conducted using virtual methods as
much as possible.

5-22. Staff assistance visit funding


A learning institution requesting an Army accreditation SAV funds all SAV-related costs. One
exception is a learning institution seeking accreditation for the first time, in which case HQ
TRADOC QAO funds all SAV-related costs. Because a SAV can and should be conducted using
virtual methods as much as possible, there is often no requirement for an expensive on-site visit.

5-23. Conducting a staff assistance visit


During an Army accreditation SAV, a SAV team or SAV evaluator assists, coaches, counsels,
and mentors the learning institution’s stakeholders in those areas that best align with the reasons
the learning institution requested the SAV. The team or evaluator conducts assistance activities
that enable the learning institution to effectively assess its own institution against the AEAS and
improve institutional performance.

5-24. Staff assistance visit after action review summary


An Army accreditation SAV does not result in a formal report; however, it does result in
feedback and discussion in the form of an AAR and a written AAR summary. The SAV team or
evaluator conducting the SAV prepares the written AAR summary, and in it describes the
assistance, feedback, and recommendations provided during the SAV. The team or evaluator
provides the AAR summary to the visited institution and the AQAP Director.

Chapter 6
Learning Institution Quality Assurance Program Management
This chapter describes the major functions of quality assurance program management at the
learning institution level. These major functions include the master evaluation plan (MEP),
internal evaluation, external evaluation, self-study and self-assessment, accreditation and
assessment coordination and preparation, quality assurance review, and instructor actions (IA)
review.

Section I
Quality Assurance Function Applicability
Not all quality assurance functions described in this chapter apply to all types of Army learning
institutions. Table 6-1 below depicts each function’s applicability by type of learning institution.
For a full list of all accredited and assessed Army learning institutions by organization and type,
visit the AQAP portal: https://armyeitaas.sharepoint-mil.us/sites/tr-hq-aqap.


Table 6-1
Learning institution quality assurance program applicability chart
QAO Function                                        Accredited            Accredited         Assessed
                                                    (Center/Proponent)    (Non-Proponent)    (Non-Proponent)
MEP (Develop)                                       ✓                     ✓                  *
Internal Evaluation (Conduct)                       ✓                     ✓                  ✓
External Evaluation (Conduct)                       ✓                     †                  †
Self-Assessment (Conduct)                           ✓                     ✓                  ✓*
Institutional Self-Study (Prepare)                  ✓                     ✓                  *
Quarterly Quality Assurance Review (Conduct)        ✓                     ✓                  ✓
Accreditation/Assessment Coordination (Conduct)     ✓                     ✓                  ✓
Proponent Assessment (Conduct)                      ✓
RTSM, AATS, Troop School Accreditation (Conduct)    ‡
IA Review (Conduct)                                 ✓
* Contributes to the higher-level accredited institution
† Receives external survey data for courses taught from proponent(s); conducts other types of external evaluation
‡ Works with HQ TRADOC QAO to lead the accreditation team; HQ TRADOC QAO is the accrediting authority

6-1. Accredited (center/proponent)


“Accredited (Center/Proponent)” in table 6-1 above refers to CoEs and non-CoE learning
institutions with proponent responsibility for courses and/or learning programs. CoE examples
include Cyber CoE, Intelligence CoE, and Medical CoE. Examples of non-CoE proponent
learning institutions include the MG Robert M. Joyce School for Family and MWR (morale,
welfare, and recreation), and the National Guard Professional Education Center.

6-2. Accredited (non-proponent)


“Accredited (Non-Proponent)” in table 6-1 above refers to learning institutions that are not the
proponent for the courses or programs they implement, but that are accredited and awarded an
Army accreditation certificate. Examples of accredited non-proponent learning institutions are
NCOAs, Army troop schools, USAR training brigades, RTSMs, ARNG regional training
institutes (RTI), AATS, and other ARNG learning institutions requiring accreditation.

6-3. Assessed (non-proponent)


“Assessed (Non-Proponent)” in table 6-1 above refers to learning institutions that are not the
proponent for the courses or programs they implement, are not individually accredited, and are
not awarded an Army accreditation certificate. Instead, course proponents assess these learning
institutions through the proponent assessment process. Proponent assessment results become part
of the higher-level accredited institution’s accreditation report. Although not individually
accredited, an assessed learning institution is considered accredited under its higher-level
institution’s accreditation. Examples of assessed learning institutions are USAR training
battalions, RTI training battalions, and proponents’ own outlying subordinate schools.


Section II
Master Evaluation Plan
This section provides an overview of the MEP, MEP frequency, and MEP elements.

6-4. Introduction
A MEP is a learning institution’s three-year planning document defining the institution’s strategy
for meeting its quality assurance evaluation requirements throughout the three-year Army
accreditation cycle. A MEP captures projected plans for conducting internal and external
evaluations and proponent assessments, as applicable.

6-5. Frequency

a. Not later than 1 June of each year, all accredited learning institutions post their three-year
MEPs to their designated folders in the MEP section of the AQAP portal. Assessed learning
institutions contribute to their higher-level accredited institutions’ MEPs in accordance with their
higher-level institutions’ policies and procedures.

b. Because MEPs are living documents, learning institutions may update their MEPs at any
time throughout the year, as needed. Learning institutions also update their self-studies with their
new or updated MEPs.

6-6. Master evaluation plan elements

a. A MEP consists of three major elements: a memorandum from the learning institution’s
commander, commandant, or civilian or military equivalent; a plan statement from the learning
institution’s QAO director (or equivalent); and a three-part MEP schedule workbook.

(1) Memorandum. The MEP’s memorandum from the learning institution’s commander,
commandant, or civilian or military equivalent introduces the MEP and serves as evidence that
the institution’s senior leader has reviewed and approved the MEP.

(2) Plan statement. The MEP’s plan statement from the QAO director (or equivalent) is
comprised of four sections: overview, methodology, challenges, and contents.

(a) The overview section concisely describes the MEP’s essential content.

(b) The methodology section describes the learning institution’s approach to conducting
internal and external evaluations.

(c) The challenges section briefly identifies any challenges to accomplishing the
evaluation mission, as identified through risk assessment.

(d) The contents section lists the applicable tabs of the MEP schedule workbook and
describes any other content or material significant to and included with the MEP.


(3) Schedule workbook. The MEP’s schedule workbook is comprised of three parts: Tab
A Part I, Tab A Part II, and Tab B.

(a) Tab A Part I is for scheduling internal and external evaluations of the learning
institution’s courses. This tab is also for identifying, based on risk assessment, those courses that
cannot be evaluated during the MEP period.

(b) Tab A Part II is for scheduling non-course evaluations. Non-course evaluations are
technically classified as internal evaluations; however, they address AEAS criteria that are not
normally observed, or not solely observed, at the course level. Non-course evaluations include
the institution’s self-assessment. They may also include evaluations of any of the institution’s
DOTMLPF-P domains against the AEAS.

(c) Tab B is for scheduling proponent assessments. This tab only applies to proponent
learning institutions with courses taught at functionally aligned RC learning institutions and/or
the proponent’s outlying subordinate schools. Proponent assessments align with the higher-level
accredited learning institution’s scheduled accreditation period as published in the Army
accreditation schedule.

b. A multi-branch CoE’s MEP consists of the three major elements, plus QAO annexes for
each subordinate proponent. QAO annexes are simply the subordinate proponents’ MEPs, each
consisting of the major MEP elements described above.

c. For an example commander’s memorandum and QAO director’s plan statement, and for
the most-current schedule workbook template, visit the AQAP portal:
https://armyeitaas.sharepoint-mil.us/sites/tr-hq-aqap.

Section III
Internal Evaluation
This section provides an overview of internal evaluation, to include course and non-course
evaluations.

6-7. Introduction
Internal evaluation, one of the AQAP’s five major functions, is a learning institution’s quality
assurance review of its own processes and functions. It provides the means to assure Army leaders
that the Army’s training and education products, programs, and processes are efficient and
produce desired results. It also provides learning institutions the means to improve and sustain
high levels of institutional performance across the DOTMLPF-P domains. Types of internal
evaluation include course evaluation, non-course evaluation, self-study, and self-assessment.
This section addresses course and non-course evaluation only. For information about self-study
and self-assessment, see chapter 6, section IV.

6-8. Applicability
All learning institutions conduct internal evaluation.


6-9. Course evaluations

a. Course evaluations are focused evaluations of individual courses as measured against
applicable AEAS criteria.

b. Within the three-year Army accreditation cycle, all Army learning institutions conduct course
evaluations of all courses designed, developed, and/or implemented at their institutions.

c. Course evaluations may include any or all of the possible activities shown in figure 6-1
(below) depending on the nature of the evaluated course and the scope of the evaluation.

Figure 6-1. Possible course evaluation activities

d. All course evaluations result in a written report of evaluation and the evaluated course’s
corrective-action plan for all AEAS criteria and sub-criteria rated below 100. Although the
AEAS evaluation report tool is recommended for course evaluation reports, QAOs may use any
report format as long as it provides the same information as the AEAS evaluation report tool.
QAOs distribute course evaluation reports to institutional stakeholders as required by local
policy and procedure. Additionally, QAOs incorporate course evaluation results and corrective
action plans into their institution-level corrective action plans as applicable at the institution level
(see para 6-22).

e. Although course evaluations are less formal than Army accreditations or proponent
assessments, course evaluation processes should mirror elements of accreditation or proponent
assessment processes as much as practicable. Mirroring processes is one key method of

54
TRADOC Pamphlet 11-21

preparing an institution’s stakeholders for their actual accreditation or proponent assessment. For
an example course evaluation process that mirrors elements of accreditation and proponent
assessment processes, see appendix J.

6-10. Non-course evaluations

a. Non-course evaluations are focused evaluations of AEAS criteria and sub-criteria not
normally observed, or not solely observed, at the course level. Learning institutions conduct a
variety of non-course evaluations, including but not limited to institution-level evaluations of
various processes, such as ADDIE, faculty and staff development, test control, student
recordkeeping, instructor recordkeeping, training facilities maintenance, and all other AEAS-
and DOTMLPF-P domain-related processes.

b. A learning institution may tailor its non-course evaluation approach to the institution’s
mission, priorities, performance needs, available resources, and command guidance. All non-
course evaluations result in a written report of evaluation and a corrective-action plan from owners
of the evaluated processes for all AEAS criteria and sub-criteria rated below 100. Although the
AEAS evaluation report tool is recommended for non-course evaluation reports, QAOs may use
any report format if it provides the same information as the AEAS evaluation report tool. QAOs
distribute non-course evaluation reports to institutional stakeholders as required by local policy
and procedure.

6-11. Impact issues and value-added practices


The learning institution QAO processes any impact issues and value-added practices identified
during internal evaluation following the processes described in appendix B.

Section IV
Self-Study and Self-Assessment
This section provides an overview of self-study and self-assessment.

6-12. Introduction

a. Self-study is the process in which a learning institution critically examines its form and
substance, programs and processes, and strengths and challenges, then judges its performance
and effectiveness relative to its goals. A self-study’s primary purpose is to advance an
institution’s understanding of itself; however, it is not limited to self-understanding. Army
accreditation teams use a learning institution’s self-study to better understand the institution and
its history, mission, programs, functions, strengths, challenges, and more. They also use the self-
study to locate and review evidence of the institution’s compliance with accreditation standards.
An effective process for gathering information to help inform a self-study is self-assessment.

b. Self-assessment is the process in which a learning institution evaluates its own programs
and processes against the AEAS. Self-assessments provide learning institutions the means to
assure that their training and education products, programs, and processes are efficient and
produce desired results. Self-assessments also provide learning institutions the means to identify
and correct deficiencies to improve and sustain high levels of institutional performance across
the DOTMLPF-P domains.

c. Self-study and self-assessment are closely related, but the former focuses primarily on
introducing the institution and telling its story, while the latter focuses solely on evaluating the
institution against the AEAS. Both processes, especially when combined, drive greater
institutional self-awareness and improved institutional performance.

6-13. Applicability and frequency

a. Applicability. All accredited learning institutions develop and maintain self-studies and
conduct annual self-assessments against all applicable AEAS criteria. All assessed learning
institutions conduct annual self-assessments. Assessed learning institutions contribute to their
higher-level accredited institutions’ self-studies in accordance with their higher-level
institutions’ policies and procedures.

b. Frequency. Although accredited learning institutions formally update their self-studies
annually, the self-study is a living document that learning institutions should informally update
as changes occur. Each year, learning institutions attach their self-assessment reports with
associated corrective action plans to their self-studies, and post their self-studies (or a link to
their self-studies) to their learning institution’s site on the AQAP portal. The institution’s
commander, commandant, or civilian or military equivalent approves and signs all initial self-
studies, annual updated self-studies, and self-assessment reports.

6-14. Process
For an example self-study and self-assessment process, see appendix L.

6-15. Impact issues and value-added practices


The learning institution QAO processes any impact issues and value-added practices identified
during self-study and self-assessment following the processes described in appendix B.

Section V
External Evaluation
This section provides an overview of external evaluation, to include external surveys and other
types of external evaluation.

6-16. Introduction
External evaluation, one of the AQAP’s five major functions, is a quality assurance process that
provides Army learning institutions the means to determine if their training and education courses
meet the performance needs of the operational Army. Types of external evaluation methods include
external surveys and other types of external evaluation.

6-17. External surveys

a. External surveys provide Army learning institutions a means to solicit feedback from
graduates and their leaders on the quality of their learning institutions’ courses. This feedback
from the operational force informs institutions of how well their courses prepare Soldiers and
Army Civilians to perform their jobs when they arrive or return to their units.

b. HQ TRADOC QAO manages the AQAP’s external survey program in compliance with
AR 25-98 (see app A for required publications) and ensures that any AQAP external survey in
use is approved and displays the survey control number (SCN) and SCN expiration date issued
by the Army Information Management Control Officer (see app A for required reports). Note: a
report control symbol or Office of Management and Budget control number might be issued in
place of a SCN.

c. Proponent learning institutions conduct external surveys, ensuring the AQAP’s external
survey program is implemented in compliance with AR 25-98. Non-proponent learning
institutions receive external survey data for the courses they teach from their associated
proponent learning institution(s).

d. To support the external survey program, the AQAP provides proponent learning institutions
with at least one user license to administer their external surveys in the approved survey system of
record.

e. The AQAP Director reports aggregate external survey results at the strategic level, which
requires a commonality of data collection across Army learning institutions. It requires the use of
comparable survey items, and, most importantly, it requires identical feedback responses. To
achieve a commonality of data collection across the AQAP, learning institutions follow the external
survey guidelines in appendix K.

f. Learning institutions submit the names and description of any new or re-named external
surveys to the AQAP External Survey Program Manager before administering those surveys. The
External Survey Program Manager processes new and re-named survey submissions through
TRADOC G-6 for Headquarters, Department of the Army (HQDA) Records Management Division
processing and issuance of a SCN.

g. External surveys consist of two types of survey: graduate and leader.

(1) Graduate surveys consist of, at minimum, two specific AQAP questions (see para K-
1c). Learning institutions survey all course graduates 6 to 12 months after graduation. Depending
on the type and nature of a course, some graduates may be surveyed as early as three months
after graduation. An important consideration when determining how long after graduation to
survey graduates is how long graduates typically need at their units to implement the knowledge
and skills they gained at the course.

(2) Leader surveys consist of, at minimum, one specific AQAP question (see para K-2d).
Proponent QAOs survey leaders in the operational force every six months.

h. Learning institutions submit a quarterly summarized external survey data report to the HQ
TRADOC QAO External Survey Program Manager, who prepares a summary of the aggregate
results for the AQAP Director to brief TRADOC senior leaders. QAOs distribute their external
survey reports to institutional stakeholders as required by local policy.

i. For information about the external survey process and reporting, and the specific AQAP
graduate and leader survey questions, see appendix K. For information about general survey
concepts and methods, see paragraph 7-2.
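
j. Because institutions use identical feedback responses, quarterly summaries can be rolled up
for strategic-level reporting without translating between scales. The following sketch is a
minimal, hypothetical illustration of such a roll-up; the institution names, response labels, and
counts are assumptions and are not values prescribed by this pamphlet or appendix K.

# Illustrative sketch: aggregating identical feedback responses across institutions.
# Institution names, response labels, and counts below are hypothetical examples.
from collections import Counter

quarterly_summaries = {
    "Institution A": {"Strongly agree": 40, "Agree": 35, "Disagree": 5},
    "Institution B": {"Strongly agree": 22, "Agree": 30, "Disagree": 8},
}

aggregate = Counter()
for counts in quarterly_summaries.values():
    aggregate.update(counts)  # identical response labels roll up directly

total = sum(aggregate.values())
for response, count in aggregate.items():
    print(f"{response}: {count} responses ({count / total:.0%})")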

6-18. Other types of external evaluation

a. Learning institutions execute other types of external evaluation based on available
resources and the institution’s mission, priorities, performance needs, and command guidance.
Other types of external evaluation may include any initiative that garners feedback from the
operational force on training and educational outcomes. They may include conducting interviews
or administering questionnaires at events such as professional conferences and forums;
reviewing relevant feedback from the Center for Army Lessons Learned; and reviewing the
results of studies that examine the impact of the institution’s training and education programs.

b. When QAOs conduct other types of external evaluations, they prepare written reports of
each evaluation and include the following information in each report: the purpose of the
evaluation, the evaluation methods used, and evaluation results (overall and by AEAS criterion).
QAOs may use any report format, depending on the purpose and nature of the evaluation. QAOs
distribute external evaluation reports to institutional stakeholders as required by local policy and
procedure.

Section VI
Corrective Action Plan
This section provides an overview of corrective action plans.

6-19. Introduction
A corrective action plan is the commander, commandant, or civilian or military equivalent’s plan
for resolving shortcomings and deficiencies for all AEAS criteria and sub-criteria rated below
100 during an Army accreditation, proponent assessment, and/or self-assessment. A corrective
action plan identifies the staff lead responsible for ensuring compliance with the Army
requirement and includes all actions taken toward resolution.

6-20. Applicability
All Army learning institutions develop corrective action plans for all AEAS criteria and sub-
criteria rated below 100 during all Army accreditations, proponent assessments, and/or self-
assessments.

6-21. Process

a. Learning institution QAOs coordinate with staff lead action officers and other stakeholders
as needed to develop their institutions’ corrective action plans. QAOs monitor corrective action
plan implementation and progress, maintain the corrective action plan document, and brief senior
leadership on the progress at least quarterly. This is a continuous, ongoing effort for as long as
the learning institution has any corrective actions outstanding.

b. For greater efficiency and to avoid confusion, QAOs normally maintain only one
institution-level corrective action plan at a time. For example, when a learning institution
conducts its next self-assessment following an Army accreditation, the QAO transfers any
corrective actions still outstanding from the accreditation corrective action plan to the self-
assessment corrective action plan for a single, consolidated plan.

c. QAOs may format their learning institutions’ corrective action plans based on local
requirements and preferences. A corrective action plan template is available in the AEAS section
of the AQAP portal: https://armyeitaas.sharepoint-mil.us/sites/tr-hq-aqap.
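
d. As a minimal illustration of the consolidation described in paragraph 6-21b, the following
sketch flags AEAS criteria rated below 100 in a new evaluation and carries forward still-open
items from a prior plan into a single consolidated plan. The record fields, criteria numbers, and
ratings are hypothetical and do not reflect the official corrective action plan template.

# Illustrative sketch: consolidate corrective actions for AEAS criteria rated below 100.
# The field names, criteria numbers, and ratings are hypothetical examples only.

prior_plan = [
    {"criterion": "3-5c", "staff_lead": "G-3/5/7", "status": "open"},
    {"criterion": "2-1a", "staff_lead": "QAO", "status": "closed"},
]

new_results = [
    {"criterion": "1-2b", "rating": 85},
    {"criterion": "4-3", "rating": 100},
]

# New corrective actions: any criterion or sub-criterion rated below 100.
consolidated = [
    {"criterion": r["criterion"], "staff_lead": "TBD", "status": "open"}
    for r in new_results if r["rating"] < 100
]

# Carry forward corrective actions still outstanding from the prior plan.
already_listed = {item["criterion"] for item in consolidated}
consolidated += [
    item for item in prior_plan
    if item["status"] == "open" and item["criterion"] not in already_listed
]

for item in consolidated:
    print(item)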

6-22. Corrective action plans for internal evaluation

a. Learning institution QAOs implement corrective action plans for course evaluations and
other internal evaluations, with the course or organization developing and maintaining the plan,
and the QAO following up with the course or organization on the plan’s implementation and
progress. This facilitates a structured and intentional approach for courses and organizations to
correct shortcomings and deficiencies at the course or organization level. It also facilitates a
more-focused follow up with the course or organization.

b. Learning institution QAOs incorporate internal evaluation results and corrective action
plans into their institution-level corrective action plans as applicable at the institution level.

Section VII
Proponent Assessment

6-23. Introduction
Proponent assessment, one of the AQAP’s five major functions, is the quality assurance process of
proponent learning institutions evaluating their outlying subordinate schools and functionally
aligned RC learning institutions against the AEAS. The resulting proponent assessment report
becomes part of the higher-level learning institution’s Army accreditation report.

6-24. Applicability and frequency


Proponent QAOs conduct proponent assessments in accordance with the three-year Army
accreditation cycle and the Army accreditation schedule.

6-25. Timing
Proponent QAOs either conduct proponent assessment prior to the associated Army accreditation or
participate directly in the associated accreditation. They coordinate with the associated Army
accreditation team lead to confirm the required proponent assessment timing.

a. Prior to accreditation. If the associated Army accreditation team lead determines that the
proponent assessment will be conducted prior to the associated Army accreditation, the
proponent QAO submits the final proponent assessment report not later than 60 days before the
first day of the associated 120-day Army accreditation period.

b. Concurrently with accreditation. If the associated Army accreditation team lead determines
that the proponent QAO will participate directly in the accreditation, the proponent QAO does
not conduct a proponent assessment and does not submit a separate proponent assessment report.
Instead, select proponent QAO evaluators serve as part of the HQ TRADOC QAO-led matrix
accreditation team and contribute directly to the Army accreditation report in accordance with
the accreditation team lead’s guidance. Utilizing a matrix team to conduct Army accreditation
maximizes resources.

Note. Proponent assessment team members serving on Army accreditation teams may be asked
to evaluate criteria outside of those normally evaluated during a proponent assessment.

6-26. Process

a. Proponent assessment processes mirror those of Army accreditation; however, there are
some differences. One difference is that a proponent assessment’s scope is considerably smaller
than that of accreditation in terms of applicable AEAS criteria. In most other ways, the two
processes are very similar. As with all AQAP evaluation processes, proponent assessment
processes may vary depending on the assessed learning institution’s mission, structure, and other
variables.

b. For an example proponent assessment timeline and process, see appendix N. For
information about how to submit funding requests to HQ TRADOC QAO for travel to conduct
proponent assessments, see appendix O.

6-27. Impact issues and value-added practices


The assessment team lead processes any impact issues and value-added practices identified
during proponent assessment following the processes described in appendix B.

6-28. Proponent assistance


In addition to conducting proponent assessments, proponent QAOs provide the learning
institutions that they assess with ongoing informal assistance, coaching, counseling, mentoring,
and other quality assurance support throughout the three-year Army accreditation cycle. The type
and level of assistance will vary depending on the needs of the assessed learning institution and the
resources available. Proponent QAOs conduct assistance activities that enable the learning
institution to effectively assess their own institution against the AEAS and improve institutional
performance. Proponent assistance may be conducted using virtual methods or on-site.


Section VIII
Accreditation and Assessment Coordination and Preparation

6-29. Introduction
Learning institution QAOs coordinate Army accreditation or proponent assessment events with
the accreditation or assessment team and prepare their institutions to participate fully in the
Army accreditation or proponent assessment process, as applicable.

6-30. Coordinating with accreditation or assessment team


Learning institution QAOs coordinate Army accreditation or proponent assessment events with
the accreditation or assessment team for their institutions. These coordination activities are
shown in figure 6-2.

Figure 6-2. Accreditation and assessment coordination activities

6-31. Preparing institutions for accreditation or assessment


Learning institution QAOs prepare their institutions to participate fully in the accreditation or
proponent assessment process, as applicable. These preparation activities are shown in figure 6-3.


Figure 6-3. Accreditation and assessment preparation activities

6-32. Operations center for on-site visits


During Army accreditation and proponent assessment on-site visits, when held, learning
institutions provide the accreditation or assessment team a work location, known as an operations
center.

a. The operations center should be large enough to seat and provide ample workspace for all
evaluators and any observers. It should be open and ready for the team when they arrive, and it
should be lockable so that evaluators may leave materials overnight.

b. The learning institution’s QAO supports operations center management and serves as
liaison for coordinating schedule changes, resolving any equipment or other problems, and
ensuring no interruptions to on-site visit events and activities.

c. The learning institution provides a projector and any necessary power strips, extension
cords, or any other devices needed to support equipment in the operations center. The learning
institution also provides printer and copier capabilities, which could be direct access to a nearby
printer and copier, or a liaison to print and/or make copies for the accreditation team.

d. The learning institution normally provides the following office supplies in the operations
center: easel with butcher block paper, markers, pens, notebooks, paper clips, binder clips, and
stapler with staples.

e. Only if feasible, the learning institution provides wireless internet capability in the
operations center. This capability can enhance accreditation and assessment operations but is not
required.


Section IX
Quarterly Quality Assurance Review

6-33. Introduction
Quarterly quality assurance reviews provide a learning institution’s senior leaders with
summarized progress and results collated and synthesized from all quality assurance activities
conducted during the last quarter, to include course, non-course, and external evaluations;
proponent assessments; self-assessments and self-studies; and accreditation, as applicable. The
review includes a summary of the quality assurance trends across the institution, any new impact
issues and value-added practices, and the status of any open corrective action plan items and
impact issues.

6-34. Applicability
All learning institution QAO directors (or equivalent) brief a quarterly quality assurance review
to their learning institution’s commander, commandant, civilian or military equivalent, and/or
other responsible senior leader, such as deputy commander, deputy commandant, deputy
director, or other senior staff member.

Section X
Instructor Actions Review

6-35. Introduction
TRADOC’s Instructor Requirements Model (IRM) is a method of determining instructor
manpower requirements. The IRM allows courses to identify additional instructor workload in
their programs of instruction (POI). The tasks associated with this additional instructor workload
are IAs. IAs are requirements-producing instructor work categories documented in POIs. They
are based on time and instructor-to-student ratios (ISR), and they capture instructors’ tasks and
work hours when not formally executing POI lessons with students. For more information about
IAs, see TP 350-70-14 (see app A for required publications).

6-36. Applicability and scope

a. Proponent QAOs conduct IA reviews of their institutions’ new POIs (new POIs only) as
part of the new POI submission process. This is a quality assurance review of the IAs to assure
effective processes; it is not a quality control review of the IAs, the POI, or the POI’s lessons.
For more information about quality assurance versus quality control, see paragraph 2-8.

b. Several categories of courses or types of instructor tasks are currently exempt from IRM,
so IAs and IA reviews are not currently required for those courses or tasks. Example IA
exemptions include all RC courses; all inter-service training review organization courses; basic
combat training/one station unit training, phase I; instructor pilot tasks within aviation courses;
and non-TRADOC courses. For information about specific exemptions, see TP 350-70-14.


6-37. Process

a. Upon completion of all training developer and course manager quality control reviews of
IAs in the new POI, proponent quality assurance evaluators review the IAs to assess whether
they were developed in accordance with the IA guidance provided in TP 350-70-14. Evaluators
may also refer to the IA review evidence guidelines in appendix P.

(1) If the quality assurance evaluator assesses that the IAs were not developed in
accordance with IA guidance, he or she documents a statement of non-concurrence, the total
number of IAs in the new POI, and the reason for non-concurrence in the POI’s TDC change
history section.

(2) If the quality assurance evaluator assesses that the IAs were developed in accordance
with IA guidance, he or she documents a statement of concurrence, the total number of IAs in the
new POI, and the reason for concurrence in the POI’s TDC change history section.

b. Once the course manager and the quality assurance evaluator have concurred with the IAs
in the new POI, the course manager or developer updates the memorandum of transmittal to
reflect the total IA time and identify the concurring quality assurance evaluator, then forwards
the POI and memorandum to the commandant for approval.

Chapter 7
Quality Assurance Evaluation Concepts and Methods
This chapter introduces key quality assurance evaluation concepts and methods. Topics include
data collection, data analysis, communicating evaluation results, project management for
evaluators, and standards of professional evaluation practice. This chapter is not intended to be
all-inclusive, and readers are encouraged to expand their knowledge on these topics through
other means, such as current literature, professional development courses, workshops and
forums, and online resources.

Section I
Data Collection
This section introduces data collection concepts as they apply to quality assurance evaluation.
Data collection is the systematic process of gathering evaluation data. It involves data collection
planning and data collection methods. Data collection methods include surveys, document
reviews, observations, interviews, and focus groups. Quality assurance data is collected to inform
decisions as part of Army accreditation, proponent assessment, and internal and external
evaluation. Quality assurance data is not collected for programmatic or scholarly research
purposes.

7-1. Data collection planning


Data collection planning is the process of determining what and how much data to collect, how
and when to collect the data, from whom or where to collect the data, and the tools needed to
facilitate effective data collection. A data collection plan, which becomes part of the overall
evaluation plan, helps ensure that all evaluation team members and stakeholders clearly
understand the data collection requirements. By focusing on the evaluation questions, a data
collection plan facilitates efficiency, ensuring that only data relevant and meaningful to the
evaluation’s goals are collected.

a. Data sources. Data sources are the resources used to obtain data. Evaluators should choose
data sources carefully to ensure they can provide reliable (consistent) and valid (accurate)
information that is pertinent to the evaluation questions. Whenever feasible, data should come
from multiple sources to enable polyangulation, which increases evaluation rigor and enables
higher-quality results.

(1) Three categories of data sources are: existing data, people, and observations. Existing
data sources include course materials, organizational documents and records, reports, and
previously collected data. People data sources include individuals surveyed or interviewed.
Observation data sources include conditions and behaviors as they occur in their natural settings.

(2) Two types of data sources are internal and external. Internal data sources are found
within the evaluated organization: examples include course materials, instructors, and training
observations. External data sources are found outside the evaluated organization: examples
include ATRRS data, and graduates and leaders in the operational force.

b. Sampling.

(1) The intent of data collection is to collect sufficient data to enable successful analysis
and interpretation, and to ensure the validity of the results. Because of the vast amount of data
often available in quality assurance evaluations, collecting sufficient data usually involves
sampling. Sampling is the process of selecting a subset of a population of people or other units of
analysis, such as documents, records, or facilities, for the purpose of drawing conclusions about
the entire population. Sampling is necessary whenever it is not feasible to examine an entire
population.

(2) Representative samples provide an accurate reflection of the population. The sample
size needed for a representative sample depends on the evaluation’s goals and the resources
available. It also depends on the desired precision of the results, the level of confidence needed,
and the estimated degree of variability. Precision is how closely the sample can predict values in
the entire population. The greater the precision needed, the higher the sample size should be.
Confidence is the acceptable risk that the sample represents the population’s average. The less
risk that can be accepted, the higher the sample size should be. Variability refers to the degree of
difference within the population. The more difference within the population, the higher the
sample size should be.

c. Sampling methods. Two over-arching sampling methods are probability and non-
probability. The method selected depends on the evaluation’s purpose, design, questions, and
resources, and the desired level of accuracy.


(1) Probability sampling is random selection that provides each participant or other unit
of analysis an equal chance of being selected. This method can be costly and time-consuming.
Two types of probability sampling are simple random sampling and stratified sampling.

(a) Simple random sampling is completely random and uses methods such as lottery,
drawing names, sampling software, and random numbers tables from statistics books.

(b) Stratified sampling divides a population into like groups, then performs simple
random sampling within each group. An example of stratified sampling is dividing a population
of students into assigned courses, then randomly selecting samples of students from each course
to participate in an institution-level student focus group.

(2) Non-probability sampling is non-random selection that provides some participants or
other units of analysis a greater chance of being selected. This method is less rigid and less
resource intensive than probability sampling; however, the bias potential is higher. Types of non-
probability sampling include convenience sampling, purposive sampling, quota sampling,
snowball sampling, and self-selection.

(a) Convenience sampling uses whatever participants or other units of analysis are
available at a given time.

(b) Purposive sampling, which is the most common type of sampling strategy used in
quality assurance evaluations, uses the specific participants or other units of analysis that the
evaluator feels are representative of the population according to pre-selected criteria.

(c) Quota sampling divides a population into like groups, then performs some form of
non-probability sampling within each group.

(d) Snowball sampling begins with selecting a few key participants who recommend
others to the selection.

(e) Self-selection involves advertising the data collection activity and asking for
volunteer participants.
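
(3) The following sketch, which uses only Python’s standard library, illustrates three of the
sampling methods described above: simple random sampling, stratified sampling, and purposive
sampling. The rosters, sample sizes, and selection criteria are hypothetical assumptions for
illustration only.

# Illustrative sketch of simple random, stratified, and purposive sampling.
# The rosters, sample sizes, and selection criteria below are hypothetical examples.
import random

rosters = {
    "Course A": [{"name": f"A-{i}", "component": "RA" if i % 2 else "USAR"} for i in range(1, 41)],
    "Course B": [{"name": f"B-{i}", "component": "RA" if i % 3 else "ARNG"} for i in range(1, 31)],
}

# Simple random sampling: every student has an equal chance of selection.
population = [s for roster in rosters.values() for s in roster]
simple_sample = random.sample(population, 10)

# Stratified sampling: sample randomly within each like group (here, each course).
stratified_sample = [s for roster in rosters.values() for s in random.sample(roster, 5)]

# Purposive sampling: select units that meet pre-selected criteria (here, a specific component).
purposive_sample = [s for s in population if s["component"] == "USAR"]

print(len(simple_sample), len(stratified_sample), len(purposive_sample))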

d. Data collection job aids.

(1) In full-scope quality assurance evaluations, evaluators can collect data from numerous
sources to answer questions related to literally hundreds of AEAS criteria. Without the right
tools in place, this can seem like a daunting task. Developing and implementing effective data
collection job aids can make the data collection task much less daunting and much more
manageable.

(2) Data collection job aids are just-in-time performance aids that guide and enhance
evaluator performance. They enable smarter performance, accurate task completion, and
standardized evaluation processes across evaluators. They save time, minimize errors, and reduce
the need to recall information from memory. They are especially useful when a task requires
judgment, is unusually complex, or is performed infrequently.

(3) Types of data collection job aids include process step guides, information crosswalks,
worksheets, checklists, questionnaires, procedure guides, quick start guides, reference
documents, call-out sheets, or any combination thereof.

(4) Evaluation data collection activities should use one or more job aids to guide the
process. For example, evaluators can use document checklists, by type of document, to conduct
document reviews. They can use a reference document with callouts to better understand what to
look for in a document review. And they can use an information crosswalk when reviewing for
alignment between different types of documents. Additionally, evaluators can use observation
forms, which are generally checklists with space to record notes, to conduct observations. They
can use interview guides to conduct interviews and focus group guides to conduct focus groups.
Interview and focus group guides are generally questionnaires with space to record notes.

(5) Complex data collection support processes should also use one or more job aids to
guide the process. For example, a quick-start guide, process step guide, and/or procedure guide
can help evaluators develop interview guides and conduct interviews.

(6) Effective data collection job aids make clear the ideal way to perform the task and the
desired output or result. They represent exemplary performance and are developed from an
expert performer, or SME, perspective. They are as direct and concise as possible, yet complete.
They present information in small chunks, and evaluators can quickly find what they need. When
appropriate, they provide examples and use visual presentations. They are relevant and aligned
with the evaluation questions. They address common errors to ensure those are not repeated.
They are easily accessible, kept current, and use version control to make clear which version is
the most recent.

(7) Although SMEs are integral to data collection job aid development, they are written
for the person with the lowest level of expertise expected to use them.

(8) It is important to pilot and validate job aids before implementing them. This provides
the opportunity to find out if instructions are clear; if all process steps are included and outlined
in the right order; if all needed information is available, or if too much information is included; if
the job aid is easy to use; and if the job aid facilitates collecting the right data.

7-2. Surveys
A survey is the process of collecting data (feedback) from a group of people and aggregating,
analyzing, and interpreting that data. A survey always contains a written set of questions, also
known as a questionnaire. Quality assurance surveys may be conducted to collect feedback from
stakeholders as part of Army accreditation, proponent assessment, and internal and external
evaluation. Quality assurance surveys are not conducted for programmatic or scholarly research
purposes. Quality assurance surveys are conducted in compliance with AR 25-98.

a. Advantages and challenges of surveys.


(1) Surveys are relatively quick and easy to administer, and responses, particularly
quantitative responses, are relatively easy to analyze and interpret. Surveys are usually more
feasible and efficient than in-depth interviews and focus groups. They can save participants and
evaluators time, and they can capture a much larger and more diverse audience.

(2) Some of the challenges of surveys include their impersonal nature, possible response
bias, gaps in the data, and the inability to explore responses in depth.

b. Planning for surveys.

(1) Survey planning involves defining a survey’s purpose and goals, and determining the
information needed to answer the evaluation questions. Every survey question should have a
specific purpose and be important to the evaluation, and survey responses should provide
actionable insights. Time is an important resource: participants should not spend their valuable
time answering questions that are not important to the evaluation, and evaluators should not
spend their valuable time analyzing responses to questions that are not important to the
evaluation. A survey should not collect information that is only nice to know.

(2) Survey planning also involves determining if data already exists that meets the
information need, and if a survey is the best method for collecting the data. Additionally, it
involves determining who will be surveyed, the questions to ask, and how the survey will be
administered.

c. Types of survey questions. There are four types of survey questions: closed-ended with
ordered choices, closed-ended with unordered choices, partially closed-ended, and open-ended.
Closed-ended questions have a set number of response options and provide quantitative data.
Open-ended questions include a field for writing short-answer and essay responses and provide
qualitative data. The type of survey question selected depends on the information needed.

(1) Questions that are closed-ended with ordered choices include dichotomous, Likert
scale, rating scale, and rank-order questions.

(a) Dichotomous questions offer only two response options; for example, “yes” and “no.”
Although these questions are quick and easy to answer and analyze and are appropriate in some
situations, they do not provide depth of information. Most topics need more than a “yes” or “no”
response to understand something about the degree of sentiment toward a topic. In some cases,
however, understanding the degree of sentiment is not required, such as when asking a branching
question that sends participants to certain survey questions or to the end of the survey. An
example dichotomous branching question is, “Did you graduate from an Army course in the last
six months? (yes or no).”

(b) Likert scale questions evaluate the degree to which participants agree or disagree with
a statement, their level of acceptability with a statement, the degree to which a statement reflects
them, how likely or unlikely they are to do something, and a wide variety of other measures.
Likert scale questions are effective when assessing the degree of participant satisfaction. They
usually offer six or seven response options; for example, “strongly disagree – disagree –
somewhat disagree – somewhat agree – agree – strongly agree.” They may also include a middle,
or “neutral,” option.

(c) Rating scale questions ask participants to rate the degree of their sentiment toward a
topic, usually on a scale of one to ten. These questions are effective when assessing the degree of
participant satisfaction.

(d) Rank-order questions ask participants to rank items in order of preference, frequency,
or perceived value.

(2) Questions that are closed-ended with unordered choices include select-one and select-
many multiple-choice questions. These questions do not include any open-ended response
options.

(3) Questions that are partially closed-ended include select-one and select-many multiple-
choice questions. These questions include an open-ended response option, such as “Other (please
specify).”

(4) Open-ended survey questions are text boxes that allow participants to write or type in
a response. Some open-ended text boxes are controlled by text masking, which may limit the
type or format of the data that can be entered, but most open-ended text boxes allow for free-
flow entries in which participants may write whatever they want, however they want. Open-
ended survey questions can be helpful for understanding participants’ responses to closed-ended
questions and their sentiments toward a topic. An effective way to elicit feedback on how
something can be improved is to ask an open-ended question such as, “If you could change one
thing about leader development in your organization, what would it be?” Asking for one idea
helps participants focus on the ideas most important to them. Those who have two or more
equally important ideas will likely share them all. Because open-ended questions require more
effort to answer, they should be used in moderation.

d. Good survey questions and response options.

(1) Good survey questions are at the heart of effective surveys. They accurately measure
participants’ opinions, experiences, reactions, behaviors, or actions. Developing good survey
questions, which is both science and art, requires attention to basic survey question design
principles.

(a) Good survey questions are clear, simple, and easy to answer. They use active voice,
short sentences, and simple and relevant terms. They avoid jargon and overuse of acronyms and
abbreviations, which can be confusing. Participants do not need to consult a dictionary or
conduct an internet search to complete the survey. Unclear and confusing questions can lead to
incomplete, inaccurate, or irrelevant data.

(b) Good survey questions are unbiased. Biased questions sway participants toward a
particular response and lead to inaccurate data. An example biased question is, “XYZ’s
classroom facilities are the best in the Army; would you agree?” An example of an unbiased way
to ask this question is, “How would you rate XYZ’s classroom facilities overall?”

(c) Good survey questions ask about one concept at a time. A question asking about more
than one concept at a time is a compound question. An example compound question is, “How
would you rate the maintenance of the classroom facility and the barracks?” This question asks
about two different facilities. Participants will be confused about which facility to consider in
their responses, and evaluators will not know how to categorize, analyze, and interpret the
resulting data. When participants do not know which part of a question to answer, results will be
misleading. Participants may skip compound questions altogether. A solution is simply to split
the compound question into two separate questions.

(d) Good survey questions are specific, particularly when it comes to issues of frequency
and meaning. An example non-specific question is: “Do you regularly review lesson plans?” In
this example, it is not clear what is meant by “regularly.” It is also not clear for what purpose the
lesson plans are reviewed. A more-specific way to ask this question is, “How frequently do you
review lesson plans for currency?”

(e) Good survey questions avoid negatives and double negatives, which are confusing
and frustrating. An example negative question is, “How uncommon is it for leaders in your
organization to demonstrate leadership principles?” An example double negative question is,
“How not uncommon is it for leaders in your organization to demonstrate leadership principles?”
A better way to ask both questions is, “How common is it for leaders in your organization to
demonstrate leadership principles?”

(f) Quality assurance evaluators should carefully consider assumptions when developing
good survey questions. For example, if the participant group for a survey is students, the
evaluator should examine the assumption that all students live in the barracks before writing a
question about how well the barracks are maintained. One way to ask the question, even if it
does not apply to the entire participant group, is to ask a preceding question about where
participants live, then branch those who live in the barracks to the question about the barracks.
Another way to ask the question is to include a response option such as, “I did not live in the
barracks,” then, using skip logic, branch those who selected this option past any additional
questions about the barracks.
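
Skip logic of this kind is normally configured within the survey system itself. The following
minimal sketch in Python is illustrative only; the question identifiers, response values, and rule
structure are hypothetical assumptions, not features of any particular survey system.

  # Illustrative sketch of survey skip logic (hypothetical question identifiers).
  # A branching rule routes participants past questions that do not apply to them.

  def next_question(current_question: str, response: str) -> str:
      """Return the identifier of the next question to display."""
      # Rule: only participants who live in the barracks see the barracks question.
      if current_question == "Q1_where_do_you_live":
          if response == "Barracks":
              return "Q2_barracks_maintenance"
          return "Q3_classroom_facilities"  # skip the barracks question
      # Default: end the branch (simplified for illustration).
      return "END"

  # A participant who lives off post skips the barracks question.
  print(next_question("Q1_where_do_you_live", "Off post"))  # Q3_classroom_facilities
  print(next_question("Q1_where_do_you_live", "Barracks"))  # Q2_barracks_maintenance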

(2) Good response options include all possibilities and are mutually exclusive; they do
not overlap. Questions eliciting sensitive information, such as some demographic questions (for
example, age, gender, education level), should provide an alternative answer; for example, “I
prefer not to answer.” When not sure if all response options are included, a good option to
include is “Other (please specify).” In some cases, it is a good idea to exclude a neutral or middle
point so that respondents provide a sentiment or opinion one direction or another. A neutral or
middle response option may provide participants a way to avoid taking a position, or a way to
simply answer questions as quickly as possible.


e. Survey layout. A good survey layout can be as important to effective survey results as
good survey questions. Developing a good survey layout requires attention to basic survey layout
guidelines.

(1) Surveys should be easy to complete. Their layout and presentation should be clean
and simple. Their questions should be clear and concise. Fonts should be consistent and easy to
read. Buttons and checkboxes should be easy to find and click. Text color should be dark, such
as dark blue, dark green, or black; with a plain, light-colored background for contrast. Images
should be used sparingly, if at all, so as not to distract participants from the survey questions.
Questions should be visible on both computer and mobile screens.

(2) Surveys should be balanced with closed- and open-ended questions, and questions
should be logically ordered, progressing from simple to more complex. For example, a survey
might start with basic closed-ended questions, such as demographic questions, which can be used
to branch participants to various parts of the survey as needed. Next would come the survey’s
core questions, which provide insights into the evaluation. Concluding the survey with one or
more open-ended questions is an effective way to collect in-depth feedback.

(3) Most survey questions should be optional to answer. Forcing participants to answer
questions can lead them to quit the survey or to provide irrelevant responses.

f. Survey length. Keeping a survey as short and to the point as possible increases the chances
of survey completion and of care in completing the survey. When considering survey length, it is
important to consider the audience. A student who is provided the opportunity and time to take a
survey in the classroom will be better able to complete a longer survey than a busy graduate or
leader in the field. Although students in the classroom may be provided the opportunity and time
to complete a survey, evaluators still need to consider the possibility of survey fatigue with
surveys that are too long. Survey fatigue can lead to incomplete, inaccurate, or irrelevant
responses. A survey sent to participants in the classroom should usually take no longer than 10 or
15 minutes to complete. A survey sent to participants in the field should usually take no longer
than five minutes to complete. Shorter surveys show participants that their time is valued.

g. Survey pilot testing. Quality assurance evaluators should pilot-test their surveys before
administering them to actual participants. Pilot testing involves asking colleagues and others to
complete the survey in advance and provide feedback on accessibility, layout, format, flow,
functionality, appropriateness of language, clarity of terms, adequacy of response options, and
reasonableness of survey length. It also involves refining the survey based on the feedback, and
testing it again as needed to ensure accurate measures and relevant responses. It is important to
remove all test data before administering the survey to actual participants.

h. Administering AQAP surveys.

(1) Survey administration involves inviting participants to complete a survey. The
method of inviting participants can impact survey responses and results. For example, inviting
participants via a survey system’s e-mail distribution function can result in higher response rates
as this method makes it easy to send automatic reminders to those invitees who have not yet
completed the survey. Having someone else share a survey link runs the risk of that person
sharing the link with only those whom they think will answer the survey a certain way. Posting a
survey link to a website runs the risk of individuals who are not part of the target population
completing the survey.

(2) Most surveys in quality assurance evaluation are administered online; however, they
may also be administered in person. Administering surveys in person may involve asking
participants at a forum to complete a printed questionnaire.

(3) Administered surveys are always developed and managed in accordance with AR 25-
98 and always display the approved survey control number (SCN).

i. Survey response rate.

(1) Survey response rate is the percentage of people invited to participate in a survey who
completed the survey. Higher response rates mean higher-quality data. Lower response rates can
skew the data and provide less-reliable results. Response rate does not apply to open access
surveys, such as a survey with a link on a website accessible to anyone; it only applies to surveys
with controlled access, such as a survey only accessible via invitation.

(2) Defining an average response rate is not simple because of the variables that make
each survey unique. For example, inviting participants working in the operational force to
complete a survey will typically yield lower response rates than inviting students sitting in a
classroom. A 100 percent response rate is not unusual with students sitting in a classroom;
however, response quality cannot be guaranteed with captive audiences. For surveys sent to
participants in the operational force, a ten percent response rate might be considered very good.
More important than response rate is the number of responses received, and whether or not that
number is a representative sample of the population.

(3) Ways to increase survey response rates are shown in figure 7-1.


Figure 7-1. Ways to increase survey response rates

j. Survey completion rate.

(1) Survey completion rate is the percentage of people who entered or started a survey
who completed the survey. Completion rate provides an indication of how easy the survey is to
take and where any completion roadblocks may exist. A completion roadblock is a common
point where people abandon the survey.
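
Both rates are simple percentages. The following minimal sketch, written in Python with made-up
numbers, illustrates how the response rate (para 7-2i) and completion rate calculations work; the
figures are illustrative only.

  # Illustrative calculation of survey response and completion rates (made-up numbers).

  invited = 400     # people invited to complete the survey (controlled access)
  started = 180     # people who entered or started the survey
  completed = 150   # people who completed the survey

  response_rate = completed / invited * 100     # percentage of invitees who completed
  completion_rate = completed / started * 100   # percentage of starters who completed

  print(f"Response rate: {response_rate:.1f}%")      # 37.5%
  print(f"Completion rate: {completion_rate:.1f}%")  # 83.3%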

(2) Ways to increase survey completion rates are shown in figure 7-2.

Figure 7-2. Ways to increase survey completion rates

7-3. Document reviews
Document review is a systematic method of collecting data by reviewing and evaluating existing
documents. Documents reviewed for quality assurance evaluation take a variety of forms,
including but not limited to course materials, audit trail documents, training waivers, visitor
folders, student records, instructor records, test control records, assignment orders, facility
maintenance logs, tables of distribution and allowance (TDA), SOPs, memorandums, self-
studies, meeting minutes, and MEPs. Quality assurance document reviews are conducted to
collect data as part of Army accreditation, proponent assessment, and internal and external
evaluation. Quality assurance document reviews are not conducted for programmatic or scholarly
research purposes.

a. Advantages and challenges of document reviews.

(1) Document reviews are useful for gathering background information about an
organization’s history and operations, answering basic evaluation questions, revealing
differences between course or program design and implementation, formulating interview and
focus group questions, and developing observation forms. They are also useful for
polyangulating and confirming whether or not assumptions are borne out in the documentation.
When data is readily available, document reviews are relatively quick, unobtrusive, and cost
effective.

(2) Some challenges to document reviews are that the documents may be outdated,
inaccurate, incomplete, disorganized, or difficult to access. Because it can be time consuming to
collect and analyze many documents, document reviews should be limited to only those
documents needed to answer the evaluation questions. Selecting a representative sample of
documents to review is usually warranted.

b. Planning and preparing for document reviews. Planning for document reviews involves
determining which types of documents will answer the evaluation questions and the sample of
documents needed. Preparing for document reviews involves creating a data collection job aid
that guides the data collection.

c. Conducting document reviews. Conducting document reviews involves accessing the
documents, compiling the documents, gleaning relevant data from the documents, and analyzing
the data. It may also involve speaking to the people involved in developing and/or managing the
documents.

7-4. Observations
Observation is a systematic method of collecting data about behaviors as they occur in their
natural settings. It is also a method of collecting data when information about the physical setting
is needed, such as determining if a classroom is conducive to learning, or if barracks facilities
adequately support students’ needs. Effective observations are structured, selective, and focused.
Observations often conducted for quality assurance evaluations include but are not limited to
observing training, training equipment, training facilities, barracks facilities, and test
administration. Quality assurance observations are conducted to collect data as part of Army
accreditation, proponent assessment, and internal and external evaluation. Quality assurance
observations are not conducted for programmatic or scholarly research purposes.

a. Advantages and challenges of observations.


(1) Observations are useful for collecting information about how organizations actually
operate. They are useful for polyangulating and confirming whether or not assumptions are borne
out in actual behaviors and conditions. They are also useful when an issue is suspected but not
understood. Quality assurance evaluators can gain a much better understanding of an issue by
observing the process: they can see what people actually do versus what they say they do.

(2) A challenge to observations is that they are susceptible to evaluator bias and selective
perception, which can make it difficult to accurately interpret and categorize behaviors.
Additionally, the act of observation itself can influence the behaviors of those being observed.

b. Planning and preparing for observations. Planning for observations involves focusing the
observations on the evaluation questions and the areas likely to generate the most useful
information and insights. It involves deciding how many sites to observe while considering time
and resource constraints. It also involves deciding when to conduct the observations. Timing is
critical for being able to observe the parts of the process needed to answer the evaluation
questions. Preparing for observations involves developing observation forms, which are job aids
listing the processes and/or the inventory of items to observe (see para 7-4c).

c. Observation forms. Observation forms are similar to questionnaires; however, instead of
participants providing responses, evaluators provide responses based on their observations.
Observation forms help ensure critical information is observed and captured, and they
standardize the observation process across evaluators. They also facilitate better aggregation of
data collected from multiple sites or by multiple evaluators. Observation forms may include
yes/no checklist items, leaving space for notes.

d. Conducting observations. Conducting an observation involves allowing sufficient time for
the observation; establishing rapport with those being observed; and observing the performance,
behaviors, and/or environment. It also involves taking notes as inconspicuously as possible. If it
is not possible to take notes during the observation, notes should be recorded immediately after
the observation. When feasible, using two observers during observations facilitates more-
comprehensive notes and helps mitigate individual bias.

7-5. Interviews
An interview is another way to conduct a survey; however, it is an oral method of questioning
participants through one-on-one discussion. Interviews can be conducted in person, over the
telephone, or through videoconferencing. In person is ideal. Over the telephone is the least
desirable method because the telephone makes it impossible to observe visual cues. Quality
assurance interviews are conducted to collect data from stakeholders as part of Army
accreditation, proponent assessment, and internal and external evaluation. Quality assurance
interviews are not conducted for programmatic or scholarly research purposes. Quality assurance
interviews are conducted in compliance with AR 25-98.

a. Advantages and challenges of interviews.

(1) Interviews can capture insightful data that surveys and other data collection methods
often cannot. They are useful for collecting descriptive and detailed information and exploring
specific issues or concerns. They allow varied questions, open-ended questions, follow-up
questions, and open dialogue, which can lead to more in-depth understanding. They allow
evaluators to learn about participants’ knowledge, experiences, perspectives, and attitudes
directly from the participants themselves. In addition to collecting descriptive and detailed oral
responses, evaluators are usually able to observe participants’ nonverbal responses, such as facial
expressions; gestures; eye contact; rate, volume, and tone of voice; and body language.

(2) Interviews also have some challenges. For example, they are time consuming and
generally reach fewer people than surveys or focus groups. They typically lack standardization
and are prone to interviewer bias and subjectivity. There is a greater chance that the interviewer
can influence or bias participants’ responses. Because interview data can be rich and complex,
they can also be challenging to analyze.

b. Planning and preparing for interviews. Planning for interviews involves considering the
purpose of the evaluation and determining the purpose of the interviews. It involves identifying
whom and how many to interview, and where and when the interviews should take place. It also
involves determining the type of interviews to conduct and the questions to ask. Preparing for
interviews involves developing an interview guide, which is a job aid listing the questions to ask
during the interview (see para 7-5e).

c. Types of interviews. There are four types of interviews: structured, semi-structured,
unstructured, and informal.

(1) Structured interviews are essentially verbal questionnaires. With structured
interviews, all questions are pre-determined, and the interviewer asks each participant the same
questions. Questions typically have a limited set of response options and contain few, if any,
open-ended questions. Structured interviews are rarely used in quality assurance evaluation:
surveys can be used for this purpose.

(2) Semi-structured interviews contain several key questions defining areas for
exploration but allow the interviewer to diverge from those questions or follow up on a response.
The interviewer uses an interview guide with a generally ordered list of questions and topics and
follows the guide; however, they also have the option to stray from the guide if needed to pursue
relevant areas. Semi-structured interviews are the most often used type of interview in quality
assurance evaluation.

(3) Unstructured interviews contain an opening question followed by open discussion.
The interviewer has a clear focus and goal for the interview that guides the discussion; however,
there is no structured interview guide. Unstructured interviews are most appropriate when an
evaluator has enough understanding of a topic but is open to revising that understanding. For
example, an unstructured interview might be a useful approach when following up with a
participant on a single issue identified elsewhere during the evaluation.

(4) Informal interviews are casual conversations with individuals in their work
environments. The interviewer asks questions without using an interview guide and without
taking immediate notes. They remember conversations and take notes about them as soon as
possible after each interview so that the information is not lost from memory. Informal
interviews can be conducted quickly, with little or no advance preparation, and are often conducted
hand-in-hand with observations.

d. Good interview questions. Developing good interview questions requires attention to
several basic design principles. For example, good interview questions always support answering
the evaluation questions. They are clear, simple, and asked one at a time – no compound
questions. They are open-ended, eliciting more than one-word responses. They are tailored to
different audiences based on each audience’s expertise and experience. They ask how rather than
why. They never ask for second-hand information, such as hearsay. They are never leading or
loaded questions that influence responses.

e. Interview guide. An interview guide is an important tool for conducting structured and
semi-structured interviews. It is a data collection job aid containing the questions to ask during
the interview, as well as possible follow-up questions that might be asked depending on
participants’ responses. When developing an interview guide, it is important to consider the
focus of the evaluation and how much time is available, and to include only those questions
likely to yield as much information related to the evaluation as possible. It is also important to
consider how much the evaluator already knows about the interview topic. An interview guide
starts with easy, warm-up questions before proceeding to more-complex questions. The last
question provides some closure and leaves the participant feeling listened to.

f. Interviewer skills. To be effective, interviews require skilled interviewers. Becoming a
skilled interviewer requires training and practice. It also requires possessing a set of important
supporting skills.

(1) Listening is one of the most important skills an interviewer can possess. Skilled
interviewers are engaged and listen attentively to what participants have to say. They clarify
participants’ meaning without imposing meaning. They also listen to what is not being said: they
are adept at reading body language and other non-verbal cues.

(2) Skilled interviewers are approachable, warm, and relaxed. They use open and neutral
body language, such as nodding, smiling, being interested, and making encouraging sounds and
gestures. They are empathetic and considerate, and they display a non-judgmental attitude.

(3) Skilled interviewers are good time managers. They manage interview time effectively
and respect participants’ time.

g. Conducting interviews.

(1) Interviews should be conducted in distraction-free areas, and at locations and times
most suitable for participants. Interviewers should respect participants’ time by arriving early
and starting on time. Interview length varies; however, interviews usually last 20 to 60 minutes. They
should last no longer than 60 minutes.


(2) When conducting an interview, the interviewer introduces themselves, establishes
rapport, thanks the participant for agreeing to the interview, and ensures the participant
understands the purpose of the evaluation and the interview. The interviewer tells the participant
how much time the interview will take and does not exceed that time. The interviewer also
provides assurance of confidentiality (see para 7-25a(2)), which increases the likelihood of
honest responses.

(3) The interviewer guides and controls the interview and recognizes when responses are
not on target. When a participant takes a conversation in a direction not useful to the evaluation,
the interviewer listens for an opportunity to segue or otherwise respectfully move the
conversation along in the right direction. The interviewer does not talk so much that the
participant does not talk, and they do not unnecessarily interrupt the participant. They also do not
answer for the participant. They are comfortable with silence and use it strategically to get
participants to think more about their responses or to talk more. They remember what was said
previously, and they are prepared to respectfully challenge inconsistent responses. Throughout
the interview, the interviewer appears natural and unrehearsed.

(4) The interviewer ends on time, thanks the participant for their time, and asks the
participant if there is anything they would like to add. This allows the participant to share
information they think is important to the evaluation but that the interviewer did not ask about.
The interviewer also lets the participant know how to obtain information about the subsequent
evaluation report, and how to contact the interviewer after the interview if needed.

(5) It is important to take notes during an interview, unless it is an informal interview (see
para 7-5c(4)); however, taking too many notes during an interview can shift focus and lessen
rapport. A solution to that dilemma is to have a note taker. If it is not possible to have a note
taker, then it is important to record detailed notes immediately after the interview before memory
of the interview fades.

Note. Interviews conducted for quality assurance evaluations should not be recorded
electronically. This includes when using virtual platforms to conduct the interview. Neither the
interviewer, note taker, nor participant should record the interview electronically.

(6) Even if the interviewer and/or note taker took notes during the interview, the
interviewer should reconstruct the interview immediately following the interview. This involves
sharing and comparing notes with any note taker and developing a short summary of the
interview to retain critical information. The summary should include participant demographics
and characteristics, first-hand experience versus hearsay, response themes and sub-themes,
trends, patterns, big ideas, descriptive words or phrases used, and the mood of the discussion.

7-6. Focus groups
The purpose of focus groups is to gain a range and depth of understanding of participants’
common experiences and perceptions through facilitated group discussion. They are similar to
interviews; however, there is more to focus groups than conducting multiple interviews at once.
Interviews depend primarily on the interviewer asking questions; focus groups depend more on
group interaction and the data that emerges. Quality assurance focus groups are conducted to
collect feedback from stakeholders as part of Army accreditation, proponent assessment, and
internal and external evaluation. Quality assurance focus groups are not conducted for
programmatic or scholarly research purposes. Quality assurance focus groups are conducted in
compliance with AR 25-98.

a. Advantages and challenges of focus groups.

(1) Focus groups are useful for gathering information on collective views and
understanding participants’ collective insights, attitudes, and experiences. They are useful for
exploring topics that are difficult to observe directly, identifying issues of most concern,
collecting a large amount of data in a short time, and examining a topic in-depth. They are
particularly useful for supplementing, corroborating, clarifying, or challenging data collected
using other methods.

(2) Focus groups have some challenges. For example, it can be challenging to schedule a
meeting time for multiple people. A few individuals in a focus group can dominate or sidetrack
the discussion, which can require expert facilitation to remedy. If the right participants are not
present for the discussion, the data is likely to be unhelpful or worthless. Focus groups generate a
lot of qualitative data at once and can be difficult to record. It can also be difficult to analyze
focus group data because of the focus group’s interactive nature. Focus groups are susceptible to
facilitator bias, and results can be subjective. Perspectives shared in focus groups are only valid
for the participants and cannot be generalized across the institution or program unless the focus
group is composed of all members of the target population. Focus groups are not appropriate if
statistical data is needed, if harm could come to participants sharing their ideas, or if a topic is
polarizing.

b. Planning and preparing for focus groups. Planning for focus groups involves considering
the purpose of the evaluation and determining the purpose of the focus groups. It involves
identifying whom and how many to invite to participate in the focus groups, and where and when
the focus groups should take place. It also involves determining the questions to ask. Preparing
for focus groups involves developing a focus group guide, which is a job aid listing the questions
to ask during the focus group (see para 7-6g).

c. Focus group size and composition. Focus groups are typically composed of no fewer than
three and no more than twelve participants, one facilitator, and one or two note takers. When
possible, having two note takers allows each to focus on different aspects of the focus group,
such as what was said, how it was said, nonverbal behaviors, themes, and group dynamics.
Facilitators should not take notes during the discussion.

d. Participant characteristics. Participants should be a representative sample of the population
of interest, and they should be similar in characteristics and roles. Focus groups should be
avoided if participants would be uneasy with one another and would not be willing to openly
discuss their opinions and feelings. For example, supervisors should not be present with their
subordinates in focus groups. If information is needed from both supervisors and subordinates, it
should be gathered separately. Focus group participants may be nominated by key individuals,
selected randomly if coming from a large defined target population, or, if the population is not
too large, may consist of all members of the target population.

e. Facilitator skills. To be effective, focus groups require skilled facilitators. Becoming a
skilled facilitator requires training and practice. It also requires possessing a set of important
supporting skills.

(1) As with skilled interviewers (see para 7-5f), effective facilitators are active listeners
and good time managers. They are approachable, warm, relaxed, and open. They are empathetic
and considerate, and they display a non-judgmental attitude.

(2) Skilled facilitators are also good conversationalists. They have knowledge of the topic
and the local culture. They can build rapport, create relaxed atmospheres, and engage all
participants.

(3) Skilled facilitators are courteous, treating participants like the experts that they are.
They make it clear that they are there to learn from the participants.

(4) Skilled facilitators can manage challenging group dynamics. They can seamlessly
control dominant participants, encourage hesitant participants, and react appropriately to
unexpected circumstances. They deal with all challenges tactfully and employ appropriate
strategies. They are alert, observant, and can think quickly on their feet.

f. Good focus group questions. Developing good focus group questions requires attention to
several basic design principles. For example, good focus group questions always support
answering the evaluation questions. They are easy to understand and asked one at a time. To
facilitate discussion, they are always open-ended. Good focus group questions should either
engage, explore, or close. Questions that engage are warm-up questions that get participants
talking. Questions that explore get to the heart of the discussion. Questions that close provide
participants the opportunity to address anything they thought may have been missed.

g. Focus group guide. A focus group guide is an important tool for conducting effective focus
groups. It is a data collection job aid containing the questions to ask during the focus group, as
well as possible follow-up questions that might be asked depending on participants’ responses.
When developing a focus group guide, it is important to consider the focus of the evaluation and
how much time is available, and to include only those questions likely to yield as much
information related to the evaluation as possible. A focus group guide starts with an engaging,
warm-up question before proceeding to more-complex exploring questions and ending with a
closing question. The number of questions should be aligned with the time available for the focus
group.

h. Conducting a focus group.

(1) A focus group can be conducted in person, or through teleconferencing or
videoconferencing. In person is ideal. Teleconferencing is the least desirable method because the
telephone makes it impossible to observe visual cues.


(2) A focus group should be conducted at a location and time most suitable for
participants. The location should be easily accessible, private, and free from disturbances and
distractions. Participants should sit facing one another, preferably around a table. This seating
arrangement enables the focus group to be free flowing. Participants interact not only with the
facilitator but with each other. The note taker and any observers from the evaluation team do not
interact with participants during the session and should therefore sit apart from the discussion.

(3) The length of focus groups varies; however, focus groups usually last 45 to 90
minutes. They should last no longer than 90 minutes. Focus groups longer than 90 minutes have
too many questions or topics and are not usually productive. Facilitators should respect
participants’ time by arriving early and starting on time.

(4) When conducting a focus group, the facilitator introduces themselves, the note taker,
and any observers from the evaluation team. They make it clear that the note taker is just there to
observe and take notes, and the observer is just there to observe. They establish rapport, thank
participants for participating in the focus group, and ensure participants understand the purpose
of the evaluation and the focus group. They tell participants how much time the focus group will
take, and do not exceed that time. They provide assurance of confidentiality (see para 7-25a(2)),
which increases the likelihood of honest responses. They also ask participants to introduce
themselves.

Note. When providing assurance of confidentiality, participants should understand that the
facilitator, note taker, and any observer from the evaluation team will respect participants’
privacy and incorporate their feedback into the evaluation results in a non-attributional way;
however, they cannot control focus group members and what they might say or share outside of
the focus group. It is important to emphasize a ground rule for all participants to respect each
other’s privacy and not discuss information shared during the session with anyone outside the
group.

(5) The facilitator establishes ground rules, which support a safe and open environment.
Example ground rules are shown in figure 7-3.

Figure 7-3. Example focus group ground rules


(6) The facilitator opens the focus group session with an easy, general-topic, open-ended
question to encourage participation and conversation. The facilitator then asks the topic
questions one at a time. To demonstrate active listening and clarify responses, the facilitator
paraphrases and summarizes long responses back to participants. The facilitator asks probing
questions for clarification as needed. Example probing questions include, “Can you say more
about…?” and “Can you explain what you mean by…?”

(7) The facilitator guides and controls the session and respectfully enforces ground rules.
They do not allow one or two people to dominate the discussion. If someone is monopolizing the
conversation, the facilitator can thank the participant for their contribution then ask others what
they think. They ensure maximum participation by asking balancing questions as needed.
Example balancing questions include, “Who else has something to say about that?” and “I would
like to hear more from…” If someone seems too shy to speak up, the facilitator can make eye
contact, smile at them, and ask them a question directly. The facilitator does not allow side
conversations to distract from the discussion. If someone is engaged in a side conversation, the
facilitator can remind them that their contributions are valuable and politely ask them to re-join
the main discussion. If someone is rambling, the facilitator can break eye contact, and/or jump in
during a small pause. If the discussion veers off track, the facilitator can redirect the discussion.
An example redirecting statement is, “These points are interesting and important; however, we
need to bring the discussion back to…”

(8) The facilitator guides the discussion without participating in the discussion. They
clarify participants’ meaning without imposing meaning. They do not express their own views,
either verbally or non-verbally, and they do not take sides. They do not finish participants’
sentences, and they do not challenge what participants say. They always appear neutral, to
include controlling their facial expressions and body language.

(9) The note taker takes effective notes during the session. Effective notes are clear,
consistent, organized, and legible to others. They capture key points, themes, notable quotes,
emotions, tones, body language, and other non-verbal factors, such as eye contact between
participants.

Note. Focus groups conducted for quality assurance evaluations should not be recorded
electronically. This includes when using virtual platforms to conduct the focus group. Neither the
facilitator, note taker, nor participants should record the focus group electronically.

(10) The facilitator ends on time and asks participants if there is anything they would
like to add. This allows participants to share information they think is important to the evaluation
but that the facilitator did not ask about. The facilitator thanks participants for their time and lets them
they may ask follow-up questions at the end of the session as needed to close any gaps in their
notes. The facilitator thanks participants for their time and lets them know how to obtain
information about the subsequent evaluation report, and how to contact the facilitator after the
session if needed. Providing facilitator contact information is especially important when there
may be a sensitive issue at hand.


(11) Right after the focus group, the note taker shares their notes with the facilitator. The
facilitator and note taker discuss their observations and evaluate how well the information
collected during the focus group helped answer the evaluation questions.

Section II
Data Analysis
This section introduces data analysis concepts as they apply to quality assurance evaluation. Data
analysis is the systematic process of transforming raw data into usable information. The process
involves organizing, cleaning, analyzing, and interpreting data. The purpose of data analysis is to
reveal patterns and trends in the data and relationships among variables, and to find meaning in
the data for drawing conclusions and making informed decisions. The data analysis method and
technique used will depend on the type of data collected: quantitative or qualitative.

7-7. Types of data
Two overarching types of data are quantitative and qualitative. Quality assurance evaluations
typically examine both types of data.

a. Quantitative data are numerical and can be statistically analyzed. Examples of quantitative
data include scores, quantities, frequencies, time, and responses to closed-ended survey
questions.

b. Qualitative data are non-numerical and descriptive. They can be analyzed using a variety
of methods such as content or narrative analysis. Qualitative data provide context and meaning
and can reveal a part of a story that quantitative data cannot, such as how or why something has
occurred. Qualitative data either cannot be converted to numerical data or are considered more
valuable in qualitative form. Examples of qualitative data include interview, focus group, and
observation notes; document content; and responses to open-ended survey questions.

7-8. Organizing data

a. The first step in data analysis is organizing the data. This is an important step because
organized data allows all members of the evaluation team to easily locate and use the data.
Organized data saves time and frustration, and it helps avoid duplication and prevent errors. How
data is organized should be determined during evaluation planning before any data is collected.

b. Organizing data involves determining a logical and consistent way to name and structure
folders and files. A logical and hierarchical folder structure groups data files and supporting
documentation together by topic, data collection activity, or some other relevant category so that
the data is connected in a meaningful way and easy to find. Using a logical and documented
naming convention for folders and files provides consistency. Useful folder and file names are
descriptive, clear, consistent, unique, and meaningful to everyone on the evaluation team. They
allow everyone to easily find the file they need. They use standard vocabulary and format, and
elements are listed in the same order, for example, “org_file-name_v01-2_YYYYMMDD.”
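
Where a quality assurance office chooses to enforce its naming convention programmatically, a
short script can generate or check names that follow the documented pattern. The following
minimal sketch in Python assumes the example pattern shown above; the organization and file
names are hypothetical.

  # Illustrative sketch: build and validate file names following the example
  # convention "org_file-name_v01-2_YYYYMMDD" (hypothetical organization and names).
  import re
  from datetime import date

  def build_file_name(org: str, name: str, major: int, minor: int, when: date) -> str:
      """Assemble a file name from its standard elements, in the documented order."""
      return f"{org}_{name}_v{major:02d}-{minor}_{when:%Y%m%d}"

  def is_valid_file_name(file_name: str) -> bool:
      """Check that a file name matches the documented pattern."""
      return re.fullmatch(r"[a-z0-9-]+_[a-z0-9-]+_v\d{2}-\d+_\d{8}", file_name) is not None

  example = build_file_name("qao", "survey-results", 1, 2, date(2024, 3, 29))
  print(example)                      # qao_survey-results_v01-2_20240329
  print(is_valid_file_name(example))  # True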


7-9. Storing and protecting data
Properly storing and protecting evaluation data, to include observation, interview, and focus
group notes, is essential for protecting participants and other stakeholders from harm that could
result from unauthorized or unintended disclosure. Evaluation data should be appropriately
stored and protected from unauthorized access without imposing excessive or unwarranted
burden on the evaluation team. All quality assurance evaluators have a responsibility to
adequately store and protect evaluation data. Evaluation team leads and evaluators working alone
have a responsibility to develop and implement an adequate data storage and protection plan for
all accreditations, assessments, evaluations, and other events involving evaluation data
collection.

7-10. Cleaning data
Cleaning data is the process of identifying and purging or changing incomplete,
incomprehensible, irrelevant, inconsistent, inaccurate, duplicative, and/or erroneous data from a
dataset to improve data quality so that it is ready for analysis. Dirty data adds no value, confuses
results, limits the ability to capture valuable insights, and can have significant ramifications for
any decisions made based on the data. When purging or changing data, it is important to always
retain a copy of the original, unedited dataset. Cleaning data also involves documenting all data
cleaning rules and procedures so that they are applied consistently.

a. Cleaning quantitative data.

(1) Many types of quantitative data are collected as part of quality assurance evaluations
through data collection activities such as closed-ended survey questions, observations, and
document reviews. Quantitative data is cleaned prior to data analysis.

(2) Although software tools exist to clean quantitative data, they can also be cleaned
using manual inspection methods. Manual inspection normally involves exporting or transferring
the data to a spreadsheet to be able to view and analyze the data all together. This facilitates
more easily identifying potential issues, patterns, and outliers.

(3) Potential issues to look for when cleaning all quantitative data include inaccurate data,
data-entry errors, and missing data.

(4) Patterns to look for when cleaning quantitative survey data include:

(a) Straight-line responses, such as always selecting the first, central, or last response
option.

(b) Visually patterned responses, such as Christmas tree or zig-zag response patterns.

(c) Selecting all response options in all select-many questions.

(5) Possible outliers to look for in quantitative survey data include responses that are
unintelligible, incoherent, meaningless, not aligned with the question, and/or contradictory to
earlier responses. Other possible outliers are responses from “speeders,” who finish the survey
far faster than the average completion time. Speeders are less likely to have read and considered
the questions, and/or to have spent any time reflecting on their responses.

(6) Quality assurance evaluators develop decision rules so that all data issues are handled
consistently, and they always use caution when applying the decision rules. If changing or
purging data, they retain a copy of the original, unedited dataset.

(7) Cleaning quantitative survey data, particularly using manual inspection methods, can
be time consuming, so the effort should be balanced against the need. Solid survey design (see para 7-2) can improve
data quality and reduce the need for data cleaning.
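
Although spreadsheet software and the AQAP-approved survey system of record are the normal
tools for this work, the following minimal sketch in Python illustrates how the pattern and outlier
checks in paragraphs (4) and (5) and a decision rule like those in paragraph (6) might be applied.
The response records, field names, and threshold are hypothetical assumptions.

  # Illustrative sketch: flag straight-line responses and "speeders" in survey data.
  # Sample records, field names, and the speeder threshold are hypothetical.
  from statistics import mean

  responses = [
      {"id": "R1", "ratings": [4, 5, 4, 3, 5], "minutes": 9.0},
      {"id": "R2", "ratings": [3, 3, 3, 3, 3], "minutes": 8.5},  # straight-line pattern
      {"id": "R3", "ratings": [5, 4, 2, 4, 3], "minutes": 1.2},  # possible speeder
  ]

  SPEEDER_FRACTION = 0.5  # decision rule: flag times below half the average completion time
  average_minutes = mean(r["minutes"] for r in responses)

  for r in responses:
      flags = []
      if len(set(r["ratings"])) == 1:                        # same option selected every time
          flags.append("straight-line responses")
      if r["minutes"] < SPEEDER_FRACTION * average_minutes:  # far faster than average
          flags.append("speeder")
      if flags:
          # Flagged records are reviewed by an evaluator; the original dataset is retained.
          print(f'{r["id"]}: review for {", ".join(flags)}')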

b. Cleaning qualitative data.

(1) Many types of qualitative data are collected as part of quality assurance evaluations
through data collection activities such as interviews, focus groups, observations, and open-ended
survey questions. Qualitative data is cleaned in conjunction with data analysis.

(2) Although software tools exist to clean qualitative data, they can also be cleaned using
manual inspection methods. Manual inspection normally involves exporting or transferring the
data to a spreadsheet to be able to view and analyze the data all together. This facilitates more
easily identifying potential issues.

(3) Potential issues to look for when cleaning qualitative data include missing data,
irrelevant data, and data entry errors. Other potential issues to look for include misspellings,
which can interfere with automated search and sort functions, and misplaced data, such as
narrative survey responses provided in the wrong field.

(4) Quality assurance evaluators develop decision rules so that all data issues are handled
consistently, and they always use caution when applying the decision rules. When possible, they
go back to the original data source to gain additional clarification. If changing or purging data,
they retain a copy of the original, unedited dataset.

7-11. Quantitative data analysis

a. Analyzing quantitative data in quality assurance evaluations normally involves using
descriptive statistics, which describe a sample by summarizing and graphing the data. Examples
of descriptive statistics include distribution, central tendency, and variability.

(1) Distribution is the frequency that a value appears in the data. For example, a survey
question might ask respondents if they have an approved Individual Development Plan in the
Army Career Tracker. The number or percentage of respondents answering “yes” and the
number or percentage of respondents answering “no” represent the distribution.

(2) Central tendency estimates the center or average of values. For example, a survey
question might ask respondents to rate the overall perceived quality of the training they received
on a scale of one to five. The mean of all respondents’ ratings represents the response average.
Mean can be sensitive to extreme values, especially when outliers exist; therefore, when
examining mean, median and mode should also be considered. Median is the middle value in a
list of data, and mode is the most repeated value in a list of data.

(3) Variability is how spread out values are. Measures of variability include range,
standard deviation, and variance.

(a) Range is how far apart the most extreme values are, and it is calculated by subtracting
the lowest value from the highest value. Range can be sensitive to extreme values, especially
when outliers exist; therefore, when examining range, standard deviation and variance should
also be considered.

(b) Standard deviation measures how far values typically fall from the mean; it is the
square root of the variance.

(c) Variance reflects the degree of spread in the data, and it is calculated as the average of
the squared deviations of the values from the mean.
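
Spreadsheet software or the survey system of record would normally produce these measures; the
following minimal sketch simply illustrates the descriptive statistics described above using
Python’s standard library and a made-up set of ratings.

  # Illustrative sketch: descriptive statistics for a made-up set of 1-to-5 ratings.
  from collections import Counter
  from statistics import mean, median, mode, pstdev, pvariance

  ratings = [4, 5, 3, 4, 4, 2, 5, 4, 3, 5]

  print("Distribution:", Counter(ratings))            # frequency of each value
  print("Mean:", mean(ratings))                       # central tendency
  print("Median:", median(ratings))
  print("Mode:", mode(ratings))
  print("Range:", max(ratings) - min(ratings))        # variability
  print("Variance:", pvariance(ratings))              # average squared deviation from the mean
  print("Std deviation:", round(pstdev(ratings), 2))  # square root of the variance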

b. Descriptive statistics can be analyzed one variable at a time, known as univariate
descriptive analysis; two variables at a time, known as bivariate descriptive analysis; or more
than two variables at a time, known as multivariate descriptive analysis. Bivariate and
multivariate descriptive analysis, using tools such as contingency tables for nominal and ordinal
variables and scatter plots for numerical variables, facilitate exploring relationships, or
correlations, between the variables.

c. Inferential statistics are used to make estimates about a population based on a randomly
drawn sample. The most common inferential methods are hypothesis testing, confidence
intervals, and regression analysis. Inferential statistics are more advanced statistical techniques
and are not often used in quality assurance evaluations.

d. Readily available tools for conducting statistical analysis include spreadsheet software and
the AQAP-approved survey system of record, both of which have built-in data analysis and
graphing functionality. Contact HQ TRADOC QAO with questions about which statistical
analysis tools are appropriate and authorized for use.

e. If using unfamiliar statistical methods and/or analysis tools, quality assurance evaluators
should solicit the assistance of a statistician or more experienced evaluator.

7-12. Qualitative data analysis

a. Qualitative data analysis focuses on making sense of unstructured data, such as interview,
focus group, and observation notes; and responses to open-ended survey questions. Evaluators
should analyze qualitative data as soon as possible after collecting it.

b. The first step in analyzing qualitative data is to simplify or reduce the data. For example,
focus group notes may contain data relevant to answering the evaluation questions interspersed
with data that is not. Evaluators determine which elements of the data should be emphasized,
minimized, or omitted in order to keep the analysis focused on the evaluation questions. When
conducting data analysis, it is important to consider how useful data points are: high volume does
not guarantee usefulness.

c. The next step is to organize the simplified data into categories, identify patterns, themes,
and trends in the data, and examine relationships between themes. Included in this step is
identifying powerful quotes that best illustrate themes and trends.

d. If there are a lot of data, it can be helpful to code the data before grouping them into
categories. Evaluators can develop and maintain a set of codes based on expected and/or
emerging patterns, themes, and trends.
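
Coding is normally done by evaluators reading the notes, with spreadsheet software used to sort
and count the coded entries. The following minimal sketch in Python illustrates the counting step;
the codes and note excerpts are hypothetical examples only.

  # Illustrative sketch: tally qualitative codes assigned to note excerpts.
  from collections import Counter

  coded_notes = [
      ("Instructors answered questions thoroughly", "instructor quality"),
      ("Projector failed twice during the week", "classroom facilities"),
      ("More hands-on practice would help", "practical exercises"),
      ("Instructors shared relevant field experience", "instructor quality"),
      ("Not enough time for practical exercises", "practical exercises"),
  ]

  theme_counts = Counter(code for _, code in coded_notes)

  # Frequency of each theme, most common first, to support identifying trends.
  for theme, count in theme_counts.most_common():
      print(f"{theme}: {count}")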

e. When analyzing qualitative data, particularly interview and focus group data, it is
important to consider the words used, the context in which they were used, shifts in opinion, the
extensiveness of topics discussed, the frequency of comments, how intensely participants talked
about something (for example, tone of voice, speed, excitement, word emphasis), and special or
unusual sounds (for example, loud voices, laughter, shouting, interrupting). Greater emphasis
should always be placed on first-person versus third-person responses.

f. A readily available tool for analyzing large volumes of qualitative data is spreadsheet
software. Evaluators can use such software to compile then separate, code, sort, group, analyze,
and synthesize the data. Evaluators can also analyze qualitative data using manual methods. For
example, they can consolidate qualitative notes into a document, print a copy of the document,
cut note entries into separate items, and tape, paste, or tack the separate items to butcher block
paper with broad theme headings. As the analysis progresses, they can rename the themes and/or
recategorize the items as needed.
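
As an illustration only, the following sketch tallies how often each code appears in a set of
coded notes using the Python programming language; the codes and note entries are notional, and
spreadsheet sorting and filtering accomplish the same grouping.

from collections import Counter

coded_notes = [
    ("instructor preparation", "Students praised the instructor's preparation."),
    ("facilities", "Classroom projector was inoperable during two lessons."),
    ("instructor preparation", "Instructor tailored examples to the unit's mission."),
    ("facilities", "HVAC outage shortened one practical exercise."),
]
theme_counts = Counter(code for code, _ in coded_notes)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} entries")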

7-13. Interpreting results

a. Interpreting results is the process of applying value judgments to analyzed data in
accordance with the evaluation criteria and drawing conclusions. Results may suggest
recommendations for improvement. They may also lead to additional questions about the
learning institution or program.

b. Quantitative data is objective data, and, once effectively organized, cleaned, and analyzed,
is open to objective interpretation. Qualitative data is subjective data, and even once effectively
organized, cleaned, and analyzed, is open to subjective interpretation. It is important with both
types of data, but especially with qualitative data, for evaluators to carefully examine and
consider their biases. They should also frequently re-visit the evaluation’s purpose and goals.

c. Interpreting results and reaching conclusions can be challenging. For example, information
can sometimes be inconsistent or contradictory. It is important in these cases to consider the
validity and unique perspectives of each source, and to resolve discrepancies to the greatest
extent possible.

7-14. Quality assurance evaluation rubrics

a. A quality assurance evaluation rubric is an evaluation tool that provides a common
framework for evaluation. It provides performance criteria and describes standards for different
levels of performance of each criterion. It guides judgment on performance and facilitates
efficient analysis, synthesis, and evaluation.

b. The primary difference between a rubric and a checklist for evaluation is that a rubric
provides information about the level or degree of performance; whereas a checklist provides
information about whether or not criteria are met, but not about the level or degree to which they
are met. A rubric is multilevel and makes evaluative judgment systematic, explicit, and
transparent; a checklist is dichotomous and makes evaluative judgment simplistic and obscure.

c. Evaluation rubrics clarify for stakeholders the quality of work they should achieve. They
should always be shared with stakeholders ahead of time so that they may understand what the
desired performance is and the criteria for success, and so that they may plan and monitor their
own performance.

d. To produce accurate and consistent ratings, all evaluators need to interpret the rubric in the
same way. Evaluators should have a shared understanding of the rubrics’ performance
expectations so that they interpret and apply the rubrics consistently. This can be accomplished
through training and norming the rubrics across each QAO and across the AQAP.
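
For illustration only, the following sketch represents a simple rubric as a data structure so
that ratings can be applied and reported consistently; the criterion and level descriptions are
notional and are not actual AEAS criteria.

rubric = {
    "criterion": "Lesson plans are current",
    "levels": {
        3: "All lesson plans reviewed were current and validated",
        2: "Most lesson plans were current, with minor gaps documented",
        1: "Lesson plans were outdated or missing",
    },
}

def describe(score: int) -> str:
    """Return the performance description for a given rubric score."""
    return rubric["levels"].get(score, "Score not defined in rubric")

print(describe(2))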

7-15. Root cause analysis

a. Root cause analysis is the systematic process of uncovering the underlying causes of
problems and identifying and applying appropriate solutions. It seeks to discover why something
occurred and how processes or systems failed.

b. If root causes are not identified, solutions target and treat symptoms instead. Treating
symptoms can feel productive and may provide short-term relief; however, if root causes are not
identified and treated, the same problem is likely to occur over and over. Conducting root cause
analysis and applying the right solutions to core processes and systems helps prevent problems
from recurring.

c. It is important to keep in mind that most problems have multiple root causes, and finding
and addressing just one of several or many root causes is likely to eventually result in recurrence
of the problem. Therefore, it is important to identify as many root causes as possible.

d. Root cause analysis is best conducted with a team to ensure a variety of expertise and
perspectives. Team members can include evaluators, SMEs, managers, decision makers, and end
users. Prior to beginning root cause analysis, everyone on the team should agree on the problem.
It can help to visualize the problem by drawing it out and illustrating data and facts related to the
problem, to include past solutions.

e. A common technique for root cause analysis is asking questions that systematically drill
down to the real root causes. The key question to ask is, “Why?” The “what,” “where,” “how,”
and “who” questions are important as they provide context and greater understanding of the
problem; however, it is the “why” questions that ultimately reveal root causes.

f. It may only take two “why” questions to get to a root cause, or it may take fifty. The
answers to “why” questions become clearer and more concise each time a question is asked. The
last “why” question reveals an answer that has a solution(s) that can solve or control the original
problem. This method can work in almost any situation.
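
As an illustration only, the following sketch records a notional "why" drill-down so the chain
from symptom to candidate root cause is documented for the evaluation record; the scenario is
invented for this example.

why_chain = [
    "Problem: students failed the land navigation examination",
    "Why? Practical exercises were cut from the training schedule",
    "Why? Instructor shortfalls forced repeated schedule changes",
    "Why? Several instructor certifications had lapsed",
    "Why? The certification tracking procedure was outdated",  # candidate root cause
]
for step in why_chain:
    print(step)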

g. Very often, root cause analysis leads to a policy, process, or person. When a policy or
process becomes the answer to a “why” question, there is a likelihood that a root cause has been
reached. When the answer to a “why” question is a person who made a mistake, the root cause
has not been reached. It is important to drill down beyond that and ask, “Why did that person
make a mistake?” Maybe the procedure was confusing, the person was not familiar with the task,
or the climate was poor.

h. Root causes are specific underlying issues for which effective recommendations can be
made and that management has control to influence and fix. For example, “Inclement weather” is
not a problem that management has control to influence and fix; however, “insufficient weather
gear” is. “Operator error” is not a specific root cause; however, “outdated procedure” is. Without
knowing the true root cause of “operator error,” a commonly recommended solution is training;
however, training is not the right solution if the true root cause is actually “outdated procedure.”

i. It is not always possible to discover the root cause of a problem during a quality assurance
evaluation; however, every feasible effort should be made to get there. If a finding is likely a
symptom and not a root cause, and time is not available to conduct a full root cause analysis, this
should be made clear in the report. Recommendations should include that the issue be further
examined through root cause analysis.

j. Root cause analysis is not just effective for diagnosing problems and discovering where
something went wrong; it is also effective for diagnosing successes and discovering where
something went right. Diagnosing success can help spread success.

Section III
Communicating Evaluation Results
This section provides an overview of two methods of communicating quality assurance
evaluation results: briefing evaluation results and writing evaluation reports.

7-16. Briefing evaluation results


Effectively briefing evaluation results is about more than just developing an effective out-brief
presentation (see app I). It is also about effectively preparing for and conducting the briefing and
exercising certain skills and characteristics.

a. Effectively preparing for evaluation briefings includes:

(1) Understanding the briefing requirements.

(2) Knowing the audience, their official positions, and their current knowledge of the
topic.

(3) Knowing the names of key leaders and stakeholders and how to pronounce them.

(4) Knowing the topic and how it is relevant to the audience.

(5) Developing briefing material.

(6) Organizing the briefing material into clear sections.

(7) Preparing an outline and developing transitions.

(8) Rehearsing to ensure effective and confident delivery.

(9) When briefing virtually, ensuring reliable connectivity with the audience.

b. Effectively conducting evaluation briefings includes:

(1) Introducing oneself and the team.

(2) Explaining the purpose of the briefing.

(3) Providing a briefing outline or agenda.

(4) Presenting the information clearly and concisely.

(5) Providing logical flow with the bottom line up front.

(6) Using visual aids effectively to make the information clearer.

(7) Making recommendations as appropriate.

(8) Summarizing the main points and asking for questions.

(9) Respecting the audience’s time by remaining within the allotted time frame.

(10) Closing and thanking the audience for their time.

c. Effective briefing skills are shown in figure 7-4.

Figure 7-4. Effective briefing skills

d. The characteristics of an effective briefer are shown in figure 7-5.

Figure 7-5. Characteristics of an effective briefer

7-17. Writing evaluation reports


Evaluation reports share an evaluation’s findings, conclusions, and recommendations for
improvement with senior leaders and other stakeholders. Writing effective evaluation reports is
critical for clear communication and understanding, influencing decisions, and inspiring readers
to action.

a. Key elements of effective evaluation reports. Effective evaluation reports have all required
parts and comply with the basic principles of good writing. Good writing includes being
grammatically correct and using plain language (see the DOD Plain Language website at
https://www.esd.whs.mil/DD/plainlanguage/ for more information on plain language). Other key
elements of effective evaluation reports include the following:

(1) Written for readers. Effective evaluation reports are written for the readers of each
part of the report: the Executive Summary (EXSUM), the AEAS summaries, and the detailed
rubrics. Different stakeholders at various levels have different information needs and will likely
be most interested in different parts of the report.

(2) Timely and actionable. Effective evaluation reports are timely and actionable. They
are delivered promptly and expeditiously. A report that arrives late, after conditions have
changed, is of little use.

(3) Bottom line up front. Effective evaluation reports capture readers’ attention by getting
quickly to the point, presenting the main message within the report’s first sentence or two. Each
element within the report also begins with that element’s main message.

(4) Well-organized. Effective evaluation reports hold readers’ attention by being well-
organized and logically ordered. They are organized for the reader and focus on essential
information. The report is easy to read and follow.

(5) Clear. Effective evaluation reports are clearly phrased, unambiguous, and easily
understood. Readers can understand the report’s ideas in a single reading.

(6) Consistent. Effective evaluation reports use terms consistently. For example, a report
might use the term “corrective action plan” consistently to mean a plan for resolving
shortcomings and deficiencies. An example of inconsistent terminology is if a report uses the
terms “corrective action plan,” “improvement plan,” and “get well plan” to mean the same thing
throughout the report. Using inconsistent terminology in a report confuses readers.

(7) Concise. Effective evaluation reports are concise while still including all essential
information. Sentences and words are direct and uncomplicated without being abrupt. Reports
that are too lengthy or wordy typically do not get read.

(8) Accurate and relevant. Effective evaluation reports are accurate and include all
relevant facts.

(9) Complete. Effective evaluation reports are complete, include enough detail to fully
explain observations and recommendations, and leave no pertinent questions unanswered.

(10) Unbiased. Effective evaluation reports are unbiased, containing only information
that is factual and objective, and without distortion, assumption, or influence. They separate fact
from opinion. Unbiased reporting is key to evaluation integrity.

(11) Active voice. Effective evaluation reports use active voice in most situations. Active
voice is simple and direct, whereas passive voice can be difficult to understand and challenging
to read. With active voice, the subject of a sentence performs the verb’s action, and emphasis is
on the actor; for example, “SGT Jones facilitated the lesson.” With passive voice, the subject of a
sentence receives the verb’s action, and emphasis is on the recipient; for example, “The lesson
was facilitated by SGT Jones.” Occasionally passive voice is appropriate, such as when the actor
is unknown or irrelevant; for example, “Students were paid on time.”

(12) Tense. Effective evaluation reports use past tense to report evaluation findings and
results as they existed at the time the data was collected and analyzed, which occurred at some
specific, definite time in the past. Evaluation findings and results are relevant to a particular
snapshot in time and should not be accepted as present truth. Reporting findings and results in
past tense leaves no room for misinterpretation.

(13) Recommendations. Effective evaluation reports include good recommendations,
which increase the likelihood that issues will be resolved. Good recommendations are specific,
supported by evidence, within the reader’s authority, realistic, and actionable. They tell readers
what assistance or additional information is available and where they can obtain it. Because the
scope of quality assurance evaluations does not often allow time and other resources needed to
analyze and reveal true root causes of identified issues (see para 7-15), in many cases a
recommendation may be that the issue be further examined through root cause analysis.

(14) Writing style and format. Effective evaluation reports use effective writing style and
formatting. For information on writing style and formatting, see the U.S. Government Publishing
Office Style Manual, AR 25-50, and DA PAM 25-40 (see app A for related publications).

b. Approaches to writing evaluation reports. There are many viable approaches to writing
evaluation reports. What follows is just one possible approach.

(1) Start with an outline. Outlining provides structure and a roadmap, helps reduce
writer’s block, and makes writing the report much faster and easier. It also helps keep the report
focused on the evaluation’s purpose.

(2) Just write. For the first draft, it helps to just write and let words flow. This draft is all
about simply getting the information down in writing: it is not about quality at this point. Things
to write about in this first draft include background information, positive and negative findings,
evidence and justification for those findings, conclusions, and recommendations. Although it
may be tempting, it is important to avoid copying and pasting from previous reports: every
evaluation is different, and the information in the current report needs to be specific to the
current evaluation.

(3) Revise. The second and subsequent drafts are all about quality. This is the time for
revisions to ensure the words flow logically and the report meets all of the key elements of an
effective evaluation report (see para 7-17a). It is also a good time to ensure the report does not
only speak to negative findings, but also highlights the organization’s strengths and what the
organization did well. It helps a lot to read the drafts out loud. When reading out loud, the brain
receives the information in a new way and is likely to catch things that were not apparent when
reading silently.

(4) Write the EXSUM. An EXSUM is a condensed version of the full report and is
designed for senior leadership. It contains the background, purpose, methodology, positive and
negative key findings, conclusions, and recommendations. The EXSUM should be written last.
Too much can happen between the first and final drafts of the rest of the report, and if the
EXSUM is written too early, it may need to be rewritten before the report is finished. See TR 1-
11 for more information about writing EXSUMs (see app A for related publications).

(5) Wrap up loose ends. This is the time to ensure the report is complete and ready for
review. For example, this is the time to ensure the report supports findings and recommendations
with guiding references. This supports the evaluation’s credibility and transparency and makes it
easier for the reader to take corrective action.

(6) Review. Reviewing a report is essential. Both the report writer and at least one other
evaluator should review the report for accuracy, completeness, format, spelling, grammar,
consistency, readability, tone, and presentation.

(7) Seek external feedback. Once published, report writers should seek feedback from
their readers on the quality of their reports. This will help improve the quality of future reports
and ensure those reports provide readers with what they need.

(8) When a team writes an evaluation report together, it is important that the team lead
clearly communicates expectations for how each part of the report should be written. It is also
important that the final consolidated report be cohesive in all respects to ensure readability.

Section IV
Project Management for Evaluators
This section introduces project management concepts as they apply to leading quality assurance
evaluations. Project management is the process of organizing, coordinating, and managing
people and tasks from the initial stages of a project to completion. The ultimate goal of project
management is to meet project requirements; however, it also aims to improve the team’s
efficiency and effectiveness. Quality assurance accreditation, assessment, and evaluation team
leads as project managers ensure an evaluation’s tasks and milestones align with its goals. They
break an evaluation down into manageable components: tasks, deliverables, milestones, and
deadlines. They ensure quality control of the evaluation process by overseeing how well tasks
are executed. They mitigate risks and optimize resources. They are accountable for guiding,
monitoring, and regulating the entire evaluation from start to finish. They motivate the team to
finish the evaluation on time, and they report progress to stakeholders.

7-18. Project management process for evaluators


Managing evaluations is a complex job requiring knowledge of project management processes
and tools. Evaluation management typically has four phases: initiation, planning, execution, and
closure.

a. Initiation phase. During the initiation phase, the evaluation team lead as project manager
clearly defines the evaluation goals, critical required information, scope, timeline, organization,
constraints, risks, deliverables, and stakeholders.

b. Planning phase. During the planning phase, the evaluation team lead as project manager
develops a written evaluation plan. An evaluation plan focuses efforts on what is important for
the evaluation, prevents wasted time, and guides each step of the process. It also ensures that the
process is transparent, and that everyone involved in the evaluation understands their
responsibilities. An evaluation plan includes everything defined during the initiation phase
(evaluation goals, critical required information, scope, timeline, organization, constraints, risks,
deliverables, and stakeholders). It also includes the evaluation questions, the data needed (both
existing and new), data collection methods, the resources needed (for example, evaluators, travel
funds, and facilities), major milestones, tasks (for example, data collection activities, evaluator
huddles, briefings, and report writing), task duration and deadlines, and the evaluation schedule.
The team lead develops and communicates an evaluation plan as soon as possible before
implementation.
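
For illustration only, the following sketch captures a few evaluation plan elements in a simple
structure so milestones and tasks can be tracked consistently; the field names, course, and dates
are notional and do not represent a required format.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class EvaluationPlan:
    purpose: str
    questions: list
    data_collection: list
    milestones: dict = field(default_factory=dict)

plan = EvaluationPlan(
    purpose="Internal evaluation of a notional initial-entry course",
    questions=["Are lesson plans current?", "Are instructors certified?"],
    data_collection=["document review", "classroom observation", "student survey"],
    milestones={"data collection complete": date(2024, 6, 14),
                "draft report": date(2024, 6, 28)},
)
print(plan.purpose, "-", len(plan.questions), "evaluation questions")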

c. Execution phase. During the execution phase, the evaluation team lead as project manager
puts the evaluation plan into action, and the team carries it out. The team lead manages the tasks
and monitors and controls the evaluation to ensure that it is proceeding as planned. If not
proceeding as planned, the team lead works to resolve issues. The team lead keeps the evaluation
moving through periodic meetings and email updates.

d. Closure phase. During the closure phase, the evaluation team lead as project manager
formally closes the project. Considering feedback through AARs and surveys, the team lead
evaluates the evaluation’s success and shares lessons learned.

7-19. Project management skills for evaluators


Exercising effective evaluation management skills leads to realistic project planning and
timelines, strategic alignment between all stakeholders, and quality evaluations. Critical skills
required for effective evaluation management include leadership, organizational, time
management, and interpersonal skills.

a. Leadership skills. Evaluation team leads as project managers are leaders. They organize
and manage evaluation teams and monitor task execution. They ensure the evaluation team can
work unobstructed. They lead evaluations through every phase, manage risks, and ensure the
quality of deliverables. They teach, influence, and motivate others. They think creatively and
solve problems. They are adaptable and work well under pressure.

b. Organizational skills. Evaluation team leads as project managers are organized, goal-
oriented, and committed to the evaluation process. They determine all tasks needed to achieve
the evaluation goals, and they organize those tasks into schedules. They develop evaluation
plans, identify needed resources, and obtain those resources. They develop and manage reports,
and they ensure the quality of all report contributions. They monitor every aspect of an
evaluation and ensure everyone understands their roles, what tasks need to be completed, and by
when tasks need to be completed.

c. Time management skills. Evaluation team leads as project managers are experts at
managing time. They set and clearly communicate realistic milestones, timelines, and deadlines
to the evaluation team and stakeholders. They ensure timely completion of the evaluation and all
of its milestones and deliverables.

d. Interpersonal skills. Evaluation team leads as project managers possess strong
interpersonal skills, such as communication, active listening, collaboration, negotiation, conflict
management and resolution, patience, empathy, and diplomacy. They are able to build and
maintain positive relationships with stakeholders and their evaluation teams. They represent
stakeholders’ and their team’s needs throughout the evaluation process. Their interpersonal skills
align with the AQAP’s core values (see para 2-2).

Section V
Standards of Professional Evaluation Practice
This section introduces standards of professional evaluation practice. The standards are aligned
with the AQAP’s core values (see para 2-2).

7-20. Integrity and fairness

a. Integrity. Quality assurance evaluators do what is right and act honestly and ethically.
They keep their word and honor their commitments. They ensure the integrity of all evaluation,
assessment, and accreditation processes. They disclose and mitigate any conflict of interest
related to any evaluation assignment.

b. Fairness. Quality assurance evaluators balance multiple interests and treat varying
perspectives and interests fairly. They are impartial; they mitigate bias and maximize objectivity.
They put their own agendas and expectations aside.

7-21. Systematic inquiry and rigor

a. Systematic inquiry. Quality assurance evaluators select and apply evaluative methods
thoughtfully and deliberately to explore and fully understand shortcomings and strengths. They
conduct additional inquiries as needed to make judgments based on complete, reliable, and valid
data.

b. Rigor. Quality assurance evaluators use effective project management strategies and
rigorous evaluation design, data collection, and analysis methods. They ensure accurate, relevant,
timely, and actionable findings and recommendations.

7-22. Competence, learning, and credibility

a. Competence. Quality assurance evaluators possess the education, training, knowledge,
skills, and abilities required for competent evaluation practice. They adhere to the highest
technical standards. They declare their limitations and do not pretend competence where it does
not exist.

b. Learning. Quality assurance evaluators continually develop professionally to learn new
skills required for competent evaluation practice. They teach and learn from customers, partners,
and other evaluators. They foster an environment of openness and curiosity, and they welcome
new information, thoughts, and ideas. Through engagement, they encourage others to question
their ideas. They take an interest in what others feel, think, and say.

c. Credibility. Quality assurance evaluators establish and maintain credibility by being
trustworthy evaluation experts. They develop and maintain expertise by continually pursuing
education and training, and by staying current on the regulations, policies, and industry trends
that guide their practice. They are objective, basing conclusions on systematic inquiry and
polyangulation, and citing credible sources. They recognize, admit, and learn from their
mistakes.

7-23. Collaboration and transparency

a. Collaboration. Quality assurance evaluators build and nurture positive and collaborative
professional relationships with customers, partners, and other evaluators. They strive to create
meaningful dialogue instead of debate. They understand that solving important problems and
complex issues requires a community of diverse voices and skills. They are made stronger by
their relationships, and they make the greatest impact by working together toward shared
success.

b. Transparency. Quality assurance evaluators increase the value of information by sharing it
broadly and transparently. They readily explain an evaluation’s purpose, the information sought,
the methods and procedures used, the decisions made, the actions taken, and the results achieved.
They also readily explain an evaluation’s benefits, limitations, and risks. They openly deal with
problems and challenges, and they maximize stakeholders’ ability to make informed decisions
based on relevant information in a way that builds commitment. They engage stakeholders in
evaluations, and they provide clear, simple, and credible reporting.

7-24. Prevention of harm


Quality assurance evaluators respect the rights, privacy, and dignity of all evaluation participants
and other stakeholders. They reduce unnecessary risks of harm to organizations, groups, and
individuals. They protect anonymity and confidentiality.

7-25. Anonymity versus confidentiality

a. When conducting evaluation data collection activities, evaluators often claim the data will
be collected anonymously or confidentially. Evaluators should be clear and precise about the
difference between anonymity and confidentiality when making these claims. Understanding the
difference is critical for protecting participants.

(1) Anonymity. A data collection activity conducted with an assurance of anonymity
means that there is no way for anyone, including the evaluator, to personally identify any
participant because the activity does not collect any unique identifiers. Unique identifiers include
name, address, phone number, e-mail address, common access card number, IP address, online
handle, photograph, and so on. Combinations of otherwise non-unique identifiers, such as unit
and duty position, can also function as a unique identifier when combined. For example,
there is only one commander of any given training battalion; therefore, when those two non-
unique identifiers are combined, the combined result is a unique identifier. Other identifiers that
are typically non-unique when used alone but that could be combined to form a unique identifier
include rank, years of service, time on the job, age, gender, education level, course, class
number, and so on. If any data collected could be used alone or in combination to identify an
individual, either directly or indirectly, the data collection activity is not anonymous.
Additionally, any data collection activity conducted face-to-face, telephonically, or through a
collaboration site is not anonymous.
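
For illustration only, the following sketch flags combinations of otherwise non-unique fields
that single out an individual respondent; the field names and records are notional, and any real
review would follow local data-protection guidance.

from collections import Counter

responses = [
    {"rank": "SFC", "unit": "1st Battalion", "role": "instructor"},
    {"rank": "SFC", "unit": "1st Battalion", "role": "instructor"},
    {"rank": "CPT", "unit": "1st Battalion", "role": "commander"},  # unique combination
]
combos = Counter((r["rank"], r["unit"], r["role"]) for r in responses)
identifying = [combo for combo, count in combos.items() if count == 1]
print("Potentially identifying combinations:", identifying)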

(2) Confidentiality. A data collection activity conducted with an assurance of
confidentiality means that only the evaluator or evaluation team is able to personally identify
participants. This means that the data collection activity does collect unique identifiers, or non-
unique identifiers that could be combined to form a unique identifier, but the evaluator does not
report the data in a way that allows any participant’s identity to be known or tied to their
responses. With an assurance of confidentiality, the evaluator or evaluation team collecting the
data can know participants’ identities, but they do not reveal their identities, and they put
measures in place to protect their identities and not reveal them to anyone else. They accomplish
this through proper data management and security.

b. Whether a data collection activity is anonymous or confidential, evaluators inform
participants and assure them that their identities will be protected. Evaluators also explain limits
to confidentiality if a participant discloses that they have committed or are about to commit a crime,
or that they are at risk of harming themselves or others.

Appendix A
References
Unless otherwise indicated, TRADOC publications and forms are available at
https://adminpubs.tradoc.army.mil. Army publications and forms are available on the Army
Publishing Directorate website at https://armypubs.army.mil.

Section I
Required Publications

AR 10-87
Army Commands, Army Service Component Commands, and Direct Reporting Units

AR 25-98
Information Management Control Requirements Program

AR 350-1
Army Training and Leader Development

TP 350-70-14
Training and Education Development in Support of the Institutional Domain

TR 10-5
U.S. Army Training and Doctrine Command

TR 10-5-1
Headquarters, U.S. Army Training and Doctrine Command

TR 11-21
TRADOC Implementation of the Army Quality Assurance Program

TR 350-18
The Army School System

TR 350-70
Army Learning Policy and Systems

Section II
Related Publications
A related publication is a source of additional information. The user does not have to read it to
understand this publication.

AR 1-201
Army Inspection Policy

AR 25-50
Preparing and Managing Correspondence

DA PAM 25-40
Army Publishing Program Procedures

DA PAM 25-403
Army Guide to Recordkeeping

U.S. Government Publishing Office Style Manual
(Available at https://www.govinfo.gov)

TRADOC Supplement 1-201
Army Inspection Policy

TR 1-11
Staff Procedures

Section III
Prescribed Forms

This section contains no entries.

Section IV
Referenced Forms

DA Form 2028
Recommended Changes to Publications and Blank Forms

Section V
Reports

AAHS-RDR-PR-21-190
Army Quality Assurance Program External Survey

Appendix B
Impact Issue and Value-Added Practice Processes
This appendix provides guidance on processing impact issues and value-added practices.

Section I
Impact Issues
This section provides guidance on processing impact issues identified during Army accreditation,
during or outside of proponent assessment, and internally.

B-1. Impact issue identified during Army accreditation

a. When Army accreditation evaluators identify or are made aware of an impact issue during
an Army accreditation, they submit it to the accreditation team lead. The team lead, with input
from the team, analyzes and assesses the impact issue to determine if it meets the criteria for a
valid impact issue as shown in figure B-1.

Figure B-1. Criteria for a valid impact issue

b. If the accreditation team lead assesses that the impact issue meets all criteria, the
accreditation team includes the impact issue in the initial impressions out-brief and writes the
impact issue into the accreditation report. The impact issue narrative describes the impact issue,
explains how it meets each of the criteria, and identifies the activity responsible for resolving it.

c. The accreditation team lead monitors the impact issue as the learning institution works
with the responsible activity to resolve the issue. The team lead continues monitoring the impact
issue until it is resolved.

d. If the impact issue requires TRADOC- and/or Army-level attention, and if the learning
institution has exhausted all efforts to effectively mitigate or resolve the impact issue, the
accreditation team lead elevates the impact issue to the AQAP Director.

e. The learning institution maintains an audit trail of all actions taken to resolve the issue.

B-2. Impact issue identified during or outside of proponent assessment

a. If a proponent QAO identifies or is made aware of an impact issue at one of their assessed
learning institutions, either during or outside of a proponent assessment, they assess whether the
institution’s leaders and stakeholders are already aware of the issue, and if not, they inform them
of the issue. They work closely with the institution to assess whether the impact issue meets all
of the criteria described in paragraph B-1a. They also work with the institution to determine the
internal and external factors causing the issue.

b. The proponent QAO works with the learning institution to assess the level of attention the
impact issue requires. If the impact issue does not require TRADOC- or Army-level attention,
the proponent QAO works with the learning institution to help connect them with the most-
suitable activity for resolution. The proponent QAO monitors the impact issue as the learning
institution works with the responsible activity to resolve the issue. The proponent QAO
continues monitoring the impact issue until it is resolved.

c. If a proponent QAO identifies or becomes aware of an impact issue during a proponent
assessment, the assessment team lead includes the impact issue in the initial impressions out-
brief and writes the impact issue into the proponent assessment report. The impact issue narrative
describes the impact issue, explains how it meets each of the criteria described in paragraph B-
1a, and identifies the activity responsible for resolving it. The assessment team lead also notifies
the associated accreditation team lead. If the impact issue requires TRADOC- or Army-level
attention, and if the learning institution has exhausted all efforts to effectively mitigate or resolve
the impact issue, the accreditation team lead elevates the issue to the AQAP Director.

d. If a proponent QAO identifies or becomes aware of an impact issue at an assessed learning
institution outside of a proponent assessment, if the impact issue requires TRADOC- or Army-
level attention, and if the learning institution has exhausted all efforts to effectively mitigate or
resolve the impact issue, the proponent QAO may elevate the issue directly to the AQAP
Director.

e. The assessed learning institution maintains an audit trail of all actions taken to resolve the
impact issue.

B-3. Impact issue identified internally

a. When a learning institution QAO identifies or is made aware of an impact issue at their
own institution, they assess whether their institution’s leaders and stakeholders are already aware
of the issue, and if not, they inform them of the issue. They work closely with their institutions’
leaders and stakeholders to assess whether the impact issue meets all of the criteria described in
paragraph B-1a. They also work with leaders and stakeholders to determine the internal and
external factors causing the issue.

b. The QAO records the impact issue in any related internal evaluation report, as applicable.
They also include the impact issue in the institution’s self-study and self-assessment reports. The
impact issue narrative describes the impact issue, explains how it meets each of the criteria
described in paragraph B-1a, and identifies the activity responsible for resolving it.

c. The QAO reviews each impact issue periodically with leaders and stakeholders and
maintains an impact issue audit trail. QAO directors (or equivalent) communicate each impact
issue’s status to the learning institution’s commander, commandant, or civilian or military
equivalent during quarterly quality assurance reviews, or more frequently as required.

d. If inside of an Army accreditation or proponent assessment period, the QAO reports the
impact issue to the accreditation or assessment team lead, as applicable, who processes the
impact issue following the processes described in paragraph B-1 or paragraph B-2, as applicable.

e. If outside of an Army accreditation or proponent assessment period, and after the learning
institution has exhausted all efforts to effectively mitigate or resolve an impact issue requiring
TRADOC- or Army-level attention, accredited learning institution QAOs may elevate the issue
directly to the AQAP Director. Assessed learning institution QAOs process impact issues
through their proponent QAOs (see para B-2).

Section II
Value-Added Practices
This section provides guidance on processing value-added practices identified during Army
accreditation, during or outside of proponent assessment, and internally.

B-4. Value-added practice identified during Army accreditation

a. When Army accreditation evaluators identify or are made aware of a value-added practice,
they submit it to the accreditation team lead.

b. The accreditation team lead, with input from the team, analyzes and assesses the value-
added practice to determine if it meets the criteria for a valid value-added practice as shown in
figure B-2.

Figure B-2. Criteria for a valid value-added practice

c. If the accreditation team lead assesses that the value-added practice meets all criteria, they
include the value-added practice in the initial impressions out-brief and write the value-added
practice into the accreditation report. The value-added practice narrative describes the value-
added practice and explains how it meets each of the criteria described in paragraph B-4b.

B-5. Value-added practice identified during or outside of proponent assessment

a. If a proponent QAO identifies or is made aware of a value-added practice at one of their
assessed learning institutions, either during or outside of a proponent assessment, they review the
value-added practice and its audit trail documentation with the learning institution’s leaders and
stakeholders to assess whether the value-added practice meets all of the criteria described in
paragraph B-4b.

b. If a proponent QAO identifies or becomes aware of a value-added practice during a
proponent assessment, the assessment team lead includes the value-added practice in the initial
impressions out-brief and writes the value-added practice into the proponent assessment report.
The value-added practice narrative describes the value-added practice and explains how it meets
each of the criteria described in paragraph B-4b. The assessment team lead also notifies the
associated accreditation team lead.

c. The assessed learning institution’s QAO monitors the value-added practice and maintains
an audit trail documenting the practice’s applicable AEAS criteria, initial identification, analysis
and staffing, and reviews.

d. If a proponent QAO identifies or becomes aware of a value-added practice at an assessed
learning institution outside of a proponent assessment, the proponent QAO may elevate the
value-added practice directly to the AQAP Director.

B-6. Value-added practice identified internally

a. When a learning institution QAO identifies or is made aware of a value-added practice at
their own institution, they assess whether the value-added practice meets all of the criteria
described in paragraph B-4b.

b. The QAO includes the value-added practice in any related internal evaluation report, as
applicable. The QAO also shares the value-added practice with stakeholders across the
institution, and includes the value-added practice in the institution’s self-study and self-
assessment reports. The value-added practice narrative describes the value-added practice and
explains how it meets each of the criteria described in paragraph B-4b.

c. The QAO monitors the value-added practice and maintains an audit trail documenting the
practice’s applicable AEAS criteria, initial identification, analysis and staffing, and reviews.

d. If identified inside of an Army accreditation or proponent assessment period, the QAO
reports the value-added practice to the accreditation or assessment team lead, as applicable, who
processes the value-added practice following the processes described in paragraph B-4 or
paragraph B-5, as applicable.

e. If identified outside of an Army accreditation or proponent assessment period, accredited
learning institution QAOs may elevate the value-added practice directly to the AQAP Director.
Assessed learning institution QAOs process value-added practices through their proponent
QAOs.

Appendix C
Marketing the Army Quality Assurance Program
There are numerous ways to market the AQAP, and some of those ways are the AQAP logo, the
e-mail signature block, the elevator pitch, and overview briefings.

C-1. Logo
Using the AQAP logo shown in figure C-1 (below) wherever appropriate can help promote
AQAP brand awareness. Products that might display the AQAP logo include AQAP or QAO
newsletters, brochures, posters, presentations, and websites. For information on how to obtain
the AQAP logo and other marketing information, see paragraph C-6.

Figure C-1. Army Quality Assurance Program logo

C-2. E-mail signature block


An e-mail signature block is a simple way to market the AQAP on a daily basis. Figure C-2
below shows an example e-mail signature block marketing the AQAP. Before changing or
adding new content to a signature block, it is important to first check local policy and guidance.

Jane I. Doe
Army Quality Assurance Evaluator
Quality Assurance Office, Some Center of Excellence
123 Main Street
Fort Somewhere, XX 12345
123-456-7890; DSN 111-7890
jane.i.doe.civ@army.mil
MilSuite: https://www.milsuite.mil/book/groups/tradoc-quality-assurance-office
@US_Army_QA
Figure C-2. Example e-mail signature block

C-3. Elevator pitch


An elevator pitch, also known as an elevator speech, is a short, persuasive speech that can be
used to introduce the AQAP. Its purpose is to explain the program clearly and quickly, and to
spark interest in the program. An elevator pitch should be no longer than 30 seconds, short
enough to use anytime and anywhere. Elements of an elevator pitch might include a quick
personal introduction, a brief summary of what the AQAP does and what makes the AQAP
unique, and an example of how the AQAP has helped an organization improve its performance
and better meet its goals.

C-4. Overview briefing


An overview briefing is a presentation explaining the AQAP’s purpose, mission, goals, values,
functions, processes, services offered, and any other AQAP-related topics of interest to various
audiences. An overview briefing can be tailored to meet the needs of different audiences, such as
commanders as they assume leadership of their institutions; mid-level leaders as they assume
leadership roles within their institutions; instructional designers, developers, and course
managers as they assume new responsibilities; new instructors as they attend the instructor
course; and new developers as they attend the developer course. An overview briefing can be a
formal presentation to a mid-size or large group, or it can be a more-casual desk-side
presentation for an audience of one, two, or just a few. The length of an overview briefing may
vary depending on the needs of the audience; however, it is typically one hour or less.

C-5. Other ways to market


Other ways to market the AQAP include branded apparel and other merchandise, branded name
tags, independently funded coins, newsletters, brochures, posters and displays, social media, and
collaborative activities with internal and external stakeholders, to name a few.

Note. Before engaging in any AQAP branding, merchandising, marketing, or similar efforts,
coordinate first with the servicing Office of the Staff Judge Advocate.

C-6. Sharing marketing resources


Learning institution QAOs are encouraged to share their new and innovative marketing ideas and
strategies with the AQAP marketing manager. For AQAP marketing tools already available for
AQAP community members, visit the AQAP portal: https://armyeitaas.sharepoint-mil.us/sites/tr-
hq-aqap.

Appendix D
Essential Competencies for Quality Assurance Evaluators
This appendix provides a full listing of the essential competencies for quality assurance
evaluators, as shown in table D-1. There are 62 essential competencies, organized under six
domains. Once published, performance measures for each competency will be
available on the AQAP portal: https://armyeitaas.sharepoint-mil.us/sites/tr-hq-aqap.

Table D-1
Essential competencies for quality assurance evaluators
Number Competency or Domain
1.0 Professional Practice Domain
1.1 Applies professional evaluation standards
1.2 Acts ethically and strives for integrity and honesty in conducting evaluations
1.3 Conveys personal evaluation approaches and skills to potential clients
1.4 Respects clients, respondents, program participants, and other stakeholders
1.5 Considers the general and public welfare in evaluation practice
1.6 Contributes to the knowledge base of evaluation
2.0 Systematic Inquiry Domain
2.1 Understands the knowledge base of evaluation (terms, concepts, theories, assumptions)
2.2 Knowledgeable about quantitative methods
2.3 Knowledgeable about qualitative methods
2.4 Knowledgeable about mixed methods
2.5 Conducts literature reviews
2.6 Specifies program theory
2.7 Frames evaluation questions
2.8 Develops evaluation designs or plans
2.9 Identifies data sources
2.10 Collects data
2.11 Assesses validity of data
2.12 Assesses reliability of data
2.13 Analyzes data
2.14 Interprets data
2.15 Makes judgments
2.16 Develops recommendations
2.17 Provides rationales for decisions throughout the evaluation
2.18 Reports evaluation procedures and results
2.19 Notes strengths and limitations of the evaluation
2.20 Conducts meta-evaluations
3.0 Situational Analysis Domain
3.1 Describes the program
3.2 Determines program evaluability
3.3 Identifies the interests of relevant stakeholders
3.4 Serves the information needs of intended users
3.5 Addresses conflicts
3.6 Examines the organizational context of the evaluation
3.7 Analyzes the political considerations relevant to the evaluation
3.8 Attends to issues of evaluation use
3.9 Attends to issues of organizational change
3.10 Respects the uniqueness of the evaluation site and client
3.11 Remains open to input from others
3.12 Modifies the report as needed
4.0 Project Management Domain
4.1 Responds to requests
4.2 Negotiates with clients before the evaluation begins
4.3 Writes formal agreements or evaluation plans
4.4 Communicates with clients throughout the evaluation process
4.5 Identifies needed resources for evaluation, such as information, expertise, personnel, and instruments
4.6 Budgets an evaluation
4.7 Justifies cost given information needs or evaluation requirements
4.8 Uses appropriate technology
4.9 Supervises others involved in conducting the evaluation
4.10 Trains others involved in conducting the evaluation
4.11 Conducts the evaluation in a non-disruptive manner
4.12 Presents work in a timely manner
4.13 Conducts appropriate follow through
5.0 Reflective Practice Domain
5.1 Aware of self as an evaluator (knowledge, skills, and abilities)
5.2 Reflects on personal evaluation practice (competencies and areas for growth)
5.3 Pursues professional development in evaluation
5.4 Builds professional relationships to enhance evaluation practice
6.0 Interpersonal Competence Domain
6.1 Uses written communication skills
6.2 Uses verbal/listening communication skills
6.3 Uses negotiation skills
6.4 Uses conflict resolution skills
6.5 Facilitates constructive interpersonal interaction (teamwork, group facilitation, and processing)
6.6 Demonstrates cross-cultural competence
6.7 Demonstrates critical thinking/reasoning skills

Appendix E
Continuous Learning Points Guidelines

E-1. Continuous learning points guidelines overview

a. These CLP guidelines are designed to help AQAP professionals and their supervisors
determine how to apply CLP credits to various continuous learning activities. Although this
guide addresses quality assurance evaluators throughout, CLP foundations are universal, and
other AQAP professionals may easily adapt this guide to fit their AQAP roles.

b. These guidelines do not provide exhaustive examples of CLP credits. Wherever the
guidance is not specified or clear, supervisors are authorized to determine how many credits to
award by balancing these guidelines with their professional judgment. Supervisors and
evaluators may address CLP concerns or disputes through their QAO director (or equivalent) to
the AQAP Director as needed.

E-2. How to earn continuous learning points


To earn CLPs, continuous learning activities must directly or indirectly support an evaluator's
professional practice and increase their performance capabilities. Most importantly, the CLPs are
intended to show evidence that an evaluator is continuing to keep themselves relevant and
current through continual training, education, and development.

E-3. Categories of continuous learning activities


As shown in table E-1, continuous learning activities generally fall into one of four categories:
training courses and modules, professional activities, academic courses, and experience.

Table E-1
Continuous learning points credit guide
Creditable Activities: Continuous Learning Points Credit*
Training Courses and Modules
Each quality assurance evaluator course: 40 points
Awareness briefing or training without testing or assessment: 0.5 points per hour of instruction
Training course or module with testing or assessment: 1 point per hour of instruction
Functional training: 1 point per hour of instruction
Leadership training: 1 point per hour of instruction
Professional Activities
Professional exam, licensure, or certification: 10 to 30 points
Teaching or lecturing: 2 points per hour; maximum 20 points per year
Present at symposium or conference: 2 points per hour; maximum 20 points per year
Attend symposium or conference: 0.5 points per hour; maximum 4 points per day and 20 points per year
Attend AQAP forum: 30 points
Workshop participation: 1 point per hour; maximum 8 points per day and 20 points per year
Publication: 10 to 40 points
Academic Courses
Quarter hour: 5 points per quarter hour
Semester hour: 10 points per semester hour
Continuing education unit: 10 points per unit
Equivalency examinations (includes training courses and modules): Same points awarded for the course
Experience
Developmental assignments: Maximum 20 points per year
Lead special project: Maximum 30 points per year
Participate in special project: Maximum 20 points per year
Mentoring: Maximum 10 points per year
Lead external accreditation event: Maximum 30 points per year
Participate in external accreditation event: Maximum 20 points per year
*All activities may earn points only in the year accomplished, awarded, or published.

a. Training courses and modules. This category includes learning activities such as the four
QAECs, awareness briefings and training, training courses and modules, functional training, and
leader training. Supervisors determine whether specific training courses and modules are relevant
to evaluator professional development.

(1) Quality assurance evaluator courses. The four sequential QAECs – familiarization,
basic, senior, and master – offer both resident and distance learning training.

(2) Awareness briefings and training. Periodically, Army organizations provide briefings
or training sessions to acquaint the workforce with new or changed policy. Awareness briefings
and training do not include testing or assessment. Completion of an activity in this category earns
0.5 CLPs per hour of briefing or instruction.

(3) Training courses and modules. Training courses and modules are generally more
formalized and longer in duration than briefings or training sessions, and they always include
some form of testing or assessment. Completion of training courses and modules earns one CLP
per hour of instruction.

(4) Functional training. Functional training is formal training designed to qualify
students, often leaders and Soldiers, for jobs that require specific functional skills and
knowledge. Completion of functional training earns one CLP per hour of instruction.

(5) Leadership training. Leadership training is any structured training designed to
improve leadership skills. Completion of leadership training earns one CLP per hour of
instruction.

b. Professional activities. This category includes learning activities such as earning
professional licensure or certificate, presenting at or attending seminars or conferences,
participating in workshops, publishing, and attending the AQAP forum. Supervisors determine
whether specific professional activities are relevant to evaluator professional development.

(1) Professional examination, licensure, or certificate. Quality assurance evaluators may
earn professional designations that assure their qualification to perform a job or task. Authorities
in the field, such as trade and professional organizations, grant such designations. Examples of
activities falling under this category include passing a professional evaluation examination,
becoming certified as a return-on-investment professional, or receiving a human performance
improvement certificate. Activities in this category can earn between 10 and 30 CLPs at the time
the evaluator completes the activity. Evaluators work with their supervisors to determine the
appropriate number of CLPs based on the activity, with supervisors making the final decision.

(2) Teaching or lecturing. Quality assurance evaluators are encouraged to share their
AQAP-related knowledge and insights through teaching courses or modules to others in the
AQAP community or to their learning institutions’ stakeholders. Examples of activities falling
under this category are teaching a course or module at a forum held for a learning institution’s
key accreditation stakeholders; and teaching or lecturing about a quality assurance-related topic
at a school, college, or university. Teaching or lecturing earns 2 CLPs per hour, with a maximum
of 20 CLPs per year.

(3) Present at seminar, symposium, or conference. Examples of activities falling under this
category include presenting at an AQAP lunch-and-learn or the AQAP forum. Presenting at
seminars or conferences earns 2 CLPs per hour of preparation and delivery, with a maximum of
20 CLPs per year.

(4) Attend seminar, symposium, or conference. Quality assurance evaluators may earn
CLPs for attending professional seminars, symposia, or conferences. Attending seminars,
symposia, or conferences, excluding the AQAP forum, earns 0.5 points per hour, with a
maximum of 4 points per day and 20 points per year.

(5) AQAP forum attendance. Quality assurance evaluators may earn up to 30 CLPs for
attending the AQAP forum. Supervisors pro-rate points for partial attendance.

(6) Workshop participation. Quality assurance evaluators may earn CLPs for attending,
hosting, or planning workshops. Workshops are small, one- or two-day events dedicated to
discussing a specific topic. They usually involve structured activities and other forms of
engagement. Participating in a workshop earns 1 point per hour, with a maximum of 8 points per
day and 20 points per year.

(7) Publication. Quality assurance evaluators may earn CLPs for publishing professional
or academic articles related to their professional practice, with points awarded in the year
published. Supervisors determine how many points to award based on the article’s scope and
potential impact. Publishing earns 10 to 40 points per publication.
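
Note. The hourly rates and caps in paragraphs b(2) through b(6) above can be applied mechanically. The following sketch is illustrative only and is not an official AQAP tool; the function, variable names, and example hours are hypothetical. It shows one way the hourly CLP rate could be applied first, followed by any daily cap, and finally the yearly cap.

# Illustrative sketch only; not an official AQAP calculator.
# Rates and caps are taken from the activity descriptions above.
def capped_clps(hours_by_day, rate_per_hour, daily_cap=None, yearly_cap=None):
    """Apply an hourly CLP rate, then any daily cap, then any yearly cap."""
    total = 0.0
    for hours in hours_by_day:          # hours spent on each separate day
        earned = hours * rate_per_hour
        if daily_cap is not None:
            earned = min(earned, daily_cap)
        total += earned
    if yearly_cap is not None:
        total = min(total, yearly_cap)
    return total

# Teaching or lecturing: 2 CLPs per hour, maximum 20 CLPs per year.
print(capped_clps([3, 4, 6], rate_per_hour=2, yearly_cap=20))                  # prints 20
# Attending a seminar: 0.5 CLPs per hour, maximum 4 per day and 20 per year.
print(capped_clps([8, 8, 8], rate_per_hour=0.5, daily_cap=4, yearly_cap=20))   # prints 12.0
# Workshop participation: 1 CLP per hour, maximum 8 per day and 20 per year.
print(capped_clps([8, 10], rate_per_hour=1, daily_cap=8, yearly_cap=20))       # prints 16.0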

c. Academic courses. This category includes successful completion of formal college and
university courses, continuing education units (CEU), and equivalency examinations.
Supervisors determine whether specific academic courses, CEUs, and equivalency examinations
are relevant to evaluator professional development.

(1) Academic courses. Completed formal academic courses convert to CLPs using these
formulas: 1 quarter hour = 5 CLPs; 1 semester hour = 10 CLPs.

(2) Continuing education units. Some professional organizations offer educational opportunities that award widely recognized continuing education units. Awarded CEUs convert to CLPs using this formula: 1 CEU = 10 CLPs. Supervisors may use this formula to credit CLPs for successful completion of any educational program awarding CEUs, even if evaluators opt not to receive the CEUs.

(3) Equivalency examinations. An equivalency examination tests an individual’s knowledge of a topic and awards formal credit as though the individual completed a formal course on the topic. Equivalency examinations earn the same CLPs as the courses they represent or replace.
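
Note. The conversion factors in paragraphs c(1) and c(2) above reduce to simple multiplication. The following sketch is illustrative only and is not an official AQAP tool; the function name and example values are hypothetical.

# Illustrative sketch only; conversion factors are those stated above.
QUARTER_HOUR_TO_CLP = 5     # 1 quarter hour  = 5 CLPs
SEMESTER_HOUR_TO_CLP = 10   # 1 semester hour = 10 CLPs
CEU_TO_CLP = 10             # 1 CEU           = 10 CLPs

def academic_clps(quarter_hours=0, semester_hours=0, ceus=0):
    """Convert completed academic credit into CLPs."""
    return (quarter_hours * QUARTER_HOUR_TO_CLP
            + semester_hours * SEMESTER_HOUR_TO_CLP
            + ceus * CEU_TO_CLP)

# Example: one 3-semester-hour course plus a program awarding 1.5 CEUs.
print(academic_clps(semester_hours=3, ceus=1.5))    # prints 45.0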

d. Experience.

(1) This category includes developmental assignments, leading or participating in special projects, mentoring others, and leading or participating in formal external evaluations. Supervisors determine whether specific experience is relevant to evaluator professional development. Experience normally earns anywhere from 10 to 30 points; however, supervisors may use discretion when determining reasonable credits, and may award more or fewer credits than shown in table E-1. Supervisors consider the long-term benefit to the Army, and the immediate benefit to the evaluator and the organization.

(2) When experience is used to earn CLPs, supervisors and evaluators pre-define the
tasks and expected learning outcomes. If the experience is developmental, the evaluator should
receive mentoring during the experience. When practicable, evaluators are encouraged to
develop a product, such as a briefing, project design, report, or other product that demonstrates
the evaluator’s achievement of the expected learning outcomes. Evaluators are also encouraged
to share what they learned and any resulting product with others in their organizations.

Appendix F
Evaluator Certification Nomination Process

F. Evaluator certification nomination process

a. To nominate an evaluator for progression to the next evaluator level, the supervisor
reviews the evaluator’s associated QAEDP developmental records, assesses the evaluator’s
actual performance against progression requirements, and determines the evaluator’s eligibility
for progression to the next level.

b. Once an evaluator meets all progression requirements, to include supervisor recommendation, the supervisor prepares an appropriate-level nomination packet, and uploads the packet to the evaluator’s e-Portfolio folder in the AQAP portal.

c. The supervisor consolidates all of the evaluator’s QAEDP records associated with the
nomination into a single PDF file; names the consolidated file using a clear naming convention,
such as “lastname_senior_all-records_yyyymmdd.pdf;” and uploads the file to the evaluator’s e-
Portfolio folder. The supervisor emails the QAEDP manager, notifying them that the nomination
packet is in e-Portfolio and ready for review and processing.

d. Evaluators whose supervisors are not actively associated with the AQAP, such as may
occur with evaluators assigned additional quality assurance duties and not assigned to a formal
QAO, keep their supervisors informed of their activities and performance in the QAEDP, but
prepare their own nomination packets. These evaluators have their supervisors review their
nomination packets and sign their nomination requests; however, these evaluators upload their
own nomination packets to e-Portfolio. These evaluators email the QAEDP manager directly,
notifying them that their nomination packet is in e-Portfolio and ready for review and processing.

e. The QAEDP manager reviews and processes nomination packets and forwards them to the
AQAP Director for approval. Once nomination packets are approved, the AQAP Director
presents certificates for evaluator, senior evaluator, and master evaluator certification. A
certificate is not awarded for the apprentice level: Evidence of achieving the apprentice level is
an AQAP Director-signed nomination request. If possible, the AQAP Director presents
certificates at the AQAP forum or other AQAP community event.

f. Contact the AQAP’s QAEDP manager with any questions about enrolling evaluators into
the QAEDP, managing their QAEDP records, and nominating evaluators for certification.
Contact the AQAP portal administrator with any questions about accessing or navigating e-
Portfolio.

g. Visit the QAEDP tab within the AQAP portal for current example evaluator certification
nomination packets; and for current job aids explaining how to access, navigate, and upload
documents to e-Portfolio, and how to enroll evaluators into the QAEDP:
https://armyeitaas.sharepoint-mil.us/sites/tr-hq-aqap.

Appendix G
Army Accreditation Timeline
This appendix provides an example Army accreditation timeline. Table G-1 presents the
expected timeline of critical Army accreditation tasks. Day 1 of the timeline is the first day of the
scheduled accreditation period.

Table G-1
Expected timeline of critical Army accreditation tasks
By Day | Critical Accreditation Task | Lead Action Officer
Before 120-Day Army Accreditation Period
Minus 60 | Complete mission analysis and planning; team selection and notification. | Accreditation team lead
Minus 60 | Post completed proponent assessments (as applicable) to AQAP portal. | Proponent assessment team lead
Minus 60 | Post self-study/self-assessment to AQAP portal. | Learning institution QAO
Minus 60 | Provide list of courses in session. | Learning institution QAO
Minus 60 | Provide LON to learning institution. | AQAP Director
Minus 30 | Determine and publish list of focus courses. | Accreditation team lead
During 120-Day Army Accreditation Period
1 | Upload remaining documentary evidence. | Learning institution QAO
1 | Publish accreditation milestone schedule. | Accreditation team lead
5 | Review/analyze documentary evidence and proponent assessments (as applicable). | Accreditation team
5 | Provide team lead virtual event requirements. | Accreditation team
7 | Provide virtual event requirements to institution’s QAO. | Accreditation team lead
14 | Conduct initial meeting with institution’s key action officers – processes and expectations. | Accreditation team
14 | Conduct formal virtual in-brief. | Accreditation team
15 | Schedule and coordinate virtual events. | Learning institution QAO
15 | Publish accreditation execution schedule. | Accreditation team lead
31-90 | Conduct ongoing data collection and evaluation. | Accreditation team
90 | IF on-site visit, coordinate and schedule visit. | Accreditation team lead
91-120 | Conduct final data collection and evaluation. | Accreditation team
114-120 | IF on-site visit, conduct visit (days may vary). | Accreditation team
119 | Conduct desk-side discussion with senior leader. | Accreditation team lead
120 | Conduct initial impressions out-brief. | Accreditation team
After 120-Day Army Accreditation Period
*120 + 3 | Provide completed rubrics to AEAS lead. | Criterion evaluator
*120 + 8 | Provide completed AEAS report to team lead. | AEAS lead
*120 + 14 | Consolidate and review report; process for final review. | Accreditation team lead
*120 + 17 | Provide draft report to learning institution for review. | AQAP Director
Receipt plus 14 | Complete report review and submit command acceptance/rebuttal (after receipt of draft report). | Learning institution QAO
Receipt plus 7 | Complete clarification or rebuttal responses (as required). | Accreditation team lead
Receipt plus 3 | Forward report packet for CG, TRADOC approval. | HQ TRADOC QAO
As soon as possible | Forward final approved accreditation report and accreditation certificate to learning institution. | AQAP Director
Receipt plus 30 | Submit corrective action plan (after receipt of final approved accreditation report). | Learning institution QAO
*Day 120 or date of initial impressions out-brief, whichever comes first
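
Note. The post-accreditation milestones in table G-1 are offsets from a single reference date: day 120 or the date of the initial impressions out-brief, whichever comes first. The following sketch is illustrative only and is not an official AQAP tool; the function name and the example reference date are hypothetical.

# Illustrative sketch only; offsets are taken from table G-1.
from datetime import date, timedelta

def report_milestones(reference_date):
    """Report milestones measured from day 120 or the out-brief date, whichever comes first."""
    return {
        "Provide completed rubrics to AEAS lead": reference_date + timedelta(days=3),
        "Provide completed AEAS report to team lead": reference_date + timedelta(days=8),
        "Consolidate and review report": reference_date + timedelta(days=14),
        "Provide draft report to learning institution": reference_date + timedelta(days=17),
    }

# Hypothetical example: the initial impressions out-brief occurred on 28 June 2024.
for task, due in report_milestones(date(2024, 6, 28)).items():
    print(f"{task}: {due.isoformat()}")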

Appendix H
Example Army Accreditation Process
This appendix provides an example Army accreditation process. The example provided here is
organized according to the Army accreditation timeline described in appendix G. The Army
accreditation process can be divided into three high-level phases: before, during, and after the
120-day Army accreditation period. Section I of this appendix describes the expected process for
accomplishing accreditation milestones due before the first day of the scheduled 120-day Army
accreditation period. Section II describes the expected process for accomplishing accreditation
milestones due during the 120-day Army accreditation period. Section III describes the expected
process for accomplishing accreditation milestones due after the 120-day Army accreditation
period or initial impressions out-brief.

Section I
Phase 1: Before the 120-Day Army Accreditation Period
This section describes the expected process for accomplishing accreditation milestones, tasks,
and deliverables 60 and 30 days before the first day of the scheduled 120-day Army accreditation
period. This phase consists of six critical accreditation tasks, as shown in table H-1 below and
described in the paragraphs that follow.

Table H-1
Critical accreditation tasks before 120-day Army accreditation period
Timeline | Task | Deliverable | Lead Action Officer
By minus 60 days | Conduct accreditation mission analysis and planning | Draft accreditation milestone schedule | Accreditation team lead
By minus 60 days | Submit proponent assessment reports | Proponent assessment reports with acceptance memos | Proponent assessment team leads
By minus 60 days | Post self-study and self-assessment report | Self-study and self-assessment report with corrective action plan | Learning institution QAO
By minus 60 days | Provide list of courses in session | List of courses in session during accreditation period | Learning institution QAO
By minus 60 days | Provide LON | LON, planning guidance, evidence guidelines | Accreditation team lead and AQAP Director
By minus 30 days | Determine and publish focus courses | List of focus courses | Accreditation team lead

H-1. Accreditation mission analysis and planning
Accreditation mission analysis and planning is critical for effective and efficient execution of the accreditation, to include all of its associated events and activities. Not later than 60 days before the first day of the scheduled 120-day Army accreditation period, the accreditation team lead completes accreditation mission analysis and planning, which includes the process steps shown in table H-2. Each of these steps is discussed in more detail in the sub-paragraphs that follow.

Table H-2
Accreditation mission analysis and planning process
Step Actions
1 Conduct initial review.
2 Verify training in session.
3 Determine AEAS criteria applicability.
4 Determine accreditation approach and method.
5 Determine resource requirements.
6 Build the accreditation team; send to AQAP Director for approval.
7 Conduct initial coordination with the accreditation team.
8 Conduct initial coordination with proponent assessment team leads.
9 Prepare the LON and its attachments.
10 Prepare schedule of accreditation milestones.

a. Conduct initial review. The team lead reviews the learning institution’s last Army
accreditation report and corrective action plan, most-recent self-study and self-assessment report,
and most-recent MEP. The primary purpose of this initial review is to become familiar with the
learning institution and understand its strengths, challenges, and unique characteristics.

b. Verify training will be in session. The team lead contacts the learning institution’s QAO to verify that courses will be in session during any planned on-site visit, or otherwise during the scheduled accreditation period. This step is not to verify which courses will be in session, but whether any courses will be in session at all. This information informs the team lead’s decision about when to conduct an on-site visit if needed, or whether the scheduled accreditation period needs to be rescheduled.

c. Verify criteria applicability. Working with the learning institution, the team lead analyzes
and determines which AEAS criteria are applicable to the learning institution. For information
about AEAS criteria applicability, see paragraph 4-3.

d. Determine accreditation approach and method.

(1) The team lead uses the results of initial mission analysis to:

(a) Determine required critical information.

(b) Replace any assumptions about the learning institution with facts.

(c) Determine possible constraints to conducting the accreditation.

(d) Assess the risk of not conducting any part of the accreditation.

(e) Determine the approach and method for conducting the accreditation.

(2) The team lead also assesses whether the need for an on-site visit is indicated. This
initial assessment may change as new information emerges throughout the accreditation process.

e. Contact proponent assessment team leads. If the accreditation involves proponent assessment of outlying subordinate schools and/or functionally aligned RC learning institutions, the accreditation team lead contacts, as early in the process as possible, the associated proponent assessment team leads. The purpose of this contact is to:

(1) Confirm the dates of the scheduled accreditation period.

(2) Determine if the assessment team can be part of a joint matrix accreditation team
during the higher-level accredited institution’s 120-day Army accreditation period.

(3) Confirm the date that associated proponent assessment reports are due.

(4) Provide guidance on any situations unique to the accreditation and/or assessment.

(5) Clarify expectations.

(6) Address any questions or concerns.

f. Determine resource requirements. The team lead determines resource requirements, such as
travel funding (if needed), equipment, supplies, and administrative support; and confirms
resource availability. The team lead also determines the number of evaluators, by subject-matter
expertise or specialization, needed for effective execution of all evaluative activities.

g. Build the Army accreditation team.

(1) Based on AEAS criteria applicability, the team lead designs the accreditation team’s
composition (without evaluator names) by required subject matter expertise and organization
and/or directorate and assigns AEAS and criteria to each team role. The team’s composition may
also include proponent assessment team(s) if proponent assessments will be conducted
concurrently. For concurrent accreditation of CoEs and their functionally aligned NCOAs, the
CoE and NCOA team leads coordinate to share evaluator resources.

(2) The team lead sends the proposed team composition, to include any participating
proponent assessment team(s), to the AQAP Director through the AQAP Deputy Director for
approval.

(3) Once the AQAP Director approves the proposed team composition, the team lead
emails the directors and/or chiefs of the applicable organizations and/or directorates requesting
evaluator support.

(4) Once the organizations and/or directorates confirm support and provide evaluator
names, the team lead assigns each evaluator to specific team roles.

h. Conduct initial coordination with the team.

(1) The team lead sends evaluators a list of the AEAS applicable to the learning
institution so that evaluators may begin their evaluation preparations.

(2) If an on-site visit is planned, the team lead sends evaluators the information they will
need to arrange travel to the site.

i. Draft accreditation milestone schedule.

(1) Purpose. The purpose of the accreditation milestone schedule is to clearly define and
organize the start and end dates and resources needed to complete all accreditation milestones,
tasks, and deliverables. The milestone schedule helps the accreditation team lead effectively
manage the accreditation. It also informs invested leaders, the accreditation team, and the
learning institution of when accreditation milestones and associated tasks and deliverables will
occur throughout and immediately following the accreditation period: This information is critical
for these stakeholders to be able to effectively plan and prepare for participating in the
accreditation.

(2) Milestones, tasks, and deliverables. Based on the results of initial analysis, the team
lead identifies all accreditation milestones, tasks, and deliverables; and begins developing the
initial accreditation milestone schedule.

(a) Milestones are a project management and scheduling tool used to mark significant points in a project, such as when a phase or major deliverable starts or finishes. Milestones mark the end of one phase of a project and the start of another. Milestones provide Army accreditation team leads, as project managers, an effective way to determine major scheduling periods and more-accurately estimate the time it will take to complete the Army accreditation.

(b) Tasks are all of the work necessary to complete the accreditation deliverables. Tasks
include all of the work involved in: planning and coordinating, ensuring proponent assessments
are completed and reported, conducting stakeholder meeting and formal in-brief, publishing a list
of focus courses, uploading remaining documentary evidence, reviewing documentary evidence,
determining data collection event requirements, scheduling and coordinating data collection
events, conducting data collection events and activities, conducting any on-site visit, conducting
an initial impressions out-brief, administering a post-accreditation survey, conducting an
accreditation team AAR, writing an accreditation report, reviewing the report, and submitting a
corrective action plan.

(c) Deliverables are actual items created to advance the accreditation. Deliverables
include a milestone schedule, completed proponent assessment reports, a list of focus courses,
uploaded documentary evidence, presentation materials, a list of data collection event
requirements, accreditation execution schedules, post-accreditation survey results, an AAR
executive summary, accreditation report rubrics (from criterion evaluators), accreditation report
AEAS sections (from AEAS leads), a final draft accreditation report, a memorandum of
acceptance or rebuttal, a memorandum of response to request(s) for clarification, an accreditation
report packet, a final accreditation report and certificate of accreditation, and a corrective action
plan.

(3) Scope and content. The accreditation milestone schedule includes the entire scheduled
accreditation period and ends with the learning institution submitting a corrective action plan.
The schedule identifies the start and end dates of all accreditation phases and critical
accreditation tasks, and the due dates for all accreditation deliverables.

(4) Alignment. The milestone schedule should align with the planning guidance attached
to the LON (see para H-1j). However, the planning guidance provides high-level initial guidance
and is published 60 days before the first day of the 120-day Army accreditation period; whereas
the milestone schedule provides more-inclusive and detailed guidance and is published on or
about the first day of the 120-day Army accreditation period.

j. Draft letter of notification and planning guidance. The HQ TRADOC QAO Plans and
Operations Division prepares an initial draft LON and planning guidance specific to the
accredited institution, and staffs the draft LON and planning guidance through the accreditation
team lead to the AQAP Director for approval.

H-2. Sixty days before the 120-day Army accreditation period
Five critical accreditation tasks, as described below, are accomplished at least 60 days before the 120-day Army accreditation period.

a. Conduct mission analysis and planning. Not later than 60 days before the first day of the
scheduled accreditation period, the accreditation team lead completes accreditation mission
analysis and planning. See paragraph H-1 for detailed information about the mission analysis and
planning process.

b. Submit proponent assessment reports. Not later than 60 days before the first day of the
scheduled accreditation period, proponent assessment team leads submit their completed
proponent assessment reports with associated acceptance memos to the accreditation team lead.

c. Post self-study and self-assessment. Not later than 60 days before the first day of the
scheduled accreditation period, the learning institution’s QAO posts their institution’s
commander-approved self-study and self-assessment report to the AQAP portal. Receipt of a
LON is not required for this step.

d. Provide list of courses in session. Not later than 60 days before the first day of the
scheduled accreditation period, the learning institution’s QAO provides the accreditation team
lead a complete list of the institution’s courses in session during the accreditation period,
annotating any courses that are not current in TDC. Receipt of a LON is not required for this
step.

e. Provide letter of notification. Not later than 60 days before the first day of the scheduled
accreditation period, the AQAP Director provides the learning institution a LON, Army
accreditation planning guidance, and Army accreditation evidence guidelines.

H-3. Thirty days before the 120-day Army accreditation period
One two-part critical accreditation task, as described below, is accomplished at least 30 days before the 120-day Army accreditation period.

a. Determine focus courses. Not later than 30 days before the first day of the scheduled 120-
day Army accreditation period, the team lead analyzes the institution’s list of courses in session
and determines the focus courses to evaluate, ensuring a variety of courses that include at least
one of each: initial military training (IMT), professional military education (PME), and
functional, as applicable to the learning institution.

(1) A course does not have to be in session during the scheduled accreditation period for
the team lead to select it as a focus course.

(2) Although selecting focus courses defines the initial scope of the accreditation, the
accreditation is not necessarily limited to those focus courses if:

(a) Evaluating an AEAS criterion that refers to an institutional process that spans all
courses; for example, the institution’s process for obtaining required course resources, or

(b) Evidence emerges indicating the need to look at specific elements of non-focus
courses.

(3) The accreditation team lead reserves the right to evaluate any of the learning
institution’s courses, whether or not formally identified as focus courses.

b. Provide list of focus courses to evaluators. Not later than 30 days before the first day of the
scheduled 120-day Army accreditation period, the team lead develops and provides a list of
focus courses to the accreditation evaluators.

Section II
Phase 2: During the 120-Day Army Accreditation Period
This section describes the expected process for accomplishing accreditation milestones, tasks,
and deliverables during the 120-day Army accreditation period. This phase consists of three sub-
phases: days 1 through 30, days 31 through 90, and days 91 through 120.

H-4. Sub-phase 1: Days 1 to 30
This sub-phase consists of 11 critical accreditation tasks, as shown in table H-3 below and described in the paragraphs that follow.

Table H-3
Critical accreditation tasks days 1 to 30 of 120-day Army accreditation period
Timeline | Task | Deliverable | Lead Action Officer
By Day 1 | Upload remaining documentary evidence | Documentary evidence | Learning institution QAO
By Day 1 | Conduct accreditation team meeting | Team lead determines if any | Accreditation team lead
By Day 1 | Publish accreditation milestone schedule | Accreditation milestone schedule | Accreditation team lead
By Day 1 | Conduct initial review | N/A | Evaluators
By Day 5 | Review and analyze self-study and documentary evidence | Data collection requirements, evaluation questions | Evaluators
By Day 5 | Provide virtual event requirements to team lead | Data collection event requirements | Evaluators
By Day 7 | Provide consolidated virtual event requirements to institution’s QAO | Consolidated data collection event requirements | Accreditation team lead
By Day 14 | Conduct action officer meeting | Presentation materials | Accreditation team lead
By Day 14 | Conduct formal in-brief | In-brief presentation | Accreditation team lead
By Day 15 | Develop and coordinate accreditation execution schedule | Draft accreditation execution schedule | Accreditation team lead and coordinating scheduler
By Day 15 | Publish accreditation execution schedule and send invitations | Accreditation execution schedule; event invitations | Accreditation team lead and coordinating scheduler

a. Upload remaining documentary evidence. Not later than day 1 of the 120-day Army
accreditation period, learning institutions post all documentary accreditation evidence not
already available in TDC or their self-studies to their respective sites in the AQAP portal.
Institutions use a clear naming convention, such as “course_product_version_yyyymmdd.pdf” or
“unit-organization_document-type_yyyymmdd.docx.”

b. Conduct Army accreditation team meeting. Not later than day 1 of the 120-day Army
accreditation period, the team lead holds a virtual meeting with all accreditation team members
to clarify roles and expectations, and to address any issues or concerns.

c. Publish accreditation milestone schedule. Not later than day 1 of the 120-day Army
accreditation period, the team lead publishes the accreditation milestone schedule to the
accreditation team and the learning institution’s QAO. The team lead updates and re-publishes
the accreditation milestone schedule as needed.

d. Conduct initial review. Not later than day 1 of the 120-day Army accreditation period,
accreditation evaluators review the learning institution’s last Army accreditation report and
corrective action plan, most-recent self-study and self-assessment report, and most-recent MEP.
The primary purpose of this initial review is for team members to become familiar with the
learning institution and understand its strengths, challenges, and any unique characteristics or
processes, especially as they apply to each team member’s assigned AEAS or criterion.

e. Review and analyze documentary evidence. Not later than day 5 of the 120-day Army
accreditation period, accreditation evaluators review and analyze in depth the learning
institution’s self-study and all documentary evidence that applies to their assigned AEAS or
criteria, interpret results, and determine data collection approach and methods. This includes
identifying evaluation questions, and of whom to ask those evaluation questions, or what and
where to observe to find answers to those evaluation questions. This also includes determining if
any missing or additional documentary evidence is needed and requesting those documents
through the team lead.

f. Provide data collection event requirements to team lead.

Note. Surveys, interviews, and focus groups used for Army accreditation are conducted in
compliance with AR 25-98.

(1) Not later than day 5 of the 120-day Army accreditation period, accreditation
evaluators provide the accreditation team lead with their data collection event requirements, and
they include the following information for each event:

(a) The type of data collection event; for example, focus group, interview, record review,
observation, or walk-through.

(b) The purpose of the event, to include the specific AEAS criteria being evaluated.

(c) The associated course or courses (if course-related) or organization.

(d) The specific participants or groups of participants (by organization, duty position, or
role) needed at the event (for example, non-supervisors from the directorate of training, budget
analyst, facility manager, TDC administrator, G-1, G-4, and so on).

(e) The number of participants and demographics for focus groups.

(f) The names of any other accreditation evaluators who should attend the event.

(g) The estimated time required to conduct the event.

(h) The date and time they would like to conduct the event if there is a preference.

(2) Each evaluator also includes in their submission any known dates and times they will
not be available to participate in any events (for example, vacation time), the best days and times
to try and schedule their events, and their time zone.

(3) The team lead consolidates, analyzes, approves, and organizes all evaluators’ data
collection event requirements, ensuring that requested events focus on the accreditation’s critical
information. The team lead also identifies any duplications in AEAS criteria and/or event
participant populations and combines events wherever appropriate. The team lead provides the
resulting approved list of event requirements to the scheduling coordinator.

(4) Accreditation evaluators wishing to add event requirements to the schedule after the
initial list is approved make their requests to the team lead for situational awareness and
approval. They do not go directly to the scheduling coordinator with new requirements. The team
lead informs the coordinating scheduler of new approved requirements to add to the schedule.

g. Provide consolidated data collection event requirements to institution. Not later than day 7
of the 120-day Army accreditation period, the accreditation team lead provides a consolidated
list of data collection event requirements to the learning institution’s QAO.

h. Conduct action officer meeting. Not later than day 14 of the 120-day Army accreditation
period, the accreditation team lead coordinates and leads a virtual meeting with the learning
institution’s key accreditation action officers and the Army accreditation team. The purpose of
this meeting is to introduce the Army accreditation team; provide information and answer
questions about the purpose and intent of accreditation, accreditation processes, and how
evaluators apply the AEAS; and to set clear expectations of what to expect throughout the
accreditation process.

i. Conduct formal in-brief. Not later than day 14 of the 120-day Army accreditation period,
the team lead coordinates and leads a formal in-brief for the learning institution’s senior leader;
other key leaders; and key action officers. The team lead coordinates with the institution’s QAO
and consolidates the in-brief presentation material.

(1) One purpose of the formal in-brief is for the team lead to introduce the accreditation
team, provide the institution’s leadership an overview of the accreditation process, set clear
expectations for the process, and discuss the learning institution’s questions or concerns about
the process.

(2) Another purpose of the formal in-brief is for the learning institution’s senior leader to
present to the accreditation team an overview of the institution. This overview includes the
institution’s structure and organization, mission, and functions; its subordinate organizations’
missions and functions; its DOTMLPF-P functions; its relationships with functionally aligned
AC and RC learning institutions; its quality assurance program; its faculty and staff development
program; issues affecting the institution’s training and education; and special initiatives.

j. Develop and coordinate accreditation execution schedule.

(1) The accreditation team lead makes final determination on all data collection event
requirements and oversees all aspects of the accreditation execution schedule. The learning
institution’s scheduling coordinator develops, publishes, revises, and coordinates the schedule
based on the team lead’s requirements. The learning institution’s QAO provides a scheduling
coordinator to develop and coordinate the accreditation execution schedule.

(2) Not later than day 15 of the 120-day Army accreditation period, based on the
accreditation team lead’s approved list of data collection event requirements, the scheduling
coordinator schedules and coordinates events directly with the institution’s action officers and
accreditation evaluators, and develops the accreditation execution schedule.

(3) For each scheduled event, the accreditation execution schedule includes:

(a) Date event scheduled.

(b) Event start and end times (in all applicable time zones).

(c) Event type and brief description (for example, “Focus group: Non-supervisors, Army
Civilian staff”).

(d) Event location (normally within the AQAP’s Army 365 site).

(e) Name and organization of all participating evaluators (for example, “Dr. Jane Doe
(HQ TRADOC QAO),” or “Mr. John Smith (TRADOC G2)”).

(f) Name and organization of all participants from the institution (for example, “G1: MAJ
John Doe,” or “Brigade S4: Ms. Jane Smith”).

(g) Name of event host (normally from the institution’s QAO; see para H-5a(2)).

(h) Any special instructions and schedule change history.

(i) Any other information helpful for schedule and event management.

(4) The coordinating scheduler de-conflicts the accreditation execution schedule with
action officers and accreditation evaluators as needed and notifies the team lead of any
scheduling issues or concerns.

(5) The accreditation execution schedule and all event invitations should clearly convey
each event’s start and end times in all of the time zones applicable to all accreditation evaluators
and other stakeholders. This helps avoid confusion, and it helps ensure that the right people get
to the right place at the right time.
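
Note. Publishing each event’s start and end times in every applicable time zone, as described above, can be done with standard time zone tables. The following sketch is illustrative only and is not an official AQAP tool; the function name, the event time, and the time zones shown are hypothetical examples.

# Illustrative sketch only; the event time and zones are hypothetical examples.
from datetime import datetime
from zoneinfo import ZoneInfo

def times_in_zones(start_local, zone_names):
    """Express one event start time in each applicable time zone."""
    return {z: start_local.astimezone(ZoneInfo(z)).strftime("%Y-%m-%d %H:%M %Z")
            for z in zone_names}

start = datetime(2024, 5, 14, 9, 0, tzinfo=ZoneInfo("America/New_York"))
print(times_in_zones(start, ["America/New_York", "America/Chicago", "Europe/Berlin"]))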

k. Publish accreditation execution schedule.

(1) Publish schedule. Not later than day 15 of the 120-day Army accreditation period, the
scheduling coordinator publishes the accreditation execution schedule on the accreditation team
lead’s behalf. The preferred publication platform for the accreditation execution schedule is the
AQAP’s Army 365 site, which is available to all evaluators and learning institution action
officers. Publishing to the AQAP’s Army 365 site enables constant situational awareness of
scheduled events and provides all stakeholders the opportunity to request any needed schedule
adjustments as early in the process as possible.

(2) Review schedule. Accreditation evaluators and learning institution action officers
review the current accreditation execution schedule at least weekly to confirm the events
scheduled in the current week and review and confirm the events scheduled for the following
week. Most virtual events involve several or many participants, and re-coordinating and
rescheduling these events require a significant amount of effort and time; therefore, evaluators
and action officers notify the scheduling coordinator as soon as possible of any scheduling
conflicts and change requests.

(3) Evaluator coverage for some scheduling conflicts. When more than one accreditation
evaluator is scheduled to conduct a single virtual event, that event does not normally need to be
rescheduled when an evaluator realizes there is a scheduling conflict for that event, unless that
evaluator is also the event’s primary evaluator. In most cases, an evaluator who is unable to
attend a scheduled event can provide their evaluation questions to one of the other scheduled
evaluators, who can collect the needed information on their behalf.

(4) Manage event invitations. The coordinating scheduler sends and manages all event
invitations in close coordination with the accreditation team lead and on the team lead’s behalf.
Event invitations include the event’s date, start and end times in all applicable time zones, event
type and brief description, any special instructions, and a link to the event location, which is
normally within the AQAP’s Army 365 site. The coordinating scheduler may also upload the
event’s applicable AEAS criteria to the AQAP’s Army 365 site so that participants have easy
access to them if needed.

H-5. Sub-phase 2: Days 31 to 90
This sub-phase consists of four critical accreditation tasks, as shown in table H-4 below and described in the paragraphs that follow.

Table H-4
Critical accreditation tasks days 31 to 90 of 120-day Army accreditation period
Timeline | Task | Deliverable | Lead Action Officer
As scheduled | Conduct virtual events | Data collection notes; rubrics and AEAS reports as completed | Evaluators
Per team lead | Conduct virtual evaluator huddles | Team lead decides if any | Accreditation team lead and evaluators
14 days before on-site visit | Update accreditation execution schedule with on-site events | Updated accreditation execution schedule | Accreditation team lead and coordinating scheduler
As early as possible | Schedule initial impressions out-brief | Updated accreditation execution schedule | Accreditation team lead and coordinating scheduler

a. Conduct virtual events. From day 31 through the last day of the 120-day Army
accreditation period, or through the first day of the on-site visit if conducted, accreditation
evaluators conduct data collection events using virtual means in accordance with the
accreditation execution schedule. This may include but is not limited to student and instructor
record reviews, other types of record reviews, test control procedure reviews, focus groups,
interviews, training observations, high physical demand task (HPDT) observations, and facility
walk-throughs.

(1) Prepare for events. Evaluators ensure they are well-prepared for each scheduled event
by reviewing ahead of time all applicable documentary evidence available in the learning
institution’s self-study, TDC, and the AQAP portal.

(2) Provide event hosts. The learning institution’s QAO provides event hosts for each
virtual event. An event host’s role is to verify that everyone scheduled to attend the event is
present or that there is a representative in their place, facilitate introductions between evaluators
and participants, and capture any due-outs from the event. For events with stated, implied, or
assumed assurances of confidentiality (see para 7-25a(2)), such as focus group sessions, the host
normally leaves the event after introductions, and the event’s team lead invites the host back into
the event at its conclusion to share any due-outs.

(3) Stay on criteria or topic. Evaluators ensure each event stays on the scheduled AEAS
criteria or topic: If evaluators introduce AEAS criteria or topics that were not scheduled for the
event, they will not likely have the right participants present.

b. Conduct virtual evaluator huddles.

(1) Integral to conducting an accreditation successfully is regular communication between all accreditation team members, especially during periods of high activity. For the period of conducting virtual events, which is days 31 through 120 or the first day of any on-site visit, the team lead establishes a formalized communication process known as virtual evaluator huddles. To accomplish this, the team lead may decide to use periodic virtual team meetings; a virtual content-sharing platform, such as a wiki; or a combination of the two.

(2) No matter which method or platform the team lead decides to use for virtual evaluator huddles, the purpose is the same, which is to communicate accreditation and evaluation information across the team. Information shared in virtual evaluator huddles includes result summaries from events conducted since the last huddle; any impact issues or value-added practices identified; and any issues or concerns related to the learning institution, the accreditation process, and/or the virtual event schedule. The team lead decides and informs the team specifically what types of information they want shared in the virtual evaluator huddles.

(3) The team lead decides the method and frequency of virtual evaluator huddles based
on the nature and scope of the accreditation, and on the level of evaluation activity during any
given period. Virtual evaluator huddles are for accreditation team members only.

c. Update accreditation execution schedule with on-site events.

(1) If an on-site visit is planned, then not later than 14 days before the on-site visit, the
coordinating scheduler, working with the accreditation team lead and following the same
processes described in paragraph H-4j, updates the accreditation execution schedule with on-site
events.

(2) The updated schedule should only address those on-site events requiring prior
planning and coordination with the learning institution. Activities such as training observations
and other observations not requiring prior coordination should not be included on the schedule.
This allows evaluators flexibility with what they choose to observe during the on-site visit.

(3) The team lead publishes the accreditation execution schedule, updated with all on-site
data events, to the learning institution and accreditation team. The team lead also distributes or
makes available any special instructions and site maps to the accreditation team.

d. Schedule initial impressions out-brief. As early in the accreditation process as practicable, the accreditation team lead coordinates with the learning institution’s QAO to schedule an initial impressions out-brief for the institution’s commander, commandant, or civilian or military equivalent. Normally the initial impressions out-brief is scheduled for on or about the last day of the scheduled accreditation period or the last day of the on-site visit if conducted. The initial impressions out-brief may be conducted using virtual methods or in person.

H-6. Sub-phase 3: Days 91 to 120
This sub-phase consists of five critical accreditation tasks, as shown in table H-5 below and described in the paragraphs that follow.

Table H-5
Critical accreditation tasks days 91 to 120 of 120-day Army accreditation period
Timeline | Task | Deliverable | Lead Action Officer
As scheduled | Conduct on-site visit | Data collection notes; rubrics and AEAS reports as completed | Evaluators
After data collection; before out-brief | Conduct initial data analysis and interpretation | Draft initial impressions out-brief presentation | Accreditation team lead and evaluators
Normally last day of accreditation period | Conduct initial impressions out-brief | Initial impressions out-brief presentation | Accreditation team lead
By initial impressions out-brief | Invite learning institution to take post-accreditation survey | Survey results | Accreditation team lead
By one week after initial impressions out-brief | Conduct accreditation team AAR | AAR summary | Accreditation team lead and evaluators

a. Conduct on-site visit. When an on-site visit is needed, it is normally conducted during days
114 through 120 of the scheduled Army accreditation period. The actual days may vary based on
mission requirements.

(1) Provide on-site visit facilities and administrative support. During any on-site visit, the
learning institution provides appropriate facilities and administrative support for the Army
accreditation team. Support includes a lockable room to serve as the team’s on-site operations
center (see para 6-32); an operations center liaison, preferably from the learning institution’s
QAO; other rooms for conducting focus groups and interviews; audio-visual and printing
equipment; office supplies; escorts for helping evaluators get to the various locations across the
installation; and protocol support for the initial impressions out-brief.

(2) Confirm initial impressions out-brief date. As early in the on-site visit as possible the
accreditation team confirms the scheduled date and time for conducting the initial impressions
out-brief, which is normally conducted on the last day of the on-site visit.

(3) Conduct on-site data collection activities.

(a) During the on-site visit, accreditation evaluators conduct data collection events in accordance with the updated accreditation execution schedule and conduct any planned, but unscheduled, observations. Data collection during the on-site visit focuses on gathering information that could not be collected using virtual means, and polyangulating data collected during virtual events.

(b) Data collection events may include but are not limited to student and instructor record
reviews, other types of record reviews, test control procedure reviews, focus groups, interviews,
training observations, HPDT observations, and facility walk-throughs. Some data collection
events, such as training observations, are not normally scheduled in the on-site environment, and
evaluators may use any available opportunity to conduct these.

(4) Conduct on-site evaluator huddles.

(a) At the end of each day of data collection, the accreditation team lead conducts an in-person on-site evaluator huddle with all accreditation evaluators. Information shared in on-site evaluator huddles includes result summaries from events conducted that day; any impact issues or value-added practices identified; and any issues or concerns related to the learning institution, the accreditation process, and/or the on-site event schedule. The team lead informs the team about the specific types of information they want shared during the on-site evaluator huddles.

(b) On-site evaluator huddles are for accreditation team members only; however, the
team lead may choose to have the scheduling coordinator present for any discussions about the
accreditation execution schedule.

b. Conduct initial data analysis and interpretation.

(1) Before the initial impressions out-brief, the accreditation team meets, either using virtual methods or in person as applicable, to discuss initial evaluation impressions and finalize the initial impressions out-brief presentation.

(2) Before this meeting, each AEAS lead, working closely with their associated criterion
evaluators, should have already conducted initial data analysis and interpretation for their AEAS
and prepared an AEAS slide or slides for the initial impressions out-brief in accordance with the
initial impressions out-brief presentation guidelines in appendix I. Each AEAS lead presents
their initial impressions to the rest of the team and solicits feedback and discussion as needed.
Each AEAS lead’s initial impressions include observed strengths, challenges, and any impact
issues and value-added practices.

(3) The accreditation team lead develops a high-level summary for the initial impressions
out-brief presentation while considering, among other things, whether the:

(a) Institution’s DOTMLPF-P functions adequately support applicable criteria and competencies.

(b) Institution oversees subordinate organizations that implement quality, current, and
relevant training and education that reflects the operational environment and validated lessons
learned.

(c) Institution trains AC and RC students to the same standard.

(d) Institution presents the right education and training, using the right medium, to the
right student, at the right time, and in the right place.

(e) Institution prepares for future training and education requirements.

(4) The team lead consolidates the AEAS and summary slides into a single presentation
and finalizes the presentation in accordance with the initial impressions out-brief presentation
guidelines in appendix I.

c. Conduct initial impressions out-brief.

(1) Normally on or about the last day of the scheduled accreditation period, the
accreditation team lead, with team member assistance, provides an initial impressions out-brief
to the learning institution’s senior leaders, staff, and key action officers.

(2) The purpose of the initial impressions out-brief is to inform the learning institution’s
leaders and stakeholders of key findings, both positive and negative, that emerged during
accreditation events and activities.

(3) The team lead ensures that leaders and stakeholders understand that initial findings
and impressions may change after in-depth data analysis, and that the learning institution may
address any issues or concerns with the final draft report when they receive it.

(4) The accreditation team lead also conducts a courtesy desk-side discussion with the
learning institution’s commander, commandant, or civilian or military equivalent the day before
or morning of the official out-brief. The purpose of this discussion is to provide the senior leader
with an overview of the team’s initial impressions in advance of the official out-brief, and to
provide the senior leader the opportunity to address their initial questions or concerns.

(5) See paragraph 7-16 for a general overview of briefing evaluation results. See
appendix I for initial impressions out-brief presentation guidelines.

d. Invite the learning institution to complete a post-accreditation survey.

(1) By the end of the accreditation period, the accreditation team lead invites the learning
institution’s senior leaders and key accreditation stakeholders to complete a post-accreditation
survey. The survey asks questions related to leaders’ and stakeholders’ experiences with the
accreditation process and the accreditation team. It also solicits feedback on ways to improve the
accreditation process.

(2) The accreditation team lead provides the survey link during the formal in-brief and
the initial impressions out-brief and encourages maximum participation. The team lead also
shares the survey link directly with the learning institution’s QAO director (or equivalent) not
later than ten days after the last day of the accreditation period. The learning institution’s QAO
director (or equivalent) shares the link directly with leaders and key stakeholders across the
learning institution and encourages maximum participation. A link to the post-accreditation
survey is also included in the LON.

(3) HQ TRADOC QAO uses accreditation survey data to inform changes and
improvements to Army accreditation and related processes.

e. Conduct after action review.

(1) At the end of the scheduled accreditation period, and not later than one week after the
end of the initial impressions out-brief, the accreditation team lead conducts an AAR with the
accreditation team. The AAR may be either virtual or in-person.

(2) The primary purpose of the AAR is to capture accreditation process strengths,
challenges, and lessons learned.

(3) The team lead consolidates AAR feedback and provides an AAR executive summary
to the AQAP Director and AQAP Deputy Director.

(4) HQ TRADOC QAO uses AAR feedback to inform changes and improvements to
Army accreditation and related processes.

Section III
Phase 3: After the 120-Day Army Accreditation Period
This section describes the expected process for accomplishing accreditation milestones, tasks,
and deliverables after the end of the scheduled 120-day Army accreditation period or initial
impressions out-brief. This phase consists of four categories of critical accreditation tasks, as
shown in table H-6 and described in paragraphs H-7 through H-10.

Table H-6
Critical accreditation tasks after the 120-day Army accreditation period
Timeline | Task | Deliverable | Lead Action Officer
Army Accreditation Report
By 3 days after out-brief | Provide completed rubrics to AEAS lead. | Completed applicable rubrics | Criterion evaluator
By 8 days after out-brief | Provide completed AEAS report to team lead. | Completed AEAS reports | AEAS lead
By 14 days after out-brief | Consolidate and review report; process for final review. | Completed draft report and all applicable rubrics | Accreditation team lead
By 17 days after out-brief | Provide draft report to learning institution for review. | Final draft report and all applicable rubrics; request for clarification document | AQAP Director
Learning Institution Report Review
Receipt plus 14 days | Complete report review and submit command acceptance/rebuttal (after receipt of draft report). | Acceptance or rebuttal memorandum; any request(s) for clarification | Learning institution QAO
Within 7 days of receiving request | Complete clarification and/or rebuttal response (as required). | Written clarification and/or rebuttal response | Accreditation team lead
Final Report and Certificate of Accreditation
Within 3 days of receiving acceptance | Forward report packet for CG, TRADOC approval. | Final report packet | HQ TRADOC QAO
As soon as possible after receipt | Forward final approved accreditation report and accreditation certificate to learning institution. | Final approved accreditation report; accreditation certificate | AQAP Director
Corrective Action Plan
Receipt plus 30 days | Submit corrective action plan (after receipt of final approved accreditation report). | Corrective action plan | Learning institution QAO

H-7. Army accreditation report
Developing the final draft Army accreditation report consists of four critical accreditation tasks (see also table H-6).

a. Submit rubrics.

(1) Criterion evaluators are encouraged to prepare and submit their rubrics to their AEAS
lead immediately after they finish evaluating each criterion, no matter where that day falls in the
accreditation timeline. This allows the criterion evaluator to rate the rubric criteria and write the
narrative while the information is still fresh in their mind. It also allows the AEAS lead to begin
analyzing results and preparing the AEAS report.

(2) Not later than three days after the initial impressions out-brief, criterion evaluators
submit their completed AEAS rubrics to the AEAS lead and courtesy copy the accreditation team
lead. Within each rubric, evaluators clearly describe their observations for all sub-criteria rated
below 100. Narratives for sub-criteria rated below 100 are actionable; readers should be able to
easily understand why those criteria received the ratings that they did.

(3) Evaluators also clearly describe notably positive observations and any value-added
practices and/or impact issues for each criterion. If the rubric does not provide enough space to
also provide recommendations for criteria rated below 100, evaluators provide recommendations
directly to the AEAS lead so that the AEAS lead may summarize those recommendations in the
AEAS report. Evaluators do not include attachments to the rubrics without accreditation team
lead approval.

(4) In instances where the learning institution made an immediate or on-the-spot correction, the rubric still reflects the issue observed but notes that the institution has already taken the necessary corrective action. The report reflects the condition at the time of initial observation.

b. Submit summary comments and recommendations.

(1) AEAS leads are encouraged to prepare and submit their AEAS reports to the team
lead immediately after receiving all applicable rubrics from criterion evaluators, no matter where
that day falls in the accreditation timeline. This allows the AEAS lead to write the AEAS report
while the information is still fresh. It also allows the accreditation team lead to begin preparing
the accreditation report’s executive summary.

(2) Not later than eight days after the initial impressions out-brief, AEAS leads provide
their completed AEAS reports with all applicable rubrics to the accreditation team lead. Each
AEAS report summarizes positive and negative observations, provides recommendations, and
consolidates value-added practices and impact issues.

c. Prepare final draft accreditation report.

(1) Not later than 14 days after the initial impressions out-brief, the accreditation team
lead consolidates and reviews the completed AEAS reports and rubrics and prepares a final draft
accreditation report.

(2) The report includes all applicable rubrics, an executive summary with
recommendations, consolidated value-added practices and impact issues, and a summary of all
associated proponent assessment results.

(3) The team lead forwards the completed draft report through internal staffing (in
accordance with local policy) to the AQAP Director for review and approval.

d. Provide final draft report to learning institution. Not later than 17 days after the initial
impressions out-brief, after approving the final draft report, the AQAP Director sends the
learning institution an email notification with a link to the report. The notification advises the
learning institution that the report is ready for their review, and that they have 14 days to review
the report and submit a memorandum of acceptance or rebuttal and any official requests for
clarification.

H-8. Learning institution report review


Not later than 14 days after receiving the link to the final draft accreditation report, the learning
institution reviews the draft accreditation report for any calculation errors that may affect their
accreditation rating; documents any issues or concerns with the accreditation report in an
accreditation acceptance or rebuttal memorandum signed by the learning institution’s
commander, director, chief of staff, or other senior leader authorized to represent the institution;
and submits the acceptance or rebuttal memorandum and any official requests for clarification to
the AQAP Director with a courtesy copy to the accreditation team lead.

a. Rebuttal.

(1) A rebuttal is an argument or claim providing evidence that a reported accreditation
rating, finding, recommendation, or impact issue is inaccurate or false.

(2) Rebuttals are considered only when the learning institution fails accreditation with an
overall accreditation rating below 80; when the rebuttal applies to an AEAS rated below 80, in
which case it is considered for that AEAS only; or when the rebuttal applies to a recommendation
or impact issue. The AQAP Director is the final adjudicating authority for all rebuttals.

(3) Whenever a rebuttal is considered, the AQAP Director assigns an action officer,
normally the accreditation team lead, to review the rebuttal, coordinate with the institution as
needed, and prepare a draft written response to the rebuttal. The draft response includes the
learning institution’s reason(s) for rebuttal and addresses each reason if more than one. The
action officer submits the draft response and all supporting evidence to the AQAP Director. The
AQAP Director reviews the draft response and all supporting evidence and makes a final
determination on whether to accept or reject the rebuttal, in full or in part. The final response
includes the final outcome and justification. The accreditation team lead updates the
accreditation report with revised ratings and narratives if needed for alignment with the rebuttal
decision.

b. Requests for clarification.

(1) Learning institutions may submit unofficial or official requests for clarification on any
accreditation process or reported accreditation rating, finding, recommendation, value-added
practice, or impact issue. As their first course of action, learning institutions normally submit
unofficial requests for clarification to the accreditation team lead, who forwards the requests to
the appropriate evaluator(s) for response. The team lead provides all responses to the learning
institution as quickly as possible.

(2) Learning institutions may submit official requests for clarification without first
submitting an unofficial request, or after submitting an unofficial request without response or
resolution. The learning institution includes all official requests for clarification in the
acceptance or rebuttal memorandum to the AQAP Director, who assigns a SME(s) as action
officer for the request.

(3) The action officer conducts a thorough review of the specific concern or issue that the
request addresses, which includes reviewing applicable regulations and policies. The action
officer provides a written report of the review to the AQAP Director. The report is in
memorandum format and includes the initial request for clarification, the review methodology,
all applicable references, a recommended final response, and the AQAP Director’s signature
block. The AQAP Director reviews the action officer’s report of review, obtains clarification as
needed, and determines the final response. The AQAP Director may direct the action officer to
make modifications to the report before signing the report. The AQAP Director sends the final
response to the learning institution.

H-9. Final report and certificate of accreditation

a. As soon as possible after receiving the learning institution’s response to the draft
accreditation report, and after working through any rebuttal actions, HQ TRADOC QAO
forwards the final accreditation report packet, to include the learning institution’s acceptance
memorandum and an Army accreditation certificate, through appropriate staffing channels for
final CG, TRADOC approval.

b. Normally within five days of forwarding, HQ TRADOC QAO receives the final approved
Army accreditation report and certificate. Normally within three days of receipt, the AQAP
Director delivers the final approved Army accreditation report and signed certificate of
accreditation to the learning institution.

H-10. Corrective action plan

a. A corrective action plan is the commander, commandant, or civilian or military
equivalent’s plan for resolving shortcomings and deficiencies for all AEAS criteria and sub-
criteria rated below 100.

b. Not later than 30 days after receiving the final accreditation report, the learning institution
QAO coordinates with its institution’s stakeholders to develop and submit a corrective action
plan to the AQAP Director and accreditation team lead. The corrective action plan identifies the
staff leads for each corrective action item. Learning institutions may use any format for their
corrective action plans. A corrective action plan template, for optional use, is available in the
AEAS section of the AQAP portal: https://armyeitaas.sharepoint-mil.us/sites/tr-hq-aqap.

H-11. Impact issues and value-added practices


The accreditation team lead processes any impact issues and value-added practices identified
during accreditation following the processes described in appendix B.

Appendix I
Initial Impressions Out-Brief Presentation Guidelines
This appendix provides information and guidance on the initial impressions out-brief
presentation format for Army accreditation. Although learning institutions may use different
formats for proponent assessment and internal evaluation out-brief presentations, they are
encouraged to adopt the basic format presented here and adapt it to meet their specific needs. For
an initial impressions out-brief presentation template, visit the AQAP portal:
https://armyeitaas.sharepoint-mil.us/sites/tr-hq-aqap.

I-1. Intent of the initial impressions out-brief


The intent of any initial impressions out-brief, no matter the format, is for an accreditation,
assessment, or evaluation team to share results of initial, high-level data analysis and
interpretation. It is not for sharing results of in-depth data analysis and interpretation, because in
most cases the team has not completed this process. It is also not for conveying contents of the
report, because in most cases the team has not yet completed or finalized the report.

I-2. First four out-brief slides


What follows are examples of the first four initial impressions out-brief presentation slides for
Army accreditation.

a. Figure I-1 shows an example initial impressions out-brief title slide. This example slide
identifies the type of out-brief, the accredited learning institution’s name, and the out-brief date.

Figure I-1. Example initial impressions out-brief title slide

b. Figure I-2 shows an example initial impressions out-brief agenda slide. This example slide
outlines a typical accreditation out-brief agenda.

Figure I-2. Example initial impressions out-brief agenda slide

c. Figure I-3 shows an example initial impressions out-brief purpose slide. This example
slide stresses that the purpose of the briefing is to provide initial impressions only.

Figure I-3. Example initial impressions out-brief purpose slide

d. Figure I-4 shows an example of an initial impressions out-brief “findings legend” slide.
This legend informs the format for all subsequent slides that convey findings by AEAS.

Figure I-4. Example initial impressions out-brief findings legend

I-3. Remaining out-brief slides

a. The next seven or more slides present accreditation findings by AEAS. The information in
these slides uses the findings legend shown in figure I-4. If possible, findings for each AEAS
should fit on a single slide; however, some AEAS have many criteria and sub-criteria and may
require more than one slide to adequately report associated findings. Findings should be listed in
bulleted format, not sentence format. The accreditation team lead is responsible for ensuring these
slides follow the AQAP Director’s guidance.

b. Immediately following the AEAS findings slides, in the order listed, are:

(1) A “closing remarks” slide prompting closing remarks from the learning institution’s
leadership and AQAP leadership.

(2) A “post-accreditation survey” slide with a link to the post-accreditation survey.

(3) A slide with the accreditation team lead’s contact information.

(4) A slide with AQAP leaders’ contact information.

(5) An end slide depicting the Army and AQAP logos.

Appendix J
Example Course Evaluation Timeline and Process
This appendix provides an example course evaluation timeline and process that may serve as a
guide for conducting course evaluations. The timeline and process may be reduced or expanded
depending on the nature of the course being evaluated, the scope of the evaluation, and the
resources available. Additionally, each learning institution’s involved stakeholders may vary,
depending on local policy and procedure.

J-1. Example course evaluation timeline


As shown in table J-1, a course evaluation can be divided into five phases: planning and
coordination, initial analysis, site visit, final draft report, and final report.

Table J-1
Example course evaluation milestone timeline
Timeline                Milestone Activity                              Lead Action Officer
Planning and Coordination
By 4 weeks before       Complete mission analysis and planning          Evaluation team lead
site visit              Notify course and other relevant stakeholders   Evaluation team lead/chief/
                        of upcoming course evaluation                   director
                        Request information from course                 Evaluation team lead
By 3 weeks before       Course provides requested information           Course manager
site visit              Design and develop survey(s)                    Evaluation team lead
                        Send survey invitations                         Evaluation team lead
By 2 weeks before       Gather existing data from systems of record     Evaluation team lead
site visit              Participants complete survey(s)                 Course manager/participants
                        Coordinate site visit activities                Evaluation team lead
                        Publish course evaluation execution schedule    Evaluation team lead
Initial Analysis
By one day before       Analyze data already collected                  Evaluation team lead/team
site visit              Refine data collection questions and            Evaluation team lead/team
                        instruments
Site Visit
Site visit              Conduct data collection activities              Evaluation team lead/team
(up to 1 week long)     Conduct initial impressions out-brief           Evaluation team lead/team
Final Draft Report
By 3 weeks after        Conduct in-depth analysis, synthesis, and       Evaluation team lead/team
out-brief               evaluation
                        Prepare final draft report                      Evaluation team lead/team
                        Obtain QAO director approval                    Evaluation team lead
                        Distribute final draft report                   Evaluation team lead/chief/
                                                                        director
Final Report
By 1 week after         Review final draft report                       Course manager/stakeholders
receiving draft         Provide corrective action plan                  Course manager/stakeholders
By 1 week after         Review and accept corrective action plan        Evaluation team lead/team
receiving corrective    Publish final report                            Evaluation team lead/chief/
actions                                                                 director

J-2. Phase 1: Planning and coordination


Planning and coordination are critical for effective and efficient execution of course evaluations.
This phase normally consists of ten significant milestones.

a. Complete mission analysis and planning. By not later than four weeks before the site visit,
the evaluation team lead completes course evaluation mission analysis and planning. This
milestone normally consists of seven process steps:

(1) Identify target date for course visit. The evaluation team lead reviews ATRRS for
dates the course will be in session and identifies a target date for conducting a course visit.
Identifying a target date as early in the process as possible helps ensure adequate time for
effective analysis and planning, and for pre-visit data collection and analysis.

(2) Determine AEAS criteria applicability. The evaluation team lead determines which
AEAS criteria are applicable to the course. Some considerations for determining applicability
include whether or not evidence for the criterion is observable at the course level, and whether or
not the criterion applies to the particular type of course (for example, IMT, PME, or functional).

(3) Determine required critical information, key evaluation questions, and scope. The
evaluation team lead reviews existing data such as the course’s most-recent internal and external
evaluation results, learning products, and AAR summary reports from the course; and looks for
indications of the course’s strengths and challenges. Considering the results of this review and
the applicable AEAS criteria, the team lead determines the evaluation’s required critical
information, key evaluation questions, and scope. Key evaluation questions are questions that the
evaluation is designed to answer about the required critical information; for example, “How does
the course teach the learning objectives as designed?” The team lead conducts risk analysis for
any applicable AEAS criteria not included in the evaluation’s scope. The team lead obtains chief
and/or QAO director (or equivalent) approval for the proposed scope as needed to ensure
alignment and synchronization with the QAO’s overall mission planning.

(4) Identify required data collection activities. The evaluation team lead determines the
data sources and data collection methods needed to answer the key evaluation questions. Typical
data sources include students, instructors, course leadership, course developers, test control
officers, student and instructor records administrators, and personnel executing HPDT and/or
field training exercises. Typical data sources also include learning products, design and
development audit trail documentation, commander’s training guidance, SOPs, student records,
instructor records, and test control records. Typical data collection methods include surveys,
document reviews, records reviews, test control audits, interviews, focus groups, and direct
observations. Based on the determination of data sources and data collection methods, the team
lead identifies the evaluation’s required data collection activities. Typical data collection
activities include course document review, TDA review (equipment and personnel), instructor
and course manager surveys, test control audit, student and instructor records reviews, student
and instructor focus groups, course manager interview, training observations, equipment and
training aid observations, and HPDT observations.

Note. Surveys, interviews, and focus groups used for course evaluations are conducted in
compliance with AR 25-98.

(5) Determine resource requirements. The evaluation team lead determines the number of
evaluators needed to conduct the evaluation efficiently and effectively, and requests evaluation
team member support through the chief and/or QAO director (or equivalent) as needed.
Depending on the scope, course evaluations typically require one team lead and one or two
assistant evaluators for efficient and effective execution. If any focus groups are planned, at least
one assistant evaluator is needed to take notes while the team lead facilitates. The team lead also
determines and requests any other required resources needed, such as equipment, supplies,
and/or transportation or travel to remote training locations.

(6) Determine assistant evaluator roles and responsibilities. The evaluation team lead
determines assistant evaluator(s) roles and responsibilities. The team lead works with the chief
and/or QAO director (or equivalent) on this step as needed to ensure alignment and
synchronization with evaluator professional development goals.

(7) Select site visit date or date range. The evaluation team lead selects a date or date
range to conduct the site visit, which may be conducted in person and/or using virtual methods.
The team lead coordinates the selected date or date range with the chief and/or QAO director (or
equivalent) to ensure alignment and synchronization with the QAO’s overall mission planning. If
a date range is selected, it should be as short as possible to avoid excessive disruption to the
course, while still allowing adequate time to conduct planned data collection activities. A date
range of one week or less is recommended for a full course evaluation site visit.

b. Notify course and other relevant stakeholders.

(1) By not later than four weeks before the site visit, the evaluation team lead, chief, or
QAO director (depending on local policy and procedure) notifies the course and other relevant
stakeholders of the upcoming evaluation. Other relevant stakeholders may include the training
organization’s leadership and individuals who designed and developed the learning products.

(2) A full course evaluation requires significant coordination of resources for records
reviews, test control audits, interviews, and focus groups, and this process can be disruptive to
course operations. Notifying the course of the evaluation in advance, particularly when the
evaluation’s scope requires coordination, helps with minimizing disruption and maintaining
cooperative and collaborative relationships.

c. Request information from course.

(1) By not later than four weeks before the planned site visit, the evaluation team lead
requests from the course the training and HPDT schedules for all classes in session during the
visit, and the points of contact for the various data collection activities.

(2) The team lead also requests course and unit documentation, to include the most-
recently approved course management plan, commander’s training guidance, relevant SOPs, and
any training waivers. This may also include course development audit trail documentation;
however, the team lead may need to request that documentation from another source if the course
does not maintain it.

d. Course provides requested information. By not later than three weeks before the site visit,
the course manager or other course representative provides all requested information (see para J-
2c) to the evaluation team lead.

e. Gather existing data from TDC and other systems of record. By not later than two weeks
before the site visit, the evaluation team lead or assistant evaluator gathers existing data from
TDC. This data includes the current and in-use Individual Training Plan, Course Administrative
Data, Program of Instruction (POI), and a sample of three or more lesson plans.

f. Coordinate site visit activities. By not later than two weeks before the planned site visit, the
evaluation team lead coordinates evaluation events, to include data collection activities and the
initial impressions out-brief, with team members and course stakeholders. This usually involves
coordinating and scheduling document reviews, test control audits, interviews, and focus groups.
Activities that do not require prior coordination, such as training and other observations, should
not be scheduled in advance. This allows evaluators flexibility in what they choose to observe
during the site visit.

g. Publish course evaluation execution schedule. By not later than two weeks before the site
visit, the evaluation team lead develops a course evaluation execution schedule and distributes it
to all course evaluation stakeholders. The team lead updates and re-publishes the execution
schedule as needed throughout the remainder of the course evaluation process.

J-3. Phase 2: Initial analysis


Initial analysis is critical for ensuring complete and effective data collection efforts,
polyangulation of data, and timely reporting. This phase normally consists of three significant
milestones.

a. Analyze data already collected. By not later than one day before the site visit, the
evaluation team lead and any assistant evaluator(s) complete initial analysis of data already
collected. Initial analysis data may include the course’s most-recent internal and external
evaluation results; course documentation, such as learning products and course design and
development audit trail documentation; AAR summary reports from the course; commander’s
training guidance; SOPs; and any training waivers.

b. Enter results of initial analysis into draft report. By not later than one day before the site
visit, the evaluation team lead and any assistant evaluator(s) evaluate the results of initial
analysis against the AEAS criteria and enter the findings into the draft course evaluation report.
Findings may change after collecting and analyzing additional data during the site visit; however,
to be able to provide stakeholders with a timely report, it is important to begin writing the report
as early in the process as possible, and to continue writing the report throughout the course
evaluation as new data is collected and analyzed.

c. Refine data collection questions and instruments. By not later than one day before the site
visit, the evaluation team lead and any assistant evaluator(s) use the results of initial analysis to
refine or develop the specific questions they need to ask during interviews and focus groups, and
the observations they need to conduct throughout the course visit. If pre-written AEAS criteria-
based questions already exist in the QAO’s course evaluation data collection instruments,
evaluators use the results of initial analysis to refine those instruments to meet the specific needs
of that evaluation.

J-4. Phase 3: Site visit


Site visits can be conducted using virtual methods, in person, or a combination of both. This
phase normally consists of two significant milestones.

a. Conduct data collection activities.

(1) The on-site visit may begin with a brief orientation with key stakeholders to clarify
and confirm expectations.

(2) During the site visit, the evaluation team lead and any assistant evaluators conduct the
planned data collection activities, which may include additional course document review, test
control audit, student and instructor records reviews, student and instructor focus groups, course
manager interview, training observations, and HPDT observations. As much as practicable,
evaluators avoid or minimize any disruptions their presence may cause to training during the on-
site visit.

(3) As evaluators complete data collection activities and conduct initial analysis of the
data, they should add their findings to the draft course evaluation report.

b. Conduct initial impressions out-brief.

(1) The purpose of the initial impressions out-brief is to inform stakeholders of key
findings, both positive and negative, that emerged during course evaluation activities. The
evaluation team lead ensures stakeholders understand that initial findings and impressions may
change after in-depth data analysis, and that the course may address any issues or concerns with
the final draft report when they receive it.

(2) Before the last day of the site visit, the evaluation team lead and any assistant
evaluator(s) conduct high-level analysis, synthesis, and interpretation of the data collected during
the course evaluation, to include the data collected and analyzed before the site visit. Evaluators
evaluate the results of this high-level analysis against the AEAS, and identify key positive and
negative evaluation findings, or initial impressions. Although out-brief formats may vary, QAOs
are encouraged to follow the initial impressions out-brief guidelines in appendix I.

(3) On the last day of the site visit, the evaluation team lead conducts an initial
impressions out-brief with key stakeholders. Stakeholders invited to the out-brief will vary
depending on local policy and procedure; however, at minimum, out-brief attendees should
include course management and a representative from the organization that designed and/or
developed the learning products. The initial impressions out-brief may be conducted formally or
desk-side, depending on the number of stakeholders and local policy and procedure.

(4) See paragraph 7-15 for a general overview of briefing evaluation results.

J-5. Phase 4: Final draft report


By not later than three weeks after the initial impressions out-brief, the evaluation team prepares
and distributes the final draft report. This phase normally consists of four significant milestones.

a. Conduct in-depth analysis, synthesis, and evaluation. The evaluation team lead and any
assistant evaluator(s) conduct in-depth analysis, synthesis, and interpretation of all of the data
collected during the course evaluation, to include the data collected and analyzed before the site
visit. Evaluators evaluate the results of this in-depth analysis against all applicable AEAS criteria
using the AEAS rubrics.

b. Prepare final draft report.

(1) The evaluation team lead and any assistant evaluator(s) rate all applicable AEAS
criteria based on the evaluation results and finalize the draft report. Within the report, evaluators
clearly explain the reason for all ratings below 100, describe the potential impact of any critical
findings, and provide recommendations for any findings that do not have obvious solutions. Very
importantly, evaluators also clearly describe what the course did well.

(2) Evaluators are encouraged to use the AEAS evaluation report tool for course
evaluation reports; however, they may use another tool for the report if that tool provides the
same information and elements as the AEAS evaluation report tool: AEAS rubrics with ratings
and comments, AEAS summaries, impact issues, value-added practices, and an executive
summary.

(3) The evaluation team lead obtains at least one quality control review of the final draft
report, usually in the form of a peer review, before forwarding the report through the chief to the
QAO director (or equivalent) for approval. A quality control review focuses on the report’s
accuracy, completeness, clarity, conciseness, and readability. The review process may involve
other personnel depending on the QAO’s local policies and procedures.

c. Obtain QAO director approval. The chief or evaluation team lead obtains approval of the
final draft report from the QAO director (or equivalent).

d. Distribute final draft report. By not later than three weeks after the initial impressions out-
brief, the evaluation team lead, chief, or QAO director (depending on local policy and procedure)
distributes the final draft report to course and other applicable stakeholders for courtesy review
and corrective action plan.

J-6. Phase 5: Final report


This phase normally consists of four significant milestones.

a. Review draft report. By not later than one week after receiving the final draft report,
course and other applicable stakeholders review the final draft report and submit any requests for
clarification to the evaluation team lead. The team lead provides timely responses to all requests
for clarification.

b. Provide corrective action plan.

(1) By not later than one week after receiving the final draft report, the course manager or
equivalent collaborates with internal and external stakeholders, as needed, to develop a
corrective action plan for all criteria and sub-criteria rated below 100.

(2) The corrective action plan describes what the course will do to correct each finding,
the target date for completing each corrective action, and the action officer responsible for each
corrective action. The course manager or equivalent provides the corrective action plan to the
evaluation team lead.

c. Review and accept corrective action plan. By not later than one week after receiving the
corrective action plan, the evaluation team lead reviews the corrective action plan. Once satisfied
that the corrective action plan is complete and aligned with the findings, the team lead accepts
the corrective action plan and attaches it to the final report.

d. Publish final report. By not later than one week after receiving the corrective action plan,
the evaluation team lead finalizes the course evaluation report and distributes it, with the
corrective action plan, to the institution’s applicable stakeholders. This distribution typically
includes all key stakeholders who were involved in the course evaluation, as well as the training
unit’s leadership and the training development organization’s leadership. Distribution may vary
depending on local policy and procedure.

J-7. Follow-up
The evaluation team lead or other assigned evaluator conducts a follow-up assistance visit with
the course to assess corrective action plan progress. Follow-up should usually occur within about
one year of the initial course evaluation; however, the timeframe may vary depending on the
nature of the initial findings and priority of the corrective actions.

J-8. Impact issues and value-added practices


The QAO processes any impact issues and value-added practices identified during course
evaluations following the processes described in appendix B.

Appendix K
External Survey Process, Reporting, and Questions
These guidelines outline the external survey and reporting process and describe the AQAP
graduate and leader survey questions. Quality assurance surveys are conducted in compliance
with AR 25-98.

K-1. Graduate survey

a. Proponent learning institution QAOs survey their courses’ graduates 6 to 12 months after
graduation using the AQAP-provided automated survey tool. This includes surveying graduates
of their courses taught at their outlying subordinate schools and functionally aligned RC learning
institutions. Depending on the type and nature of a course, some graduates may be surveyed as
early as three months after graduation. An important consideration when determining how long
after graduation to survey graduates is how long graduates typically need at their units to
implement the knowledge and skills they gained at the course.
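
For illustration only, the following minimal Python sketch computes a notional 6-to-12-month
survey window from a graduation date. The window boundaries and the 30-days-per-month
approximation are assumptions drawn from this paragraph, not a prescribed AQAP calculation,
and QAOs would adjust the timing for the type and nature of each course.

    from datetime import date, timedelta

    def survey_window(graduation_date, earliest_months=6, latest_months=12):
        """Return the (open, close) dates of a graduate survey window, approximating a month as 30 days."""
        return (graduation_date + timedelta(days=earliest_months * 30),
                graduation_date + timedelta(days=latest_months * 30))

    # Hypothetical example: a course class that graduated on 15 January 2024.
    window_open, window_close = survey_window(date(2024, 1, 15))
    print(f"Survey graduates between {window_open.isoformat()} and {window_close.isoformat()}")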

b. Proponent QAOs e-mail survey invitations to graduates using only enterprise e-mail
addresses, no personal e-mail addresses. The invitation e-mails should contain a link to the
graduate survey, and a message from the learning institution’s commander, commandant, or
civilian or military equivalent inviting graduates to complete the survey.

c. Graduate surveys include, at minimum, two AQAP survey questions.

(1) AQAP graduate survey question 1. The first question asks graduates if the training
and education they received adequately prepared them to perform their jobs at their units.
Response options for this question use the following Likert scale: 1=Strongly Disagree,
2=Disagree, 3=Neutral, 4=Agree, 5=Strongly Agree. The numbers shown here indicate the
response value that the AQAP survey tool applies to each response when they are listed in the
survey in the order shown here.

(2) AQAP graduate survey question 2. The second question asks graduates if they were
trained and educated on the same equipment (or concepts) they use at their units. The response
options for this question include “Yes” and “No,” in that order.

(3) Example graduate survey items are shown in figure K-1.

1. Please select your level of agreement with this statement: The training/education I
received from the course adequately prepared me to perform my current job at my unit of
assignment.

o Strongly Disagree
o Disagree
o Neutral
o Agree
o Strongly Agree

2. Did the course train/educate you on the same equipment (or concepts if equipment was
not part of your course) that you use at your unit of assignment?

o Yes
o No
Figure K-1. Example graduate survey items

K-2. Leader survey

a. Proponent QAOs survey graduates’ leaders in the operational force every six months using
the AQAP-provided automated survey tool. Primary target survey participants are brigade-level
leaders in the operational force expected to have knowledge of graduates’ job performance.
Brigade-level leaders include brigade commanders, sergeants major, and other brigade-level
leaders. To help reduce the risk of survey fatigue, proponent QAOs should survey a different
sample of leaders in the operational force each cycle rather than the same leaders every six
months. QAOs determine the sampling methods they use.

b. Proponent QAOs may also survey their learning institution’s current PME students, soon
after they arrive for PME training, about their subordinates’ job performance.

c. Proponent QAOs e-mail survey invitations to leaders using only enterprise e-mail
addresses, no personal e-mail addresses. The invitation e-mails should contain a link to the
survey, and a message from the learning institution’s commander, commandant, or civilian or
military equivalent inviting participants to complete the survey. Survey invitations for learning
institutions’ current PME students may be distributed using means other than e-mail, such as via
a direct link that the course manager provides students during in-processing or course
introduction.

d. Leader surveys include, at minimum, one AQAP survey question. That question asks
graduates’ leaders if the training or education that their personnel received adequately prepared
them to perform their jobs at their units. Response options for this question use the following
Likert scale: 1=Strongly Disagree, 2=Disagree, 3=Neutral, 4=Agree, 5=Strongly Agree. The
numbers shown in these response options indicate the response value that the survey tool applies
to each response when they are listed in the survey in the order shown here. An example leader
survey item is shown in figure K-2.

Please select your level of agreement with this statement: Of my unit’s Soldiers arriving or
returning from a course within the last six months, the training and education they received
from the course adequately prepared them to perform their jobs.

o Strongly Disagree
o Disagree
o Neutral
o Agree
o Strongly Agree
Figure K-2. Example leader survey item

K-3. General external survey guidelines

a. Proponent QAOs may not remove the required AQAP questions from their external surveys,
but they may add questions to support their institutions’ external evaluation results. For example,
to solicit qualitative feedback, proponent QAOs may add open-ended response questions after the
AQAP questions, asking survey respondents to explain why they selected the response that they
did. Additions to these external surveys are processed through the AQAP External Survey
Program Manager, who ensures compliance with the approved SCN and AR 25-98.

b. Proponent QAOs are strongly encouraged to use the e-mail campaign feature of the AQAP
survey tool whenever possible for ease in survey administration and response tracking, and for
the ability to send automatic reminder emails to invitees who have not yet completed the survey.

K-4. Summarized external survey data report

a. Quarterly, on the last day of the first month that follows the end of each fiscal year quarter,
proponent QAOs submit a summarized external survey data report, as shown in figure K-3, to
the AQAP External Survey Program Manager.

Figure K-3. Summarized external data report format

b. The report includes total responses by response option for the required survey questions. It
also includes the exact verbiage of the questions asked in the leader and graduate surveys. A
summarized external survey data report tool is available on the AQAP portal:
https://armyeitaas.sharepoint-mil.us/sites/tr-hq-aqap.
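
Because the summarized report is essentially a tally of total responses by response option for
each required question, a QAO could compute those totals from exported survey responses before
entering them into the report tool. The following minimal Python sketch illustrates one way to do
so; the response lists shown are hypothetical examples, not output of the AQAP survey tool.

    from collections import Counter

    # Response options in the order used by the required AQAP survey questions.
    LIKERT_OPTIONS = ["Strongly Disagree", "Disagree", "Neutral", "Agree", "Strongly Agree"]
    YES_NO_OPTIONS = ["Yes", "No"]

    def tally_responses(responses, options):
        """Count responses by option, including a zero entry for any unused option."""
        counts = Counter(responses)
        return {option: counts.get(option, 0) for option in options}

    # Hypothetical exported responses for one fiscal year quarter.
    question1_responses = ["Agree", "Strongly Agree", "Neutral", "Agree", "Disagree"]
    question2_responses = ["Yes", "Yes", "No", "Yes", "Yes"]

    print("Graduate survey question 1:", tally_responses(question1_responses, LIKERT_OPTIONS))
    print("Graduate survey question 2:", tally_responses(question2_responses, YES_NO_OPTIONS))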

c. The AQAP External Survey Program Manager analyzes and synthesizes the survey data
from all learning institutions and prepares an executive summary of the aggregate results for the
AQAP Director, who reports the results quarterly, or as required, to TRADOC senior leaders.

d. Proponent QAOs analyze and report external survey results at least quarterly, normally at
the quarterly quality assurance review, to their institution’s commander, commandant, or civilian
or military equivalent. Proponent QAOs distribute external survey reports to institutional
stakeholders as required by local policy.

Appendix L
Example Self-Study and Self-Assessment Process
This appendix provides an example concurrent self-study and self-assessment process. Self-study
and self-assessment are well suited to being conducted concurrently. Both processes involve the
learning institution’s QAO coordinating and leading a matrixed team of the same action officers.
Both processes involve the institution examining itself and judging its performance and
effectiveness. And both processes produce reports that are ultimately joined together. The process
described here may vary depending on local policy and procedure.

L-1. Conduct self-study and self-assessment

a. The learning institution’s QAO coordinates and leads its institution through the self-
assessment process. Because this is the learning institution’s opportunity to evaluate itself, the
QAO leads a matrixed team of the institution’s action officers from across the institution’s
DOTMLPF-P domains. The QAO provides coaching and support as needed as action officers
evaluate their own organizations and processes and rate themselves against the AEAS criteria
applicable to them. The self-assessment process involves a considerable amount of collaboration
between the matrixed team of action officers and the QAO.

b. The QAO begins the process by meeting with all action officers together to review the
institution’s last self-assessment results and corrective actions; review previously identified
trends, value-added practices, and impact issues; clarify the meaning and intent of certain
criteria; answer any questions about the criteria and the process; and share expectations for the
process going forward. The QAO also requests information needed for the self-study from the
action officers.

c. The QAO holds periodic in-progress reviews with action officers to discuss the progress of
the self-assessment and the self-study, interim self-assessment results, and any issues or concerns
related to either the self-assessment or the self-study. The QAO also works closely with action
officers to finalize the self-assessment’s corrective action plan.

L-2. Report and brief self-study and self-assessment results

a. Action officers report their self-assessment ratings and supporting narratives, in writing, to
the QAO, who consolidates the feedback from all action officers into a single self-assessment
report using the AEAS evaluation report tool and rubrics.

b. Action officers report their corrective actions for all AEAS criteria and sub-criteria rated
below 100 to the QAO. The QAO consolidates corrective actions and develops a corrective
action plan.

c. Action officers provide the QAO with all information requested for the self-study. That
information may include, but is not limited to, new or updated written narratives for various
sections within the report, or various documents associated with or attached to the report. The
QAO consolidates the information and develops the self-study. For self-study guidelines, see
appendix M.

d. The QAO director (or equivalent) briefs the learning institution’s commander and other
senior leaders on the self-study and self-assessment results, and the corrective action plan.

e. The QAO requests command signature on the self-study and self-assessment report. Once
the commander signs both, the QAO attaches the self-assessment report and corrective action
plan to the self-study and uploads the self-study to the institution’s site in the AQAP portal.

f. A QAO that falls under a multi-branch CoE QAO develops and forwards its command-
approved self-study, with its self-assessment report and corrective action plan attached, to the
multi-branch CoE QAO. The multi-branch CoE QAO uploads all of the CoE’s self-studies to the
institutions’ site in the AQAP portal.

g. Self-study due dates vary depending on each learning institution’s accreditation or
assessment cycle. For their year of accreditation or assessment, learning institutions backwards
plan and consider their scheduled accreditation or assessment period to ensure they complete and
upload that year’s new or updated self-study with self-assessment report and corrective action
plan not later than 60 days before the first day of the 120-day Army accreditation (or assessment)
period.

L-3. Impact issues and value-added practices


The QAO processes any impact issues and value-added practices identified during self-study and
self-assessment following the processes described in appendix B.

Appendix M
Self-Study Guidelines
This appendix provides general guidelines for preparing a self-study, to include an outline of
chapters and the content for each chapter.

M-1. General self-study guidelines

a. A self-study should use all chapter and section headings and annotate any that are not
applicable.

b. A self-study’s narratives should be as concise as possible.

c. A self-study should provide links to supporting documents; documents should not be
copied and pasted into, or otherwise attached to, the self-study.

d. A self-study’s supporting-document files and file folders should be named using a logical
and standardized naming convention that allows everyone to easily find the file(s) they need in
relation to the associated AEAS, for example, “3a1_descriptive-file-name_YYYYMMDD.”

e. A self-study may be published as an electronic document, PDF, or website.

M-2. Self-study contents


A self-study contains a cover page, a table of contents, a preface, an institutional summary, seven
chapters, and seven appendices as described in the paragraphs that follow.

a. Cover page. The cover page includes the three elements listed below, without the list
numbering itself. The cover page can be in any clear format.

(1) The words “Self-Study.”

(2) Institution name.

(3) Fiscal year (for example, “FYXX”).

b. Table of contents. A self-study’s table of contents includes two heading levels. The first
heading level includes the preface, institutional summary, chapters, and appendices. The second
heading level includes sections under the preface, institutional summary, and chapters. If using a
word processing application, using styles to format headings within the document is
recommended. This allows for generating and updating the table of contents easily.

c. Preface. A self-study’s preface contains two elements: appraisal of self-study methods, and
the commander’s memorandum.

(1) Appraisal of self-study methods. The individual leading the self-study, normally the
QAO director (or equivalent), provides an appraisal of the methods that the institution used to
conduct the self-study and the major benefits the institution realized from conducting the self-
study. This section should be 500 words or less.

(2) Commander’s memorandum. The preface includes a signed memorandum from the
learning institution’s commander, commandant, or civilian or military equivalent attesting to the
institution’s compliance with the AEAS. The QAO director (or equivalent), or self-study author
with the QAO director’s approval, prepares the memorandum for signature. The memorandum
should be no longer than one page.

d. Institutional summary. The institutional summary contains eight elements:

(1) Point(s) of contact. This element lists the name, title, organization, phone number, and
e-mail address for the self-study’s primary point(s) of contact. This should include, at minimum,
the QAO director (or equivalent) and the individual(s) preparing the report. Learning institutions
may include others on this list as needed. All points of contact listed should be able to answer
any questions about the report.

(2) History. This element provides a concise history of the learning institution and its
programs. It includes information such as the date the institution was established, the dates the
first students were in attendance and graduated, and highlights of how the institution evolved
over time to where it is today. It includes or describes the institution’s charter and mission.

(3) Other accreditations. This element lists any other accreditations the learning
institution has obtained or is seeking at the institution, program, or course level. For each
accrediting agency, it lists the name of the accrediting agency and the date of the last
accreditation (if applicable), and briefly describes the purpose for obtaining or seeking
accreditation from that accrediting agency. If the institution has not obtained and is not seeking
any other accreditations, this element states, “None.”

(4) Credentialing. This element briefly explains what the institution does to offer its
students credentialing opportunities.

(5) Partnerships. This element briefly describes the institution’s partnerships with other
organizations that help the institution further its training and education mission.

(6) Significant highlights. This element briefly describes significant highlights that
emerged from the self-study and self-assessment process. Highlights might include the
institution’s new and innovative initiatives and value-added practices.

(7) Challenges. This element briefly describes the institution’s current challenges and
what it is doing to correct and/or mitigate them.

(8) Summary of AEAS ratings. This element briefly summarizes the results of the
associated self-assessment.

e. Chapter 1: Evidence for AEAS 1, mission, purpose, and functions.

(1) Chapter introduction. This chapter begins with a chapter introduction, which includes
a brief summary of the institution’s performance in AEAS 1. The introduction also includes the
chapter’s purpose, which is to enable the institution’s leaders and Army quality assurance
evaluators to quickly understand and access evidence that the institution is meeting requirements.

(2) Criteria. Each AEAS 1 criterion (for example, 1a and 1b) has its own section within
the chapter. Each section describes the methods and processes the learning institution uses to
meet that criterion and any significant highlights or challenges related to that criterion. Each
section also provides links to all associated documentary evidence. Examples of linked
documentary evidence supporting AEAS 1 criteria are shown in figure M-1.

Figure M-1. Example documentary evidence for standard 1

f. Chapter 2: Evidence for AEAS 2, governance and administration. This chapter follows the
same chapter introduction and criteria guidance as described in paragraph M-2e. Examples of
linked documentary evidence supporting AEAS 2 criteria are shown in figure M-2.

Figure M-2. Example documentary evidence for standard 2

g. Chapter 3: Evidence for AEAS 3, learning programs. This chapter follows the same
chapter introduction and criteria guidance as described in paragraph M-2e. Examples of linked
documentary evidence supporting AEAS 3 criteria are shown in figure M-3.

Figure M-3. Example documentary evidence for standard 3

h. Chapter 4: Evidence for AEAS 4, institutional training and education mission
management. This chapter follows the same chapter introduction and criteria guidance as
described in paragraph M-2e. Examples of linked documentary evidence supporting AEAS 4
criteria are shown in figure M-4.

Figure M-4. Example documentary evidence for standard 4

i. Chapter 5: Evidence for AEAS 5, assessment, evaluation, and effectiveness. This chapter
follows the same chapter introduction and criteria guidance as described in paragraph M-2e.
Examples of linked documentary evidence supporting AEAS 5 criteria are shown in figure M-5.

Figure M-5. Example documentary evidence for standard 5

j. Chapter 6: Evidence for AEAS 6, faculty and staff. This chapter follows the same chapter
introduction and criteria guidance as described in paragraph M-2e. Examples of linked
documentary evidence supporting AEAS 6 criteria are shown in figure M-6.

Figure M-6. Example documentary evidence for standard 6

k. Chapter 7: Evidence for AEAS 7, leader development. This chapter follows the same
chapter introduction and criteria guidance as described in paragraph M-2e. Examples of linked
documentary evidence supporting AEAS 7 criteria are shown in figure M-7.

Figure M-7. Example documentary evidence for standard 7

l. Appendix A: Organizational charts. This appendix includes the institution’s most-recent
organizational charts depicting all levels of leadership, directorates, and sections.

m. Appendix B: TDA and Unit Manning Report. This appendix includes the institution’s
most-recent TDA (personnel and equipment) and unit manning report.

n. Appendix C: List of courses and student load. This appendix includes all the items shown
in figure M-8, with the annual student load annotated next to each course within each document.

Figure M-8. Self-study appendix C contents

o. Appendix D: Self-assessment report with corrective action plan. This appendix includes
the learning institution’s most-recent self-assessment report and corrective action plan.

p. Appendix E: Master evaluation plan. This appendix includes the learning institution’s
most-recent MEP.

q. Appendix F: Map(s) of installation. This appendix includes maps of the learning
institution’s installation. These maps should be designed to help evaluators navigate locations on
the installation that they might need to visit to evaluate the institution.

r. Appendix G: Point of contact list. This appendix includes a list of the learning
institution’s key action officers with overall responsibility for the different areas of the AEAS criteria;
for example, ADDIE, safety, facilities, operational environment, threat management, human
resources, and so on.

Appendix N
Example Proponent Assessment Timeline and Process
This appendix provides an example proponent assessment timeline and process, which may serve
as a guide for conducting proponent assessments. The process may vary depending on the
organization and mission of the higher-level-accredited learning institution or the assessed
institution.

N-1. Example proponent assessment timeline

As shown in table N-1, a proponent assessment can be divided into five phases: planning and
coordination, initial analysis, assessment visit, final report, and corrective action plan.

Table N-1
Example proponent assessment milestone timeline
Timeline*               Milestone Activity                              Lead Action Officer
Planning and Coordination
By 60 days before       Complete mission analysis and planning          Assessment team lead
assessment visit        Provide LON                                     Proponent QAO director
By 30 days before       Provide remaining documentary evidence          Assessed institution QAO
assessment visit
By 14 days before       Gather existing data from systems of record     Assessment team
assessment visit        Coordinate assessment visit events/activities   Assessment team lead
                        Publish assessment execution schedule           Assessment team lead
Initial Analysis
By 1 day before         Complete initial data analysis; begin report;   Assessment team
assessment visit        develop or refine assessment questions
Assessment Visit
Assessment visit        Conduct formal in-brief, virtual or on-site     Assessment team
(up to 2 weeks long)    Conduct data collection activities using        Assessment team
                        virtual methods
                        If on-site visit, conduct visit                 Assessment team
                        Conduct initial impressions out-brief           Assessment team
Final Report
By 21 days after        Conduct in-depth analysis and evaluation        Assessment team
out-brief               Prepare final draft report                      Assessment team
                        Obtain QAO director approval of report          Assessment team lead
                        Distribute final draft report to institution    Proponent QAO director
By 14 days after        Review final draft report and submit            Assessed institution QAO
receiving draft report  memorandum of acceptance or rebuttal
By 7 days after         Distribute final report to institution; upload  Proponent QAO director/
receiving acceptance    with acceptance memorandum to AQAP portal†      assessment team lead
Corrective Action Plan
By 28 days after        Submit corrective action plan to proponent      Assessed institution QAO
receiving draft report  QAO
By 7 days after         Review and accept corrective action plan        Assessment team lead
receiving actions
*All “days” are calendar days.
†Upload final report not later than 60 days before the first day of the higher-level learning
institution’s scheduled 120-day Army accreditation period.

N-2. The most-critical proponent assessment milestone

a. The most-critical milestone for proponent assessment is the proponent assessment report
due date. The assessment team lead backwards-plans all proponent assessment milestones from
this date.

b. If the associated Army accreditation team lead determines that the proponent assessment
will be conducted prior to the associated Army accreditation, the proponent assessment report is
due not later than 60 days before the first day of the higher-level learning institution’s scheduled
120-day Army accreditation period. The example backwards-planning timeline depicted in figure
N-1 below shows that the assessment team lead should begin planning for the assessment at least
179 days, or approximately six months, before the first day of the higher-level accredited
institution’s 120-day Army accreditation period.

Mission Analysis and Planning (estimated 70-day period); Virtual and On-Site Events (estimated
14-day period); Report and Review Period (estimated 35-day period); Submit Proponent
Assessment Report (60 days before day 1); Day 1 of the 120-day Army accreditation period
Figure N-1. Example proponent assessment backwards planning

c. Early planning is critical for effective and efficient execution of proponent assessments and
on-time delivery of the final assessment report. Beginning planning at least six months before the
first day of the higher-level learning institution’s scheduled 120-day Army accreditation period is
recommended.
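
The 179-day lead time follows directly from the estimated periods in figure N-1: 70 days for
mission analysis and planning, plus 14 days for virtual and on-site events, plus 35 days for the
report and review period, backwards-planned from the report due date of 60 days before day 1
(70 + 14 + 35 + 60 = 179). The following minimal Python sketch shows that backwards planning;
the period lengths are the estimates from figure N-1 and would be adjusted to fit each assessment.

    from datetime import date, timedelta

    def backwards_plan(accreditation_day1, planning_days=70, event_days=14,
                       report_days=35, report_lead_days=60):
        """Backwards-plan proponent assessment milestones from the first day of the
        120-day Army accreditation period, using estimated period lengths."""
        report_due = accreditation_day1 - timedelta(days=report_lead_days)
        events_start = report_due - timedelta(days=report_days + event_days)
        planning_start = events_start - timedelta(days=planning_days)
        return {
            "begin mission analysis and planning": planning_start,
            "begin virtual and on-site events": events_start,
            "submit proponent assessment report": report_due,
            "accreditation period day 1": accreditation_day1,
        }

    # Hypothetical example: an accreditation period starting 1 October 2024.
    for milestone, when in backwards_plan(date(2024, 10, 1)).items():
        print(f"{milestone}: {when.isoformat()}")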

N-3. Phase 1: Planning and coordination


Planning and coordination are critical for effective and efficient execution of proponent
assessments. This phase normally consists of six critical milestones, described in paragraphs N-
3a through N-3f.

a. Complete mission analysis and planning. By not later than 60 days before the start of the
assessment visit, the assessment team lead completes assessment mission analysis and planning.
Mission analysis and planning normally consists of 11 process steps:

(1) Begin coordination with assessed institution. The assessment team lead begins initial
coordination with the assessed institution’s QAO. The assessment team lead requests a list of the
institution’s courses in session between approximately 6 and 24 weeks prior to the higher-level
accredited institution’s 120-day Army accreditation period.

(2) Provide list of courses in session. The assessed learning institution QAO provides the
assessment team lead with the institution’s courses in session from the date of request through
the higher-level accredited institution’s 120-day Army accreditation period.

(3) Determine possibility of concurrent assessment and accreditation. If courses will be in
session during the higher-level accredited institution’s 120-day Army accreditation period, the
proponent assessment may possibly be conducted concurrently with the associated accreditation,
upon confirmation from the accreditation team lead (see para N-3a(4)). If no courses will be in
session during the higher-level institution’s 120-day Army accreditation period, the proponent
assessment may not be conducted concurrently with the associated accreditation, and the
proponent assessment report will be due not later than 60 days before the first day of the higher-
level institution’s 120-day Army accreditation period.

(4) Confirm assessment requirements. The assessment team lead confirms with the
accreditation team lead whether or not the proponent assessment will be conducted concurrently
with the associated accreditation and confirms the assessment report due date.

(5) Select assessment visit dates. Working with the assessed learning institution, and the
accreditation team lead if the assessment will be conducted concurrently with the accreditation,
the assessment team lead selects dates for the assessment visit. The assessment visit may be
conducted in person, using virtual methods, or both. The team lead coordinates the selected dates
with the chief and/or QAO director to ensure alignment and synchronization with the QAO’s
overall mission planning. The date range should be as short as possible to avoid excessive
disruption to the learning institution, while still allowing adequate time to conduct planned data
collection activities. A date range of two weeks or less is recommended, with either two weeks
for assessment using virtual methods, or one week for assessment using virtual methods plus one
week for an on-site visit. Identifying dates for the assessment visit as early in the process as
possible helps ensure adequate time for effective analysis and planning, and for pre-visit data
collection and analysis.

(6) Determine AEAS criteria applicability. Working with the assessed learning
institution, and with the accreditation team lead as needed, the assessment team lead analyzes
and determines which AEAS criteria are applicable to the assessed learning institution. For
information about AEAS criteria applicability, see paragraph 4-3.

(7) Determine required critical information, key evaluation questions, and scope. The
assessment team lead reviews the assessed learning institution’s self-study and self-assessment,
last assessment report, and learning products, and looks for indications of the institution’s
strengths and challenges. Considering the results of this review and the applicable AEAS criteria,
the team lead determines the assessment’s required critical information, key evaluation
questions, and scope. The team lead conducts risk analysis for any applicable AEAS criteria not
included in the evaluation’s scope. The team lead obtains QAO director approval for anything
less than full scope.

(8) Identify required data collection activities. The assessment team lead determines the
data sources and data collection methods needed to answer the evaluation questions. Based on
the determination of data sources and data collection methods, the team lead identifies the
evaluation’s required data collection activities. Typical data collection activities include course
document review, test control audit, student and instructor records reviews, student and instructor focus groups, course manager interview, non-supervisor focus groups, command and
staff interviews, training observations, and HPDT observations.

Note. Surveys, interviews, and focus groups used for proponent assessments are conducted in
compliance with AR 25-98.

(9) Determine team member and other resource requirements. The assessment team lead
determines the number of evaluators required to conduct the assessment efficiently and
effectively, and requests assessment team member support through the chief and/or QAO
director as needed. Depending on the scope, assessments typically require one team lead and one
or two additional team members for efficient and effective execution. When conducting focus
groups, at least one evaluator is needed to take notes while another evaluator facilitates. The
team lead also determines and requests any other required resources needed, such as equipment,
supplies, and/or transportation or travel to the training site.

(10) Determine assessment team roles and responsibilities. The assessment team lead
determines assessment team roles and responsibilities. The team lead works with the chief and/or
QAO director on this step as needed to ensure alignment and synchronization with evaluator
professional development goals.

(11) Draft LON and attachments. The assessment team lead prepares a draft LON,
assessment planning guidance, and assessment documentary evidence guidelines, and forwards
those to the QAO director for approval. The team lead may use the Army accreditation planning
guidance as a guide when drafting the proponent assessment planning guidance, and the Army
accreditation evidence guidelines as a guide when drafting the assessment evidence guidelines.
The team lead determines whether the method of delivery of documentary evidence is via e-mail
or upload to a shared site, and provides that information, along with any naming convention
requirements, in the planning guidance document. No matter how this step is accomplished, it is
important to clearly communicate assessment expectations to the assessed learning institution.

b. Provide letter of notification. By not later than 60 days before the assessment visit, the
proponent QAO director provides the assessed learning institution a LON, assessment planning
guidance, and assessment documentary evidence guidelines.

c. Provide remaining documentary evidence. By not later than 30 days before the start of the
assessment visit, the assessed learning institution QAO provides all documentary evidence not
already available in Army systems of record to the assessment team lead in accordance with the
assessment planning guidance.

d. Gather existing data from systems of record. By not later than 14 days before the start of
the assessment visit, the assessment team gathers existing data from TDC and other systems of
record, as appropriate. This data includes the current learning products that the assessed institution should be implementing, as well as electronic instructor records.

e. Coordinate assessment visit events and activities. By not later than 14 days before the start of the assessment visit, the assessment team lead coordinates and schedules assessment visit events and activities, to include the in-brief, data collection activities, and initial impressions out-brief, with the assessed institution's QAO. This usually involves coordinating and scheduling records reviews, test control audits, interviews, and focus groups. Activities that do not require prior coordination, such as training observations, are not scheduled in advance; this gives evaluators flexibility in what they choose to observe during the assessment visit.

f. Publish assessment execution schedule. By not later than 14 days before the start of the
assessment visit, the assessment team lead develops an assessment execution schedule and
distributes it to the assessed learning institution and all assessment team members. The team lead
updates and re-distributes the execution schedule as needed throughout the remainder of the
assessment process.
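
Because each Phase 1 milestone above is keyed to the start of the assessment visit, a team lead could derive a simple suspense list once visit dates are selected. The sketch below is illustrative only; the example visit start date is hypothetical, and the offsets are the not-later-than timelines described in paragraphs N-3b through N-3f.

from datetime import date, timedelta

# Hypothetical first day of the assessment visit (example value only).
visit_start = date(2025, 6, 2)

# Not-later-than offsets, in days before the visit, from paragraphs
# N-3b through N-3f.
milestones = [
    ("Provide letter of notification and planning guidance", 60),
    ("Assessed institution QAO provides remaining documentary evidence", 30),
    ("Gather existing data from systems of record", 14),
    ("Coordinate assessment visit events and activities", 14),
    ("Publish assessment execution schedule", 14),
]

for description, days_before in milestones:
    due = visit_start - timedelta(days=days_before)
    print(f"NLT {due}: {description}")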

N-4. Phase 2: Initial analysis


Initial analysis is critical for ensuring complete and effective data collection efforts,
polyangulation of data, and timely reporting. This phase normally consists of three significant
milestones.

a. Analyze data already collected. By not later than one day before the assessment visit, the
assessment team completes initial analysis of data already collected. Initial analysis data may
include the assessed learning institution’s self-study and self-assessment, recent internal and
external evaluation results, course documentation, such as learning products, the commander’s
training guidance, SOPs, policies, and any training waivers.

b. Enter results of initial analysis into draft report. By not later than one day before the
assessment visit, the team lead evaluates the results of initial analysis against the AEAS criteria
and enters the findings into the draft course evaluation report rubrics. Findings may change after
collecting and analyzing additional data during the assessment visit; however, to be able to
provide the assessed institution with a timely report, it is important to begin writing the report as
early in the process as possible, and to continue writing the report throughout the assessment as
new data is collected and analyzed.

c. Refine data collection questions and instruments. By not later than one day before the
assessment visit, the assessment team lead uses the results of initial analysis to refine or develop
the specific questions they need to ask during interviews and focus groups, and the observations
they need to conduct throughout the assessment. If pre-written AEAS criteria-based questions
already exist in the QAO’s assessment data collection instruments, evaluators use the results of
initial analysis to refine those instruments to meet the specific needs of that assessment.

N-5. Phase 3: Assessment visit


Assessment visits may be conducted using both virtual and in-person methods. This phase
normally consists of four significant milestones.

a. Conduct formal in-brief. On the first day of the assessment visit, the assessment team lead
conducts a formal in-brief, either using virtual methods or on-site, with the assessed institution’s
senior leaders and key staff. The purpose of the in-brief is to provide an overview of Army
accreditation and the AEAS; explain the purpose of assessment and how it informs the higher-level institution's accreditation; provide an outline of the assessment process and timeline, from
in-brief through corrective action plan; and provide the institution the opportunity to express
concerns or ask for clarification, as needed. The assessed learning institution, in turn, briefs the
assessment team on their organizational structure, mission, key staff, and points of contact.

b. Conduct assessment events using virtual methods. During the first week of the assessment
visit, the assessment team conducts data collection activities using virtual methods. These
activities may include interviews, focus groups, instructor and student record reviews, test
control reviews, virtual facility walk-throughs, and training observations, depending on technical
capabilities.

c. Conduct on-site visit.

(1) During the second week of the assessment visit, the assessment team continues
conducting data collection activities, either on-site or using virtual methods, as determined
during the mission analysis phase. As much as practicable, evaluators avoid or minimize any
disruptions their presence may cause to training during any on-site visit.

(2) As evaluators complete data collection activities and conduct initial analysis of the
data, they add their findings to the draft evaluation report rubrics.

d. Conduct initial impressions out-brief.

(1) Before the last day of the site visit, the assessment team conducts high-level analysis,
synthesis, and interpretation of the data collected during the assessment, to include the data
collected and analyzed before the assessment visit. The team evaluates the results of this high-
level analysis against the AEAS, and identifies key positive and negative evaluation findings, or
initial impressions. Although out-brief formats may vary, proponent QAOs are encouraged to
follow the initial impressions out-brief guidelines in appendix I.

(2) On the last day of the assessment visit, the assessment team lead conducts an initial impressions out-brief, either using virtual methods or on-site, with the assessed institution's senior
leaders and key staff. The purpose of the initial impressions out-brief is to inform stakeholders of
key findings, both positive and negative, that emerged during assessment events and activities.
The team lead ensures stakeholders understand that initial findings may change after in-depth
analysis of the data.

(3) See paragraph 7-16 for a general overview of briefing evaluation results.

N-6. Phase 4: Final report


This phase normally consists of six significant milestones.

a. Conduct in-depth analysis, synthesis, interpretation, and evaluation. Following the initial
impressions out-brief, the assessment team conducts in-depth analysis, synthesis, and
interpretation of all of the data collected during the assessment, to include the data collected and analyzed before the assessment visit. The team evaluates the results of this in-depth analysis
against all applicable AEAS criteria using the AEAS rubrics.

b. Prepare final draft report.

(1) By not later than two weeks after the initial impressions out-brief, the assessment
team finalizes the draft report.

(2) Evaluators use the AEAS evaluation report tool to write the assessment report, which
includes findings, conclusions (summaries), recommendations, any impact issues, and any value-
added practices. Within the report, evaluators clearly explain the reasons for all ratings below
100, describe the potential impact of any critical findings, and provide recommendations for any
findings that do not have obvious solutions. Very importantly, evaluators also clearly describe
what the learning institution did well.

(3) The assessment team obtains at least one quality control review of the final draft
report, usually in the form of a peer review, before forwarding the report through the chief to the
QAO director for approval. A quality control review focuses on the report’s accuracy,
completeness, clarity, conciseness, and readability. The review process may involve other
personnel depending on the QAO’s local policies and procedures.

c. Obtain QAO director approval. By not later than three weeks after the initial impressions
out-brief, the chief or assessment team lead obtains approval of the final draft report from the
QAO director or equivalent.

d. Distribute final draft report. By not later than three weeks after the initial impressions out-
brief, the QAO director or equivalent distributes the final draft report to the assessed learning
institution for review and acceptance or rebuttal.

e. Review final draft report. By not later than 14 days after receiving the final draft report, the
assessed learning institution reviews the final draft report and submits any requests for
clarification to the assessment team lead. The team lead provides timely responses to all requests
for clarification. The assessed institution QAO submits either command acceptance or rebuttal to
the proponent QAO director. If the assessed learning institution is an RC learning institution, the
proponent QAO director forwards any rebuttal to the AQAP Director for adjudication.

f. Publish final report. By not later than seven days after receiving the acceptance
memorandum, the assessment team lead finalizes the assessment report, and uploads the report
and acceptance memorandum to the specific location in the AQAP portal that the accreditation
team lead prescribes. The assessment team lead provides the assessed learning institution with a
copy of the final assessment report.


N-7. Phase 5: Corrective action plan


This phase normally consists of two significant milestones.

a. Provide corrective action plan. By not later than 28 days after receiving the draft
assessment report, the assessed learning institution develops a corrective action plan for all
criteria and sub-criteria rated below 100. The corrective action plan describes what the institution
will do to correct each finding, the target date for completing each corrective action, and the
action officer responsible for each corrective action. The learning institution provides the
corrective action plan to the assessment team lead.
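
The required contents of a corrective action plan lend themselves to a simple tracking structure. The sketch below is purely illustrative; the field names and example entry are hypothetical and do not represent a prescribed format.

from dataclasses import dataclass
from datetime import date

@dataclass
class CorrectiveAction:
    # One corrective action plan entry for a criterion or sub-criterion rated below 100.
    criterion: str          # AEAS criterion or sub-criterion identifier
    corrective_action: str  # what the institution will do to correct the finding
    target_date: date       # target date for completing the corrective action
    action_officer: str     # action officer responsible for the corrective action
    complete: bool = False

# Hypothetical example entry (illustrative values only).
plan = [
    CorrectiveAction(
        criterion="Example criterion (hypothetical)",
        corrective_action="Update instructor records to reflect current certifications.",
        target_date=date(2025, 9, 30),
        action_officer="Course manager",
    ),
]

print("Open corrective actions:", sum(1 for item in plan if not item.complete))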

b. Review and accept corrective action plan. By not later than seven days after receiving the
corrective action plan, the assessment team lead reviews the corrective action plan. Once
satisfied that the corrective action plan is complete and aligned with the findings, the team lead
accepts the corrective action plan and attaches it to the proponent QAO’s locally held copy of the
final report. The team lead does not submit the corrective action plan to the higher-level learning institution's accreditation team lead.

N-8. Assessment follow-up


The assessment team lead or other assigned evaluator conducts a follow-up assistance visit with
the assessed learning institution to assess corrective action plan progress. A follow-up visit may
be conducted either in person or using virtual methods, depending on resource availability.
Follow-up should usually occur within about one year of the assessment visit; however, the
timeframe may vary depending on the nature of the initial findings and priority of the corrective
actions.

N-9. Impact issues and value-added practices


The assessment team lead processes any impact issues and value-added practices identified
during proponent assessment following the processes described in appendix B.


Appendix O
Proponent Assessment Travel
This appendix describes the process for requesting travel funding to conduct proponent
assessments and provides guidelines for completing travel authorizations and vouchers in DTS.
Travelers should also refer to the Defense Travel Management Office's website for the most current information on DOD travel: https://www.defensetravel.dod.mil/.

Note. This appendix is specific to proponent assessment travel only. Travelers conducting any
other type of HQ TRADOC QAO-funded travel should contact the Chief, HQ TRADOC QAO
Plans and Operations Division for specific guidance.

O-1. Projected travel submissions

a. By 15 February of each year, proponent QAO directors prepare and submit their annual projected travel submissions for their proponent assessments scheduled during the upcoming fiscal year to the Chief, HQ TRADOC QAO Plans and Operations Division.

b. As shown in figure O-1, a projected travel submission includes all of the resources the proponent requires to conduct each scheduled visit: projected travel date; name and location of the assessed learning institution; purpose of the travel; percentage of time the assessment is conducted using alternate means; number of travelers; number of days; and calculated costs for per diem, transportation, other, and total trip.

Figure O-1. Example proponent assessment travel funding submission
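
The calculated-cost columns in a projected travel submission follow straightforward arithmetic. The sketch below shows one way those line items might be estimated; the rates and trip values are hypothetical placeholders, and actual per diem and transportation rates come from the Defense Travel Management Office.

# Hypothetical trip values (placeholders only); actual rates are
# published by the Defense Travel Management Office.
travelers = 2
days = 5
per_diem_rate = 166.00        # lodging plus meals and incidentals, per traveler per day
transportation_cost = 450.00  # airfare or mileage, per traveler
other_cost = 120.00           # rental car fuel, parking, baggage, and similar, per traveler

per_diem_total = per_diem_rate * travelers * days
transportation_total = transportation_cost * travelers
other_total = other_cost * travelers
total_trip = per_diem_total + transportation_total + other_total

print(f"Per diem:       ${per_diem_total:,.2f}")
print(f"Transportation: ${transportation_total:,.2f}")
print(f"Other:          ${other_total:,.2f}")
print(f"Total trip:     ${total_trip:,.2f}")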


c. Proponent QAOs capture their travel submissions by fiscal year. Multi-branch CoE QAOs
capture all of their proponent assessments in a single spreadsheet, breaking out their proponents
separately.

d. The Chief, Plans and Operations Division validates and consolidates all requirements into a single POM submission to HQDA G-3. All RC-related evaluation travel is obligated using Training Reserve Component Support funding, with funding and approval authority centrally managed at the HQ level to monitor and track obligation rates.

O-2. Defense Travel System authorizations and vouchers

a. Travelers conducting proponent assessment-related travel submit their travel requests in DTS using the travel authorization checklist shown in table O-1.

Table O-1
Defense Travel System travel authorization checklist for proponent assessments
DTS Travel Authorization Checklist
1 Have you provided your social security number via encrypted e-mail to the Chief, HQ
TRADOC QAO Plans and Operations Division to be “cross-org” for travel?
2 Is your business being conducted on an installation? If yes, is the destination on your
orders to the installation and not a local city?
3 Have you captured the correct trip type? For example, “Temporary Duty Travel
(Routine).”
4 Is your trip description complete? For example, “Conduct proponent assessment of 426th
RTI.”
5 Did you book your lodging through DTS? Lodging should be booked through DTS, but if
you block a group of rooms outside DTS, then address the flag and provide the name,
address, and rate for the hotel.
6 Is your lodging within the authorized per diem rate? Lodging is within the authorized per
diem rate unless the approving official approved actuals in advance of travel.
7 Is your rental car reservation for a compact car? Only compact cars are authorized unless
the approving official granted approval in advance and the comments are captured in the
authorization accordingly.
8 Did you include CTO fees, mileage to and from the airport, airport parking, baggage fees,
taxi, hotel taxes, and fuel for rental car as expenses on your authorization?
9 Did you charge all expenses to GOVCC except for mileage to and from the airport?
10 Did you select the right LOA? For travel to proponent assessments, it should be “FY
Title XI: Reserve component accreditations and proponent assessments of Reserve
component institutions.” “FY” will be the current fiscal year.
11 Did you change the routing list to QAORL before digitally signing the document? ALL
authorizations and vouchers route to HQ TRADOC QAO for approval.
12 Did you address all flags and provide justification?
CTO = Commercial Travel Office; GOVCC = government travel charge card; LOA = line of accounting;
QAORL = Quality Assurance Office routing list


b. Travelers conducting proponent assessment-related travel complete their travel vouchers in DTS using the travel voucher checklist shown in table O-2.

Table O-2
Defense Travel System travel voucher checklist for proponent assessments
DTS Travel Voucher Checklist
1 Did you file your voucher within five days of return from travel? Vouchers are filed
within five days of return from travel.
2 Did you upload all required receipts into the voucher? Required receipts (when
applicable) include:
• Hotel (show a zero balance)
• Airline ticket and CTO fee
• Rental car (show a zero balance)
• Fuel for rental car
• Baggage
• Parking
The ONLY things that do not require receipts are meals and mileage to and from the
airport.
3 Are your receipts legible? Receipts need to be legible.
4 Did you update expenses as needed?
5 Did you annotate appropriate information for the approving official? For example, did
you explain all flags, if any?
6 Did you change the routing list to QAORL before digitally signing the voucher?
7 Did you sign the voucher and check that it went forward?
CTO = Commercial Travel Office; QAORL = Quality Assurance Office routing list


Appendix P
Instructor Actions Review Evidence Guidelines
Proponent QAOs conduct IA reviews of their institutions’ new POIs (new POIs only) as part of
the new POI submission process. This is a quality assurance review of the IAs to assure effective
processes; it is not a quality control review of the IAs, the POI, or the POI’s lessons. Quality
assurance evaluators may use this appendix as a guide, along with TP 350-70-14.

P-1. Instructor actions are POI-driven and specific


Quality assurance evaluators look for evidence that developers assigned IAs that are POI-driven,
POI-specific, performed every time the POI is executed, performed in the same manner every
time the POI is executed, not duplicative of tasks/hours that earn instructor-contact-hour (ICH)
credit, appropriate for an instructor to perform, and quantifiable. IAs directly support execution
of the POI’s lesson plans.

a. Examples of valid IAs include classroom setup and breakdown, pre-entry assessment,
remedial instruction and assessment, grading assessments, student counseling, and evaluating
students’ written assignments.

b. Examples of non-valid IAs include instructor certification activities, supervising instructors and others, command-related duties, personnel actions, and training development
activities. These examples represent instructor work that is not performed in the routine
execution of a POI and should not be documented as IAs.

P-2. Not all lessons require instructor actions


Quality assurance evaluators look for evidence that developers did not automatically assign IAs
to every lesson without considering actual requirements for each lesson. Not all POI lessons
require IAs. For example, in the case of multiple lessons conducted during the same day in the
same location, normally (but not always) only the first lesson of the day would require setup, and
normally (but not always) only the last lesson of the day would require breakdown. If lessons
other than the first lesson of the day include setup, and/or if lessons other than the last lesson of
the day include teardown, the developer should clearly and justifiably explain the reason for this
in the IA text field in TDC.

P-3. Instructor action times are not standardized


Quality assurance evaluators look for evidence that developers did not automatically assign the
same IA times to every lesson. IA times should not be standardized. One indicator that
developers may have standardized IA times is the same setup and teardown times for every
lesson. IAs capture the actual and unique work requirements for each lesson.

P-4. Instructor action tasks are not duplicated


Quality assurance evaluators look for evidence that developers did not duplicate ICH tasks
already captured in the POI. For example, when instructors grade student performance during
time allotted for the lesson, such as during hands-on testing, ICH is already being earned for that
time. IAs may not duplicate ICH.


P-5. Instructor actions directly support the program of instruction


Quality assurance evaluators look for evidence that developers only documented IAs related to
execution of the POI. Examples of tasks that may be documented as IAs because they are
scheduled events conducted in direct support of the POI every time it is executed include in- and
out-processing students and graduation-related activities. Examples of tasks that may not be
documented as IAs because they are not associated with POI time, and are included in the POI as
information only, include physical readiness training and fitness testing.

P-6. Instructor actions are not direct support to manpower training events
Quality assurance evaluators look for evidence that developers did not assign IAs to tasks that
“direct support to manpower training event” personnel should perform, such as delivering
supplies, water, and ammunition to training sites. Even if instructors are actually performing
these tasks due to “direct support to manpower training event” personnel shortages, these tasks
are not IAs and may not be documented as such in the POI.

P-7. Targets for instructor action time were not established


Quality assurance evaluators look for evidence that developers did not establish targets for IA
time within the POI. For example, lessons having the same or near-same total IA time, but also
having significantly different training strategies and/or conditions, may be an indicator of
established IA targets.

P-8. Instructor action hours were not manipulated


Quality assurance evaluators look for evidence that developers did not attempt to manipulate IA
hours so that the IRM would produce a pre-determined number of instructors. For example, a
POI having the same or more total IA time as the previous version of that POI, even though the
POI’s academic hours were reduced by 25 percent from the previous version, may be an
indicator of manipulated IA hours. An activity having more IA time allocated than would
normally be expected for that activity, without a clear and justifiable explanation in the IA text
field in TDC, may also be an indicator of manipulated IA hours.
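
As a simple screening aid, an evaluator could tabulate IA times from a POI export and flag the indicators described in paragraphs P-3 and P-8. The sketch below is a minimal illustration; the data structure, field names, and example values are hypothetical and do not reflect an actual TDC export format.

# Hypothetical lesson records; in practice these values would come from
# the POI as documented in TDC.
lessons = [
    {"lesson": "Lesson 1", "setup_min": 15, "teardown_min": 15, "total_ia_min": 45},
    {"lesson": "Lesson 2", "setup_min": 15, "teardown_min": 15, "total_ia_min": 45},
    {"lesson": "Lesson 3", "setup_min": 15, "teardown_min": 15, "total_ia_min": 45},
]

# Indicator from paragraph P-3: identical setup and teardown times on
# every lesson may suggest standardized IA times.
unique_times = {(l["setup_min"], l["teardown_min"]) for l in lessons}
if len(lessons) > 1 and len(unique_times) == 1:
    print("Possible standardized IA times: every lesson uses the same setup and teardown minutes.")

# Indicator from paragraph P-8: total IA time that does not decrease even
# though academic hours were cut may suggest manipulated IA hours.
previous_total_ia_min = 135    # hypothetical prior POI version
previous_academic_hours = 400  # hypothetical prior POI version
current_academic_hours = 300   # reduced academic hours in this example

current_total_ia_min = sum(l["total_ia_min"] for l in lessons)
if current_academic_hours < previous_academic_hours and current_total_ia_min >= previous_total_ia_min:
    print("Possible manipulated IA hours: academic hours decreased but total IA time did not.")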

P-9. Instructor actions have clear and justifiable explanations as needed


Quality assurance evaluators look for evidence that developers use the IA text fields in TDC to
provide clear and justifiable explanations of all IAs that are not necessarily obvious to outside
reviewers. For example, explanations are not necessary for IAs that are obvious, such as 10
minutes for classroom setup; however, if classroom setup is 60 minutes, the developer should
clearly and justifiably explain in the IA text field what the instructor does during that time and
why it takes so long.

P-10. Requirements for multiple instructors are clearly explained and aligned
Quality assurance evaluators look for evidence that developers clearly explain in the ISR text
field in TDC the requirements for multiple instructors wherever multiple instructors are needed.
Evaluators look for alignment of the developer’s explanations with the time, ISR, and total
instructors for the lessons listed in the POI.


Glossary

Section I
Abbreviations and Acronyms

AAR after action review
AATS Army National Guard aviation training site
AC active component
ADDIE analysis, design, development, implementation, and evaluation
AEAS Army Enterprise Accreditation Standards
AQAP Army Quality Assurance Program
AR Army regulation
ARIMS Army Records Information Management System
ARNG Army National Guard
ATRRS Army Training Requirements and Resource System
CG Commanding General
CLP continuous learning points
CoE center of excellence
DA Department of the Army
DA PAM Department of the Army pamphlet
DOD Department of Defense
DOTMLPF-P doctrine, organization, training, materiel, leadership and education,
personnel, facilities, and policy
HPDT high physical demand task
HQ headquarters
HQDA Headquarters, Department of the Army
IA instructor actions
ICH instructor contact hour
IMT initial military training
ISR instructor-to-student ratio
LON letter of notification
MEP master evaluation plan
NCO noncommissioned officer
NCOA Noncommissioned Officer Academy
PDF Portable Document Format
PME professional military education
POI program of instruction
POM program objective memorandum
QAE quality assurance evaluator
QAEC quality assurance evaluator course
QAEDP Quality Assurance Evaluator Development Program
QAO quality assurance office
RA Regular Army
RC reserve component
RRS–A Records Retention Schedule–Army
RTI regional training institute


RTSM regional training site – maintenance
SCN survey control number
SME subject-matter expert
SOP standard operating procedure
TDA table of distribution and allowance
TP TRADOC Pamphlet
TR TRADOC Regulation
TRADOC Training and Doctrine Command
USAR United States Army Reserve
USMA United States Military Academy

Section II
Terms

Accreditation
A process for determining whether an institution meets established standards for function,
structure, and performance.

Accreditation staff assistance visit


A formalized event in which a team or an evaluator representing the accrediting agency helps a learning institution identify its own strengths and weaknesses in relation to the
AEAS and recommends ways for the institution to improve its processes. The purpose is to
assist, coach, counsel, and mentor Army learning institutions on the AEAS and Army
accreditation process. It is not an accreditation or pre-accreditation.

Accredited (center/proponent) learning institution


CoEs and non-CoE learning institutions with proponent responsibility for courses and/or learning
programs. CoE examples include Cyber CoE, Intelligence CoE, and Medical CoE. Examples of
non-CoE proponent learning institutions include the MG Robert M. Joyce School for Family and
MWR (morale, welfare, and recreation), and the National Guard Professional Education Center.

Accredited (non-proponent) learning institution


Army learning institution that is not the proponent for the courses or programs it implements, but
that is accredited and awarded an Army accreditation certificate. Examples include NCOAs,
Army troop schools, USAR training brigades, RTSMs, ARNG RTIs, AATS, and other ARNG
learning institutions requiring accreditation.

AEAS lead
An Army accreditation team member who arbitrates and adjudicates issues and findings from
criterion evaluators; reviews and processes impact issues and value-added practices within
assigned AEAS; coordinates, analyzes, and synthesizes AEAS rubrics into AEAS report; and
writes AEAS summaries and recommendations, impact issues, and value-added practices.

Army accreditation
An evaluative process that assures Army leaders that learning institutions meet and sustain
accepted quality standards, and that their processes are working within established limits. It looks at various aspects of a process, including conformance to policy, regulation, and other guiding
directives; resources, such as personnel and equipment; methods; environment; process controls,
such as SOPs and training; and metrics for tracking process performance. It also assesses whether a
learning institution’s products conform to necessary requirements, such as those described in the
AEAS.

Army accreditation cycle


A three-year cycle in which Army learning institutions are accredited or assessed.

Army accreditation period


A scheduled 120-day period in which all accredited Army learning institutions are evaluated for
accreditation.

Army accreditation team


A matrixed team of evaluators who conduct Army accreditations. The team is composed of an
accreditation team lead, AEAS evaluators, and criterion evaluators. Team members are selected
based on their professional experience and expertise. Depending on the expertise required for
any given accreditation, team members may be selected from various organizations across the
Army.

Army Enterprise Accreditation Standards


The Army’s accepted standards for accreditation. The AEAS establish criteria for institutional
quality and provide the Army the means to assess and improve all Army learning institutions
across active and reserve components, and across the DOTMLPF-P domains.

Army learning institution


An Army organization or activity that generates and sustains trained, ready, and available forces.
Army learning institutions include centers and schools, both AC and RC, that provide IMT,
PME, Civilian Education System, and functional training for Soldiers, Army Civilians, and
contractors. Examples of Army learning institutions include but are not limited to Army centers
of excellence, training brigades, noncommissioned officer academies, troop schools, regional
training institutes, training battalions, regional training sites, and aviation training sites.

Army Quality Assurance Program


An Army program that defines responsibility for accrediting all Army learning institutions across
all Army components, except for the USMA. Through Army accreditation and its related
processes, the AQAP assures Army standards are achieved in the development, education, and
training of Soldiers and DA Civilians while strengthening the Army’s ability to learn, adapt, and
innovate, and its readiness to deploy, fight, and win decisively against any adversary, anytime,
and anywhere. The AQAP comprises five major quality assurance functions: accreditation,
oversight and governance, proponent assessment, internal evaluation, and external evaluation. HQ
TRADOC QAO is the lead agent for the Army.

Assessed (non-proponent) learning institution


An Army learning institution that is not the proponent for the courses or programs it implements,
is not individually accredited, and is not awarded an Army accreditation certificate. A proponent assesses this learning institution through the proponent assessment process, and results become
part of the higher-level accredited institution’s accreditation report. Examples include USAR
training battalions, RTI training battalions, and proponents’ own outlying subordinate schools.

Audit trail
A record or collection of records that provide documentary evidence of what was done to affect a
specific outcome; for example, records that provide evidence of how a course progressed through
the design and development process, or records that provide evidence of the steps an
organization took to mitigate or resolve an impact issue.

Center of excellence
An Army learning institution that has authority over proponents and/or instructional delivery
institutions. Some may also be considered proponents and/or provide instructional delivery.

Commandant
Any individual assigned to an authorized paragraph and line number and designated as
COMMANDANT on the organization’s current TDA. This generally applies to NCOAs;
however, CoE commanders may also be designated as CG COMMANDANT on the
organization’s TDA.

Commander
Any individual assigned to an authorized paragraph and line number and designated as
COMMANDER, CDR, or CG on the organization’s current TDA. This generally applies to
commanders of CoEs, schools, RTIs, brigades, etc.

Competency
The knowledge, skills, and abilities required for successful human performance.

Competency model
A framework that defines the full range of competencies required to be successful in a specific
job or occupation.

Continuous learning
The constant expansion of knowledge and skills needed to perform more effectively and adapt
more readily to ever-changing environments. It involves a wide range of activities that increase
performance capabilities.

Continuous learning points


The points awarded for successful completion of continuous learning activities. They provide a
method of ensuring quality assurance evaluators develop and enhance their professional skills,
remain current in their professions, and remain flexible and adaptable to those ever-changing
environments. They may be awarded for completion of academic courses, training courses,
professional activities, and professional experience.


Corrective action plan


The commander, commandant, or civilian or military equivalent’s plan for resolving
shortcomings and deficiencies for all AEAS criteria and sub-criteria rated below 100 during an
Army accreditation, proponent assessment, and/or self-assessment. It identifies the staff lead
responsible for ensuring compliance with the Army requirement and includes all actions taken
toward resolution.

Course evaluation
A focused evaluation of an individual course as measured against applicable AEAS criteria.

Criterion
A value or standard by which to evaluate a product, process, or behavior. Plural: Criteria.

Criterion evaluator
An Army accreditation team member who evaluates all assigned criteria by collecting and
polyangulating data, identifies and processes any impact issues and value-added practices, and
completes rubrics for all assigned criteria.

Data analysis
A systematic process of transforming raw data into usable information. It involves organizing,
cleaning, analyzing, and interpreting the data. It is used to reveal patterns and trends in the data
and relationships among variables, and to find meaning in data for drawing conclusions and
making informed decisions.

Data collection
A systematic process of gathering evaluation data. Methods include surveys, document reviews,
observations, interviews, and focus groups.

Director
For this publication, a director is any individual assigned to an authorized paragraph and line
number and designated as DIRECTOR or something similar in nature on the organization’s
current TDA. This applies to senior leaders, other than commanders and commandants, who are
responsible for an academic or educational institution.

Document review
A systematic method of collecting data by reviewing and evaluating existing documents. It is
useful for gathering background information about an organization’s history and operations,
answering basic evaluation questions, revealing differences between course or program design
and implementation, formulating interview and focus group questions, and developing
observation forms. It is also useful for polyangulating and confirming whether or not
assumptions are borne out in the documentation.

Evaluation
A process of systematically examining a system or process to determine its value or merit using
standards and evaluative criteria. It involves collecting, analyzing, and interpreting data; gaining insights; and making judgments to determine the degree of the system or process' value or merit,
inform decisions, and improve future performance. It is part of the quality assurance function.

Evaluation questions
The general questions that an evaluation answers. They are aligned with an evaluation’s purpose,
and they guide an evaluation’s design decisions, such as what data to collect and the data
collection and analysis methods to use. They employ polyangulation; for example, a single
evaluation question can be answered using multiple data sources and multiple data collection and
analysis methods. They are high-level questions, and they should not be confused with the
specific survey, interview, and/or focus group questions used to collect evaluation data. The
number of evaluation questions needed depends on the evaluation’s purpose and scope, and on
the resources available to collect and analyze the data.

Executive Summary
An executive summary is a brief overview of a report designed to give the reader a quick
preview of its content.

External evaluation
A quality assurance process that provides Army learning institutions the means to determine if their
training and education courses meet the performance needs of the operational Army. It includes
external surveys and other types of external evaluation.

External survey
Provides Army learning institutions a means to solicit feedback from graduates and their leaders
on the quality of their learning institutions’ courses. This feedback from the operational force
informs institutions of how well their courses prepare Soldiers and Army Civilians to perform
their jobs when they arrive or return to their units.

Focus group
A method of data collection that involves facilitating group discussions to gain a range and depth of understanding of participants' common experiences. Focus groups depend on group interaction and the data that emerges. They can be conducted in person, or through
teleconferencing or videoconferencing.

Impact issue
A situation or circumstance that impedes an Army learning institution’s mission, is beyond the
learning institution’s ability to resolve, has an audit trail documenting how the learning
institution tried to resolve the issue, and causes the learning institution to fail one or more AEAS
criteria.

Individual Training Plan


The Individual Training Plan is a long-range planning document that articulates the proponent’s
career-long learning strategy for a MOS, area of concentration, or separate functional area. (TR
350-70, TP 350-70-14).


Initial impressions out-brief


A process of informing the Army learning institution’s leaders and stakeholders of the evaluation
team’s initial findings and impressions, both positive and negative, that emerged during
accreditation, assessment, or evaluation events and activities. It is normally conducted prior to
in-depth data analysis; therefore, initial findings and impressions may change.

Inspection
A process of closely examining, measuring, or testing a product or service’s characteristics and
comparing results with specific requirements to establish whether the product or service is
correct and in compliance. It usually follows a checklist based on product or service
specifications. It is part of the quality control function.

Instructional delivery institution


An Army learning institution that only provides instructional delivery of, or implements,
proponents’ learning materials; for example, RC learning institutions, NCOAs, RTSMs, AATS,
and troop schools.

Instructor actions
The requirements-producing instructor work categories documented in POIs. They are based on
time and ISR, and they capture instructors’ tasks and work hours when not formally executing
POI lessons with students. (TP 350-70-14)

Internal evaluation
An Army learning institution’s quality assurance review of its own processes and functions. It
provides the means to assure Army leaders that the Army’s training and education products,
programs, and processes are efficient and produce desired results. It also provides learning
institutions the means to improve and sustain high levels of institutional performance across the
DOTMLPF-P domains. It includes course evaluations, non-course evaluations, self-studies, and
self-assessments.

Interpreting results
A process of applying value judgments to analyzed data in accordance with the evaluation
criteria, and of drawing conclusions. Results may suggest recommendations for improvement or
lead to additional questions about the organization or program.

Interview
A method of data collection that involves orally questioning participants through one-on-one
discussion. Interviews can be conducted in person, over the telephone, or through
videoconferencing.

Master evaluation plan


An Army learning institution’s three-year planning document defining the institution’s strategy
for meeting its quality assurance evaluation requirements throughout the three-year Army
accreditation cycle. A master evaluation plan, commonly referred to as a MEP, captures projected plans for conducting internal and external evaluations and proponent assessments, as
applicable.

Non-course evaluation
A focused evaluation of AEAS criteria and sub-criteria not normally observed, or not solely
observed, at the course level. A non-course evaluation includes but is not limited to institution-
level evaluations of various AEAS- and DOTMLPF-P-related processes.

Observation
A systematic method of collecting data about behaviors as they occur in their natural settings. It
is also a method of collecting data when information about the physical setting is needed, such as
determining if a classroom is conducive to learning, or if barracks facilities adequately support
students’ needs. Observations often conducted for quality assurance evaluations include but are
not limited to observing training, training equipment, training facilities, barracks facilities, and
test administration.

Outlier
An implausible response in the data; a data point that is considerably larger or smaller than the
nearest data point.

Policy
A law, regulation, rule, or guideline that describes a desired end state, drives actions and
activities, and provides a framework for processes and procedures. Examples include regulations,
policy memorandums, and commander’s training guidance.

Polyangulation
A systematic process of analyzing and relating multiple sources of data to develop a
comprehensive understanding of a phenomenon. It involves examining data from multiple
sources to verify the data’s validity and reliability, and to evaluate the extent to which the
evidence converges. It recognizes the multiple systems and environments in which data is
collected, the diverse realities and perspectives of those from whom data is collected, and the
varied natures of those collecting and analyzing the data.

Procedure
A specific or standardized way to carry out an activity to consistently achieve desired results. It
provides step-by-step instructions on how an activity should be done. Example procedure
documents are SOPs and instruction manuals. Continuity books also typically contain procedures
describing how to perform certain tasks specific to a duty position.

Process
A set of activities that turn inputs into outputs or results. It is an outline or roadmap of what
needs to be done to get to the end result. Example process documents are flow charts and action
plans.


Project management
A process of organizing, coordinating, and managing people and tasks from the initial stages of a
project to completion. The primary goal is to meet project requirements; however, it also aims to
improve a team’s efficiency and effectiveness.

Proponent
An Army organization or staff element designated by the HQDA, Deputy Chief of Staff, G-
3/5/7, who has primary responsibility for materiel or subject matter expertise in its area of
interest or is charged with the accomplishment of one or more functions. (TP 350-70-14)

Proponent assessment
A quality assurance process in which proponent learning institutions evaluate their outlying
subordinate schools and functionally aligned RC learning institutions against the AEAS. The
resulting proponent assessment report becomes part of the higher-level accredited institution’s
accreditation report.

Proponent assistance
An informal event in which proponent QAOs provide the learning institutions that they assess
with ongoing informal assistance, coaching, counseling, mentoring, and other quality assurance
support throughout the three-year Army accreditation cycle. The purpose is to assist, coach,
counsel, and mentor assessed learning institutions on the AEAS and proponent assessment
process. It is not an assessment or pre-assessment. The type and level of assistance varies
depending on the needs of the assessed learning institution and the resources available.

Qualitative data
Data that are non-numerical and descriptive and can be analyzed using a variety of methods such
as content or narrative analysis. They provide context and meaning and can reveal a part of a
story that quantitative data cannot, such as how or why something has occurred. They either
cannot be converted to numerical data or are considered more valuable in qualitative form.
Examples include interview, focus group, and observation notes; document content; and
responses to open-ended survey questions.

Quality
The ability of a product or service to satisfy the needs of the internal and external stakeholders
who use or otherwise benefit from the product or service.

Quality assurance
A function that provides leaders assurance that an organization is efficiently and effectively meeting its mission requirements and that controls are in place to effect quality performance across the organization.

Quality assurance evaluation rubric


An evaluation tool that provides a common framework for evaluation. It provides performance
criteria and describes standards for different levels of performance of each criterion. It guides
judgment on performance and facilitates efficient analysis, synthesis, and evaluation.


Quality assurance office


An organization or individual (depending on the mission and size of an Army learning
institution) that executes the AQAP and reports directly to and serves as the “eyes and ears” of
their learning institution’s commander, deputy commander, commandant, assistant commandant,
or civilian or military equivalent, as appropriate.

Quality control
The day-to-day actions taken to ensure a program, product, or process meets applicable
specifications and standards. It has three objectives: find defects, correct defects, and validate the
deliverable.

Quantitative data
Data that are numerical and can be statistically analyzed. Examples include scores, quantities,
frequencies, time, and responses to closed-ended survey questions.

Quarterly quality assurance review


Provides an Army learning institution’s senior leaders with summarized progress and results
collated and synthesized from all quality assurance activities conducted during the last quarter,
internal and external evaluations; self-assessments, and Army accreditations or proponent
assessments, as applicable. Includes a summary of the quality assurance trends across the
institution, any new impact issues and value-added practices, and the status of any open
corrective action plan items and impact issues.

Rebuttal
An argument or claim providing evidence that a reported accreditation rating, finding, recommendation, or impact issue is inaccurate or false.

Required critical information
The information needed to answer the evaluation questions.

Root cause analysis
A systematic process of uncovering the underlying causes of problems and identifying and applying appropriate solutions. It can also be used to diagnose successes and discover where something went right.

Self-assessment
An internal evaluation process in which an Army learning institution evaluates its own programs
and processes against approved standards. It provides the means to assure that training and
education products, programs, and processes are efficient and produce desired results. It also
provides the means to identify and correct deficiencies to improve and sustain high levels of
institutional performance across the DOTMLPF-P domains.

Self-study
An internal evaluation process in which an Army learning institution critically examines its form
and substance, programs and processes, and strengths and challenges, then judges its performance and effectiveness relative to its goals. Its primary purpose is to advance an
institution’s understanding of itself.

Standard
An accepted level of quality, attainment, or achievement.

Survey
A process of collecting data, or feedback, from a group of people, and of aggregating, analyzing,
and interpreting that data. A survey always contains a written set of questions, known as a
questionnaire.

Systematic inquiry
The process of methodically and holistically seeking information needed to make decisions,
draw conclusions, and ensure accurate and credible evaluation results. It drives decisions about
high-level evaluation questions and the approaches and methods needed to answer those
questions. It also drives decisions about specific questions, such as interview questions, needed
to explore shortcomings and strengths and help answer the higher-level evaluation questions. It
considers interacting systems of people, processes and/or structures and how they may influence
one another. It aims to both develop theories about facts or situations by using inductive
reasoning, and to test existing theories about facts or situations by using deductive reasoning.

Theme
Common or repeating ideas, concepts, and/or meaningful patterns that emerge when collecting
and analyzing data.

Trend
A positive or negative change or development in a general direction.

Value-added practice
An Army learning institution’s practice that enhances the value of an AEAS criterion, is
effective and applicable across Army learning institutions, can be supported within the limits of
regulatory guidance and resources, and promotes the institution as a learning organization.
