
DOCUMENT RESUME

ED 081 236 EM 011 423


AUTHOR Ward, Marjorie E.; Cartwright, G. Phillip
TITLE Some Contemporary Models for Curriculum
Evaluation.
INSTITUTION Pennsylvania State Univ., University Park.
Computer-Assisted Instruction Lab.
PUB DATE Aug 72
NOTE 26p.; Paper presented at the Conference on Curriculum
Evaluation (Cap-Rouge, Quebec, August 1972)

EDRS PRICE MF-$0.65 HC-$3.29


DESCRIPTORS *Computer Assisted Instruction; Curriculum Design;
Curriculum Development; *Curriculum Evaluation;
*Formative Evaluation; Instructional Design;
Instructional Programs; Literature Reviews; Models;
State of the Art Reviews; *Summative Evaluation
IDENTIFIERS CAI; Consequential Evaluation

ABSTRACT
Two major topics are covered in this paper--the
evaluation of instructional programs in general and the evaluation of
computer-assisted instructional (CAI) course material during initial
preparation. The first half of the paper reviews significant
literature relating to instructional program evaluation and
distinguishes between formative and summative evaluation. It then
describes five models of evaluation, including three which specify
procedures for the initial preparation of instructional programs, a
fourth which represents a model for summative evaluation by an
outsider, and a fifth which depicts several classes of data to be
gathered during a comprehensive evaluation. The other major section
of the paper deals with matters relating to CAI materials. Literature
on the formative and summative evaluation of CAI programs is
reviewed, followed by a discussion of consequential evaluation and an
overview of some evaluative criteria. A model for CAI course
preparation and evaluation is presented and recommendations for the
inclusion of formative evaluation as an integral part of curriculum
development are made. (Author/PB)
SOME CONTEMPORARY MODELS FOR CURRICULUM EVALUATION

Marjorie E. Ward
University of Pittsburgh

G. Phillip Cartwright
The Pennsylvania State University

U.S. DEPARTMENT OF HEALTH, EDUCATION & WELFARE
NATIONAL INSTITUTE OF EDUCATION
THIS DOCUMENT HAS BEEN REPRODUCED EXACTLY AS RECEIVED FROM THE PERSON OR
ORGANIZATION ORIGINATING IT. POINTS OF VIEW OR OPINIONS STATED DO NOT
NECESSARILY REPRESENT OFFICIAL NATIONAL INSTITUTE OF EDUCATION POSITION OR POLICY.

A. Evaluation of Instructional Programs

1. Formative and Summative Evaluation

Authorities have referred to Cronbach's paper entitled "Course Improvement

Through Evaluation," presented in 1963, as a "classic" (Glass, 1969). In this

paper Cronbach defined evaluation as the "...collection and use of information

to make decisions about an educational program" (Cronbach, 1963, p. 672). He

indicated that such information could be used for course improvement, for deci-

sions about individual students, or for administrative regulations. Cronbach

emphasized the importance of evaluation for the purpose of course improvement:

The greatest service evaluation can perform is to identify

aspects of the course where revision is desirable... To be influ-

ential in course improvement, evidence must become available midway

in curriculum development, not in the home stretch, when the

developer is naturally reluctant to tear open a supposedly finished

body of materials and techniques. Evaluation, used to improve the

course while it is still fluid, contributes more to improvement of

education than evaluation used to appraise a product already placed

on the market (Cronbach, 1963, p. 675).

Cronbach stated that the analysis of performance on single test items or

the record of responses to different types of problems could be more informative

than an analysis of total scores. He viewed evaluation as:



...a fundamental part of curriculum development, not an

appendage. Its job is to collect facts the course developer

can and will use to do a better job, and facts from which a

deeper understanding of the educational process will emerge

(Cronbach, 1963, p. 683).

Scriven (1967) proposed using the terms "formative" and "summative" to

distinguish between evaluation to improve an instructional program or curriculum

during its development and evaluation to determine the worth or effectiveness of

an instructional program once it had been completed. He suggested that, in

order to avoid potential clashes between curriculum writers and professional

evaluators,

...formative evaluators should, if at all possible, be sharply

distinguished from the summative evaluators, with whom they may

certainly work in developing an acceptable summative evaluation

schema, but the formative evaluators should ideally exclude them-

selves from the role of judge in the summative evaluation (Scriven,

1967, p. 45).

Lindvall and Cox (1970) point out that, "Once a program has been developed

and is fully functioning, the task of a summative evaluator is to describe just

what that program does or what it is worth" (Lindvall and Cox, 1970, p. 56).

Scriven maintained that in the early stages of any kind of curriculum project

general objectives or goals are formulated. These goals, which should not be

considered absolute commitments, but rather reminders subject to alteration,

might range from motivational and cognitive goals to the goal of producing a

marketable program. Scriven declared that these goals were to be themselves

items for evaluation; performance measured against goals was not to be the only

concern of the evaluator. To him it was "...obvious that if the goals aren't

worth achieving then it is uninteresting how well they are achieved" (Scriven,

1967, p. 52).

Scriven outlined three types of activities which could facilitate both the

evaluation of the goals and the evaluation of performance measured against those

goals. These activities are:

1. Regular reexamination and modification of proposed

general objectives or goals of the project.

2. Construction of a test-question pool, which thus becomes

an "operational version of the goals" (Scriven, 1967, p.

56) and, as such, also requires regular reexamination and

modification in light of any changes in the project goals.

3. External judgments about the consistency of the project

goals, content, and test-question pool.

Scriven saw several refinements of the above activities as crucial to formative

evaluation studies since they could uncover the causes of poor results:

Essentially, we need to know about the success of three

connected matching problems: first, the match between goals

and course content; second, the match between goals and examina-

tion content; third, between course content and examination content.

...Only in this way are we likely to be able to track down the

real source of disappointing results (Scriven, 1967, p. 59).

Stolurow in a paper presented at a Council for Exceptional Children

Special Conference on Instructional Technology commented on the function of

formative evaluation:
It is the formative evaluation process that results in

specific revisions of a program to improve its rhetoric, instruc-

tional effectiveness, and acceptability (Stolurow; 1970, p. 75).

In Handbook on Formative and Summative Evaluation of Student Learning,

Bloom, Hastings, and Madaus defined evaluation as:

...the systematic collection of evidence to determine

whether in fact certain changes are taking place in the learners

as well as to determine the amount or degree of change in indi-

vidual students (Bloom, Hastings, and Madaus, 1971, p. 3).

They distinguished between formative and summative evaluation on the basis of

purpose, time at which evaluation occurs, and "...level of generalization sought

by the items in the examination used to collect data for the evaluation" (Bloom,

Hastings, and Madaus, 1971, p. 61).

We have chosen the term "summative evaluation" to indi-

cate the type of evaluation used at the end of a term, course,

or program for purposes of grading, certification, evaluation

of progress, or research on the effectiveness of a curriculum...

Formative evaluation is for us the use of systematic eval-

uation in the process of curriculum construction, teaching and

learning for the purpose of improving any of these three processes

(Bloom, Hastings, and Madaus, 1971, p. 117).

In the Preface to their book the authors explained that their interest is the

improvement of student learning, as the title of their book would indicate.

Airasian also focused on formative evaluation for the improvement of

student learning. He stated that formative evaluation "...seeks to identify

learning weaknesses prior to the completion of instruction on a course segment"


(Airasian, 1971, p. 79). He summarized differences between formative and

summative evaluation by indicating the verb tense used with each term:

"... Formative evaluation provides data about how students are changing.

Summative evaluation is concerned with how students have changed..." (Airasian,

1971, p. 78).

2. Models for Evaluation of Instructional Programs

In the following discussion five models for the evaluation of instructional

programs will be described. The first three specify procedures for the initial

preparation of instructional programs. The fourth represents a model for

summative evaluation conducted by an outside evaluator. The fifth model depicts

several classes of data to be gathered when conducting a comprehensive evaluation.

Model I. Stake (1967) indicated that two main types of information are necessary

for the evaluation of an educational program. The first type is the intents and

outcomes, and the second is personal judgments as to the quality and appropriate-

ness of the intents and outcomes.

In another article Stake (1967) explained what his proposed evaluation program

would involve. Descriptions of what intended antecedents or entry behaviors

were expected, what intended transactions or instructional processes were

planned, and what outcomes were anticipated would be evaluated for their logical

relationship to each other. Then the descriptions of what actually happened would

be examined to determine if what was intended actually occurred (see Figure 1).

Finally, judgments of the value of the instructional program would be based on

absolute standards reflected by the evaluator's personal judgment and on relative

standards reflected by comparison of the particular program to alternative programs

(see Figure 2). Program designers would prepare a rationale stating the basic

purpose and philosophical background of their program which would assist the

evaluators.
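
Stake's description matrices lend themselves to a simple data-structure reading: intents and observations recorded for antecedents, transactions, and outcomes, with congruence judged cell by cell. The sketch below is a minimal illustration of that reading, not anything taken from Stake's paper; the class, field names, and example entries are assumptions.

```python
# A minimal, hypothetical sketch of Stake's descriptive matrix: intended vs.
# observed antecedents, transactions, and outcomes, with a simple congruence check.
from dataclasses import dataclass, field

CELLS = ("antecedents", "transactions", "outcomes")

@dataclass
class DescriptiveMatrix:
    intents: dict = field(default_factory=dict)       # e.g. {"outcomes": [...]}
    observations: dict = field(default_factory=dict)

    def congruence(self):
        """For each cell, report which intended items were actually observed."""
        report = {}
        for cell in CELLS:
            intended = set(self.intents.get(cell, []))
            observed = set(self.observations.get(cell, []))
            report[cell] = {
                "realized": sorted(intended & observed),
                "not_realized": sorted(intended - observed),
                "unintended": sorted(observed - intended),
            }
        return report

# Usage: record intents and observations, then inspect congruence per cell.
m = DescriptiveMatrix(
    intents={"outcomes": ["states objectives", "passes unit test"]},
    observations={"outcomes": ["passes unit test", "asks further questions"]},
)
print(m.congruence()["outcomes"])
```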

Stake posed five questions which he felt should be answered prior to the

initiation of evaluation procedures:

"1. Is this evaluation primarily descriptive, primarily judgmental,

or both descriptive and judgmental?

2. Is this evaluation to emphasize the antecedent conditions, the

transactions or the outcomes alone, or a combination of these,

or their functional contingencies?

3. Is this evaluation to indicate the congruence between what is

intended and what occurs?

4. Is this evaluation to be undertaken within a single program or

as a comparison between two or more curriculum programs?

5. Is this evaluation intended more to further the development of

curricula or to help choose among available curricula?" (Stake,

1967, p. 539).

Stake here did not report the sequence in which the steps of his process

would be followed, nor did he illustrate his process. No reports' of projects

in which his evaluation procedures had been used were located in the literature.

Borich (1971) has suggested an expansion of the Stake model to increase the

information yield about new educational products.

Model II. Briggs (1970) in his monograph entitled Handbook of Procedures for

the Development of Instructional Systems presented a model for the preparation

of new instructional course material. His model, which encompasses course

design, development, and evaluation, provides for the deliberate selection

or creation of instructional materials on the basis of both learner character-

istics and the nature of the competencies which the course is supposed to

develop, as well as on the basis of the characteristics of the material



alternatives (see Figure 3). The monograph is devoted to the design phase

of Briggs' model. Briggs stated that formative evaluation procedures would

start during the development and evaluation phases which he discussed briefly.

He listed suggestions for formative evaluation which could be followed sub-

sequent to what he called "formative design" steps taken during the develop-

ment of first-draft materials in Steps 1-6. Briggs defined formative design

as "...the use of performance tests (empirical data) for making the necessary

decisions long before first-draft materials are ready for try-out" (Briggs,

1970, p. 173). In his critique of his model, written after he and his graduate

students had examined twenty other models for instruction drawn from military,

industrial, educational, and governmental settings, Briggs observed several

limitations in his model:

The model is somewhat limited from the point of view of

planning the integration of materials, space, teachers, and

learners into an administrative and management system for the

operation of the learning environment...Whereas the model may

be inadequate for skills of inquiry needed for advanced types

of problem solving, it is clearly useful as a guide for planning

instruction at many of the less advanced levels (Briggs, 1970, p.

185).

Model III. Baker and Schutz declared, "Most instruction is dispensed, not

developed" (Baker and Schutz, 1971, p. xv). They characterized instructional

development as "...essentially a cyclical process, ...a team effort, and ...

user-oriented" (Baker and Schutz, 1971, pp. xv-xvi). They viewed an adequate

instructional development program as one giving consideration to five program

systems: Instructional, Training, Installation, Accountability, and Modification.



The Instructional system in the opinion of Baker and Schutz is the key system

from which specifications for the other four systems are derived. All systems

share common characteristics and are closely interdependent, although each system

has a distinct function within the total development program.

Baker and Schutz listed seven components of their instructional development

cycle which cuts across all five program systems and system characteristics.

These components are:

1. Formation

2. Instructional Specification

3. Item Tryout

4. Product Development

5. Product Tryout

6. Product Revision

7. Operations Analysis (Baker and Schutz, 1971, p. 131).

For each of the seven components in the development cycle Popham and Baker

(1971) specified general rules (see Appendix A). In addition, Popham described

principles demonstrated to be effective in following the rules for activities

within each component of the development cycle. These principles are:

"1. Provide relevant practice for the learner.

2. Provide knowledge of results.

3. Avoid the inclusion of irrelevancies.

4. Make the material interesting" (Popham, 1971, p. 171).

To produce the interest required in the last principle listed above, Popham

urged the deliberate use of variety, humor, game-type situations, suspense,

and format variations.

Model IV. Glass (1972) applied a prototype evaluation format to the appraisal

of an educational product already on the market, an instructional 100-foot



cassette tape recording of a presentation entitled "Evaluation Skills" given by

Dr. Michael Scriven. The model covered the following items:

1. Product description

2. Goals evaluation

3. Clarification of point of entry of the evaluator

a. Irreversible decisions

b. Reversible decisions (Enter the evaluator.)

4. Trade-Offs

5. Comparative cost analysis

6. Intrinsic (secondary) evaluation

a. Technical quality

b. Content evaluation

c. Utilization of uniqueness of medium

d. Survey of availability

7. Outcome (primary) evaluation

8. Summative judgments and recommendations

9. Circumstances modifying the summative judgments (scope and value claims)

Glass's prototype model was prepared for the outside evaluator to follow

in appraising a finished instructional product.

Model V. The CIPP model was developed by Stufflebeam and his associates at the

Ohio State University. It might be regarded as a heuristic model for generating

data classes. The CIPP Model has four components: Context, Inputs, Process,

and Product (Stufflebeam, 1970).

Those four components should help provide relevant data to decision-makers.

First: Context - How will CAI fit into the overall plan of operation and

the goals of the organization? How will present personnel react? What will

their roles be? Will there be union problems? What about scheduling? Space?

Second: Input - What do the students bring to the learning situation? What

are the desired entry behaviors? What prerequisite knowledge or skills are required?

Third: Process - What is the quality of interaction between student and

system? How are individual differences accounted for? How well does the system

match individual students with different instructional strategies? What are the

testing procedures?

Finally: Product - Does the system work? Do all students meet all objectives?

Are all relevant objectives covered? Are irrelevant objectives included? How

much time is required? Are the students well prepared for whatever follows the

program? How is student behavior changed? Is behavior changed in this context

only? How long is the behavior maintained? Do students like the program? How

about other personnel?

There is an extensive literature related to this model. Readers are

referred to Dr. Don Stufflebeam, at the Evaluation Center, Ohio State University,

for more information.

3. Summary

Authorities have distinguished between formative and summative evaluation

and have developed models for authors of instructional programs to follow. Stake's

plan for evaluation provides a general outline for instructional development

projects which make evaluation an integral part of the project. Briggs' model

places emphasis on the selection of available or the design of new materials in

order for students to reach instructional objectives. Glass reports the results

of his having used a model appropriate for the evaluation of finished products.

Baker and Schutz outline a practical program for the newcomer in program develop-

ment to follow.

Factors influencing the choice of a particular model would seem to include

the purpose for and the scope of the evaluation, the point at which the evaluation

is to be initiated, and the person to whom the task of evaluation is assigned.



B. Evaluation of Computer-Assisted Instruction


Course Material During Initial Preparation

Rogers (1968) several years ago reviewed problems in CAI and observed that

lack of quality CAI course material constituted a major problem. He called

attention to the need for evaluated course materials.

Cartwright has identified recent trends in curriculum evaluation: evalua-

tion is becoming acceptable and broader in base; as the contribution that

formative evaluation can make to curriculum development receives greater

recognition, there is a corresponding decrease in emphasis on summative evalua-

tion; and in spite of this recognition, "...the large majority of CAI publications

and papers that have become available in the last two years still are reporting

summative evaluation activities..." (Cartwright, 1971, p. 2).

1. Formative and Summative Evaluation

Cartwright and Mitzel (1971) described both the formative and summative

procedures they followed during the preparation of a three-credit CAI course

designed for regular classroom teachers primarily in rural areas entitled

"Early Identification of Handicapped Children." During the formative evalua-

tion procedures, which covered approximately six months, fifteen students took

the course while a proctor observed and recorded any student comments and

program bugs. Technical problems went to the programmer and content problems

were given to the author who made necessary changes. Once all revisions had

been organized, the course was revised and a second pilot group of fifteen

students took the course unattended by a proctor. In addition, two graduate

students in special education completed the course and submitted their evalua-

tion reports. Then, 115 inservice teachers completed the course for college

credit as a part of Penn State's Continuing Education program. Extensive



revisions were made as a result of the analyses of the responses, requests

for assistance, and response latencies collected from these students. Finally,

300 more students completed the course and additional revisions were made as a

result of data collected on those students. All told, over 333,000 student

responses were analyzed during the formative evaluation.
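
The kind of data Cartwright and Mitzel collected (responses, requests for assistance, response latencies) is typically reduced to per-item summaries before revision decisions are made. The sketch below is a minimal illustration of such a summary, assuming a hypothetical log layout and cutoff values; it is not the Penn State CAI Laboratory's actual tooling.

```python
# A minimal sketch of a formative-evaluation response summary: per-item error
# rates, help-request counts, and mean latencies flag frames that may need revision.
# The record layout, field names, and cutoffs are assumptions for illustration.
import csv
from collections import defaultdict

def summarize(rows, error_rate_cutoff=0.30, latency_cutoff=30.0):
    """rows: iterable of dicts with item_id, correct ('0'/'1'), help_request ('0'/'1'), latency_sec."""
    stats = defaultdict(lambda: {"n": 0, "errors": 0, "helps": 0, "latency": 0.0})
    for row in rows:
        s = stats[row["item_id"]]
        s["n"] += 1
        s["errors"] += row["correct"] == "0"
        s["helps"] += row["help_request"] == "1"
        s["latency"] += float(row["latency_sec"])
    flagged = []
    for item, s in stats.items():
        err, lat = s["errors"] / s["n"], s["latency"] / s["n"]
        if err > error_rate_cutoff or lat > latency_cutoff:
            flagged.append((item, round(err, 2), round(lat, 1), s["helps"]))
    return sorted(flagged, key=lambda t: t[1], reverse=True)

# Usage with a log file exported from the CAI system (layout assumed):
#   with open("responses.csv", newline="") as f:
#       print(summarize(csv.DictReader(f)))
demo = [
    {"item_id": "frame_12", "correct": "0", "help_request": "1", "latency_sec": "41.0"},
    {"item_id": "frame_12", "correct": "1", "help_request": "0", "latency_sec": "12.5"},
    {"item_id": "frame_07", "correct": "1", "help_request": "0", "latency_sec": "9.0"},
]
print(summarize(demo))   # frame_12 is flagged: 50% errors, mean latency 26.8 s
```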

To conduct a summative evaluation of the course, on-campus students who

registered for "Introduction to the Education of Exceptional Children" were

randomly assigned to conventional instruction (CI) and to CAI. Objectives for

both courses were the same; in fact, the teacher of the CI class had been one of

the CAI course authors. Using time to complete the course and score on the

75-item final exam as variables, the authors reported that analyses of their

data indicated the CAI students (n=27) scored significantly higher than CI

students (n=87) on the final exam and completed the course in twelve hours

less time than the CI students.
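
The comparison reported above rests on two outcome measures, final exam score and time to completion. The paper does not state which statistical test was used; the sketch below illustrates one conventional choice, a two-sample (Welch) t-test, with placeholder values rather than the original data.

```python
# A hedged illustration of the kind of CAI-vs-CI comparison reported above.
# The test choice is a stand-in and the score lists are placeholders, not the
# original data (the study had n=27 CAI and n=87 CI students).
from scipy import stats

cai_scores = [68, 71, 65, 70, 73, 66, 69]   # placeholder final-exam scores (75-item exam)
ci_scores  = [60, 64, 58, 62, 61, 63, 59]   # placeholder scores for the CI group

t, p = stats.ttest_ind(cai_scores, ci_scores, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")

# The same comparison could be run on hours-to-completion to examine the
# reported twelve-hour difference in course time.
```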

Confer (1970) reported another summative evaluation of a CAI course

designed to teach general math. Students, all repeaters in general math, were

randomly assigned to regular class instruction and to CAI instruction during

a summer school session. Performance at the end of instruction in computation

and problem-solving was measured with the Stanford Achievement Test (SAT).

Analysis of covariance indicated no significant differences between the two

groups on SAT scores. Confer concluded that his results neither confirmed

nor rejected CAI as a method of instruction. Among his recommendations was

the need for an analysis of all students' responses to help determine necessary

changes in the CAI general math course.
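
Confer's analysis of covariance adjusts post-test performance for a covariate before comparing groups. The sketch below illustrates such an analysis with statsmodels; the data frame, column names, and pretest covariate are assumptions for illustration, not Confer's data.

```python
# A minimal analysis-of-covariance sketch: post-test score by group (CAI vs.
# regular class), adjusting for a pretest covariate. Values are invented.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group":    ["CAI", "CAI", "CAI", "CI", "CI", "CI"],
    "pretest":  [22, 25, 19, 23, 24, 20],
    "posttest": [30, 33, 27, 29, 31, 26],
})

model = smf.ols("posttest ~ C(group) + pretest", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F test for the group effect, adjusted for pretest
```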

In a speech at the Association for the Development of Instructional Systems

in 1971, Cartwright stated:

It is unlikely that summative evaluation per se will improve



the quality of instruction. Formative evaluation, however, is

a model that can be used to improve the quality of instruction.

...It seems to me that criterion-referenced instruction as a goal

and formative evaluation as a method is the way to go at this

point in time in the development of CAI (Cartwright, 1971, p. 9).

2. Consequential Evaluation

Glass (1969) has described consequential evaluation as follows: "Con-

sequential evaluation is evaluation of the affects (sic) of the program...

(the consequences of the program)" (Glass, 1969, p. 5). Formative evaluation

deals with data collection for the purposes of improving the course and

summative evaluation is concerned with the "comparative worth or effectiveness

of competing programs" (Glass, 1969, p. 5). Consequential evaluation is

concerned with a more slippery criterion -- long-range performance or behavior

of the learners who have taken the CAI program. In the case of CAI programs in

teacher education, we are concerned with the performance of teachers in the

classroom and, ultimately, the behavior of the children in those classrooms.

Good consequential evaluation studies are quite expensive to carry out and

quite difficult to manage.

3. Criteria for Evaluation of CAI

Seltzer has written:

What the computer can and cannot do is a matter of research

and fact. What the computer should and should not do in instruction is

based on value judgments...(Seltzer, 1971, p. 373).

Seltzer suggested that, in order to be in a position to make value judgments,

criterion statements should be drawn up for use in evaluating the selection

of the computer to assist in any particular instructional process. The



criterion statements Seltzer proposed are:

"1. If the computer poses a unique solution to an important

problem in the instructional process, then it should be

used regardless of the cost involved.

2. If the computer is more efficient or effective and the

cost of its use to instruct is minimal, then it should

be used. And conversely,

3. if the cost of development and use of the computer in

instruction is relatively high with the relative efficiency

or effectiveness only marginal, then the computer should

not be used in the instructional process" (Seltzer, 1971,

p. 375).

These criterion statements look at CAI cost, effectiveness, and efficiency

in comparison to alternative means of instruction. They could logically be

considered during the initial design of a proposed CAI application to instruc-

tion before much instructional material had been developed.
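
Seltzer's three criterion statements amount to a qualitative decision rule about when the computer is warranted for a given instructional task. The sketch below expresses that rule as a small function; the numeric thresholds are invented for illustration, since Seltzer states the criteria only qualitatively.

```python
# Seltzer's three criterion statements read as a rough decision rule. The
# cost/effectiveness thresholds below are assumptions, not Seltzer's figures.

def cai_recommended(unique_solution: bool,
                    relative_effectiveness: float,
                    relative_cost: float) -> bool:
    """relative_effectiveness > 1 means CAI outperforms the alternative;
    relative_cost > 1 means CAI costs more than the alternative."""
    if unique_solution:                                           # criterion 1: unique solution to an important problem
        return True
    if relative_effectiveness > 1.0 and relative_cost <= 1.1:     # criterion 2: better, and roughly as cheap
        return True
    if relative_cost > 1.5 and relative_effectiveness < 1.1:      # criterion 3: much costlier, only marginally better
        return False
    return False                                                  # otherwise, no clear case for CAI

print(cai_recommended(False, relative_effectiveness=1.3, relative_cost=1.0))  # True
```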

4. Model for CAI Course Preparation and Evaluation

Bunderson constructed a prescriptive model for the design of CAI course

material. He explained the circumstances which prompted his effort:

The instructional design model described in this chapter

was originated to provide management and quality control for

curriculum development, to provide a bridge between the curriculum

development and basic research activities of the laboratory, and

to serve as a focus for teaching students and others how to design

quality CAI programs. Its development was influenced by the author's

attempts to adjust to a joint appointment between educational



psychology and computer science and to communicate with

staff members and students from both fields (Bunderson,

1970, p. 46).

Bunderson discussed the activities to be performed by the instructional

designer, their approximate sequence, and the product of each activity. The

design activities in the sequence Bunderson outlined are:

1. Intent and justification

a. Write societal needs.

b. Write institutional needs.

c. Write program goals.

1. Describe job requirements.

2. Describe student population.

3. Describe institutional constraints.

d. Write justification for CAI.

2. Instructional design: analysis

a. Derive operational requirements from goals.

1. Derive terminal objectives.

2. Set entering performance standards.

3. Consider effect of constraints on program design.

b. Behavioral analysis

1. Obtain intermediate objectives through analysis

of terminal objectives.

2. Construct learning hierarchy.

c. Analysis of learner traits

3. Instructional design: synthesis

a. Specify interface.

1. Display and response devices.

2. Representation.

b. Construct individualizing flow chart (see the gating sketch following this list).

1. Hierarchy-based gating mechanisms.

2. Trait-by-treatment branches.

3. Continuously adaptive mechanisms.

c. Write working draft.

1. Construct curriculum-embedded tests controlling

major flow.

2. Write steps and describe format of steps.

4. Produce program materials.

a. Code from author's draft.

b. Produce media.

c. Debug code and proof media.

5. Evaluate and revise.

a. Editorial evaluation.

b. Internal empirical evaluation.

c. External empirical evaluation.

1. Validation testing.

2. Longitudinal validation.

6. Use of feedback.

Return to any previous step as indicated by evaluation,

revise, and recycle.
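
Step 3.b above calls for hierarchy-based gating mechanisms, and step 3.c.1 for curriculum-embedded tests controlling major flow. The sketch below is a minimal illustration of how such a gate might route a student, assuming a toy hierarchy and an invented 80 percent mastery threshold; Bunderson specifies neither.

```python
# A minimal sketch of hierarchy-based gating: a curriculum-embedded test gates
# progress, routing the student forward in the learning hierarchy on mastery or
# back to remedial material otherwise. Segment names and threshold are assumptions.

LEARNING_HIERARCHY = {
    "fractions": {"prerequisites": [], "remedial": "fractions_review"},
    "decimals":  {"prerequisites": ["fractions"], "remedial": "decimals_review"},
    "percents":  {"prerequisites": ["decimals"], "remedial": "percents_review"},
}

def next_segment(current: str, embedded_test_score: float, mastered: set) -> str:
    """Gate on the curriculum-embedded test: advance on mastery, branch back otherwise."""
    if embedded_test_score >= 0.80:
        mastered.add(current)
        remaining = [s for s, spec in LEARNING_HIERARCHY.items()
                     if s not in mastered and set(spec["prerequisites"]) <= mastered]
        return remaining[0] if remaining else "end_of_course"
    return LEARNING_HIERARCHY[current]["remedial"]

mastered = set()
print(next_segment("fractions", 0.90, mastered))   # -> "decimals"
print(next_segment("decimals", 0.60, mastered))    # -> "decimals_review"
```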

In his discussion of parts of his model, Bunderson observed that the con-

struction of a learning hierarchy (see above, 2.b.2.) seems ...



...readily applicable to any cumulative subject matter

such as mathematics, much of science, and even music. It

seems less applicable to highly verbal areas (Bunderson,

1970, p. 56).

5. Summary

A survey of the literature related to both evaluation and CAI reveals that

models for formative evaluation are available for use in developing course

materials, that authorities urge formative evaluation be incorporated into

initial CAI course development projects, and that to date formative evaluation

procedures have not been reported in many completed projects. In the one

model designed specifically for CAI course preparation, the author observed that

some of the activities he outlined were more suitable for subject matter with

inherent structure rather than for highly verbal subject matter on which

several structures might be imposed.

Authorities continue to stress the need for formative evaluation during

initial CAI course preparation for purposes of course improvement. They see

little information in summative evaluation results that can help authors locate

course weaknesses or errors.

The application by a course author of a formative evaluation model to the

initial preparation of a CAI course would seem strongly indicated.


APPENDIX A

A REVIEW OF PRODUCT DEVELOPMENT RULES


(Baker and Schutz, 1971, pp. 167-68)

FORMULATION

F:1. The extensiveness of a proposed product's justification should be

commensurate with the importance of the product.

F:2. Excessive time should not be spent in formulation.

F:3. In justifying the development of the new product, make certain there

are no competing products of high quality.

INSTRUCTIONAL SPECIFICATIONS

IS:1. All instructional objectives should be stated in terms of the

learner's post-instructional behavior.

IS:2. En-route and entry behaviors should also be described behaviorally

in the instructional specifications.

IS:3. Criteria for judging the adequacy of the learner's response should

be specified.

IS:4. A clearly specified method for determining learner affect toward the

completed instructional product should be specified.

ITEM TRYOUT

IT:1. The criterion test must be completely prepared prior to the deve-

lopment of the instructional product.

IT:2. Measures of the entry and en-route behaviors should be constructed

during the item tryout stage.

IT:3. Prototype items should not deviate from the behaviors described in

the instructional specifications.

IT:4. Prototype items should be tried out with a small number of learners

first, later with a larger number of learners.


PRODUCT DEVELOPMENT

PD:1. Supply the learner with appropriate practice during an instructional

sequence.

PD:2. The product should provide the learner with the opportunity to obtain

knowledge of results.

PD:3. The instructional product should contain provisions for promoting the

learner's interest in the product.

PD:4. Avoid the development of an inflexible strategy in approaching

product development tasks.

PD:5. If teachers are involved in the instructional process, make their

participation as replicable as possible.

PD:6. In general, adopt a "lean" programming strategy.

PD:7. If the product is to be used in the classroom, develop it so that

teacher attitudes toward the product will be positive.

PD:8. Selection of the instructional medium should be made in light of the

desired instructional objectives, intended target population, cost,

and other relevant considerations.

PD:9. The time devoted to the development of the product should be commen-

surate with the importance of the product.

PRODUCT TRYOUT

PT:1. Avoid an extremely small or extremely large number of learners when

field testing the product.

PT:2. Verify that the procedures associated with the use of the product

result in a replicable treatment.

PT:3. Data from field trials should be efficiently summarized for use by

those who will revise the product.


PT:4. Those involved in field testing the product should collect data; they

should not, themselves, engage in drawing inferences from the data.

PRODUCT REVISION

PR:1. Base product revisions on legitimate inferences from field test data.

PR:2. The primary inferences regarding product revisions should be made

from criterion data.

PR:3. Learner response data during the program should be considered a

valuable source of cues for product improvement.

PR:4. No loss of face for the initial developer should be associated with

revisions of an instructional product.

OPERATIONS ANALYSIS

OA:1. Operations analysis should be performed at the conclusion of all

systematic development of instructional products.

OA:2. The operations analysis should be written and transmitted to some

central repository.
REFERENCES

Airasian, Peter W. Role of evaluation in mastery learning. In Mastery

Learning: Theory and Practice, edited by Benjamin S. Bloom. New

York: Holt, Rinehart, and Winston, Inc., 1971, pp. 77-88.

Baker, Robert L. and Schutz, Richard E. Instructional product development.

New York: Van Nostrand Reinhold Company, 1971, pp. xiii-xxiii, 1-128.

Bloom, Benjamin S., Hastings, J. Thomas, and Madaus, George F. Handbook on

formative and summative evaluation of student learning. New York:

McGraw-Hill Book Company, 1971, pp. 5-138.

Borich, Gary D. Expanding the Stake model to increase information yield

about new educational products. Educational Technology, December, 1971.

Briggs, Leslie J. Handbook of procedures for the design of instructional

systems. Pittsburgh: American Institutes for Research, 1970.

Bunderson, Victor C. The computer and instructional design. In Computer-

Assisted Instruction, Testing, and Guidance, edited by Wayne H. Holtzman.

New York: Harper and Row, Publishers, 1970, pp. 45-70.

Cartwright, G. Phillip. Issues in curriculum evaluation. Paper presented at

the Association for the Development of Instructional Systems; State

University of New York at Stony Brook, February, 1971.

Cartwright, G. Phillip and Mitzel, Harold. Development of a CAI course in

the identification and diagnosis of handicapping conditions in children.

Final Report No. R-44, CAI Laboratory, The Pennsylvania State University,

June, 1971.

Confer, Ronald W. The effect of one style of CAI on the achievement of

students who are repeating general math. Unpublished Ph.D. dissertation,

University of Pittsburgh, 1970.


Cronbach, Lee J. Course improvement through evaluation. Teachers College

Record, LVII, May 1963, pp. 672-83.

Glass, Gene. Design of evaluation studies. Paper presented at the Council

for Exceptional Children on Early Childhood education, New Orleans,

December, 1969.

Glass, Gene V. Educational product evaluation: A prototype format applied.

Educational Researcher I, January, 1972, pp. 7-10, 16.

Popham, W. James and Baker, Eva L. Rules for the development of instructional

products. In Instructional Product Development, edited by Robert L.

Baker and Richard E. Schutz. New York: Van Nostrand Reinhold Company,

1971, pp. 167-68.

Popham, W. James. Preparing instructional products: Four development prin-

ciples. In Instructional Product Development, edited by Robert L.

Baker and Richard E. Schutz. New York: Van Nostrand Reinhold Company,

1971, pp. 169-207.

Rogers, James L. Current problems in CAI. Datamation, September, 1968, pp.

28-33.

Scriven, Michael. The methodology of evaluation. In Perspectives of Curriculum

Evaluation, AERA Monograph Series on Curriculum Evaluation, No. 1.

Chicago: Rand McNally and Company, 1967, pp. 39-83.

Seltzer, Robert A. Computer-assisted instruction - what it can and cannot

do. American Psychologist, XXVI, No. 4, April, 1971, pp. 373-77.

Stake, Robert E. Toward a technology for the evaluation of educational

programs. In Perspectives of Curriculum Evaluation, AERA Monograph

Series on Curriculum Evaluation, No. 1. Chicago: Rand McNally and

Company, 1967, pp. 1-12.


Stake, Robert E. The countenance of educational evaluation. Teachers

College Record, LVIII, April, 1967, pp. 527-38.

Stolurow, Lawrence. Instructional technology. Paper presented at a Council

for Exceptional Children Special Conference on Instructional Technology,

San Antonio, Texas, December, 1970, p. 75.

Stufflebeam, D. The use of experimental design in education. Paper presented

at the Annual Meeting of the American Educational Research Association,

Minneapolis, 1970. Columbus: Evaluation Center, The Ohio State University.

ADDENDUM

Lindvall, C. M. and Cox, Richard C. Evaluation as a tool in curriculum deve-

lopment: The IPI evaluation program. In The IPI Evaluation Program,

AERA Monograph Series on Curriculum Evaluation, No. 5. Chicago: Rand

McNally and Company, 1970.


FIGURE 1

A REPRESENTATION OF THE PROCESSING


OF DESCRIPTIVE DATAa

INTENDED ANTECEDENTS   <-- congruence -->   OBSERVED ANTECEDENTS
      | logical contingency                       | empirical contingency
INTENDED TRANSACTIONS  <-- congruence -->   OBSERVED TRANSACTIONS
      | logical contingency                       | empirical contingency
INTENDED OUTCOMES      <-- congruence -->   OBSERVED OUTCOMES

a
Stake, Robert E. The countenance of educational evaluation. Teachers
College Record, April, 1967, p. 533.
FIGURE 2

A REPRESENTATION OF THE INTERRELATIONSHIP OF


DESCRIPTIVE AND JUDGMENT DATAa

                 DESCRIPTIVE MATRIX              JUDGMENT MATRIX

              Intents      Observations       Standards      Judgments

Antecedents

Transactions

Outcomes

a
Stake, Robert E. The countenance of educational evaluation. Teachers
College Record, April, 1967, p. 532.
FIGURE 3
FLOW CHART: A MODEL FOR THE DESIGN OF INSTRUCTIONa

1. State objectives and performance standards.
2. Prepare tests over the objectives.
3. Analyze objectives for structure and sequence.
4. Identify assumed entering competencies.
5. Prepare pretests and remedial instruction; 5a. or plan an adaptive program;
   5b. or screen students or accept drop-outs; 5c. or plan a dual-track program.
6. Select media and write prescriptions.
7. Develop first draft materials.
8. Small-group tryouts and revisions.
9. Classroom tryouts and revisions.
10. Performance evaluation.

REVISIONS: Additional revisions of materials and/or objectives and performance
standards as indicated.

aBriggs, Leslie J. Handbook of procedures for the design of instruction. Pittsburgh:


American Institutes for Research, 1970, p. 7.
