ASSESSMENT OF LEARNING 1
MODULE
LANIE N. E. AVELINO, LPT, MA
Assessment of Learning 1 (Module_Ed 106) lnea 2022 | 1
ASSESSMENT OF LEARNING 1
MODULE SEVEN
PRELIMINARIES
Module Title: Module Seven - PERFORMANCE-BASED TEST
Course Title: Assessment of Learning 1
Course Number: Ed 106
COURSE DESCRIPTION/OVERVIEW
This course provides an examination of the uses of assessment practices and strategies to improve student
learning. Special emphasis will be placed on authentic assessment practices, standardized tests, and developmental
screenings. Additionally, students will become familiar with measures to assess learners with special needs and
learners from linguistically and culturally different backgrounds.
This course also focuses on the development and utilization of assessment tools to improve the teaching-
learning process. It emphasizes the use of testing for measuring knowledge, comprehension and other thinking skills.
It allows the students to go through the standard steps in test construction for quality assessment.
Total Learning Time: 7 HOURS
Pre-requisites: N/A
LEARNING OUTCOMES
This module aims to achieve the following learning outcomes:
Develop performance-based tests to assess selected learning competencies from the K to 12 curriculum guide.
Construct appropriate scoring rubrics for grading students' products/performances.
INDICATIVE CONTENT
MODULE SEVEN - PERFORMANCE-BASED TEST
Lesson 1. Performance-Based Tests
Lesson 2. Performance Tasks
Lesson 3. Rubrics and Exemplars
Lesson 4. Creating Rubrics
Lesson 5. Writing and Selecting Effective Rubrics
Lesson 6. Tips in Designing Rubrics
Lesson 7. Automating Performance-Based Tests
DISCUSSION
Introduction
Over the past few years, there has been general dissatisfaction with the results of traditional standardized objective
tests. Concerted efforts have therefore been expended to find alternative assessment mechanisms that measure
educational outcomes and processes, including the more complex processes in education. For example, multiple-choice
tests have been criticized because they purportedly are unable to measure complex problem-solving skills, fail to
capture the processes that appear in daily classroom activities, cannot gauge the processes involved in accomplishing
a task, and test superficial learning of the material rather than learners' ability to apply it. Educators have
therefore focused their attention on finding alternative assessment methods that would
hopefully address these difficulties with traditional objective assessment. Performance-based assessment
is one alternative assessment technique that has been proposed.
Proponents of performance-based assessment procedures believe that the best way to gauge a student's or pupil's
competency in a certain task is through observation in situ, or on site. Such a belief appears consistent with the
constructivist philosophy in education often taught in courses on Philosophy of Education. A performance-based test
is designed to assess students on what they know, what they are able to do, and the learning strategies they employ
in the process of demonstrating it.
Many people have noted serious limitations of performance-based tests: their vulnerability to subjectivity in scoring,
and the difficulty of creating or providing a real or close-to-real task environment for assessment purposes. However,
the concern about subjectivity may be addressed simply by automating the test. The second issue is obviously a bigger
problem, and there is no guarantee that ideas from one domain will apply to another.
Lesson 1. Performance-Based Tests
There are many testing procedures that are classified as performance tests, with a generally agreed-upon definition
that these tests are assessment procedures that require students to perform a certain task or activity or, perhaps,
solve complex problems. For example, Bryant suggested assessing portfolios of a student's work over time, students'
demonstrations, hands-on execution of experiments by students, and a student's work in a simulated environment.
Such an approach falls under the category of portfolio assessment (i.e., keeping records of all tasks successfully and
skillfully performed by the student). According to Mehrens, performance testing is not new. In fact, various types of
performance-based tests were used even before the introduction of multiple-choice testing. For instance, the
following are considered performance testing procedures: performance tasks, rubric scoring guides, and exemplars of
performance.
Lesson 2. Performance Tasks
In performance tasks, students are required to draw on the knowledge and skills they possess and to reflect upon
them for use in the particular task at hand. Not only are the students expected to obtain knowledge from a specific
subject or subject matter but they are in fact required to draw knowledge and skills from other disciplines in order to
fully realize the key ideas needed in doing the task. Normally, the tasks require students to work on projects that yield
a definite output or product or, perhaps, to follow approaches that test how they solve a problem. In
many instances, the tasks require a combination of the two approaches. Of course, the essential idea in performance
tasks is that students or pupils learn optimally by actually doing the task (Learning by Doing), which is a
constructivist philosophy.
As in any other test, the tasks need to be consistent with the intended outcomes of the curriculum and the objectives
of instruction, and must require students to manifest (a) what they know and (b) the process by which they came to
know it. In addition, performance-based tests require tasks that examine the processes as well as the products of
student learning.
Lesson 3. Rubrics and Exemplars
Modern assessment methods tend to use rubrics to describe student performance. A rubric is a scoring method that
lists the criteria for a piece of work, or "what counts" (for example, purpose, organization, details, voice, and
mechanics are often what count in a piece of writing); it also articulates gradations of quality for each criterion, from
excellent to poor. Perkins et al. (1994) provide an example of rubric scoring for student inventions and list the criteria
and gradations of quality for verbal, written, or graphic reports on student inventions. This is shown in the succeeding
figure as a prototype of rubrics scoring. This rubric lists the criteria in the column on the left: the report must explain
(1) the purposes of the invention, (2) the features or parts of the invention and how they help it serve its purposes,
(3) the pros and cons of the design, and (4) how the design connects to other things past, present, and future. The
rubric could easily include criteria related to presentation style and effectiveness, the mechanics of written pieces, and
the quality of the invention itself. The four columns to the right of the criteria describe varying degrees of quality,
from excellent to poor.
There are many reasons for the seeming popularity of rubric scoring in the Philippine school system. First, they are
very useful tools for both teaching and evaluation of learning outcomes. Rubrics have the potential to improve student
performance, as well as monitor it, by clarifying teachers' expectations and by actually guiding students on how to
satisfy these expectations.
Secondly, rubrics help students to acquire wisdom in judging and evaluating the quality of their own work in
relation to the quality of the work of other students. In several experiments involving the use of rubrics, students
progressively became more aware of the problems associated with their solution to a problem and with the other
problems inherent in the solutions of other students. In other words, rubrics increase the students' sense of
responsibility and accountability.
Third, rubrics are quite efficient and tend to require less time from teachers in evaluating student performance.
Teachers tend to find that by the time a piece has been self- and peer-assessed according to a rubric, they have little
left to say about it. When they do have something to say, they can often simply circle an item in the rubric rather
than struggle to explain the flaws or strengths they have noticed and figure out what to suggest in terms of
improvements. Rubrics provide students with more informative feedback about their strengths and areas in need of
improvement.
Finally, it is easy to understand and construct a rubric scoring guide. Most of the items found in the rubric scoring
guide are self-explanatory and require no further help from outside experts.
Rubric for an Invention Report

Quality levels: (3) Most Acceptable; (2) Acceptable; (1) Less Acceptable; (0) Not Acceptable

Purposes
(3) The report explains the key purposes of the invention and points out less obvious ones as well.
(2) The report explains all of the key purposes of the invention.
(1) The report explains some of the purposes of the invention but misses key purposes.
(0) The report does not refer to the purposes of the invention.

Features
(3) The report details both key and hidden features of the invention and explains how they serve several purposes.
(2) The report details the key features of the invention and explains the purposes they serve.
(1) The report neglects some features of the invention or the purposes they serve.
(0) The report does not detail the features of the invention or the purposes they serve.

Critique
(3) The report discusses the strengths and weaknesses of the invention, and suggests ways in which it can be improved.
(2) The report discusses the strengths and weaknesses of the invention.
(1) The report discusses either the strengths or the weaknesses of the invention but not both.
(0) The report does not mention the strengths or the weaknesses of the invention.

Connections
(3) The report makes appropriate connections between the purposes and features of the invention and many different kinds of phenomena.
(2) The report makes appropriate connections between the purposes and features of the invention and one or two phenomena.
(1) The report makes unclear or inappropriate connections between the invention and other phenomena.
(0) The report makes no connections between the invention and other things.

SUB-TOTALS
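The sub-totals row above suggests a simple mechanical scoring step once a level has been chosen for each criterion. As a minimal sketch (not part of the module), the invention-report rubric can be represented as a data structure and scored in a few lines of Python; the helper name `score_report` and the sample marks are illustrative assumptions:

```python
# The criteria and 0-3 quality levels follow the invention-report rubric above.
CRITERIA = ["Purposes", "Features", "Critique", "Connections"]
LEVELS = {3: "Most Acceptable", 2: "Acceptable", 1: "Less Acceptable", 0: "Not Acceptable"}

def score_report(marks):
    """Validate one mark per criterion and return the marks, total, and maximum score."""
    for criterion in CRITERIA:
        if marks.get(criterion) not in LEVELS:
            raise ValueError(f"{criterion}: mark must be one of {sorted(LEVELS)}")
    total = sum(marks[c] for c in CRITERIA)
    return {"marks": marks, "total": total, "max": 3 * len(CRITERIA)}

# Example: a report strong on purposes but weak on connections.
result = score_report({"Purposes": 3, "Features": 2, "Critique": 2, "Connections": 1})
print(result["total"], "out of", result["max"])
```

The point of the sketch is that a rubric, once its criteria and levels are explicit, reduces scoring to a lookup and a sum, which is what makes rubric-based assessment easy to automate.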
Lesson 4. Creating Rubrics
In designing a rubric scoring guide, the students need to be actively involved in the process. The following steps are
suggested in actually creating a rubric:
1. Survey Models - show students examples of good and not-so-good work. Identify the characteristics that make
the good ones good and the bad ones bad.
2. Define Criteria - from the discussion on the models, identify the qualities that define good work.
3. Agree on the Levels of Quality - describe the best and worst levels of quality, then fill in the middle levels based
on your knowledge of common problems in the discussion of not-so-good work.
4. Practice on Models - using the grid criteria and levels of quality, evaluate the models presented in step 1
together with the students.
5. Use Self- and Peer-Assessment - give students their task. As they work, stop them occasionally for self- and
peer-assessment.
6. Revise - always give students time to revise their work based on the feedback they got in Step 5.
7. Use Teacher Assessment - use the same rubric students used to assess their work yourself.
Lesson 5. Writing and Selecting Effective Rubrics
Two defining aspects of rubrics are the criteria that describe the qualities that you and students should look for as
evidence of students' learning and the descriptions of the levels of performance.
Desired Characteristics of Criteria for Classroom Rubrics

The criteria are:
Appropriate - Each criterion represents an aspect of a standard, curricular goal, or instructional goal or objective that students are intended to learn.
Definable - Each criterion has a clear, agreed-upon meaning that both students and teachers understand.
Observable - Each criterion describes a quality in the performance that can be perceived (seen or heard, usually) by someone other than the person performing.
Distinct from one another - Each criterion identifies a separate aspect of the learning outcomes the performance is intended to assess.
Complete - All the criteria together describe the whole of the learning outcomes the performance is intended to assess.
Able to support descriptions along a continuum of quality - Each criterion can be described over a range of performance levels.
Oral Reading Fluency Rubric
Name _____________________________ Date______________

Expression: (1) No expression; (2) A little expression; (3) Some expression; (4) Lots of expression
Phrasing: (1) No phrasing; (2) A little phrasing; (3) Some phrasing; (4) Very good phrasing
Speed: (1) Way too slow or way too fast!; (2) A little bit too slow or a little bit too fast; (3) Almost perfect but still needs practice; (4) Just right!
Desired Characteristics of Descriptions of Levels of Performance for Classroom Rubrics

The descriptions of levels of performance are...
Descriptive - Performance is described in terms of what is observed in the work.
Clear - Both students and teachers understand what the descriptions mean.
Cover the whole range of performance - Performance is described from one extreme of the continuum of quality to the other for each criterion.
Distinguish among levels - Performance descriptions are different enough from level to level that work can be categorized unambiguously. It should be possible to match examples of work to performance descriptions at each level.
Center the target performance (acceptable, mastery, passing) at the appropriate level - The description of performance at the level expected by the standard, curriculum goal, or lesson objective is placed at the intended level on the rubric.
Feature parallel descriptions from level to level - Performance descriptions at each level of the continuum for a given standard describe different quality levels for the same aspects of the work.
Book Talk Rubric

Each criterion is followed by its quality levels, from best to worst.

Did I get my audience's attention? - Creative beginning / Boring beginning / No beginning
Did I tell what kind of book? - Tells exactly what type of book it is / Not sure, not clear / Didn't mention it
Did I tell something about the main character? - Included facts about character / Slid over character / Did not tell anything about main character
Did I mention the setting? - Tells when and where story takes place / Not sure, not clear / Didn't mention setting
Did I tell one interesting part? - Made it sound interesting - I want to buy it! / Told part and skipped on to something else / Forgot to do it
Did I tell who might like this book? - Did tell / Skipped over it / Forgot to tell
How did I look? - Hair combed, neat, clean clothes, smiled, looked up, happy / Lazy look / Just-got-out-of-bed look, head down
How did I sound? - Clear, strong, cheerful voice / No expression in voice / Difficult to understand, 6-inch voice or screeching
Lesson 6. Tips in Designing Rubrics
Perhaps the most difficult challenge is to use clear, precise, and concise language. Terms like "creative", "innovative",
and other vague terms need to be avoided. If a rubric is to teach as well as evaluate, terms like these must be
defined for students. Instead of these words, try words that convey ideas and can be readily observed.
Patricia Crosby and Pamela Heinz, both seventh grade teachers (from Andrade, 2007), solved this problem in a
rubric for oral presentations by actually listing ways in which students could meet the criterion (fig. 19). This
approach provides valuable information to students on how to begin a talk and avoids the need to define elusive terms
like "creative".
Rubric for an Oral Presentation

Criterion: Gains attention of audience.
Quality (best to worst):
Gives details or an amusing fact, a series of questions, a short demonstration, a colorful visual, or a personal reason why they picked this topic.
Does a two-sentence introduction, then starts speech.
Gives a one-sentence introduction, then starts speech.
Does not attempt to gain attention of audience; just starts speech.
Specifying the levels of quality can often be very challenging as well. Spending a lot of time with the criteria will help,
but in the end, what comes out is often subjective. There is a clever technique often used to define the levels of
quality: it essentially graduates the quality levels through the responses "Yes", "Yes, but", "No, but", and "No".
Rubric for Evaluating a Scrapbook (Lifted from Andrade, 2007)

Criterion: Gives enough details.
Quality (best to worst):
Yes, I put in enough details to give the reader a sense of time, place, and events.
Yes, I put in some details, but some key details are missing.
No, I didn't put in enough details, but I did include a few.
No, I had almost no details.
Rubrics are scales that differentiate levels of student performance. They contain the criteria that must be met by the
student and the judgment process that will be used to rate how well the student has performed. An exemplar is an
example that delineates the desired characteristics of quality in ways students can understand. Both are important
parts of the assessment process.
Well-designed rubrics include:
• performance dimensions that are critical to successful task completion;
• criteria that reflect all the important outcomes of the performance task;
• a rating scale that provides a usable, easily interpreted score; and
• criteria that reflect concrete references, in clear language understandable to students, parents, other
teachers, and others.
In summary, we can say that to design performance-based tests, we have to ensure that both processes and end
results are tested. The tests should be designed carefully enough to ensure that proper scoring rubrics can be
constructed, so that concerns about subjectivity in performance-based tests are addressed. Indeed, this needs to be
done anyway in order to automate the test, so that performance-based testing can be used widely.
Lesson 7. Automating Performance-Based Tests
Going by the complexity of the issues that needed to be addressed in designing performance-based tests, it is clear
that automating the procedure is no easy task. The sets of tasks that comprise a performance-based test have to be
chosen carefully in order to tackle the design issues mentioned. Moreover, automating the procedure imposes another
stringent requirement for the design of the test. In this section, we summarize what we need to keep in mind while
designing an automated performance-based test.
We have seen that in order to automate a performance-based test, we need to identify a set of tasks which together
lead to the solution of a fairly complex problem. For the testing software to be able to determine whether a student has
completed any particular task, the end of the task should be accompanied by a definite change in the system. The
testing software can track this change in the system, to determine whether the student has completed the task.
Indeed, a similar condition applies to every aspect of the problem-solving activity that we wish to test. In this case, a
set of changes in the system can indicate that the student has the desired competency.
Such tracking is used widely by computer game manufacturers, where the evidence of a game player's competency is
tracked by the system, and the game player is taken to the next 'level' of the game.
In summary, the following should be kept in mind as we design a performance-based test.
Each performance task/problem that is used in the test should be clearly defined in terms of performance
standards, not only for the end result but also for the strategies used in the various stages of the process.
A test taker need not always end up accomplishing the task; hence, it is important to identify important
milestones that the test taker reaches while solving the problem.
Having defined the possible strategies, the processes, and the milestones, the selection of tasks that comprise a
test should allow the design of good rubrics for scoring.
Every aspect of the problem-solving activity that we wish to test has to lead to a set of changes in the
system, so that the testing software can collect evidence of the student's competency.
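The milestone-tracking idea above can be sketched in a few lines of Python. This is a minimal illustration, not a real testing system: each milestone is a predicate over the observable system state, and the milestone names, the state keys, and the sample spreadsheet task are all illustrative assumptions.

```python
# Each milestone maps a name to a predicate that checks the system state
# for the definite change that marks the task as completed.
MILESTONES = {
    "file_created": lambda state: state.get("file") is not None,
    "data_entered": lambda state: len(state.get("rows", [])) > 0,
    "chart_added":  lambda state: state.get("chart") is True,
}

def completed_milestones(state):
    """Return the milestones whose evidence is present in the current system state."""
    return [name for name, reached in MILESTONES.items() if reached(state)]

# The testing software re-checks the state after each change the student makes.
# Here the student has created the file and entered data, but not added a chart.
state = {"file": "class_record.xlsx", "rows": [("Ana", 90)], "chart": False}
print(completed_milestones(state))
```

Because each milestone is tied to an observable state change rather than to the final product alone, partial progress can be scored even when the test taker never completes the whole task, which is exactly the design requirement stated above.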
ACTIVITY
A. Construct a checklist for a performance test which tests the students' ability to perform the following:
1. using an inclined plane to illustrate the concept of a diluted free fall
2. using the low power objective and high-power objective of a microscope
3. opening and using MS WORD for word processing
4. using MS EXCEL to prepare a class record for a teacher
5. playing the major keys on a guitar
B. Construct a rubric scoring guide for the following:
1. An essay on the "History of the Philippine Republic: 1898-1998"
2. Poem reading "The Raven" by Edgar Allan Poe
3. Constructing three-dimensional geometric figures made of cardboard boxes
4. Story Telling: "May Day's Eve" by Nick Joaquin
5. Solving an algebraic verbal problem involving two linear equations in two unknowns
6. Writing the alphabet in cursive form
7. Interpreting a poem from Robert Frost
8. Writing an autobiography
9. Research Report
C. Differentiate between a performance test and the traditional assessment method of cognitive testing.
REFERENCES
Navarro, Rosita, PhD; Santos, Rosita, PhD; & Corpuz, Brenda, PhD (2017). Assessment of Learning 1.
De Guzman, Estefania S., PhD, et al. (2015). Assessment of Learning 1.