The Quality of Written Peer Feedback on Undergraduates’ Draft Answers to an Assignment, and the Use Made of the Feedback
The research described here investigated the quality and characteristics of peer
feedback given on a draft piece of writing in the context of an undergraduate
summative assignment. It also investigated whether the recipients made use of
the feedback, with the aim of discovering whether some types of feedback were
used in preference to others. The peer feedback was characterised in various
ways, and then a comparison with the feedback subsequently given on the pol-
ished piece of writing by the tutor was used to determine the strengths and
weaknesses of the peer feedback. Although the peers’ feedback had some differ-
ent characteristics from that of the tutors, it was nevertheless of good quality.
The examination of the use the recipients made of the feedback showed that
much feedback was ignored. The use recipients made of the feedback depended
very little on the characteristics of the feedback received, but did vary strongly
across the recipients. The ability level of the recipients was not found to be a
factor in this variation. The results of this research suggest that future work
needs to focus more on students using feedback than on students giving feed-
back.
Keywords: peer feedback; formative peer assessment; peer review; use of peer
feedback
Introduction
Although much is now known about the circumstances under which peer assessment
operates most successfully, less is known about the feedback that students give – its
quality and its characteristics. Similarly, little is known about whether, and if so
how, these characteristics affect the way students respond to the feedback. This
article addresses this lack of knowledge, drawing on a formative peer assessment
activity in an undergraduate technology module.
Liu and Carless (2006) have named formative peer assessment ‘peer feedback’ to
clearly distinguish it from cases where a mark or grade is also given. Nicol,
Thomson, and Breslin (2014) prefer the term ‘peer review’, defining it as ‘an
arrangement whereby students evaluate and make judgements about the work of
their peers and construct a written feedback commentary’ (103). This definition is an
exact description of the formative peer assessment to be described in this article, but
the term ‘peer feedback’ is preferred here to avoid confusion with ‘peer review’ in the
sense of the academic review of papers.
Peer feedback may be given on a draft piece of work, with the recipient having
the opportunity to improve their work before handing it in to be formally assessed
by a tutor. It is to be expected that the peer feedback on the draft will be of benefit
to the recipient, and indeed Cho and MacArthur (2010) have shown not only that
this is the case, but that more feedback is of more benefit, in that the increased quan-
tity of feedback from multiple peers, as against just one, leads to greater improve-
ment from draft to final version.
A further result from research into peer feedback is that the peer feedback pro-
cess is also of benefit to those giving the feedback, in that they subsequently go on
to produce work of a higher standard themselves (Cho and Cho 2011; Cho and
MacArthur 2011; Li, Liu, and Steckelberg 2010; Nicol, Thomson, and Breslin
2014). Indeed, Cho and Cho (2011) found that in a peer feedback situation, the
students gain more benefit from giving feedback than from receiving it. Nicol,
Thomson, and Breslin (2014) expanded on this finding in their research; their focus
groups elicited student comments that showed that giving feedback is related mainly
to critical thinking, taking the assessor’s perspective and transferring ideas generated
by giving feedback into their own work, whereas receiving feedback is related
mainly to addressing subject content that needs clarification or other improvement.
In other words, giving feedback engages a higher level of thinking skills.
It cannot be assumed, however, that students can bring appropriate higher-level
thinking skills to bear without having first learned how to do so, and there is general
agreement that students need support in order to provide quality feedback. For
example, in his advice on how to set up a peer feedback arrangement, Topping states
‘quality training will make a great deal of difference’ (2009, 25), and specifically
recommends ‘give feedback and coaching where needed’ (2009, 25). The ‘training’
offered to students before a peer feedback activity frequently takes the form of a dis-
cussion of the criteria that will be used, possibly together with practice using one or
more exemplars drawn from previous students’ work (e.g. Cartney 2010; Orsmond,
Merry, and Callaghan 2004).
The peer feedback situation discussed in this article embodied formative com-
ments by two peers on a draft piece of work with prior ‘training’ in giving and using
feedback. This enabled the research to move on from investigating the characteris-
tics of the peer feedback situation to investigating the characteristics of the peer
feedback itself. The research first examined the quality and characteristics of the
feedback that students gave to their peers, including how well it matched the ‘train-
ing’ they had been given, and then looked at what sorts of feedback the recipients
made use of and what sorts they ignored. The research therefore brings out new
knowledge about the nature and use of peer feedback.
The module teaches students to evaluate writing against a framework of six criteria: meeting the brief, factual accuracy, structure, style, technical level and English.
In the first criterion, ‘brief’ is used to denote the requirements placed on the piece of
writing, whatever their source (for example, an assignment question, a superior at
work or one’s own desire to write). The criteria concerning meeting the brief and
structure are complex, and hence are divided into sub-elements. Many of the sub-
elements for meeting the brief depend on the brief, and students are shown how to
tease them out. The sub-elements for structure are more constant and students are
given them as a checklist.
In the first two summative assignments, students write passages of a similar level
and difficulty to the ones they have been working with as examples. In the second
assignment, they also perform a critical evaluation of some aspects of a short report
written by a former student. An important feature of the skills development is that
the tutors evaluate the pieces of writing in these, and all other, assignments using
exactly the same framework of six criteria that the students have been learning to
use. This has two benefits. One is that the feedback from their tutor gives students
further ‘worked examples’ in which the framework is used. The second is that they
should more readily understand their tutor’s feedback because they themselves have
been working with the criteria.
The foregoing constitutes the ‘training’ for the peer feedback activity, which stu-
dents carry out as part of the third summative assignment.
This peer feedback activity takes place between weeks 11 and 16 of the 33-week
module. Tutors allocate students to small groups of four, five or six, and these small
groups work together over this six-week period to create a joint wiki and a website,
both of which will be assessed at the end of the period, with each receiving 50% of
the assignment marks. The peer feedback takes place during the wiki creation.
Each small group develops a wiki in the module’s virtual learning environment,
with only the small-group members and their tutor being able to access this wiki.
The task is to create a set of wiki pages on the topic of online communication and
collaboration. Each group member chooses one topic from a stipulated set, and
writes a draft of their wiki page (maximum 700 words) to a brief given in the
assignment. Using the wiki’s commenting facility, each member must then supply
feedback on the draft pages of two other members (up to 250 words per recipient).
Finally, each member reworks their wiki page in the light of the feedback received.
The group members themselves decide who writes which page and who gives feed-
back on which pages, ensuring that there is no overlap of topics chosen and that
each member gives and receives two pieces of feedback. They also negotiate dates
for the drafts and the feedback, within the overall deadline for submitting the fin-
ished assignment.
Students receive up to 30 marks (out of 50) for the final version of their individ-
ual wiki page, 10 marks for the two pieces of feedback they supply to their peers
and five marks for a brief explanation of how they have used the feedback they
received to make improvements to their wiki page. The foregoing are all individual
marks. The final five marks for the wiki activity are the same for all members of the
small group, and are awarded on the basis of how well the group worked together to
produce the wiki pages.
The research addressed two questions:
(1) What were the characteristics of the peer feedback and did a comparison
with those of the tutor feedback indicate any particular strengths or
weaknesses?
(2) How did the characteristics of the feedback that the students received relate
to the changes they subsequently made?
Method
The Open University’s quality assurance processes include the monitoring of tutors’
marking and commenting on assignments by experienced members of staff. For the
monitoring of the assignment involving peer feedback, one marked and commented
script had been selected from each tutor working on the module. Permission was
obtained to use these scripts in this research, and also to access the wiki pages of
the small groups of which these students were a part. This gave 25 wiki pages, in
both draft and polished form. It should have given 50 pieces of peer feedback, but
the peer feedback process had not worked as planned in two groups, and so two stu-
dents had received only one piece of feedback each. Thus the research was carried
out on:
25 draft wiki pages;
48 pieces of peer feedback on these drafts;
25 polished wiki pages, together with the tutor feedback given on them.
Of the 73 students whose work is represented in the research (25 authors of wiki
pages plus 48 feedback providers), 77% were male, an identical percentage to that
for all 591 students studying the module. The mean age of the 73 students, 36, was
just 7 months lower than that of all the students studying the module.
All peer feedback on the draft wiki pages and all tutor feedback on the final, pol-
ished wiki pages was divided into comments, where a comment is taken to be a
statement relating to a particular shortcoming or praiseworthy item, whether that
statement occupies part of a sentence, a whole sentence or several sentences. The
comments were carefully read in order to classify them in four different ways. This
was done on four passes through the entire set of comments. After the initial classifi-
cation, a proportion of comments was classified a second time, as a check for con-
sistency. The data generated were then entered into a spreadsheet to aid the
numerical analysis aspects of the investigation.
The first classification was to determine which of the six criteria in the evalua-
tion framework (as introduced earlier in this article) each comment belonged to.
Next the feedback was classified using a coding scheme introduced by Brown
and Glover (2006), because this scheme has been used to uncover what sorts of
feedback are usable by students (Walker 2009). The coding scheme considers two
aspects of the comments and hence requires two classifications of the comments.
The first relates to the category of comment. There are five categories:
The other classification in the coding scheme concerns the depth of comment. There
are three depths: indicate (depth 1), correct/amplify (depth 2) and explain (depth 3).
The feedback itself suggested one further aspect to classify. This was the tone of the
comments. The tone of the peer feedback tended to be more tentative than the tone
of the tutor feedback when shortcomings were being addressed, although there was
no apparent difference in the tone of the motivating comments. As the tone of a
comment may affect its recipient’s willingness to use it, an analysis was undertaken
of the tone the peers and tutors used in comments that addressed a shortcoming.
A coding scheme was designed for this, with tones being classified as:
Suggesting/tentative – ‘I suggest you …’, ‘You might like to …’, ‘Have you
thought of …?’
Stating – a simple statement relating to a shortcoming.
Exhorting – an instruction to do something about a shortcoming.
Probing – designed to probe and stretch the student’s understanding.
In summary, to investigate the quality of the feedback given by the peers, each of
the 592 comments made by the peers and the 654 comments made by the tutors was
classified in four different ways: by criterion, by category, by depth and by tone.
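To make this coding step concrete, the sketch below shows one way such a four-way classification could be recorded and tallied. It is a minimal Python sketch for illustration only: the Comment fields and example values are hypothetical, not the spreadsheet actually used in the study.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Comment:
    """One feedback comment together with its four classifications."""
    source: str     # 'peer' or 'tutor'
    criterion: str  # one of the six framework criteria
    category: str   # e.g. 'content', 'skills development', 'motivating'
    depth: int      # 1 = indicate, 2 = correct/amplify, 3 = explain
    tone: str       # 'suggesting/tentative', 'stating', 'exhorting', 'probing'

# Hypothetical example comments; the study itself coded 592 peer and 654 tutor comments.
comments = [
    Comment('peer', 'structure', 'content', 2, 'suggesting/tentative'),
    Comment('peer', 'style', 'motivating', 1, 'stating'),
    Comment('tutor', 'meeting the brief', 'content', 3, 'stating'),
]

# Tally the comments per classification, separately for peers and tutors,
# mirroring the comparisons reported later in Figures 1-4.
for source in ('peer', 'tutor'):
    subset = [c for c in comments if c.source == source]
    print(source, Counter(c.criterion for c in subset))
    print(source, Counter(c.depth for c in subset))
```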
After all the comments had been classified in these four ways, analyses and compar-
isons were made. Then the draft and polished wiki pages of each recipient of
feedback were compared to find the changes. These changes were examined in the
light of the peer feedback received in order to ascertain whether some kinds of peer
feedback were acted on in preference to others, and if so which. Other factors which
could have affected whether students acted on the comments were also examined.
Results
In this section, for clarity, the group members who provided the peer feedback will
be referred to as ‘peers’ and the group member who wrote the draft, received the
feedback and polished the draft will be referred to as ‘recipient’.
Figure 1. Comparison of peers’ and tutors’ comments made under the six criteria meeting
the brief, factual accuracy, structure, style, technical level and English.
Figure 2. Comparison of peers’ and tutors’ comments falling into the three categories
content, skills development and motivating.
The larger proportion of motivating comments in the peers’ feedback (Figure 2)
suggested a potential weakness, in that the peers might be setting too low a standard.
Hence a careful study was undertaken of the motivating comments made by peers,
including an examination of whether a tutor subsequently found a problem with
something a peer had found acceptable, which would be a very strong indication of
an inappropriate standard. The following two points emerged about the difference in
the proportion of motivating comments.
Much of the difference came from the peers being more thorough in specifi-
cally addressing the sub-elements of meeting the brief and structure than were
tutors – very often silence from a tutor seemed to imply that a sub-element
was acceptable, whereas a peer was more likely to say so explicitly.
Some of the difference came from the general comments made at the start or
end of the feedback. Peers were more likely to offer such comments, and they
all fell into the motivating category.
These two points accounted for much of the difference in the proportions of com-
ments in the motivating category. Most of the rest was because peers, under style
and technical level, had a slight tendency to find acceptable something which the
tutor found inappropriate, again indicating that peers had more difficulty with these
two criteria than with the others. Thus, the difference in proportions of peer and
tutor comments in the three categories indicated a small weakness in the peer
feedback.
A previous study has shown that comments of depth explain are more usable by
students (Walker 2009). Figure 3 compares the percentages of comments given by
tutors and peers for the three depths, and shows that peers gave slightly more com-
ments of depth explain than did tutors, an indication of a small strength in the peers’
feedback. Indeed, it was noticeable that some peers were careful to explain some of
their suggestions for improvements. This may be because they were trying to be
helpful, but it may also be because they were aware that their tutor would be mark-
ing their feedback and wanted to be sure that the tutor understood why they were
making the suggestion.
Figure 3. Comparison of peers’ and tutors’ comments falling into the three depths indicate
(depth 1), correct/amplify (depth 2) and explain (depth 3).
When the coding scheme for tone was applied to the peers’ and tutors’ com-
ments, it revealed the pronounced differences shown in Figure 4.
Figure 4. Comparison of peers’ and tutors’ comments falling into the four tones suggest-
ing/tentative, stating, exhorting and probing.
These differences are not surprising given the differences between the tutor–student
relationship and the peer–recipient relationship.
The same message can be conveyed by a comment in any tone, but the tone may
subtly affect the recipient’s willingness to use the comment to address the shortcom-
ing indicated. The question of whether the differences shown in Figure 4 indicate
any strength or weakness in the peers’ commenting is addressed later on, when the
use the recipients made of the comments is discussed.
In summary, the answers to the first research question were:
that the peers were able to give feedback using the criteria in the module’s
framework, though a weakness was that many did not refer to all of the crite-
ria;
that in the characteristics of category, depth and tone the peers’ feedback dif-
fered from that of the tutors, but in ways that suggested only small weaknesses
in the peers’ work;
that the peers’ feedback overall was usable and of good quality.
Overall results
When the draft wiki pages and the peer feedback received were compared with the
final wiki pages, it became apparent that many comments from peers that related to
shortcomings were ignored by their recipients: overall, only around half of the
comments indicating a shortcoming were used to make changes.
This response behaviour was not, however, spread evenly across the 25 recipients.
Three of them (one strong, one average and one weak student) ignored the com-
ments received altogether, while nine recipients (two strong, six average and one
weak) responded to 75% or more of the comments received. In all, 14 (56%) of the
recipients responded to 50% or more of the comments received.
Given the wide variation in response rate across the recipients, an investigation
was carried out as to whether the variation could be explained by the recipient’s
ability level, with one of their module assignment marks being used as an estimate
of the ability level. It was not appropriate to use the mark for the assignment in
which the peer feedback took place, because the feedback process was likely to
affect the mark, so the mark for the next assignment was used. This mark was corre-
lated with the proportion of comments attended to in the peer feedback activity, and
the outcome indicated no correlation between ability level and response rate
(Pearson correlation, r = −0.06, p = 0.77). Hence the variation in response cannot be
explained by the recipient’s ability level, and some other reason for not responding
to the feedback is needed.
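For readers wishing to run the same kind of check, a Pearson correlation of this form can be computed as in the sketch below. The marks and response rates shown are invented for illustration; they are not the study’s data.

```python
from scipy import stats

# Invented data: an ability estimate (next-assignment mark) and the proportion
# of peer comments acted on, for each of several hypothetical recipients.
marks = [72, 65, 80, 55, 90, 60, 75]
response_rates = [0.50, 0.80, 0.00, 0.60, 0.75, 0.40, 1.00]

# Pearson correlation between ability estimate and response rate; an r near
# zero with a large p (the study reported r = -0.06, p = 0.77) indicates no
# linear relationship.
r, p = stats.pearsonr(marks, response_rates)
print(f"r = {r:.2f}, p = {p:.2f}")
```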
In the peer feedback activity, the recipients had been asked to explain to their
tutor how they had used the feedback to improve their draft, and 5% of the assign-
ment marks were available for this explanation. Therefore an explanation of why
those who made no changes chose that course of action should have been available
in their assignment answer. One of the three recipients who made no changes had
not written this explanation and so there is no evidence as to why he behaved in this
way. Both of the other two did write explanations, and both justified what they had
put in their drafts, saying they did not agree with their peers’ comments. In both
cases, the tutor agreed with the peers rather than the recipients. This suggests that a
possible reason for not responding to peer feedback is an inability or unwillingness
to recognise that the shortcomings that others have identified are indeed shortcom-
ings. In considering this suggestion, it should be noted that, had the recipients
accepted the comments and made changes accordingly, they would have submitted a
better wiki page for marking and so would have received a higher mark. Hence it
may simply be the case that the recipients anticipated receiving what was to them an
adequate mark without doing further work, but concealed this reason for not using
the comment.
Table 2. The percentages of comments that were attended to (thoroughly or to some extent)
for each of four criteria.
Criterion | Percentage of comments for this criterion that were attended to
Does the document meet the brief? (n = 94) | 46
Is the document factually accurate? (n = 14) | 36
Is the structure appropriate for the audience, purpose and medium? (n = 73) | 62
Is the English correct? (n = 40) | 43
Note: In each row, n refers to the number of comments made for this criterion.
Comments made under the meeting the brief and factual accuracy criteria largely
concerned matters of fact rather than judgement, so there should have been little
room for the recipient to disagree with
them. Once again the recipients’ written explanations to their tutor (where
available – five further recipients had not provided an explanation) were used to
investigate this lack of response. However, it quickly became apparent that in
their explanations the majority of recipients focused on the comments that had led
to changes and omitted to mention those that had not. Where reasons were given
for not attending to comments on the meeting the brief and factual accuracy crite-
ria, four recipients indicated that the word limit for the wiki page was an inhibitor
and, as seen earlier with those who made no changes at all, three recipients sim-
ply failed to agree with their peers that a shortcoming was indeed a shortcoming.
In one of these cases, the tutor agreed with the peer rather than the recipient,
making three instances of this in total.
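The percentages in Tables 2, 3 and 4 are simple proportions: within each class of comment, the number attended to divided by the number made. A minimal Python sketch of that computation follows, using invented records rather than the study’s data; the classification key could equally be criterion, depth or tone.

```python
from collections import defaultdict

# Invented records: (classification, attended_to) pairs for comments that
# addressed a shortcoming.
records = [
    ('meeting the brief', True), ('meeting the brief', False),
    ('structure', True), ('structure', True),
    ('English', False), ('English', True),
]

# classification -> [number attended to, total number of comments]
counts = defaultdict(lambda: [0, 0])
for criterion, attended in records:
    counts[criterion][1] += 1
    if attended:
        counts[criterion][0] += 1

for criterion, (attended, total) in counts.items():
    print(f"{criterion}: {100 * attended / total:.0f}% of {total} attended to")
```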
So far as depth is concerned, Table 3 shows that recipients were most likely to
make changes in response to comments of depth explain, an unsurprising result
given that the explanation would help the recipient to understand the comment bet-
ter. However, the difference in response among the three depths was again small,
and the result regarding depth explain should be treated with some caution as the
number of such comments was relatively small.
Table 4 shows the percentages of comments that led to changes for three of the
tones (the tone probing has been omitted because only three peer comments fell into
this class). The table shows clearly that there was very little difference in response
to the three tones: recipients were almost equally willing to pay attention to a com-
ment in any tone. This is a surprising result, as it might have been assumed that
recipients would be more likely to ignore comments phrased as suggestions or in
tentative language. Insofar as there can be said to be any difference, the reverse is
the case. Clearly, however, the difference in tone between peers’ and tutors’ com-
ments indicated neither a strength nor a weakness in the peers’ feedback.
Table 3. The percentages of comments that were attended to (thoroughly or to some extent)
for each of the three depths.
Depth | Percentage of comments for this depth that were attended to
Indicate (depth 1) (n = 50) | 36
Correct/amplify (depth 2) (n = 165) | 52
Explain (depth 3) (n = 17) | 65
Note: In each row, n refers to the number of comments made for this depth.
Table 4. The percentages of comments that were attended to (thoroughly or to some extent)
for each of three tones.
Tone | Percentage of comments for this tone that were attended to
Suggesting/tentative (n = 87) | 53
Stating (n = 132) | 48
Exhorting (n = 10) | 50
Note: In each row, n refers to the number of comments made for this tone.
In summary, the answers to the second research question were:
that only around half of the comments indicating a shortcoming were used to
make changes to recipients’ wiki pages;
that the characteristics of the peer feedback made little difference to whether
the recipient made changes;
that individual recipients differed greatly in the proportion of comments that
they used to make changes;
that this variation could not be accounted for by the ability level of the recipient.
Discussion
The results of this study have indicated that students performed better at giving feed-
back to their peers than in making use of the feedback they received, as evidenced
by the quantity and quality of the feedback provided, and by the lack of use of
around half of this feedback. This is an intriguing result, given the finding men-
tioned earlier that giving feedback engages a higher level of thinking skills than
receiving it (Nicol, Thomson, and Breslin 2014), which would suggest that giving
feedback is the more challenging activity.
The availability or otherwise of marks is unlikely to be an explanation for this
behaviour, as marks were available, in roughly equal quantities, for all aspects of the
giving and use of the feedback.
There was a slight asymmetry in the preparation students received for the peer
feedback activity. In the module materials, equal emphasis was given to making a
critical evaluation and to using the evaluation to improve a document, but only the
former had featured in a previous summative assignment. It may be, therefore, that
some students felt less confident about using comments than about giving them.
Indeed, Price, Handley, and Millar suggest that some students ‘may not be able to act
on the feedback without further help’ (2011, 892). However, while inability or lack of
confidence may apply in some instances here, they are unlikely to explain why only
around half of usable comments were in fact used. Other explanations are needed.
This is not the first time that students have been found not to make use of peer
feedback. For example, Cartney (2010) also identified this problem. Some causes
have been suggested by Strijbos, Narciss, and Dünnebier (2010). In a somewhat
contrived peer feedback situation, they identified three factors that would affect
whether, and if so to what extent, recipients responded to peer feedback. These were
‘perceived adequacy of feedback’ (i.e. fairness, usefulness and acceptability), ‘affect’
(i.e. how the feedback affected the recipient emotionally) and ‘willingness to
improve’. Further work is needed to identify whether these three factors apply in the
peer feedback activity of this article.
Relatively little work has been done on the reasons why students may fail to act
on comments from peers, but more has been done in the case of comments from
tutors. It is helpful to examine these reasons for potential applicability to this study.
Several researchers have identified vagueness or lack of detail in the feedback as
a problem (e.g. Crisp 2007; Duncan 2007; Higgins, Hartley, and Skelton 2002), and
this present study also found that recipients did not use vague comments such as,
for example, ‘There are some spelling mistakes’. However, vagueness or lack of
detail is not an explanation for most of the non-use in the study, and certainly not
for the three recipients who took no action, each of whom received several clear and
precise comments.
Another reason for students not using tutor feedback is that they do not understand
the language or the criteria (Higgins, Hartley, and Skelton 2002). Given the preparatory
work and the fact that the recipients had been providing feedback using the same criteria,
lack of understanding of the criteria is unlikely to be a reason here, except perhaps to
some extent with the known ‘difficult’ criteria of style and technical level.
Other researchers have found characteristics of the recipient, rather than of the
feedback, to be a reason for not using tutor feedback. For example, Wingate (2010)
found the reasons to be low motivation and self-perception of the student’s ability,
while Rae and Cochrane identified ‘active students’ and ‘passive students’, with the
latter having a ‘distinct lack of intent to learn’ (2008, 221).
The research carried out in this study suggests that in the case of peer feedback
the reasons for not using feedback lie with the recipient rather than with the feedback,
in that the work to answer the second research question indicates clearly that there
was little variation in response with feedback characteristics and great variation across
recipients. It has, however, clearly demonstrated that the ability level of the recipient
was not a factor in the extent to which they acted on their peers’ feedback. It has
found some evidence that some recipients were unable or unwilling to accept that a
shortcoming identified by a peer was indeed a shortcoming, but the sample size is suf-
ficiently small that this can only be put forward as one possible explanation for some
recipients’ non-use of peer feedback. It may, however, be related to the factor Strijbos,
Narciss, and Dünnebier identified as ‘willingness to improve’ (2010). There is clearly
more work needed to investigate this reluctance to use peer feedback.
Conclusion
The results of this study indicate that, when a peer feedback activity is consistent
with the conditions that have been found to enable such an activity to work well,
the giving of peer feedback is less problematic than the subsequent use of the feed-
back. This suggests that attention, both in practical situations and in research, should
focus more on recipients’ use of peer feedback and on the conditions that will
enable such use to work well. These conditions appear to have less to do with the
nature of the feedback than with the recipients themselves. However, more work is
needed to confirm this, and to extend current knowledge about the characteristics of
the recipients that make them more likely to use the feedback to improve their work.
Acknowledgement
The author is grateful to the three anonymous reviewers for their valuable comments on this
article, and to colleagues for their helpful comments on a draft.
Notes on contributor
Mirabelle Walker was a member of the team that planned and prepared the module whose
assignment was used in the research described here. Her research interests centre around
effective feedback on written assignments.
References
Bloxham, S., and A. West. 2007. “Learning to Write in Higher Education: Students’ Percep-
tions of an Intervention in Developing Understanding of Assessment Criteria.” Teaching
in Higher Education 12 (1): 77–89.
Brown, E., and C. Glover. 2006. “Evaluating Written Feedback.” In Innovative Assessment in
Higher Education, edited by C. Bryan and K. Clegg, 81–91. London: Routledge.
Cartney, P. 2010. “Exploring the Use of Peer Assessment as a Vehicle for Closing the Gap
between Feedback Given and Feedback Used.” Assessment & Evaluation in Higher Edu-
cation 35 (5): 551–564.
Cho, K., and C. MacArthur. 2010. “Student Revision with Peer and Expert Reviewing.”
Learning and Instruction 20 (4): 328–338.
Cho, K., and C. MacArthur. 2011. “Learning by Reviewing.” Journal of Educational
Psychology 103 (1): 73–84.
Cho, Y. H., and K. Cho. 2011. “Peer Reviewers Learn from Giving Comments.” Instructional
Science 39 (5): 629–643.
Crisp, B. R. 2007. “Is It worth the Effort? How Feedback Influences Students’ Subsequent
Submission of Assessable Work.” Assessment & Evaluation in Higher Education 32 (5):
571–581.
Duncan, N. 2007. “‘Feed‐forward’: Improving Students’ Use of Tutors’ Comments.” Assess-
ment & Evaluation in Higher Education 32 (3): 271–283.
Falchikov, N. 1995. “Peer Feedback Marking: Developing Peer Assessment.” Innovations in
Education and Training International 32 (2): 175–187.
Fernández-Toro, M., M. Truman, and M. Walker. 2013. “Are the Principles of Effective
Feedback Transferable across Disciplines? A Comparative Study of Written Assignment
Feedback in Languages and Technology.” Assessment & Evaluation in Higher Education
38 (7): 816–830.
Hanrahan, S. J., and G. Isaacs. 2001. “Assessing Self- and Peer-assessment: The Students’
Views.” Higher Education Research and Development 20 (1): 53–70.
Higgins, R., P. Hartley, and A. Skelton. 2002. “The Conscientious Consumer: Reconsidering
the Role of Assessment Feedback in Student Learning.” Studies in Higher Education
27 (1): 53–64.
Kaufman, J. H., and C. D. Schunn. 2011. “Students’ Perceptions about Peer Assessment for
Writing: Their Origin and Impact on Revision Work.” Instructional Science 39 (3): 387–406.
Li, L., X. Liu, and A. L. Steckelberg. 2010. “Assessor or Assessee: How Student Learning
Improves by Giving and Receiving Peer Feedback.” British Journal of Educational Tech-
nology 41 (3): 525–536.
Liu, N.-F., and D. Carless. 2006. “Peer Feedback: The Learning Element of Peer Assess-
ment.” Teaching in Higher Education 11 (3): 279–290.
Nicol, D., A. Thomson, and C. Breslin. 2014. “Rethinking Feedback Practices in Higher
Education: A Peer Review Perspective.” Assessment & Evaluation in Higher Education
39 (1): 102–122.
Orsmond, P., S. Merry, and A. Callaghan. 2004. “Implementation of a Formative Assessment
Model Incorporating Peer and Self-assessment.” Innovations in Education and Teaching
International 41 (3): 273–290.
Patton, C. 2012. “‘Some Kind of Weird, Evil Experiment’: Student Perceptions of Peer
Assessment.” Assessment & Evaluation in Higher Education 37 (6): 719–731.
Price, M., K. Handley, and J. Millar. 2011. “Feedback: Focusing Attention on Engagement.”
Studies in Higher Education 36 (8): 879–896.
Rae, A. M., and D. K. Cochrane. 2008. “Listening to Students: How to Make Written
Assessment Feedback Useful.” Active Learning in Higher Education 9 (3): 217–230.
Strijbos, J.-W., S. Narciss, and K. Dünnebier. 2010. “Peer Feedback Content and Sender’s
Competence Level in Academic Writing Revision Tasks: Are They Critical for Feedback
Perceptions and Efficiency?” Learning and Instruction 20 (4): 291–303.
Topping, K. J. 2009. “Peer Assessment.” Theory into Practice 48 (1): 20–27.
Walker, M. 2009. “An Investigation into Written Comments on Assignments: Do Students
Find Them Usable?” Assessment & Evaluation in Higher Education 34 (1): 67–78.
Wingate, U. 2010. “The Impact of Formative Feedback on the Development of Academic
Writing.” Assessment & Evaluation in Higher Education 35 (5): 519–533.