Rethinking Expert Opinion Evidence
Kristy A Martire* and Gary Edmond†
(2017) 40 Melbourne University Law Review 967
This article approaches expert opinion evidence from a scientific, specifically cognitive
science, perspective. Decades of scientific research on expertise present a picture of
expertise that bears limited resemblance to the categories and practices used by legal
institutions to regulate the admission, presentation and evaluation of expert evidence
(ie, opinions based on specialised knowledge). This article seeks to explain why legal
institutions should direct more attention to scientifically-based criteria and insights,
rather than the somewhat crude set of legal proxies developed by common law judges, if
they hope to credibly regulate forensic science and medicine evidence in ways that
enhance factual rectitude and fairness.
* BA (Syd), MPsych, PhD (UNSW); Senior Lecturer and Australian Research Council DECRA
Fellow, School of Psychology, UNSW Sydney. This research was supported by the following
Australian Research Council grants: DE140100183 (Martire) and LP160100008 (Martire
and Edmond).
† BA (Hons) (Wollongong), LLB (Hons) (Syd), PhD (Cantab); Professor, School of Law,
UNSW Sydney; Research Professor (Fractional), Northumbria Law School, Northumbria
University; Chair, Evidence-Based Forensics Initiative.
1
See Daniel Kahneman and Gary Klein, ‘Conditions for Intuitive Expertise: A Failure to
Disagree’ (2009) 64 American Psychologist 515.
2
These are treated in more detail elsewhere: see Gary Edmond, ‘The Admissibility of Forensic
Science and Medicine Evidence under the Uniform Evidence Law’ (2014) 38 Criminal Law
Journal 136; Gary Edmond, ‘Specialised Knowledge, the Exclusionary Discretions and Relia-
bility: Reassessing Incriminating Expert Opinion Evidence’ (2008) 31 University of New South
Wales Law Journal 1.
3
We leave the not entirely unrelated question of judicial expertise for another occasion.
4
We appreciate that the High Court seems to have rejected recourse to ‘reliability’ in relation
to s 137 (and implicitly s 79) of the uniform evidence legislation in IMM v The Queen (2016)
257 CLR 300, 306 [16]–[17], 314 [48] (French CJ, Kiefel, Bell and Keane JJ) (‘IMM’). For
reasons this article makes clear, inattention to reliability (and validity) and proficiency repre-
sents an undesirable response to both the identification and evaluation of scientific, medical
and technical evidence, particularly forensic science evidence. For reasons made clear below,
rational endeavours to gauge the probative value of opinions based on specialised knowledge
(including ‘at its highest’) require courts to engage with evidence of validity, reliability and
demonstrable performance. See Gary Edmond, ‘Icarus and the Evidence Act: Section 137,
Probative Value and Taking Forensic Science Evidence “at Its Highest”’ (2017) 41 Melbourne
University Law Review (forthcoming).
5
See, eg, Buckley v Rice Thomas (1554) 1 Plowden 118, 124–5; 75 ER 182, 192–3 (Saunders J);
Folkes v Chadd (1782) 3 Dougl 157; 99 ER 589. For modern common law manifestations of
this practice, see R v Turner [1975] 1 QB 834; Clark v Ryan (1960) 103 CLR 486.
6
See Tal Golan, Laws of Men and Laws of Nature: The History of Scientific Expert Testimony in
England and America (Harvard University Press, 2004).
7
JP v DPP (NSW) [2015] NSWSC 1669 (11 November 2015) [35] (Beech-Jones J), quoting
R v Tang (2006) 65 NSWLR 681, 713 [144] (Spigelman CJ) (‘Tang’); Dasreef Pty Ltd v Haw-
char (2011) 243 CLR 588, 604 [39] (French CJ, Gummow, Hayne, Crennan, Kiefel and
Bell JJ) (‘Dasreef ’).
8
Honeysett v The Queen (2013) 233 A Crim R 152 (‘Honeysett’).
9
A classic example from the United States is Daubert v Merrell Dow Pharmaceuticals Inc,
43 F 3d 1311 (9th Cir, 1995) (‘Daubert’). See also Peter W Huber, Galileo’s Revenge: Junk
Science in the Courtroom (Basic Books, 1991). Interestingly, in recent months, the appellate
judge who doubted the need to apply the Daubert criteria and consider whether the evidence
was prepared for litigation to the forensic sciences seems to have experienced an epiphany:
Alex Kozinski, ‘Rejecting Voodoo Science in the Courtroom’, The Wall Street Journal
(online), 20 September 2016 <http://www.wsj.com/articles/rejecting-voodoo-science-in-the-
courtroom-1474328199>.
10
Consider the Canadian engagement with forensic gait analysis in England: R v Aitken [2012]
BCCA 134 (2 April 2012), quoting Otway v The Queen [2011] EWCA Crim 3 (14
January 2011).
11
Consider the stab wound evidence in Gilham v The Queen (2012) 224 A Crim R 22,
38 [152]–[153]. See also the voice identification and comparison evidence as ad hoc expertise
in R v Leung (1999) 47 NSWLR 405; Li v The Queen (2003) 139 A Crim R 281.
12
R v Jung [2006] NSWSC 658 (29 June 2006). See also R v Madigan [2005] NSWCCA 170
(9 June 2005) for judicial reluctance to recognise expert evidence adduced by the defendant.
13
Tang (2006) 65 NSWLR 681, 709 [120] (Spigelman CJ), where the alternative would have
been leaving the images for the jury.
14
Wood v The Queen (2012) 84 NSWLR 581, 619–20 [728] (McClellan CJ at CL), discussing the
Uniform Civil Procedure Rules 2005 (NSW) r 31.23, sch 7 (‘Expert Witness Code of Conduct’).
For the most elaborate practice direction in Australia, see the Supreme Court of Victoria,
Practice Note No 2 — Expert Evidence in Criminal Trials, 25 June 2014.
15
For a more detailed review, see Gary Edmond, ‘Legal versus Non-Legal Approaches to
Forensic Science Evidence’ (2016) 20 International Journal of Evidence and Proof 3.
16
The uniform Evidence Acts comprise seven Australian statutes: Evidence Act 1995 (Cth);
Evidence Act 2011 (ACT); Evidence Act 1995 (NSW); Evidence Act 2004 (Norfolk Island);
Evidence (National Uniform Legislation) Act 2011 (NT); Evidence Act 2001 (Tas); Evidence Act
2008 (Vic). They are substantially similar to the Evidence Act 1995 (Cth), though not entirely identical. Queensland, South Australia and Western Australia have not adopted the uniform legislation. To prevent confusion, we refer to the Commonwealth legislation when
citing the uniform Evidence Acts.
17
Uniform Evidence Acts s 76(1) states: ‘[e]vidence of an opinion is not admissible to prove the
existence of a fact about the existence of which the opinion was expressed.’
Despite the differences between the wording of s 79(1) and common law
concepts, the conspicuous omission of ‘expert’ and ‘field’ and the fresh
emphasis on ‘knowledge’, most Australian courts have not dramatically altered
their admissibility practice since the introduction of the uniform Evidence
Acts.19 Some even continue to endorse obscure common law concepts such as
ad hoc expertise — notwithstanding the conspicuous absence of
‘knowledge’.20 Our concerns in this article are primarily oriented to the
admission and evaluation of scientific, medical and other types of technical
expertise, with a particular emphasis on the assessment of the opinions of
forensic scientists.
III SCIENTIFIC APPROACHES TO EXPERTISE
Neither the common law nor the jurisprudence and practice that has emerged
around s 79(1) require an expert witness (or the party calling them) to
demonstrate that the 'training, study or experience', or any resultant 'specialised knowledge', manifests in the witness displaying measurably superior
performance in the relevant domain.21 Rather, courts tend to assume that
training, study or experience begets specialised knowledge and that this
18
Honeysett (2014) 253 CLR 122, 131 [23]. See also Dasreef (2011) 243 CLR 588, 602–3 [32]
(French CJ, Gummow, Hayne, Crennan, Kiefel and Bell JJ); HG v The Queen (1999) 197 CLR
414, 427 [38]–[39] (Gleeson CJ).
19
Cf Dasreef (2011) 243 CLR 588, 604 [37] (French CJ, Gummow, Hayne, Crennan, Kiefel and
Bell JJ):
The admissibility of opinion evidence is to be determined by application of the
requirements of the Evidence Act rather than by any attempt to parse and analyse particu-
lar statements in decided cases divorced from the context in which those statements were
made.
See generally Australian Law Reform Commission, Uniform Evidence Law, Report No 102
(2005).
20
See Nguyen v The Queen [2017] NSWCCA 4 (2 February 2017). Cf Gary Edmond, Kristy
Martire and Mehera San Roque, ‘Unsound Law: Issues with (“Expert”) Voice Comparison
Evidence’ (2011) 35 Melbourne University Law Review 52; Gary Edmond and Mehera San
Roque, ‘Quasi-Justice: Ad Hoc Expertise and Identification Evidence’ (2009) 33 Criminal Law
Journal 8.
21
The need for ‘reliability’ was explicitly rejected in Tang (2006) 65 NSWLR 681, 712 [137]
(Spigelman CJ); Tuite v The Queen [2015] VSCA 148 (12 June 2015) [10] (‘Tuite’); IMM
(2016) 257 CLR 300, 314 [48] (French CJ, Kiefel, Bell and Keane JJ). But see Honeysett (2014)
253 CLR 122, 136–7 [38]–[42].
22
Dasreef (2011) 243 CLR 588, 604 [37] (French CJ, Gummow, Hayne, Crennan, Kiefel and
Bell JJ).
23
In order to determine whether a person is a good archer, for example, we would want to see
them shoot a number of arrows, on target, on a standard range. Ribbons, trophies and even
Olympic medals might be used as proxies for performance, perhaps even very informative
proxies of performance (at some particular stage or stages). ‘Proxies’ operate as particularly
good evidence only when they provide a direct indication of performance relative to others
(ie, through competitions) or some objective standard (eg, proximity of the arrow in relation
to the bullseye). However, where the proxy is membership of an archery club, or perhaps
even being an office bearer in an archery club, or even selector, we cannot assume that the
individual is a better archer than non-members. Further, depending on the specific activity,
performance might improve, deteriorate or remain reasonably stable over time. A gold medal
for archery at the 1984 Los Angeles Olympics might not reveal very much about current
ability as an archer. To assess post-Olympic ability would require more recent evidence of
performance. Additionally, expertise as an archer reveals little about abilities in other do-
mains. Generations of ancient Cretan archers confirm that expertise in archery reveals noth-
ing about the ability to accurately fire a rifle at a target.
24
Adriaan D de Groot, Thought and Choice in Chess (Mouton Publishers, 2nd ed, 1978).
25
Paul E Meehl, Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the
Evidence (University of Minnesota Press, 1954).
26
Other intellectual traditions in this area include the ‘traditional’ and ‘expert-performance’
approaches: see K Anders Ericsson and Tyler J Towne, ‘Expertise’ (2010) 1 Wiley Interdisci-
plinary Reviews: Cognitive Science 404. These schools roughly correspond to the NDM and
HB traditions (respectively) with regard to relative versus objective definitions of expertise.
27
For an authoritative review, see Kahneman and Klein, above n 1.
28
Ibid.
29
We tend to use the terms field, domain, discipline and profession interchangeably. Our
concern is not with social or professional classification or recognition, but rather with the
ability of individuals to do specific tasks.
30
The question of whether performance is sufficient to warrant the admission of expert opinion
into a legal proceeding is a policy question for judges. Self-evidently, where the level of per-
formance is not much above the performance of ordinary persons, there are numerous dan-
gers in admitting an individual into a legal proceeding, particularly a criminal proceeding,
and conferring the (somewhat specious) attribution ‘expert’. There are risks and costs associ-
ated with expert opinion evidence, and these should be considered before evidence is admit-
ted. Limited defence resourcing and the ineffectiveness of conventional legal safeguards
should also inform admissibility decision-making. For example, these dangers are demonstrated by proponents of emerging areas of legally recognised 'expertise', such as forensic gait comparison, who often suggest that slightly enhanced performance over novices ought to provide them with access to courtrooms. See, eg, Ivan Birch et al, 'The Identification of Indi-
viduals by Observational Gait Analysis Using Closed Circuit Television Footage’ (2013) 53
Science and Justice 339, 342. Cf the more critical approach in Gary Edmond and Emma
Cunliffe, ‘Cinderella Story: The Social Production of a Forensic “Science”’ (2016) 106 Journal
of Criminal Law and Criminology 219.
31
James Shanteau, ‘Competence in Experts: The Role of Task Characteristics’ (1992) 53
Organizational Behavior and Human Decision Processes 252, 255.
32
See Kahneman and Klein, above n 1, 519.
33
Richard W Herling, ‘Operational Definitions of Expertise and Competence’ (2000) 2(1)
Advances in Developing Human Resources 8, 20 (emphasis altered).
34
It is, however, important to note that there is debate regarding whether the ‘demonstrated’
skill has been sufficiently defined and measured (measurable) to support an attribution of
expertise. For example, proponents of the ‘expert-performance’ approach and those of the
‘traditional’ approach may disagree about the specific nature of the ‘expertise’ demonstrated
by billionaires, senators or child prodigies. For detailed discussion, see K Anders Ericsson,
‘Why Expert Performance is Special and Cannot be Extrapolated from Studies of Perfor-
mance in the General Population: A Response to Criticisms’ (2014) 45 Intelligence 81; Erics-
son and Towne, above n 26.
35
Ericsson and Towne, above n 26, 405.
36
Collins describes how those with advanced research degrees in physics and mathematics,
regulating access to research funding from agencies such as the United States’ National Sci-
ence Foundation, may not understand the dynamics of knowledge production and social
ordering in specialist sub-groups: Harry Collins, ‘Public Experiments and Displays of Virtu-
osity: The Core-Set Revisited’ (1988) 18 Social Studies of Science 725. For a more detailed
account, see Harry Collins, Gravity’s Shadow: The Search for Gravitational Waves (University
of Chicago Press, 2004).
For example, research reveals that in Australia, one can become and remain a passport examiner without ever having to demonstrate expertise in comparing persons in photographs or a person to a photograph.37 It might
seem incredible that passport officers are not selected or promoted on the
basis of their ability to correctly identify faces. Nevertheless, that was the
situation until very recently.38
Similarly, highly experienced and respected forensic psychologists do not
achieve high status by making many correct predictions, or more correct predictions than their peers, about future dangerousness.39 Even though these skills seem
integral to professional practice, individuals in these domains progress by
being adequate (or strong) performers on other tasks. The forensic psychologist may become respected in the field by competently using actuarial
assessment tools, by thinking critically and engaging in evidence-based
practice, by building strong rapport with their clients, by being a good
colleague and co-worker, and by being a clear communicator, rather than
being a relatively or highly accurate predictor of future dangerousness. To
those outside the profession it is not always obvious that certification and
progression is based on other important (and sometimes not so important)
skills. External evaluators do not see the full picture and may not appreciate
the range of practical, professional, and institutional factors at play. They may
assume that entry and elevation through the ranks is based on skill in
particular crucial tasks, rather than a broad range of professional competencies.
37
Recent research found that Australian passport officers were no more accurate at standardised face-matching tasks than first year university students. Significantly, experience as a
passport officer made no difference to performance: see David White et al, ‘Passport Officers’
Errors in Face Matching’ (2014) 9(8) PLoS ONE 1, 3–4.
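The logic of that comparison can be sketched in a few lines of code. This is a purely illustrative sketch with invented trial outcomes; it is not the data or the analysis reported by White et al:

```python
# Illustrative sketch of a performance criterion: are putative experts more
# accurate than novices on the same standardised face-matching trials?
# Trial outcomes are invented; they are not the White et al (2014) data.
from statistics import mean

# 1 = correct decision on a trial, 0 = error
passport_officer_trials = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
student_trials = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]

officer_accuracy = mean(passport_officer_trials)
student_accuracy = mean(student_trials)

print(f"Passport officer accuracy: {officer_accuracy:.0%}")
print(f"Student (novice) accuracy: {student_accuracy:.0%}")

# On a performance-based view of expertise, the officers' claim is supported
# only if their accuracy is reliably and substantially higher than the
# students' (with enough trials to exclude chance), not by experience alone.
```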
38
Until quite recently, Australian courts allowed anthropologists to testify about similarities
between persons accused of crimes and persons of interest in images (such as CCTV record-
ings) for purposes of identification. It is not entirely clear, following Honeysett, whether
anatomists (and others) who devote additional time to examining the images might yet
testify (possibly as an ad hoc expert). Ironically, under Honeysett, it is quite likely that a
passport examiner would be entitled to interpret CCTV images in order to identify a person
of interest (or describe similarities between a person of interest and the accused). Most Aus-
tralian courts (and Honeysett is exemplary) do not direct attention to what would seem to be
the fundamental questions: (i) can this witness actually do what is claimed? (ii) How good
are they? And (iii) how do we know? See Honeysett (2014) 253 CLR 122, 138–9 [47]–[48];
Gary Edmond, ‘A Closer Look at Honeysett: Enhancing Our Forensic Science and Medicine
Jurisprudence’ (2015) 17 Flinders Law Journal 287.
39
Assessment of future dangerousness is now commonly considered a core component of the
forensic (rather than clinical) subspecialty of psychological practice: Kirk Heilbrun and
Stephanie Brooks, ‘Forensic Psychology and Forensic Science: A Proposed Agenda for the
Next Decade’ (2010) 16 Psychology, Public Policy and Law 219.
The first and the second indicators relate to the acquisition of what might be
described as background and foundational knowledge.43 The third indicator is
the performance dimension previously discussed.44 The fourth and fifth
indicators relate to the scope of expertise and its generalisability within and
across domains. This raises the important issue of the ‘expert claim’.45
40
See Ericsson, above n 34.
41
Kahneman and Klein, above n 1.
42
Jean Bédard and Michelene T H Chi, ‘Expertise’ (1992) 1 Current Directions in Psychological
Science 135, 138–9.
43
It is worth noting, however, that courts do not usually attempt to assess the amount or organisation of a witness's knowledge. Background knowledge may be part of 'specialised knowledge' under s 79(1) of the uniform Evidence Acts, but many opinions may draw upon information, commitments, beliefs and knowledge that form part of a domain or tradition
and may not deal with a specific ability. When dealing with a specific ability, there is a need
for independent evidence of validity and/or performance. This is consistent with the refer-
ence to ‘study or investigation’ linked to ‘specialised knowledge’: Honeysett (2014) 253 CLR
122, 131 [23].
44
Note that the conceptualisation of successful performance described by Bédard and Chi is
consistent with the NDM framework: Bédard and Chi, above n 42.
45
See generally Kristy A Martire and Richard I Kemp, ‘Considerations When Designing
Human Performance Tests in the Forensic Sciences’ (2017) 49 Australian Journal of Forensic
Sciences (forthcoming).
46
It is important to note that this is not a failing of the practitioner. Rather, it is an unavoidable
consequence of the complexity of the task.
47
Training and qualifications may affect the nature of the claims put forward by a practitioner,
as well as their validity.
48
We are agnostic on the level of performance required to warrant legal recognition and
admission as an expert. However, we suggest that performance should be substantially better
than novices because of the costs and dangers associated with introducing expert evidence in
criminal proceedings, especially evidence adduced by the state and represented by the prose-
cutor and (perhaps) the courts, as expert. These are issues that warrant consideration in
addition to s 79(1) under ss 135 and 137 of the uniform Evidence Acts. See also above n 30.
49
See, eg, The President’s Council of Advisors on Science and Technology, ‘Forensic Science in
Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods’ (Report,
Executive Office of the President (US), September 2016) ch 5 (‘PCAST Report’); National
Research Council (US), Strengthening Forensic Science in the United States: A Path Forward
(National Academies Press, 2009) ch 5.
50
For a surprisingly critical review of these and other domains in the United States (‘US’), see
National Research Council (US), above n 49; Expert Working Group on Human Factors in
Latent Print Analysis, ‘Latent Print Examination and Human Factors: Improving the Practice
through a Systems Approach’ (Report, National Institute of Standards and Technology, US
Department of Commerce, February 2012) ch 2; PCAST Report, above n 49. On the use of
specific terminologies in the US, see Simon A Cole, ‘More Than Zero: Accounting for Error
in Latent Fingerprint Identification’ (2005) 95 Journal of Criminal Law and Criminology 985;
Simon A Cole and Gary Edmond, ‘Science without Precedent: The Impact of the National
Research Council Report on the Admissibility and Use of Forensic Science Evidence’ (2015) 4
British Journal of American Legal Studies 585.
51
The PCAST Report recommends ‘black-box’ empirical studies to establish the ‘foundational
validity’ of a method: PCAST Report, above n 49, 47–8.
52
See Smith v The Queen (2001) 206 CLR 650. The PCAST Report suggests that in the absence
of appropriate empirical tests of foundational validity (ie, performance relative to the expert
claim), assertions about the significance of apparent similarities may be meaningless. At ibid
46, the PCAST Report states:
Without appropriate estimates of accuracy, an examiner’s statement that two samples are
similar — or even indistinguishable — is scientifically meaningless: it has no probative
value, and considerable potential for prejudicial impact. Nothing — not training, personal experience nor professional practices — can substitute for adequate empirical demonstration of accuracy.
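As a purely hypothetical illustration of what 'appropriate estimates of accuracy' involve (the counts are invented, not figures from the PCAST Report or any black-box study), a false positive rate and a simple measure of its uncertainty might be computed as follows:

```python
# Hypothetical sketch: estimating a false positive rate from black-box
# validation trials. Counts are invented for illustration only.
import math

different_source_trials = 400  # comparisons where the samples did not share a source
false_positives = 6            # a 'match' was nonetheless reported

fpr = false_positives / different_source_trials

# Rough 95% upper bound using a normal approximation; more careful interval
# estimates are possible and preferable for small counts.
upper_bound = fpr + 1.96 * math.sqrt(fpr * (1 - fpr) / different_source_trials)

print(f"Observed false positive rate: {fpr:.3f}")          # 0.015
print(f"Approximate 95% upper bound:  {upper_bound:.3f}")  # about 0.027
```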
53
See Gary Edmond et al, ‘How to Cross-Examine Forensic Scientists: A Guide for Lawyers’
(2014) 39 Australian Bar Review 174.
analyse fingerprints that have been left in blood as compared to those made
by sebaceous oils.54 Thus, the forensic scientist may actually be claiming
accuracy in development or analysis of oil-to-ink rather than blood-to-ink
comparisons. Ultimately, adequate specification of the claim is central to the
ability to demonstrate verifiable expertise relevant to a fact in issue.
Skilled intuition is another widely discussed feature of expertise.55 This
reflects the observation that the judgement and decision-making of experts
usually becomes an automatic process of recognition (ie, identifying a familiar
problem and recalling an appropriate response) that may be difficult to
articulate.56 Importantly, both experts and non-experts also engage in
intuitive but imperfect automatic responses, susceptible to heuristics and
biases that undermine outcome success.57 This makes the presence of intuition
and the inability to articulate the reasoning behind decision-making com-
monplace in the judgement of experts, but also in non-experts.58 Whether the
intuition was skilled or imperfect can ultimately only be established by
recourse to the performance criterion discussed above.
54
Bonnie Marchant and Christina Tague, ‘Developing Fingerprints in Blood: A Comparison of
Several Chemical Techniques’ (2007) 57 Journal of Forensic Identification 76, 76–7.
55
See, eg, Kahneman and Klein, above n 1.
56
This is an issue for a system that depends primarily on transparency and questioning to
evaluate the claimed expertise. Consider the reasoning in Davie v Magistrates of Edinburgh
[1953] SC 34 (28 November 1952) 40 (Lord Cooper P).
57
See generally Itiel Dror, ‘Cognitive Bias in Forensic Science’ in Mark Licker et al (eds),
McGraw-Hill Yearbook of Science and Technology 2012 (McGraw-Hill, 2012) 43; Itiel E Dror,
‘The Paradox of Human Expertise: Why Experts Get it Wrong’ in Narinder Kapur et al (eds),
The Paradoxical Brain (Cambridge University Press, 2011) 177.
58
There are species of expert who have abilities but may not possess relevant knowledge or
insight. Some sport stars might be able to do extraordinary things without being able to
explain them. Similarly, the ability to remember faces seems to have a genetic basis. Interestingly, legal admissibility rules and practices (eg, cross-examination) might have
difficulty accommodating tacit and intuitive abilities. On ‘super-recognisers’, see generally
Richard Russell, Brad Duchaine and Ken Nakayama, ‘Super-Recognizers: People with Ex-
traordinary Face Recognition Ability’ (2009) 16 Psychonomic Bulletin and Review 252;
Anna K Bobak, Peter J B Hancock and Sarah Bate, ‘Super-Recognisers in Action: Evidence
from Face-Matching and Face Memory Tasks’ (2016) 30 Applied Cognitive Psychology 81;
David J Robertson et al, ‘Face Recognition by Metropolitan Police Super-Recognisers’ (2016)
11(2) PloS ONE 1. See also the law-related discussion in Gary Edmond and Natalie Wortley,
‘Interpreting Image Evidence: Facial Mapping, Police Familiars and Super-Recognisers in
England and Australia’ (2016) 3 Journal of International and Comparative Law (forthcoming).
For a general discussion of tacit knowledge, see Michael Polanyi, The Tacit Dimension (An-
chor Books, 1967); Harry Collins, Tacit and Explicit Knowledge (University of Chicago
Press, 2010).
59
Kahneman and Klein, above n 1, 519.
60
On comparing passport images, see White et al, above n 37.
61
Kahneman and Klein, above n 1, 520.
62
Ibid 523.
63
Robin M Hogarth, Tomás Lejarraga and Emre Soyer, ‘The Two Settings of Kind and Wicked
Learning Environments’ (2015) 24 Current Directions in Psychological Science 379.
64
Ibid; Robin M Hogarth, Educating Intuition (University of Chicago Press, 2001) 90–1,
217–19.
65
Hogarth, Lejarraga and Soyer, above n 63.
66
See David L Faigman, John Monahan and Christopher Slobogin, ‘Group to Individual (G2i)
Inference in Scientific Expert Testimony’ (2014) 81 University of Chicago Law Review 417.
67
See Michael P Kortan et al, ‘FBI Testimony on Microscopic Hair Analysis Contained Errors
in at Least 90 Percent of Cases in Ongoing Review’ (Media Release, Federal Bureau of Inves-
tigation, 20 April 2015) <https://www.fbi.gov/news/pressrel/press-releases/fbi-testimony-on-
microscopic-hair-analysis-contained-errors-in-at-least-90-percent-of-cases-in-ongoing-
review>.
68
Erica Beecher-Monas, ‘Reality Bites: The Illusion of Science in Bite-Mark Evidence’ (2009) 30
Cardozo Law Review 1369, 1383–4; Mark Page, Jane Taylor and Matt Blenkin, ‘Expert Inter-
pretation of Bitemark Injuries — A Contemporary Qualitative Study’ (2013) 58 Journal of
Forensic Sciences 664; Mary A Bush, Howard I Cooper and Robert B J Dorion, ‘Inquiry into
the Scientific Basis for Bitemark Profiling and Arbitrary Distortion Compensation’ (2010) 55
Journal of Forensic Sciences 976; Michael J Saks, ‘Forensic Bitemark Identification: Weak
Foundations, Exaggerated Claims’ (2016) 3 Journal of Law and the Biosciences 538.
69
Very recently, the PCAST Report concluded that foundational validity has not been
established for complex-mixture DNA analysis, bite mark analysis, firearms analysis, foot-
wear analysis or hair analysis: PCAST Report, above n 49, ch 5.
70
Ibid 52.
71
For a detailed discussion of validity, see Thomas D Cook and Donald T Campbell, Quasi-
Experimentation: Design and Analysis Issues for Field Settings (Rand McNally College Pub-
lishing, 1979) ch 2. The PCAST Report defines the foundational validity of a technique in
terms of its reliability, repeatability, reproducibility and accuracy: ibid 47–8.
72
See Itiel E Dror, ‘A Hierarchy of Expert Performance’ (2016) 5 Journal of Applied Research in
Memory and Cognition 121.
73
PCAST Report, above n 49, 51–2.
IV LEGAL APPROACHES TO EVALUATING EXPERTISE
Lawyers and judges are vitally concerned with the actual skills, ability and
knowledge possessed by individuals who might be recognised and relied upon
as experts. Legal institutions care deeply about the rectitude of verdicts and
the expert opinions upon which they are increasingly based. Such commitments are consistent with the HB definition of expertise, concerned as it is
with the objective accuracy and optimal performance of judgements and
decisions. Furthermore, those admitted as experts are expected to perform
substantially better than ordinary persons (such as judges and jurors) in order
for their opinions to be considered relevant, admissible and able to assist in
fact-finding.78 Under s 135 of the uniform Evidence Acts courts may be
sensitive to the resource implications of adducing, admitting and contesting
evidence by balancing these costs against the benefits of the opinion. Sections 135 and 137 should sensitise courts to the dangers and risks to the
accused (and others) flowing from the admission of opinions that are not
demonstrably expert (and therefore not known to be probative).79
74
Obtaining the appropriate kinds of information is just the beginning. Gigerenzer suggests
that many highly trained medical doctors, including specialists, struggle with the reported
results of experiments and scientific research even in their domain: Gerd Gigerenzer, Simply
Rational: Decision Making in the Real World (Oxford University Press, 2015) ch 5.
75
National Research Council (US), above n 49, 7–8.
76
PCAST Report, above n 49.
77
For more information about designing human validation/performance trials, see Martire and
Kemp, above n 45; PCAST Report, above n 49, 47–54.
78
The relevance of expert testimony (see uniform Evidence Acts ss 55, 56) is discussed in
Edmond et al, ‘How to Cross-Examine Forensic Scientists’, above n 53.
79
Logically, with most scientific, medical and technical forms of evidence there is a need to know how probative an opinion is before you can begin to consider its 'highest' probative value. Otherwise, the attribution of a highest probative value is speculative. It might appear reasonable, but it remains just a guess.
It is our contention that where the court has not assessed the purported expertise of the witness against an appropriate performance criterion, its relevance and probative value, along with efficiency concerns and even the rectitude of the verdict, might not be susceptible to rational evaluation.
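The dependence of probative value on validation evidence can be made concrete with a purely hypothetical sketch (our illustration; the figures are invented and are not drawn from IMM, the PCAST Report or any particular study). The strength of a reported match is commonly expressed as a likelihood ratio, and neither its numerator nor its denominator can be known without validation data:

```python
# Hypothetical sketch: how validation data feed into probative value.
# The rates below are invented for illustration only.
sensitivity = 0.95          # P(examiner reports 'match' | same source)
false_positive_rate = 0.05  # P(examiner reports 'match' | different source)

likelihood_ratio = sensitivity / false_positive_rate
print(f"Likelihood ratio for a reported match: {likelihood_ratio:.0f}")  # 19

# Without validation studies neither rate is known, so the probative value
# of the reported match, even taken 'at its highest', cannot be estimated;
# it can only be guessed.
```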
80
Highly specific claims might be more common in the forensic sciences (eg, DNA profiling
and fingerprint comparison) than forensic pathology or medicine more generally. Moreover,
procedures are often developed to constrain both access to information and the level
of discretion.
81
See generally Edmond, ‘Legal versus Non-Legal Approaches to Forensic Science Evidence’,
above n 15.
82
Ibid 24–5.
83
For a discussion of the limits of proficiency tests commonly used and relied upon in the
forensic sciences, see PCAST Report, above n 49, 68; National Research Council (US),
above n 49, 206–8. Seemingly oblivious to limitations and the written advice of commercial
proficiency test providers, the Australian National Institute of Forensic Sciences — an organi-
sation primarily funded by police — recently attempted to calculate error rates for a
variety of forensic procedures using the results of simplistic proficiency tests: Australia New
Zealand Policing Advisory Agency, ‘NIFS Presentations’, ANZPAA NIF News (Melbourne),
October 2016, 6.
84
Daniel Kahneman and Shane Frederick, ‘Representativeness Revisited: Attribute Substitution
in Intuitive Judgment’ in Thomas Gilovich, Dale Griffin and Daniel Kahneman (eds), Heuris-
tics and Biases: The Psychology of Intuitive Judgment (Cambridge University Press, 2002) 49.
For an insightful discussion of the role of substitution in judicial decision-making, see Emma
Cunliffe, ‘Judging, Fast and Slow: Using Decision-Making Theory to Explore Judicial Fact
Determination’ (2014) 18 International Journal of Evidence and Proof 139.
85
They are also confident about the effectiveness of legal safeguards, even though safeguards
have not produced widespread legal awareness of systemic problems with many
forensic sciences.
86
It is questionable whether such opinions are based on ‘specialised knowledge’.
87
See, eg, Commonwealth/Northern Territory, Royal Commission of Inquiry into Chamberlain
Convictions, Report of the Commissioner; The Hon Mr Justice T R Morling (1987) ch 16;
Gilham v The Queen (2012) 224 A Crim R 22; Wood v The Queen (2012) 84 NSWLR 581;
Acting Justice Brian Martin, ‘Inquiry into the Conviction of David Harold Eastman for the
Murder of Colin Stanley Winchester’ (Report of the Board of Inquiry, 29 May 2014); East-
man v DPP (ACT) [No 2] (2014) 9 ACTLR 178; R v Keogh [No 2] (2014) 121 SASR 307.
88
National Research Council (US), On The Theory and Practice of Voice Identification (National
Academies Press, 1979); National Research Council (US), Forensic Analysis Weighing Bullet
Lead Evidence (National Academies Press, 2004). See also above n 50.
89
We have italicised knowledge because the dentists doing bite mark comparison had
specialised knowledge. The problem is that their knowledge of anatomy and ability to man-
age the health of mouths did not transfer to discriminating between, and matching, bite
marks — especially on bodies.
90
See David A Harris, Failed Evidence: Why Law Enforcement Resists Science (New York
University Press, 2012) 164–7; Jennifer L Mnookin et al, ‘The Need for a Research Culture in
the Forensic Sciences’ (2011) 58 UCLA Law Review 725.
91
Validation studies and performance testing are not always required for certification or
accreditation. According to the PCAST Report, above n 49, 55:
Importantly, good professional practices — such as the existence of professional societies,
certification programs, accreditation programs, peer-reviewed articles, standardized pro-
tocols, proficiency testing, and codes of ethics — cannot substitute for actual evidence of
scientific validity and reliability.
92
There are exceptions, such as the United Kingdom (‘UK’)’s response to injured children in
R v Harris [2006] 1 Cr App R 5. Cf Emma Cunliffe, Murder, Medicine and Motherhood (Hart
Publishing, 2011).
93
See, eg, Aytugrul v The Queen (2012) 247 CLR 170; Justice Peter McClellan and Amber
Doyle, ‘Legislative Facts and Section 144 — A Contemporary Problem?’ (2016) 12 Judicial
Review 421. Cf Gary Edmond, David Hamer and Emma Cunliffe, ‘A Little Ignorance is a
Dangerous Thing: Engaging with Exogenous Knowledge Not Adduced by the Parties’ (2016)
25 Griffith Law Review 1.
94
See, eg, Li v The Queen (2003) 139 A Crim R 281, 294–5 [106]–[111] (Ipp JA). Worth noting
is also the High Court’s response to independent research in the New South Wales Court of
Criminal Appeal by McClellan CJ at CL in Aytugrul v The Queen (2012) 247 CLR 170.
95
See Faigman, Monahan and Slobogin, above n 66. We note that pharmaceuticals and therapeutics are routinely tested and used notwithstanding the inability to test them on every type of potential patient: Steven Epstein, Inclusion: The Politics of Difference in Medical Research (University of Chicago Press, 2007). We also note that, putting aside regulatory requirements, the failure to test would expose manufacturers to negligence and liability actions.
96
Indeed, there are dangers in studying particular fact scenarios or cases, or trying to
reproduce, in order to prove or disprove, the circumstances in a specific case.
97
See generally Gigerenzer, above n 74.
98
As well as the determination of probative value ‘at its highest’: IMM (2016) 257 CLR 300,
314 [47] (French CJ, Kiefel, Bell and Keane JJ).
99
The PCAST Report refers to (a) as foundational validity and (b) as validity as applied: PCAST
Report, above n 49, 43.
is partial and distorted.100 Similarly, where the study did not allow the use of
enhancing tools or did not include verification, one could infer that performance would not be worse if these were available.101 Thus, notwithstanding potential limitations, validation studies and other information about performance can and should mediate the admission and evaluation of expert opinion in individual cases.102 Experimentally derived evidence, as the study of Australian passport officers revealed, will almost always be better than judicial (or juror) impressions of apparent plausibility and witness credibility.103
100
Jason M Tangen, Matthew B Thompson and Duncan J McCarthy, ‘Identifying Fingerprint
Expertise’ (2011) 22 Psychological Science 995.
101
Verification is a part of the Analysis, Comparison, Evaluation and Verification (‘ACE-V’)
process employed by most fingerprint bureaus. The process as implemented has been sub-
jected to criticism in both the National Research Council and National Institute of Standards
and Technology reports: National Research Council (US), above n 49; National Institute of
Standards and Technology, above n 50.
102
See Gary Edmond, Matthew B Thompson and Jason M Tangen, ‘A Guide to Interpreting
Forensic Testimony: Scientific Approaches to Fingerprint Evidence’ (2014) 13 Law, Probabil-
ity and Risk 1.
103
White et al, above n 37.
104
Gary Edmond and Mehera San Roque, ‘The Cool Crucible: Forensic Science and the Frailty
of the Criminal Trial’ (2012) 24 Current Issues in Criminal Justice 51.
105
Authoritative scientific and technical organisations and entities include PCAST (US), the
National Academy of Sciences (US), the National Institute of Standards and Technology
(US), the Royal Society (UK), and the Forensic Science Regulator (England and Wales). Cf
the Australian approach embodied in Tang (2006) 65 NSWLR 681; IMM (2016) 257
CLR 300.
106
These approaches are accentuated by changes (mostly reductions) to the resourcing of
the defence.
107
These are frequently characterised by judges on appeal as tactical decisions, even though
lawyers (and judges) often do not know better and are not resourced.
108
Warnings and directions are not known to cure problems with evidence.
109
See especially Gary Edmond, ‘Forensic Science Evidence and the Conditions for Rational
(Jury) Evaluation’ (2015) 39 Melbourne University Law Review 77.
V REMEDIATING OUR JURISPRUDENCE
Misdirected reliance on legal proxies rather than scientific indicia of expertise
could be corrected if lawyers, trial judges and courts of appeal augmented the
jurisprudence around ‘specialised knowledge’ from Honeysett v The Queen
(‘Honeysett’).111 According to the High Court:
‘Specialised knowledge’ is to be distinguished from matters of ‘common
knowledge’. Specialised knowledge is knowledge which is outside that of per-
sons who have not by training, study or experience acquired an understanding
of the subject matter. It may be of matters that are not of a scientific or technical
kind and a person without any formal qualifications may acquire specialised
knowledge by experience. However, the person’s training, study or experience
must result in the acquisition of knowledge. The Macquarie Dictionary defines
‘knowledge’ as ‘acquaintance with facts, truths, or principles, as from study or
investigation’ … and it is in this sense that it is used in s 79(1). The concept is
captured in Blackmun J's formulation in Daubert v Merrell Dow Pharmaceuticals Inc: 'the word "knowledge" connotes more than subjective belief or unsupported speculation … [It] applies to any body of known facts or to any body of
ideas inferred from such facts or accepted as truths on good grounds.’112
110
For some insight into the impact of reliability standards in criminal proceedings in the US in
the wake of the NAS report, see Cole and Edmond, above n 50. In Canada, appellate courts
have read some of the Daubert-style requirements down: R v Abbey (2009) 97 OR (3d) 330;
R v Aitken [2012] BCCA 134 (2 April 2012); Emma Cunliffe and Gary Edmond, ‘Gaitkeeping
in Canada: Mis-Steps in Assessing the Reliability of Expert Testimony’ (2014) 92 Canadian
Bar Review 327.
111
(2014) 253 CLR 122.
112
Ibid 131–2 [23] (emphasis in original) (citations omitted).
113
509 US 579 (1993).
114
526 US 137 (1999).
115
The original term in the United States Federal Rules of Evidence was ‘scientific, technical or
other specialized knowledge’: Federal Rules of Evidence, 28 USC r 702 (1975).
116
(2014) 253 CLR 122, 131 [23] (emphasis altered).
117
Ibid (emphasis altered).
118
Forensic science institutions and courts should also direct their attention to the dangers
created by human factors: see, eg, D Michael Risinger et al, ‘The Daubert/Kumho Implica-
tions of Observer Effects in Forensic Science: Hidden Problems of Expectation and Sugges-
tion’ (2002) 90 California Law Review 1; Bryan Found, ‘Deciphering the Human Condition:
The Rise of Cognitive Forensics’ (2015) 47 Australian Journal of Forensic Sciences 386.
119
Honeysett (2014) 253 CLR 122, 139 [48]. The question of whether he was an ad hoc expert, from having spent time looking at the images, rather than from any demonstrated performance, was strategically avoided by the court yet remains potentially open to future prosecutors. Use of the category ad hoc expert directs no attention to evidence of expertise.
proficiency, the Court excluded his opinions about the images.120 An explanation that was consistent with this approach, though of far greater service to
lawyers, experts and judges, would have been to explain that the anatomist’s
procedure had not been validated and there was no evidence that his performance was superior to that of the tribunal of fact.121 Significantly, in the absence of
knowledge about the value of his procedure and conclusions, we cannot
say that the opinion was based on ‘specialised knowledge’ because it was
not linked to ‘training, study or experience’ by the required ‘study
or investigation’.122
Earlier, in civil proceedings, the High Court expressed a willingness to use
familiar proxies to avoid the need to delve into contests around ‘knowledge’:
The way in which s 79(1) is drafted necessarily makes the description of the requirements very long. But that is not to say that the requirements cannot be
met in many, perhaps most, cases very quickly and easily. That a specialist medical practitioner expressing a diagnostic opinion in his or her relevant field of
specialisation is applying ‘specialised knowledge’ based on his or her ‘training,
study or experience’, being an opinion ‘wholly or substantially based’ on that
‘specialised knowledge’, will require little explicit articulation or amplification
once the witness has described his or her qualifications and experience, and has
identified the subject matter about which the opinion is proffered.123
This is a fairly recent example of a senior court suggesting that when it comes
to ‘established’ disciplines, there may be no need to direct attention to the
question of whether the individual possesses the requisite expertise. The High
Court seemed to indicate, in this civil appeal, that the traditional proxies
suffice. Weakly diagnostic social indicia are said to be sufficient for the task.
120
Edmond, ‘A Closer Look at Honeysett’, above n 38, 300. Revealingly, the decision does not
address the question of the anatomist’s actual abilities: at 297–8.
121
See especially Smith v The Queen (2001) 206 CLR 650, 655 [11] (Gleeson CJ, Gaudron,
Gummow and Hayne JJ):
The fact that someone else has reached a conclusion about the identity of the accused and
the person in the picture does not provide any logical basis for affecting the jury’s assess-
ment of the probability of the existence of that fact when the conclusion is based only on
material that is not different in any substantial way from what is available to the jury.
In the absence of formal evaluation, it is not possible to determine whether anatomists,
physical anthropologists or military intelligence officers can outperform ordinary persons.
122
Honeysett (2014) 253 CLR 122, 131 [23]. See also uniform Evidence Acts s 79(1).
123
Dasreef (2011) 243 CLR 588, 604 [37] (French CJ, Gummow, Hayne, Crennan, Kiefel and
Bell JJ) (emphasis added).
124
See, eg, the Cochrane collaboration and its systematic review of studies to provide a
foundation for evidence-based medicine: Homepage (2017) Cochrane
<http://www.cochrane.org>.
125
See ibid; Honeysett (2014) 253 CLR 122; HG v The Queen (1999) 197 CLR 414.
comprehension. It may be that, to the extent that they are willing to admit
some expert opinions, trial judges may only need to moderate the strength of
the expert claim.126 This may require probabilistic forms of expression and
opinions that incorporate or acknowledge limitations, uncertainty and the
ubiquitous threat of error. This is the kind of information that studies focusing
on a performance criterion will generate.
126
Tang (2006) 65 NSWLR 681 is not a good guide here, because we should know that a putative
expert performs better than jurors before we consider moderating their opinion in order to
allow them to testify. Perceived utility or need cannot overcome the requirement that opin-
ions must be based on knowledge: see Simon A Cole, ‘Splitting Hairs? Evaluating “Split
Testimony” as an Approach to the Problem of Forensic Expert Evidence’ (2011) 33 Sydney
Law Review 459.
127
See Steven Rares, ‘Using the “Hot Tub”: How Concurrent Expert Evidence Aids Understand-
ing Issues’ (Paper presented at New South Wales Bar Association Continuing Professional
Development Seminar, Bar Association Common Room, 23 August 2010); Gary Edmond,
‘Merton and the Hot Tub: Scientific Conventions and Expert Evidence in Australian Civil
Procedure’ (2009) 72 Law and Contemporary Problems 159.
128
Ian Freckelton et al, Expert Evidence and Criminal Jury Trials (Oxford University Press, 2016)
54–5 [3.34]–[3.35].
es may not bring that oversight to light. Thus, where procedures have not been
appropriately evaluated, legal procedures that do not engage with the fundamentals of expertise would seem to perpetuate traditional means of admitting
and evaluating expert opinion.129 We should be careful not to mistake
institutional efficiencies for enhanced responses to expertise. Concurrent
evidence has considerable potential, but where procedures and expert claims
have not been appropriately evaluated, it cannot overcome that lacuna. It
may help to identify such oversights and limitations, but that is yet to
be demonstrated.
Finally, when thinking about procedural reform and conventional trial
safeguards, it is increasingly significant that most persons accused of a
criminal offence will not have the benefit of expert assistance, even if they are
tried and instruct their counsel to contest the expert opinion evidence
assembled against them.130 Not only should glib assertions and uncritical commitment to the value of conventional trial safeguards be avoided; in the context of an inadequately resourced criminal justice system and impecunious defendants, the need for prosecutors to proffer demonstrably reliable forensic science evidence is also more important than ever before.
VI CONCLUSION
Legal approaches to expertise in criminal proceedings are misguided. They
rely heavily on models of expertise that arose in the Enlightenment, where the
dominant forms of relevant knowledge, namely natural philosophy, medicine
and early manifestations of engineering, were predominantly gentlemanly
pursuits.131 Legal recourse to expert evidence and its gradual expansion
through recognition of emerging fields, areas of specialisation and experience,
enabled courts to admit an ever-expanding array of putatively expert opinions. Accommodating legal responses were undermined as permissive rules,
initially extended to the social equals of judges (eg, university trained elites),
129
Where all parties are represented, a trial judge might think it fair to admit the evidence even
if there might be validity and reliability problems. On the practical limits of trial safeguards,
see Edmond and San Roque, ‘The Cool Crucible’, above n 104.
130
Recent studies suggest that only the state calls expert witnesses in most cases: see Freckelton
et al, above n 128, 123 [6.23].
131
These conceits are still visible, for example, in continued legal privileging of psychiatry over
clinical or forensic psychology. Historically, the role of trust between university-educated
elites seems to have been a feature of legal engagement: see generally Steven Shapin, A Social
History of Truth: Civility and Science in Seventeenth Century England (University of Chicago
Press, 1994) ch 1.
were used to obtain opinions, however marginal and inexpert, that supported
partisan interests. It was not long after the introduction of our modern
instantiation of the expert opinion rule in the late 18th century that judicial
concerns about expert venality and partisanship emerged fully-fledged.132
In general, though especially where opinion evidence is challenged, courts
should be expecting to see specialised knowledge demonstrated through study
or investigation. That is, they should expect the party adducing an expert opinion to bring scientific literatures that support the procedure (eg, validation and performance studies) and the specific type of application. For many procedures, formal scientific validation is the type of study that produces relevant 'specialised knowledge'. In relation to most forensic science and
medicine evidence, opinions should be based on such knowledge. This
knowledge should exist separately from the expert, should be publicly
available, and should be made available to the court, ideally included in expert
reports.133 Courts should construe the need for ‘training, study or experience’
to confirm the specific witness’s ability to meet expert performance standards
and competently use valid procedures relevant to their claims.
Inattention to scientifically accepted criteria for identifying expertise (re-
quiring superior performance, either relative to an objective standard or
compared with novices or lay persons) has meant that, to various degrees, the
opinions received in legal proceedings may be speculative, perhaps mere ipse
dixit, and unrepresentative of what is known beyond the courtroom. Standards for the admission of expert opinion evidence in criminal proceedings are
excessively liberal. We accept that there may be institutional reasons for
tempering the strictness of requirements around admissibility in some
circumstances, but these do not apply to opinion evidence adduced by
the state.
In conclusion, we would make two emphatic points. First, when it comes
to expert opinion evidence adduced by the state in criminal proceedings,
there is a need to attend to indicia of expert performance. Where the evidence
is of a scientific, medical or technical nature there appear to be very few
credible reasons for exempting experts from the need to identify the scientific
research supporting their practices and claims and, where appropriate,
evidence of their own proficiency or ability in the specific domain. Currently,
no jurisdiction in Australia requires such information as part of its admissibility
132
See, eg, Thorn v Worthing Skating Rink Co (1876) 4 Ch D 415, 416. For a more expansive
discussion, see Golan, above n 6.
133
See Gary Edmond, Kristy Martire and Mehera San Roque, ‘Expert Reports and the Forensic
Sciences’ (2017) 40 University of New South Wales Law Journal (forthcoming).
134
Cf Tuite [2015] VSCA 148 (12 June 2015). Tuite was the only appellate decision in Australia
stipulating that forensic science evidence should be produced using validated procedures,
albeit by virtue of s 137 of the uniform evidence legislation. In the aftermath of IMM (2016)
257 CLR 300, 306 [17], 314 [50], the status of Tuite and the requirement that forensic science
be valid and reliable is uncertain. As this article explains, it is certainly arguable that forensic
science evidence, most conspicuously opinions derived via the feature comparison methods,
is ‘weak’ or ‘unconvincing’ where the procedures have not been formally validated and
actual expertise has not been demonstrated. See also Edmond, ‘Icarus and the Evidence Act’,
above n 4.