



Statistical analysis of ERA and the quality of research in Australian universities

Nethal K. Jajo and Shelton Peiris
School of Mathematics and Statistics, The University of Sydney, Sydney, Australia

Received 28 February 2020
Revised 30 May 2020
Accepted 31 May 2020

Abstract
Purpose – This paper explores the impact of the Excellence in Research for Australia (ERA) process in boosting research quality at Australian universities. It presents an analysis of a policy initiative, ERA, and compares the results of its measures as calculated in 2018 with those observed in the previous implementations in 2015 and 2012.
Design/methodology/approach – Two approaches are implemented in this study: an Excellence Index (EI) score analysis for both cited and peer-reviewed four-digit FoR codes, and a citations-per-paper (CPP) analysis for the cited four-digit FoR codes.
Findings – The authors show that higher education providers' (HEPs') performance improved in 27% of the cited FoR codes in ERA 2018 compared with ERA 2015, and in 80% of the cited FoR codes in ERA 2015 compared with ERA 2012. A reason for this apparent research improvement may be that universities are simply getting better at reporting outcomes using ERA-driven criteria. Moreover, even though EI scores steadily increased across ERA rounds, there is no statistically significant evidence of improvement in research quality between two consecutive ERA rounds.
Originality/value – These findings underscore the importance of further research and deeper analysis using other complementary variables, such as Relative Citation Impact (RCI), citation centiles and the distribution of papers based on the centiles and RCI classes, together with more transparency and data availability from the Australian Research Council (ARC) site. Given the introduction of the Engagement and Impact Assessment by the ARC to accompany the ERA exercise in 2018, the authors expect that these findings will be useful, as well as prompting further debate and scholarship on the relevance and value of the ERA process.
Keywords Research quality, FoR codes, Citation analysis, Statistical analysis, Evaluation research,
Hypothesis testing and academic staff publishing
Paper type Research paper

Introduction
Some national governments, including those in the UK and Australia are measuring the
quality of research conducted by their universities. The UK exercise is known as the Research
Quality Framework and has been implemented since the 1980s. From 2010, Australia uses the
ERA exercise. Both Australian and UK systems were applied at the institutional level.
The first full round of ERA in Australia was conducted in 2010 and the results published in 2011 (Australian Research Council, 2017), followed by three rounds in 2012, 2015 and 2018. ERA assessments shed light on the quality of research outputs (i.e. books, chapters, journal articles, conference papers) produced by Australian universities over a six-year period (up to two years prior to the ERA year) and are based on a census from the previous year of the ERA process. ERA assesses HEPs' performance in 22 two-digit FoR codes. The FoR codes form a hierarchical classification with three levels: divisions (two digits), groups (four digits) and fields (six digits). Each division is based on a broad discipline grouping, and the groups within each division are taken as those which share the same broad methodology, techniques and/or perspectives as other entities within the division. The two-digit codes consist of collections of related four-digit FoR codes. The ARC provides no specification of how the ranking at one level in the hierarchy is sub-divided, that is, of how a four-digit code contributes to the rating of its corresponding two-digit code, even though each FoR code has a two-digit code with four-digit sub-codes. A university's submission for each FoR code is called a "Unit of Evaluation" (UoE), and every UoE is rated from 1 to 5, with 1 being the worst and 5 the best. The qualitative descriptors for these categories are: 1 = well below world standard, 2 = below world standard, 3 = at world standard, 4 = above world standard and 5 = well above world standard.
The quality of research outputs is measured by ERA in two ways, depending on the discipline:
(1) Citations: for the Natural Sciences, Technology, Engineering and Mathematics (STEM) disciplines, citation analysis is used, based on metrics provided by Elsevier's Scopus in 2010, 2012 and 2015, and by Clarivate Analytics in 2018.
(2) Peer review: for the Humanities, Arts and Social Sciences (HASS) disciplines, a suitable peer-review process is used.
The peer-review process is time-consuming, as it requires at least two reviewers to read and qualitatively evaluate every submitted output. As such, the impact of the ERA peer-review process on improving Australian universities' research outputs is not included in this analysis.
ERA uses several bibliometric profiles as citation analysis indicators, for both HEPs in Australia and the world. These are as follows:
(1) CPP, Relative Citation Impact (RCI) and RCI classes;
(2) World citation centile thresholds: the number of citations required to be in the top 1%, 5%, 10%, 25% and 50% of the world for a FoR code for each of the reference periods, and the distribution of papers based on the world centile thresholds; and
(3) Distribution of papers against the RCI classes: the number of articles belonging to particular RCI bands (termed RCI classes). The ARC used seven RCI classes for ERA: Class 0: outputs with RCI = 0; Class I: 0 < RCI ≤ 0.79; Class II: 0.8 < RCI ≤ 1.19; Class III: 1.2 < RCI ≤ 1.99; Class IV: 2.0 < RCI ≤ 3.99; Class V: 4.0 < RCI ≤ 7.99; Class VI: RCI ≥ 8.0. These profiles are designed to be complementary and must be considered as a set (see ARC, 2010, 2012, 2015, 2018); a small illustrative sketch follows this list.
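As an illustration of how an output's RCI maps to these classes, the following minimal Python sketch (our own illustration, not ARC code) bins an RCI value, rounded to two decimals as in the ARC profiles, into the seven classes:

```python
# Illustrative helper (not ARC code): bin a two-decimal RCI value into
# the seven ARC RCI classes listed above.
def rci_class(rci: float) -> str:
    if rci == 0:
        return "Class 0"
    for upper, label in [(0.79, "Class I"), (1.19, "Class II"),
                         (1.99, "Class III"), (3.99, "Class IV"),
                         (7.99, "Class V")]:
        if rci <= upper:
            return label
    return "Class VI"

print(rci_class(1.00))  # Class II
print(rci_class(8.25))  # Class VI
```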
The ARC definitions of CPP and RCI can be summarised as follows. For a particular publication year $y$ and a given FoR code, the CPP (Australia or world) is calculated by:

$$\mathrm{CPP}_{(y,\mathrm{FoR})} \;=\; \frac{\sum_{(y,\mathrm{FoR})} \text{Number of Citations}_{(y,\mathrm{FoR})}}{\sum_{(y,\mathrm{FoR})} \text{Number of Publications}_{(y,\mathrm{FoR})}}$$

and the corresponding RCI (Australia or world) for a particular FoR code is:

$$\mathrm{RCI}_{\mathrm{FoR}} \;=\; \sum_{y} \frac{\text{Number of Citations}_{(y,\mathrm{FoR})}}{\mathrm{CPP}_{(y,\mathrm{FoR})}}$$

If an article has been assigned multiple FoR codes, then its citation count is multiplied by the assigned apportionment for each of these FoRs.
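To make these definitions concrete, here is a minimal Python sketch (made-up numbers, not ARC data) that computes the CPP per publication year and then the RCI of a hypothetical FoR code against a benchmark CPP, following the formulas above literally:

```python
# Worked toy example of the CPP and RCI formulas above (made-up data).
counts = {  # citations and publications per publication year, one FoR code
    2013: {"citations": 120, "publications": 30},
    2014: {"citations": 95, "publications": 25},
    2015: {"citations": 60, "publications": 20},
}
benchmark_cpp = {2013: 5.0, 2014: 4.2, 2015: 3.1}  # e.g. world CPP per year

# CPP: total citations over total publications in each (year, FoR) cell.
cpp = {y: c["citations"] / c["publications"] for y, c in counts.items()}

# RCI: citations divided by the benchmark CPP, summed over publication years.
rci = sum(counts[y]["citations"] / benchmark_cpp[y] for y in counts)

print(cpp)            # {2013: 4.0, 2014: 3.8, 2015: 3.0}
print(round(rci, 2))  # 65.97
```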
In the literature, many researchers have provided alternative analyses, together with transparent and authentic perspectives, on how to identify the performance of educational research in Australia, while other researchers have focussed on the strategies implemented by universities to improve their performance in ERA. For example, Perry (2018) suggested using bibliometric data as an alternative to ERA in assessing the performance of educational research in Australian universities. This author claimed that ERA assessments favour large entities and disadvantage smaller ones. Perry further suggested that the current ERA peer-review process may not accurately reflect the performance of educational research in Australia. Harrison et al. (2013) argue that the efforts taken by universities to build research capacity are likely to remain highly competitive, and that the focus on individuals rather than their departments/schools creates an increasingly pervasive culture of accountability. Against this discourse of accountability and the accompanying loss of autonomy and creativity, they proposed that academics at all educational institutions must actively engage in "community research". Their paper concluded with interventions designed to build a high-quality, analytically and theoretically intensive research culture for educational research in Australia. Crowe and Watt (2017) compared the ERA data collections in 2010, 2012 and 2015 and demonstrated an overall improvement in ratings across universities, with most improving or at least holding their ground. However, they are concerned that almost 40% of the assessed institutions still did not meet the benchmark of at least above world standard, and they noted some issues associated with the ERA data collections. Diezmann (2018) investigates the similarities and differences in the research strategies that universities employ to improve their performance in ERA.

Approach
This paper implements the following two methods to evaluate the performance of Australian universities over the three rounds of ERA in 2012, 2015 and 2018.
(1) The first method uses the EI, where the EI score is a measurement of the quality of research outputs submitted to an ERA round by HEPs. It is the sum of the weighted proportions of the HEPs' research outputs in each rated FoR code in that ERA round. Both the EI and the ERA ratings for cited and peer-reviewed four-digit FoR codes are then used to measure the quality of the research outputs submitted by HEPs to each ERA round.
(2) The second method compares the performance of Australian universities in the field of educational research across the four-digit cited FoR codes (92 FoR codes in total), using the world and Australian CPP benchmarks provided by the national reports for ERA 2012, 2015 and 2018.
Note that the results of ERA 2010 were not included in this analysis due to some criticisms of, and errors made during, that round; see Kellow (2012). In addition, a comprehensive analysis of ERA scores for all rounds using RCI, citation centiles, the distribution of papers based on centile thresholds and the distribution of papers against RCI classes has not been considered in this study due to the lack of availability of the relevant data from universities.

Analysis of excellence index (EI)
The EI and its calculation methodology were developed by the Department of Education (DoE) in 2010 (see DoE, 2015) as part of the process for determining 60% of the funding threshold for the research block grant called the Sustainable Research Excellence (SRE) component in 2010–2016 (see DoE, 2012 for details). In the 2015 process, the DoE considered each HEP and assigned an EI score to each FoR. The final EI score was then calculated for each HEP by creating a composite score of the ERA-assessed four-digit FoR codes. Each FoR code contributes to the EI score through the weighted ERA rating applied to the volume measure. The HEP's EI score is the sum of each FoR's contribution to the EI score. Weighted ratings are calculated for the HEP by replacing each ERA rating of 3 or above with the relevant weighting: ERA ratings for each FoR code are weighted such that the ratings 5, 4, 3, 2 and 1 have weightings of 7, 3, 1, 0 and 0, respectively. A weighted rating is not applicable to FoR codes that did not meet the output threshold for assessment. The contribution of each FoR code to the EI score is the share of outputs (volume measure) multiplied by the weighting for that FoR code. Once summed, the composite score is the university's EI score. In summary, the equation to calculate the EI score for a particular university is given by:
$$
\mathrm{EI} \;=\; \sum_{\mathrm{FoR}} \frac{\text{Research Outputs}_{\mathrm{FoR}}}{\sum \text{Research Outputs}} \times W,
\qquad
W = \begin{cases}
7 & \text{if the FoR code of the research output was rated } 5\\
3 & \text{if the FoR code of the research output was rated } 4\\
1 & \text{if the FoR code of the research output was rated } 3\\
0 & \text{if the FoR code of the research output was rated 2 or 1.}
\end{cases}
$$

The methodological approach used in this paper mimics the DoE's method of calculating the EI, replacing the HEP with the ERA round so as to evaluate an EI score for each ERA round in 2012, 2015 and 2018. In this approach we use both the cited and peer-reviewed four-digit FoR codes to calculate the EI scores. The EI score for each ERA round can therefore be considered a measurement of the research quality in that ERA round. For example, Table 1 reports the results assuming that there are only four FoR codes in total, so that an ERA round with 3,000-level codes can be assessed. Overall, the ERA round had 9.34 research output points across the four assessed FoR codes. The percentage of output points for each FoR code in the ERA round ranged from 11% to 86%. ERA ratings in each FoR code were weighted according to the scale 5 = 7, 4 = 3, 3 = 1; those with an ERA rating of 2 or 1 have no weighting. The contribution of each FoR code to the EI score is the share of outputs (volume measure) multiplied by the weighting for that FoR code. Once summed, the composite score, 2.81, is the ERA round's EI score.
The EI scores for each ERA round are given in Table 2, which shows a steady increase in the EI scores since 2012. Our concern is whether this increase demonstrates that Australian universities' research is improving due to the ERA process.
Summary statistics of the contribution-to-EI-scores variable (Table 3), together with its error bar plot (Figure 1), show that the error bars overlap and the variances are unequal. The overlap is a clue that the difference in contributions to EI scores is not statistically significant, and a suitable statistical test is required to draw a valid conclusion. The unequal variances and the large samples contributing to the EI scores for the three ERA rounds justify using the Welch t-test to confirm whether there is an improvement in Australian research quality resulting from the ERA process.
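As an illustration of this step, the following minimal sketch runs a Welch t-test in Python on synthetic placeholder data (not the ARC data analysed in this paper); SciPy's ttest_ind with equal_var=False implements the Welch variant:

```python
# Welch t-test sketch: per-FoR contribution-to-EI scores in two ERA rounds.
# The data here are synthetic placeholders, not ARC data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
contrib_2018 = rng.exponential(scale=0.005, size=500)
contrib_2015 = rng.exponential(scale=0.004, size=500)

# equal_var=False selects the Welch variant (no equal-variance assumption).
t_stat, p_value = stats.ttest_ind(contrib_2018, contrib_2015, equal_var=False)
print(f"Welch t = {t_stat:.3f}, p = {p_value:.4f}")
```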
Evidence from the data suggests that there is not much improvement in ERA 2018
compared to ERA 2015 and the same can be said about ERA 2015 compared to ERA 2012.

Table 1. ERA's EI score calculations, mimicking the method stated in the DoE paper

ERA round  FoR   Outputs points (a)  Volume measure (b = a/Σa)  Rank (c)  Weighting (d)  Contribution to EI score (b × d)
3,000      0801  0                   0.00                       5.00      7.00           0.00
3,000      0906  8                   0.86                       4.00      3.00           2.58
3,000      0913  0.34                0.04                       4.00      3.00           0.12
3,000      1005  1                   0.11                       3.00      1.00           0.11
Total            Σa = 9.34           100%                                                EI score = 2.81
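Readers who wish to verify the Table 1 arithmetic can do so with the following minimal Python sketch (our own illustration, mirroring the weighting scheme above; note that Table 1 rounds the volume measure to two decimals before weighting, which is reproduced here):

```python
# Reproducing the Table 1 worked example (illustrative helper, not DoE code).
WEIGHTS = {5: 7.0, 4: 3.0, 3: 1.0, 2: 0.0, 1: 0.0}

def ei_score(units):
    """units: (output_points, era_rating) pairs, one per assessed FoR code.
    The volume measure is rounded to two decimals, as printed in Table 1."""
    total = sum(points for points, _ in units)
    return sum(round(points / total, 2) * WEIGHTS[rating]
               for points, rating in units)

units = [(0, 5), (8, 4), (0.34, 4), (1, 3)]  # FoRs 0801, 0906, 0913, 1005
print(round(ei_score(units), 2))  # 2.81
```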
However, the evidence suggests that there is an improvement in ERA 2018 compared to ERA 2012; see Table 4 for more details.
Analysis of citations per paper (CPP)
The CPP benchmarks for HEPs and the world were retrieved from the materials published on the ARC website (http://www.arc.gov.au/era). Although the ARC uses CPP, RCI, citation centiles and the distribution of papers based on the centiles and RCI classes, we focus on CPP for two reasons: the lack of raw data about the HEPs' outputs, and the fact that RCI and the RCI classes are themselves derived from citations.

Table 2. EI scores for each ERA round

ERA round  EI score
2018       3.898
2015       3.210
2012       1.964

Table 3. Summary statistics for the contribution-to-EI-scores variable

ERA round  Min  1st quartile  Median  Mean   3rd quartile  Max    SD
2018       0    0             0       0.005  0.004         0.196  0.01
2015       0    0             0       0.004  0.002         1.161  0.04
2012       0    0             0       0.003  0.002         0.368  0.02

Figure 1. Error bar plots of the contribution-to-EI-score variable by ERA round
A comparison of ERA 2018 vs ERA 2012 and of ERA 2015 vs ERA 2012, using a paired t-test on all 92 cited FoR codes together, revealed that both the world and Australian CPP improved across those rounds. Comparing ERA 2018 to ERA 2015, however, there was not much improvement in either the world or the Australian CPP; see Table 5, Figure 2 and Figure 3.
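A minimal sketch of this paired comparison, using SciPy's ttest_rel on synthetic placeholder CPP values (not the ARC benchmarks), is as follows:

```python
# Paired t-test sketch: world CPP per cited FoR code in two ERA rounds.
# Synthetic placeholder data for 92 codes, not ARC benchmarks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cpp_2012 = rng.gamma(shape=4.0, scale=1.5, size=92)            # one per FoR
cpp_2018 = cpp_2012 + rng.normal(loc=0.8, scale=0.5, size=92)  # later round

# Values are matched by FoR code, so a paired (dependent-samples) test applies.
t_stat, p_value = stats.ttest_rel(cpp_2012, cpp_2018)
print(f"paired t = {t_stat:.3f}, p = {p_value:.4g}")
```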
We have implemented the Wilcoxon signed-rank test to compare the Australian universities' performance in ERA 2018, 2015 and 2012. The Wilcoxon signed-rank test (a non-parametric statistical hypothesis test) can be used to compare two related samples, matched samples or repeated measurements of a single sample to assess whether their population mean ranks differ (i.e. a paired difference test). It is an alternative to the paired Student's t-test (also known as the "t-test for matched pairs" or "t-test for dependent samples") when the population cannot be assumed to be normally distributed. Australian universities' performance in ERA 2018 improved in 27% of the 92 FoR codes compared to 2015, whereas performance in ERA 2015 improved in 80% of the 92 FoR codes compared to 2012; see Tables 6 and 7 for details.
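For one FoR code, the comparison can be sketched as follows (made-up CPP values, not ARC data); SciPy's wilcoxon implements the paired signed-rank test:

```python
# Wilcoxon signed-rank sketch: paired CPP values for one FoR code across
# two ERA rounds (made-up numbers, not ARC benchmarks).
from scipy import stats

cpp_2015 = [4.1, 6.3, 5.2, 7.8, 3.9, 5.5, 6.1]  # per publication year
cpp_2018 = [4.8, 6.9, 5.9, 8.4, 4.3, 6.2, 6.6]  # same years, later round

# One-sided test: are the 2015 values systematically below the 2018 values?
stat, p_value = stats.wilcoxon(cpp_2015, cpp_2018, alternative="less")
print(f"W = {stat}, p = {p_value:.4f}")  # a small p suggests improvement
```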

Limitations and future research
Despite the rigour and contribution of this study, we acknowledge that it is exploratory and the first of its kind to examine whether the ERA process is boosting the quality of Australian research. These findings are preliminary, and further analysis, testing and examination of the data using the remaining complementary variables will be useful in future research and in establishing whether there is a cause-and-effect relationship between ERA and the research outcomes of universities.
We hope that these findings will encourage the ARC to be more transparent in its release of ERA data, allowing access to research output data from all participating HEPs in the ERA process, and to provide detailed comparisons between rounds in terms of the quality of research output. The ARC's current data restrictions undermine academic freedom to analyse and report on ERA effectiveness.
Future research might also look at research outputs longitudinally in the common years (three years) between two consecutive ERA rounds and check whether these common research outputs are of the same quality as those in the non-common years. Further, this can be extended to include other complementary variables, like RCI, citation centiles and the distribution of papers based on the centiles and RCI classes, used by the ARC to rank each FoR code.
Table 4. Welch t-test results for contribution to EI scores; alternative hypothesis: true difference in means is not equal to 0

Welch two-sample t-test   t value   p value   95% CI left   95% CI right   Decision
ERA2018 vs ERA2015        0.53597   0.5921    -0.0023       0.004          Not improved
ERA2018 vs ERA2012        3.2419    0.001     0.00097       0.00396        Improved
ERA2015 vs ERA2012        0.9561    0.3393    -0.00167      0.0048         Not improved

Table 5. Paired t-test results for all cited FoRs together, for both Australian HEPs and the world; alternative hypothesis: true difference in means is not equal to 0

Pairs direction    World/Australian   t value    p value   95% CI left   95% CI right   Decision
ERA2015–ERA2018    Australian         -1.9975    0.04627   -2.012        -0.017         Improved
ERA2015–ERA2018    World              1.2111     0.2264    -0.221        0.932          Not improved
ERA2012–ERA2018    Australian         -8.891     0.000     -5.344        -3.410         Improved
ERA2012–ERA2018    World              -3.927     0.000     -1.76         -0.586         Improved
ERA2012–ERA2015    Australian         -19.857    0.000     -3.695        -3.03          Improved
ERA2012–ERA2015    World              -19.938    0.000     -1.68         -1.38          Improved

Figure 2. Comparing world CPP for all cited FoRs

Figure 3. Comparing Australian CPP for all cited FoRs

Concluding remarks
This study offers a unique insight into the experience and impact of the ERA process, and into how those responsible for HEPs' research portfolios might lead and support their teams. The ERA research output evaluation process uses three types of bibliometric profiles: RCI, calculated against Australian institutional and world benchmarks; the distribution of papers based on world centile thresholds and the Australian HEPs' average; and the distribution of papers against RCI classes. These three profiles are designed to be complementary and must be considered as a set.
This paper focussed on implementing two approaches to explore the impact of the ERA process in boosting research quality at Australian universities: an EI analysis of both cited and peer-reviewed four-digit FoR codes, and a CPP analysis of the cited four-digit FoR codes. The other types of bibliometric profiles were not used due to the lack of raw data about the HEPs' outputs, and because RCI and the RCI classes are themselves derived from citations.

Table 6. Improved Australian FoRs in ERA 2018 compared to ERA 2015 (Wilcoxon signed-rank p values)

FoR    p value    FoR    p value    FoR    p value    FoR    p value
0104   0.016      0306   0.016      0706   0.016      1007   0.016
0105   0.031      0307   0.016      0799   0.031      1099   0.016
0201   0.031      0405   0.031      0902   0.016      1104   0.047
0203   0.047      0599   0.016      0907   0.016      1110   0.016
0301   0.047      0603   0.031      0909   0.016      1112   0.047
0305   0.016      0705   0.047      0912   0.031      1199   0.016
                                                      1799   0.031

Table 7. Improved Australian FoRs in ERA 2015 compared to ERA 2012 (Wilcoxon signed-rank p values)

FoR    p value    FoR    p value    FoR    p value    FoR    p value
0102   0.016      0404   0.016      0705   0.016      1103   0.016
0103   0.016      0405   0.016      0706   0.016      1104   0.016
0104   0.016      0406   0.016      0901   0.031      1105   0.016
0105   0.031      0499   0.046      0902   0.016      1106   0.016
0199   0.016      0501   0.031      0903   0.016      1107   0.031
0201   0.016      0502   0.016      0904   0.016      1108   0.016
0204   0.016      0503   0.016      0905   0.016      1109   0.016
0205   0.016      0601   0.031      0906   0.016      1110   0.016
0206   0.016      0602   0.016      0907   0.016      1111   0.016
0299   0.016      0603   0.016      0908   0.016      1113   0.031
0301   0.016      0604   0.018      0909   0.016      1114   0.016
0302   0.016      0605   0.016      0910   0.016      1115   0.016
0303   0.016      0607   0.016      0912   0.016      1117   0.016
0304   0.016      0608   0.016      0913   0.016      1199   0.016
0305   0.016      0699   0.031      0914   0.016      1701   0.016
0306   0.016      0701   0.031      0999   0.016      1702   0.016
0401   0.016      0702   0.016      1001   0.016      1799   0.016
0402   0.016      0703   0.016      1007   0.016
0403   0.016      0704   0.016      1102   0.016
The statistical analysis of Australian universities' performance in ERA 2018, compared to 2015 and 2012, using the first approach (EI analysis) shows that even though the EI scores steadily increased across ERA rounds, there is still no statistically significant evidence of improvement in research outputs between two consecutive ERA rounds. Using the second approach (CPP analysis), the results show that only 27% of the FoR codes improved in 2018 compared to 2015, while 80% improved in 2015 compared to 2012. This shows that the large increase in HEPs' research quality seen in ERA 2015 slowed down by 2018. The recognised improvement may be because universities are simply getting better at reporting outcomes and at using ERA-driven criteria to guide their choice of publication outlets.
These findings underscore the importance of further research and deeper analysis using the other complementary variables, like RCI, citation centiles and the distribution of papers based on the centiles and RCI classes, together with more transparency and data availability from the ARC site. Given the introduction of the Engagement and Impact Assessment by the ARC to accompany the ERA exercise in 2018, we expect that these findings will be useful, as well as prompting further debate and scholarship on the relevance and value of the ERA process.

References
Australian Research Council (2010), Excellence in Research for Australia 2010 National Report, Commonwealth of Australia, Canberra.
Australian Research Council (2012), Excellence in Research for Australia 2012 National Report, Commonwealth of Australia, Canberra.
Australian Research Council (2015), ERA 2015 Evaluation Handbook, Commonwealth of Australia,
Canberra.
Australian Research Council (2017), Excellence in Research for Australia (ERA), Australian Research
Council, available at: http://www.arc.gov.au/excellence-research-australia.
Australian Research Council (2018), ERA 2018 Evaluation Handbook, Commonwealth of Australia,
Canberra.
Crowe, S.F. and Watt, S. (2017), "Excellence in research in Australia 2010, 2012, and 2015: the rising of the curate's soufflé?", Australian Psychologist, Vol. 52, pp. 503-513, doi: 10.1111/ap.12248.
Department of Education (2012), “Higher education support Act 2003 - other grants guidelines
(research) 2012 (DIISRTE)”, available at: https://www.legislation.gov.au/Details/F2012L02010/
Html/Text#_Toc334695835.
Department of Education (2015), “The process for determining sustainable research excellence (SRE)
scheme grant amounts - 2015 allocations”, available at: https://docs.education.gov.au/
documents/2015-sre-process-calculations.
Diezmann, C.M. (2018), “Understanding research strategies to improve ERA performance in
Australian universities: circumventing secrecy to achieve success”, Journal of Higher
Education Policy and Management, Vol. 40 No. 2, pp. 154-174.
Harrison, N., Bennett, S., Bennett, D., Bobis, J., Chan, P., Seddon, T. and Shore, S. (2013), "Changing boundaries—shifting identities: strategic interventions to enhance the future of educational research in Australia", The Australian Educational Researcher, Vol. 40, pp. 493-507, doi: 10.1007/s13384-013-0107-y.
Kellow, A. (2012), “Assessing political science quality: excellence in research for Australia”, European
Political Science, Vol. 11, pp. 567-580.
Perry, L.B. (2018), “Assessing the performance of educational research in Australian universities: an
alternative perspective”, Higher Education Research and Development, Vol. 37 No. 2,
pp. 343-358, doi: 10.1080/07294360.2017.1355893.
About the authors
Nethal K. Jajo completed his PhD in mathematics at Nankai University in China. After graduating from Nankai University in 1999, he tutored at the School of Mathematics and Statistics, University of New South Wales, and lectured at Western Sydney University, The Northern Consortium of British Universities and Macquarie University. He also worked as a mathematics modeller at the Australian Department of Defence. He holds three positions within the University of Sydney: modelling and projection analyst at the DVC-Research, Research Portfolio; honorary affiliate, School of Mathematics and Statistics; and sessional lecturer in the Discipline of Business Analytics, Sydney Business School. Nethal K. Jajo is the corresponding author and can be contacted at: nethal.jajo@sydney.edu.au
Shelton Peiris completed his PhD at Monash University, Melbourne, Victoria. His research interests are in the statistical analysis of time series with applications in financial econometrics, topics in mathematical statistics, and statistics teaching/education. In Fall 2019, he was a visiting professor at the Department of Statistics and Actuarial Science, Simon Fraser University, Burnaby, British Columbia, Canada.


