Language Testing Then and Now
Abstract
The article is a brief historical overview of English language testing, particularly the testing of English as a second or foreign language. It offers a discussion of how different language testing trends have emerged out of the changes that have taken place in the field of linguistics, particularly applied linguistics and the teaching of English as a second or foreign language. In discussing the trends, the British, North American and Australian contexts, in particular, have been considered. Finally, some issues of present-day language testing are reviewed.
1. Introduction
'Tests' and 'examinations' are ancient practices. Their origin, often subject to interpretation, can be traced back to the 'pre-historic' period of education. In the Indian sub-continent, for example, public discourses and disputations or demonstrations of acquired abilities in a given area were a common practice in the ancient academic period. The Greeks and Romans are also said to have practised some form of academic examination in their glorious past. About four thousand years ago, the Chinese are believed to have adopted 'examination' in an elaborate form as a measure of ability for the first time. However, the form of examination that is current in academic settings today has evolved from, and is a development of, the examination that was in practice in the 19th century. Today, the word 'examination' means a series of systematic 'tests' of knowledge, skill or special ability to be carried out by an individual or an authority.
So far as language testing is concerned, it is generally assumed that the history of English language testing is as old as the history of English language teaching itself, because testing has always been an integral part of any English language teaching (ELT) programme, which probably began in the 15th century with the ordinance promulgated by Henry V that English should be adopted as the language of royal correspondence in place of French. This royal ordinance is believed to have facilitated the development of English language teaching methods, the writing of teaching and learning materials, and the designing of language testing strategies. In the beginning, there was a serious lack of teaching and learning materials. So, the methods to be adopted, the materials to be used and the testing strategies to be followed largely depended on the tutors concerned. As a result, English language teaching and testing did not flourish much. In fact, it was only in the 16th century that serious attempts were made to produce a scholarly description of the English language, thereby providing a foundation for its teaching and learning. This has been marked as the formal beginning of teaching English as a foreign language, which was later also termed English as a second language, depending on where the teaching of the language took place.
enrolled or seeking enrolment in British schools. At the higher level, examination boards develop and administer O (Ordinary) and A (Advanced) level examinations and also ESL and EFL proficiency tests. The EFL and ESL tests are given to applicants to British institutions to determine their ESL/EFL proficiency. The International English Language Testing System (IELTS), which is for students of Year 11 onward, is a measure of English language proficiency for students seeking admission for higher education or training in British or Australian universities, and has been jointly developed by the University of Cambridge Local Examinations Syndicate (UCLES), the British Council (BC) and the International Development Program (IDP) of Australian Universities. The test reflects the ideas of communicative language teaching and is probably the first standard communicative language test administered over a large population. IELTS, which is widely welcomed by British and Australian universities, consists of two sections: General and Modular. The General section comprises a listening test and an oral interview intended to test the oral skills. The Modular section, on the other hand, is intended to test the written skills, reading and writing. The modules are further divided into sub-modules.
The overall format of the modules is the same; they all contain texts from books, journals, reports, etc., related to a specific subject area and involve candidates in study skills necessary for academic studies.
British tests are highly innovative in content and format, and lay great emphasis on validity in the examination construction procedures, which rely on expert judgement.
1. Four one-paragraph passages
- One narrative
- One historical, topical or journalistic,
- One critical, and
- One scientific
2. A longer passage dealing with debatable ideas,
3. A direct dictation, and the reproduction from memory of a dictated passage,
4. An oral test with ten topics prepared for the examiner,
5. Composition on a selected topic.
In the last two decades, students seeking enrolment in North American universities have been given one of three tests: the Comprehensive English Language Test (CELT), the Michigan English Language Assessment Battery (MELAB) or the Test of English as a Foreign Language (TOEFL). Among these, TOEFL is accepted as secure evidence of English language proficiency by most North American universities. These tests have traditionally involved the assessment of listening, reading, vocabulary and grammar. The TOEFL has also included the Test of Written English (TWE) and the Test of Spoken English (TSE) as direct measures of writing and oral proficiency. More recently, the TOEFL has been computerised. Students can enter their answers directly on the computer, and results are known within days. The American Language Institute of Georgetown University (ALIGU) test is another test of academic English given to applicants for scholarships awarded by various US government agencies.
For school-level ESL/EFL proficiency, only the Language Assessment Battery (LAB), which assesses receptive as well as productive skills, and the Secondary Level English Proficiency (SLEP) test, which assesses receptive skills, are given.
The ASLPR (Australian Second Language Proficiency Ratings) is based on absolute proficiency ratings and is designed for use with learners whose education and employment is more diverse. It, therefore, permits a greater degree of differentiation. The ASLPR is not a test instrument; rather, it is a scale and a set of procedures which are based on the Interagency Language Roundtable (ILR) Oral Interview derived from the American Council on the Teaching of Foreign Languages/Educational Testing Service (ACTFL/ETS) (Alderson et al. 1987). The rater has the liberty to replace the suggested tasks or exercises with ones he or she regards as more appropriate, provided that the new tasks and exercises are of equal difficulty level and complexity. However, no guidance is provided at the functional, linguistic or communicative levels as to how to make the tasks or exercises of equal difficulty and complexity.
To sum up this section, thousands of students/individuals, in every part of the world, take various English language proficiency tests each year to demonstrate their ability/proficiency in English as a foreign language, the scores of which are then used by different institutions of English-speaking countries like the USA, the UK, Australia and Canada for screening the candidates for a number of different purposes, such as offering admission to an educational programme, employment or advancement in career. Test scores are related to various aspects of proficiency, demonstrating the candidates' language ability in the different skills: listening, speaking, reading and writing. In the last three decades, numerous test batteries have been developed to measure different aspects of the English language proficiency of non-native speakers.
American English language test batteries such as MELAB, ALIGU and TOEFL have been in use in Asian countries, including Nepal, for a long time now. IELTS made its way to Nepal about 10 years ago.
"Ingeneral,Britis'handQ.{orth)AmericanEFLproficiencytestsrepresent
different aiproaches to language
"on
test development. North American tradition in
il&;;g; i6rt ir heavily baied properlies of tests' Issues such as
_psycho*.tti9
i"ri"uuifitv una and prediciive validity ar-e of particular interest in this
oUj".ti"ity of scoring and generalisibility of results play a dominant
tradition. Hence"o"turrent
often
.ot. in the development of test metho?s. Foiexample,-multiple-choice items aretest
used in testing r.L.pii* skills to gain desired internal consistency even if-the is
expected to measure communicative competence. Moreover, in orde-r to achieve high
in
inter-rater reliability, ih; ;;" of trained i"or.rr and detailed specific instructions
.onaurting interviJws are highly recommended for testing productive skills in the
tradition.
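The two notions mentioned above, internal consistency and inter-rater reliability, are ordinarily expressed through standard psychometric statistics. The short sketch below is an added illustration, not part of the original discussion: it computes Cronbach's alpha for a set of objectively scored items and Cohen's kappa for two raters' band scores, using invented data.

# A minimal sketch (not from the article) of two common reliability indices:
# Cronbach's alpha for internal consistency and Cohen's kappa for
# inter-rater agreement. All data below are invented for illustration.
from collections import Counter

def cronbach_alpha(item_scores):
    """item_scores: list of items, each a list of scores (one per candidate)."""
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var = sum(variance(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var / variance(totals))

def cohens_kappa(rater_a, rater_b):
    """rater_a, rater_b: parallel lists of category labels (e.g. band scores)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Five multiple-choice items (scored 0/1) answered by six candidates.
items = [
    [1, 1, 0, 1, 0, 1],
    [1, 0, 0, 1, 0, 1],
    [1, 1, 1, 1, 0, 1],
    [0, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 1, 1],
]
# Two trained raters assigning band scores to the same eight interviews.
rater_a = [5, 6, 6, 7, 5, 8, 6, 7]
rater_b = [5, 6, 7, 7, 5, 8, 6, 6]

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
print(f"Cohen's kappa:    {cohens_kappa(rater_a, rater_b):.2f}")

On such figures, a tester working in the psychometric tradition would look for a high alpha before trusting the multiple-choice section and a high kappa before trusting a single rater's interview scores.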
The British tradition puts emphasis on the specification of test content and on expert judgement. While reliability receives less attention in this tradition, content and face validity are the major concerns of the test designers. It can be generalised that British tests enjoy considerable variability in their formats and include various communicative activities.
Language testing specialists/theorists have always stressed the need to base the
development and use of language tests on a theory of language proficiency. More
recently, they have called for the incorporation of a theoretical framework of what
language proficiency is with the methods and technology involved in measuring it.
Bachman (1990:9) describes the complexity of the problem of defining and measuring
ability in the following terms:
All language tests must be based on a clear definition of language abilities, whether
this derives from a language teaching syllabus or a general theory of language
ability, and must utilise some procedure for eliciting language performance.
However, there are different ways of looking at language ability, and as a result, there
is a wide variation in its definition. These different definitions of language ability
focus on different aspects of the ability, and so developing a single language testing
design to accommodate all of these aspects is very complicated. Furthermore, the test
instruments or methods used to elicit language abilities are themselves based on the
assumed level of the abilities, making it uncertain whether the test measures will elicit
data to characterise an individual's language performance in non-testing situations.
Language testers, therefore, face a complicated dilemma of attempting to measure
abilities that are not precisely defined, and of using methods to elicit language abilities
that themselves are manifestations of the very same language abilities. In other words,
the test methods or data elicitation procedures one uses to measure language abilities
are characteristically related to the way one views these abilities. The most important
task of a language tester, therefore, is to define language abilities in such a way that the test methods or response elicitation procedures applied elicit language test performance that is characteristic of language performance in non-testing situations (Bachman 1990:9). Given below, then, is an account of the different trends in language testing and of the ways language ability has been viewed in these trends.
gaining mastery over the structure through the process of repetition and practice.
Language ability is seen as the ability to handle discrete elements of the language
system and develop aspects of individual language skills.
The discrete-point approach to testing measured language proficiency by testing
learners' knowledge of the discrete items of grammar and vocabulary and of discrete
items of language skills by taking one item or aspect at a time.
Discrete-point tests became the most widely used tests worldwide in the 1960s and 1970s and are still popularly practised in many parts of the world, so much so that the psychometric-structuralist trend in testing a second or foreign language became synonymous with objective testing. Lado's work (1961), which has been seminal in introducing objective testing to second or foreign language testing, claimed universal appeal and received a great deal of support from linguists, methodologists, teachers, course designers and test developers everywhere in the subsequent years. It "has correctly pointed to the desirability of testing for very specific items of language knowledge and skills, judiciously sampled from the usually enormous pool of possible items. This procedure makes it a highly reliable and valid testing" (Spolsky 1978, quoted in McGarrell 1981).
There has also been a lot of criticism of this approach to language teaching and testing. Ingram (1985) suggests that language is not just the sum of its parts, but that the parts have to be mobilised and integrated together to carry out particular tasks in particular situations. Davies (1963) rejects this approach by saying that whole language development cannot and should not be equated with the separate development of its constituent parts, because the whole is always greater than any one of its constituent parts.
Oller (1979) makes a similar comment in his seminal work in which he discusses the
defect of the discrete point approach to language testing by saying that it suffers from
several deficiencies, and suggests that the problem with this approach is that it depends
on language proficiency being neatly quantifiable. He outlines its deficiency in the
following terms:
Discrete point analysis necessarily breaks the elements of language apart
and tries to teach them (or test them) separately with little or no attention to
the way those elements interact in a larger context of communication. What
makes it ineffective as a basis for teaching or testing languages is that
crucial properties of language are lost when its elements are separated. The
fact is that in any system where the parts interact to produce properties and qualities that do not exist in the parts separately, the whole is greater than the sum of its parts... organisational constraints themselves become crucial properties of the system which simply cannot be found in the parts separately (Oller 1979:112).
As the quotation above implies, language and its social context are
complementary to one another. Language knowledge, therefore, must be tested in
language contexts to see if a person can communicate appropriately in the given context
of situation. Testing formal knowledge of language, i.e. linguistic competence, is
necessary but not sufficient to predict that the person can use the language effectively in
a given situation. A person taking a driving test or a music test, for example, must demonstrate their driving or performing ability as a whole. Knowledge of the engine parts or of the keyboard does not necessarily make him or her a good driver or performer. Weir (1988) goes on to suggest that grammatical (linguistic) competence is not a good indicator of one's communicative skills at all. A tester, therefore, should be interested in the development and measurement of communicative competence rather than of linguistic competence. Weir's suggestion is in line with the comments of Spolsky (1978) and Morrow, made a decade earlier, that, instead of attempting to establish a person's knowledge of a language in terms of elements and skills, one should attempt to test that person's ability to perform in a specified sociolinguistic setting.
which exists in every normal human being, the latter describes it as a physical process of learning language structures and systems. Bloomfield (1933), who is regarded as the mentor of structural linguistics, took a more concrete approach to language learning and emphasized the process of segmenting and classifying the physical features of utterances (Chomsky's surface structures) and disregarded the underlying structures (or what Chomsky calls deep structures) of language (Crystal 1997). Chomskyan theory claims that every mind is pre-programmed with the ability to learn a language, in which internalisation of the underlying rules is the key to successful language learning and adequate exposure to the target language the only condition for it.
Chomsky's work provided a basis for describing a child's (learner's) language as
a legitimate, rule governed, and consistently evolving system. This means that measures
of a child or learner's performance should have predictive ability of his/her competence,
which is unseen, unobservable, and abstract.
The psycholinguistic theory of language learning in the 1960s led to the development of various language-teaching schemes, which treated language learning as a process of acquiring conscious control and understanding of language systems. The theory advocates that 'if a learner has a proper degree of cognitive control over the language structures, facility will develop automatically with use of the language in meaningful situations' (McGarrell 1981:29).
Language testing, under this theory, shifted its emphasis from linguistic accuracy to functional ability. Language tests reflected problem-solving approaches and were expected to reveal what underlying rules the learners had internalised. Target language errors received a positive attitude and were considered to be an indication of the learners' level of transitional competence (Corder 1967) or 'interlanguage' (Selinker 1972). Reliability and validity were still central to language testing (McGarrell 1981).
books. Wilkins (1976), for example, outlined the notional categories of communicative needs, Munby (1981) provided a scheme of elaborate specifications of learners' target language communicative needs, Van Ek (1979) devised a communicative language proficiency course for the Threshold Level, and Leech (1975) wrote an elaborate communicative grammar. All this led to the development of teaching schemes aiming at meeting the target language communicative needs of the learners.
So far as language testing is concerned, this aspect calls for testing schemes that reveal the learners' ability to use the language in communicative situations. Language tests should evaluate not only the learners' knowledge of the elements and skills but also their ability to comprehend and produce utterances that are both situationally and contextually appropriate.
These purposes are not always mutually exclusive, as one form of assessment may also be used for several other purposes. The assessment given for finding evidence of progress, for example, may also be used for researching students' learning styles or communication strategies. A test given for a general purpose may also serve the purpose of motivating students towards learning. Similarly, a test given for testing achievement may also be used for certification purposes.
[Figure: dimensions of assessment: what is assessed (skills or knowledge learned; use of skills/application of knowledge), assessment format (e.g. discrete-point/objective), interpretation of results (e.g. norm-referenced), timing (e.g. end-of-term), and users of results (e.g. administrators)]
As the figure # above suggests, what is assessed (how well the learners are achieving or how well they are learning), how the results of the assessment are interpreted (norm-referenced or criterion-referenced), which format/design of assessment is used, and who conducts the assessment and who uses the results are all dependent on the why (purpose) of the assessment.
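The distinction between norm-referenced and criterion-referenced interpretation can be illustrated with a small worked example. The sketch below is an added illustration rather than part of the article: it reports the same set of hypothetical raw scores once as percentile ranks within the group (norm-referenced) and once as pass/fail decisions against an assumed cut score of 60 (criterion-referenced).

# A minimal sketch (invented data) contrasting norm-referenced and
# criterion-referenced interpretations of the same raw scores.

def percentile_rank(score, all_scores):
    """Norm-referenced: percentage of candidates scoring below this score."""
    below = sum(1 for s in all_scores if s < score)
    return 100.0 * below / len(all_scores)

def meets_criterion(score, cut_score):
    """Criterion-referenced: compare the score against a fixed standard."""
    return score >= cut_score

scores = [32, 45, 51, 58, 63, 70, 74, 81, 88, 95]  # hypothetical raw scores
CUT_SCORE = 60                                      # hypothetical pass mark

for s in scores:
    print(f"score {s:3d}: percentile {percentile_rank(s, scores):5.1f}, "
          f"{'pass' if meets_criterion(s, CUT_SCORE) else 'fail'}")

Which of the two interpretations is appropriate depends on the purpose of the assessment, as the figure above suggests.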
Of the three general and fourteen specific purposes of language assessment, as
outlined in the figure # above, a national test like the English test of the School Leaving
Certificate (SLC) Examination is given only for a general administrative purpose. Under
this general purpose, it serves two specific purposes - general proficiency and
certification or achievement. The general language proficiency is indicative of the
learners' target language linguistic competence as well as their ability to use this
competence in actual communicative situations. As a general proficiency test, the
English test of the SLC is, as one would expect, a test of general language skills and
elements. The SLC reports an aggregate of scores only, and it does not contain
information on the outcomes of the test components nor does it provide descriptions of
abilities. The institutes, which may use the test results for selection purposes, make their
decisions on the basis of the aggregates.
Decision about certification is usually made on the basis of the achievement of the
target course. What type of certification (first, second or third division) is issued
depends on what has been achieved or learned from what has been taught in the course.
The English test of the SLC, in this sense, is an achievement test. In other words, the
SLC examination is a measure of achievement, the results of which are sometimes used
for screening or selection purposes. A score given on a subject is an indication of how
well a candidate has taken the examination. It is not usually an indication of how well he or she has learned during the course.
Conclusion
'What to test' in language has always been an important consideration and is something that has undergone review over time. The 20th century, particularly its second half, witnessed considerable changes and shifts in the theory and practice of language teaching and testing. Though a timeline for the changes cannot be explicitly drawn, it appears that every decade of the second half of the 20th century experienced some sort of change. In language testing, 'what is to be tested' follows what goes on in the teaching and learning of that language. In other words, language testing tends to follow trends of language teaching.
The trends in second or foreign language testing may be described as analytical, global and communicative. The analytical approach to language testing follows the description of language which suggests that language consists of several discrete systems and sub-systems, and that language learning is gaining mastery over the systems one by one. Testing, then, employs objective tests, which can be scored consistently. The global approach advocates combining various language sub-skills and elements in testing, because they are inter-related and interdependent, and integrating the test items into a total language event. The communicative approach, on the other hand, views language as communication and language learning as developing communicative competence, which is essential for enabling learners to use language in the multiple functions it serves in real life. In communicative testing, the best exams are those that combine various sub-skills as we do when exchanging ideas orally or in writing.
From a communicative perspective, language testing in ESL and EFL may roughly be divided into pre-communicative and communicative stages. In pre-communicative language testing, the learners of a second/foreign language have to demonstrate their practical command of the skills acquired. This tradition continued through the structuralist, discrete-point framework of Lado (1961), the pragmatic expectancy grammar of Oller (1979), the integrative test structure of Carroll (1972) and the psycholinguistic performance model of Davies (1988). Communicative language testing can be linked to the seminal work of Hymes (1972), but a detailed testing framework was only proposed in the much publicised work of Canale and Swain (1980), which was later elaborated by Bachman (1990) and Bachman and Palmer (1996).
Almost two decades ago, in order for a language test to be eclectic, a synthesis of the discrete-point approach and integrative procedures was viewed as necessary. Twenty years later, when the psycholinguistic-sociolinguistic trend is still in vogue and the communicative approach to language testing is becoming increasingly popular, it is assumed that no single language testing model or test 'can accommodate the wide variety of test scenario'. The present research, therefore, is an attempt to justify the view that an adaptation or combination of models or tests in different language areas and different target situations should be made available in order to meet local language testing requirements.
In Nepal, for example, where ELT practitioners are still bound to the previous trend
and are not yet fully aware of the ways and means of communicative language
testing, a combination of all three aspects of the psycholinguistic-sociolinguistic
trend and the communicative approach to language testing can justifiably serve the
various purposes of the English test of the SLC.
References
Alderson, J. C., Clapham, C. and Wall, D. (1995). Language Test Construction and Evaluation. Cambridge: Cambridge University Press.
Alderson, J. C., Krahnke, K. J. and Stansfield, C. W. (1987). Reviews of English Language Proficiency Tests. Washington, DC: TESOL.
Allen, J. P. B. and Davies, A. (1977). Testing and Experimental Methods. Edinburgh Course in Applied Linguistics. Vol. 4. London: Oxford University Press.
Asher, R. E. (ed.) (1994). The Encyclopaedia of Language and Linguistics. Oxford: Pergamon Press.
Bachman, L. F., Davidson, F., Ryan, K. and Choi, I. (1995). Studies in Language Testing. Vol. 1-5. Cambridge: Cambridge University Press.
Bachman, L. and Palmer, A. (1996). Language Testing in Practice: Designing and Developing Useful Language Tests. Oxford: Oxford University Press.
Bachman, L. F. (1990). Fundamental Considerations in Language Testing. Oxford: Oxford University
Press.
Banerjee, S. (2000). 'Integrated Testing.' In Byram, M. (ed.) (2000). Routledge Encyclopaedia of Language Teaching and Learning. London: Routledge.
Bhadra, S. and Yadav, Y. (1988). Causes of Failures in the Proficiency Certificate Examinations. Kathmandu: CERID.
Bloomfield, L. (1933). Language. New York: Holt, Rinehart and Winston.
Brown, J. D., Cook, H. G., Lockhart, C. and Ranes, T. (1991). 'Southeast Asian Languages Proficiency Examinations.' ERIC ED 365152.
Brown, J. D. and Yamashita, S. O. (1995). 'Language Testing in Japan.' ERIC ED 400113.
Cajkler, W. and Addleman, R. (2000). The Practice of Foreign Language Teaching. London: David Fulton.
Canale, M. (1983). 'On Some Dimensions of Language Proficiency.' In Oller, J. (ed.) (1983). Issues in Language Testing Research. Rowley: Newbury House. 102-115.
Canale, M. (1994). 'On Some Theoretical Framework for Language Proficiency.' In Rivera, C. (ed.) (1994). Language Proficiency and Academic Achievement. Clevedon, Avon: Multilingual Matters.
Canale, M. and Swain, M. (1980). 'Theoretical Bases of Communicative Approaches to Second Language Teaching and Testing.' Applied Linguistics. Vol. 1, No. 1. 1-47.
Carroll, B. J. and Hall, P. J. (1985). Make Your Own Language Tests. New York: Pergamon Press.
Carroll, J. B. (1983). 'Psychometric Theory and Language Testing.' In Oller, J. (ed.) (1983). Issues in Language Testing Research. Rowley: Newbury House. 195-207.
CERID (1989). Causes of Failure in English in the SLC Examination. Kathmandu: Tribhuvan University.
Chomsky, N. (1957). Syntactic Structures. The Hague: Mouton.
Clapham, C. and Wall, D. (eds.) (1996). Language Testing Update. England: Centre for Research in Language Education, Lancaster University.
Cohen, A. (1994). Testing Language Ability in the Classroom. Rowley: Newbury House.
Coombe, C. (1998). 'Current Trends in English Language Testing.' ERIC ED 428574.
Corder, S. P. (1973). Introducing Applied Linguistics. London: Penguin.
Crystal, D. (1997). The Cambridge Encyclopaedia of Language. Cambridge: Cambridge University Press.
Curriculum Development Centre (CDC). (1997). Secondary Education Curriculum. Kathmandu: Ministry of Education, HMG/N.
Cziko, G. A. (1984). 'Improving the Psychometric, Criterion-referenced, and Practical Qualities in Integrative Language Tests.' TESOL Quarterly. 16, 3: 367-79.
Davey, L. (1991). 'A Case for a National Testing System.' ERIC Practical Assessment, Research and Evaluation. ISSN 1096-0066.
Davey, L. and Monty, M. (1991). 'A Case Against a National Test.' ERIC Practical Assessment, Research and Evaluation. ISSN 1096-0066.
Davies, A. (ed.) (1963). Language Testing Symposium: A Psycholinguistic Approach. Oxford: Oxford University Press.
Davies, A. (1983). 'Operationalising Uncertainty in Language Testing: An Argument in Favour of Content Validity.' Language Testing. 5, 1: 32-48.
Davies, A. (1990). Principles of Language Testing. Oxford: Basil Blackwell.
ERIC. (1995). 'The Cost of a National Examination.' ERIC/AE ED 385611.
Farhady, H. (1979). 'The Disjunctive Fallacy between Discrete-point and Integrative Tests.' TESOL Quarterly.
Oller, J. (1979). 'The Psychology of Language and Contrastive Linguistics: The Research and the Debate.' ERIC EJ 206643.
Oller, J. (1983). Issues in Language Testing Research. Rowley: Newbury House.
Oller, J. (1979). Language Tests at School: A Pragmatic Approach. England: Longman.
Oller, J. (1991). 'Current Research/Development in Language Testing.' ERIC ED 365146.
Oller, J. (1992). 'Foreign Language Testing, Part 2: Its Depth.' ADFL Bulletin, 22: 5-13.
Rea-Dickins, P. and Germaine, K. (1996). Evaluation. Oxford: Oxford University Press.
Selinker, L. (1972). 'Interlanguage.' IRAL. Vol. 10. 209-231.
Skinner, B. F. (1957). Verbal Behaviour. New York: Appleton-Century-Crofts.
Spolsky, B. (1969). 'Introduction: Linguists and Language Testers.' In Spolsky, B. (ed.) (1969). Approaches to Language Testing. Washington, DC: Center for Applied Linguistics. 1-19.
Spolsky, B. (1975). Language Testing: Art or Science? Keynote Address at the Association Internationale de Linguistique Appliquée World Congress, France.
Spolsky, B. (ed.) (1978). Approaches to Language Testing. Papers in Applied Linguistics. ERIC ED 16548.
Spolsky, B. (1993). 'Testing the English of Foreign Students in 1930.' ERIC 2002.
Spolsky, B. (1995). Measured Words: The Development of Objective Language Testing. Oxford: Oxford University Press.
Van Ek, J. A. (1979). The Threshold Level. Strasbourg: Council of Europe.
Weir, C. J. (1988). 'The Specification, Realisation, and Validation of an English Language Proficiency Test.' In Hughes, A. (ed.) Testing English for University Study. ELT Documents 127. London: Modern English Publications/The British Council.
Weir, C. J. (1990). Communicative Language Testing. New York: Prentice Hall.
White, E. M. (1994). Teaching and Assessing Writing. San Francisco: Jossey-Bass.
Wilde, J. and Sockey, S. (1995). Evaluation Handbook. Albuquerque: Evaluation Assistance Centre, New Mexico Highlands University.
Wilkins, D. (1976). Notional Syllabuses: A Taxonomy and Its Relevance to Foreign Language Curriculum Development. London: Oxford University Press.
Contact: Ram.Giri@research.vu.edu.au
(Currently Mr. Giri is a doctoral student at Victoria University, Australia.)