COMPUTER ADAPTIVE LANGUAGE TESTING ACCORDING TO NATO STANAG 6001 REQUIREMENTS
DOI: https://doi.org/10.20535/2410-8286.225018
Abstract
The article deals with an innovative, cutting-edge solution in the field of language testing, namely computer adaptive language testing (CALT) in accordance with the NATO Standardization Agreement 6001 (NATO STANAG 6001) requirements, intended for further implementation in the foreign language training of personnel of the Armed Forces of Ukraine (AF of Ukraine) in order to increase the quality of foreign language testing. The research presents the CALT method developed according to the NATO STANAG 6001 requirements and the CALT algorithm, which comprises the following blocks: “Starting point”, “Item selection algorithm”, “Scoring algorithm” and “Termination criterion”. The CALT algorithm is adaptive: it changes the complexity level, sequence and number of items according to the answers of a test taker. A comparative analysis of the results of piloting the CALT method against paper-and-pencil testing (PPT) in reading and listening according to the NATO STANAG 6001 requirements confirms the effectiveness of the three-level CALT method. It allows us to identify the following important benefits of CALT: test length reduction, control of measurement accuracy, objective assessment, improved test security, generation of a unique set of items, the adaptive ability of the CALT algorithm, high motivation of the test takers, immediate score reporting and test results management. CALT is a high-quality and effective tool for determining test takers’ foreign language proficiency level in accordance with the NATO STANAG 6001 requirements within the NATO Defence Education Enhancement Programme. CALT acquires special value and relevance in the context of the global COVID-19 pandemic.
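To make the described flow concrete, the sketch below shows how a three-level adaptive loop of this kind could be organised in code. It is a minimal illustration only: the item bank structure, the simple up/down level rule, and the fixed-length termination criterion are assumptions made for demonstration, not the authors' published CALT algorithm.

```python
# Illustrative sketch only: a simplified three-level adaptive testing loop
# (levels 1-3, as in STANAG 6001 reading/listening), NOT the authors' actual
# CALT algorithm. Item bank, thresholds and stopping rule are assumptions.
import random
from dataclasses import dataclass, field

@dataclass
class Item:
    level: int    # difficulty level 1-3
    prompt: str
    answer: str

@dataclass
class Session:
    bank: dict[int, list[Item]]   # items grouped by difficulty level
    level: int = 2                # "Starting point": begin at the middle level
    asked: int = 0
    correct: dict[int, int] = field(default_factory=dict)

    def next_item(self) -> Item:
        # "Item selection algorithm": draw an unused item at the current level,
        # so every test taker receives a unique sequence of items
        item = random.choice(self.bank[self.level])
        self.bank[self.level].remove(item)
        return item

    def score(self, item: Item, response: str) -> None:
        # "Scoring algorithm": move up a level after a correct answer,
        # down a level after an incorrect one
        self.asked += 1
        if response.strip().lower() == item.answer.lower():
            self.correct[item.level] = self.correct.get(item.level, 0) + 1
            self.level = min(3, self.level + 1)
        else:
            self.level = max(1, self.level - 1)

    def finished(self, max_items: int = 20) -> bool:
        # "Termination criterion": stop after an assumed fixed maximum of items
        # or when the current level's item pool is exhausted
        return self.asked >= max_items or not self.bank[self.level]
```

A production implementation would typically replace the simple up/down rule with an IRT-based ability estimate and a measurement-precision stopping rule, as is standard in computerized adaptive testing.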
License
Copyright (c) 2021 Piotr Gawliczek, Viktoriia Krykun, Nataliya Tarasenko, Maksym Tyshchenko, Oleksandr Shapran
This work is licensed under a Creative Commons Attribution 4.0 International License.