EXAMINING THE RELATIONSHIP BETWEEN TEST COMPLETION TIME AND TEST SUCCESS IN ONLINE EXAMS CONDUCTED DURING THE COVID-19 PERIOD

Year: 2021 (Year: 14, Number: 88)
Publication Date: 2021-12-19
Language: Turkish
Subject: Mathematics and Science Education
Pages: 1-12


Abstract

The aim of this study was to examine the relationship between the time students took to complete online exams during the Covid-19 pandemic and their exam scores, and to investigate their opinions about online exams. The study used the correlational survey model, which aims to determine whether a relationship exists between two or more variables and, if so, its degree. The study was conducted with a total of 125 students enrolled in the Teaching Principles and Methods course in the fall semester of the 2020-2021 academic year. The exam questions were administered through Google Classroom; the system presented the questions to each student in a different order and shuffled the answer choices for every question. Students could see the remaining time on the screen at all times, and they completed the exam at different speeds. For each student, the completion time and the exam score were recorded. The study examined whether there was a statistically significant relationship between the time teacher candidates spent on multiple-choice tests in online exams and the scores they obtained. Pearson correlation analysis revealed no statistically significant relationship between time spent on the exams and the scores obtained: finishing the test early was not associated with lower scores, nor was finishing late associated with higher scores. Likewise, no statistically significant difference by gender was found in the relationship between time spent and achievement. Students' opinions about online exams were collected through online interviews, and the findings were discussed in relation to the results reported in the literature.
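The Pearson analysis described in the abstract can be sketched as follows. This is an illustrative example only; the `minutes` and `scores` arrays are hypothetical and are not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Covariance numerator and the two standard-deviation terms.
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical data: minutes taken to finish the test and scores obtained.
minutes = [12, 18, 25, 30, 35, 40]
scores = [70, 85, 60, 90, 75, 80]

r = pearson_r(minutes, scores)
print(f"r = {r:.3f}")  # a value near 0 would indicate no linear relationship
```

In practice the coefficient would be accompanied by a significance test (e.g. `scipy.stats.pearsonr`, which also returns a p-value) before concluding, as the study does, that no statistically significant relationship exists.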



