Academic Output - Proceedings

Title Exploring Effect of Rater on Prediction Error in Automatic Text Grading for Open-ended Question
Author 李蔡彥
Li, Tsai-Yen
Contributor 國立政治大學資訊科學系 (Department of Computer Science, National Chengchi University)
Keywords Rater; prediction error; SVM; automatic grader; testing; science learning
Date 2009-11
Date uploaded 27-May-2010 16:49:20 (UTC+8)
Abstract This paper explores how to evaluate an automatic text grader for open-ended
     questions by considering the relationships among raters, grade levels, and prediction
     errors. The open-ended question in this study was about the aurora and required knowledge
     of earth science and physics. Each student's response was graded from 0 to 10 points by
     three raters. The automatic grading systems were designed as support-vector-machine (SVM)
     regression models with linear, quadratic, and RBF kernels, respectively. The three kinds
     of regression models were trained separately on the grades assigned by each of the three
     human raters and on the average grades. A preliminary evaluation with data from 391
     students shows the following results: (1) The higher the grade level, the larger the
     prediction error. (2) The rankings of the prediction errors of the models trained on
     individual human raters differ across the three grade levels. (3) The model trained on
     the average grades performs best at all three grade levels regardless of the kernel.
     These results suggest that examining the prediction errors of the models in detail at
     different grade levels is worthwhile for finding the best match between raters' grades
     and models.
Relation Proceedings of the 17th International Conference on Computers in Education
Data type conference
URI http://nccur.lib.nccu.edu.tw/handle/140.119/39726
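The evaluation described in the abstract can be illustrated with a minimal sketch: SVM regression models with linear, quadratic, and RBF kernels are trained on the grades from each human rater and on the average grades, and prediction error is then examined per grade level. The sketch below assumes scikit-learn's SVR, TF-IDF features, five-fold cross-validated predictions, and a three-way split of the 0-10 scale into grade levels; the paper's actual feature representation, evaluation protocol, and grade-level definition are not stated in this record, so these are placeholders.

```python
# Hypothetical sketch of the described evaluation, not the paper's implementation.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

def grade_level(score):
    """Assumed split of 0-10 grades into three grade levels."""
    return "low" if score <= 3 else "mid" if score <= 6 else "high"

def evaluate(responses, rater_grades):
    """responses: list of answer texts; rater_grades: dict of rater name -> list of 0-10 grades."""
    X = TfidfVectorizer().fit_transform(responses)  # assumed text features
    targets = dict(rater_grades)
    targets["average"] = np.mean(list(rater_grades.values()), axis=0)
    kernels = {
        "linear": SVR(kernel="linear"),
        "quadratic": SVR(kernel="poly", degree=2),
        "rbf": SVR(kernel="rbf"),
    }
    for rater, y in targets.items():
        y = np.asarray(y, dtype=float)
        levels = np.array([grade_level(s) for s in y])
        for name, model in kernels.items():
            # Cross-validated predictions stand in for whatever train/test split the paper used.
            pred = cross_val_predict(model, X, y, cv=5)
            for level in ("low", "mid", "high"):
                mask = levels == level
                if mask.any():
                    mae = np.abs(pred[mask] - y[mask]).mean()
                    print(f"{rater:10s} {name:9s} {level:4s} MAE={mae:.2f}")
```

Comparing the per-level errors across the rater-specific models and the average-grade model mirrors the abstract's findings: the ranking of models can differ by grade level, which is why the per-level breakdown is worth examining.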