Title 以項目反應模式來探討評分者的評分標準與嚴苛性 (Using Item Response Models to Investigate Raters' Rating Criteria and Severity)
Author 王文中 (Wang, Wen-Chung)
Keywords vector ; scoring ; matrix ; design ; rating ; rater ; item response model ; standard
Date 1993-09
Date uploaded 4-Jun-2016 13:52:03 (UTC+8)
Abstract It has been argued that multiple-choice items can only measure low-level learning or ability. However, because no raters are involved, the fairness of their scoring has justified their popularity. If essay questions or other formats that require raters are desirable, the fairness of scoring should be guaranteed, or raters' bias should be detected. Traditionally, more than one rater is employed to reduce the random errors caused by raters, but little has been done to detect nonrandom errors (raters' bias). Two kinds of bias are proposed in this paper: overall rating severity and step severities. These definitions are somewhat similar to those in the partial credit model (Masters, 1982). In this paper, a generalized Rasch model, the random coefficients multinomial logit model (RCML), is applied to estimate raters' bias. The RCML allows users to define parameters that represent raters' overall severity and step severities. The data come from 435 fourth-grade students who took a mathematics test consisting of 20 multiple-choice items and two open-ended items; each open-ended item was scored by two different raters. Five models are proposed and compared. (A notational sketch of such a rater-severity model is given below the record.)
Relation 教育與心理研究, 16, 83-105
Relation Journal of Education & Psychology
Type article
URI http://nccur.lib.nccu.edu.tw/handle/140.119/97591
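
The rater-severity idea described in the abstract can be illustrated with a facets-style extension of the partial credit model. This is only a minimal notational sketch under assumed symbols (θ for examinee ability, δ for item step difficulties, λ and τ for a rater's overall and step severities); it is not claimed to be the exact RCML parameterization used in the paper.

% Probability that examinee n receives score x on open-ended item i from rater j,
% in a partial-credit-style model extended with rater effects (illustrative only):
%   theta_n    : ability of examinee n
%   delta_{ik} : difficulty of step k of item i
%   lambda_j   : overall severity of rater j (higher = harsher scoring overall)
%   tau_{jk}   : step severity of rater j at step k (harshness at a particular step)
\[
P(X_{nij}=x) \;=\;
\frac{\exp\!\Big(\sum_{k=1}^{x}\big(\theta_n-\delta_{ik}-\lambda_j-\tau_{jk}\big)\Big)}
     {\sum_{h=0}^{m_i}\exp\!\Big(\sum_{k=1}^{h}\big(\theta_n-\delta_{ik}-\lambda_j-\tau_{jk}\big)\Big)},
\qquad x=0,1,\dots,m_i,
\]
% with the convention that the empty sum (x = 0 or h = 0) equals 0.
% Setting lambda_j = tau_{jk} = 0 for all j, k recovers the partial credit model
% (Masters, 1982); fixing only tau_{jk} = 0 leaves a single overall severity per rater.

In an RCML-type formulation, constraints of this kind are typically expressed through a design matrix and a scoring matrix, which is presumably what the keywords "matrix" and "design" refer to; the five compared models would then differ in which severity parameters are freed or fixed to zero.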