Academic Output - Theses

Title 評審間意見一致程度之問題探討
The Problems of Interrater Agreement
Author 楊麗華
Yang, Li-Hua
Contributors 江振東
Chiang, Jeng-Tung
楊麗華
Yang, Li-Hua
Keywords Loglinear model
Interrater agreement
Kappa
Date 1996
Upload time 28-April-2016 11:48:47 (UTC+8)
Abstract This study examines the problem of measuring interrater agreement. Besides reviewing the kappa and weighted kappa coefficients and their applications, we discuss how such problems can be analyzed by adding a parameter representing agreement to a loglinear model. The kappa coefficient gives practitioners a quick index of interrater agreement, while a loglinear-model analysis yields richer information; we therefore contrast the kappa coefficient with the loglinear-model approach. In addition, for agreement among three raters we introduce the coefficient $K_{ABC}$ and contrast it with the model $\log m_{ijk} = u + \lambda_i^A + \lambda_j^B + \lambda_k^C + \delta\,I(i=j=k)$; a simulation experiment tabulates the possible range of $\hat{\delta}$ corresponding to each value of $K_{ABC}$, for the user's reference.
The focus of this study is on the measurement of interrater agreement. Analyses in terms of kappa-type coefficients and in terms of loglinear-model techniques are reviewed, and issues related to the two approaches are addressed. In addition, a new kappa-type coefficient, $K_{ABC}$, is introduced to provide an indication of agreement among three raters. Its possible connections with the coefficient $\delta$ in the model $\log m_{ijk} = u + \lambda_i^A + \lambda_j^B + \lambda_k^C + \delta\,I(i=j=k)$ are studied.
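For reference, the quantities named in the abstracts can be written out explicitly. Cohen's kappa compares the observed agreement proportion $P_o$ with the agreement $P_e$ expected by chance, and the loglinear agreement models add a single diagonal parameter $\delta$ to the independence model; the two-rater form below is the Tanner-Young model reviewed in Chapter 3 of the thesis, and the three-rater form is the one quoted in the abstract:

$$\kappa = \frac{P_o - P_e}{1 - P_e}, \qquad \log m_{ij} = u + \lambda_i^A + \lambda_j^B + \delta\,I(i=j),$$
$$\log m_{ijk} = u + \lambda_i^A + \lambda_j^B + \lambda_k^C + \delta\,I(i=j=k).$$

Under independence ($\delta = 0$) the expected counts factorize; a positive $\hat{\delta}$ inflates the diagonal cells, which is the loglinear analogue of agreement beyond chance.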
References 1. Agresti, A. (1988). "A model for agreement between ratings on an ordinal scale." Biometrics, 44, 539-548.
2. Agresti, A. (1990). Categorical Data Analysis. New York: Wiley.
3. Agresti, A. (1992). "Modelling patterns of agreement and disagreement." Statistical Methods in Medical Research, 201-218.
4. Bennett, E. M., Alpert, R. and Goldstein, A. C. (1954). "Communications through limited response questioning." Public Opinion Quarterly, 18, 303-308.
5. Blackman, N. J-M. and Koval, J. J. (1993). "Estimating rater agreement in 2 x 2 tables: correction for chance and intraclass correlation." Applied Psychological Measurement, 17, 211-223.
6. Cicchetti, D. V. and Feinstein, A. R. (1990). "High agreement but low kappa: I. The problems of two paradoxes." Journal of Clinical Epidemiology, 43, 543-549.
7. Cicchetti, D. V. and Feinstein, A. R. (1990). "High agreement but low kappa: II. Resolving the paradoxes." Journal of Clinical Epidemiology, 43, 551-558.
8. Cohen, J. (1960). "A coefficient of agreement for nominal scales." Educational and Psychological Measurement, 20, 37-46.
9. Cohen, J. (1968). "Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit." Psychological Bulletin, 70, 213-220.
10. Conger, A. J. (1980). "Integration and generalization of kappas for multiple raters." Psychological Bulletin, 88, 322-328.
11. Darroch, J. N. and McCloud, P. J. (1986). "Category distinguishability and observer agreement." Australian Journal of Statistics, 28, 371-388.
12. Dice, L. R. (1945). "Measures of the amount of ecologic association between species." Ecology, 26, 297-302.
13. Fleiss, J. L. and Cohen, J. (1969). "Large sample standard errors of kappa and weighted kappa." Psychological Bulletin, 72, 323-327.
14. Fleiss, J. L. (1971). "Measuring nominal scale agreement among many raters." Psychological Bulletin, 76, 378-382.
15. Fleiss, J. L. (1975). "Measuring agreement between two judges on the presence or absence of a trait." Biometrics, 31, 651-659.
16. Fleiss, J. L. (1981). Statistical Methods for Rates and Proportions, chapter 13 (2nd ed.). New York: Wiley.
17. Goodman, L. A. and Kruskal, W. H. (1954). "Measures of association for cross classifications." Journal of the American Statistical Association, 49, 732-764.
18. Hubert, L. (1977). "Kappa revisited." Psychological Bulletin, 84, 289-297.
19. James, I. R. (1983). "Analysis of nonagreements among multiple raters." Biometrics, 39, 651-657.
20. Landis, J. R. and Koch, G. G. (1975a). "A review of statistical methods in the analysis of data arising from observer reliability studies (Part I)." Statistica Neerlandica, 29, 101-123.
21. Landis, J. R. and Koch, G. G. (1975b). "A review of statistical methods in the analysis of data arising from observer reliability studies (Part II)." Statistica Neerlandica, 29, 151-161.
22. Landis, J. R. and Koch, G. G. (1977a). "The measurement of observer agreement for categorical data." Biometrics, 33, 159-174.
23. Landis, J. R. and Koch, G. G. (1977b). "An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers." Biometrics, 33, 363-374.
24. Light, R. (1971). "Measures of response agreement for qualitative data: some generalizations and alternatives." Psychological Bulletin, 76, 365-377.
25. Mak, T. K. (1988). "Analysing intraclass correlation for dichotomous variables." Applied Statistics, 37, 344-352.
26. Maxwell, A. E. and Pilliner, A. E. G. (1968). "Deriving coefficients of reliability and agreement for ratings." British Journal of Mathematical and Statistical Psychology, 21, 105-116.
27. Rogot, E. and Goldberg, I. D. (1966). "A proposed index for measuring agreement in test-retest studies." Journal of Chronic Diseases, 19, 991-1006.
28. Scott, W. A. (1955). "Reliability of content analysis: the case of nominal scale coding." Public Opinion Quarterly, 19, 321-325.
29. Tanner, M. A. and Young, M. A. (1985). "Modeling agreement among raters." Journal of the American Statistical Association, 80, 175-180.
30. Zwick, R. (1988). "Another look at interrater agreement." Psychological Bulletin, 103, 374-378.
Description Master's thesis
National Chengchi University
Department of Statistics
83354013
Source http://thesis.lib.nccu.edu.tw/record/#B2002002795
Type thesis
Identifier B2002002795
URI http://nccur.lib.nccu.edu.tw/handle/140.119/87309
Table of contents Chapter 1. Introduction..........1
Section 1. Research motivation and objectives..........1
Section 2. Outline of the chapters..........3
Chapter 2. Measuring agreement: the kappa and weighted kappa coefficients..........5
Section 1. Measuring agreement between two raters..........5
Section 2. Measuring agreement among three raters..........10
Chapter 3. Loglinear models for agreement..........15
Section 1. Fitting the independence model..........15
Section 2. The Tanner and Young approach..........16
Section 3. The Darroch and McCloud approach..........21
Section 4. Illustrative examples..........24
Chapter 4. Issues in measuring agreement..........27
Section 1. Comparing K_ABC with other agreement coefficients..........27
Section 2. Comparing K_ABC with the three-way loglinear model..........31
Section 3. Comparing other coefficients with loglinear models..........33
Chapter 5. Conclusion..........35
Appendix A..........36
Appendix B..........41
References..........42
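As a rough illustration of the Chapter 3 approach (a minimal sketch only: the 3x3 table of counts below is hypothetical, not taken from the thesis), a Tanner-Young style agreement model can be fitted as a Poisson loglinear regression with an indicator for the diagonal cells, and Cohen's kappa computed from the same table:

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical 3x3 cross-classification of two raters (made-up counts).
counts = np.array([[22, 3, 1],
                   [2, 25, 4],
                   [1, 2, 30]])

# Cohen's kappa: (observed - chance) / (1 - chance) agreement.
n = counts.sum()
p_o = np.trace(counts) / n                              # observed agreement
p_e = counts.sum(axis=1) @ counts.sum(axis=0) / n**2    # chance agreement
kappa = (p_o - p_e) / (1 - p_e)

# Independence-plus-diagonal loglinear model: one delta for all i = j cells.
i, j = np.indices(counts.shape)
cells = pd.DataFrame({"count": counts.ravel(),
                      "a": pd.Categorical(i.ravel()),
                      "b": pd.Categorical(j.ravel()),
                      "diag": (i.ravel() == j.ravel()).astype(int)})
fit = smf.glm("count ~ a + b + diag", data=cells,
              family=sm.families.Poisson()).fit()

print(f"kappa = {kappa:.3f}, delta-hat = {fit.params['diag']:.3f}")

The fitted coefficient on the diagonal indicator plays the role of $\hat{\delta}$; comparing it with kappa for the same table mirrors the contrast the thesis draws between the two approaches.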