Academic Output - Theses

Title 2018 至 2022 年學測和指考英文閱讀測驗困難題目之分析
Analysis of Difficult English Reading Comprehension Test Items in the GSAT and AST from 2018 to 2022
Author 黃品瑄
Huang, Pin-Hsuan
Advisor 尤雪瑛
Yu, Hsueh-Ying
Keywords 學測
指考
英文閱讀測驗
閱讀能力
難題分析
GSAT
AST
English reading comprehension section
Reading constructs
Item difficulty analysis
Date 2022
Uploaded 8-Feb-2023 15:21:36 (UTC+8)
Abstract 本研究旨在探討近五年(2018-2022)學測與指考英文閱讀測驗試題中,困難題所測的閱讀能力,以及影響該題目難度之可能因素。研究採用質性分析法,以 Karakoc (2019) 修正版之閱讀能力表為依據,分析 28 道困難題,並以可能影響難度之變因分析困難題及其配合的文章。四項變因包括:「文章可讀性」、「文章體裁」、「問題類型」、「誘答力」。研究結果顯示,困難題常考的四種閱讀能力為:「掌握文章細節」、「掌握文章主旨」、「辨認段落主旨及細節」以及「由上下文猜測詞意」(由多至寡排列)。有關難題因素分析的結果顯示:「文章可讀性」和「文章體裁」並非影響答題難度的關鍵因素,「問題類型」和「誘答力」則較可能影響題目難度。以問題類型來看,可能的困難來自題幹敘述不清楚、選項經過改述、答題線索分散,抑或是有些問題需要資訊整合或推論的能力。從誘答力來看,誘答項會測驗「句子間的邏輯關係」或給予「和學生基模有關但和文章無關」之資訊,且其在結構、內容或用字上與正答相似。綜上因素皆會交錯影響閱讀測驗答題之難度。本研究於文末提出與閱讀教學和未來研究有關之建議,希冀此試題分析得以為英語教學帶來正向影響。
The present study aims to investigate the reading constructs measured in the difficult English reading comprehension test items of the GSAT and AST from 2018 to 2022, and to explore possible factors contributing to their difficulty. A qualitative item analysis was conducted: a total of 28 difficult test items were analyzed against a revised version of Karakoc’s (2019) reading constructs list. The difficult items and their accompanying reading passages were then examined with four difficulty predictor variables: (a) Readability, (b) Text Structure Types, (c) Question Types, and (d) Plausibility of Distractors.
The findings indicated that four types of reading constructs were commonly measured in the difficult items. The most frequently tested construct was “Understanding facts, details and specific information,” followed by “Understanding a main idea and general information,” “Identifying both the main idea and the details of a paragraph,” and “Guessing the meaning of an unknown word from the context.” Regarding the potential factors of item difficulty, the results showed that “Readability” and “Text Structure Types” might not be critical contributors, while “Question Types” and “Plausibility of Distractors” might be. For textually explicit items, difficulty resulted from three factors: ambiguity of the stem, paraphrased options, and the distance between clues in the passage. Textually implicit items were difficult because they often required students to synthesize information and make inferences. As for distractors, they were plausible when they drew on students’ misconceptions or resembled the correct answer. Two such misconceptions, concerning “logical relations between sentences” and “irrelevant schemata,” were found, along with three types of distractor similarity: in structure, in content, and in wording. All of these factors may interact to influence students’ difficulty in answering reading comprehension questions.
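Among the four predictor variables, Readability is the one scored with a formula: Flesch’s (1948) Reading Ease, cited in the reference list below, combines average sentence length and average syllables per word. A minimal Python sketch of that formula (the vowel-group syllable counter is a rough heuristic and the function name is illustrative, not the study’s actual tooling):

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch (1948): 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    # Rough syllable estimate: count contiguous vowel groups, minimum one per word.
    n_syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)
```

Higher scores mean easier text: short monosyllabic sentences score above 100, while dense multisyllabic academic prose can score far lower.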
References
Alderson, J. C. (2000). Assessing reading. Cambridge University Press.
Alderson, J.C., and Lukmani, Y. (1989). Cognition and reading: cognitive levels as embodied in test questions. Reading in a Foreign Language, 5(2), 253-270.
Alonzo, J., Basaraba, D., Tindal, G., & Carriveau, R. S. (2009). They read, but how well do they understand? An empirical look at the nuances of measuring reading
comprehension. Assessment for Effective Intervention, 35(1), 34-44.
Ascalon, M. E., Meyers, L. S., Davis, B. W., & Smits, N. (2007). Distractor similarity and item-stem structure: Effects on item difficulty. Applied Measurement in Education, 20(2), 153-170.
Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford university press.
Baghaei, P., & Ravand, H. (2015). A cognitive processing model of reading comprehension in English as a foreign language using the linear logistic test model. Learning and Individual Differences, 43, 100-105.
Bailin, A., & Grafstein, A. (2016). Readability: Text and context. Springer.
Barton, M. L. (1997). Addressing the literacy crisis: Teaching reading in the content areas. National Association of Secondary School Principals, 81(587), 22-30.
Beck, I. L., McKeown, M. G., Sinatra, G. M., & Loxterman, J. A. (1991). Revising social studies text from a text-processing perspective: Evidence of improved comprehensibility. Reading Research Quarterly, 26(3), 251-276.
Best, R. M., Floyd, R. G., & McNamara, D. S. (2008). Differential competencies contributing to children's comprehension of narrative and expository texts. Reading Psychology, 29(2), 137-164.
Brown, H.D. (2001). Teaching by principles: An interactive approach to language pedagogy. 2nd ed. New York: Longman.
Brown, H.D. (2004). Language assessment: Principles and classroom practices. New York: Pearson Education.
Buck, G. (2001). Assessing listening (pp. 115-153). Cambridge University Press.
Carrell, P. L. (1987). Readability in ESL. Reading in a Foreign Language, 4, 21-40.
Carrell, P. L., Devine, J., & Eskey, D. E. (Eds.). (1988a). Interactive approaches to second language reading. Cambridge University Press.
Carrell, P. L. (1988b). Some causes of text-boundedness and schema interference in ESL reading. In P. L. Carrel, J. Devine, & E. Eskey (Eds.), Interactive approaches to second language reading (pp. 103-113). New York: Cambridge
University Press.
Case, S. M., & Swanson, D. B. (1998). Constructing written test questions for the basic and clinical sciences (2nd ed., pp. 22-25). Philadelphia: National Board of Medical Examiners
Chikalanga, I. (1992). A suggested taxonomy of inferences for the reading teacher. Reading in a Foreign Language, 8(2), 697-709.
Chen, H. C. (2009). An analysis of the reading skills measured in reading comprehension tests on the Scholastic Achievement English Test (SAET) and the Department Required English Test (DRET). Unpublished master's thesis. Taipei: National Taiwan Normal University.
Clinton, V., Taylor, T., Bajpayee, S., Davison, M. L., Carlson, S. E., & Seipel, B.(2020). Inferential comprehension differences between narrative and expository
texts: a systematic review and meta-analysis. Reading and Writing, 33(9), 2223-2248.
Collins, J. (2006). Writing multiple-choice questions for continuing medical education activities and self-assessment modules. Radiographics, 26, 543–551.
College Entrance Examination Center. (2016a). 107 GSAT English test-preparation guide. Retrieved from: https://reurl.cc/QbjXXM
College Entrance Examination Center. (2016b). 107 AST English test-preparation guide. Retrieved from: https://reurl.cc/aGkVRX
College Entrance Examination Center. (2019). 111 GSAT English test-preparation guide. Retrieved from: https://reurl.cc/qNOL33
Davey, B. (1988). Factors Affecting the Difficulty of Reading Comprehension Items for Successful and Unsuccessful Readers. The Journal of Experimental Education, 56(2), 67–76.
Davis, F. B. (1944). Fundamental factors of comprehension in reading. Psychometrika, 9(3), 185-197.
Dale, E. & Chall, J. (1948). A formula for predicting readability. Educational Research Bulletin, 27, 37–54.
Flesch, R. (1948). A new readability yardstick. Journal of Applied Psychology, 32, 221-233.
Gough, P. B. (1972). One second of reading. Visible Language, 6(4), 291-320.
Goodman, K. S. (1967). Reading: A psycholinguistic guessing game. In H. Singer & R. Ruddell (Eds.), Theoretical models and processes of reading. Newark, Delaware: International Reading Association.
Grabe, W. (1991). Current developments in second language reading research. TESOL Quarterly, 25(3), 375–406.
Grabe, W., & Stoller, F. L. (2002). The nature of reading abilities. In Teaching and researching reading, 9-39. Boston: Pearson Education.
Grabe, W. (2002). Narrative and expository macro-genres. Genre in the classroom: Multiple perspectives, 249-267.
Gray, W.S. & Leary, B.E. (1935). What makes a book readable. Chicago: University of Chicago Press.
Gray, W.S. (1960). The major aspects of reading. In H. Robinson (ed.), Sequential development of reading abilities (Vol. 90, pp. 8-24). Chicago: Chicago University Press.
Graesser, A. C., McNamara, D. S., & Louwerse, M. M. (2003). What do readers need to learn in order to process coherence relations in narrative and expository text?
Rethinking reading comprehension (pp. 82–99). New York: Guilford Press.
Hallgren, K. A. (2012). Computing inter-rater reliability for observational data: an overview and tutorial. Tutorials in Quantitative Methods for Psychology, 8(1), 23.
Hsu, W.L. (2005). An analysis of the reading comprehension questions in the JCEE English test. Unpublished Master Thesis. Kaohsiung: National Kaohsiung
Normal University.
Huang, T.S. (1994). A qualitative analysis of the JCEE English tests. Taipei: The Crane Publishing Company.
Hudson, T. (1996). Assessing second language academic reading from a communicative competence perspective: Relevance for TOEFL 2000. Princeton, NJ: Educational Testing Service.
Hughes, A. (2003) Testing for language teachers. New York: Cambridge University Press.
Jamieson, J., Jones, S., Kirsch, I., Mosenthal, P., & Taylor, C. (2000). TOEFL 2000 framework. Princeton, NJ: Educational Testing Service.
Jang, E. E. (2009). Cognitive diagnostic assessment of L2 reading comprehension ability: Validity arguments for fusion model application to language assessment. Language Testing, 26, 31-73.
Jeng, H. S. (2001). A comparison of the English reading comprehension passages and items in the College Entrance Examinations of Hong Kong, Taiwan and Mainland China. Concentric: Studies in Linguistics, 27(2), 217-251.
Jeng, H., Chen, L. H., Hadzima, A. M., Lin, P. Y., Martin, R., Yeh, H. N., ... & Wu, H. C. (1999). An Experiment on Designing English Proficiency Tests of Two
Difficulty Levels for the College Entrance Examination in Taiwan. In Second International Conference on English Testing in Asia held at Seoul National University, Korea, included in the Proceedings (pp. 12-38).
Just, M. A., & Carpenter, P. A. (1992). A capacity theory of comprehension: individual differences in working memory. Psychological Review, 99(1), 122.
Karakoc, A. I. (2019). Reading and listening comprehension subskills: The match between theory, coursebooks, and language proficiency tests. Advances in Language and Literary Studies, 10(4), 166-185.
Kincaid, J. P., Fishburne Jr, R. P., Rogers, R. L., & Chissom, B. S. (1975). Derivation of new readability formulas (automated readability index, fog count and flesch
reading ease formula) for navy enlisted personnel. Naval Technical Training Command Millington TN Research Branch.
Kirby, J. R. (1988). Style, strategy, and skill in reading. In Learning strategies and learning styles (pp. 229-274). Springer, Boston, MA.
Kirsch, I. S., & Mosenthal, P. B. (1990). Exploring document literacy: Variables underlying the performance of young adults. Reading research quarterly, 5-30.
Klare, G. R. (1968). The role of word frequency in readability. Elementary English, 45(1), 12-22.
Lai, H., Gierl, M. J., Touchie, C., Pugh, D., Boulais, A., & De Champlain, A. (2016). Using automatic item generation to improve the quality of MCQ distractors. Teaching and Learning in Medicine, 28, 166–173.
Lan, W. H., & Chern, C. L. (2010). Using Revised Bloom's Taxonomy to analyze reading comprehension questions on the SAET and the DRET. 當代教育研究季刊, 18(3), 165-206.
Lin, C. (2010). Evaluating readability of a university Freshman EFL reader. Studies in English for professional communications and applications, 75-88.
Livingston, S. A. (2009). Constructed-Response Test Questions: Why We Use Them; How We Score Them. R&D Connections. Number 11. Educational Testing Service.
Lu, J. Y. (2002). An analysis of the reading comprehension test given in the English Subject Ability Test in Taiwan and its pedagogical implications. Unpublished
Master’s Thesis. Taipei: National Chengchi University.
Meyer, B. J. (2017). Prose Analysis: Purposes, Procedures, and Problems 1. In Understanding expository text (pp. 11-64). Routledge.
McCarthy, M. (1991). Discourse analysis for language teachers. Cambridge University Press.
McCormick, S. (1992). Disabled readers' erroneous responses to inferential comprehension questions: Description and analysis. Reading Research Quarterly, 27(1), 55-77.
McHugh, M. L. (2012). Interrater reliability: the kappa statistic. Biochemia medica, 22(3), 276-282.
McNamara, D. S., Graesser, A. C., & Louwerse, M. M. (2012). Sources of text difficulty: Across genres and grades. Measuring up: Advances in how we assess reading ability, 89-116.
Mitkov, R. (2003). Computer-aided generation of multiple-choice tests. In Proceedings of the HLT-NAACL 03 workshop on Building educational applications using natural language processing (pp. 17-22).
Mitkov, R., Varga, A., & Rello, L. (2009). Semantic similarity of distractors in
multiple-choice tests: extrinsic evaluation. In Proceedings of the workshop on geometrical models of natural language semantics (pp. 49-56).
Mikulecky, B. S., & Jeffries, L. (2007). Advanced reading power: Extensive reading, vocabulary building, comprehension skills, reading faster. White Plains, NY: Pearson.
Munby, J. (1978) Communicative Syllabus Design. Cambridge, Cambridge University Press.
Nemati, M. (2003). The relationship between topic difficulty and mode of discourse: An in-depth study of EFL writers production, recognition, and attitude. Iranian
Journal of Applied Linguistics, 6(2), 87-116.
Nuttall, C. (2005). Teaching Reading Skills in a foreign language. (3rd ed.) Macmillan Education
Ozuru, Y., Rowe, M., O'Reilly, T., & McNamara, D. S. (2008). Where's the difficulty in standardized reading tests: The passage or the question? Behavior Research
Methods, 40(4), 1001-15.
Perkins, K., & Brutten, S. R. (1988). An item discriminability study of textually explicit, textually implicit, and scripturally implicit questions. RELC Journal,
19(2), 1-11.
Rumelhart, D. E. (1977). Toward an interactive model of reading. In S. Dornic (Ed.), Attention and performance VI, (pp. 573-603). Hillsdale, NJ: Erlbaum.
Sáenz, L. M., & Fuchs, L. S. (2002). Examining the reading difficulty of secondary students with learning disabilities: Expository versus narrative text. Remedial
and Special Education, 23(1), 31-41.
Seddon, G.M. (1978). The properties of Bloom’s taxonomy of educational objectives for the cognitive domain. Review of Educational Research,48(2), 303-323.
Spencer, M., Gilmour, A. F., Miller, A. C., Emerson, A. M., Saha, N. M., & Cutting, L. E. (2019). Understanding the influence of text complexity and question type on reading outcomes. Reading and writing, 32(3), 603-637.
Stenner, A. J. (1996). Measuring reading comprehension with the Lexile framework.
Smith, R. L., & Smith, J. K. (1988). Differential use of item information by judges using Angoff and Nedelsky procedures. Journal of Educational Measurement, 25, 259–274.
Tarrant, M., Ware, J., & Mohammed, A. M. (2009). An assessment of functioning and non-functioning distractors in multiple-choice questions: A descriptive analysis.
BMC Medical Education, 9(40), 1–8.
Testa, S., Toscano, A., & Rosato, R. (2018). Distractor efficiency in an item pool for a statistics classroom exam: Assessing its relation with item cognitive level classified according to Bloom's taxonomy. Frontiers in Psychology, 9, 1585.
Towns, M. H. (2014). Guide to developing high-quality, reliable, and valid multiple-choice assessments. Journal of Chemical Education, 91, 1426–1431.
Urquhart, A. H., & Weir, C. J. (2014). Reading in a second language: Process, Product and Practice. Routledge.
Vacc, N. A., Loesch, L. C., & Lubik, R. E. (2001). Writing multiple-choice test items. In G. R. Walz & J. C. Bleuer (Eds.), Assessment: Issues and challenges for the
millennium (pp. 215–222). Greensboro, NC: ERIC Clearinghouse on Counseling and Student Services.
Wangru, C. (2016). Vocabulary teaching based on semantic-field. Journal of Education and Learning, 5(3), 64-71.
Warrens, M. J. (2015). Five ways to look at Cohen's kappa. Journal of Psychology & Psychotherapy, 5(4), 1.
Weir, C., Hawkey, R., Green, A., & Devi, S. (2012). The cognitive processes underlying the academic reading construct as measured by IELTS. IELTS collected papers, 2, 212-269.
Weir, C.J., & Porter, D. (1996). The multi-divisible or unitary nature of reading: The language tester between Scylla and Charybdis. Reading in a Foreign Language, 10, 1-19.
Wolf, D. F. (1993). Issues in reading comprehension assessment: Implications for the development of research instruments and classroom tests. Foreign Language Annals, 26(3), 322-331
Description Master's thesis
National Chengchi University
Department of English
109551015
Source http://thesis.lib.nccu.edu.tw/record/#G0109551015
Type thesis
URI http://nccur.lib.nccu.edu.tw/handle/140.119/143334
Table of Contents
List of Tables
Chinese Abstract
English Abstract

Chapter
1. Introduction
1.1 Background and Motivation
1.2 Research Questions
1.3 Significance of the Study
2. Literature Review
2.1 Models of Reading Process
2.2 Reading Skills and Subskills
2.3 Item Types in Reading Comprehension Tests
2.4 Testing Reading Comprehension in GSAT and AST English
2.5 Factors Affecting Item Difficulty in Reading Comprehension Tests
2.5.1 Passage Variables
2.5.2 Question Variables
2.5.3 Conclusion
3. Methodology
3.1 Data Collection
3.2 The Instruments
3.2.1 The Reading Constructs List
3.2.2 The List of Item Difficulty Predictor Variables
3.3 Data Analysis
3.3.1 Data Coding Procedure
3.3.2 Inter-Rater Reliability
4. Results and Discussion
4.1 Frequency Distribution of the Tested Reading Constructs
4.2 Analysis of Target Reading Passages and Difficult Test Items
4.2.1 Readability of the Target Reading Passages
4.2.2 Text Structure Types of the Target Reading Passages
4.2.3 Question Types of the Difficult Test Items
4.2.4 Plausibility of Distractors in Difficult Test Items
5. Conclusion
5.1 Summary of the Results
5.2 Pedagogical Implications
5.3 Limitations of the Study and Suggestions for Future Research
References
Appendices
A. Coding Training File
B. Readability of Passages Without Difficult Items in GSAT and AST
C. Numbers of Text Types in Passages Without Difficult Items
D. Examples of Textually-Explicit Items and Their Passages
E. Examples of Textually-Implicit Items and Their Passages
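Section 3.3.2 above reports inter-rater reliability for the coding procedure; a standard statistic for two raters assigning nominal codes is Cohen's kappa (see McHugh, 2012, and Warrens, 2015, in the reference list). A minimal sketch for two equal-length lists of category labels — an illustration of the statistic, not the study's actual computation:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e): chance-corrected agreement."""
    assert len(rater1) == len(rater2) and len(rater1) > 0
    n = len(rater1)
    # Observed agreement: proportion of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement from each rater's marginal category counts.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2[k] for k in c1.keys() & c2.keys()) / (n * n)
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)
```

Kappa is 1.0 for perfect agreement and near 0 when agreement is no better than chance, which is why it is preferred over raw percent agreement for coding studies.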
dc.format.extent 1725975 bytes-
dc.format.mimetype application/pdf-
dc.source.uri (資料來源) http://thesis.lib.nccu.edu.tw/record/#G0109551015en_US
dc.subject (關鍵詞) 學測zh_TW
dc.subject (關鍵詞) 指考zh_TW
dc.subject (關鍵詞) 英文閱讀測驗zh_TW
dc.subject (關鍵詞) 閱讀能力zh_TW
dc.subject (關鍵詞) 難題分析zh_TW
dc.subject (關鍵詞) GSATen_US
dc.subject (關鍵詞) ASTen_US
dc.subject (關鍵詞) English reading comprehension sectionen_US
dc.subject (關鍵詞) Reading constructsen_US
dc.subject (關鍵詞) Item difficulty analysisen_US
dc.title (題名) 2018 至 2022 年學測和指考英文閱讀測驗困難題目之分析zh_TW
dc.title (題名) Analysis of Difficult English Reading Comprehension Test Items in the GSAT and AST from 2018 to 2022en_US
dc.type (資料類型) thesisen_US
dc.relation.reference (參考文獻) Alderson, C. J., & Alderson, J. C. (2000). Assessing
reading. Cambridge University Press.
Alderson, J.C., and Lukmani, Y. (1989). Cognition and reading: cognitive levels as embodied in test questions. Reading in a Foreign Language, 5(2), 253-270.
Alonzo, J., Basaraba, D., Tindal, G., & Carriveau, R. S. (2009). They read, but how well do they understand? An empirical look at the nuances of measuring reading
comprehension. Assessment for Effective Intervention, 35(1), 34-44.
Ascalon, M. E., Meyers, L. S., Davis, B. W., & Smits, N. (2007). Distractor similarity and item-stem structure: Effects on item difficulty. Applied Measurement in Education, 20(2), 153-170.
Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford university press.
Baghaei, P., & Ravand, H. (2015). A cognitive processing model of reading comprehension in English as a foreign language using the linear logistic test model. Learning and Individual Differences, 43, 100-105.
Bailin, A., & Grafstein, A. (2016). Readability: Text and context. Springer.
Barton, M. L. (1997). Addressing the literacy crisis: Teaching reading in the content areas. National Association of Secondary School Principals, 81(587), 22-30.
Beck, L.L., McKeown, M.G., Sinatra, G.M., and Loxterman, J.A. (1991). Revising social studies text from a text-processing perspective: evidence of improved comprehensibility. Reading Research Quarterly, 26(3), 251-276.
Best, R. M., Floyd, R. G., & McNamara, D. S. (2008). Differential competencies contributing to children`s comprehension of narrative and expository texts. Reading Psychology, 29(2), 137-164.
Brown, H.D. (2001). Teaching by principles: An interactive approach to language pedagogy. 2nd ed. New York: Longman.
Brown, H.D. (2004). Language assessment: Principles and classroom practices. New York: Pearson Education.
Buck. G. (2001). Assessing Listening, pp. 115-153. Cambridge University Press.
Carrell, P. L. (1987). Readability in ESL. Reading in a Foreign Language, 4, 21-40.
Carrell, P. L., Devine, J., & Eskey, D. E. (Eds.). (1988a). Interactive approaches to second language reading. Cambridge University Press.
Carrell, P. L. (1988b). Some causes of text-boundedness and schema interference in ESL reading. In P. L. Carrel, J. Devine, & E. Eskey (Eds.), Interactive approaches to second language reading (pp. 103-113). New York: Cambridge
University Press.
Case, S. M., & Swanson, D. B. (1998). Constructing written test questions for the basic and clinical sciences (2nd ed., pp. 22-25). Philadelphia: National Board of Medical Examiners
Chikalanga, I. (1992). A suggested taxonomy of inferences for the reading teacher. Reading in a Foreign Language, 8(2), 697-709.
Chen, H.C (2009). An analysis of the reading skills measured in reading comprehension tests on the Scholastic Achievement English test (SAET) and the Department Required English Test (DRET). Unpublished Master Thesis. Taipei:
National Taiwan Normal University.
Clinton, V., Taylor, T., Bajpayee, S., Davison, M. L., Carlson, S. E., & Seipel, B.(2020). Inferential comprehension differences between narrative and expository
texts: a systematic review and meta-analysis. Reading and Writing, 33(9), 2223-2248.
Collins, J. (2006). Writing multiple-choice questions for continuing medical education activities and self-assessment modules. Radiographics,26, 543–551.
College Entrance Examination Center. (2016a). 107 GSAT English test-preparation guide. Retrieved from: https://reurl.cc/QbjXXM
College Entrance Examination Center. (2016b). 107 AST English test-preparation guide. Retrieved from: https://reurl.cc/aGkVRX
College Entrance Examination Center. (2019). 111 GSAT English test-preparation guide. Retrieved from: https://reurl.cc/qNOL33
Davey, B. (1988). Factors Affecting the Difficulty of Reading Comprehension Items for Successful and Unsuccessful Readers. The Journal of Experimental Education, 56(2), 67–76.
Davis, F. B. (1944). Fundamental factors of comprehension in reading.Psychometrika, 9(3), 185-197.
Dale, E. & Chall, J. (1948). A formula for predicting readability. Educational Research Bulletin, 27, 37–54.
Flesch, R. (1948). A new readability yardstick. Journal of Applied Psychology, 32,221-233.
Fuchs, L. S. (2002). Examining the Reading Difficulty of Secondary Students with Learning Disabilities. Remedial & Special Education, 23(1), 31-41.
Gough, P. B. (1972). One second of reading. Visible Language, 6(4), 291-320.
Goodman, K.S. (1967). Reading: A psycholinguistic guessing game. In H. Singer & R. Ruddell (Eds.), Theoretical models and processes of reading. Network, Delaware: International Reading Association.
Grabe, W. (1991). Current Developments in Second Language Reading Research.TESOL Quarterly, 25(3), 375–406.
Grabe, W., & Stoller, F. L. (2002). The nature of reading abilities. In Teaching and researching reading, 9-39. Boston: Pearson Education.
Grabe, W. (2002). Narrative and expository macro-genres. Genre in the classroom: Multiple perspectives, 249-267.
Gray, W.S. & Leary, B.E. (1935). What makes a book readable. Chicago: University of Chicago Press.
Gray, W.S. (1960). The major aspects of reading. In H. Robinson (ed.), Sequential development of reading abilities (Vol. 90, pp. 8-24). Chicago: Chicago University Press.
Graesser, A. C., McNamara, D. S., & Louwerse, M. M. (2003). What do readers need to learn in order to process coherence relations in narrative and expository text?
Rethinking reading comprehension (pp. 82–99). New York: Guilford Press.
Hallgren, K. A. (2012). Computing inter-rater reliability for observational data: an overview and tutorial. Tutorials in Quantitative Methods for Psychology, 8(1), 23.
Hsu, W.L. (2005). An analysis of the reading comprehension questions in the JCEE English test. Unpublished Master Thesis. Kaohsiung: National Kaohsiung
Normal University.
Huang, T.S. (1994). A qualitative analysis of the JCEE English tests. Taipei: The Crane Publishing Company.
Hudson, T. (1996). Assessing second language academic reading from a communicative competence perspective: Relevance for TOEFL 2000. Princeton, NJ: Educational Testing Service.
Hughes, A. (2003) Testing for language teachers. New York: Cambridge University Press.
Jamieson, J., Jones, S., Kirsch, I., Mosenthal, P., & Taylor, C. (2000). TOEFL 2000 framework. Princeton, NJ: Educational Testing Service.
Jang, E.E. (2009). Cognitive diagnostic assessment of L2 reading comprehension ability: Validity arguments for fusion model application to language assessment. Language Testing, 26, 031-073.
Jeng, H. S. (2001). A comparison of the English reading comprehension passages and items in the College Entrance Examinations of Hong Kong, Taiwan and Mainland China. Concentric: Studies in Linguistics, 27(2), 217-251.
Jeng, H., Chen, L. H., Hadzima, A. M., Lin, P. Y., Martin, R., Yeh, H. N., ... & Wu, H. C. (1999). An Experiment on Designing English Proficiency Tests of Two
Difficulty Levels for the College Entrance Examination in Taiwan. In Second International Conference on English Testing in Asia held at Seoul National University, Korea, included in the Proceedings (pp. 12-38).
Just, M. A., & Carpenter, P. A. (1992). A capacity theory of comprehension: individual differences in working memory. Psychological Review, 99(1), 122.
Karakoc, A. I. (2019). Reading and Listening Comprehension Subskills: The Match between Theory, Coursebooks, and Language Proficiency Tests. Advances in Language and Literary Studies,10(4), 166-185.
Kincaid, J. P., Fishburne Jr, R. P., Rogers, R. L., & Chissom, B. S. (1975). Derivation of new readability formulas (automated readability index, fog count and flesch
reading ease formula) for navy enlisted personnel. Naval Technical Training Command Millington TN Research Branch.
Kirby, J. R. (1988). Style, strategy, and skill in reading. In Learning strategies and learning styles (pp. 229-274). Springer, Boston, MA.
Kirsch, I. S., & Mosenthal, P. B. (1990). Exploring document literacy: Variables underlying the performance of young adults. Reading research quarterly, 5-30.
Klare, G. R. (1968). The role of word frequency in readability. Elementary English, 45(1), 12-22.
Lai, H., Gierl, M. J., Touchie, C., Pugh, D., Boulais, A., & De Champlain, A. (2016). Using automatic item generation to improve the quality of MCQ distractors. Teaching and Learning in Medicine, 28, 166–173.
Lan, W. H., & Chern, C. L. (2010). Using Revised Bloom`s Taxonomy to analyze reading comprehension questions on the SAET and the DRET. 當代教育研究季刊, 18(3), 165-206.
Lin, C. (2010). Evaluating readability of a university Freshman EFL reader. Studies in English for professional communications and applications, 75-88.
Livingston, S. A. (2009). Constructed-Response Test Questions: Why We Use Them; How We Score Them. R&D Connections. Number 11. Educational Testing Service.
Lu, J. Y. (2002). An analysis of the reading comprehension test given in the English Subject Ability Test in Taiwan and its pedagogical implications. Unpublished
Master’s Thesis. Taipei: National Chengchi University.
Meyer, B. J. (2017). Prose Analysis: Purposes, Procedures, and Problems 1. In Understanding expository text (pp. 11-64). Routledge.
McCarthy. (1991). Discourse analysis for language teachers / Michael McCarthy. Cambridge University Press.
McCormick, S. (1992). Disabled readers` erroneous responses to inferential comprehension questions: Description and analysis. Reading Research Quarterly, 27(1), 55-77.
McHugh, M. L. (2012). Interrater reliability: the kappa statistic. Biochemia medica, 22(3), 276-282.
McNamara, D. S., Graesser, A. C., & Louwerse, M. M. (2012). Sources of text difficulty: Across genres and grades. Measuring up: Advances in how we assess reading ability, 89-116.
Mikulecky, B. S., & Jeffries, L. (2007). Advanced reading power: Extensive reading, vocabulary building, comprehension skills, reading faster. White Plains, NY: Pearson.
Mitkov, R. (2003). Computer-aided generation of multiple-choice tests. In Proceedings of the HLT-NAACL 03 workshop on Building educational applications using natural language processing (pp. 17-22).
Mitkov, R., Varga, A., & Rello, L. (2009). Semantic similarity of distractors in multiple-choice tests: Extrinsic evaluation. In Proceedings of the workshop on geometrical models of natural language semantics (pp. 49-56).
Munby, J. (1978). Communicative syllabus design. Cambridge: Cambridge University Press.
Nemati, M. (2003). The relationship between topic difficulty and mode of discourse: An in-depth study of EFL writers' production, recognition, and attitude. Iranian Journal of Applied Linguistics, 6(2), 87-116.
Nuttall, C. (2005). Teaching reading skills in a foreign language (3rd ed.). Macmillan Education.
Ozuru, Y., Rowe, M., O'Reilly, T., & McNamara, D. S. (2008). Where's the difficulty in standardized reading tests: The passage or the question? Behavior Research Methods, 40(4), 1001-1015.
Perkins, K., & Brutten, S. R. (1988). An item discriminability study of textually explicit, textually implicit, and scripturally implicit questions. RELC Journal, 19(2), 1-11.
Rumelhart, D. E. (1977). Toward an interactive model of reading. In S. Dornic (Ed.), Attention and performance VI, (pp. 573-603). Hillsdale, NJ: Erlbaum.
Sáenz, L. M., & Fuchs, L. S. (2002). Examining the reading difficulty of secondary students with learning disabilities: Expository versus narrative text. Remedial and Special Education, 23(1), 31-41.
Seddon, G. M. (1978). The properties of Bloom's taxonomy of educational objectives for the cognitive domain. Review of Educational Research, 48(2), 303-323.
Spencer, M., Gilmour, A. F., Miller, A. C., Emerson, A. M., Saha, N. M., & Cutting, L. E. (2019). Understanding the influence of text complexity and question type on reading outcomes. Reading and Writing, 32(3), 603-637.
Smith, R. L., & Smith, J. K. (1988). Differential use of item information by judges using Angoff and Nedelsky procedures. Journal of Educational Measurement, 25, 259-274.
Stenner, A. J. (1996). Measuring reading comprehension with the Lexile framework.
Tarrant, M., Ware, J., & Mohammed, A. M. (2009). An assessment of functioning and non-functioning distractors in multiple-choice questions: A descriptive analysis. BMC Medical Education, 9(40), 1-8.
Testa, S., Toscano, A., & Rosato, R. (2018). Distractor efficiency in an item pool for a statistics classroom exam: Assessing its relation with item cognitive level classified according to Bloom's taxonomy. Frontiers in Psychology, 9, 1585.
Towns, M. H. (2014). Guide to developing high-quality, reliable, and valid multiple-choice assessments. Journal of Chemical Education, 91, 1426–1431.
Urquhart, A. H., & Weir, C. J. (2014). Reading in a second language: Process, Product and Practice. Routledge.
Vacc, N. A., Loesch, L. C., & Lubik, R. E. (2001). Writing multiple-choice test items. In G. R. Walz & J. C. Bleuer (Eds.), Assessment: Issues and challenges for the millennium (pp. 215-222). Greensboro, NC: ERIC Clearinghouse on Counseling and Student Services.
Wangru, C. (2016). Vocabulary teaching based on semantic-field. Journal of Education and Learning, 5(3), 64-71.
Warrens, M. J. (2015). Five ways to look at Cohen's kappa. Journal of Psychology & Psychotherapy, 5(4), 1.
Weir, C., Hawkey, R., Green, A., & Devi, S. (2012). The cognitive processes underlying the academic reading construct as measured by IELTS. IELTS collected papers, 2, 212-269.
Weir, C.J., & Porter, D. (1996). The multi-divisible or unitary nature of reading: The language tester between Scylla and Charybdis. Reading in a Foreign Language, 10, 1-19.
Wolf, D. F. (1993). Issues in reading comprehension assessment: Implications for the development of research instruments and classroom tests. Foreign Language Annals, 26(3), 322-331.