
Graduate Student: 藍偉華 (Wei-Hua Lan)
Thesis Title: 大學學科能力測驗及指定科目考試英文閱讀測驗之評析:以布魯姆認知分類(修訂版)析之
An Analysis of Reading Comprehension Questions on the SAET and the DRET Using Revised Bloom's Taxonomy
Advisor: 陳秋蘭 (Chen, Chiou-Lan)
Degree: Master
Department: Department of English
Year of Publication: 2007
Academic Year of Graduation: 95
Language: English
Pages: 144
Chinese Keywords: 試題分析、閱讀測驗、布魯姆認知分類(修訂版)、大學學科能力英語測驗、大學指定科目英語科考試
English Keywords: item analysis, reading comprehension tests, Revised Bloom's Taxonomy, Scholastic Ability English Test (SAET), Department Required English Test (DRET)
Document Type: Academic thesis
Access Counts: Views: 210 / Downloads: 81
  • Abstract
    Using the Revised Bloom's Taxonomy, this study examined the cognitive process levels and knowledge types targeted by the reading comprehension items on the Scholastic Ability English Test (SAET) and the Department Required English Test (DRET) administered over the past five years (2002-2006), as well as how examinees performed on each question type over those years.
    Both qualitative and quantitative analyses were adopted. For the qualitative analysis, content analysis was used to classify each of the 140 test items into one major and one sub cognitive process level and one major and one sub knowledge type. The quantitative analysis was performed with the statistical package SPSS 14.0: crosstabulation examined the frequency and distribution of the question types formed by combining cognitive levels with knowledge types; two-way ANOVA tested, for each examination, whether the passing rates of the question types differed significantly and whether those differences were consistent from year to year; and one-way ANOVA tested the discrimination indices of the question types to see how high and low scorers differed on each.
    The findings of this study are summarized as follows:
    First, in both examinations, the items involved four cognitive levels (Remember, Understand, Apply, Analyze) with eight sub-levels, and three knowledge types (Factual, Conceptual, Procedural) with three subtypes, together yielding five major question types and nine question subtypes.
    Second, Remember Factual Knowledge and Understand Factual Knowledge were the most frequently tested question types in both examinations. Only a few items reached the two higher cognitive levels of Apply and Analyze. The greatest difference between the two examinations lay in the frequency, occurrence, and distribution of the question subtypes: the SAET contained more Executing (Apply) items, whereas the DRET contained more Inferring items.
    Third, in the SAET, no consistent pattern of significant differences among the passing rates of the question types emerged across the five years; in the DRET, however, such a pattern did emerge: examinees performed significantly better on Understand Conceptual Knowledge items and markedly worse on reading items requiring the inference of details.
    Finally, regarding the performance of high and low scorers, the gap between their passing rates was around 50 percentage points regardless of question type. In the DRET, however, the high scorers performed poorly on items involving inferring details, so those items discriminated poorly.
    The findings suggest that English teachers should help students develop the four basic cognitive skills needed for reading and for preparing for reading tests, especially the inferring sub-skill of understanding.

    ABSTRACT
    This study aimed to investigate the cognitive process levels and knowledge types in the Revised Bloom’s Taxonomy tested by the reading comprehension items on the SAET (Scholastic Ability English Test) and the DRET (Department Required English Test) administered from 2002 to 2006, and to explore how test takers (all examinees, high scorers, and low scorers) performed on different types of items.
    Both qualitative and quantitative analyses were adopted. The qualitative analysis was conducted by categorizing each of the 140 comprehension items into a major and a sub cognitive process and a major and a sub knowledge type in the Revised Bloom’s Taxonomy. The quantitative analysis was performed with the SPSS 14.0 statistical package. The frequency distribution of the question types (i.e., combinations of the identified cognitive levels and knowledge types) was obtained through crosstabulation. Two-way ANOVA was applied to the SAET and the DRET to investigate whether there were significant differences among the passing rates of the various question types and whether these differences were consistent across years. Moreover, to see how the high and low scorers differed in answering different types of questions each year, the discrimination indices were analyzed with one-way ANOVA.
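    The coding-then-crosstabulation step can be sketched in Python as a stand-in for the SPSS 14.0 crosstabulation the study used; the item codings below are invented purely for illustration:

    ```python
    from collections import Counter

    # Hypothetical codings: each reading item is tagged with its major
    # (cognitive process, knowledge type) pair, as in the study's
    # qualitative content analysis. These five items are invented examples.
    coded_items = [
        ("Remember", "Factual"),
        ("Understand", "Factual"),
        ("Understand", "Factual"),
        ("Understand", "Conceptual"),
        ("Apply", "Procedural"),
    ]

    # Crosstabulation: frequency of each question type, i.e. each
    # cognitive-level x knowledge-type combination.
    crosstab = Counter(coded_items)

    for (process, knowledge), n in sorted(crosstab.items()):
        print(f"{process} {knowledge} Knowledge: {n} item(s)")
    ```

    On real data, each cell of this frequency table corresponds to one question type whose passing rates can then be compared across years.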
    The results of this study are summarized as follows:
    First, in both tests, the four lowest levels in the Revised Bloom’s Taxonomy (Remember, Understand, Apply, and Analyze), along with eight sub-levels, and three types of knowledge (Factual, Conceptual, and Procedural), along with three subtypes, were identified, together comprising five major question types and nine question subtypes.
    Second, items on Remember Factual Knowledge and Understand Factual Knowledge constituted the majority in both tests. Few items were found at the higher levels of Apply and Analyze. The major differences between the SAET and the DRET lay in the frequency, occurrence, and distribution of items testing different cognitive sub-skills and knowledge subtypes: Executing items (a subtype under the Apply category) were more favored in the SAET, whereas the DRET had more items on Inferring (a subtype under the Understand category).
    Third, in the SAET, no general pattern was found in the significant differences among the passing rates of the various question types across the years, whereas a general pattern did emerge in the DRET, with Understand Conceptual Knowledge items performed significantly best. However, examinees performed extremely poorly on the question subtype of inferring unstated details.
    Finally, a gap of around 50 percentage points was found between the passing rates of the high and low scorers regardless of question type in both the SAET and the DRET. Yet it was found that, in the DRET, the high scorers performed worst in answering questions on inferring specific details, giving this question type unsatisfactory discriminatory power.
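    The discrimination index referred to here is the classical item-analysis statistic: the passing rate of the high-scoring group minus that of the low-scoring group. A minimal sketch, with invented response data:

    ```python
    def passing_rate(responses):
        """Proportion of examinees answering the item correctly (1 = correct, 0 = wrong)."""
        return sum(responses) / len(responses)

    def discrimination_index(high_responses, low_responses):
        """Classical discrimination index D = P(high) - P(low); values near
        zero or negative mean the item fails to separate the two groups."""
        return passing_rate(high_responses) - passing_rate(low_responses)

    # Invented illustration: 10 high scorers and 10 low scorers on one item.
    high = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]  # 80% correct
    low = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]   # 30% correct
    d = discrimination_index(high, low)
    print(f"D = {d:.2f}")  # 0.50, i.e. a gap of 50 percentage points
    ```

    An item on which even high scorers fail (as with the DRET inference items) yields a small D, which is what "unsatisfactory discriminatory power" describes.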
    It is suggested that English teachers should help learners develop the four needed cognitive skills, especially the inferring sub-skill of understanding, in reading and in test preparation.

    TABLE OF CONTENTS
    CHINESE ABSTRACT
    ENGLISH ABSTRACT
    ACKNOWLEDGEMENTS
    TABLE OF CONTENTS
    LIST OF TABLES
    CHAPTER ONE  INTRODUCTION
      Background and Motivation
      Research Questions
      Significance of the Study
      Organization of the Thesis
    CHAPTER TWO  LITERATURE REVIEW
      The Rationale of Reading
        Models of Reading
        Reading Skills
        Reading Skills and Bloom’s Taxonomy
      Testing Reading in the EFL/ESL Context
        Reading Comprehension Test Item Analysis
      Revision of Bloom’s Taxonomy
        The Original Bloom’s Taxonomy
        The Revised Bloom’s Taxonomy
          The Knowledge Dimension
          The Cognitive Process Dimension
            Remember
            Understand
            Apply
            Analyze
            Evaluate
            Create
          The Taxonomy Table
        Differences Between the Original and the Revised Taxonomy
      Application of Bloom’s Taxonomy
      Summary
    CHAPTER THREE  METHODOLOGY
      Materials
      Instrument
      Data Analysis
        Trial Item Analysis
        Formal Item Analysis
          Remember Factual Knowledge
            Recognizing specific details and elements
          Understand Factual Knowledge
            Interpreting specific details and elements
            Inferring specific details and elements
          Understand Conceptual Knowledge
            Classifying into classifications and categories
            Summarizing principles and generalizations
            Inferring classifications and categories
            Explaining principles and generalizations
          Apply Procedural Knowledge
            Executing knowledge of subject specific skills and algorithms
          Analyze Conceptual Knowledge
            Attributing principles and generalizations
        Analysis of the Data Coded and the Passing Rates
      Summary
    CHAPTER FOUR  RESULTS
      Cognitive Skills and Knowledge Types Measured
      Similarities and Differences Between the SAET and the DRET
        SAET
        DRET
        Similarities
        Differences
      Examinees’ Performances on Each Question Type
        SAET
        DRET
      Comparisons of High and Low Scorers’ Performances on Each Question Type
        SAET
        DRET
      Summary
    CHAPTER FIVE  DISCUSSION AND CONCLUSIONS
      Discussion
        Cognitive Skills and Knowledge Types Measured
        Similarities and Differences Between the SAET and the DRET
        Examinees’ Performances on Each Question Type
        Comparisons of High and Low Scorers’ Performances on Each Question Type
      Conclusions
        Summary of the Major Findings
        Pedagogical Implications
          Teaching
          Testing
      Limitations of the Present Study and Suggestions for Future Research
    REFERENCES
    APPENDIX A
    APPENDIX B
    APPENDIX C
    APPENDIX D
    APPENDIX E
    APPENDIX F

    LIST OF TABLES
    Table 1  Structure of the Original Taxonomy
    Table 2  Structure of the Knowledge Dimension of the Revised Taxonomy
    Table 3  Structure of the Cognitive Process Dimension of the Revised Taxonomy
    Table 4  An Example of Objectives Classification into the Taxonomy Table
    Table 5  The Number of Reading Passages and Comprehension Test Items
    Table 6  Major Types and Subtypes of the Cognitive Skills and Knowledge Tested
    Table 7  The Crosstabulation of the Cognitive Skills and Knowledge Types Identified on the SAET and the DRET Test Items
    Table 8  Cognitive (Sub)skills and Knowledge (Sub)types Measured on the SAET and the DRET Test Items
    Table 9  Question Subtypes in the SAET and the DRET by Year
    Table 10  Average Passing Rates of Items Measuring Different Cognitive Skills and Knowledge Types in 2002 SAET
    Table 11  Average Passing Rates of Items Measuring Different Cognitive Skills and Knowledge Types in 2003 SAET
    Table 12  Average Passing Rates of Items Measuring Different Cognitive Skills and Knowledge Types in 2004 SAET
    Table 13  Average Passing Rates of Items Measuring Different Cognitive Skills and Knowledge Types in 2005 SAET
    Table 14  Average Passing Rates of Items Measuring Different Cognitive Skills and Knowledge Types in 2006 SAET
    Table 15  Average Passing Rates of Items Measuring Different Cognitive Skills and Knowledge Types in 2002 DRET
    Table 16  Average Passing Rates of Items Measuring Different Cognitive Skills and Knowledge Types in 2003 DRET
    Table 17  Average Passing Rates of Items Measuring Different Cognitive Skills and Knowledge Types in 2004 DRET
    Table 18  Average Passing Rates of Items Measuring Different Cognitive Skills and Knowledge Types in 2005 DRET
    Table 19  Average Passing Rates of Items Measuring Different Cognitive Skills and Knowledge Types in 2006 DRET
    Table 20  Average Passing Rates of Items Measuring Different Cognitive Skills and Knowledge Types from 2002 to 2006 DRET
    Table 21  Tukey’s HSD Test for Passing Rates on Different Question Types in 2002 to 2006 DRET
    Table 22  Passing Rates of the High & Low Scorers and the Discrimination Indexes on Different Question Types in 2002 to 2006 SAET
    Table 23  Passing Rates of the High & Low Scorers and the Discrimination Indexes on Different Question Types in 2002 to 2006 DRET

