| Graduate student: | 林克謙 Lin, Ke-Chian |
|---|---|
| Thesis title: | 混合時間模型─考慮輕率受訪者回答含有虛題的問卷行為─之應用 (The Use of Mixture Response Time Model to Account for Careless Respondents in Survey Questionnaires with a Bogus Item) |
| Advisors: | 蔡蓉青 Tsai, Rung-Ching; 呂翠珊 Lu, Tsui-Shan |
| Degree: | 碩士 Master |
| Department: | 數學系 Department of Mathematics |
| Year of publication: | 2015 |
| Academic year of graduation: | 103 |
| Language: | English |
| Pages: | 71 |
| Chinese keywords: | 虛題 (bogus item), 反應時間 (response time), 潛在類別 (latent class), Rasch混合模型結合反應時間 (mixture Rasch model with response time) |
| English keywords: | bogus item, response time, latent class, MRM-RT |
| DOI URL: | https://doi.org/10.6345/NTNU202205379 |
| Document type: | Academic thesis |
| Usage counts: | Views: 261; Downloads: 19 |
Sound inferences from a survey questionnaire rest on efficient estimation, but efficient estimation is often undermined by careless respondents. Questionnaires widely use a bogus item to identify careless respondents, that is, to uncover those who answer without reading the items. Another approach uses response time; its use has been established by the mixture Rasch model with a response time component and has been shown to improve estimation efficiency. In this study, we propose a model that combines a bogus item with mixture response times. The model uses the bogus item, while taking response time into account, to distinguish two different types of careless responding from attentive responding. The analysis retains the advantages of using the bogus item as a covariate and response time as an indicator of the latent classes. In the simulation studies, with sample sizes of 500 or more, response time tended to reduce the standard errors of the estimates. Furthermore, we applied the model to a real data set in which up to 22% of respondents were careless. The results show that our unrestricted model performed better than the restricted model.
A valid inference from a survey rests on efficient estimation, and efficient estimation is threatened and distorted by careless responses. A bogus item is commonly used in survey questionnaires to detect careless respondents, powerfully unveiling those who respond without reading the items. Response time is another way to identify careless responses, and its use in enhancing item parameter estimation was verified by the mixture Rasch model with a response time component (MRM-RT). We therefore propose a mixture response time model to analyze survey questionnaires with a bogus item. The goal of the proposed model is to classify two behaviors of careless responding, in addition to attentive responding, by using the bogus item and taking response time into account. The analysis preserves the benefits of using the bogus item as a covariate and response times as indicators of the latent classes. In the simulation studies, we find that the standard errors of the model with response time decrease substantially when the sample size is 500 or larger. We further apply the proposed model to a real data set with a large proportion (22%) of careless responding; the results show that the unrestricted models perform better than the restricted models.
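The core idea in the abstract — that careless respondents can be separated from attentive ones because they answer noticeably faster, with the bogus item serving as a check — can be illustrated with a deliberately simplified sketch. The snippet below is not the thesis's MRM-RT model: it simulates data and fits a plain two-component Gaussian mixture on log response times by EM, and every parameter value (class proportions, response-time distributions, bogus-item error rates) is invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 respondents, ~20% careless.  Careless respondents answer
# fast and fail a bogus item (e.g. "I have never used a computer") at a
# much higher rate.  All values here are illustrative assumptions.
n, p_careless = 500, 0.2
careless = rng.random(n) < p_careless
log_rt = np.where(careless,
                  rng.normal(0.5, 0.3, n),   # fast, careless responders
                  rng.normal(1.8, 0.4, n))   # slower, attentive responders
bogus_fail = np.where(careless,
                      rng.random(n) < 0.5,   # careless: fail at chance
                      rng.random(n) < 0.02)  # attentive: rarely fail

# Fit a two-component Gaussian mixture on log response time by EM.
mu = np.array([0.0, 2.0])   # initial component means
sd = np.array([1.0, 1.0])   # initial component standard deviations
w = np.array([0.5, 0.5])    # initial mixing proportions
for _ in range(200):
    # E-step: posterior probability of each component per respondent
    dens = np.exp(-0.5 * ((log_rt[:, None] - mu) / sd) ** 2) / sd
    post = w * dens
    post /= post.sum(axis=1, keepdims=True)
    # M-step: update proportions, means, and standard deviations
    w = post.mean(axis=0)
    mu = (post * log_rt[:, None]).sum(axis=0) / post.sum(axis=0)
    sd = np.sqrt((post * (log_rt[:, None] - mu) ** 2).sum(axis=0)
                 / post.sum(axis=0))

fast = int(np.argmin(mu))            # fast component = suspected careless
pred_careless = post[:, fast] > 0.5
accuracy = float((pred_careless == careless).mean())
overlap = int((pred_careless & bogus_fail).sum())
print(f"estimated careless share: {w[fast]:.2f}, "
      f"accuracy vs. truth: {accuracy:.2f}, "
      f"flagged by both time and bogus item: {overlap}")
```

In the thesis the bogus item enters the model as a covariate rather than as a post-hoc check, and the latent classes are defined jointly with the Rasch measurement part; this sketch only shows why response time alone already carries strong classification information when the two groups' time distributions are well separated.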
[1] Beach, D. A. (1989). Identifying the random responder. Journal of Psychology: Interdisciplinary and Applied, 123, 101-103.
[2] Chan, S. C., Lu, T. S., & Tsai, R. C. (2014). Incorporating response time to analyze test data with mixture structural equation modeling. Psychological Testing, 61(4), 463-488.
[3] Cheng, J. C., Hung, J. F., & Huang, T. C. (2013). Promoting junior high students' situational interests with multiple teaching strategies in informal nanometer-related curricula. Journal of Educational Practice and Research, 26(2), 1-28.
[4] Credé, M. (2010). Random responding as a threat to the validity of effect size estimates in correlational research. Educational and Psychological Measurement, 70(4), 596-612.
[5] Ferrando, P. J., & Lorenzo-Seva, U. (2007). An item response theory model for incorporating response time data in binary personality items. Applied Psychological Measurement, 31(6), 525-543.
[6] Forero, C. G., & Maydeu-Olivares, A. (2009). Estimation of IRT graded response models: Limited versus full information methods. Psychological Methods, 14(3), 275-299.
[7] Fraley, R. C., Waller, N. G., & Brennan, K. A. (2000). An item response theory analysis of self-report measures of adult attachment. Journal of Personality and Social Psychology, 78(2), 350-365.
[8] George, D., & Mallery, P. (2003). SPSS for Windows step by step: A simple guide and reference. 11.0 update (4th ed.). Boston, MA: Allyn & Bacon.
[9] Hargittai, E. (2008). An update on survey measures of web-oriented digital literacy. Social Science Computer Review, 27(1), 130-137.
[10] Holden, R. R., Kroner, D. G., Fekken, G. C., & Popham, S. M. (1992). A model of personality test item response dissimulation. Journal of Personality and Social Psychology, 63(2), 272-279.
[11] Johnson, J. A. (2005). Ascertaining the validity of individual protocols from Web-based personality inventories. Journal of Research in Personality, 39, 103-129.
[12] Koch, W. R. (1983). Likert scaling using the graded response latent trait model. Applied Psychological Measurement, 7(1), 15-32.
[13] Linnenbrink-Garcia, L., Durik, A. M., Conley, A. M., Barron, K. E., Tauer, J. M., Karabenick, S. A., & Harackiewicz, J. M. (2010). Measuring situational interest in academic domains. Educational and Psychological Measurement, 70(4), 647-671.
[14] Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17(3), 437-455.
[15] Meyer, J. P. (2010). A mixture Rasch model with item response time components. Applied Psychological Measurement, 34(7), 521-538.
[16] Mitchell, M. (1993). Situational interest: Its multifaceted structure in the secondary school mathematics classroom. Journal of Educational Psychology, 85, 424-436.
[17] Muthén, L. K., & Muthén, B. O. (1998-2010). Mplus user's guide. Los Angeles, CA: Muthén & Muthén.
[18] Samejima, F. (1973). Homogeneous case of the continuous response model. Psychometrika, 38, 203-219.
[19] Schmitt, N., & Stults, D. M. (1985). Factors defined by negatively keyed items: The result of careless respondents. Applied Psychological Measurement, 9(4), 367-373.
[20] Wise, S. L., & DeMars, C. E. (2006). An application of item response time: The effort-moderated IRT model. Journal of Educational Measurement, 43(1), 19-38.
[21] Woods, C. M. (2006). Careless responding to reverse-worded items: Implications for confirmatory factor analysis. Journal of Psychopathology and Behavioral Assessment, 28(3), 189-194.