Author: 林長毅 (Lin, Chang-Yi)
Title: 探討AI生成與人工事實查核報告之語言風格差異——以新冠肺炎假新聞為例
Exploring the Linguistic Style Disparities between AI-generated and Manually Written Fact-check Reports: A Case Study on COVID-19 Fake News
Advisor: 邱銘心 (Chiu, Ming-Hsin)
Committee members: 邱銘心 (Chiu, Ming-Hsin), 張瑜芸 (Chang, Yu-Yun), 謝吉隆 (Hsieh, Ji-Lung)
Oral defense date: 2024/07/30
Degree: Master
Department: Graduate Institute of Library and Information Studies (圖書資訊學研究所)
Year of publication: 2024
Academic year of graduation: 112 (ROC calendar)
Language: Chinese
Pages: 76
Chinese keywords: 生成式AI、事實查核、語言風格分析、新冠肺炎假新聞、提示工程
English keywords: Generative AI, fact-checking, linguistic style analysis, COVID-19 fake news, prompt engineering
DOI URL: http://doi.org/10.6345/NTNU202401665
Document type: Academic thesis
Access counts: Views: 105; Downloads: 21
  • This study examines the differences in linguistic style between fact-check reports generated by generative AI and those written manually, using COVID-19 fake news as the case for comparison. Drawing on prompt engineering techniques, three prompt templates were designed (vanilla instructions, Chain of Thought, and Clue and Reasoning Prompting), and ChatGPT-4o was used to generate fact-check reports, which were then compared against manually written reports from the Taiwan FactCheck Center. Natural language processing tools were applied to five dimensions of analysis: lexical richness, syntactic complexity, logical coherence, keyword distribution, and sentiment polarity. The results show that the AI-generated reports perform better in fluency and consistency but still need improvement in factual accuracy and depth of analysis, while the manually written reports hold the advantage in professionalism and linguistic flexibility. Through these comparisons, the study aims to provide an empirical basis for applying generative AI to fact-checking and to suggest ways of improving the linguistic style of generative models, so as to raise their accuracy and reliability in practice.
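The five analysis dimensions listed in the abstract above can be illustrated with a minimal sketch. The metric definitions here (type-token ratio for lexical richness, mean sentence length as a proxy for syntactic complexity) are common illustrative choices, not necessarily the exact measures used in the thesis.

```python
import re

def type_token_ratio(text: str) -> float:
    # Lexical richness: unique tokens divided by total tokens
    # (a standard, if simple, proxy for vocabulary diversity).
    tokens = re.findall(r"\w+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def mean_sentence_length(text: str) -> float:
    # Syntactic complexity proxy: average number of tokens per sentence.
    # Splits on both Western and CJK sentence-ending punctuation.
    sentences = [s for s in re.split(r"[.!?。!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    return sum(len(re.findall(r"\w+", s)) for s in sentences) / len(sentences)
```

In a comparison like the one the thesis describes, these scores would be computed for each AI-generated and each manually written report and then contrasted across the two groups.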

    This study investigates the linguistic style differences between AI-generated and manually written fact-checking reports, using COVID-19 misinformation as a case study. Prompt engineering techniques were used to design three distinct prompt templates, Vanilla, Chain of Thought (CoT), and Clue and Reasoning Prompting (CARP), and ChatGPT-4o was then employed to generate fact-checking reports. These AI-generated reports were compared with manually written reports from the Taiwan FactCheck Center using natural language processing tools. The comparative analysis covered five linguistic dimensions: lexical richness, syntactic complexity, logical coherence, keyword frequency distribution, and sentiment polarity. The results indicate that AI-generated reports perform better in linguistic fluency and consistency, but still require improvement in factual accuracy and depth of analysis; manually written reports, conversely, show greater strengths in professionalism and linguistic flexibility. Through this comparative analysis, the study aims to provide empirical evidence supporting the application of generative AI in fact-checking and to point toward ways of enhancing the linguistic style of generative models, ultimately improving their accuracy and reliability in practical applications.
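The three prompting strategies named in the abstract (Vanilla, Chain of Thought, and Clue and Reasoning Prompting) can be sketched as template strings. The wording below is purely illustrative and is not the thesis's actual templates.

```python
# Hypothetical prompt templates for generating fact-check reports;
# the exact phrasing is an illustration, not the templates used in the thesis.
VANILLA = "Fact-check the following claim and write a report: {claim}"

CHAIN_OF_THOUGHT = (
    "Fact-check the following claim: {claim}\n"
    "Think step by step: identify the key assertion, recall relevant "
    "evidence, weigh it, then state a verdict with justification."
)

CARP = (
    "Fact-check the following claim: {claim}\n"
    "First list clues in the text (sources cited, emotional wording, "
    "checkable facts), then reason from those clues to a verdict."
)

def build_prompt(template: str, claim: str) -> str:
    # Fill the shared {claim} slot of any of the three templates.
    return template.format(claim=claim)
```

Each template would be filled with the same claim text and sent to the model, so that only the prompting strategy varies between the three generated report sets.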

    1 Introduction 1
      1.1 Research Background 1
      1.2 Research Purpose 6
      1.3 Research Questions 7
      1.4 Research Scope 7
      1.5 Definition of Terms 7
    2 Literature Review 9
      2.1 Fake News and Its Impact 9
      2.2 Fact-checking and Linguistic Features of Fake News 18
      2.3 Applications of Generative AI and Linguistic Style Research 25
      2.4 Comparative Studies of Human-written and AI-generated Text 31
      2.5 Summary 34
    3 Research Method 36
      3.1 Overview of the Research Method 36
      3.2 Data Sources 36
      3.3 Prompt Template Design 37
      3.4 Corpus Collection and Processing 42
      3.5 Linguistic Style Analysis 43
    4 Results 54
      4.1 Lexical Richness 54
      4.2 Syntactic Complexity 56
      4.3 Logical Coherence 57
      4.4 Keyword Distribution 59
      4.5 Sentiment Polarity Analysis 62
      4.6 General Discussion 63
    5 Conclusion 65
      5.1 Conclusions 65
      5.2 Research Contributions and Suggestions for Future Research 67
    References 69
