
Author: 廖盈翔 (Liao, Yin-Hsiang)
Thesis title: Question Generation through Transfer Learning
Advisor: 柯佳伶 (Koh, Jia-Ling)
Degree: Master
Department: Department of Computer Science and Information Engineering (資訊工程學系)
Year of publication: 2020
Academic year of graduation: 108 (ROC calendar)
Language: English
Number of pages: 52
Keywords (English): sequence-to-sequence model
DOI URL: http://doi.org/10.6345/NTNU202000787
Thesis type: Academic thesis
Abstract:
An automatic question generation (QG) system aims to produce questions from a text, such as a sentence or a paragraph. Such a system can be useful on the front line of education, because writing questions is a time-consuming task that requires expert involvement. Traditional approaches rely mainly on heuristic, hand-crafted rules to transduce a declarative sentence into a related interrogative sentence. In this work, we propose a data-driven approach that leverages a neural sequence-to-sequence framework with various transfer-learning strategies to capture the underlying patterns of question formation on a target domain with scarce training pairs. Our experiments show that the modified model is able to generate reasonably satisfactory results.
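
The snippet below is a minimal sketch, in PyTorch, of the transfer-learning recipe the abstract describes: pretrain a sequence-to-sequence question-generation model on a plentiful source domain, then fine-tune the same weights on the scarce target-domain pairs. It is not the thesis implementation (the table of contents indicates the actual model is a pointer network with a reinforcement module); the `Seq2SeqQG` class, the dummy batches, and all hyperparameters here are illustrative assumptions.

```python
# Minimal sketch of "pretrain on source domain, fine-tune on target domain"
# for a seq2seq question-generation model. Illustrative only; not the thesis model.
import torch
import torch.nn as nn

PAD, VOCAB, HIDDEN = 0, 8000, 256

class Seq2SeqQG(nn.Module):
    """GRU encoder-decoder that maps an input sentence to a question."""
    def __init__(self, vocab=VOCAB, hidden=HIDDEN):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden, padding_idx=PAD)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, src, tgt_in):
        _, state = self.encoder(self.embed(src))               # encode the input sentence
        dec_out, _ = self.decoder(self.embed(tgt_in), state)   # decode with teacher forcing
        return self.out(dec_out)                               # logits over the vocabulary

def run_epochs(model, batches, lr, epochs):
    """One generic training loop, reused for both pretraining and fine-tuning."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)
    for _ in range(epochs):
        for src, tgt in batches:
            logits = model(src, tgt[:, :-1])                   # predict the next token
            loss = loss_fn(logits.reshape(-1, VOCAB), tgt[:, 1:].reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()

# Dummy token-id batches standing in for (sentence, question) pairs.
fake = lambda n: [(torch.randint(1, VOCAB, (4, 20)), torch.randint(1, VOCAB, (4, 12)))
                  for _ in range(n)]
source_batches, target_batches = fake(50), fake(5)             # large source vs. scarce target

model = Seq2SeqQG()
run_epochs(model, source_batches, lr=1e-3, epochs=1)           # 1) pretrain on the source domain
run_epochs(model, target_batches, lr=3e-4, epochs=3)           # 2) fine-tune on the target domain
```

In this sketch the two stages intentionally share one training loop; only the data and the learning rate change between pretraining and fine-tuning, which is the essence of the supervised transfer setting.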

    Table of Contents:
    Abstract
    Acknowledgement
    Chapter 1 Introduction
      1.1 Motivation
      1.2 Challenge
      1.3 Method
    Chapter 2 Related Works
      2.1 Question Generation
      2.2 Seq2seq Models
      2.3 Domain Adaptation
    Chapter 3 Methods
      3.1 Problem definition
      3.2 Data Preparation
        3.2.1 Source Domain Data
        3.2.2 Target Domain Data
      3.3 Baseline Model
        3.3.1 Pointer Network Model
        3.3.2 Pointer Network with Reinforcement Module
      3.4 Domain Adaptation
        3.4.1 Supervised Domain Adaptation
        3.4.2 Unsupervised Domain Adaptation
    Chapter 4 Performance Evaluation
      4.1 Experiments Setup
      4.2 Evaluation Measurements
      4.3 Supervised Transfer Learning
        Experiment 1
        Experiment 2
        Experiment 3
      4.4 Unsupervised Transfer Learning
        Experiment 4
    Chapter 5 Discussion
    Chapter 6 Conclusion
    Reference
    Appendix

