| Field | Value |
|---|---|
| Graduate student | 黃怡萍 Huang, Yi-Ping |
| Thesis title | 應用兩階段生成模型於會議摘要之研究 (A Study of Extract-then-Generate Model for Meeting Summarization) |
| Advisor | 陳柏琳 Chen, Berlin |
| Committee members | 陳柏琳 Chen, Berlin; 陳冠宇 Chen, Kuan-Yu; 洪志偉 Hung, Jeih-Weih; 曾厚強 Tseng, Hou-Chiang |
| Oral defense date | 2023/07/21 |
| Degree | Master (碩士) |
| Department | 資訊工程學系 Department of Computer Science and Information Engineering |
| Publication year | 2023 |
| Graduation academic year | 111 |
| Language | Chinese |
| Pages | 47 |
| Chinese keywords | 會議摘要、自動文件摘要、自然語言處理、異質圖神經網路、對話語篇剖析、生成式模型 |
| English keywords | Meeting Summarization, Automatic Document Summarization, Natural Language Processing, Heterogeneous Graph Neural Network, Dialogue Discourse Parsing, Generative Model |
| Research method | Experimental design |
| DOI URL | http://doi.org/10.6345/NTNU202301628 |
| Thesis type | Academic thesis |
In recent years, the use of online meetings and video communication platforms has become more widespread due to the impact of the pandemic and the popularity of remote work. However, this trend brings along certain challenges. Meeting transcripts often contain scattered information, making it difficult to extract and understand key details from a large volume of conversations. Additionally, as meetings become increasingly frequent, participants need to grasp the main points of the discussions within limited time to make informed decisions amidst their busy schedules. In such a context, the ability to automatically identify and summarize crucial information from meeting transcripts becomes even more important.
Automatic document summarization can be categorized into two main approaches: extractive and abstractive. Extractive summarization computes an importance score for each sentence in the original document and selects the high-scoring sentences to form the summary. Abstractive summarization, on the other hand, involves understanding the original document and rewriting its sentences to generate a concise summary that captures the core content. Because dialogue utterances are often disfluent and their information is scattered, extractive summarization is prone to extracting incomplete sentences, which reduces readability. Consequently, abstractive summarization, which rewrites the original utterances, is the primary approach in meeting summarization tasks. Despite numerous related studies, abstractive methods applied to meeting summarization still face several common limitations, including input length constraints, complex dialogue structures, the scarcity of training data, and factual inconsistency. Addressing these issues is key to improving the performance of meeting summarization models.
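To make the score-and-select recipe above concrete, here is a minimal, hypothetical sketch of extractive summarization: each sentence is scored by its mean TF-IDF cosine similarity to the other sentences (a simplified centrality criterion in the spirit of LexRank), and the top-k sentences are returned in document order. The helper name and the scoring rule are illustrative assumptions, not the method proposed in this thesis.

```python
# A minimal sketch of extractive summarization: score each sentence by its
# mean cosine similarity to every other sentence (a simplified, degree-based
# centrality criterion), then keep the top-k sentences in document order.
# Hypothetical helper for illustration only, not the thesis's exact method.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extractive_summary(sentences: list[str], k: int = 3) -> list[str]:
    # Represent each sentence as a TF-IDF vector.
    tfidf = TfidfVectorizer().fit_transform(sentences)
    # Pairwise cosine similarities between all sentences.
    sim = cosine_similarity(tfidf)
    np.fill_diagonal(sim, 0.0)           # ignore self-similarity
    scores = sim.mean(axis=1)            # centrality: mean similarity to the rest
    top = np.argsort(scores)[::-1][:k]   # indices of the k highest-scoring sentences
    return [sentences[i] for i in sorted(top)]  # restore original document order
```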
This thesis focuses on the problems of input length constraints and dialogue structure, and proposes a meeting summarization architecture that follows an extract-then-generate approach. In the extraction stage, three methods are designed to select important text segments: a heterogeneous graph neural network, dialogue discourse parsing, and cosine similarity. In the generation stage, an advanced generative pre-trained model is employed. Experimental results demonstrate that the proposed approach, by fine-tuning the baseline model, achieves performance improvements.
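As a loose illustration of this two-stage design, the sketch below chains a simple extractor to an off-the-shelf abstractive generator. It is a minimal sketch under stated assumptions, not the thesis's implementation: the centrality scorer from the previous sketch stands in for the heterogeneous graph neural network, discourse parsing, and cosine-similarity extractors, and facebook/bart-large-cnn is used only as an example Hugging Face checkpoint.

```python
# Extract-then-generate, sketched: select salient utterances so a long
# transcript fits the generator's input window (stage 1), then rewrite the
# selection abstractively with a pretrained seq2seq model (stage 2).
from transformers import pipeline

def extract_then_generate(utterances: list[str], budget: int = 20) -> str:
    # Stage 1 (extract): keep the `budget` most central utterances, standing
    # in for the thesis's three extractors (HGNN, discourse parsing, cosine
    # similarity). `extractive_summary` is the hypothetical helper above.
    selected = extractive_summary(utterances, k=budget)
    # Stage 2 (generate): abstractive rewriting of the extracted segments
    # with an example checkpoint, not the model used in the thesis.
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    out = summarizer(" ".join(selected), max_length=128, min_length=32,
                     do_sample=False)
    return out[0]["summary_text"]
```

In practice the extraction budget would be tuned to the generator's maximum input length, which is the thesis's motivation for extracting before generating.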