Graduate Student: 張虔榮 Qian-Rong Chang
Thesis Title: 潛在概念分析-利用中文網路資料在向量空間模型中呈現語意關係概念知識 (Latent Conceptual Analysis--Using Web data in Chinese to represent conceptual knowledge about word relations in a vector space model)
Advisors: 謝舒凱 Hsieh, Shu-Kai; 張妙霞 Chang, Miao-Hsia
Degree: Master
Department: Department of English (英語學系)
Year of Publication: 2012
Academic Year of Graduation: 100
Language: English
Number of Pages: 107
Keywords (Chinese): 關係相似度、詞彙關係、向量空間模型
Keywords (English): relation similarity, lexical relations, Vector Space Model
Document Type: Academic thesis
In the field of natural language processing, lexical patterns are frequently used in experiments that compute similarity between semantic relations. Despite the growing importance of these patterns, however, few researchers have examined which aspects of the semantic relations they are claimed to represent are actually reflected in them. This thesis argues that lexical patterns and the semantic relations they stand for share the same conceptual nature in language use.
The thesis also proposes a computational model, latent conceptual analysis (LCA), that captures this conceptual nature of lexical patterns and uses it to compute similarity. LCA is an automatic algorithm that relies on singular value decomposition (SVD) to handle the high dimensionality produced by a large-scale corpus. In this thesis, 35 lexical patterns are first generated semi-automatically and fed to LCA as input; for each pattern, LCA then produces a list that ranks the other 34 patterns from nearest to farthest in similarity distance. To examine how well LCA performs, the resulting rankings are compared with a manually annotated counterpart, a manual clustering built according to the classification criteria of the lexical resource FrameNet. The comparison shows that the similarity distances computed by LCA are close to the results of the manual clustering.
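The following is a minimal sketch, in Python, of the dimensionality-reduction step just described; it is only an illustration and not the thesis's actual implementation. The toy pattern-by-context count matrix, its size, and the choice of k latent dimensions are all assumptions made for the example.

```python
import numpy as np

# Toy pattern-by-context co-occurrence counts (invented numbers).
# Rows: lexical patterns (the thesis uses 35); columns: contextual features
# extracted from Web text.
pattern_context = np.array([
    [12.0,  0.0,  3.0,  7.0],
    [10.0,  1.0,  2.0,  9.0],
    [ 0.0, 15.0,  8.0,  1.0],
    [ 1.0, 13.0,  9.0,  0.0],
])

# Truncated SVD: keep only the k strongest latent dimensions, the usual way
# to tame the high dimensionality of a large co-occurrence matrix.
k = 2
U, s, Vt = np.linalg.svd(pattern_context, full_matrices=False)
reduced = U[:, :k] * s[:k]   # one k-dimensional vector per lexical pattern

print(reduced.shape)         # (4, 2): four patterns, two latent dimensions
```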
The approach taken in this thesis is close to the methods used by Turney (2006) and Bollegala et al. (2009), but it differs in that the proposed method does not rely on frequency distributions alone; language users' conceptual knowledge about lexical patterns is also taken into account in LCA's computation. Because LCA draws its corpus from Web content, the unstable and constantly changing nature of Web data can at times affect LCA's performance. Future studies may mitigate this problem by collecting data over a longer period.
In the field of Natural Language Processing, lexical patterns are often applied in experiments that involve measuring similarity among word relations. Despite their growing importance, however, these patterns are rarely examined in terms of what aspects they inherit from the word relations they are claimed to represent. In this thesis, it is proposed that lexical patterns exhibit the same conceptual nature as word relations: both display conceptual qualities when they are applied in language use.
It is also proposed in this thesis that the conceptual nature of lexical patterns can be captured and implemented in a computational model, latent conceptual analysis (LCA), to calculate similarity among the patterns. LCA is an automatic algorithm that relies on singular value decomposition (SVD) to reduce the high dimensionality resulting from a large-scale corpus. In this thesis, 35 lexical patterns are first generated semi-automatically and sent to LCA as input data; the distance between each pattern and the other 34 patterns is then determined. To validate the performance of LCA, the result is compared with that of a manual clustering whose criteria are based on the principles applied in FrameNet. The comparison reveals that LCA achieves a result similar to that of the manual clustering.
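As a rough illustration of the ranking step (again in Python, and not taken from the thesis), each pattern's reduced vector can be compared with the others by cosine similarity to produce the per-pattern list of neighbours; the pattern names and vector values below are hypothetical.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two pattern vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_neighbours(vectors: np.ndarray, names: list[str], target: int):
    """Sort every other pattern by similarity to the target pattern."""
    sims = [(names[j], cosine(vectors[target], vectors[j]))
            for j in range(len(names)) if j != target]
    return sorted(sims, key=lambda pair: pair[1], reverse=True)

# Hypothetical reduced vectors for three patterns (invented values).
names = ["X is a kind of Y", "X such as Y", "X is the opposite of Y"]
vectors = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]])

print(rank_neighbours(vectors, names, target=0))
# The most similar pattern is listed first, the least similar last.
```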
The approach adopted in this thesis is similar to that applied by Turney (2006) and Bollegala et al. (2009). However, instead of relying solely on frequency distributions, LCA also takes language users' conceptual knowledge about lexical patterns into consideration. Because LCA uses Web content as its corpus, the dynamic and constantly changing nature of data collected from the Web can sometimes affect its performance. It is therefore suggested that future studies applying LCA collect data over a longer period to alleviate this problem.
Agirre, E., and Edmonds, P. (2006). Word Sense Disambiguation: Algorithms and Applications. Springer.
Allan, K. (1986). Linguistic Meaning (2 vols.). London: Routledge.
Banko, M., Cafarella, M., Soderland, S., Broadhead, M. and Etzioni, O. (2007). Open information extraction from the Web. In Proceedings of International Joint Conference on Artificial Intelligence, IJCAI’07 (pp. 2670-2676).
Berland, M. and Charniak, E. (1999). Finding parts in very large corpora. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, ACL’99 (pp. 57-64).
Bollegala, D., Matsuo, Y., and Ishizuka, M. (2007). Measuring semantic similarity between words using Web search engines. In Proceedings of the 16th International Conference on World Wide Web, WWW’07 (pp. 757-766).
Bollegala, D., Matsuo, Y., and Ishizuka, M. (2009). Measuring the similarity between implicit semantic relations from the web. In Proceedings of the 18th International Conference on World Wide Web, WWW’09 (pp. 651-660).
Charles, W., Reed, M. and Derryberry, D. (1994). Conceptual and associative processing in antonymy and synonymy. Applied Psycholinguistics, 15, 329-354.
Church, K., and Hanks, P. (1989). Word association norms, mutual information, and lexicography. In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics (pp. 76-83). Vancouver, British Columbia.
Clarke, C. L. A., Cormack, G. V., and Palmer, C. R. (1998). An overview of MultiText. ACM SIGIR Forum, 32(2), 14-15.
Dang, H., Lin, J., and Kelly, D. (2006). Overview of the TREC 2006 question answering track. In Proceedings of the Fifteenth Text Retrieval Conference, TREC’06.
Davidov, D., and Rappoport, A. (2008). Classification of semantic relationships between nominals using pattern clusters. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, ACL'08 (pp. 227-235).
Deerwester, S., Dumais, S., Landauer, T., Furnas, G., and Harshman, R. (1990). Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6), 391-407.
Fellbaum, C. (1998). WordNet: An Electronic Lexical Database. Cambridge, MA, USA: MIT Press.
Fillmore, C.J. (1968). The Case for Case. Universals in Linguistic Theory. New York: Holt, Rinehart, and Winston. 1-88.
Fillmore, C.J. (1970). The Grammar of Hitting and Breaking. Readings in English Transformational Grammar. Ginn and Company. 120-133.
Fillmore, C.J. (1975). An Alternative to Checklist Theories of Meaning. Proceedings of the First Annual Meeting of the Berkeley Linguistics Society. Berkeley: Berkeley Linguistics Society. 123-131.
Fillmore, C.J. (1976). Frame Semantics and the Nature of Language. Origins and Evolution of Language and Speech. New York: New York Academy of Science. 20-32.
Fillmore, C.J. (1977). Scenes-and-Frames Semantics. Linguistics Structures Proceeding. Dordrecht: North Holland Publishing Company. 55-81.
Fillmore, C.J. (1982). Frame Semantics. Linguistics in the Morning Calm. Seoul: Hanshin. 111-38.
Fillmore, C.J., and B.T.S. Atkins. (1992). Toward a Frame-based Lexicon: The Semantics of RISK and its Neighbors. Frames, Fields and Contrasts: New Essays in Semantic and Lexical Organization. Hillsdale: Erlbaum. 75-102.
Fillmore, C.J., and B.T.S. Atkins. (1994). Starting where the Dictionaries stop: The Challenge for Computational Lexicography. Computational Approaches to the Lexicon. Oxford: Oxford University Press. 349-393.
Fillmore, C.J., and B.T.S. Atkins. (2000). Describing Polysemy: The Case of ‘Crawl’. Polysemy. Oxford: Oxford University Press. 91-110.
Foltz, P., Laham, D., and Landauer, T. (1999). The intelligent essay assessor: Applications to educational technology. Interactive Multimedia Electronic Journal of Computer-Enhanced Learning, 1(2).
Furnas, G., Landauer, T., Gomez, L., and Dumais, S. (1983). Statistical semantics: Analysis of the potential performance of keyword information systems. Bell System Technical Journal, 62(6), 1753-1806.
Gentner, D., Bowdle, B., Wolff, P. and Boronat, C. (2001). Metaphor is like analogy. The Analogical Mind: Perspectives from Cognitive Science (pp. 199-253). Cambridge, MA, USA: MIT Press.
Gross, D., Fischer, U., and Miller, G. (1989). The organization of adjectival meanings. Journal of Memory and Language, 28, 92-106.
Hearst, M. (1992). Automatic acquisition of hyponyms from large text corpora. In Proceedings of the Fourteenth International Conference on Computational Linguistics (pp. 539-545). Nantes, France.
Jones, S. (2002). Antonymy: A Corpus-based Perspective. New York: Routledge.
Jones, S. (2010). Using Web data to explore lexico-semantic relations. In P. Storjohann (Ed.), Lexical-Semantic Relations (pp. 49-67). Amsterdam: John Benjamins.
Landauer, T., and Dumais, S. (1997). A solution to Plato’s problem: The latent semantic analysis theory of the acquisition, induction, and representation of knowledge. Psychological Review, 104(2), 211-240.
Lauer, M. (1995). Designing Statistical Language Learners: Experiments on Compound Nouns. Ph.D. Thesis, Macquarie University, Sydney.
Lin, D. (1998). Automatic retrieval and clustering of similar words. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics, COLING-ACL’98 (pp.768-774), Montreal, Canada.
Luhn, H. (1958). The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2), 159-165.
Lyons, J. (1977). Semantics (2 vols.). Cambridge: Cambridge University Press.
Mahalanobis, P. (1936). On the generalized distance in statistics. Proceedings of the National Institute of Science of India, 12, 49-55.
Manning, C., and Schütze, H. (1999). Foundations of Statistical Natural Language Processing. MIT Press, Cambridge, MA.
Murphy, M.L. (2003). Semantic relations and the lexicon. Cambridge: Cambridge University Press.
Pantel, P., and Lin, D. (2002). Discovering word senses from text. In Proceedings of ACM SIGKDD Conference on Knowledge Discovery and Data Mining, (pp. 613-619), New York, NY.
Pantel, P., and Pennacchiotti, M. (2006). Espresso: Leveraging generic patterns for automatically harvesting semantic relations. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the ACL (pp. 113-120).
Pedersen, T. (2006). Unsupervised corpus-based methods for WSD. In Word Sense Disambiguation: Algorithms and Applications, pp. 133-166. Springer.
Petruck, M. (1996). Frame Semantics. Handbook of Pragmatics. Amsterdam: Benjamins. 1-13.
Raybeck, D. and Herrmann, D. (1990). A cross-cultural examination of semantic relations. Journal of Cross-Cultural Psychology, 21, 452-473.
Rosario, B. and Hearst, M. (2001). Classifying the semantic relations in noun-compounds via a domain-specific lexical hierarchy. In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing, EMNLP’01 (pp. 82-90). Pittsburgh, PA.
Sahlgren, M. (2006). The Word-Space Model. Ph.D. thesis, Department of Linguistics, Stockholm University.
Salton, G., Wong, A., and Yang, C. (1975). A vector space model for automatic indexing. Communications of the ACM, 18(11), 613-620.
Singhal, A., Salton, G., Mitra, M., and Buckley, C. (1996). Document length normalization. Information Processing and Management, 32(5), 619-633.
Spärck Jones, K. (1972). A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28(1), 11-21.
Spence, D. P. and Owens, K. (1990). Lexical co-occurrence and association strength. Journal of Psycholinguistic Research, 19, 317-330.
Turney, P. (2006). Similarity of semantic relations. Computational Linguistics, 32(3), 379-416.
Turney, P., and Littman, M. L. (2005). Corpus-based learning of analogies and semantic relations. Machine Learning, 60(1-3), 251-278.
Turney, P., Littman, M. L., Bigham, J., and Shnayder, V. (2003). Combining independent modules to solve multiple-choice synonym and analogy problems. In Proceedings of the International Conference on Recent Advances in Natural Language Processing, RANLP'03 (pp. 482-489). Borovets, Bulgaria.
Turney, P., and Pantel, P. (2010). From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37, 141-188.
Veale, T. (2004). WordNet sits the SAT: A knowledge-based approach to lexical analogy. In Proceedings of the 16th European Conference on Artificial Intelligence, ECAI’04 (pp. 606-612). Valencia, Spain.
Wandmacher, T., Ovchinnikova, E., and Alexandrov, T. (2008). Does latent semantic analysis reflect human associations? In Proceedings of the ESSLLI Workshop on Distributional Lexical Semantics (pp. 63-70).
Weaver, W. (1955). Translation. In Locke, W., and Booth, D. (Eds.), Machine Translation of Languages: Fourteen Essays. MIT Press, Cambridge, MA.
Zelenko, D., Aone, C., and Richardella, A. (2003). Kernel methods for relation extraction. Journal of Machine Learning Research, 3, 1083-1106.