
Graduate Student: Lo, Shih-Han (羅仕翰)
Thesis Title: Representation Learning for Diagnosis Codes by Heterogeneous Relationships (以異質關係進行疾病診斷碼表示法學習之研究)
Advisor: Koh, Jia-Ling (柯佳伶)
Degree: Master
Department: Department of Computer Science and Information Engineering
Year of Publication: 2020
Graduation Academic Year: 108
Language: Chinese
Number of Pages: 67
Chinese Keywords: 疾病診斷碼預測, 獨立訓練, 聯合訓練, 整合模型
English Keywords: Diagnosis Codes Prediction, Independent Training Model, Joint Learning, Integration Model
DOI URL: http://doi.org/10.6345/NTNU202000433
Thesis Type: Academic thesis
    Representation learning for diagnosis codes has been widely studied in recent years, yet many studies consider only the co-occurrence data among the diagnosis codes themselves. This thesis takes diagnosis codes as the primary objects and learns their representations from their co-occurrence with other diagnosis codes, personal attribute features, medications, and medical procedures; the learned representations are then used to predict the diagnosis codes of the next visit. Two methods for training the representations are proposed: the first trains a representation independently for each feature type, and the second trains all features jointly in a single model. After training, a corresponding prediction model is provided for the diagnosis code representations obtained by each of the two training methods. For the independently trained representations, three integration strategies are proposed: direct concatenation, weighted combination, and an attention mechanism. Experimental results show that the independent training model with direct concatenation or attention-based integration, as well as the joint training model, significantly outperforms Med2Vec in prediction performance. Regarding feature combinations, the joint training model achieves the best prediction results when the features combine diagnosis codes with visit time and medical procedures.
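The two training schemes outlined above can be pictured with a short sketch. The code below is only a minimal illustration, assuming PyTorch, a skip-gram-style co-occurrence objective, and made-up vocabulary sizes; `CodeEmbedding`, `JointModel`, and the feature names are hypothetical and not taken from the thesis.

```python
# Minimal sketch (assumed setup, not the thesis' exact architecture):
# independent training learns a separate diagnosis-code embedding per feature
# type; joint training shares one embedding across all feature types.
import torch
import torch.nn as nn

class CodeEmbedding(nn.Module):
    """Scores context items of a single feature type from a diagnosis code."""
    def __init__(self, num_codes, num_context_items, dim=128):
        super().__init__()
        self.code_emb = nn.Embedding(num_codes, dim)   # diagnosis-code vectors
        self.out = nn.Linear(dim, num_context_items)   # logits over context items

    def forward(self, code_ids):
        return self.out(self.code_emb(code_ids))

# Independent training: one model (and one code embedding) per feature type.
feature_sizes = {"diagnosis": 900, "medication": 400, "procedure": 300}  # assumed sizes
independent_models = {
    name: CodeEmbedding(num_codes=900, num_context_items=size)
    for name, size in feature_sizes.items()
}

# Joint training: a single shared code embedding with one output head per
# feature type, optimized on the sum of the per-feature losses.
class JointModel(nn.Module):
    def __init__(self, num_codes, feature_sizes, dim=128):
        super().__init__()
        self.code_emb = nn.Embedding(num_codes, dim)   # shared representation
        self.heads = nn.ModuleDict(
            {name: nn.Linear(dim, size) for name, size in feature_sizes.items()}
        )

    def forward(self, code_ids):
        h = self.code_emb(code_ids)
        return {name: head(h) for name, head in self.heads.items()}

# Toy co-occurrence batch: each diagnosis code is paired with one observed
# item per feature type; cross-entropy rewards scoring those items highly.
joint_model = JointModel(900, feature_sizes)
loss_fn = nn.CrossEntropyLoss()
codes = torch.tensor([3, 17])
targets = {"diagnosis": torch.tensor([5, 8]),
           "medication": torch.tensor([1, 2]),
           "procedure": torch.tensor([0, 4])}
logits = joint_model(codes)
loss = sum(loss_fn(logits[name], targets[name]) for name in feature_sizes)
loss.backward()
```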

    Representation learning for diagnosis codes has been studied extensively, but most existing work uses only the co-occurrence data among the diagnosis codes themselves. This thesis learns diagnosis code representations not only from other diagnosis codes but also from their co-occurrence with personal features, medications, and medical procedures, and uses the learned representations to predict the diagnosis codes occurring in the next visit. We propose two methods for learning the representations. The first is an independent training model that learns a representation of the diagnosis codes from each feature type separately. The second, called the joint learning approach, trains the representations of all feature types together in a single model. For each of the two kinds of learned representations, a corresponding prediction model is designed. For the independently trained representations, three integration methods are proposed to combine them in the prediction model: concatenation, weighted combination, and an attention mechanism. Experimental results show that the independent training model with concatenation, the independent training model with the attention mechanism, and the joint learning model all significantly outperform Med2Vec. Regarding feature combinations, the joint learning model achieves the best prediction performance when using diagnosis codes together with visit time and medical procedures.
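The three integration strategies for the independently trained representations (direct concatenation, weighted combination, and attention) can likewise be sketched briefly. This is a minimal illustration assuming PyTorch and equal-dimensional per-feature visit vectors; the variable names and the single linear prediction layer are simplifications introduced here, not the thesis' actual prediction model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, num_features, num_codes = 128, 3, 900   # assumed sizes
# One visit representation per feature type, e.g. produced by the
# independently trained diagnosis-code embeddings.
reps = torch.randn(num_features, dim)

# 1) Direct concatenation: the per-feature vectors are joined end to end,
#    so a downstream predictor would take num_features * dim inputs.
concatenated = reps.reshape(-1)

# 2) Weighted combination: one learnable weight per feature type,
#    normalized with a softmax before mixing.
weights = nn.Parameter(torch.ones(num_features))
weighted = (F.softmax(weights, dim=0).unsqueeze(1) * reps).sum(dim=0)

# 3) Attention mechanism: scores derived from the representations themselves
#    via a learnable query vector.
query = nn.Parameter(torch.randn(dim))
scores = F.softmax(reps @ query / dim ** 0.5, dim=0)
attended = (scores.unsqueeze(1) * reps).sum(dim=0)

# The integrated vector feeds a prediction layer whose outputs are scored
# against the diagnosis codes appearing in the next visit.
predictor = nn.Linear(dim, num_codes)
next_visit_logits = predictor(attended)
```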

    Abstract (Chinese) i
    ABSTRACT ii
    Acknowledgements iii
    List of Figures vi
    List of Tables viii
    Chapter 1 Introduction 1
    1.1 Research Motivation 1
    1.2 Research Objectives 2
    1.3 Research Problems 2
    1.4 Proposed Methods 7
    1.5 Thesis Organization 10
    Chapter 2 Literature Review 11
    2.1 Vector Representation Learning and Prediction 11
    2.1.1 Text Data 11
    2.1.2 Medical Record Prediction 12
    2.2 Attention Mechanism 14
    2.3 Joint Training 16
    Chapter 3 Problem Definition and Data Preprocessing 18
    3.1 Problem and Term Definitions 18
    3.2 Input Data Generation 19
    Chapter 4 Feature Vector Learning and Prediction for Medical Records 24
    4.1 Training of Diagnosis Code Representations 24
    4.1.1 Independent Training 24
    4.1.2 Joint Training 30
    4.2 Prediction Models 33
    4.2.1 For Independent Training 34
    4.2.2 For Joint Training 39
    Chapter 5 Experimental Evaluation and Discussion 41
    5.1 Dataset and Data Format 41
    5.2 Tuning the Number of Epochs for Representation Training 42
    5.2.1 Epoch Tuning for Independent Training 42
    5.2.2 Epoch Tuning for Joint Training 44
    5.3 Evaluation Metrics 45
    5.4 Experimental Evaluation 46
    5.4.1 Prediction Performance of Different Methods 46
    5.4.2 Analysis of Feature Combinations 50
    Chapter 6 Conclusions and Future Work 64
    References 65
    Appendix 1 ICD-9 Code Classification List 67

    [1] D. Bahdanau, K. Cho, and Y. Bengio. “Neural Machine Translation by Jointly Learning to Align and Translate.” Paper presented at Proceedings of the 3rd International Conference on Learning Representations (ICLR), 2015.
    [2] E. Choi, M. T. Bahadori, E. Searles, C. Coffey, M. Thompson, J. Bost, J. Tejedor-Sojo, and J. Sun. “Multi-Layer Representation Learning for Medical Concepts.” Paper presented at Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2016.
    [3] E. Choi, M. T. Bahadori, L. Song, W. F. Stewart, and J. Sun. “GRAM: Graph-based Attention Model for Healthcare Representation Learning.” Paper presented at Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2017.
    [4] E. Choi, M. T. Bahadori, and J. Sun. “Doctor AI: Predicting Clinical Events via Recurrent Neural Networks.” Paper presented at Proceedings of the Machine Learning for Healthcare Conference (MLHC), 2016.
    [5] E. Choi, M. T. Bahadori, J. Sun, J. Kulas, A. Schuetz, and W. F. Stewart. “RETAIN: An Interpretable Predictive Model for Healthcare using Reverse Time Attention Mechanism.” Paper presented at Advances in Neural Information Processing Systems (NIPS), 2016.
    [6] S. Kuzi, A. Shtok, and O. Kurland. “Query Expansion Using Word Embeddings.” Paper presented at Proceedings of the 25th ACM International Conference on Information and Knowledge Management (CIKM), 2016.
    [7] M. Luong, H. Pham, and C. D. Manning. “Effective Approaches to Attention-based Neural Machine Translation.” Paper presented at Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2015.
    [8] F. Ma, R. Chitta, J. Zhou, Q. You, T. Sun, and J. Gao. “Dipole: Diagnosis Prediction in Healthcare via Attention-based Bidirectional Recurrent Neural Networks.” Paper presented at Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2017.
    [9] T. Mikolov, K. Chen, G. Corrado, and J. Dean. “Efficient estimation of word representations in vector space.” Paper presented at Proceedings of the International Conference on Learning Representations (ICLR), 2013.
    [10] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. “Distributed representations of words and phrases and their compositionality.” Paper presented at Advances in Neural Information Processing Systems 26 (NIPS), 2013.
    [11] V. Mnih, N. Heess, A. Graves, and K. Kavukcuoglu. “Recurrent Models of Visual Attention.” Paper presented at Advances in Neural Information Processing Systems (NIPS), 2014.
    [12] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. “Attention is all you need.” Paper presented at Advances in Neural Information Processing Systems (NIPS), 2017.
    [13] M. Xie, H. Yin, H. Wang, F. Xu, W. Chen, and S. Wang. “Learning Graph-based POI Embedding for Location-based Recommendation.” Paper presented at Proceedings of the 25th ACM International Conference on Information and Knowledge Management (CIKM), 2016.
    [14] Z. Zhang, S. Liu, M. Li, M. Zhou, and E. Chen. “Joint Training for Neural Machine Translation Models with Monolingual Data.” Paper presented at the 32nd AAAI Conference on Artificial Intelligence, 2018.
    [15] G. Zuccon, B. Koopman, P. Bruza, and L. Azzopardi. “Integrating and Evaluating Neural Word Embeddings in Information Retrieval.” Paper presented at Proceedings of the 20th Australasian Document Computing Symposium (ADCS), 2015.
