Author: 戴宗明
Tai, Tsung-Ming
Thesis Title: 基於深度學習的車輛隨意網路路由協定
Deep Reinforcement Learning Routing for Vehicular Ad-hoc Network
Advisor: 賀耀華
Ho, Yao-Hua
Degree: 碩士
Master
Department: 資訊工程學系
Department of Computer Science and Information Engineering
Thesis Publication Year: 2018
Academic Year: 106
Language: 中文
Chinese
Number of pages: 58
Keywords (in Chinese): 車輛隨意網路、智慧交通系統、強化學習、路由協定、人工智慧、深度學習
Keywords (in English): Vehicular Ad-hoc Network (VANET), Intelligent Transport System (ITS), Position-awareness Routing
DOI URL: http://doi.org/10.6345/THE.NTNU.DCSIE.017.2018.B02
Thesis Type: Academic thesis / dissertation
Reference times: Clicks: 226, Downloads: 20
    Vehicular Ad-hoc Networks (VANET) provide the network infrastructure required by many smart-vehicle applications and by Intelligent Transport Systems (ITS). By exchanging packets between vehicles, messages can be delivered for applications such as driving safety, road-condition warnings, and driver-assistance systems. Because VANETs feature high node mobility and rapidly changing topology, compounded by complex road environments and signal interference, reliably delivering packets to their destinations has become the main research focus of VANET routing.
    This research proposes Deep Reinforcement Learning Routing for VANET (vDRL), which resembles position-based routing protocols but does not rely on any fixed routing rules; the generalization ability of reinforcement learning allows it to adapt to different environments and vehicle characteristics. Experimental results show that, in most scenario settings, vDRL not only improves the packet delivery ratio compared with Greedy Perimeter Stateless Routing (GPSR), but also reduces end-to-end delay and the number of nodes required for routing. In addition, this research proposes an effective pipeline that imports different street maps and real traffic-flow data and uses reinforcement learning to train an optimized routing protocol.
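    For context, the GPSR baseline that vDRL is compared against forwards each packet greedily toward the destination's position. A minimal sketch of that greedy step (the function name, coordinate representation, and Euclidean metric are illustrative assumptions, not code from the thesis):

```python
import math

def greedy_next_hop(current, destination, neighbors):
    """GPSR greedy mode: forward to the neighbor geographically closest
    to the destination, but only if it makes progress (is closer than
    the current node). Positions are (x, y) tuples.

    Returns the chosen neighbor position, or None when greedy forwarding
    fails (a 'local maximum', where GPSR switches to perimeter mode).
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best = min(neighbors, key=lambda n: dist(n, destination), default=None)
    if best is None or dist(best, destination) >= dist(current, destination):
        return None  # no neighbor makes progress toward the destination
    return best
```

    The failure case is exactly why GPSR needs its perimeter (face-routing) fallback; a learned policy such as vDRL avoids hard-coding either rule.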

    In Intelligent Transport Systems (ITS), smart-vehicle applications such as collision and road-hazard warnings provide a safer and smarter driving environment. For safety applications, information is often exchanged vehicle-to-vehicle (V2V). The fundamental network infrastructure for this is called a Vehicular Ad-hoc Network (VANET). The main difference between a VANET and a Mobile Ad-hoc Network (MANET) is the highly dynamic network topology caused by the high mobility of vehicles. This characteristic makes it harder for a VANET routing protocol to achieve a high packet delivery ratio while reducing end-to-end delay and overhead. Thus, designing an efficient routing protocol is one of the active research topics in VANETs.
    In this research, we propose Deep Reinforcement Learning Routing for VANET (vDRL) to address the above-mentioned problem. As in position-aware routing protocols, vehicle locations are used in the proposed vDRL; however, reinforcement learning is applied for next-hop selection. Unlike other VANET routing protocols, vDRL does not require fixed routing rules, which allows it to adapt to the highly dynamic vehicular network environment. In addition, a network simulator is implemented that integrates reinforcement learning with a neural network model. The simulator can generate a variety of maps with different street layouts and traffic models for training the routing protocol to adapt to different scenarios. The experimental results show that the proposed vDRL routing protocol achieves a high delivery rate and low delay with low overhead.
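    The learned next-hop selection and replay-buffer training described above can be illustrated with a toy Q-learning router. This is a hedged sketch only: it is tabular over node identifiers, whereas the thesis uses a neural network over vehicle positions (function approximation), and the class name, reward scheme, and hyperparameters are assumptions for illustration:

```python
import random
from collections import deque

class QRouting:
    """Toy RL next-hop selection in the spirit of vDRL: the state is the
    (current node, destination) pair, the action is the choice of
    neighbor, and one-step Q-learning updates are applied to transitions
    sampled from a replay buffer."""

    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1, buffer_size=1000):
        self.q = {}  # (node, dest, neighbor) -> estimated action value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.replay = deque(maxlen=buffer_size)  # bounded experience buffer

    def choose(self, node, dest, neighbors):
        """Epsilon-greedy next-hop selection among current neighbors."""
        if random.random() < self.epsilon:
            return random.choice(neighbors)
        return max(neighbors, key=lambda n: self.q.get((node, dest, n), 0.0))

    def store(self, node, dest, hop, reward, next_neighbors):
        """Record one forwarding transition for later training."""
        self.replay.append((node, dest, hop, reward, next_neighbors))

    def learn(self, batch_size=32):
        """Sample past transitions and apply the Q-learning update:
        Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
        batch = random.sample(list(self.replay),
                              min(batch_size, len(self.replay)))
        for node, dest, hop, reward, next_neighbors in batch:
            future = max((self.q.get((hop, dest, n), 0.0)
                          for n in next_neighbors), default=0.0)
            key = (node, dest, hop)
            old = self.q.get(key, 0.0)
            self.q[key] = old + self.alpha * (reward + self.gamma * future - old)
```

    A typical episode would reward a transition that delivers the packet and penalize drops or timeouts; replacing the dictionary with a neural network over position features recovers the DQN-style setup the abstract describes.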

    List of Figures
    List of Tables
    Chapter 1  Introduction
    Chapter 2  Literature Review
        2.1  Characteristics and Routing Protocols of VANETs
            2.1.1  Characteristics of VANETs
            2.1.2  Existing VANET Routing Protocols
        2.2  Background on Reinforcement Learning
            2.2.1  Common Reinforcement Learning Terminology
            2.2.2  Markov Decision Processes
            2.2.3  Monte Carlo Methods
            2.2.4  Temporal-Difference Methods
        2.3  Reinforcement-Learning-Based Routing Protocols
    Chapter 3  Method Design
        3.1  Problem Description
        3.2  Applying Reinforcement Learning to VANET Routing
        3.3  Training the Routing Protocol with Reinforcement Learning in a Simulated Environment
            3.3.1  Replay Buffer
            3.3.2  Packet Forwarding
            3.3.3  Updating the Action-Value Function
            3.3.4  Function Approximation
            3.3.5  Training Details
        3.4  Running the Routing Protocol on Vehicles
    Chapter 4  Experimental Analysis
        4.1  Experimental Setup
        4.2  Convergence Verification of Reinforcement Learning in VANETs
        4.3  Performance Evaluation of Reinforcement Learning Routing under Different Scenarios
            4.3.1  Routing Performance under Different Connection Models
            4.3.2  Routing Performance under Different Vehicle Densities
            4.3.3  Routing Performance under Different Movement Speeds
            4.3.4  Routing Performance under Different Communication Ranges
            4.3.5  Comparison of Routing Success Rates between Map (A) and Map (B)
    Chapter 5  Conclusion and Future Work
    References

    I. English References
    Benamar, Maria, et al. "Recent study of routing protocols in VANET: survey and taxonomy." WVNT 1st International Workshop on Vehicular Networks and Telematics. 2013.

    Martinez, Francisco J., et al. "A survey and comparative study of simulators for vehicular ad hoc networks (VANETs)." Wireless Communications and Mobile Computing 11.7 (2011): 813-828.

    Hartenstein, Hannes, and L. P. Laberteaux. "A tutorial survey on vehicular ad hoc networks." IEEE Communications magazine 46.6 (2008).

    Yousefi, Saleh, Mahmoud Siadat Mousavi, and Mahmood Fathy. "Vehicular ad hoc networks (VANETs): challenges and perspectives." ITS Telecommunications Proceedings, 2006 6th International Conference on. IEEE, 2006.

    Singh, Surmukh, and Sunil Agrawal. "VANET routing protocols: Issues and challenges." Engineering and Computational Sciences (RAECS), 2014 Recent Advances in. IEEE, 2014.

    Liu, Jianqi, et al. "A survey on position-based routing for vehicular ad hoc networks." Telecommunication Systems 62.1 (2016): 15-30.

    Sharef, Baraa T., Raed A. Alsaqour, and Mahamod Ismail. "Vehicular communication ad hoc routing protocols: A survey." Journal of network and computer applications 40 (2014): 363-396.

    Sutton, Richard S., and Andrew G. Barto. Reinforcement learning: An introduction. Vol. 1. No. 1. Cambridge: MIT press, 1998.

    Kaelbling, Leslie Pack, Michael L. Littman, and Andrew W. Moore. "Reinforcement learning: A survey." Journal of artificial intelligence research 4 (1996): 237-285.

    Sutton, Richard S., Doina Precup, and Satinder Singh. "Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning." Artificial intelligence 112.1-2 (1999): 181-211.

    Spaan, Matthijs TJ. "Partially observable Markov decision processes." Reinforcement Learning. Springer, Berlin, Heidelberg, 2012. 387-414.

    Jaakkola, Tommi, Satinder P. Singh, and Michael I. Jordan. "Reinforcement learning algorithm for partially observable Markov decision problems." Advances in neural information processing systems. 1995.

    Tesauro, Gerald. "Temporal difference learning and TD-Gammon." Communications of the ACM 38.3 (1995): 58-68.

    Nareyek, Alexander. "Choosing search heuristics by non-stationary reinforcement learning." Metaheuristics: Computer decision-making. Springer, Boston, MA, 2003. 523-544.

    Rummery, Gavin A., and Mahesan Niranjan. On-line Q-learning using connectionist systems. Vol. 37. University of Cambridge, Department of Engineering, 1994.

    Watkins, Christopher JCH, and Peter Dayan. "Q-learning." Machine learning 8.3-4 (1992): 279-292.

    Ranjan, Prabhakar, and Kamal Kant Ahirwar. "Comparative study of VANET and MANET routing protocols." Proc. of the International Conference on Advanced Computing and Communication Technologies (ACCT 2011). 2011.

    Li, Fan, and Yu Wang. "Routing in vehicular ad hoc networks: A survey." IEEE Vehicular technology magazine 2.2 (2007).

    Perkins, Charles, Elizabeth Belding-Royer, and Samir Das. Ad hoc on-demand distance vector (AODV) routing. No. RFC 3561. 2003.

    Johnson, David B., and David A. Maltz. "Dynamic source routing in ad hoc wireless networks." Mobile computing. Springer, Boston, MA, 1996. 153-181.

    Karp, Brad, and Hsiang-Tsung Kung. "GPSR: Greedy perimeter stateless routing for wireless networks." Proceedings of the 6th annual international conference on Mobile computing and networking. ACM, 2000.

    Blum, Jeremy, Azim Eskandarian, and Lance Hoffman. "Mobility management in IVC networks." Intelligent Vehicles Symposium, 2003. Proceedings. IEEE. IEEE, 2003.

    Maihofer, Christian. "A survey of geocast routing protocols." IEEE Communications Surveys & Tutorials 6.2 (2004).

    Sutton, Richard S., et al. "Policy gradient methods for reinforcement learning with function approximation." Advances in neural information processing systems. 2000.

    Boyan, Justin A., and Andrew W. Moore. "Generalization in reinforcement learning: Safely approximating the value function." Advances in neural information processing systems. 1995.

    Parr, Ronald, et al. "An analysis of linear models, linear value-function approximation, and feature selection for reinforcement learning." Proceedings of the 25th international conference on Machine learning. ACM, 2008.

    Mnih, Volodymyr, et al. "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529.

    Macker, Joseph. "Mobile ad hoc networking (MANET): Routing protocol performance issues and evaluation considerations." (1999).

    Chitkara, Mahima, and Mohd Waseem Ahmad. "Review on MANET: characteristics, challenges, imperatives and routing protocols." International Journal of Computer Science and Mobile Computing 3.2 (2014): 432-437.

    Menouar, Hamid, Massimiliano Lenardi, and Fethi Filali. "Movement prediction-based routing (MOPR) concept for position-based routing in vehicular networks." Vehicular Technology Conference, 2007. VTC-2007 Fall. 2007 IEEE 66th. IEEE, 2007.

    Souza, Evandro, Ioanis Nikolaidis, and Pawel Gburzynski. "A new aggregate local mobility (ALM) clustering algorithm for VANETs." Communications (ICC), 2010 IEEE International Conference on. IEEE, 2010.

    Rahbar, Hamidreza, Kshirasagar Naik, and Amiya Nayak. "DTSG: Dynamic time-stable geocast routing in vehicular ad hoc networks." Ad Hoc Networking Workshop (Med-Hoc-Net), 2010 The 9th IFIP Annual Mediterranean. IEEE, 2010.

    Ko, Young-Bae, and Nitin H. Vaidya. "Geocasting in mobile ad hoc networks: Location-based multicast algorithms." Proceedings of the 2nd IEEE Workshop on Mobile Computing Systems and Applications (WMCSA '99). IEEE, 1999. 101-110.

    Ko, Young-Bae, and Nitin H. Vaidya. "Location-aided routing (LAR) in mobile ad hoc networks." Proceedings of the 4th Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom '98). ACM, 1998.

    Boyan, Justin A., and Michael L. Littman. "Packet routing in dynamically changing networks: A reinforcement learning approach." Advances in neural information processing systems. 1994.

    Peshkin, Leonid, and Virginia Savova. "Reinforcement learning for adaptive routing." Neural Networks, 2002. IJCNN'02. Proceedings of the 2002 International Joint Conference on. Vol. 2. IEEE, 2002.

    Stampa, Giorgio, et al. "A Deep-Reinforcement Learning Approach for Software-Defined Networking Routing Optimization." arXiv preprint arXiv:1709.07080 (2017).

    Huang, Chung-Ming, Kun-chan Lan, and Chang-Zhou Tsai. "A survey of opportunistic networks." Advanced Information Networking and Applications-Workshops, 2008. AINAW 2008. 22nd International Conference on. IEEE, 2008.

    Fall, Kevin. "A delay-tolerant network architecture for challenged internets." Proceedings of the 2003 conference on Applications, technologies, architectures, and protocols for computer communications. ACM, 2003.

    Vahdat, Amin, and David Becker. "Epidemic routing for partially connected ad hoc networks." (2000).

    Lindgren, Anders, Avri Doria, and Olov Schelen. "Probabilistic routing in intermittently connected networks." Service assurance with partial and intermittent resources. Springer, Berlin, Heidelberg, 2004. 239-254.

    Burgess, John, et al. "Maxprop: Routing for vehicle-based disruption-tolerant networks." INFOCOM 2006. 25th IEEE International Conference on Computer Communications. Proceedings. IEEE, 2006.

    Leontiadis, Ilias, and Cecilia Mascolo. "GeOpps: Geographical opportunistic routing for vehicular networks." (2007): 1-6.
