Graduate Student: 游鈞凱 (You, Jiun-Kai)
Thesis Title: 結合改良式物件姿態估測之最佳機器人夾取策略 (Optimal Robotic Grasping Strategy Incorporating Improved Object Pose Estimation)
Advisor: 許陳鑑 (Hsu, Chen-Chien)
Degree: Master
Department: Department of Electrical Engineering (電機工程學系)
Year of Publication: 2021
Academic Year of Graduation: 109 (2020–2021)
Language: English
Number of Pages: 51
English Keywords: object pose estimation, LINEMOD, Occlusion LINEMOD, grasp strategy
DOI URL: http://doi.org/10.6345/NTNU202100110
Thesis Type: Academic thesis
[1] [Online]. Available: https://ifr.org/
[2] [Online]. Available: https://reurl.cc/j5b9aq
[3] D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly, M. Kalakrishnan, V. Vanhoucke, and S. Levine, “QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation,” Proceedings of the 2nd Conference on Robot Learning (CoRL), Proceedings of Machine Learning Research, vol. 87, PMLR, 2018, pp. 651–673.
[4] A. Bicchi and V. Kumar, “Robotic grasping and contact: a review,” Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings, San Francisco, CA, USA, 2000, vol. 1, pp. 348–353.
[5] J. Bohg, A. Morales, T. Asfour, and D. Kragic, “Data-Driven Grasp Synthesis—A Survey,” IEEE Transactions on Robotics, vol. 30, no. 2, pp. 289–309, April 2014.
[6] D. G. Lowe, “Object recognition from local scale-invariant features,” Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 1999, pp. 1150–1157.
[7] S. Tulsiani and J. Malik, “Viewpoints and keypoints,” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, 2015, pp. 1510–1519.
[8] G. Pavlakos, X. Zhou, A. Chan, K. G. Derpanis, and K. Daniilidis, “6-DoF object pose from semantic keypoints,” 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 2017, pp. 2011–2018.
[9] S. Hinterstoisser, V. Lepetit, S. Ilic, S. Holzer, G. Bradski, K. Konolige, and N. Navab, “Model based training, detection and pose estimation of texture-less 3D objects in heavily cluttered scenes,” Asian Conference on Computer Vision (ACCV), Daejeon, Korea, 2012, pp. 548–562.
[10] Z. Cao, Y. Sheikh, and N. K. Banerjee, “Real-time scalable 6DoF pose estimation for texture-less objects,” 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 2016, pp. 2441–2448.
[11] E. Brachmann, A. Krull, F. Michel, S. Gumhold, J. Shotton, and C. Rother, “Learning 6D object pose estimation using 3D object coordinates,” European Conference on Computer Vision (ECCV), Springer, Zurich, Switzerland, 2014, pp. 536–551.
[12] A. Segal, D. Haehnel, and S. Thrun, “Generalized-ICP,” Robotics: Science and Systems (RSS), 2009.
[13] C. Wang et al., “DenseFusion: 6D object pose estimation by iterative dense fusion,” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019, pp. 3338–3347.
[14] S. Peng et al., “PVNet: Pixel-wise voting network for 6DoF pose estimation,” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019, pp. 4561–4570.
[15] P. Wohlhart and V. Lepetit, “Learning descriptors for object recognition and 3D pose estimation,” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 2015, pp. 3109–3118.
[16] M. Rad and V. Lepetit, “BB8: a scalable, accurate, robust to partial occlusion method for predicting the 3D poses of challenging objects without using depth,” IEEE International Conference on Computer Vision (ICCV), 2017, pp. 3828–3836.
[17] W. Kehl, F. Manhardt, F. Tombari, S. Ilic, and N. Navab, “SSD-6D: making RGB-based 3D detection and 6D pose estimation great again,” IEEE International Conference on Computer Vision (ICCV), 2017, pp. 1521–1529.
[18] M. Sundermeyer, Z. Marton, M. Durner, M. Brucker, and R. Triebel, “Implicit 3D orientation learning for 6D object detection from RGB images,” European Conference on Computer Vision (ECCV), Munich, Germany, 2018, pp. 699–715.
[19] Y. Li, G. Wang, X. Ji, Y. Xiang, and D. Fox, “DeepIM: deep iterative matching for 6D pose estimation,” European Conference on Computer Vision (ECCV), Munich, Germany, 2018, pp. 683–698.
[20] P. Castro, A. Armagan, and T. Kim, “Accurate 6D object pose estimation by pose conditioned mesh reconstruction,” 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 2020, pp. 4147–4151, doi: 10.1109/ICASSP40776.2020.9053627.
[21] C. Lin, C. Tsai, Y. Lai, S. Li, and C. Wong, “Visual object recognition and pose estimation based on a deep semantic segmentation network,” IEEE Sensors Journal, vol. 18, no. 22, pp. 9370–9381, Nov. 2018.
[22] A. Gadwe and H. Ren, “Real-time 6DOF pose estimation of endoscopic instruments using printable markers,” IEEE Sensors Journal, vol. 19, no. 6, pp. 2338–2346, March 2019.
[23] S. Zakharov, I. Shugurov, and S. Ilic, “DPOD: 6D pose object detector and refiner,” 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), 2019.
[24] B. Tekin, S. N. Sinha, and P. Fua, “Real-time seamless single shot 6D object pose prediction,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 2018, pp. 292–301.
[25] Y. Xiang, T. Schmidt, V. Narayanan, and D. Fox, “PoseCNN: A convolutional neural network for 6D object pose estimation in cluttered scenes,” Robotics: Science and Systems XIV, Pittsburgh, Pennsylvania, USA, 2018, doi: 10.15607/RSS.2018.XIV.019.
[26] Y. Hu, J. Hugonot, P. Fua, and M. Salzmann, “Segmentation-driven 6D object pose estimation,” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019, pp. 3385–3394.
[27] Z. Zhao, G. Peng, H. Wang, H. Fang, C. Li, and C. Lu, “Estimating 6D Pose From Localizing Designated Surface Keypoints,” arXiv:1812.01387, 2018.
[28] K. Park, T. Patten, and M. Vincze, “Pix2Pose: Pixel-wise coordinate regression of objects for 6D pose estimation,” 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), 2019, pp. 7667–7676.
[29] S.-K. Huang, C.-C. Hsu, W.-Y. Wang, and C.-H. Lin, “Iterative Pose Refinement for Object Pose Estimation Based on RGBD Data,” Sensors, vol. 20, no. 15, article 4114, 2020, doi: 10.3390/s20154114.
[30] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[31] D. G. Viswanathan, “Features from Accelerated Segment Test (FAST),” n.d.
[32] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “SURF: Speeded Up Robust Features,” Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.
[33] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: An efficient alternative to SIFT or SURF,” 2011 International Conference on Computer Vision (ICCV), Barcelona, Spain, 2011, pp. 2564–2571.
[34] S. Hinterstoisser, V. Lepetit, S. Ilic, S. Holzer, G. Bradski, K. Konolige, and N. Navab, “Model based training, detection and pose estimation of texture-less 3D objects in heavily cluttered scenes,” Asian Conference on Computer Vision (ACCV), 2012.
[35] E. Brachmann, A. Krull, F. Michel, S. Gumhold, J. Shotton, and C. Rother, “Learning 6D object pose estimation using 3D object coordinates,” European Conference on Computer Vision (ECCV), 2014.
[36] E. Brachmann, F. Michel, A. Krull, M. Ying Yang, S. Gumhold, and C. Rother, “Uncertainty-driven 6D pose estimation of objects and scenes from a single RGB image,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 3364–3372, doi: 10.1109/CVPR.2016.366.
[37] A. ten Pas, M. Gualtieri, K. Saenko, and R. Platt, “Grasp pose detection in point clouds,” The International Journal of Robotics Research, vol. 36, no. 13–14, pp. 1455–1473, December 2017.
[38] A. Mousavian, C. Eppner, and D. Fox, “6-DOF GraspNet: Variational Grasp Generation for Object Manipulation,” 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), 2019, pp. 2901–2910.
[39] B. Zhao, H. Zhang, X. Lan, H. Wang, Z. Tian, and N. Zheng, “RegNet: Region-based grasp network for single-shot grasp detection in point clouds,” arXiv:2002.12647, 2020.
[40] A. T. Miller, S. Knoop, H. Christensen, and P. K. Allen, “Automatic grasp planning using shape primitives,” IEEE International Conference on Robotics and Automation (ICRA), 2003.
[41] N. Vahrenkamp, L. Westkamp, N. Yamanobe, E. E. Aksoy, and T. Asfour, “Part-based grasp planning for familiar objects,” 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun, 2016, pp. 919–925, doi: 10.1109/HUMANOIDS.2016.7803382.
[42] T. Patten, K. Park, and M. Vincze, “DGCM-Net: Dense geometrical correspondence matching network for incremental experience-based robotic grasping,” arXiv:2001.05279, 2020.
[43] A. Sahbani, S. El-Khoury, and P. Bidaud, “An overview of 3D object grasp synthesis algorithms,” Robotics and Autonomous Systems, vol. 60, no. 3, pp. 326–336, March 2012.
[44] J. Varley, C. DeChant, A. Richardson, J. Ruales, and P. Allen, “Shape completion enabled robotic grasping,” 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, 2017, pp. 2442–2447.
[45] Y. Domae, H. Okuda, Y. Taguchi, K. Sumi, and T. Hirai, “Fast graspability evaluation on single depth maps for bin picking with general grippers,” 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, 2014, pp. 1997–2004.
[46] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J.-A. Ojea, and K. Goldberg, “Dex-Net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics,” arXiv:1703.09312, 2017.
[47] Y. Jiang, S. Moseson, and A. Saxena, “Efficient grasping from RGBD images: Learning using a new rectangle representation,” 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, 2011, pp. 3304–3311, doi: 10.1109/ICRA.2011.5980145.
[48] M. Vohra, R. Prakash, and L. Behera, “Real-time grasp pose estimation for novel objects in densely cluttered environment,” 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, 2019, pp. 1–6.
[49] D. Park, Y. Seo, D. Shin, J. Choi, and S.-Y. Chun, “A single multi-task deep neural network with post-processing for object detection with reasoning and robotic grasp detection,” arXiv:1909.07050, 2019.
[50] J. Bohg and D. Kragic, “Learning grasping points with shape context,” Robotics and Autonomous Systems, vol. 58, no. 4, pp. 362–377, April 2010.
[51] [Online]. Available: https://www.coppeliarobotics.com/