
Graduate Student: Chung, Si-Tung (鍾暿峒)
Thesis Title: Automated Component Inspection Method for Printed Circuit Boards Using Contrastive Algorithm (應用對比式演算法則於印刷電路板的自動元件檢測方法之研究)
Advisor: Hwang, Wen-Jyi (黃文吉)
Oral Examination Committee: You, Shing-Chern (尤信程); Guan, Albert (官振傑); Hwang, Wen-Jyi (黃文吉)
Oral Examination Date: 2023/07/24
Degree: Master
Department: Department of Computer Science and Information Engineering
Year of Publication: 2023
Academic Year of Graduation: 111 (2022-2023)
Language: Chinese
Pages: 42
Keywords: contrastive learning, component inspection, object detection, artificial intelligence, internet of things
DOI URL: http://doi.org/10.6345/NTNU202301276
Document Type: Academic thesis
    In today's industrial production processes, automated optical inspection is widely used to detect defects on products, with cameras mounted along the production line. Printed circuit boards (PCBs) are a mainstay of the electronics industry, and inspecting the quantity and position of the small components on them is a major challenge. Because electronic components come in a wide variety, building and training neural network models is regarded as a way to automate component inspection: such models learn features from large numbers of samples, reach high recognition accuracy, and infer quickly thanks to the parallel processing capability of GPUs. A well-designed architecture also lets a model adapt to different component types and scale gracefully as components are added or removed.
    However, existing object detection models still struggle to detect small objects with high accuracy, and varying ambient light on factory production lines makes component recognition even harder. This thesis therefore proposes a training method for neural network models, grounded in contrastive learning, for automated component inspection. Models trained with this method correctly detect the electronic components on a PCB even under changing ambient light.
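    The thesis's exact loss formulation is not reproduced on this page, so the following is only a minimal sketch of the general idea behind such training: an InfoNCE-style contrastive loss (cf. Sohn, 2016, and He et al., 2020, in the references below) in PyTorch, where two differently-lit views of the same component crop form a positive pair. The function name, the temperature value, and the brightness jitter standing in for "varying ambient light" are illustrative assumptions, not the author's implementation.

```python
# Minimal sketch (not the thesis's code): an InfoNCE-style contrastive
# loss that pulls embeddings of two differently-lit views of the same
# component together while pushing apart those of other components.
import torch
import torch.nn.functional as F
from torchvision import transforms

def contrastive_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """z_a, z_b: (N, D) embeddings of two augmented views of N crops."""
    z_a = F.normalize(z_a, dim=1)           # compare by cosine similarity
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature    # (N, N) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Diagonal entries are positive pairs; all off-diagonals are negatives.
    return F.cross_entropy(logits, targets)

# Lighting variation could be simulated with brightness jitter, e.g.:
jitter = transforms.ColorJitter(brightness=0.5)
# view_a, view_b = jitter(crop), jitter(crop)  # two "lightings" of one crop

# Smoke test with random embeddings:
z1 = torch.randn(8, 128)
z2 = z1 + 0.1 * torch.randn(8, 128)
print(contrastive_loss(z1, z2))
```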
    Because a factory production line rarely makes only one product, a component inspection method must accommodate changing requirements, yet adding component types degrades the accuracy of models trained with existing methods. To address this, the thesis proposes a highly flexible model architecture that can be adjusted to different component types and detects multiple component types while maintaining high accuracy.
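    The architecture is described only at this level of detail here. One plausible way to realize the flexibility claim, sketched below purely as an assumption, is a shared backbone with a small CenterNet-style heatmap head per component type (cf. Zhou et al., 2019, in the references), so that adding or removing a component type touches only its own head. All class, method, and parameter names are hypothetical.

```python
# Hypothetical sketch of a class-extensible detector: a shared backbone
# plus one lightweight per-component-type head predicting center heatmaps.
import torch
import torch.nn as nn

class ExtensibleDetector(nn.Module):
    def __init__(self, backbone: nn.Module, feat_ch: int = 64):
        super().__init__()
        self.backbone = backbone         # shared feature extractor
        self.feat_ch = feat_ch
        self.heads = nn.ModuleDict()     # one small head per component type

    def add_component_type(self, name: str) -> None:
        # A new component type costs only a small head, not a new network.
        self.heads[name] = nn.Sequential(
            nn.Conv2d(self.feat_ch, self.feat_ch, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(self.feat_ch, 1, 1))   # per-pixel center heatmap

    def forward(self, x: torch.Tensor) -> dict:
        feats = self.backbone(x)
        return {name: head(feats).sigmoid()  # one heatmap per type
                for name, head in self.heads.items()}

# Illustrative usage with a trivial stand-in backbone:
net = ExtensibleDetector(nn.Conv2d(3, 64, 3, padding=1))
net.add_component_type("capacitor")
net.add_component_type("resistor")
out = net(torch.randn(1, 3, 256, 256))   # {'capacitor': ..., 'resistor': ...}
print({k: v.shape for k, v in out.items()})
```

    A per-type head would also let component types be trained or swapped independently, which matches the abstract's claim of adjusting the model to different component mixes.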
    In practice, the PCBs to be inspected are not fixed in place on the production line, so real-time inspection requires pairing a camera with an edge computing device. Edge devices have limited hardware resources, whereas high-accuracy models usually demand heavy computation and large parameter counts. The architecture proposed here therefore adds only a small number of parameters while maintaining recognition accuracy, allowing it to run properly on edge computing devices.
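    To make "adding only a small number of parameters" concrete, here is a back-of-the-envelope check one could run. MobileNetV2 is used because it appears in the reference list (Sandler et al., 2018) as a typical edge-oriented backbone, but both that choice and the head shape (reused from the sketch above) are assumptions of this illustration, not measurements of the thesis's model.

```python
# Rough parameter-count comparison (sketch only; the numbers in the
# comments are approximate and not taken from the thesis).
import torch.nn as nn
from torchvision.models import mobilenet_v2

def count_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

backbone = mobilenet_v2().features        # edge-friendly feature extractor
head = nn.Sequential(                     # hypothetical per-type head
    nn.Conv2d(1280, 64, 1),               # cheap channel reduction
    nn.Conv2d(64, 64, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 1, 1))                  # center heatmap for one type

print(f"backbone parameters: {count_params(backbone):,}")  # ~2.2 million
print(f"one extra head:      {count_params(head):,}")      # ~0.12 million
```

    Under these assumptions, each additional component type costs on the order of five percent of the backbone's parameters, which is the kind of modest growth the abstract describes.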

    Chapter 1  Introduction  1
        Section 1  Research Background  1
        Section 2  Research Motivation  3
        Section 3  Research Objectives  4
        Section 4  Research Contributions  4
    Chapter 2  Theoretical Background  6
        Section 1  Contrastive Learning  6
        Section 2  CenterNet Component Detection  8
        Section 3  Transfer Learning  9
    Chapter 3  Research Method  11
        Section 1  Dataset  11
        Section 2  Data Augmentation  12
        Section 3  Model Architecture (Multiple Class)  14
        Section 4  Model Training  16
    Chapter 4  Experimental Results and Performance Analysis  22
        Section 1  Experimental Environment  22
        Section 2  Overall Model Architecture (Single Class)  23
        Section 3  Components Under Inspection  26
        Section 4  Dataset Generation  27
        Section 5  Component Detection Results  28
        Section 6  Model Evaluation Methods  29
        Section 7  Model Comparison  32
        Section 8  Comparison with Existing Models  35
    Chapter 5  Conclusion  38
    References  39

    古佳儫 (2022). 基於熱點圖標記法則發展元件佈局檢測系統應用於PCB之研究 [Development of a component placement inspection system for PCBs based on heatmap labeling] (Unpublished master's thesis). Department of Computer Science and Information Engineering, National Taiwan Normal University. http://doi.org/10.6345/NTNU202201353
    Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., & Zagoruyko, S. (2020). End-to-End Object Detection with Transformers. In A. Vedaldi, H. Bischof, T. Brox, & J.-M. Frahm (Eds.), Computer Vision – ECCV 2020 (pp. 213–229). Springer International Publishing. https://doi.org/10.1007/978-3-030-58452-8_13
    Chen, C., Tian, X., Wu, F., & Xiong, Z. (2017). UDNet: Up-Down Network for Compact and Efficient Feature Representation in Image Super-Resolution. 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), 1069–1076. https://doi.org/10.1109/ICCVW.2017.130
    Duan, K., Bai, S., Xie, L., Qi, H., Huang, Q., & Tian, Q. (2019). CenterNet: Keypoint Triplets for Object Detection. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 6568–6577. https://doi.org/10.1109/ICCV.2019.00667
    Hadsell, R., Chopra, S., & LeCun, Y. (2006). Dimensionality Reduction by Learning an Invariant Mapping. 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), 2, 1735–1742. https://doi.org/10.1109/CVPR.2006.100
    He, K., Fan, H., Wu, Y., Xie, S., & Girshick, R. (2020). Momentum Contrast for Unsupervised Visual Representation Learning. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 9726–9735. https://doi.org/10.1109/CVPR42600.2020.00975
    Huang, J., Rathod, V., Sun, C., Zhu, M., Korattikara, A., Fathi, A., Fischer, I., Wojna, Z., Song, Y., Guadarrama, S., & Murphy, K. (2017). Speed/Accuracy Trade-Offs for Modern Convolutional Object Detectors. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3296–3297. https://doi.org/10.1109/CVPR.2017.351
    Lin, T.-Y., Goyal, P., Girshick, R., He, K., & Dollár, P. (2020). Focal Loss for Dense Object Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(2), 318–327. https://doi.org/10.1109/TPAMI.2018.2858826
    Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You Only Look Once: Unified, Real-Time Object Detection. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 779–788. https://doi.org/10.1109/CVPR.2016.91
    Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Advances in Neural Information Processing Systems, 28. https://proceedings.neurips.cc/paper_files/paper/2015/hash/14bfa6bb14875e45bba028a21ed38046-Abstract.html
    Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L.-C. (2018). MobileNetV2: Inverted Residuals and Linear Bottlenecks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4510–4520. https://doi.org/10.1109/CVPR.2018.00474
    Sohn, K. (2016). Improved Deep Metric Learning with Multi-class N-pair Loss Objective. Advances in Neural Information Processing Systems, 29. https://proceedings.neurips.cc/paper_files/paper/2016/hash/6b180037abbebea991d8b1232f8a8ca9-Abstract.html
    Wu, Z., Shen, C., & van den Hengel, A. (2019). Wider or Deeper: Revisiting the ResNet Model for Visual Recognition. Pattern Recognition, 90, 119–133. https://doi.org/10.1016/j.patcog.2019.01.006
    Yao, T., Yi, X., Cheng, D. Z., Yu, F., Chen, T., Menon, A., Hong, L., Chi, E. H., Tjoa, S., Kang, J. (Jay), & Ettinger, E. (2021). Self-supervised Learning for Large-scale Item Recommendations. Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 4321–4330. https://doi.org/10.1145/3459637.3481952
    Zhou, X., Wang, D., & Krähenbühl, P. (2019). Objects as Points. ArXiv e-prints. https://doi.org/10.48550/arXiv.1904.07850

    Full text not authorized for public release; download unavailable.