Hui Sun, Qiao Zhou, Zhihui Hu, Yu Tian, Yueying Wang, and Qiang Le
[1] D. He, Y. Pang, G. Lodewijks, and X. Liu, Healthy speed control of belt conveyors on conveying bulk materials, Powder Technology, 327, 2018, 408–419.
[2] J. Li and C. Miao, The conveyor belt longitudinal tear on-line detection based on improved SSR algorithm, Optik, 127(19), 2016, 8002–8010.
[3] T. Qiao, L. Chen, Y. Pang, G. Yan, and C. Miao, Integrative binocular vision detection method based on infrared and visible light fusion for conveyor belts longitudinal tear, Measurement, 110, 2017, 192–201.
[4] C. Hou, T. Qiao, H. Zhang, Y. Pang, and X. Xiong, Multispectral visual detection method for conveyor belt longitudinal tear, Measurement, 143, 2019, 246–257.
[5] C. Hou, T. Qiao, M. Qiao, X. Xiong, Y. Yang, and H. Zhang, Research on audio-visual detection method for conveyor belt longitudinal tear, IEEE Access, 7, 2019, 120202–120213.
[6] J. Che, T. Qiao, Y. Yang, H. Zhang, and Y. Pang, Longitudinal tear detection method of conveyor belt based on audio-visual fusion, Measurement, 176, 2021, 109152.
[7] W. Zhang, X. Wang, T. Chen, L. Gao, X. Sun, and H. Ren, Fast target extraction based on Bayesian blob analysis and simulated annealing for underwater images, International Journal of Robotics and Automation, 32(2), 2017, 101–108.
[8] G. Wang, L. Zhang, H. Sun, and C. Zhu, Longitudinal tear detection of conveyor belt under uneven light based on Haar-AdaBoost and Cascade algorithm, Measurement, 168, 2021, 108341.
[9] J. Yang, S. Li, Z. Wang, H. Dong, J. Wang, and S. Tang, Using deep learning to detect defects in manufacturing: A comprehensive survey and current challenges, Materials, 13(24), 2020, 5755.
[10] D. Qu, T. Qiao, Y. Pang, Y. Yang, and H. Zhang, Research on ADCN method for damage detection of mining conveyor belt, IEEE Sensors Journal, 21(6), 2020, 8662–8669.
[11] G. Wang, Z. Rao, H. Sun, C. Zhu, and Z. Liu, A belt tearing detection method of YOLOv4-BELT for multi-source interference environment, Measurement, 189, 2022, 110469.
[12] M. Zhang, Y. Zhang, M. Zhou, K. Jiang, H. Shi, Y. Yu, and N. Hao, Application of lightweight convolutional neural network for damage detection of conveyor belt, Applied Sciences, 11(16), 2021, 7282.
[13] Y. Wang, Y. Wang, and L. Dang, Video detection of foreign objects on the surface of belt conveyor underground coal mine based on improved SSD, Journal of Ambient Intelligence and Humanized Computing, 14, 2023, 1–10.
[14] G. Wang, Z. Yang, H. Sun, Q. Zhou, and Z. Yang, AC-SNGAN: Multi-class data augmentation for damage detection of conveyor belt surface using improved ACGAN, Measurement, 224, 2024, 113814.
[15] S. Li, J. Wang, J. Sheng, Z. Liu, S. Li, and Y. Cui, Maritime target detection for unmanned surface vehicles based on lightweight networks under foggy weather, International Journal of Robotics and Automation, 39(1), 2024, 31–45.
[16] G. Wang, Z. Liu, H. Sun, C. Zhu, and Z. Yang, Yolox-BTFPN: An anchor-free conveyor belt damage detector with a biased feature extraction network, Measurement, 200, 2022, 111675.
[17] Y. Gao, H.-M. Hu, B. Li, and Q. Guo, Naturalness preserved nonuniform illumination estimation for image enhancement based on retinex, IEEE Transactions on Multimedia, 20(2), 2017, 335–344.
[18] A. Yang, Single underwater image restoration based on adaptive color correction and adaptive transmission fusion, IEEE Access, 8, 2020, 43006–43006.
[19] X. Jin, Z. Chen, J. Lin, W. Zhou, J. Chen, and C. Shan, AI-GAN: Signal de-interference via asynchronous interactive generative adversarial network, in Proc. IEEE International Conf. on Multimedia & Expo Workshops (ICMEW), Shanghai, China, 2019, 228–233.
[20] A. Howard, M. Sandler, B. Chen, W. Wang, L.-C. Chen, M. Tan, G. Chu, Y. Zhu, R. Pang, V. Vasudevan, Q.V. Le, and H. Adam, Searching for MobileNetV3, in Proc. IEEE/CVF International Conf. on Computer Vision (ICCV), Seoul, South Korea, 2019, 1314–1324.
[21] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, L. Kaiser, and I. Polosukhin, Attention is all you need, arXiv:1706.03762, 2017.
[22] M. Dehghani, J. Djolonga, B. Mustafa, P. Padlewski, J. Heek, J. Gilmer, A. Steiner, M. Caron, R. Geirhos, I. Alabdulmohsin, R. Jenatton, L. Beyer, M. Tschannen, A. Arnab, X. Wang, C.R. Ruiz, M. Minderer, J. Puigcerver, U. Evci, M. Kumar, S. Van Steenkiste, G.F. Elsayed, A. Mahendran, F. Yu, A. Oliver, F. Huot, J. Bastings, M. Collier, A.A. Gritsenko, V. Birodkar, C.N. Vasconcelos, Y. Tay, T. Mensink, A. Kolesnikov, F. Pavetic, D. Tran, T. Kipf, M. Lucic, X. Zhai, D. Keysers, J.J. Harmsen, and N. Houlsby, Scaling vision transformers to 22 billion parameters, in Proc. International Conf. on Machine Learning (ICML), 2023, 7480–7512.
[23] Z. Liu, H. Hu, Y. Lin, Z. Yao, Z. Xie, Y. Wei, J. Ning, Y. Cao, Z. Zhang, L. Dong, F. Wei, and B. Guo, Swin transformer V2: Scaling up capacity and resolution, in Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, 2022, 11999–12009.
[24] Y. Chen, X. Dai, D. Chen, M. Liu, X. Dong, L. Yuan, and Z. Liu, Mobile-former: Bridging MobileNet and transformer, in Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, 2022, 5260–5269.
[25] A. Wang, H. Chen, Z. Lin, J. Han, and G. Ding, RepViT: Revisiting mobile CNN from ViT perspective, in Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, 2024, 15909–15920.
[26] Z. Xia, X. Pan, S. Song, L.E. Li, and G. Huang, Vision transformer with deformable attention, in Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, 2022, 4784–4793.
[27] A.G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, MobileNets: Efficient convolutional neural networks for mobile vision applications, arXiv:1704.04861, 2017.
[28] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, MobileNetV2: Inverted residuals and linear bottlenecks, in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, 2018, 4510–4520.
[29] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A.C. Berg, SSD: Single shot multibox detector, in Proc. 14th European Conf. on Computer Vision (ECCV), Amsterdam, 2016, 21–37.