VISUAL SERVOING IN VIRTUALISED ENVIRONMENTS BASED ON OPTICAL FLOW LEARNING AND CONSTRAINED OPTIMISATION, 1–10.
Takuya Iwasaki, Solvi Arnold, and Kimitoshi Yamazaki
References
[1] L.E. Weiss, A.C. Sanderson, and C.P. Neuman, Dynamic sensor-based control of robots with visual feedback, IEEE Journal of Robotics and Automation, RA-3(5), 1987, 404–417.
[2] B. Espiau, F. Chaumette, and P. Rives, A new approach to visual servoing in robotics, IEEE Transactions on Robotics and Automation, 8(3), 1992, 313–326.
[3] D. Kragic, and H. Christensen, Robust visual servoing, The International Journal of Robotics Research, 22(10–11), 2003, 923–939, doi: 10.1177/0278364903022100099.
[4] F. Chaumette, and S. Hutchinson, Visual servo control, Part I: Basic approaches, IEEE Robotics & Automation Magazine, 13(4), 2006, 82–90.
[5] J.P. Bandera, J.A. Rodríguez, L. Molina-Tanco, and A. Bandera, A survey of vision-based architectures for robot learning by imitation, International Journal of Humanoid Robotics, 9(1), 2012, 1250006.
[6] S. Benhimane, and E. Malis, Homography-based 2D visual tracking and servoing, The International Journal of Robotics Research, 26(7), 2007, 661–676.
[7] G. Silveira, and E. Malis, Direct visual servoing: Vision-based estimation and control using only nonmetric information, IEEE Transactions on Robotics, 28(4), 2012, 974–980.
[8] Y. Iwatani, K. Watanabe, and K. Hashimoto, Visual tracking with occlusion handling for visual servo control, Proc. of IEEE Int. Conf. on Robotics and Automation, Pasadena, CA, 2008, 101–106.
[9] G. Chesi, K. Hashimoto, D. Prattichizzo, and A. Vicino, Keeping features in the field of view in eye-in-hand visual servoing: A switching approach, IEEE Transactions on Robotics, 20(5), 2004, 908–913.
[10] M. Bakthavatchalam, F. Chaumette, and E. Marchand, Photometric moments: New promising candidates for visual servoing, Proc. of IEEE Int. Conf. on Robotics and Automation, Karlsruhe, 2013, 5241–5246, doi: 10.1109/ICRA.2013.6631326.
[11] F. Castelli, S. Michieletto, S. Ghidoni, and E. Pagello, A machine learning-based visual servoing approach for fast robot control in industrial setting, International Journal of Advanced Robotic Systems, 14(6), 2017, doi: 10.1177/1729881417738884.
[12] D.J. Agravante, G. Claudio, F. Spindler, and F. Chaumette, Visual servoing in an optimization framework for the whole-body control of humanoid robots, IEEE Robotics and Automation Letters, 2(2), 2017, 608–615, doi: 10.1109/LRA.2016.2645512.
[13] A. Vakanski, F. Janabi-Sharifi, and I. Mantegh, An image-based trajectory planning approach for robust robot programming by demonstration, Robotics and Autonomous Systems, 98, 2017, 241–257.
[14] F. Chaumette, and S. Hutchinson, Visual servo control, Part I: Basic approaches, IEEE Robotics & Automation Magazine, 13(4), 2006, 82–90.
[15] Q. Bateux, E. Marchand, J. Leitner, F. Chaumette, and P. Corke, Training deep neural networks for visual servoing, Proc. of IEEE Int. Conf. on Robotics and Automation, Brisbane, QLD, 2018, 3307–3314.
[16] F. Tokuda, S. Arai, and K. Kosuge, Convolutional neural network-based visual servoing for eye-to-hand manipulator, IEEE Access, 9, 2021, 91820–91835, doi: 10.1109/ACCESS.2021.3091737.
[17] S. Levine, P. Pastor, A. Krizhevsky, and D. Quillen, Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection, The International Journal of Robotics Research, 37(4–5), 2018, 421–436.
[18] S. Iqbal, J. Tremblay, T. To, J. Cheng, E. Leitch, A. Campbell, K. Leung, D. McKay, and S. Birchfield, Toward sim-to-real directional semantic grasping, 2019, arXiv preprint arXiv:1909.02075.
[19] T. Kawagoshi, S. Arnold, and K. Yamazaki, Visual servoing using virtual space for both learning and task execution, Proc. of the 2021 IEEE/SICE International Symposium on System Integration, Fukushima, 2021, 292–297.
[20] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazırbas, V. Golkov, P. van der Smagt, D. Cremers, and T. Brox, FlowNet: Learning optical flow with convolutional networks, Proc. of IEEE International Conf. on Computer Vision, Santiago, 2015, 2758–2766.
[21] Gazebo, http://gazebosim.org/; Point Cloud Library, https://pointclouds.org (accessed Nov. 14, 2022).
[22] TensorFlow, https://www.tensorflow.org/ (accessed Nov. 14, 2022).
[23] CVXOPT, https://cvxopt.org/ (accessed Nov. 14, 2022).
[24] A. Yamaji, GSS generator: A software to distribute many points with equal intervals on an unit sphere, Geoinformatics, 12(1), 2001, 3–12.
[25] D. Bolya, C. Zhou, F. Xiao, and Y.J. Lee, YOLACT: Real-time instance segmentation, Proc. of IEEE/CVF International Conference on Computer Vision, 2019, 9156–9165.
[26] P. Katara, Y.V.S. Harish, H. Pandya, A. Gupta, A. Mehdi Sanchawala, G. Kumar, B. Bhowmick, and K. Madhava Krishna, DeepMPCVS: Deep model predictive control for visual servoing, Proc. of 4th Annual Conf. on Robot Learning, Cambridge, MA, 2020, 1–10.
[27] E. Godinho Ribeiro, R. de Queiroz Mendes, and V. Grassi, Real-time deep learning approach to visual servo control and grasp detection for autonomous robotic manipulation, Robotics and Autonomous Systems, 139, 2021, 103757.
DOI: 10.2316/J.2023.206-0810
From Journal: (206) International Journal of Robotics and Automation, 2023