Estimating future forceps movement using deep learning for robotic camera control in laparoscopic surgery
Online publication date: Tue, 19-Apr-2022
by Yamato Umetani; Masahiko Minamoto; Shigeki Hori; Tetsuro Miyazaki; Kenji Kawashima
International Journal of Mechatronics and Automation (IJMA), Vol. 9, No. 2, 2022
Abstract: In laparoscopic surgery, an assistant surgeon must hold the laparoscope. Autonomous control of the laparoscope holder lets the operating surgeon focus on the procedure, and estimating the future movement of the forceps can improve the control performance of such a holder robot. We previously proposed a deep-learning method that estimates the forceps position 0.1 seconds ahead in the 2D image, based on forceps segmented from the camera image. In this study, we extend the prediction horizon to 0.1-3 seconds ahead and investigate how the estimation accuracy changes with the number of past positions fed to the convolutional neural network. We confirm that, in a suturing task, the forceps position in the 2D image 0.8 seconds ahead can be estimated online from only the three most recent positions, with an error within 30 pixels, which is acceptable for laparoscope holder control.
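To make the prediction task concrete, the sketch below shows a simple constant-acceleration extrapolation baseline: given the three most recent 2D forceps positions (the same input count the abstract reports), it predicts the position 0.8 seconds ahead. This is a hypothetical kinematic baseline for illustration only; the paper itself uses a convolutional neural network, and the sampling interval of 0.1 s is assumed from the original one-step prediction.

```python
def extrapolate_position(p0, p1, p2, dt=0.1, horizon=0.8):
    """Predict a 2D position `horizon` seconds ahead from the three
    most recent samples p0, p1, p2 (oldest first), taken dt seconds
    apart, assuming constant acceleration.

    Hypothetical baseline for illustration -- NOT the paper's CNN.
    Positions are (x, y) pixel coordinates in the camera image.
    """
    # Finite-difference velocity at the latest sample
    vx = (p2[0] - p1[0]) / dt
    vy = (p2[1] - p1[1]) / dt
    # Finite-difference acceleration from the three samples
    ax = (p2[0] - 2 * p1[0] + p0[0]) / dt ** 2
    ay = (p2[1] - 2 * p1[1] + p0[1]) / dt ** 2
    t = horizon
    return (p2[0] + vx * t + 0.5 * ax * t * t,
            p2[1] + vy * t + 0.5 * ay * t * t)
```

For forceps moving at constant velocity, e.g. positions (0, 0), (1, 2), (2, 4) sampled 0.1 s apart, the baseline predicts (10.0, 20.0) after 0.8 s. A learned model such as the paper's CNN can outperform this kind of extrapolation when the motion is non-smooth, as in suturing.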