Modular, Label-Efficient Dataset Generation for Instrument Detection for Robotic Scrub Nurses
- Authored by
- Jorge Adrián Badilla Solórzano, Nils Claudius Gellrich, Thomas Seel, Sontje Ihler
- Abstract
Surgical instrument detection is a fundamental task of a
robotic scrub nurse. For this, image-based deep learning techniques are
effective but usually demand large amounts of annotated data, whose creation
is expensive and time-consuming. In this work, we propose a strategy
based on the copy-paste technique for the generation of reliable synthetic
image training data with a minimal amount of annotation effort.
Our approach enables the efficient in situ creation of datasets for specific
surgeries and contexts. We study how the amount of manually annotated
data and the training set size affect our model's performance, and we
compare different blending techniques for improved training data. We achieve
91.9 box mAP and 91.6 mask mAP, training solely on synthetic data, in a
real-world scenario. Our evaluation relies on an annotated image dataset
of the wisdom teeth extraction surgery set, created in an actual operating
room. This dataset, the corresponding code, and further data are made
publicly available (https://github.com/Jorebs/Modular-Label-Efficient-Dataset-Generation-for-Instrument-Detection-for-Robotic-Scrub-Nurses).
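As a rough illustration of the copy-paste strategy described in the abstract, the sketch below pastes a masked instrument cutout onto a background image and softens the mask edges with a Gaussian blur as a simple blending step. It is a minimal reconstruction using NumPy and OpenCV; the function name, its parameters, and the specific blending choice are assumptions, not the authors' released implementation (see the GitHub repository linked above).

```python
# Illustrative copy-paste compositing with simple Gaussian-blur blending.
# All names here are assumptions for illustration; the authors' actual
# pipeline is published in the GitHub repository linked above.
import numpy as np
import cv2


def paste_instrument(background, instrument, mask, blur_ksize=7, rng=None):
    """Paste a masked instrument cutout onto a background image.

    background: HxWx3 uint8 scene image (e.g. an empty instrument tray).
    instrument: hxwx3 uint8 cutout of a single instrument.
    mask:       hxw   uint8 binary mask (255 where the instrument is).
    Returns the composited image and the pasted bounding box (x, y, w, h).
    """
    rng = rng or np.random.default_rng()
    H, W = background.shape[:2]
    h, w = instrument.shape[:2]

    # Random placement that keeps the cutout fully inside the background.
    x = int(rng.integers(0, W - w))
    y = int(rng.integers(0, H - h))

    # Soften the mask edges so the paste blends into the background instead
    # of leaving hard cut-out borders (one possible blending technique).
    alpha = cv2.GaussianBlur(mask, (blur_ksize, blur_ksize), 0).astype(np.float32) / 255.0
    alpha = alpha[..., None]  # hxwx1, broadcast over the RGB channels

    out = background.copy()
    roi = out[y:y + h, x:x + w].astype(np.float32)
    out[y:y + h, x:x + w] = (alpha * instrument + (1.0 - alpha) * roi).astype(np.uint8)
    return out, (x, y, w, h)
```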
- Organisational unit(s)
- Institut für Mechatronische Systeme
- External organisation(s)
- Medizinische Hochschule Hannover (MHH)
- Type
- Article in conference proceedings
- Pages
- 95-105
- Number of pages
- 11
- Publication date
- 27.04.2024
- Publication status
- Published
- Peer-reviewed
- Yes
- ASJC Scopus subject areas
- Theoretical Computer Science, General Computer Science
- Electronic version(s)
- https://doi.org/10.1007/978-3-031-58171-7_10 (Access: Closed)