Automated labeling using tracked ultrasound imaging: Application in tracking vertebrae during spine surgery

  • 1Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States.
  • 2Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States.
  • 3Globus Medical, Audubon, PA, United States.
  • 4Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins Medicine, Baltimore, MD, United States.
  • 5Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States. Electronic address: ali.uneri@jhu.edu.


Abstract

PURPOSE

Recent advancements in machine learning (ML) allow for rapid analysis of complex image data, which supports the use of ultrasound (US)-based solutions in interventional procedures. These solutions often require large, labeled datasets that can be time-consuming to curate and subject to inter- and intra-labeler variability. This work presents a practical method for automated labeling of US images by transferring labels from 3D diagnostic images (e.g., CT or MR) using tracked US imaging to support supervised training. The approach was applied to segmenting spinal vertebrae, and the quality of the generated labels was evaluated by registering individual vertebrae from US to CT images to account for potential spinal deformation during surgery.

METHODS

The proposed approach uses tracked US imaging to map target structures from CT volumes onto individual US frames. A dataset of spine images was created by scanning cadaveric torso specimens. Automated data cleaning methods were used to discard invalid frames, and data augmentations were applied to account for variability in image appearance. A TernausNet model (a U-Net variant with a VGG11 encoder) was trained for segmenting vertebrae using three labeling strategies: full vertebra (FV), posterior surface (PS), and weighted posterior surface (PSw). The labels were evaluated through vertebrae segmentation and registration of the resulting segmentations to corresponding CT structures, considering the impact of labeling strategy, calibration errors, and data cleaning.
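The core label-transfer step chains the probe calibration, tracker pose, and CT registration transforms to sample the CT label volume at each US pixel. The sketch below illustrates this idea under simplifying assumptions (4×4 homogeneous transforms, 1 mm isotropic CT voxels, nearest-neighbor sampling); the function name and signature are illustrative, not the authors' implementation.

```python
import numpy as np

def transfer_labels(ct_labels, T_ct_world, T_world_probe, T_probe_image,
                    frame_shape, pixel_spacing):
    """Sample a 3D CT label volume at the positions of one tracked US frame.

    T_* are 4x4 homogeneous transforms: image->probe (US calibration),
    probe->world (tracker pose), world->CT (registration). Returns a 2D
    label mask aligned with the US frame (nearest-neighbor sampling,
    assuming 1 mm isotropic CT voxels for simplicity).
    """
    h, w = frame_shape
    # Pixel grid of the US frame in image coordinates (mm), on the z=0 plane.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.stack([u * pixel_spacing[0], v * pixel_spacing[1],
                    np.zeros_like(u), np.ones_like(u)],
                   axis=-1).reshape(-1, 4).T
    # Chain calibration, tracking, and registration transforms.
    pts_ct = T_ct_world @ T_world_probe @ T_probe_image @ pts
    ijk = np.rint(pts_ct[:3]).astype(int)
    # Keep only points that fall inside the CT volume.
    inside = np.all((ijk >= 0) & (ijk < np.array(ct_labels.shape)[:, None]),
                    axis=0)
    mask = np.zeros(h * w, dtype=ct_labels.dtype)
    mask[inside] = ct_labels[ijk[0, inside], ijk[1, inside], ijk[2, inside]]
    return mask.reshape(h, w)
```

In practice, the world-to-CT registration and probe calibration would come from the surgical navigation system, and errors in either propagate directly into the transferred labels, which is why the abstract highlights probe calibration accuracy.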

RESULTS

The proposed labeling strategies improved segmentation accuracy over the direct mapping of CT labels (viz. FV), yielding a median root-mean-square distance (RMSD) of 5.18 [4.24, 6.66] mm for PS and 3.86 [2.87, 5.60] mm for PSw labeling. The PSw approach was particularly effective in reducing hallucination artifacts in the acoustic shadow regions below the vertebral cortex. Using the resulting segmentations, registrations were solved with a target registration error (TRE) of 1.56 [1.30, 1.62] mm for PS and 1.52 [1.32, 2.38] mm for PSw labeling. Automated data cleaning and augmentation were found to significantly enhance the accuracy of bone feature segmentation and vertebra registration.
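For context, the two reported metrics can be computed from point sets as sketched below. This is a generic formulation (RMSD as the root-mean-square of nearest-neighbor distances from predicted to reference surface points; TRE as the distance between known target positions and targets mapped through the estimated rigid transform), not the authors' evaluation code.

```python
import numpy as np

def rmsd(pred_pts, ref_pts):
    """Root-mean-square of distances from each predicted surface point
    to its nearest reference (e.g., CT) surface point. Inputs: (N,3), (M,3)."""
    d = np.linalg.norm(pred_pts[:, None, :] - ref_pts[None, :, :], axis=-1)
    return float(np.sqrt(np.mean(d.min(axis=1) ** 2)))

def tre(T_est, targets, targets_true):
    """Target registration error: per-target distance between true positions
    and positions mapped through the estimated 4x4 rigid transform."""
    h = np.c_[targets, np.ones(len(targets))]       # homogeneous coordinates
    mapped = (T_est @ h.T).T[:, :3]
    return np.linalg.norm(mapped - targets_true, axis=1)
```

The median and bracketed interquartile values reported above would then be summary statistics of these per-frame (RMSD) and per-vertebra (TRE) measures.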

CONCLUSIONS

The study presents an automated labeling method for US imaging that supports the training of ML models by mapping 3D structures onto 2D US frames. The results highlight the importance of proper probe calibration, data cleaning, and specific labeling strategies in mitigating segmentation and registration errors. The work demonstrates the potential of real-time US imaging as a tool for precise anatomical tracking in surgery.
