Ultrasound Guided Regional Anesthesia using Deep Learning
Author: Dylan Mac Yves
UGRA is a research project on nerve segmentation from ultrasound images using deep learning. The study uses a public dataset of ultrasound images of the brachial plexus region. The model is U-Net, and the research covers preprocessing, data augmentation, and experiments under different training conditions.
Abstract
In this study, we developed a deep learning model for nerve segmentation in the brachial plexus. We used a public dataset containing images from two ultrasound machines, annotated with artery, vein, muscle, and nerve labels. Experimental results showed that training on the combined dataset yielded a higher Dice score than training on each machine's images separately. Excluding the additional anatomical landmarks during training also increased the nerve segmentation Dice score, although the differences were not statistically significant.
Dataset
Brachial Plexus
Exploratory Data Analysis
Methodology
- Images from ultrasound machines 1 and 2 are trained separately, with 85:15 train/test splits repeated over three random seeds to reduce evaluation bias
- Splits are patient-based to prevent data leakage, and Group K-Fold (k=5) is used for cross-validation
- Images are resized to 512x512 pixels before training
- Data Augmentation:
  - Random translation (up to 12.5%), p=0.5
  - Random rotation (±10°), p=0.5
- Training Setup:
  - Loss: binary cross-entropy
  - Epochs: 100 with early stopping (patience=20)
  - Scheduler: ReduceLROnPlateau
  - Device: NVIDIA Tesla V100
- Evaluation Metric: $Dice = \frac{2|A \cap B|}{|A| + |B|}$
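The patient-based split above can be sketched as follows. This is an illustrative implementation, not the project's actual code; the function name `patient_split` and the 85:15 default are chosen to mirror the methodology described.

```python
import random

def patient_split(patient_ids, test_frac=0.15, seed=0):
    """Split sample indices at the patient level so that no patient's
    images appear in both the train and test sets (prevents leakage)."""
    patients = sorted(set(patient_ids))
    rng = random.Random(seed)          # one seed per repetition (three seeds total)
    rng.shuffle(patients)
    n_test = max(1, round(test_frac * len(patients)))
    test_patients = set(patients[:n_test])
    train_idx = [i for i, p in enumerate(patient_ids) if p not in test_patients]
    test_idx = [i for i, p in enumerate(patient_ids) if p in test_patients]
    return train_idx, test_idx
```

For the k=5 cross-validation, scikit-learn's `GroupKFold` with patient IDs as the `groups` argument achieves the same patient-level separation.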
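The augmentation step can be sketched in plain NumPy. This is a minimal illustration of the listed transforms (translation up to 12.5%, rotation up to ±10°, each with p=0.5), not the project's pipeline; in practice a library such as Albumentations would typically be used. The key detail shown is sampling the parameters once so the image and its mask receive the identical transform.

```python
import numpy as np

def translate(img, dy, dx):
    """Shift img by (dy, dx) pixels, zero-filling the uncovered border."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    out[max(0, dy):min(h, h + dy), max(0, dx):min(w, w + dx)] = \
        img[max(0, -dy):min(h, h - dy), max(0, -dx):min(w, w - dx)]
    return out

def rotate(img, theta):
    """Rotate img by theta radians about its center (nearest-neighbor)."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: for each output pixel, look up the source pixel.
    sy = cy + (ys - cy) * np.cos(theta) - (xs - cx) * np.sin(theta)
    sx = cx + (ys - cy) * np.sin(theta) + (xs - cx) * np.cos(theta)
    sy = np.clip(np.rint(sy), 0, h - 1).astype(int)
    sx = np.clip(np.rint(sx), 0, w - 1).astype(int)
    return img[sy, sx]

def augment(image, mask, rng, max_frac=0.125, max_deg=10.0, p=0.5):
    """Sample transform parameters once and apply them to both image and mask."""
    if rng.random() < p:
        h, w = image.shape[:2]
        dy = int(rng.uniform(-max_frac, max_frac) * h)
        dx = int(rng.uniform(-max_frac, max_frac) * w)
        image, mask = translate(image, dy, dx), translate(mask, dy, dx)
    if rng.random() < p:
        theta = np.deg2rad(rng.uniform(-max_deg, max_deg))
        image, mask = rotate(image, theta), rotate(mask, theta)
    return image, mask
```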
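The early-stopping rule (patience=20 over 100 epochs) reduces to a small amount of bookkeeping. A minimal sketch, assuming validation Dice is the monitored quantity (the class name and interface here are illustrative, not from the project):

```python
class EarlyStopping:
    """Stop when the monitored metric fails to improve for `patience` epochs."""

    def __init__(self, patience=20, mode="max"):
        self.patience, self.mode = patience, mode
        self.best, self.bad_epochs = None, 0

    def step(self, value):
        """Record one epoch's metric; return True when training should stop."""
        improved = (self.best is None or
                    (value > self.best if self.mode == "max" else value < self.best))
        if improved:
            self.best, self.bad_epochs = value, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

The same monitored value would also drive the ReduceLROnPlateau scheduler, which lowers the learning rate after a (typically shorter) patience window instead of stopping.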
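The Dice formula above translates directly to a few lines of NumPy. A sketch for binary masks (the small `eps` guards the empty-mask case; it is an implementation convenience, not part of the definition):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)
```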
Experiment Results
- Training on the combined dataset yields slightly higher Dice scores than training on each machine's data separately, although the difference is not statistically significant.
- Models trained on a single landmark generally achieve higher nerve Dice scores than those trained on multiple landmarks.