Deep learning enables accurate soft tissue deformation estimation in vivo

Reece D. Huff1, Frederick Houghton1, Conner C. Earl2, Elnaz Ghajar-Rahimi2, Ishan Dogra1, Denny Yu2, Craig J. Goergen2, Carisa Harris-Adamson1, & Grace D. O’Connell1

1UC Berkeley & 2Purdue University



Introduction

StrainNet is a novel deep-learning approach for measuring strain from medical images. Understanding how materials and structures deform is essential for improving and optimizing designs in engineering; likewise, in biology, accurately measuring tissue deformation in the human body is important for assessing tissue health and disease. However, accurately measuring tissue deformation in vivo is difficult: traditional image-based strain analysis techniques, used in combination with medical imaging systems such as MRI and ultrasound, are often limited by noise, out-of-plane motion, and image resolution. StrainNet is designed to overcome these limitations and provide a more complete understanding of the mechanics of tendons under various loads.


Background

• Digital Image Correlation (DIC) and Direct Deformation Estimation (DDE) are traditional image-based strain analysis techniques for measuring deformation.
• However, these techniques are often limited by noise, out-of-plane motion, and image resolution, particularly when applied to medical images in challenging, in vivo settings.
• StrainNet is a novel deep-learning approach that overcomes these limitations by training on a dataset built to reflect real-world clinical observations and image artifacts.

Schematic of how DIC, DDE, and StrainNet calculate strain from an image pair. a. The image on the left represents a reference image (i.e., I1), while the image on the right represents a deformed image (i.e., I2) with vertical tension in the top left (λyy), pure shear in the top right and lower left (γ), and a combination of shear and horizontal extension in the lower right corner (λxx and γ). b. DIC solves for the displacements of four pixels using square subset regions (blue boxes) and uses numerical differentiation to estimate strain (dark purple dashed box). δαβ represents the errors from numerical differentiation. c. DDE solves for the deformation gradient of each subset directly (orange dashed box). d. StrainNet estimates full-field strain given a pair of input images (I1 and I2).
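
To make the DIC step concrete, here is a minimal NumPy sketch of strain estimation by numerical differentiation of displacement fields; the function and variable names are our own illustration, not code from the paper.

import numpy as np

def strain_from_displacements(u, v, spacing=1.0):
    """Estimate small-strain fields from displacement fields by
    numerical differentiation (the step sketched in panel b)."""
    # np.gradient returns derivatives along rows (y) first, then columns (x).
    du_dy, du_dx = np.gradient(u, spacing)
    dv_dy, dv_dx = np.gradient(v, spacing)

    e_xx = du_dx                  # longitudinal strain
    e_yy = dv_dy                  # transverse strain
    e_xy = 0.5 * (du_dy + dv_dx)  # shear strain
    return e_xx, e_yy, e_xy

# Toy example: a uniform 2% stretch in x on a 64x64 pixel grid.
x = np.arange(64, dtype=float)
u = np.tile(0.02 * x, (64, 1))  # u = 0.02 * x on every row
v = np.zeros((64, 64))
e_xx, e_yy, e_xy = strain_from_displacements(u, v)
print(e_xx.mean())  # ~0.02

Because differentiation amplifies noise in the measured displacements, this step is the main source of the δαβ errors noted above, which motivates DDE's direct deformation-gradient fit and StrainNet's end-to-end estimation.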

Method

StrainNet uses a two-stage architecture: a DeformationClassifier followed by three separate networks (TensionNet, CompressionNet, and RigidNet) that predict the strain. The DeformationClassifier determines whether the image pair is undergoing tension, compression, or rigid body motion, and the pair is then passed to the corresponding network for strain prediction. The training set for StrainNet was developed to emulate real-world observations and challenges, and the model was trained and tested on both synthetic images and real ultrasound images of flexor tendons undergoing contraction in vivo.
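
As a rough sketch of this routing logic (in PyTorch; the function and variable names are illustrative, not the released implementation):

import torch

def predict_strain(classifier, tension_net, compression_net, rigid_net, img_pair):
    """Two-stage inference: classify the deformation type, then route the
    image pair to the matching strain-regression network.

    img_pair: tensor of shape (1, 2, H, W), the reference and deformed
    images stacked along the channel dimension.
    """
    with torch.no_grad():
        # Stage 1: DeformationClassifier scores tension / compression / rigid.
        deformation_type = classifier(img_pair).argmax(dim=1).item()
        # Stage 2: a dedicated network per deformation class predicts strain.
        strain_nets = {0: tension_net, 1: compression_net, 2: rigid_net}
        return strain_nets[deformation_type](img_pair)  # (1, 3, H, W)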

Architecture of StrainNet. a. StrainNet comprises two stages: the first stage is the DeformationClassifier, and the second stage includes TensionNet, CompressionNet, and RigidNet. b. The architecture of DeformationClassifier is composed of convolutional layers, max pooling, and ReLU activation functions. The resulting features are flattened and passed through a fully-connected neural network to predict the probability of the image pair undergoing tension, compression, or rigid body motion. c. The architecture of TensionNet, CompressionNet, and RigidNet includes convolutional layers, max pooling, upsampling, skip layers, and ReLU activation functions, and predicts the full strain field (εxx, εxy, εyy) between the two input images. d. Key to the blocks in b and c. All blocks are connected by ReLU activation functions.
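
For intuition, an encoder-decoder in the spirit of panel c might be assembled as below; the depth and channel widths are placeholders, not the published hyperparameters.

import torch
import torch.nn as nn

class StrainRegressor(nn.Module):
    """Toy encoder-decoder with one skip connection, combining convolution,
    max pooling, upsampling, and ReLU as in panel c."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # Skip layer: concatenate upsampled deep features with encoder features.
        self.dec = nn.Sequential(nn.Conv2d(64 + 32, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, 3, 1)  # (εxx, εxy, εyy)

    def forward(self, x):                  # x: (N, 2, H, W) image pair
        f1 = self.enc1(x)                  # (N, 32, H, W)
        f2 = self.enc2(self.pool(f1))      # (N, 64, H/2, W/2)
        d = self.up(f2)                    # back to (N, 64, H, W)
        d = self.dec(torch.cat([d, f1], dim=1))
        return self.head(d)                # full-field strain, (N, 3, H, W)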

Results

The results of our study demonstrate the effectiveness and potential of using deep learning for image-based strain analysis in challenging, in vivo settings.

On both synthetic test cases with known deformations and real, experimentally collected ultrasound images of flexor tendons undergoing contraction in vivo, StrainNet outperforms traditional techniques, achieving median strain errors 48-84% lower than those of DIC and DDE.
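
For concreteness, the median strain error and the relative reduction reported above would be computed along these lines (toy numbers, not the paper's data):

import numpy as np

def median_strain_error(predicted, ground_truth):
    """Median absolute error between predicted and true strain fields."""
    return np.median(np.abs(predicted - ground_truth))

# Hypothetical per-method errors for one synthetic test case.
err_dic, err_strainnet = 0.010, 0.003
reduction = 100 * (1 - err_strainnet / err_dic)
print(f"median error {reduction:.0f}% lower than DIC")  # 70% in this example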

Performance of DIC, DDE, and StrainNet on sets of synthetically generated test cases where the largest applied strain, ε^long_max, is varied from 4% to 16%.

Additionally, StrainNet revealed strong correlations between tendon strains and applied forces in vivo, highlighting its potential as a valuable tool for monitoring rehabilitation or disease progression.

Median longitudinal strain predicted by StrainNet during tendon contraction across all of the trials (n = 13). Marker shapes correspond to 10%, 30%, and 50% maximum voluntary contraction (MVC).

In the following video, you'll see StrainNet's predicted strain distribution for three levels of muscle contraction: 10%, 30%, and 50% of maximum voluntary contraction (MVC).


Limitations and future work

While our study demonstrates the effectiveness and potential of StrainNet, there are limitations and areas for future work. For example, the approach may not be well-suited to certain tissue types or deformation modes, and there is still room for improvement in accuracy and robustness. Future work will focus on expanding the applicability of the approach and improving its generalizability.


Citation

@article{huff2023strainnet,
  title={Deep learning enables accurate soft tissue deformation estimation in vivo},
  author={Huff, Reece D and Houghton, Frederick and Earl, Conner C and Ghajar-Rahimi, Elnaz and Dogra, Ishan and Yu, Denny and Harris-Adamson, Carisa and Goergen, Craig J and O'Connell, Grace D},
  journal={bioRxiv},
  year={2023},
  publisher={Cold Spring Harbor Laboratory},
  doi={10.1101/2023.09.04.556266}
}



Acknowledgements

This study was supported by the National Institutes of Health (NIH R21 AR075127-02), the National Science Foundation (NSF GRFP), and the National Institute for Occupational Safety and Health (NIOSH) / Centers for Disease Control and Prevention (CDC) (Training Grant T42OH008429).