Partner: Christopher Comstock

Memorial Sloan-Kettering Cancer Center (US)

Recent publications
1. Byra M., Jarosik P., Szubert A., Galperine M., Ojeda-Fournier H., Olson L., Comstock Ch., Andre M., Breast mass segmentation in ultrasound with selective kernel U-Net convolutional neural network, Biomedical Signal Processing and Control, ISSN: 1746-8094, DOI: 10.1016/j.bspc.2020.102027, Vol.61, pp.102027-1-10, 2020

Abstract:

In this work, we propose a deep learning method for breast mass segmentation in ultrasound (US). Variations in breast mass size and image characteristics make the automatic segmentation difficult. To address this issue, we developed a selective kernel (SK) U-Net convolutional neural network. The aim of the SKs was to adjust the network's receptive fields via an attention mechanism, and fuse feature maps extracted with dilated and conventional convolutions. The proposed method was developed and evaluated using US images collected from 882 breast masses. Moreover, we used three datasets of US images collected at different medical centers for testing (893 US images). On our test set of 150 US images, the SK-U-Net achieved a mean Dice score of 0.826 and outperformed the regular U-Net, which achieved a Dice score of 0.778. When evaluated on three separate datasets, the proposed method yielded mean Dice scores ranging from 0.646 to 0.780. Additional fine-tuning of our better-performing model with data collected at different centers improved mean Dice scores by ~6%. SK-U-Net utilized both dilated and regular convolutions to process US images. We found a strong correlation (Spearman's rank coefficient of 0.7) between the utilization of dilated convolutions and breast mass size in the case of the network's expansion path. Our study shows the usefulness of deep learning methods for breast mass segmentation. SK-U-Net implementation and pre-trained weights can be found at github.com/mbyr/bus_seg.
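
The selective kernel mechanism described above adjusts the receptive field by weighting feature maps from a conventional and a dilated convolution with a learned attention. Below is a minimal PyTorch-style sketch of such an SK block; the layer sizes, reduction factor, and all names are illustrative assumptions, not the authors' code (the released implementation is at github.com/mbyr/bus_seg).

# Illustrative sketch of a selective-kernel (SK) block in PyTorch.
# Layer sizes and names are assumptions for illustration; see the
# authors' released code at github.com/mbyr/bus_seg for the actual model.
import torch
import torch.nn as nn

class SKBlock(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        # Branch 1: conventional 3x3 convolution
        self.conv_regular = nn.Conv2d(channels, channels, 3, padding=1)
        # Branch 2: dilated 3x3 convolution (larger receptive field)
        self.conv_dilated = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        # Attention: squeeze the fused features and predict per-branch weights
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2 * channels, 1),
        )

    def forward(self, x):
        u_reg = self.conv_regular(x)
        u_dil = self.conv_dilated(x)
        # Fuse branches, then compute softmax attention over the two branches
        weights = self.fc(u_reg + u_dil)                       # (N, 2C, 1, 1)
        weights = weights.view(x.size(0), 2, -1, 1, 1).softmax(dim=1)
        # Weighted sum of the regular and dilated feature maps
        return weights[:, 0] * u_reg + weights[:, 1] * u_dil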

Keywords:

attention mechanism, breast mass segmentation, convolutional neural networks, deep learning, receptive field, ultrasound imaging

Author affiliations:

Byra M.-IPPT PAN
Jarosik P.-other affiliation
Szubert A.-other affiliation
Galperine M.-other affiliation
Ojeda-Fournier H.-University of California (US)
Olson L.-University of California (US)
Comstock Ch.-Memorial Sloan-Kettering Cancer Center (US)
Andre M.-University of California (US)
140p.
2. Byra M., Galperin M., Ojeda-Fournier H., Olson L., O Boyle M., Comstock C., Andre M., Breast mass classification in sonography with transfer learning using a deep convolutional neural network and color conversion, Medical Physics, ISSN: 0094-2405, DOI: 10.1002/mp.13361, Vol.46, No.2, pp.746-755, 2019

Abstract:

Purpose: We propose a deep learning-based approach to breast mass classification in sonography and compare it with the assessment of four experienced radiologists employing the breast imaging reporting and data system (BI-RADS) 4th edition lexicon and assessment protocol. Methods: Several transfer learning techniques are employed to develop classifiers based on a set of 882 ultrasound images of breast masses. Additionally, we introduce the concept of a matching layer. The aim of this layer is to rescale pixel intensities of the grayscale ultrasound images and convert those images to red, green, blue (RGB) to more efficiently utilize the discriminative power of the convolutional neural network pretrained on the ImageNet dataset. We present how this conversion can be determined during fine-tuning using back-propagation. Next, we compare the performance of the transfer learning techniques with and without the color conversion. To show the usefulness of our approach, we additionally evaluate it using two publicly available datasets. Results: Color conversion increased the areas under the receiver operating curve for each transfer learning method. For the better-performing approach utilizing the fine-tuning and the matching layer, the area under the curve was equal to 0.936 on a test set of 150 cases. The areas under the curves for the radiologists reading the same set of cases ranged from 0.806 to 0.882. In the case of the two separate datasets, utilizing the proposed approach we achieved areas under the curve of around 0.890. Conclusions: The concept of the matching layer is generalizable and can be used to improve the overall performance of the transfer learning techniques using deep convolutional neural networks. When fully developed as a clinical tool, the methods proposed in this paper have the potential to help radiologists with breast mass classification in ultrasound.
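
The matching layer described above learns how to map a single-channel grayscale ultrasound image to the three-channel RGB input expected by an ImageNet-pretrained network, with the conversion trained jointly with the rest of the model via back-propagation. The sketch below illustrates one way such a layer could be wired up in PyTorch; the 1x1 convolution, the ResNet-18 backbone, and all names are assumptions for illustration, not the implementation used in the paper.

# Illustrative sketch of a trainable "matching layer" in front of an
# ImageNet-pretrained CNN. The 1x1 convolution and backbone choice are
# assumptions, not the authors' exact implementation.
import torch.nn as nn
from torchvision import models

class MatchingLayerClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Learnable grayscale -> RGB conversion (1x1 conv, 3 output channels),
        # updated by back-propagation together with the backbone
        self.matching = nn.Conv2d(1, 3, kernel_size=1, bias=True)
        # ImageNet-pretrained backbone, re-headed for binary classification
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):               # x: (N, 1, H, W) grayscale US image
        x = self.matching(x)            # rescale/convert intensities to 3 channels
        return self.backbone(x)         # logit for benign vs. malignant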

Keywords:

BI-RADS, breast mass classification, convolutional neural networks, transfer learning, ultrasound imaging

Author affiliations:

Byra M.-IPPT PAN
Galperin M.-Almen Laboratories, Inc. (US)
Ojeda-Fournier H.-University of California (US)
Olson L.-University of California (US)
O Boyle M.-University of California (US)
Comstock C.-Memorial Sloan-Kettering Cancer Center (US)
Andre M.-University of California (US)
100p.

Conference abstracts
1. Byra M., Galperin M., Ojeda-Fournier H., Olson L., O Boyle M., Comstock C., Andre M., Comparison of deep learning and classical breast mass classification methods in ultrasound, ASA, 178th Meeting of the Acoustical Society of America, 2019-12-02/12-06, San Diego (US), DOI: 10.1121/1.5136937, Vol.146, No.4, pp.2864-1, 2019

Abstract:

We developed breast mass classification methods based on deep convolutional neural networks (CNNs) and morphological features (MF), then compared them to the assessment of four experienced radiologists employing the BI-RADS protocol. The classification models were developed based on 882 clinical ultrasound B-mode images of masses with confirmed findings and regions of interest indicating mass areas. Various transfer learning techniques, including fine-tuning of a pre-trained CNN, were investigated to develop deep learning models. A matching layer technique was applied to convert gray-scale images to red, green, blue to efficiently utilize the discriminative power of the pre-trained model. For the classical approach, we calculated MF related to breast mass shape (e.g., height-width ratio, circularity) and then trained binary classifiers. We additionally evaluated both approaches using two publicly available US datasets. Several statistical measures (area under the receiver operating curve [AUC], sensitivity, and specificity) were used to assess the classification performance on a test set of 150 cases. The matching layer significantly increased the AUC from 0.895 to 0.936, while the radiologists' AUCs ranged from 0.806 to 0.882. This study shows that both deep learning and classical models achieve high performance. When developed as a clinical tool, the methods examined in this study have the potential to aid radiologists in accurate breast mass classification with ultrasound.
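
For the classical baseline mentioned above, morphological features are computed from a binary mass mask and passed to a binary classifier. The sketch below shows one plausible version of that pipeline using scikit-image and scikit-learn; the exact feature definitions (height-width ratio, circularity, eccentricity) and the logistic-regression classifier are illustrative assumptions, not necessarily the features or classifier used in the study.

# Illustrative sketch of the classical pipeline: morphological features
# computed from a binary mass mask, fed to a simple binary classifier.
# Feature definitions and classifier choice are assumptions for illustration.
import numpy as np
from skimage.measure import label, regionprops
from sklearn.linear_model import LogisticRegression

def morphological_features(mask):
    """Return a small shape-feature vector for the largest region in a binary mask."""
    props = max(regionprops(label(mask.astype(int))), key=lambda r: r.area)
    height = props.bbox[2] - props.bbox[0]
    width = props.bbox[3] - props.bbox[1]
    # Circularity: 1.0 for a perfect circle, smaller for irregular shapes
    circularity = 4.0 * np.pi * props.area / (props.perimeter ** 2 + 1e-8)
    return np.array([height / width, circularity, props.eccentricity])

# Example usage (train_masks and train_labels are hypothetical inputs):
# X = np.stack([morphological_features(m) for m in train_masks])
# clf = LogisticRegression().fit(X, train_labels)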

Author affiliations:

Byra M.-IPPT PAN
Galperin M.-Almen Laboratories, Inc. (US)
Ojeda-Fournier H.-University of California (US)
Olson L.-University of California (US)
O Boyle M.-University of California (US)
Comstock C.-Memorial Sloan-Kettering Cancer Center (US)
Andre M.-University of California (US)