Partner: Moi Hoon Yap


Recent publications
1. Yap M., Bill C., Byra M., Ting-yu L., Huahu Y., Galdran A., Yung-Han C., Raphael B., Sven K., Friedrich C., Yu-wen L., Ching-hui Y., Kang L., Qicheng L., Ballester M., Carneiro G., Yi-Jen J., Juinn-Dar H., Pappachan J., Reeves N., Vishnu C., Darren D., Diabetic foot ulcers segmentation challenge report: Benchmark and analysis, Medical Image Analysis, ISSN: 1361-8415, DOI: 10.1016/j.media.2024.103153, Vol.94, No.103153, pp.1-14, 2024

Abstract:

Monitoring the healing progress of diabetic foot ulcers is a challenging process. Accurate segmentation of foot ulcers can help podiatrists quantitatively measure the size of wound regions and assist in predicting healing status. The main challenge in this field is the lack of publicly available manual delineations, as manual annotation is time consuming and laborious. Recently, methods based on deep learning have shown excellent results in automatic segmentation of medical images; however, they require large-scale datasets for training, and there is limited consensus on which methods perform best. The 2022 Diabetic Foot Ulcers segmentation challenge, held in conjunction with the 2022 International Conference on Medical Image Computing and Computer Assisted Intervention, sought to address these issues and stimulate progress in this research domain. A training set of 2000 images exhibiting diabetic foot ulcers was released with corresponding segmentation ground truth masks. Of the 72 approved data requests from 47 countries, 26 teams used this data to develop fully automated systems to predict the true segmentation masks on a test set of 2000 images, whose ground truth segmentation masks were kept private. Predictions from participating teams were scored and ranked according to the average Dice similarity coefficient between the ground truth and prediction masks. The winning team achieved a Dice of 0.7287 for diabetic foot ulcer segmentation. The challenge has now entered a live leaderboard stage, where it serves as a challenging benchmark for diabetic foot ulcer segmentation.
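
Teams in this challenge were ranked by the mean Dice similarity coefficient over the test set. A minimal sketch of how this metric is typically computed on binary masks is given below; the function name and the smoothing term are illustrative assumptions, not the challenge's actual evaluation code.

import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    pred, truth: arrays of the same shape containing 0/1 (or boolean) values.
    eps guards against division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Ranking would then use the mean Dice over all test images, e.g.:
# mean_dice = np.mean([dice_coefficient(p, t) for p, t in zip(predictions, ground_truths)])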

Keywords:

Deep learning, Diabetic foot ulcers, Segmentation, Convolutional neural networks

Author affiliations:

Yap M.-other affiliation
Bill C.-other affiliation
Byra M.-IPPT PAN
Ting-yu L.-other affiliation
Huahu Y.-other affiliation
Galdran A.-other affiliation
Yung-Han C.-other affiliation
Raphael B.-other affiliation
Sven K.-other affiliation
Friedrich C.-other affiliation
Yu-wen L.-other affiliation
Ching-hui Y.-other affiliation
Kang L.-other affiliation
Qicheng L.-other affiliation
Ballester M.-other affiliation
Carneiro G.-other affiliation
Yi-Jen J.-other affiliation
Juinn-Dar H.-other affiliation
Pappachan J.-other affiliation
Reeves N.-other affiliation
Vishnu C.-other affiliation
Darren D.-other affiliation
200p.
2. Thomas C., Byra M., Marti R., Yap Moi H., Zwiggelaar R., BUS-Set: A benchmark for quantitative evaluation of breast ultrasound segmentation networks with public datasets, Medical Physics, ISSN: 0094-2405, DOI: 10.1002/mp.16287, pp.1-21, 2023

Abstract:

Purpose: BUS-Set is a reproducible benchmark for breast ultrasound (BUS) lesion segmentation, comprising publicly available images, with the aim of improving future comparisons between machine learning models within the field of BUS. Method: Four publicly available datasets were compiled, creating an overall set of 1154 BUS images from five different scanner types. Full dataset details have been provided, including clinical labels and detailed annotations. Furthermore, nine state-of-the-art deep learning architectures were selected to form the initial benchmark segmentation result, tested using five-fold cross-validation and MANOVA/ANOVA with the Tukey statistical significance test at a threshold of 0.01. Additional evaluation of these architectures was conducted, exploring possible training bias and the effects of lesion size and type. Results: Of the nine state-of-the-art benchmarked architectures, Mask R-CNN obtained the highest overall results, with the following mean metric scores: Dice score of 0.851, intersection over union of 0.786 and pixel accuracy of 0.975. MANOVA/ANOVA and Tukey test results showed Mask R-CNN to be statistically significantly better than all other benchmarked models with a p-value > 0.01. Moreover, Mask R-CNN achieved the highest mean Dice score of 0.839 on an additional 16-image dataset that contained multiple lesions per image. Further analysis of regions of interest was conducted, assessing Hamming distance, depth-to-width ratio (DWR), circularity, and elongation, which showed that Mask R-CNN's segmentations maintained the most morphological features, with correlation coefficients of 0.888, 0.532, 0.876 for DWR, circularity, and elongation, respectively. Based on the correlation coefficients, statistical tests indicated that Mask R-CNN was significantly different only from Sk-U-Net. Conclusions: BUS-Set is a fully reproducible benchmark for BUS lesion segmentation obtained through the use of public datasets and GitHub. Of the state-of-the-art convolutional neural network (CNN)-based architectures, Mask R-CNN achieved the highest performance overall; further analysis indicated that a training bias may have occurred due to the lesion size variation in the dataset. All dataset and architecture details are available at GitHub: https://github.com/corcor27/BUS-Set, which allows for a fully reproducible benchmark.
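
The benchmark reports three overlap metrics per architecture: Dice score, intersection over union, and pixel accuracy. A minimal sketch of these metrics for a single binary mask pair is shown below; the function name is an illustrative assumption and is not taken from the BUS-Set repository.

import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> dict:
    """Dice, intersection over union (IoU) and pixel accuracy for one binary mask pair."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()      # overlapping (true positive) pixels
    union = np.logical_or(pred, truth).sum()    # pixels marked in either mask
    correct = (pred == truth).sum()             # pixels classified identically
    return {
        "dice": (2.0 * tp + eps) / (pred.sum() + truth.sum() + eps),
        "iou": (tp + eps) / (union + eps),
        "pixel_accuracy": correct / pred.size,
    }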

Keywords:

breast segmentation, public datasets

Author affiliations:

Thomas C.-other affiliation
Byra M.-IPPT PAN
Marti R.-other affiliation
Yap Moi H.-other affiliation
Zwiggelaar R.-other affiliation
100p.