Prediction of reader estimates of mammographic density using convolutional neural networks
Evans, D Gareth R
Affiliation: University of Manchester, School of Computer Science, Manchester
Abstract: Mammographic density is an important risk factor for breast cancer. In recent research, percentage density assessed visually using visual analogue scales (VAS) showed stronger risk prediction than existing automated density measures, suggesting readers may recognize relevant image features not yet captured by hand-crafted algorithms. With deep learning, it may be possible to encapsulate this knowledge in an automatic method. We have built convolutional neural networks (CNNs) to predict density VAS scores from full-field digital mammograms. The CNNs are trained on whole-image mammograms, each labeled with the average VAS score of two independent readers. Each CNN learns a mapping between mammographic appearance and VAS score so that, at test time, it can predict the VAS score for an unseen image. Networks were trained on 67,520 mammographic images from 16,968 women, and a dataset of 73,128 images was used for model selection. Two case-control sets (contralateral mammograms of screen-detected cancers, and prior images of women with cancers detected subsequently), matched to controls on age, menopausal status, parity, HRT and BMI, were used to evaluate performance on breast cancer prediction. In the case-control sets, odds ratios of cancer in the highest versus lowest quintile of percentage density were 2.49 (95% CI: 1.59 to 3.96) for screen-detected cancers and 4.16 (2.53 to 6.82) for priors, with matched concordance indices of 0.587 (0.542 to 0.627) and 0.616 (0.578 to 0.655), respectively. There was no significant difference between reader VAS and predicted VAS for the prior test set (likelihood-ratio chi-square, p = 0.134). Our fully automated method shows promising results for cancer risk prediction and is comparable with human performance.
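The evaluation statistics quoted in the abstract, the quintile odds ratio and the matched concordance index, can be illustrated with a short sketch. The counts and density values below are hypothetical examples chosen for illustration only; the study's underlying case-control data are not given here:

```python
def quintile_odds_ratio(cases_top, controls_top, cases_bottom, controls_bottom):
    """Odds ratio of cancer in the highest vs lowest density quintile.

    Computed from a 2x2 table of case/control counts:
    OR = (cases_top * controls_bottom) / (controls_top * cases_bottom)
    """
    return (cases_top * controls_bottom) / (controls_top * cases_bottom)


def matched_concordance(pairs):
    """Matched concordance index for case-control pairs.

    pairs: iterable of (case_density, control_density) for each matched pair.
    A pair counts 1 if the case's predicted density exceeds its matched
    control's, 0.5 for a tie, and 0 otherwise; the index is the average.
    """
    score = sum(1.0 if case > control else 0.5 if case == control else 0.0
                for case, control in pairs)
    return score / len(pairs)


# Hypothetical counts: 50 cases / 25 controls in the top quintile,
# 20 cases / 25 controls in the bottom quintile.
print(quintile_odds_ratio(50, 25, 20, 25))  # -> 2.5

# Hypothetical matched pairs of (case VAS, control VAS).
print(matched_concordance([(30, 20), (10, 10), (5, 15), (25, 5)]))  # -> 0.625
```

An index of 0.5 corresponds to chance-level discrimination, so the reported values of 0.587 and 0.616 indicate that predicted density separates cases from their matched controls better than chance.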
Citation: Ionescu GV, Fergie M, Berks M, Harkness EF, Hulleman J, Brentnall AR, et al. Prediction of reader estimates of mammographic density using convolutional neural networks. J Med Imaging. 2019 Jul;6(3):031405.
Journal: Journal of Medical Imaging
- A comparison of five methods of measuring mammographic density: a case-control study.
- Authors: Astley SM, Harkness EF, Sergeant JC, Warwick J, Stavrinos P, Warren R, Wilson M, Beetles U, Gadde S, Lim Y, Jain A, Bundred S, Barr N, Reece V, Brentnall AR, Cuzick J, Howell T, Evans DG
- Issue date: 2018 Feb 5
- Convolutional Neural Network Based Breast Cancer Risk Stratification Using a Mammographic Dataset.
- Authors: Ha R, Chang P, Karcich J, Mutasa S, Pascual Van Sant E, Liu MZ, Jambawalikar S
- Issue date: 2019 Apr
- Impact of type of full-field digital image on mammographic density assessment and breast cancer risk estimation: a case-control study.
- Authors: Busana MC, Eng A, Denholm R, Dowsett M, Vinnicombe S, Allen S, Dos-Santos-Silva I
- Issue date: 2016 Sep 26
- Assessment of a Four-View Mammographic Image Feature Based Fusion Model to Predict Near-Term Breast Cancer Risk.
- Authors: Tan M, Pu J, Cheng S, Liu H, Zheng B
- Issue date: 2015 Oct
- Deep learning networks find unique mammographic differences in previous negative mammograms between interval and screen-detected cancers: a case-case study.
- Authors: Hinton B, Ma L, Mahmoudzadeh AP, Malkov S, Fan B, Greenwood H, Joe B, Lee V, Kerlikowske K, Shepherd J
- Issue date: 2019 Jun 22