Prediction of reader estimates of mammographic density using convolutional neural networks
dc.contributor.author | Ionescu, GV | |
dc.contributor.author | Fergie, M | |
dc.contributor.author | Berks, M | |
dc.contributor.author | Harkness, EF | |
dc.contributor.author | Hulleman, J | |
dc.contributor.author | Brentnall, AR | |
dc.contributor.author | Cuzick, J | |
dc.contributor.author | Evans, D Gareth R | |
dc.contributor.author | Astley, SM | |
dc.date.accessioned | 2019-03-29T14:22:23Z | |
dc.date.available | 2019-03-29T14:22:23Z | |
dc.date.issued | 2019 | en |
dc.identifier.citation | Ionescu GV, Fergie M, Berks M, Harkness EF, Hulleman J, Brentnall AR, et al. Prediction of reader estimates of mammographic density using convolutional neural networks. J Med Imaging. 2019 Jul;6(3):031405. | en |
dc.identifier.pmid | 30746393 | en |
dc.identifier.doi | 10.1117/1.JMI.6.3.031405 | en |
dc.identifier.uri | http://hdl.handle.net/10541/621622 | |
dc.description.abstract | Mammographic density is an important risk factor for breast cancer. In recent research, percentage density assessed visually using visual analogue scales (VAS) showed stronger risk prediction than existing automated density measures, suggesting that readers may recognize relevant image features not yet captured by hand-crafted algorithms. With deep learning, it may be possible to encapsulate this knowledge in an automatic method. We have built convolutional neural networks (CNNs) to predict density VAS scores from full-field digital mammograms. The CNNs are trained using whole-image mammograms, each labeled with the average VAS score of two independent readers. Each CNN learns a mapping between mammographic appearance and VAS score so that, at test time, it can predict the VAS score for an unseen image. Networks were trained using 67,520 mammographic images from 16,968 women, and a dataset of 73,128 images was used for model selection. Two case-control sets, comprising contralateral mammograms of screen-detected cancers and prior images of women with cancers detected subsequently, matched to controls on age, menopausal status, parity, HRT and BMI, were used to evaluate performance on breast cancer prediction. In the case-control sets, odds ratios of cancer in the highest versus lowest quintile of percentage density were 2.49 (95% CI: 1.59 to 3.96) for screen-detected cancers and 4.16 (2.53 to 6.82) for priors, with matched concordance indices of 0.587 (0.542 to 0.627) and 0.616 (0.578 to 0.655), respectively. There was no significant difference between reader VAS and predicted VAS for the prior test set (likelihood ratio chi-square, p = 0.134). Our fully automated method shows promising results for cancer risk prediction and is comparable with human performance. | en |
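For readers unfamiliar with this setup, the sketch below illustrates the kind of CNN regression the abstract describes: a network mapping a single-channel mammogram to a continuous VAS score, trained against the mean of two readers' scores. This is a minimal illustration only; the architecture, the 224x224 input size, the 0-100 VAS scale, and the mean-squared-error objective are assumptions, not the authors' published configuration.

```python
# Minimal sketch (not the authors' code): CNN regression from a
# single-channel mammogram to a continuous VAS density score.
import torch
import torch.nn as nn

class VASRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                  # 224 -> 112
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                  # 112 -> 56
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global average pooling
        )
        self.head = nn.Linear(128, 1)         # single VAS score output

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(h).squeeze(1)

model = VASRegressor()
criterion = nn.MSELoss()                      # regress to the mean reader VAS
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a dummy batch: each image is labeled with the
# average VAS score of two readers, as described in the abstract.
images = torch.randn(8, 1, 224, 224)          # placeholder mammograms
vas = torch.rand(8) * 100                     # assumed 0-100 VAS scale
loss = criterion(model(images), vas)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```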
dc.language.iso | en | en |
dc.relation.url | https://dx.doi.org/10.1117/1.JMI.6.3.031405 | en |
dc.title | Prediction of reader estimates of mammographic density using convolutional neural networks | en |
dc.type | Article | en |
dc.contributor.department | University of Manchester, School of Computer Science, Manchester | en |
dc.identifier.journal | Journal of Medical Imaging | en |
refterms.dateFOA | 2020-04-27T11:46:50Z |