Diabetic Retinopathy Winner's Interview: 1st place, Ben Graham

Ben Graham finished at the top of the leaderboard in the high-profile Diabetic Retinopathy competition. In this blog, he shares a high-level overview of his approach along with key takeaways. Ben also finished 3rd in the National Data Science Bowl, a competition that helped develop many of the approaches he used in this challenge.

Ben's Kaggle profile

The Basics

What made you decide to enter this competition?

I wanted to experiment with training CNNs on larger images to see what kinds of architectures would work well. Medical images can in some ways be more challenging to classify than regular photos, as the important features can be very small.

Let's Get Technical

What preprocessing and supervised learning methods did you use?

For preprocessing, I first rescaled the images so that the eye had a given radius. I then subtracted the local average color to reduce differences in lighting.
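As a rough illustration of those two steps, here is a minimal OpenCV sketch. The radius of 270 pixels is taken from the remark about image scale later in the interview; the blur width, the weighting constants, and the file names are assumptions for illustration, not the exact values from the winning solution.

import cv2

def scale_to_radius(img, radius=270):
    # Estimate the eye's radius from the middle row of the image: pixels
    # noticeably brighter than the background belong to the retina.
    mid_row = img[img.shape[0] // 2, :, :].sum(axis=1)
    r = (mid_row > mid_row.mean() / 10).sum() / 2
    s = radius / r
    return cv2.resize(img, (0, 0), fx=s, fy=s)

def subtract_local_average(img, radius=270):
    # Subtract a heavily blurred copy of the image (the local average color)
    # to remove slowly varying lighting differences between photographs.
    blurred = cv2.GaussianBlur(img, (0, 0), radius / 30)
    return cv2.addWeighted(img, 4, blurred, -4, 128)

img = cv2.imread("fundus.jpeg")                  # hypothetical input file
img = scale_to_radius(img, radius=270)
img = subtract_local_average(img, radius=270)
cv2.imwrite("fundus_preprocessed.jpeg", img)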

For supervised learning, I experimented with convolutional neural network architectures. To map the network's predictions to the integer labels required by the competition, I used a random forest, which also let me combine information from a patient's two eyes when making each prediction.
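The interview does not give the exact setup, but a rough sketch of that final step, using scikit-learn's RandomForestClassifier, could look like the following. The feature shapes, the 0-4 grade labels, and the way each eye's features are paired with the other eye's are assumptions for illustration, not the author's code.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder CNN outputs: one feature vector per eye (shapes are assumed).
n_patients, n_features = 1000, 5
left_feats = np.random.rand(n_patients, n_features)
right_feats = np.random.rand(n_patients, n_features)
left_labels = np.random.randint(0, 5, n_patients)    # assumed integer grades 0-4
right_labels = np.random.randint(0, 5, n_patients)

# Score each eye from its own network outputs *and* the other eye's outputs,
# since disease severity tends to be correlated between a patient's two eyes.
X_left = np.hstack([left_feats, right_feats])
X_right = np.hstack([right_feats, left_feats])
X = np.vstack([X_left, X_right])
y = np.concatenate([left_labels, right_labels])

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X, y)
left_eye_grades = rf.predict(X_left)                 # integer labels for the left eyes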

Were you surprised by any of your findings?

I was surprised by a couple of things. First, that increasing the scale of the images beyond radius=270 pixels did not seem to help. I was expecting the existence of very small features, only visible at higher resolutions, to tip the balance in favor of larger images. Perhaps the increase in processing times for larger images was too great.

I was also surprised by the fact that ensembling (taking multiple views of each image, and combining the results of different networks) did very little to improve accuracy. This is rather different to the case of normal photographs, where ensembling can make a huge difference.
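For context, "ensembling" here means something along the lines of the sketch below, where class probabilities are averaged over several views of an image and over several trained networks. The model objects, their predict() method, and the particular views are hypothetical placeholders rather than the author's setup.

import numpy as np

def predict_with_views(model, img):
    # Average one model's class probabilities over a few views of the image
    # (here the original plus horizontal and vertical flips, as an example).
    views = [img, np.fliplr(img), np.flipud(img)]
    return np.mean([model.predict(v) for v in views], axis=0)

def ensemble_predict(models, img):
    # Average the multi-view predictions of several independently trained networks.
    return np.mean([predict_with_views(m, img) for m in models], axis=0)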

Which tools did you use?

Python and OpenCV for preprocessing. SparseConvNet for processing. I was curious to see if I could sparsify the images during preprocessing; however, due to time constraints I didn't get that working. SparseConvNet implements fractional max-pooling, which allowed me to experiment with different types of spatial data aggregation.
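SparseConvNet is the library he actually used, but the idea of fractional max-pooling can be illustrated with PyTorch's torch.nn.FractionalMaxPool2d. The layer and tensor sizes below are arbitrary; this is only a sketch of the pooling operation, not his network.

import torch
import torch.nn as nn

# Fractional max-pooling shrinks a feature map by a non-integer factor
# (here roughly 1/sqrt(2)), so a deep network can pool many times without
# the spatial resolution collapsing too quickly.
pool = nn.FractionalMaxPool2d(kernel_size=2, output_ratio=1 / 2 ** 0.5)

x = torch.randn(1, 32, 64, 64)   # a batch of one 64x64 feature map with 32 channels
y = pool(x)
print(y.shape)                   # torch.Size([1, 32, 45, 45]) -- about 64/sqrt(2)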

Bio

Ben Graham is an Assistant Professor at the University of Warwick, UK. His research interests include probabilistic spatial models, such as percolation, and machine learning.
