A team of researchers led by the University of Texas at Austin has developed a new, fully automatic method that combines biophysical models of tumor growth with machine learning algorithms to identify brain tumors.
Researchers are using supercomputers at UT’s Texas Advanced Computing Center as part of the process to analyze magnetic resonance imaging data of patients with gliomas, the most common and aggressive type of primary brain tumor.
“Our goal is to take an image and delineate it automatically and identify different types of abnormal tissue: edema, enhancing tumor (areas with very aggressive tumors), and necrotic tissue,” said George Biros, professor of mechanical engineering at UT Austin and leader of the ICES Parallel Algorithms for Data Analysis and Simulation Group.
“It’s similar to taking a picture of one’s family and doing facial recognition to identify each member, but here you do tissue recognition, and all this has to be done automatically,” added Biros, who has worked for nearly a decade to develop the computing algorithms that can characterize gliomas.
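The article does not describe the internals of the team's algorithm, but the "tissue recognition" idea can be loosely illustrated with a toy per-voxel classifier. The sketch below assigns each voxel the tissue class whose typical intensity is nearest; the class names come from the article, while the intensity values and the nearest-centroid rule itself are invented for illustration only.

```python
# Toy mean intensities for one MRI channel, per tissue class (invented values).
CLASS_MEANS = {
    "edema": 0.3,
    "enhancing_tumor": 0.8,
    "necrotic": 0.1,
}

def classify_voxel(intensity):
    """Label a voxel with the class whose mean intensity is closest."""
    return min(CLASS_MEANS, key=lambda name: abs(CLASS_MEANS[name] - intensity))

def classify_voxels(intensities):
    """Label every voxel in a flat list of intensities."""
    return [classify_voxel(x) for x in intensities]

labels = classify_voxels([0.29, 0.82, 0.05])
# -> ['edema', 'enhancing_tumor', 'necrotic']
```

Real methods operate on multi-channel 3D MRI volumes and learn far richer features, but the output has the same shape: one tissue label per voxel.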
His work was validated at the 2017 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), held last month in Quebec City, Canada.
At the conference, Biros and collaborators from the University of Houston, University of Pennsylvania, and University of Stuttgart tested their new method in this year’s Multimodal Brain Tumor Segmentation Challenge, an annual competition where research groups from around the world present methods and results for computer-aided identification and classification of brain tumors using pre-operative MR scans.
According to Biros, in the final part of the challenge, participants were given data from 140 patients and over the course of two days had to identify the location of tumors and segment them into different tissue types. Overall, the team scored in the top 25 percent in the challenge and was near the top for whole tumor segmentation.
In particular, the team was able to run their analysis on 140 brains in less than four hours and correctly characterized the testing data with nearly 90 percent accuracy—which is comparable to the accuracy rate of radiologists. However, Biros is quick to emphasize that the machine learning method won’t replace radiologists and surgeons—instead, it will improve the reproducibility of assessments and potentially speed up diagnoses.
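Segmentation challenges of this kind are commonly scored with the Dice overlap coefficient rather than raw per-voxel accuracy. As a minimal sketch of that metric (the voxel sets below are made up), Dice compares a predicted tumor mask against the ground-truth mask:

```python
def dice(pred, truth):
    """Dice coefficient between two sets of tumor-voxel indices:
    2*|A intersect B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical)."""
    pred, truth = set(pred), set(truth)
    denom = len(pred) + len(truth)
    # Two empty masks agree perfectly by convention.
    return 1.0 if denom == 0 else 2 * len(pred & truth) / denom

# Toy "whole tumor" masks that share 3 voxels out of 4 each.
score = dice({1, 2, 3, 4}, {2, 3, 4, 5})  # 2*3 / (4+4) = 0.75
```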
Next stop for the image segmentation classifier is the University of Pennsylvania, where it will be deployed by the end of the year in the lab of collaborator Christos Davatzikos, a professor of radiology and director of the Center for Biomedical Image Computing and Analytics.
“We have all the tools and basic ideas; now, we polish it and see how we can improve it,” concluded Biros.