Researchers from the Massachusetts Institute of Technology have developed a machine learning algorithm that vastly accelerates the process of aligning medical images, enabling radiologists to study differences between them.
Historically, it has taken radiologists as long as two hours using imaging systems to align the millions of pixels in scans they wish to compare, a process known as medical image registration. The technique is crucial for comparing two MRI scans: if a patient has a brain tumor, for example, image registration enables clinicians to overlay an image from several months ago on a recent scan to gauge the growth of the disease.
But the MIT researchers' work has produced an algorithm that can line up brain scans and other 3-D images more than 1,000 times faster using new learning techniques. It works by "learning" while registering thousands of pairs of images, gaining information about how best to align images and estimating optimal alignment parameters. After this learning process, it uses those parameters to map all the pixels of one image to another almost instantaneously.
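The core operation being learned is a warp: mapping each pixel of one image to a location in the other. The toy sketch below, a hypothetical 2-D stand-in rather than the researchers' actual method, applies a per-pixel displacement field to a "moving" image so it lines up with a "fixed" one (VoxelMorph itself predicts such a field with a neural network and warps full 3-D volumes):

```python
import numpy as np

def warp(moving, field):
    """Warp `moving` by a per-pixel displacement `field` of shape (H, W, 2),
    using nearest-neighbor resampling clipped to the image bounds."""
    h, w = moving.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + field[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + field[..., 1]).astype(int), 0, w - 1)
    return moving[src_y, src_x]

# A moving image that is the fixed image shifted down-right by one pixel.
fixed = np.zeros((5, 5)); fixed[2, 2] = 1.0
moving = np.zeros((5, 5)); moving[3, 3] = 1.0

# A constant displacement of (+1, +1) undoes the shift exactly.
field = np.ones((5, 5, 2))
aligned = warp(moving, field)
print(np.allclose(aligned, fixed))  # True
```

The learning step amounts to training a network to predict a good `field` for any pair of input images, rather than solving for it from scratch each time.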
Results of the research will be presented at two upcoming conferences: the Conference on Computer Vision and Pattern Recognition and the Medical Image Computing and Computer Assisted Intervention conference.
“The task of aligning a brain MRI shouldn’t be that different when you’re aligning one pair of brain MRIs or another,” says Guha Balakrishnan, a graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory and Department of Electrical Engineering and Computer Science, who is a co-author on both papers. “If you’re able to learn something from previous image registration, you can do a new task much faster and with the same accuracy.”
MRI scans are basically hundreds of stacked 2-D images that form massive 3-D images, called “volumes,” containing a million or more 3-D pixels, called “voxels.” It takes significant time to align all voxels in the first volume with those in the second. The process is made more complex because scans can come from different machines and have different spatial orientations.
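The arithmetic behind those voxel counts is simple. The dimensions below are illustrative, not taken from the paper, but they show how a stack of 2-D slices quickly adds up to millions of voxels:

```python
import numpy as np

# A hypothetical brain MRI volume: 160 stacked 2-D slices,
# each 192 x 160 pixels (illustrative sizes, not the paper's).
slices, height, width = 160, 192, 160
volume = np.zeros((slices, height, width), dtype=np.float32)

print(volume.size)  # 4915200 voxels to align against a second volume
```

Aligning two such volumes means finding a correspondence for every one of those voxels, which is why classical registration is so slow.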
The researchers’ algorithm, called “VoxelMorph,” is powered by a convolutional neural network, a machine-learning approach commonly used for image processing. They trained their algorithm on 7,000 publicly available MRI brain scans and then tested it on 250 additional scans.
The researchers found the algorithm could accurately register all 250 test brain scans—scans held out from the training set—within two minutes using a traditional central processing unit, and in under one second using a graphics processing unit.
The algorithm is “unsupervised,” meaning it doesn’t require information beyond the image data itself. Some registration algorithms incorporate CNN models but require a “ground truth”: another, traditional algorithm must first be run to compute accurate registrations for training.
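One way to see why no ground truth is needed: the training signal can come entirely from the images themselves, for instance a similarity term (how well the warped image matches the fixed image) plus a smoothness penalty on the displacement field. The sketch below uses these standard ingredients with illustrative names and weighting; it is not the paper's exact loss:

```python
import numpy as np

def unsupervised_loss(fixed, warped, field, lam=0.01):
    """Illustrative unsupervised registration loss:
    image-matching term plus a smoothness penalty on the field."""
    similarity = np.mean((fixed - warped) ** 2)        # how well images match (MSE)
    dy = np.diff(field, axis=0)                        # field gradients between rows
    dx = np.diff(field, axis=1)                        # field gradients between columns
    smoothness = np.mean(dy ** 2) + np.mean(dx ** 2)   # penalize jagged displacement fields
    return similarity + lam * smoothness

fixed = np.array([[0.0, 1.0], [1.0, 0.0]])
warped = fixed.copy()            # perfect alignment
field = np.zeros((2, 2, 2))      # zero displacement: perfectly smooth

print(unsupervised_loss(fixed, warped, field))  # 0.0
```

Because both terms are computed from the inputs and the predicted field alone, no hand-labeled or precomputed registrations are needed to drive training.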
The new algorithm has a wide range of potential applications beyond brain scans, the researchers say. MIT colleagues, for instance, are currently running it on lung images. It could also enable image registration during operations, helping surgeons compare scans during brain tumor resections to ensure they’ve removed all of the tumor.