Goodbye, Flat X-Rays: AI Provides Visual Depth


A new AI-based computational framework has shown that it can create 3D visualizations from data hundreds of times faster than traditional methods.

Researchers have been applying artificial intelligence (AI) to X-ray image analysis in wide-ranging applications, from astronomy to medical imaging. A new development in the field takes the benefits to a higher level by generating 3D visualizations from X-ray data.

The advances and discoveries from “traditional” 2D image analysis have shown the power of such techniques. This summer, a news item reported that “astronomers had managed to look behind a black hole for the first time and have proved that Albert Einstein was right about how these mysterious celestial behemoths behave. An international team of researchers used high-powered X-ray telescopes to study a supermassive black hole 800 million light years away at the center of a distant galaxy.”

It’s amazing what can be done, and has been done, with X-rays. Now, artificial intelligence is introducing an astounding array of innovations to healthcare and scientific research, from instantaneous diagnosis of medical conditions to a deeper understanding of the universe. The latest innovation comes out of the U.S. Department of Energy’s Argonne National Laboratory, where AI is enabling scientists to view X-ray images in three dimensions.

The Argonne research team developed a new computational framework called 3D-CDI-NN and has shown that it can create 3D visualizations from data hundreds of times faster than traditional methods. (CDI stands for coherent diffraction imaging, and NN stands for neural network.) Scientists who use the Advanced Photon Source (APS) at Argonne to produce 3D images are working on applications to turn X-ray data into visible, understandable shapes at a much faster rate.

A breakthrough in this area could have implications for astronomy, electron microscopy, and other areas of science dependent on large amounts of 3D data, Argonne officials said. “In order to make full use of what the upgraded APS will be capable of, we have to reinvent data analytics,” according to lead scientist Mathew Cherukara. “Our current methods are not enough to keep up. Machine learning can make full use and go beyond what is currently possible.”

As a final step, 3D-CDI-NN’s ability to fill in missing information and come up with a 3D visualization was tested on real X-ray data of tiny particles of gold, collected at beamline 34-ID-C at the APS. The result is a computational method that is hundreds of times faster on simulated data and nearly that fast on real APS data. The tests also showed that the network can reconstruct images with less data than is usually required to compensate for the information not captured by the detectors.

Coherent diffraction imaging is an X-ray technique that involves bouncing ultra-bright X-ray beams off of samples. Those beams are then collected by detectors as data, and it takes some computational effort to turn that data into images. Part of the challenge, explains Cherukara, who leads the Computational X-ray Science group in Argonne’s X-ray Science Division (XSD), is that the detectors capture only some of the information from the beams.
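
To see why that missing information is such a hard computational problem, it helps to know the so-called phase problem: in the standard far-field picture, the detector records only the intensity of the diffracted beam, which is the squared magnitude of the Fourier transform of the sample, while the phase of the wave is lost. The short NumPy sketch below illustrates that toy model in 2D; it is a simplified illustration of the general principle, not Argonne’s actual analysis code.

```python
import numpy as np

# Toy 2D "sample": a small square of material in an empty field.
sample = np.zeros((64, 64))
sample[28:36, 28:36] = 1.0

# Simplified far-field coherent diffraction model:
# the detector sees only the intensity |F(sample)|^2.
far_field = np.fft.fftshift(np.fft.fft2(sample))
measured_intensity = np.abs(far_field) ** 2   # what the detector records
lost_phase = np.angle(far_field)              # what the detector cannot record

# Naive inversion from intensity alone (phase assumed zero) does NOT
# recover the sample -- this is why iterative phase retrieval, or a
# trained neural network, is needed to reconstruct an image.
naive = np.fft.ifft2(np.fft.ifftshift(np.sqrt(measured_intensity)))
print("Reconstruction error without phase:",
      np.linalg.norm(np.abs(naive) - sample))
```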

Scientists rely on computers to fill in that information, according to a description from Argonne. This takes some time to do in 2D and takes even longer with 3D images. “The solution, then, is to train an AI to recognize objects and the microscopic changes they undergo directly from the raw data, without having to fill in the missing info.”
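
As a rough illustration of that idea, the sketch below wires up a small encoder-decoder network in PyTorch that maps a raw diffraction pattern straight to a real-space image and trains it on simulated pairs. The architecture, layer sizes, and random stand-in data are placeholder assumptions chosen for brevity; the published 3D-CDI-NN works on three-dimensional diffraction data and is far more sophisticated than this toy.

```python
import torch
import torch.nn as nn

class DiffractionToImage(nn.Module):
    """Toy 2D stand-in for the idea behind 3D-CDI-NN:
    raw diffraction intensity in, real-space image out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, intensity):
        return self.decoder(self.encoder(intensity))

# Training-loop sketch against simulated (diffraction, object) pairs.
model = DiffractionToImage()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

simulated_diffraction = torch.rand(8, 1, 64, 64)  # stand-in for simulated data
simulated_objects = torch.rand(8, 1, 64, 64)      # stand-in for ground truth

for step in range(10):
    prediction = model(simulated_diffraction)
    loss = loss_fn(prediction, simulated_objects)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```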

The next step for this AI-based research is to integrate the network into the APS’s workflow so that it learns from data as it is taken. If the network learns from data at the beamline, it will continuously improve, according to the Argonne team. At the same time, a massive upgrade of the APS is in the works, and the amount of data generated will increase exponentially, they add. “The upgraded APS will generate X-ray beams that are up to 500 times brighter, and the coherence of the beam — the characteristic of light that allows it to diffract in a way that encodes more information about the sample — will be greatly increased. That means that while it takes two to three minutes now to gather coherent diffraction imaging data from a sample and get an image, the data collection part of that process will soon be up to 500 times faster. The process of converting that data to a usable image also needs to be hundreds of times faster than it is now to keep up.”
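
“Learns from data as it is taken” amounts to periodically fine-tuning the network on freshly collected beamline data rather than retraining it from scratch. Continuing the toy PyTorch example above, the sketch below shows what such an incremental update loop could look like; the batch sizes, update schedule, and training targets are assumptions for illustration, not the actual APS workflow.

```python
import torch

def fine_tune_on_new_scan(model, optimizer, loss_fn,
                          diffraction_batch, reference_batch, steps=5):
    """Take a few gradient steps on newly measured data so the network
    keeps improving as the beamline runs (toy sketch, not the APS pipeline)."""
    for _ in range(steps):
        prediction = model(diffraction_batch)
        loss = loss_fn(prediction, reference_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return loss.item()

# Each time a new scan arrives, update the model from the previous sketch.
new_scan = torch.rand(4, 1, 64, 64)       # stand-in for freshly measured diffraction data
new_reference = torch.rand(4, 1, 64, 64)  # stand-in for the reconstruction used as a target
fine_tune_on_new_scan(model, optimizer, loss_fn, new_scan, new_reference)
```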


About Joe McKendrick

Joe McKendrick is RTInsights Industry Editor and an industry analyst focusing on artificial intelligence, digital, cloud, and Big Data topics. His work also appears in Forbes and Harvard Business Review. Over the last three years, he served as co-chair for the AI Summit in New York, as well as on the organizing committee for IEEE's International Conferences on Edge Computing. Follow him on Twitter @joemckendrick.
