Scientists Are Shaking Up Reality to Do More Realistic, Efficient Science

By Samuel Lopez, staff writer; images contributed by Kedar Narayan

The VR configuration at the Center for Molecular Microscopy allows employees to interact with structures in a 3D environment.

Science is an interrogation of reality: “putting Nature through a thorough inquisition,” as poet W. H. Auden once wrote. It’s the search for what’s real. But sometimes traditional practices get in the way. What to do when reality itself slows the search?

Bend it.

That’s the route some scientists are taking, using virtual reality (VR) and augmented reality (AR) to interact with data in new ways that improve research, methods, and medicine.

VR puts users in a fully simulated, interactive 3D environment, usually through special goggles and joysticks. It's best known from the video game industry, where it's gaining traction thanks to its immersive nature and falling cost.

AR is less immersive and instead overlays images or information on a user's view of reality as seen through a screen or lens, like a camera, Google Glass, or microscope. The most prominent example is Niantic and The Pokémon Company's blockbuster mobile game, Pokémon GO, in which players use their phone cameras to see and capture 3D creatures "hiding" in the real world.

But to scientists like Kedar Narayan, Ph.D., deploying VR or AR in the lab is more than a game. It moves science from reality to a “reality plus,” where work becomes more efficient.

Sculpting 3D Models in a 3D World

Narayan and his team in the Center for Molecular Microscopy at the Frederick National Laboratory use VR to proofread 3D structures of mitochondria, the organelles that produce cells' energy and may be involved in certain diseases.

The group specializes in volume electron microscopy, producing high-resolution 3D images of cells and organelles. Until recently, they were limited to working with those images on a computer screen, a 2D collection of pixels that only creates the illusion of 3D.

Now, the scientists put on a pair of goggles, grab a set of joysticks, and come face-to-face with mitochondria in a virtual 3D space. They use the joysticks to move around and correct errors created by the algorithm that assembles the models from the microscopy images. Those corrections alter each mitochondrion’s structure, making it more accurate for future analysis.

“That’s the difference [in what we’re doing],” Narayan said. “VR typically is, ‘Look at this cool thing. I can look around.’ But you don’t actually edit structures that you’re inside of. That’s what we’ve done.”

By contrast, editing on a computer screen is a painstaking process. It requires slicing the 3D model into thousands of stacked, thin 2D planes. The team must then scour each plane for errors, cross-checking it against neighboring planes so that mistakes spanning several planes aren't missed.

“Going step-by-step in 2D—I guarantee you this because we’ve done it—some errors are very, very difficult to catch,” Narayan said. “If you just want to go in and quickly do proofreading, the intuitive and efficient way to do it would be with VR.”
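The gap between the two workflows is easy to picture in code. The sketch below, in plain Python with NumPy, is purely illustrative; the stand-in volume, array layout, and labels are assumptions for the sake of the example, not the team's actual pipeline.

```python
import numpy as np

# Illustrative stand-in: a reconstruction stored as a 3D array of integer
# labels, one label per mitochondrion, with axes ordered (z, y, x).
labels = np.random.randint(0, 5, size=(300, 512, 512))

# Screen-based proofreading: review the volume one thin 2D plane at a time.
for z in range(labels.shape[0]):
    plane = labels[z]  # a single cross-section of every mitochondrion
    # An error that spans several planes (say, two mitochondria merged
    # along z) is invisible in `plane` alone; the reviewer has to mentally
    # stack it against its neighbors, hundreds of times over.
    context = labels[max(z - 1, 0) : z + 2]
# In VR, by contrast, the same volume is rendered once as 3D geometry and
# the reviewer simply moves around, and inside, the structures.
```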

Narayan’s team developed and validated the VR proofreading method in partnership with arivis, Inc., a multinational tech company. The group presented a paper from the project at the recent Microscopy & Microanalysis 2020 virtual conference.

Their structure-editing technique has the potential to help scientists answer crucial questions about cancer. Mitochondrial involvement in the disease is accepted knowledge, but scientists still don't know exactly what role the organelles play. Studying their structures may reveal a correlation between different shapes and cancer's progression or regression. But for scientists to meaningfully address these questions, the reconstructed mitochondrial structures must first be accurate. VR editing offers a way to achieve that accuracy.

Narayan admits that VR comes with its own limitations, including discomfort when wearing the goggles long-term, occasional disorientation while in the simulated environment, and a lack of tactile feedback. However, he expects those will improve as the emerging field develops.

The “Intermediate Way”

AR can be similarly useful in the lab, though its presence is often less conspicuous, says Raul Cachau, Ph.D., a senior principal scientist in the Biomedical Informatics and Data Science Directorate at the Frederick National Laboratory.

“It is with us. However, it may be in forms that may not be that exciting until you immerse yourself in what is going on and you realize the enormous value of having that information in there,” said Cachau, who has worked with AR in the past.

One potential application is training fellows and early-career scientists who work with biopsies, pathology images, or cell cultures. AR powered by artificial intelligence could analyze images of such samples and project information and overlays that interactively teach scientists to recognize distinguishing features or abnormalities. The means for this level of AR haven't arrived yet, though some scientists expect they will as the technology becomes less expensive and more refined.

More immediately, AR has begun to advance histopathology, a traditionally meticulous and time-consuming process for analyzing tissue samples. AR-capable microscopes, such as the one recently developed by Google Health, harness artificial-intelligence algorithms to scan a sample for potential areas of interest, then flag them in real time as the pathologist looks on. The areas are projected as an overlay onto the lens's field of view, speeding up the pathologist's analysis.
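The basic loop behind such a microscope can be sketched in a few lines. Everything below is an assumption made for illustration, including the placeholder scoring function standing in for a trained model; none of it reflects Google Health's actual system.

```python
import numpy as np

def score_tissue(fov: np.ndarray) -> np.ndarray:
    """Placeholder 'model': darker (denser) tissue scores higher.
    A real AR microscope would run a trained neural network here."""
    gray = fov.mean(axis=-1) / 255.0
    return 1.0 - gray  # per-pixel interest score in [0, 1]

def ar_overlay(fov: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Tint flagged areas of interest green for the eyepiece overlay."""
    mask = score_tissue(fov) > threshold
    out = fov.astype(float)
    out[mask] = 0.5 * out[mask] + 0.5 * np.array([0.0, 255.0, 0.0])
    return out.astype(np.uint8)

# Each new field of view is scored and redrawn as the slide moves.
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
highlighted = ar_overlay(frame)
```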

AR can also simplify examinations performed on a computer. A scientist can open a photo or video of a specimen, cell culture, or lab animal, then run an AR program to help analyze it. The software draws on powerful graphics processing units and artificial-intelligence algorithms to evaluate the image, query databases of related information, and display on-screen overlays and data sets that highlight unusual characteristics or help put the image in context.

From there, the scientist can determine the next step of the analysis. Cachau says it’s a significant leap over traditional methods such as consulting a book or switching back and forth between the image and a separate program housing related data or images. The AR overlays make the task easier, while the artificial intelligence narrows down the analysis before the scientist takes over.
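A hedged sketch of that division of labor is below. The feature names, the tiny "database," and the detection heuristic are all invented for illustration; the point is simply that the machine narrows the field and the human decides.

```python
import numpy as np

REFERENCE_DB = {  # stand-in for a real reference database
    "dense region": "Compare against archived slides before concluding.",
    "sparse region": "Possible preparation artifact; consider re-imaging.",
}

def detect_features(image: np.ndarray) -> list[str]:
    """Placeholder analysis; a real tool would run GPU-backed AI models."""
    return ["dense region"] if image.mean() < 100 else ["sparse region"]

def build_overlays(image: np.ndarray) -> list[str]:
    """Pair each flagged feature with context pulled from the database."""
    return [f"{feat}: {REFERENCE_DB[feat]}" for feat in detect_features(image)]

# The overlays are displayed beside the image; the scientist takes it
# from there and decides the next step of the analysis.
for note in build_overlays(np.random.randint(0, 256, (512, 512))):
    print(note)
```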

“The absolute ‘the machine can do it all’ is not here yet. The physician or the scientist, ‘I know it all,’ doesn’t work any longer. There is this intermediate way,” Cachau said. “Augmented reality is that, a combination of what is in the computer or what is in reality with the opinion of the practitioner as it integrates both things through common observation.”

The interface for editing structures, as seen through the VR goggles at the Center for Molecular Microscopy. Users can manipulate their joysticks to pan, move around, and step inside the structures.

Examples of the structural errors in mitochondria that Narayan's group has corrected using VR.