A software-based method to correct optical aberrations in two-photon fluorescence microscopy could make high-resolution deep-tissue imaging more accessible and affordable.
A research team led by Professor Iksung Kang at KAIST and Professor Na Ji at UC Berkeley has developed a software-based method to correct optical aberrations in two-photon fluorescence microscopy for live biological imaging. The technology uses Neural Fields, a neural network-based approach, to reconstruct clear 3D images from blurred data without requiring any additional hardware.
The algorithm corrects aberrations caused by biological tissue, sample movement, and microscope alignment errors. The study was published in Nature Methods on April 13.
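To illustrate the core idea behind a neural field, the sketch below fits a continuous function of spatial coordinates to sampled data, so the representation can be queried at arbitrary positions. It is a deliberately simplified stand-in: it uses random Fourier features with a linear least-squares readout instead of the trained neural network in the paper, and the 1-D signal is invented for the example.

```python
import numpy as np

# Toy coordinate-based representation, "neural field" in spirit only:
# instead of storing an image as a grid of pixels, fit a continuous
# function of spatial coordinates that can be queried anywhere.
rng = np.random.default_rng(1)

x = np.linspace(0.0, 1.0, 200)                    # sample coordinates
signal = np.sin(6 * np.pi * x) * np.exp(-3 * x)   # hypothetical structure

freqs = rng.normal(scale=10.0, size=128)          # random frequencies

def encode(coords):
    # Fourier positional encoding of the coordinates.
    phases = np.outer(coords, freqs)
    return np.concatenate([np.sin(phases), np.cos(phases)], axis=1)

# Fit the readout weights by least squares.
weights, *_ = np.linalg.lstsq(encode(x), signal, rcond=None)

# The fitted representation is continuous: query it at arbitrary coordinates,
# not just the original sample points.
pred = encode(np.array([0.123, 0.456])) @ weights
fit_error = np.max(np.abs(encode(x) @ weights - signal))
```

The continuous, queryable representation is what makes coordinate-based models attractive for reconstructing 3-D volumes from measurements taken on irregular or moving grids.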
The Challenge of Deep Tissue Imaging
Two-photon fluorescence microscopy is a powerful tool for observing deep within living tissue. It excites fluorescence only where two low-energy photons are absorbed nearly simultaneously, which confines the signal to the focal point. However, as light travels through biological tissue, it scatters and refracts, blurring the image, a problem known as optical aberration.
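The depth advantage comes from the quadratic intensity dependence of two-photon excitation: signal is generated almost exclusively at the focus, where the intensity is highest. A minimal sketch of that localization, assuming an idealized Gaussian beam and hypothetical axial parameters:

```python
import numpy as np

# Toy model, not from the paper: the on-axis intensity of an idealized
# Gaussian beam falls off as 1 / (1 + (z / z_R)^2) away from the focus.
z = np.linspace(-50.0, 50.0, 2001)   # axial position in micrometres (hypothetical)
z_R = 2.0                            # Rayleigh range in micrometres (hypothetical)
intensity = 1.0 / (1.0 + (z / z_R) ** 2)

one_photon = intensity               # 1-photon signal scales with I
two_photon = intensity ** 2          # 2-photon signal scales with I^2

# Fraction of the total signal generated within one Rayleigh range of the focus.
near_focus = np.abs(z) <= z_R
frac_1p = one_photon[near_focus].sum() / one_photon.sum()
frac_2p = two_photon[near_focus].sum() / two_photon.sum()
```

Because the two-photon signal falls off much faster away from the focus, most of it originates in a thin slab around the focal plane, which is what enables optical sectioning deep in tissue.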
Conventional adaptive-optics correction requires additional hardware, such as wavefront sensors and deformable mirrors. The new algorithm removes this requirement: it estimates the aberrations directly from the captured image data and compensates for them computationally.
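As a toy illustration of image-based (sensorless) correction, and emphatically not the published algorithm: simulate a scene blurred by an unknown aberration, then show that correcting with the right aberration estimate recovers the scene far better than an under- or over-estimate. The Gaussian blur standing in for tissue aberration, the Wiener deconvolution used for correction, and all parameters are hypothetical choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_psf(sigma, size):
    # Normalized 2-D Gaussian point-spread function on a size x size grid.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def apply_blur(image, sigma):
    # Circular convolution with the PSF via the FFT.
    otf = np.fft.fft2(np.fft.ifftshift(gaussian_psf(sigma, image.shape[0])))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * otf))

def wiener_correct(image, sigma, eps=1e-3):
    # Regularized inverse filter built from a candidate aberration estimate.
    otf = np.fft.fft2(np.fft.ifftshift(gaussian_psf(sigma, image.shape[0])))
    filt = np.conj(otf) / (np.abs(otf) ** 2 + eps)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * filt))

# Synthetic "tissue": a field of point-like fluorescent beads.
scene = np.zeros((64, 64))
scene[rng.integers(0, 64, 12), rng.integers(0, 64, 12)] = 1.0

true_sigma = 2.5                      # hypothetical aberration strength
measured = apply_blur(scene, true_sigma)

def error(sigma_guess):
    # Reconstruction error when correcting with a candidate estimate.
    return np.mean((wiener_correct(measured, sigma_guess) - scene) ** 2)
```

Real image-based adaptive optics searches over such aberration parameters (often expressed as Zernike modes) for the estimate that yields the sharpest reconstruction; here the correct estimate (2.5) beats both an under-estimate (1.0) and an over-estimate (4.0).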
Lowering the Barrier to Precision Imaging
This approach reduces the need for expensive equipment, potentially lowering research costs and making precise brain observation more accessible to a wider range of laboratories. The team plans to build on this work by developing an intelligent optical imaging system that automatically finds optimal images.
"This research combines optics and AI to improve visualization inside living organisms."
— Professor Iksung Kang
Publication Details
- Paper: "Adaptive optical correction for in vivo two-photon fluorescence microscopy with neural fields"
- DOI: 10.1038/s41592-026-03053-6
- Authors: Iksung Kang (co-corresponding, first author), Hyeonggeon Kim, Ryan Natan, Qinrong Zhang, Stella X. Yu, Na Ji (co-corresponding author)