
PLGMamba Model Advances Hyperspectral Image Super-Resolution



A newly developed artificial intelligence model, PLGMamba, is setting a new standard for hyperspectral image super-resolution, promising clearer and more detailed views of the world from space.

Researchers from Sun Yat-sen University, Guangdong Polytechnic Normal University, and the University of Extremadura have reported the model in the Journal of Remote Sensing (DOI: 10.34133/remotesensing.1027).

Bridging the Gap in Image Reconstruction

PLGMamba reconstructs high-resolution hyperspectral images from low-resolution inputs without requiring any changes to imaging hardware. The model was designed to overcome key limitations in existing convolutional neural networks (CNNs) and Transformers by combining local spectral similarity with global feature modeling.

The architecture divides the low-resolution image into spectral groups and reconstructs them progressively, exploiting the strong correlations among adjacent bands within each group while also capturing broader cross-group dependencies. This preserves both spatial detail and spectral fidelity.
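To make the grouping step concrete, here is a minimal sketch, assuming the hyperspectral cube is stored as a NumPy array of shape (bands, H, W); the function name, band count, and group count are illustrative, not taken from the paper.

```python
import numpy as np

def split_spectral_groups(cube: np.ndarray, num_groups: int) -> list:
    """Split a hyperspectral cube of shape (bands, H, W) into
    contiguous spectral groups, so that adjacent, highly correlated
    bands are reconstructed together."""
    # np.array_split tolerates band counts not divisible by num_groups
    return np.array_split(cube, num_groups, axis=0)

# Toy example: a 128-band cube split into 10 groups
cube = np.random.rand(128, 32, 32)
groups = split_spectral_groups(cube, 10)
```

Each group can then be reconstructed in turn, with features from earlier groups informing later ones.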

The model’s core architecture is built on two main modules:

  • Residual Attention Mamba (RatMamba): Handles local–global spectral–spatial feature extraction.
  • Residual Mamba (ResMamba): Manages feature fusion.

Proven Performance on Key Benchmarks

PLGMamba outperformed classical, CNN-based, Transformer-based, and other Mamba-based methods across multiple benchmark datasets, including Chikusei, Houston, Pavia, and Gaofen-5 (GF-5) satellite data.

Key performance results include:

  • Chikusei scene (×2 scale): PSNR 44.058, SAM 1.3404, ERGAS 10.069
  • Houston scene (×4 scale): PSNR 39.804, SAM 2.9186, ERGAS 11.015
  • GF-5 satellite imagery: QNR 0.9620, D_s 0.0167, D_λ 0.0217
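For context, PSNR and SAM follow standard definitions: PSNR measures pixel-level reconstruction quality in decibels (higher is better), while SAM measures the average angle between reference and estimated per-pixel spectra in degrees (lower is better). A minimal NumPy sketch of both metrics (not the authors' evaluation code):

```python
import numpy as np

def psnr(ref: np.ndarray, est: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher is better."""
    mse = np.mean((ref - est) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def sam(ref: np.ndarray, est: np.ndarray, eps: float = 1e-12) -> float:
    """Spectral angle mapper in degrees, averaged over pixels; lower
    means better spectral fidelity. Inputs have shape (bands, H, W)."""
    r = ref.reshape(ref.shape[0], -1)
    e = est.reshape(est.shape[0], -1)
    cos = np.sum(r * e, axis=0) / (
        np.linalg.norm(r, axis=0) * np.linalg.norm(e, axis=0) + eps)
    return float(np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0)))))
```

ERGAS and the QNR family (QNR, D_s, D_λ) are further standard remote-sensing quality indices; QNR close to 1 with D_s and D_λ close to 0 indicates low spatial and spectral distortion.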

The model achieved its best performance when transferred to a new scene using 10 spectral groups, indicating a robust balance between local detail and global context.

Methodology and Training

PLGMamba’s loss function jointly optimizes three key objectives: spectral–spatial fidelity, spectral similarity, and spatial fidelity.
The model was trained in PyTorch using the Adam optimizer for 200 epochs with a minibatch size of 12 on an NVIDIA RTX 3060 GPU.
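The article does not give the exact loss terms or their weights. As a hedged sketch of what a three-term objective of that shape could look like, the following combines an L1 fidelity term, a SAM-style spectral-angle term, and a spatial-gradient term; the weights `lam_sam` and `lam_grad` and the specific terms are assumptions for illustration:

```python
import numpy as np

def composite_loss(pred, target, lam_sam=0.1, lam_grad=0.1):
    """Illustrative three-term objective for (bands, H, W) cubes:
    L1 fidelity + weighted spectral-angle term + weighted gradient term.
    Not the paper's actual loss; terms and weights are assumptions."""
    # Pixel-wise fidelity
    l1 = np.mean(np.abs(pred - target))
    # Spectral similarity: angle between per-pixel spectra
    p = pred.reshape(pred.shape[0], -1)
    t = target.reshape(target.shape[0], -1)
    cos = np.sum(p * t, axis=0) / (
        np.linalg.norm(p, axis=0) * np.linalg.norm(t, axis=0) + 1e-12)
    l_sam = np.mean(np.arccos(np.clip(cos, -1.0, 1.0)))
    # Spatial fidelity: match image gradients along H and W
    l_grad = (np.mean(np.abs(np.diff(pred, axis=1) - np.diff(target, axis=1)))
              + np.mean(np.abs(np.diff(pred, axis=2) - np.diff(target, axis=2))))
    return float(l1 + lam_sam * l_sam + lam_grad * l_grad)
```

In a PyTorch training loop this scalar would simply be backpropagated through the network with the Adam optimizer described above.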

Looking Ahead

The authors indicate that future work will focus on two areas: improving performance at the ×8 scale factor and enabling lightweight deployment on terminal devices for practical hyperspectral image super-resolution.