TY - JOUR
T1 - SG-GAN: Fine Stereoscopic-Aware Generation for 3D Brain Point Cloud Up-Sampling From a Single Image
AU - Hu, Bowen
AU - Yao, Weiheng
AU - Qiao, Sibo
AU - Pham, Hieu
AU - Wang, Shuqiang
AU - Ng, Michael Kwok-Po
N1 - This work was supported in part by the National Natural Science Foundation of China under Grant 62172403 and Grant 12326614 and in part by the Distinguished Young Scholars Fund of Guangdong under Grant 2021B1515020019. The work of Michael Kwok-Po Ng was supported in part by the National Key Research and Development Program of China under Grant 2024YFE0202900, Grant HKRGC GRF17300021, and Grant C7004-21GF, and in part by Joint NSFC-RGC under Grant N-HKU76921.
Publisher copyright:
© 2025 IEEE.
PY - 2025/4/21
Y1 - 2025/4/21
N2 - In minimally-invasive brain surgeries with indirect and narrow operating environments, 3D brain reconstruction is crucial. However, as the accuracy requirements of newer minimally-invasive procedures (such as brain-computer interface surgery) continue to rise, the outputs of conventional 3D reconstruction, such as point clouds (PCs), face the challenges of overly sparse sample points and insufficient precision. Moreover, high-density point cloud datasets are scarce, which makes it challenging to train models that directly reconstruct high-density brain point clouds. In this work, a novel two-stage model named the stereoscopic-aware graph generative adversarial network (SG-GAN) is proposed to generate fine high-density PCs conditioned on a single image. The Stage-I GAN sketches the primitive shape and basic structure of the organ from the given image, yielding Stage-I point clouds. The Stage-II GAN takes the Stage-I results and generates high-density point clouds with detailed features; it corrects defects and restores the detailed features of the region of interest (ROI) through the up-sampling process. Furthermore, a parameter-free-attention-based free-transforming module is developed to learn efficient features of the input while maintaining promising performance. Compared with existing methods, the SG-GAN model shows superior performance in visual quality, objective measurements, and classification, as demonstrated by comprehensive results on several evaluation metrics, including PC-to-PC error and Chamfer distance.
AB - In minimally-invasive brain surgeries with indirect and narrow operating environments, 3D brain reconstruction is crucial. However, as the accuracy requirements of newer minimally-invasive procedures (such as brain-computer interface surgery) continue to rise, the outputs of conventional 3D reconstruction, such as point clouds (PCs), face the challenges of overly sparse sample points and insufficient precision. Moreover, high-density point cloud datasets are scarce, which makes it challenging to train models that directly reconstruct high-density brain point clouds. In this work, a novel two-stage model named the stereoscopic-aware graph generative adversarial network (SG-GAN) is proposed to generate fine high-density PCs conditioned on a single image. The Stage-I GAN sketches the primitive shape and basic structure of the organ from the given image, yielding Stage-I point clouds. The Stage-II GAN takes the Stage-I results and generates high-density point clouds with detailed features; it corrects defects and restores the detailed features of the region of interest (ROI) through the up-sampling process. Furthermore, a parameter-free-attention-based free-transforming module is developed to learn efficient features of the input while maintaining promising performance. Compared with existing methods, the SG-GAN model shows superior performance in visual quality, objective measurements, and classification, as demonstrated by comprehensive results on several evaluation metrics, including PC-to-PC error and Chamfer distance.
KW - Point cloud compression
KW - Three-dimensional displays
KW - Surgery
KW - Image reconstruction
KW - Shape
KW - Generative adversarial networks
KW - Accuracy
KW - Feature extraction
KW - Magnetic resonance imaging
KW - Computer architecture
KW - 3D brain reconstruction
KW - free transforming module
KW - point cloud up-sampling
KW - two-stage generating
U2 - 10.1109/TETCI.2025.3558447
DO - 10.1109/TETCI.2025.3558447
M3 - Journal article
SN - 2471-285X
SP - 1
EP - 13
JO - IEEE Transactions on Emerging Topics in Computational Intelligence
JF - IEEE Transactions on Emerging Topics in Computational Intelligence
ER -