Sketch-based feature fusion and complement for robust sketch-to-voxel reconstruction

Fei Wang, Zhineng Zhang, Junkun Jiang, Baoquan Zhao*, Zhifeng Hao, Xiaonan Luo

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

Reconstructing 3D models from a single image remains challenging in computer graphics and vision, especially when dealing with free-hand sketches. Fragmented strokes and distorted lines often introduce ambiguities, leading to 3D models that deviate from the intended shape. Moreover, variations in sketching style frequently result in incomplete object representations. To address these challenges, we present SkFC-AE, a sketch-oriented autoencoder for high-quality 3D voxelized model reconstruction. Our approach features a Dual-space Feature Encoding mechanism, which extracts sketch semantics from both image space and geometric space using three encoders. To capture both the details and the common appearance of sketches, we propose two additional modules, the Feature Fusion Module (FFM) and the Feature Complement Module (FCM): the former fuses sketch features into detailed embeddings, and the latter complements them with common embeddings derived from the sketch prior. Extensive experiments on two public sketch-to-voxel benchmarks, Sketch-Voxel ShapeNet and ModelNet-Canny, demonstrate that SkFC-AE significantly outperforms state-of-the-art models in 3D reconstruction quality and detail preservation. Our code can be found at this link.
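For readers who want a concrete picture of the pipeline the abstract outlines, below is a minimal, hypothetical PyTorch sketch of such an architecture. All module names, layer sizes, the choice of geometric input, and the exact fusion and complement operations are illustrative assumptions, not the paper's actual SkFC-AE design.

```python
# Hypothetical sketch of a dual-space sketch-to-voxel autoencoder.
# Everything here (encoder layout, feature sizes, fusion via
# concatenation + MLP, complement via a learned prior vector) is an
# illustrative assumption, not the paper's actual design.
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Simple 2D CNN encoder mapping a 1x128x128 sketch to a feature vector."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64x64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16x16
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class SkFCAESketch(nn.Module):
    """Dual-space encoding -> feature fusion -> prior complement -> voxel decoder."""
    def __init__(self, dim=256):
        super().__init__()
        # Three encoders: two over the raw sketch image and one over a
        # geometric representation (e.g., a distance-transform map); this
        # particular image/geometric split is an assumption.
        self.img_enc_a = ConvEncoder(dim)
        self.img_enc_b = ConvEncoder(dim)
        self.geo_enc = ConvEncoder(dim)
        # Feature Fusion Module (FFM): fuse the three embeddings into one
        # detailed embedding; concatenation + MLP is a placeholder choice.
        self.ffm = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        # Feature Complement Module (FCM): blend a learned "common" embedding
        # with the detailed one; a learnable vector stands in for whatever
        # sketch prior the paper derives.
        self.common_prior = nn.Parameter(torch.zeros(dim))
        self.fcm = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        # Decoder: latent vector -> 32^3 voxel occupancy grid via 3D deconvolutions.
        self.fc = nn.Linear(dim, 128 * 4 * 4 * 4)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8^3
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16^3
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),               # 32^3
        )

    def forward(self, sketch, geo_map):
        detailed = self.ffm(torch.cat(
            [self.img_enc_a(sketch), self.img_enc_b(sketch), self.geo_enc(geo_map)],
            dim=-1))
        prior = self.common_prior.expand(sketch.size(0), -1)
        latent = self.fcm(torch.cat([detailed, prior], dim=-1))
        vox = self.decoder(self.fc(latent).view(-1, 128, 4, 4, 4))
        return torch.sigmoid(vox)  # per-voxel occupancy probabilities

model = SkFCAESketch()
occ = model(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128))
print(occ.shape)  # torch.Size([2, 1, 32, 32, 32])
```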
Original language: English
Article number: 103522
Number of pages: 15
Journal: Information Fusion
Volume: 126, Part A
Early online date: 21 Jul 2025
DOIs
Publication status: E-pub ahead of print - 21 Jul 2025

User-Defined Keywords

  • Single-view 3D reconstruction
  • Hand-drawn sketch
  • Feature fusion
