Sketch2Human: Deep Human Generation With Disentangled Geometry and Appearance Constraints

  • Linzi Qu
  • Jiaxiang Shang
  • Hui Ye
  • Xiaoguang Han
  • Hongbo Fu*
  • *Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

3 Citations (Scopus)

Abstract

Geometry- and appearance-controlled full-body human image generation is an interesting but challenging task. Existing solutions are either unconditional or dependent on coarse conditions (e.g., pose, text), and thus lack explicit geometry and appearance control over the body and garments. Sketching offers such editing ability and has been adopted in various sketch-based face generation and editing solutions. However, directly adapting sketch-based face generation to full-body generation often fails to produce high-fidelity and diverse results, due to the high complexity and diversity of pose, body shape, and garment shape and texture. Recent geometrically controllable diffusion-based methods mainly rely on prompts to generate appearance; when the input sketch is coarse, it is hard for them to balance the realism of their results with faithfulness to the sketch. This work presents Sketch2Human, the first system for controllable full-body human image generation guided by a semantic sketch (for geometry control) and a reference image (for appearance control). Our solution is based on the latent space of StyleGAN-Human, with inverted geometry and appearance latent codes as input. Specifically, we present a sketch encoder trained on a large synthetic dataset sampled from StyleGAN-Human’s latent space and directly supervised by sketches rather than real images. Considering the entangled information of partial geometry and texture in StyleGAN-Human and the absence of disentangled datasets, we design a novel training scheme that creates geometry-preserved and appearance-transferred training data to tune a generator to achieve disentangled geometry and appearance control. Although our method is trained with synthetic data, it can also handle hand-drawn sketches. Qualitative and quantitative evaluations demonstrate the superior performance of our method over state-of-the-art methods.
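The core idea — inverting the semantic sketch and the reference image into separate latent codes and combining them layer-wise in a style-based generator's latent space — can be illustrated with a minimal sketch of classic style mixing. This is an assumption-laden illustration, not the paper's actual pipeline: the layer count, the coarse/fine split index, and the random stand-ins for GAN-inversion outputs are all hypothetical.

```python
import numpy as np

NUM_LAYERS = 18   # assumed W+ depth: one style vector per generator layer
STYLE_DIM = 512   # assumed style-vector dimensionality
GEOMETRY_LAYERS = 8  # assumed split: early layers carry coarse geometry, later layers appearance

def mix_latents(w_geometry: np.ndarray, w_appearance: np.ndarray,
                split: int = GEOMETRY_LAYERS) -> np.ndarray:
    """Layer-wise style mixing: take the early (coarse) style vectors from the
    sketch-inverted code and the remaining (fine) ones from the reference-image
    code, yielding a single W+ code with mixed geometry and appearance."""
    assert w_geometry.shape == w_appearance.shape == (NUM_LAYERS, STYLE_DIM)
    mixed = w_appearance.copy()
    mixed[:split] = w_geometry[:split]
    return mixed

# Hypothetical inverted codes standing in for real GAN-inversion outputs.
rng = np.random.default_rng(0)
w_sketch = rng.standard_normal((NUM_LAYERS, STYLE_DIM))      # from the sketch encoder
w_reference = rng.standard_normal((NUM_LAYERS, STYLE_DIM))   # from the reference image
w_mixed = mix_latents(w_sketch, w_reference)
```

Naive mixing like this is exactly where the entanglement problem the abstract mentions shows up: early layers still leak some texture and late layers some geometry, which is why the paper tunes the generator on geometry-preserved and appearance-transferred training data rather than relying on the raw split alone.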

Original language: English
Pages (from-to): 4480-4492
Number of pages: 13
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 31
Issue number: 9
DOIs
Publication status: Published - 23 May 2024

User-Defined Keywords

  • Full-body image generation
  • sketch-based generation
  • style mixing
  • style-based generator

