Wenyue Chen1, Peng Li2, Wangguandong Zheng3, Chengfeng Zhao2, Mengfei Li2, Yaolong Zhu1, Zhiyang Dou4, Ronggang Wang1, Yuan Liu2
1 Peking University 2 The Hong Kong University of Science and Technology 3 Southeast University 4 The University of Hong Kong
Official code of SyncHuman: Synchronizing 2D and 3D Generative Models for Single-view Human Reconstruction
We tested on an H800 GPU with CUDA 12.1. Follow the steps below to set up the environment.
1) Create the conda environment and install PyTorch:
conda create -n SyncHuman python=3.10
conda activate SyncHuman
# PyTorch 2.1.1 + CUDA 12.1
conda install pytorch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 pytorch-cuda=12.1 -c pytorch -c nvidia
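After installing, a quick sanity check can confirm that PyTorch imports and sees the GPU. This is a minimal sketch, not part of the release; the expected version string assumes the install command above:

```python
import importlib.util

def check_env():
    """Report whether PyTorch is importable and whether CUDA is visible.

    Returns a dict so the caller can decide how to react; if torch is not
    installed, the report says so instead of raising.
    """
    report = {
        "torch_installed": importlib.util.find_spec("torch") is not None,
        "torch_version": None,
        "cuda_available": None,
    }
    if report["torch_installed"]:
        import torch
        report["torch_version"] = torch.__version__  # expect 2.1.1 per the install above
        report["cuda_available"] = torch.cuda.is_available()
    return report

if __name__ == "__main__":
    print(check_env())
```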
2) Follow TRELLIS to set up the rest of the environment:
pip install accelerate safetensors==0.4.5 diffusers==0.29.1 transformers==4.36.0
3) Clone the repository and download the pretrained checkpoints:
git clone https://github.com/xishuxishu/SyncHuman.git
cd SyncHuman
python download.py
After downloading, the files should be organized as follows:
SyncHuman
├── ckpts
│ ├── OneStage
│ └── SecondStage
├── SyncHuman
├── examples
├── inference_OneStage.py
├── inference_SecondStage.py
└── download.py
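The layout above can be verified programmatically before running inference. This is a small convenience sketch (the path list is taken from the tree above), not part of the release:

```python
from pathlib import Path

# Paths expected after running download.py (taken from the tree above).
EXPECTED = [
    "ckpts/OneStage",
    "ckpts/SecondStage",
    "SyncHuman",
    "examples",
    "inference_OneStage.py",
    "inference_SecondStage.py",
    "download.py",
]

def missing_paths(root="."):
    """Return the expected paths that are absent under root."""
    root = Path(root)
    return [p for p in EXPECTED if not (root / p).exists()]

if __name__ == "__main__":
    absent = missing_paths()
    if absent:
        print("Missing:", ", ".join(absent))
    else:
        print("Layout looks complete.")
```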
Run the two inference stages in order:
python inference_OneStage.py
python inference_SecondStage.py
To run inference on a different image, change the image_path variable in inference_OneStage.py.
The final generated result is written to outputs/SecondStage/output.glb.
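The output is a binary glTF (.glb) file; if a viewer refuses to open it, inspecting the 12-byte GLB header is a quick diagnostic. A minimal sketch (the header layout follows the glTF 2.0 specification; the output path assumes the run above):

```python
import struct

def read_glb_header(path):
    """Read the 12-byte GLB header: magic bytes, container version, total length."""
    with open(path, "rb") as f:
        magic, version, length = struct.unpack("<4sII", f.read(12))
    if magic != b"glTF":
        raise ValueError(f"not a GLB file (magic={magic!r})")
    return {"version": version, "length": length}

if __name__ == "__main__":
    print(read_glb_header("outputs/SecondStage/output.glb"))
```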
Our code is based on these wonderful works:
If you find this work useful, please cite our paper:
@article{chen2025synchuman,
title={SyncHuman: Synchronizing 2D and 3D Diffusion Models for Single-view Human Reconstruction},
author={Chen, Wenyue and Li, Peng and Zheng, Wangguandong and Zhao, Chengfeng and Li, Mengfei and Zhu, Yaolong and Dou, Zhiyang and Wang, Ronggang and Liu, Yuan},
journal={arXiv preprint arXiv:2510.07723},
year={2025}
}