
D-PETA

SFDAKD (Source-Free Domain Adaptation for Keypoint Detection)

1. Prerequisites

Dependencies

  • dinov2==0.0.1.dev0
  • loralib==0.1.2
  • opencv-python==4.9.0.80
  • scikit-image==0.17.2
  • scikit-learn==1.3.2
  • scipy==1.10.1
  • torch==2.0.1
  • torchaudio==2.1.1+cu118
  • torchvision==0.16.1+cu118
  • triton==2.0.0
  • typing-extensions==4.8.0
  • typing-inspect==0.9.0
  • urllib3==1.26.13
  • webcolors==1.13

You can use the provided environment.yml to create a conda environment.
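
For example (standard conda commands; the environment name is declared inside environment.yml, so the name below is a placeholder):

    conda env create -f environment.yml
    conda activate <env-name>  # placeholder: use the name from environment.yml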

2. Training

Source model training

Single GPU:

    python human_src.py

Multi GPU:

    CUDA_VISIBLE_DEVICES=x,x,x,x python human_src.py

Adaptation

Single GPU:

    python human_tgt.py

Multi GPU:

    CUDA_VISIBLE_DEVICES=x,x,x,x python human_tgt.py

We used a single NVIDIA RTX 3090 GPU to run the experiments.
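
Before launching, you can confirm that PyTorch sees the intended GPUs (standard torch calls):

    python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"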

3. Data

SURREAL (source data)

@INPROCEEDINGS{varol17_surreal,
  title     = {Learning from Synthetic Humans},
  author    = {Varol, G{\"u}l and Romero, Javier and Martin, Xavier and Mahmood, Naureen and Black, Michael J. and Laptev, Ivan and Schmid, Cordelia},
  booktitle = {CVPR},
  year      = {2017}
}

LSP (target data)

@inproceedings{Johnson10,
  title     = {Clustered Pose and Nonlinear Appearance Models for Human Pose Estimation},
  author    = {Johnson, Sam and Everingham, Mark},
  booktitle = {Proceedings of the British Machine Vision Conference},
  year      = {2010},
  note      = {doi:10.5244/C.24.12}
}

Data usage

Use the source data to train the source model with human_src.py, then use the target data to perform the adaptation with human_tgt.py.
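
A minimal end-to-end sketch using the commands above (it assumes both datasets are already placed where the scripts expect them):

    # 1. Train the source model on SURREAL (source data)
    python human_src.py
    # 2. Adapt the trained model to LSP (target data)
    python human_tgt.py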

4. Results

(Results figure)

5. Acknowledgement

This repo is modified from the open-source SFDAKD codebases MAPS and SFDAHPE.
