Efficient Domain Adaptation via Generative Prior for 3D Infant Pose Estimation

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Efficient Domain Adaptation via Generative Prior for 3D Infant Pose Estimation. / Zhou, Zhuoran ; Jiang, Zhongyu ; Chai, Wenhao ; Yang, Cheng-Yen ; Li, Lei ; Hwang, Jenq-Neng.

2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, 2024. p. 51-59.


Harvard

Zhou, Z, Jiang, Z, Chai, W, Yang, C-Y, Li, L & Hwang, J-N 2024, Efficient Domain Adaptation via Generative Prior for 3D Infant Pose Estimation. in 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, pp. 51-59, WACV 2024 - IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, Hawaii, United States, 04/01/2024. https://doi.org/10.1109/WACVW60836.2024.00013

APA

Zhou, Z., Jiang, Z., Chai, W., Yang, C-Y., Li, L., & Hwang, J-N. (2024). Efficient Domain Adaptation via Generative Prior for 3D Infant Pose Estimation. In 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) (pp. 51-59). IEEE. https://doi.org/10.1109/WACVW60836.2024.00013

Vancouver

Zhou Z, Jiang Z, Chai W, Yang C-Y, Li L, Hwang J-N. Efficient Domain Adaptation via Generative Prior for 3D Infant Pose Estimation. In 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE. 2024. p. 51-59 https://doi.org/10.1109/WACVW60836.2024.00013

Author

Zhou, Zhuoran ; Jiang, Zhongyu ; Chai, Wenhao ; Yang, Cheng-Yen ; Li, Lei ; Hwang, Jenq-Neng. / Efficient Domain Adaptation via Generative Prior for 3D Infant Pose Estimation. 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, 2024. pp. 51-59

Bibtex

@inproceedings{441143886b874435ac4eb19907a748c1,
title = "Efficient Domain Adaptation via Generative Prior for 3D Infant Pose Estimation",
abstract = "Although 3D human pose estimation has gained impres-sive development in recent years, only a few works focus on infants, that have different bone lengths and also have limited data. Directly applying adult pose estimation mod-els typically achieves low performance in the infant domain and suffers from out-of-distribution issues. Moreover, the limitation of infant pose data collection also heavily con-strains the efficiency of learning-based models to lift 2D poses to 3D. To deal with the issues of small datasets, do-main adaptation and data augmentation are commonly used techniques. Following this paradigm, we take advantage of an optimization-based method that utilizes generative pri-ors to predict 3D infant keypoints from 2D keypoints with-out the need of large training data. We further apply a guided diffusion model to domain adapt 3D adult pose to infant pose to supplement small datasets. Besides, we also prove that our method, ZeDO-i, could attain efficient do-main adaptation, even if only a small number of data is given. Quantitatively, we claim that our model attains state-of-the-art MPJPE performance of 43.6 mm on the SyRIP dataset and 21.2 mm on the MINI-RGBD dataset.",
author = "Zhuoran Zhou and Zhongyu Jiang and Wenhao Chai and Cheng-Yen Yang and Lei Li and Jenq-Neng Hwang",
year = "2024",
doi = "10.1109/WACVW60836.2024.00013",
language = "English",
pages = "51--59",
booktitle = "2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)",
publisher = "IEEE",
note = "WACV 2024 - IEEE/CVF Winter Conference on Applications of Computer Vision ; Conference date: 04-01-2024 Through 08-01-2024",

}

RIS

TY - GEN

T1 - Efficient Domain Adaptation via Generative Prior for 3D Infant Pose Estimation

AU - Zhou, Zhuoran

AU - Jiang, Zhongyu

AU - Chai, Wenhao

AU - Yang, Cheng-Yen

AU - Li, Lei

AU - Hwang, Jenq-Neng

PY - 2024

Y1 - 2024

N2 - Although 3D human pose estimation has gained impressive development in recent years, only a few works focus on infants, that have different bone lengths and also have limited data. Directly applying adult pose estimation models typically achieves low performance in the infant domain and suffers from out-of-distribution issues. Moreover, the limitation of infant pose data collection also heavily constrains the efficiency of learning-based models to lift 2D poses to 3D. To deal with the issues of small datasets, domain adaptation and data augmentation are commonly used techniques. Following this paradigm, we take advantage of an optimization-based method that utilizes generative priors to predict 3D infant keypoints from 2D keypoints without the need of large training data. We further apply a guided diffusion model to domain adapt 3D adult pose to infant pose to supplement small datasets. Besides, we also prove that our method, ZeDO-i, could attain efficient domain adaptation, even if only a small number of data is given. Quantitatively, we claim that our model attains state-of-the-art MPJPE performance of 43.6 mm on the SyRIP dataset and 21.2 mm on the MINI-RGBD dataset.

AB - Although 3D human pose estimation has gained impressive development in recent years, only a few works focus on infants, that have different bone lengths and also have limited data. Directly applying adult pose estimation models typically achieves low performance in the infant domain and suffers from out-of-distribution issues. Moreover, the limitation of infant pose data collection also heavily constrains the efficiency of learning-based models to lift 2D poses to 3D. To deal with the issues of small datasets, domain adaptation and data augmentation are commonly used techniques. Following this paradigm, we take advantage of an optimization-based method that utilizes generative priors to predict 3D infant keypoints from 2D keypoints without the need of large training data. We further apply a guided diffusion model to domain adapt 3D adult pose to infant pose to supplement small datasets. Besides, we also prove that our method, ZeDO-i, could attain efficient domain adaptation, even if only a small number of data is given. Quantitatively, we claim that our model attains state-of-the-art MPJPE performance of 43.6 mm on the SyRIP dataset and 21.2 mm on the MINI-RGBD dataset.

U2 - 10.1109/WACVW60836.2024.00013

DO - 10.1109/WACVW60836.2024.00013

M3 - Article in proceedings

SP - 51

EP - 59

BT - 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)

PB - IEEE

T2 - WACV 2024 - IEEE/CVF Winter Conference on Applications of Computer Vision

Y2 - 4 January 2024 through 8 January 2024

ER -

ID: 378941805
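
Note on the method described in the abstract: the paper presents an optimization-based 2D-to-3D lifting step guided by a generative pose prior. The sketch below is a rough conceptual illustration only, not the authors' ZeDO-i implementation: it fits a 3D pose to observed 2D keypoints by minimizing reprojection error plus a prior penalty. The weak-perspective camera model, the `prior` callable, and all hyper-parameters are hypothetical placeholders.

    # Conceptual sketch (not the authors' code): optimization-based 2D-to-3D lifting
    # with a generative pose prior, as described in the abstract above.
    import torch

    def project(pose_3d, cam):
        """Hypothetical weak-perspective projection of (J, 3) joints to (J, 2)."""
        return cam["scale"] * pose_3d[:, :2] + cam["trans"]

    def lift_2d_to_3d(kpts_2d, prior, cam, steps=200, lr=1e-2, w_prior=0.1):
        """Fit a 3D pose to observed 2D keypoints.

        kpts_2d : (J, 2) tensor of detected 2D keypoints.
        prior   : callable returning a scalar plausibility penalty for a (J, 3) pose
                  (e.g. a negative log-likelihood under some pretrained generative prior).
        cam     : dict with assumed camera parameters ("scale", "trans").
        """
        pose_3d = torch.zeros(kpts_2d.shape[0], 3, requires_grad=True)  # start from a rest pose
        opt = torch.optim.Adam([pose_3d], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            reproj = ((project(pose_3d, cam) - kpts_2d) ** 2).mean()  # 2D reprojection error
            loss = reproj + w_prior * prior(pose_3d)                  # prior keeps the pose plausible
            loss.backward()
            opt.step()
        return pose_3d.detach()

    # Example usage with a trivial stand-in prior:
    # prior = lambda p: (p ** 2).mean()
    # pose = lift_2d_to_3d(kpts_2d, prior, {"scale": 1.0, "trans": torch.zeros(2)})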