Analyzing sedentary behavior in life-logging images

Publication: Contribution to journal › Conference article › Research › peer-reviewed

Standard

Analyzing sedentary behavior in life-logging images. / Moghimi, Mohammad; Wu, Wanmin; Chen, Jacqueline; Godbole, Suneeta; Marshall, Simon; Kerr, Jacqueline; Belongie, Serge.

In: 2014 IEEE International Conference on Image Processing, ICIP 2014, 28.01.2014, pp. 1011-1015.


Harvard

Moghimi, M, Wu, W, Chen, J, Godbole, S, Marshall, S, Kerr, J & Belongie, S 2014, 'Analyzing sedentary behavior in life-logging images', 2014 IEEE International Conference on Image Processing, ICIP 2014, pp. 1011-1015. https://doi.org/10.1109/ICIP.2014.7025202

APA

Moghimi, M., Wu, W., Chen, J., Godbole, S., Marshall, S., Kerr, J., & Belongie, S. (2014). Analyzing sedentary behavior in life-logging images. 2014 IEEE International Conference on Image Processing, ICIP 2014, 1011-1015. https://doi.org/10.1109/ICIP.2014.7025202

Vancouver

Moghimi M, Wu W, Chen J, Godbole S, Marshall S, Kerr J, et al. Analyzing sedentary behavior in life-logging images. 2014 IEEE International Conference on Image Processing, ICIP 2014. 2014 Jan 28;1011-1015. https://doi.org/10.1109/ICIP.2014.7025202

Author

Moghimi, Mohammad ; Wu, Wanmin ; Chen, Jacqueline ; Godbole, Suneeta ; Marshall, Simon ; Kerr, Jacqueline ; Belongie, Serge. / Analyzing sedentary behavior in life-logging images. In: 2014 IEEE International Conference on Image Processing, ICIP 2014. 2014 ; pp. 1011-1015.

Bibtex

@inproceedings{f398e161f3c441548d5b97a98b15f455,
title = "Analyzing sedentary behavior in life-logging images",
abstract = "We describe a study that aims to understand physical activity and sedentary behavior in free-living settings. We employed a wearable camera to record 3 to 5 days of imaging data with 40 participants, resulting in over 360,000 images. These images were then fully annotated by experienced staff with a rigorous coding protocol. We designed a deep learning based classifier in which we adapted a model that was originally trained for ImageNet [1]. We then added a spatio-temporal pyramid to our deep learning based classifier. Our results show our proposed method performs better than the state-of-the-art visual classification methods on our dataset. For most of the labels our system achieves more than 90% average accuracy across different individuals for frequent labels and more than 80% average accuracy for rare labels.",
keywords = "Deep Learning, Large Scale Image Analysis, Visual Classification, Wearable camera",
author = "Mohammad Moghimi and Wanmin Wu and Jacqueline Chen and Suneeta Godbole and Simon Marshall and Jacqueline Kerr and Serge Belongie",
note = "Publisher Copyright: {\textcopyright} 2014 IEEE.",
year = "2014",
month = jan,
day = "28",
doi = "10.1109/ICIP.2014.7025202",
language = "English",
pages = "1011--1015",
booktitle = "2014 IEEE International Conference on Image Processing, ICIP 2014",
}

RIS

TY - GEN

T1 - Analyzing sedentary behavior in life-logging images

AU - Moghimi, Mohammad

AU - Wu, Wanmin

AU - Chen, Jacqueline

AU - Godbole, Suneeta

AU - Marshall, Simon

AU - Kerr, Jacqueline

AU - Belongie, Serge

N1 - Publisher Copyright: © 2014 IEEE.

PY - 2014/1/28

Y1 - 2014/1/28

N2 - We describe a study that aims to understand physical activity and sedentary behavior in free-living settings. We employed a wearable camera to record 3 to 5 days of imaging data with 40 participants, resulting in over 360,000 images. These images were then fully annotated by experienced staff with a rigorous coding protocol. We designed a deep learning based classifier in which we adapted a model that was originally trained for ImageNet [1]. We then added a spatio-temporal pyramid to our deep learning based classifier. Our results show our proposed method performs better than the state-of-the-art visual classification methods on our dataset. For most of the labels our system achieves more than 90% average accuracy across different individuals for frequent labels and more than 80% average accuracy for rare labels.

AB - We describe a study that aims to understand physical activity and sedentary behavior in free-living settings. We employed a wearable camera to record 3 to 5 days of imaging data with 40 participants, resulting in over 360,000 images. These images were then fully annotated by experienced staff with a rigorous coding protocol. We designed a deep learning based classifier in which we adapted a model that was originally trained for ImageNet [1]. We then added a spatio-temporal pyramid to our deep learning based classifier. Our results show our proposed method performs better than the state-of-the-art visual classification methods on our dataset. For most of the labels our system achieves more than 90% average accuracy across different individuals for frequent labels and more than 80% average accuracy for rare labels.

KW - Deep Learning

KW - Large Scale Image Analysis

KW - Visual Classification

KW - Wearable camera

UR - http://www.scopus.com/inward/record.url?scp=84949929440&partnerID=8YFLogxK

U2 - 10.1109/ICIP.2014.7025202

DO - 10.1109/ICIP.2014.7025202

M3 - Conference article

AN - SCOPUS:84949929440

SP - 1011

EP - 1015

JO - 2014 IEEE International Conference on Image Processing, ICIP 2014

JF - 2014 IEEE International Conference on Image Processing, ICIP 2014

ER -
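
Illustrative sketch

The abstract above describes adapting a model originally trained for ImageNet and fine-tuning it to classify life-logging frames. Purely as an illustration of that kind of transfer-learning step, and not the authors' implementation (the paper itself details their classifier and spatio-temporal pyramid), the following minimal sketch assumes PyTorch/torchvision and a hypothetical two-class label set:

import torch
import torch.nn as nn
from torchvision import models

NUM_LABELS = 2  # hypothetical label set, e.g. "sedentary" vs. "non-sedentary"

# Load a network pretrained on ImageNet and replace its classification head
# so it predicts the behavior labels instead of the 1000 ImageNet classes.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_LABELS)

# Fine-tune only the new head (a common, lightweight adaptation strategy);
# the pretrained backbone is kept frozen as a feature extractor.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc.")

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of life-logging frames."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

The sketch stops at per-frame classification; the temporal context that the paper's spatio-temporal pyramid adds on top is not shown here.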
