Learning visual clothing style with heterogeneous dyadic co-occurrences

Publication: Contribution to journal › Conference article › Research › peer-reviewed

Standard

Learning visual clothing style with heterogeneous dyadic co-occurrences. / Veit, Andreas; Kovacs, Balazs; Bell, Sean; McAuley, Julian; Bala, Kavita; Belongie, Serge.

In: Proceedings of the IEEE International Conference on Computer Vision, 17.02.2015, pp. 4642-4650.


Harvard

Veit, A, Kovacs, B, Bell, S, McAuley, J, Bala, K & Belongie, S 2015, 'Learning visual clothing style with heterogeneous dyadic co-occurrences', Proceedings of the IEEE International Conference on Computer Vision, pp. 4642-4650. https://doi.org/10.1109/ICCV.2015.527

APA

Veit, A., Kovacs, B., Bell, S., McAuley, J., Bala, K., & Belongie, S. (2015). Learning visual clothing style with heterogeneous dyadic co-occurrences. Proceedings of the IEEE International Conference on Computer Vision, 4642-4650. https://doi.org/10.1109/ICCV.2015.527

Vancouver

Veit A, Kovacs B, Bell S, McAuley J, Bala K, Belongie S. Learning visual clothing style with heterogeneous dyadic co-occurrences. Proceedings of the IEEE International Conference on Computer Vision. 2015 Feb 17;4642-4650. https://doi.org/10.1109/ICCV.2015.527

Author

Veit, Andreas ; Kovacs, Balazs ; Bell, Sean ; McAuley, Julian ; Bala, Kavita ; Belongie, Serge. / Learning visual clothing style with heterogeneous dyadic co-occurrences. In: Proceedings of the IEEE International Conference on Computer Vision. 2015 ; pp. 4642-4650.

Bibtex

@inproceedings{e677dc817e4a43508914352bf0107483,
title = "Learning visual clothing style with heterogeneous dyadic co-occurrences",
abstract = "With the rapid proliferation of smart mobile devices, users now take millions of photos every day. These include large numbers of clothing and accessory images. We would like to answer questions like 'What outfit goes well with this pair of shoes?' To answer these types of questions, one has to go beyond learning visual similarity and learn a visual notion of compatibility across categories. In this paper, we propose a novel learning framework to help answer these types of questions. The main idea of this framework is to learn a feature transformation from images of items into a latent space that expresses compatibility. For the feature transformation, we use a Siamese Convolutional Neural Network (CNN) architecture, where training examples are pairs of items that are either compatible or incompatible. We model compatibility based on co-occurrence in large-scale user behavior data, in particular co-purchase data from Amazon.com. To learn cross-category fit, we introduce a strategic method to sample training data, where pairs of items are heterogeneous dyads, i.e., the two elements of a pair belong to different high-level categories. While this approach is applicable to a wide variety of settings, we focus on the representative problem of learning compatible clothing style. Our results indicate that the proposed framework is capable of learning semantic information about visual style and is able to generate outfits of clothes, with items from different categories, that go well together.",
author = "Andreas Veit and Balazs Kovacs and Sean Bell and Julian McAuley and Kavita Bala and Serge Belongie",
note = "Publisher Copyright: {\textcopyright} 2015 IEEE.; 15th IEEE International Conference on Computer Vision, ICCV 2015 ; Conference date: 11-12-2015 Through 18-12-2015",
year = "2015",
month = feb,
day = "17",
doi = "10.1109/ICCV.2015.527",
language = "English",
pages = "4642--4650",
booktitle = "Proceedings of the IEEE International Conference on Computer Vision",
issn = "1550-5499",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
}

RIS

TY - GEN

T1 - Learning visual clothing style with heterogeneous dyadic co-occurrences

AU - Veit, Andreas

AU - Kovacs, Balazs

AU - Bell, Sean

AU - McAuley, Julian

AU - Bala, Kavita

AU - Belongie, Serge

N1 - Publisher Copyright: © 2015 IEEE.

PY - 2015/2/17

Y1 - 2015/2/17

N2 - With the rapid proliferation of smart mobile devices, users now take millions of photos every day. These include large numbers of clothing and accessory images. We would like to answer questions like 'What outfit goes well with this pair of shoes?' To answer these types of questions, one has to go beyond learning visual similarity and learn a visual notion of compatibility across categories. In this paper, we propose a novel learning framework to help answer these types of questions. The main idea of this framework is to learn a feature transformation from images of items into a latent space that expresses compatibility. For the feature transformation, we use a Siamese Convolutional Neural Network (CNN) architecture, where training examples are pairs of items that are either compatible or incompatible. We model compatibility based on co-occurrence in large-scale user behavior data, in particular co-purchase data from Amazon.com. To learn cross-category fit, we introduce a strategic method to sample training data, where pairs of items are heterogeneous dyads, i.e., the two elements of a pair belong to different high-level categories. While this approach is applicable to a wide variety of settings, we focus on the representative problem of learning compatible clothing style. Our results indicate that the proposed framework is capable of learning semantic information about visual style and is able to generate outfits of clothes, with items from different categories, that go well together.

AB - With the rapid proliferation of smart mobile devices, users now take millions of photos every day. These include large numbers of clothing and accessory images. We would like to answer questions like 'What outfit goes well with this pair of shoes?' To answer these types of questions, one has to go beyond learning visual similarity and learn a visual notion of compatibility across categories. In this paper, we propose a novel learning framework to help answer these types of questions. The main idea of this framework is to learn a feature transformation from images of items into a latent space that expresses compatibility. For the feature transformation, we use a Siamese Convolutional Neural Network (CNN) architecture, where training examples are pairs of items that are either compatible or incompatible. We model compatibility based on co-occurrence in large-scale user behavior data, in particular co-purchase data from Amazon.com. To learn cross-category fit, we introduce a strategic method to sample training data, where pairs of items are heterogeneous dyads, i.e., the two elements of a pair belong to different high-level categories. While this approach is applicable to a wide variety of settings, we focus on the representative problem of learning compatible clothing style. Our results indicate that the proposed framework is capable of learning semantic information about visual style and is able to generate outfits of clothes, with items from different categories, that go well together.

UR - http://www.scopus.com/inward/record.url?scp=84973883538&partnerID=8YFLogxK

U2 - 10.1109/ICCV.2015.527

DO - 10.1109/ICCV.2015.527

M3 - Conference article

AN - SCOPUS:84973883538

SP - 4642

EP - 4650

JO - Proceedings of the IEEE International Conference on Computer Vision

JF - Proceedings of the IEEE International Conference on Computer Vision

SN - 1550-5499

T2 - 15th IEEE International Conference on Computer Vision, ICCV 2015

Y2 - 11 December 2015 through 18 December 2015

ER -
