Conditional similarity networks

Publication: Contribution to journal › Conference article › Research › peer-reviewed

Standard

Conditional similarity networks. / Veit, Andreas; Belongie, Serge; Karaletsos, Theofanis.

In: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 06.11.2017, pp. 1781-1789.

Publication: Contribution to journal › Conference article › Research › peer-reviewed

Harvard

Veit, A, Belongie, S & Karaletsos, T 2017, 'Conditional similarity networks', Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, pp. 1781-1789. https://doi.org/10.1109/CVPR.2017.193

APA

Veit, A., Belongie, S., & Karaletsos, T. (2017). Conditional similarity networks. Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 1781-1789. https://doi.org/10.1109/CVPR.2017.193

Vancouver

Veit A, Belongie S, Karaletsos T. Conditional similarity networks. Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017. 2017 Nov 6;1781-1789. https://doi.org/10.1109/CVPR.2017.193

Author

Veit, Andreas ; Belongie, Serge ; Karaletsos, Theofanis. / Conditional similarity networks. In: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017. 2017 ; pp. 1781-1789.

Bibtex

@inproceedings{8fc513187e6246d7bca30376424cfe7f,
title = "Conditional similarity networks",
abstract = "What makes images similar? To measure the similarity between images, they are typically embedded in a feature-vector space, in which their distances preserve the relative dissimilarity. However, when learning such similarity embeddings, the simplifying assumption is commonly made that images are only compared to one unique measure of similarity. A main reason for this is that contradicting notions of similarities cannot be captured in a single space. To address this shortcoming, we propose Conditional Similarity Networks (CSNs) that learn embeddings differentiated into semantically distinct subspaces that capture the different notions of similarities. CSNs jointly learn a disentangled embedding, where features for different similarities are encoded in separate dimensions, as well as masks that select and reweight relevant dimensions to induce a subspace that encodes a specific similarity notion. We show that our approach learns interpretable image representations with visually relevant semantic subspaces. Further, when evaluating on triplet questions from multiple similarity notions, our model even outperforms the accuracy obtained by training individual specialized networks for each notion separately.",
author = "Andreas Veit and Serge Belongie and Theofanis Karaletsos",
note = "Publisher Copyright: {\textcopyright} 2017 IEEE.; 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 ; Conference date: 21-07-2017 Through 26-07-2017",
year = "2017",
month = nov,
day = "6",
doi = "10.1109/CVPR.2017.193",
language = "English",
pages = "1781--1789",
booktitle = "Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017",
}

RIS

TY - GEN

T1 - Conditional similarity networks

AU - Veit, Andreas

AU - Belongie, Serge

AU - Karaletsos, Theofanis

N1 - Publisher Copyright: © 2017 IEEE.

PY - 2017/11/6

Y1 - 2017/11/6

N2 - What makes images similar? To measure the similarity between images, they are typically embedded in a feature-vector space, in which their distances preserve the relative dissimilarity. However, when learning such similarity embeddings, the simplifying assumption is commonly made that images are only compared to one unique measure of similarity. A main reason for this is that contradicting notions of similarities cannot be captured in a single space. To address this shortcoming, we propose Conditional Similarity Networks (CSNs) that learn embeddings differentiated into semantically distinct subspaces that capture the different notions of similarities. CSNs jointly learn a disentangled embedding, where features for different similarities are encoded in separate dimensions, as well as masks that select and reweight relevant dimensions to induce a subspace that encodes a specific similarity notion. We show that our approach learns interpretable image representations with visually relevant semantic subspaces. Further, when evaluating on triplet questions from multiple similarity notions, our model even outperforms the accuracy obtained by training individual specialized networks for each notion separately.

AB - What makes images similar? To measure the similarity between images, they are typically embedded in a feature-vector space, in which their distances preserve the relative dissimilarity. However, when learning such similarity embeddings, the simplifying assumption is commonly made that images are only compared to one unique measure of similarity. A main reason for this is that contradicting notions of similarities cannot be captured in a single space. To address this shortcoming, we propose Conditional Similarity Networks (CSNs) that learn embeddings differentiated into semantically distinct subspaces that capture the different notions of similarities. CSNs jointly learn a disentangled embedding, where features for different similarities are encoded in separate dimensions, as well as masks that select and reweight relevant dimensions to induce a subspace that encodes a specific similarity notion. We show that our approach learns interpretable image representations with visually relevant semantic subspaces. Further, when evaluating on triplet questions from multiple similarity notions, our model even outperforms the accuracy obtained by training individual specialized networks for each notion separately.

UR - http://www.scopus.com/inward/record.url?scp=85044252173&partnerID=8YFLogxK

U2 - 10.1109/CVPR.2017.193

DO - 10.1109/CVPR.2017.193

M3 - Conference article

AN - SCOPUS:85044252173

SP - 1781

EP - 1789

JO - Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017

JF - Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017

T2 - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017

Y2 - 21 July 2017 through 26 July 2017

ER -
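Code sketch

The abstract describes the core mechanism: a shared, disentangled embedding plus one learned non-negative mask per similarity notion, where each mask selects and reweights embedding dimensions to induce a notion-specific subspace, trained with triplets conditioned on the notion. Below is a minimal PyTorch-style sketch of that idea. All names here (ConditionalSimilarityNet, n_notions, the stand-in linear backbone) are this sketch's own assumptions, not the authors' released implementation; the paper uses a ConvNet backbone, which a linear encoder stands in for to keep the sketch self-contained.

# Minimal sketch of the Conditional Similarity Network idea from the
# abstract: shared embedding + per-notion masks forming similarity subspaces.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConditionalSimilarityNet(nn.Module):
    def __init__(self, backbone: nn.Module, embed_dim: int, n_notions: int):
        super().__init__()
        self.backbone = backbone  # any image encoder mapping images -> (B, embed_dim)
        # One learnable mask per similarity notion; ReLU at use time keeps
        # the masks non-negative so they act as soft dimension selectors.
        self.masks = nn.Parameter(torch.randn(n_notions, embed_dim) * 0.1 + 0.9)

    def forward(self, images: torch.Tensor, notion: torch.Tensor) -> torch.Tensor:
        x = self.backbone(images)       # disentangled embedding, shape (B, embed_dim)
        m = F.relu(self.masks[notion])  # per-sample soft mask, shape (B, embed_dim)
        z = x * m                       # project into the chosen notion's subspace
        return F.normalize(z, dim=1)    # unit norm for stable distance comparisons


def triplet_loss(anchor, positive, negative, margin: float = 0.2):
    # Standard triplet margin loss on the masked embeddings.
    d_pos = (anchor - positive).pow(2).sum(1)
    d_neg = (anchor - negative).pow(2).sum(1)
    return F.relu(d_pos - d_neg + margin).mean()


# Usage sketch: each triplet carries the notion (e.g. color vs. category)
# under which the anchor/positive/negative comparison holds.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 64))  # stand-in encoder
model = ConditionalSimilarityNet(backbone, embed_dim=64, n_notions=4)

imgs = torch.randn(8, 3, 64, 64)
notion = torch.randint(0, 4, (8,))
za = model(imgs, notion)                      # anchors in the notion's subspace
zp = model(torch.randn(8, 3, 64, 64), notion)  # positives
zn = model(torch.randn(8, 3, 64, 64), notion)  # negatives
loss = triplet_loss(za, zp, zn)
loss.backward()

Because the masks are learned jointly with the embedding, dimensions relevant to one notion (say, color) can be suppressed when comparing under another (say, category), which is what lets a single network answer triplet questions from contradicting notions of similarity.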
