Pose, illumination and expression invariant pairwise face-similarity measure via Doppelgänger list comparison

Publication: Contribution to journal › Conference article › Research › peer-reviewed

Standard

Pose, illumination and expression invariant pairwise face-similarity measure via Doppelgänger list comparison. / Schroff, Florian; Treibitz, Tali; Kriegman, David; Belongie, Serge.

In: Proceedings of the IEEE International Conference on Computer Vision, 2011, pp. 2494-2501.


Harvard

Schroff, F, Treibitz, T, Kriegman, D & Belongie, S 2011, 'Pose, illumination and expression invariant pairwise face-similarity measure via Doppelgänger list comparison', Proceedings of the IEEE International Conference on Computer Vision, pp. 2494-2501. https://doi.org/10.1109/ICCV.2011.6126535

APA

Schroff, F., Treibitz, T., Kriegman, D., & Belongie, S. (2011). Pose, illumination and expression invariant pairwise face-similarity measure via Doppelgänger list comparison. Proceedings of the IEEE International Conference on Computer Vision, 2494-2501. https://doi.org/10.1109/ICCV.2011.6126535

Vancouver

Schroff F, Treibitz T, Kriegman D, Belongie S. Pose, illumination and expression invariant pairwise face-similarity measure via Doppelgänger list comparison. Proceedings of the IEEE International Conference on Computer Vision. 2011;2494-2501. https://doi.org/10.1109/ICCV.2011.6126535

Author

Schroff, Florian ; Treibitz, Tali ; Kriegman, David ; Belongie, Serge. / Pose, illumination and expression invariant pairwise face-similarity measure via Doppelgänger list comparison. In: Proceedings of the IEEE International Conference on Computer Vision. 2011 ; pp. 2494-2501.

Bibtex

@inproceedings{42bd047b856f48c1ab9f65ec94d76d9f,
title = "Pose, illumination and expression invariant pairwise face-similarity measure via Doppelgänger list comparison",
abstract = "Face recognition approaches have traditionally focused on direct comparisons between aligned images, e.g. using pixel values or local image features. Such comparisons become prohibitively difficult when comparing faces across extreme differences in pose, illumination and expression. The goal of this work is to develop a face-similarity measure that is largely invariant to these differences. We propose a novel data-driven method based on the insight that comparing images of faces is most meaningful when they are in comparable imaging conditions. To this end we describe an image of a face by an ordered list of identities from a Library. The order of the list is determined by the similarity of the Library images to the probe image. The lists act as a signature for each face image: similarity between face images is determined via the similarity of the signatures. Here the CMU Multi-PIE database, which includes images of 337 individuals in more than 2000 pose, lighting and expression combinations, serves as the Library. We show improved performance over state-of-the-art face-similarity measures based on local features, such as FPLBP, especially across large pose variations on FacePix and Multi-PIE. On LFW we show improved performance in comparison with measures like SIFT (on fiducials), LBP, FPLBP and Gabor (C1).",
author = "Florian Schroff and Tali Treibitz and David Kriegman and Serge Belongie",
year = "2011",
doi = "10.1109/ICCV.2011.6126535",
language = "English",
pages = "2494--2501",
journal = "Proceedings of the IEEE International Conference on Computer Vision",
note = "2011 IEEE International Conference on Computer Vision, ICCV 2011 ; Conference date: 06-11-2011 Through 13-11-2011",

}

RIS

TY - GEN

T1 - Pose, illumination and expression invariant pairwise face-similarity measure via Doppelgänger list comparison

AU - Schroff, Florian

AU - Treibitz, Tali

AU - Kriegman, David

AU - Belongie, Serge

PY - 2011

Y1 - 2011

N2 - Face recognition approaches have traditionally focused on direct comparisons between aligned images, e.g. using pixel values or local image features. Such comparisons become prohibitively difficult when comparing faces across extreme differences in pose, illumination and expression. The goal of this work is to develop a face-similarity measure that is largely invariant to these differences. We propose a novel data-driven method based on the insight that comparing images of faces is most meaningful when they are in comparable imaging conditions. To this end we describe an image of a face by an ordered list of identities from a Library. The order of the list is determined by the similarity of the Library images to the probe image. The lists act as a signature for each face image: similarity between face images is determined via the similarity of the signatures. Here the CMU Multi-PIE database, which includes images of 337 individuals in more than 2000 pose, lighting and expression combinations, serves as the Library. We show improved performance over state-of-the-art face-similarity measures based on local features, such as FPLBP, especially across large pose variations on FacePix and Multi-PIE. On LFW we show improved performance in comparison with measures like SIFT (on fiducials), LBP, FPLBP and Gabor (C1).

AB - Face recognition approaches have traditionally focused on direct comparisons between aligned images, e.g. using pixel values or local image features. Such comparisons become prohibitively difficult when comparing faces across extreme differences in pose, illumination and expression. The goal of this work is to develop a face-similarity measure that is largely invariant to these differences. We propose a novel data-driven method based on the insight that comparing images of faces is most meaningful when they are in comparable imaging conditions. To this end we describe an image of a face by an ordered list of identities from a Library. The order of the list is determined by the similarity of the Library images to the probe image. The lists act as a signature for each face image: similarity between face images is determined via the similarity of the signatures. Here the CMU Multi-PIE database, which includes images of 337 individuals in more than 2000 pose, lighting and expression combinations, serves as the Library. We show improved performance over state-of-the-art face-similarity measures based on local features, such as FPLBP, especially across large pose variations on FacePix and Multi-PIE. On LFW we show improved performance in comparison with measures like SIFT (on fiducials), LBP, FPLBP and Gabor (C1).

UR - http://www.scopus.com/inward/record.url?scp=84856663391&partnerID=8YFLogxK

U2 - 10.1109/ICCV.2011.6126535

DO - 10.1109/ICCV.2011.6126535

M3 - Conference article

AN - SCOPUS:84856663391

SP - 2494

EP - 2501

JO - Proceedings of the IEEE International Conference on Computer Vision

JF - Proceedings of the IEEE International Conference on Computer Vision

T2 - 2011 IEEE International Conference on Computer Vision, ICCV 2011

Y2 - 6 November 2011 through 13 November 2011

ER -
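
The abstract above outlines the Doppelgänger list idea: each face image is summarized by a ranked list of Library identities, and two faces are compared through the similarity of their lists rather than by direct image comparison. The Python sketch below is a minimal illustration of that pipeline under stated assumptions; the feature representation, the Euclidean ranking distance, and the Kendall-tau list comparison are illustrative choices, not the paper's exact formulation.

import numpy as np
from scipy.stats import kendalltau

def doppelganger_signature(probe_feat, library_feats, library_ids):
    # Rank all Library images by distance to the probe (smaller = more similar),
    # then keep each identity at its best-ranked occurrence. The resulting
    # ordered identity list is the probe's "signature".
    dists = np.linalg.norm(library_feats - probe_feat, axis=1)
    signature, seen = [], set()
    for idx in np.argsort(dists):
        ident = library_ids[idx]
        if ident not in seen:
            seen.add(ident)
            signature.append(ident)
    return signature

def list_similarity(sig_a, sig_b):
    # Compare two signatures by how similarly they order the identities
    # they share; Kendall tau is one plausible rank-agreement score.
    common = [i for i in sig_a if i in sig_b]
    ranks_a = [sig_a.index(i) for i in common]
    ranks_b = [sig_b.index(i) for i in common]
    tau, _ = kendalltau(ranks_a, ranks_b)
    return tau  # higher = the two faces resemble the same Library identities

Given per-image feature vectors (e.g. local descriptors such as the LBP or FPLBP named in the abstract), two probe images are judged similar when list_similarity of their signatures is high; the Library here plays the role that the CMU Multi-PIE images play in the paper.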
