Bayesian representation learning with oracle constraints

Research output: Contribution to conference › Paper › Research › peer-review

Standard

Bayesian representation learning with oracle constraints. / Karaletsos, Theofanis; Belongie, Serge; Rätsch, Gunnar.

2016. Paper presented at 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico.
Harvard

Karaletsos, T, Belongie, S & Rätsch, G 2016, 'Bayesian representation learning with oracle constraints', Paper presented at 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, 02/05/2016 - 04/05/2016.

APA

Karaletsos, T., Belongie, S., & Rätsch, G. (2016). Bayesian representation learning with oracle constraints. Paper presented at 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico.

Vancouver

Karaletsos T, Belongie S, Rätsch G. Bayesian representation learning with oracle constraints. 2016. Paper presented at 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico.

Author

Karaletsos, Theofanis ; Belongie, Serge ; Rätsch, Gunnar. / Bayesian representation learning with oracle constraints. Paper presented at 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico.

Bibtex

@conference{7a78e07ad35a44e4a0dfd8be6ff6339e,
title = "Bayesian representation learning with oracle constraints",
abstract = "Representation learning systems typically rely on massive amounts of labeled data in order to be trained to high accuracy. Recently, high-dimensional parametric models like neural networks have succeeded in building rich representations using either compressive, reconstructive or supervised criteria. However, the semantic structure inherent in observations is oftentimes lost in the process. Human perception excels at understanding semantics but cannot always be expressed in terms of labels. Thus, oracles or human-in-the-loop systems, for example crowdsourcing, are often employed to generate similarity constraints using an implicit similarity function encoded in human perception. In this work we propose to combine generative unsupervised feature learning with a probabilistic treatment of oracle information like triplets in order to transfer implicit privileged oracle knowledge into explicit nonlinear Bayesian latent factor models of the observations. We use a fast variational algorithm to learn the joint model and demonstrate applicability to a well-known image dataset. We show how implicit triplet information can provide rich information to learn representations that outperform previous metric learning approaches as well as generative models without this side-information in a variety of predictive tasks. In addition, we illustrate that the proposed approach compartmentalizes the latent spaces semantically which allows interpretation of the latent variables.",
author = "Theofanis Karaletsos and Serge Belongie and Gunnar R{\"a}tsch",
note = "Publisher Copyright: {\textcopyright} ICLR 2016: San Juan, Puerto Rico. All Rights Reserved.; 4th International Conference on Learning Representations, ICLR 2016 ; Conference date: 02-05-2016 Through 04-05-2016",
year = "2016",
language = "English",
}

RIS

TY - CONF

T1 - Bayesian representation learning with oracle constraints

AU - Karaletsos, Theofanis

AU - Belongie, Serge

AU - Rätsch, Gunnar

N1 - Publisher Copyright: © ICLR 2016: San Juan, Puerto Rico. All Rights Reserved.

PY - 2016

Y1 - 2016

N2 - Representation learning systems typically rely on massive amounts of labeled data in order to be trained to high accuracy. Recently, high-dimensional parametric models like neural networks have succeeded in building rich representations using either compressive, reconstructive or supervised criteria. However, the semantic structure inherent in observations is oftentimes lost in the process. Human perception excels at understanding semantics but cannot always be expressed in terms of labels. Thus, oracles or human-in-the-loop systems, for example crowdsourcing, are often employed to generate similarity constraints using an implicit similarity function encoded in human perception. In this work we propose to combine generative unsupervised feature learning with a probabilistic treatment of oracle information like triplets in order to transfer implicit privileged oracle knowledge into explicit nonlinear Bayesian latent factor models of the observations. We use a fast variational algorithm to learn the joint model and demonstrate applicability to a well-known image dataset. We show how implicit triplet information can provide rich information to learn representations that outperform previous metric learning approaches as well as generative models without this side-information in a variety of predictive tasks. In addition, we illustrate that the proposed approach compartmentalizes the latent spaces semantically which allows interpretation of the latent variables.

AB - Representation learning systems typically rely on massive amounts of labeled data in order to be trained to high accuracy. Recently, high-dimensional parametric models like neural networks have succeeded in building rich representations using either compressive, reconstructive or supervised criteria. However, the semantic structure inherent in observations is oftentimes lost in the process. Human perception excels at understanding semantics but cannot always be expressed in terms of labels. Thus, oracles or human-in-the-loop systems, for example crowdsourcing, are often employed to generate similarity constraints using an implicit similarity function encoded in human perception. In this work we propose to combine generative unsupervised feature learning with a probabilistic treatment of oracle information like triplets in order to transfer implicit privileged oracle knowledge into explicit nonlinear Bayesian latent factor models of the observations. We use a fast variational algorithm to learn the joint model and demonstrate applicability to a well-known image dataset. We show how implicit triplet information can provide rich information to learn representations that outperform previous metric learning approaches as well as generative models without this side-information in a variety of predictive tasks. In addition, we illustrate that the proposed approach compartmentalizes the latent spaces semantically which allows interpretation of the latent variables.

UR - http://www.scopus.com/inward/record.url?scp=85083954103&partnerID=8YFLogxK

M3 - Paper

AN - SCOPUS:85083954103

T2 - 4th International Conference on Learning Representations, ICLR 2016

Y2 - 2 May 2016 through 4 May 2016

ER -