Empirical analysis of the divergence of Gibbs sampling based learning algorithms for restricted Boltzmann machines

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

Learning algorithms relying on Gibbs-sampling-based stochastic approximations of the log-likelihood gradient have become a common way to train Restricted Boltzmann Machines (RBMs). We study three of these methods: Contrastive Divergence (CD) and its refined variants Persistent CD (PCD) and Fast PCD (FPCD). Because the approximations are biased, the maximum of the log-likelihood is not necessarily reached. Recently, it has been shown that CD, PCD, and FPCD can even lead to a steady decrease of the log-likelihood during learning. Using artificial data sets from the literature, we study these divergence effects in more detail. Our results indicate that the log-likelihood tends to diverge especially when the target distribution is difficult for the RBM to learn. The decrease of the likelihood cannot be detected by an increase of the reconstruction error, which has been proposed as a stopping criterion for CD learning. Weight decay with a carefully chosen weight-decay parameter can prevent divergence.
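For concreteness, the following is a minimal sketch of one CD-k parameter update for a binary RBM, written in NumPy. It illustrates the general algorithm class studied in the paper and is not the authors' code; all names (cd_k_update, sample_hidden, weight_decay, ...) are ours. The returned reconstruction error is the quantity that, according to the paper's results, does not reveal the divergence of the log-likelihood, and the weight_decay term corresponds to the regularization that can prevent it.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v, W, c, rng):
    # P(h = 1 | v) for a binary RBM with weights W and hidden bias c
    p = sigmoid(v @ W + c)
    return p, (rng.random(p.shape) < p).astype(float)

def sample_visible(h, W, b, rng):
    # P(v = 1 | h) with visible bias b
    p = sigmoid(h @ W.T + b)
    return p, (rng.random(p.shape) < p).astype(float)

def cd_k_update(v0, W, b, c, k, lr, weight_decay, rng):
    # One CD-k update on a mini-batch v0 (shape: batch x visible units).
    ph0, h = sample_hidden(v0, W, c, rng)
    vk = v0
    for _ in range(k):                      # k steps of Gibbs sampling
        _, vk = sample_visible(h, W, b, rng)
        phk, h = sample_hidden(vk, W, c, rng)
    n = v0.shape[0]
    # Biased approximation of the log-likelihood gradient:
    # data term minus the term estimated from the k-step Gibbs chain.
    dW = (v0.T @ ph0 - vk.T @ phk) / n - weight_decay * W
    db = (v0 - vk).mean(axis=0)
    dc = (ph0 - phk).mean(axis=0)
    W += lr * dW
    b += lr * db
    c += lr * dc
    # Reconstruction error: monitoring it does not detect likelihood divergence.
    return np.mean((v0 - vk) ** 2)

PCD differs only in that the state of the Gibbs chain is carried over between parameter updates instead of being restarted at the training data, and FPCD additionally maintains a second, quickly changing set of weights to drive the persistent chain.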
Original language: English
Title: Artificial Neural Networks – ICANN 2010: 20th International Conference, Thessaloniki, Greece, September 15-18, 2010, Proceedings, Part III
Editors: K. Diamantaras, W. Duch, L. S. Iliadis
Number of pages: 10
Publisher: Springer
Publication date: 2010
Pages: 208-217
ISBN (Print): 978-3-642-15824-7
ISBN (Electronic): 978-3-642-15825-4
DOI
Status: Published - 2010
Externally published: Yes
Event: 20th International Conference on Artificial Neural Networks (ICANN 2010) - Thessaloniki, Greece
Duration: 15 Sep 2010 - 18 Sep 2010

Conference

Conference: 20th International Conference on Artificial Neural Networks (ICANN 2010)
Country: Greece
City: Thessaloniki
Period: 15/09/2010 - 18/09/2010
Name: Lecture Notes in Computer Science
Volume: 6354

ID: 33862803