Training Restricted Boltzmann Machines

Research output: Book/Report › Ph.D. thesis › Research

  • Asja Fischer
Restricted Boltzmann machines (RBMs) are probabilistic graphical models that can also be interpreted as stochastic neural networks. Training RBMs is known to be challenging. Computing the likelihood of the model parameters, or its gradient, is in general computationally intensive. Thus, training relies on sampling-based approximations of the log-likelihood gradient.
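
To make the sampling-based approximation concrete, below is a minimal sketch of contrastive divergence with k Gibbs steps (CD-k) for a binary-binary RBM, one standard estimator of this kind. The NumPy implementation and all names are illustrative assumptions, not code from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd_k_gradient(v0, W, b, c, k=1):
        """CD-k estimate of the log-likelihood gradient of a binary RBM.

        v0: batch of visible vectors, shape (n, n_v)
        W:  weights, shape (n_v, n_h); b, c: visible/hidden biases.
        Returns (dW, db, dc), the (biased) gradient estimates.
        """
        # Positive phase: hidden activation probabilities given the data.
        ph0 = sigmoid(v0 @ W + c)
        v = v0
        # Negative phase: k steps of block Gibbs sampling starting at the data.
        for _ in range(k):
            h = (rng.random(ph0.shape) < sigmoid(v @ W + c)).astype(float)
            v = (rng.random(v0.shape) < sigmoid(h @ W.T + b)).astype(float)
        phk = sigmoid(v @ W + c)
        n = v0.shape[0]
        dW = (v0.T @ ph0 - v.T @ phk) / n
        db = (v0 - v).mean(axis=0)
        dc = (ph0 - phk).mean(axis=0)
        return dW, db, dc

The estimate is biased because the Gibbs chain is truncated after k steps rather than run to its stationary distribution; this truncation is the source of the approximation error analysed in the abstract.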

I will present an empirical and theoretical analysis of the bias of these approximations and show that the approximation error can lead to a distortion of the learning process. The bias decreases with increasing mixing rate of the applied sampling procedure, and I will introduce a transition operator that leads to faster mixing. Finally, a different parametrisation of RBMs will be discussed that leads to better learning results and more robustness against changes in the data representation.
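
The abstract does not spell out the parametrisation. One prominent candidate from the RBM literature is the "centered" parametrisation, in which visible and hidden units are shifted by offset parameters (typically their mean activities) before entering the interaction term. The sketch below, with illustrative names and a NumPy implementation that are my own assumptions rather than the thesis's definitions, shows the centered energy and the resulting hidden conditional.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def centered_energy(v, h, W, b, c, mu, lam):
        # E(v, h) = -(v - mu)^T W (h - lam) - b^T v - c^T h
        # mu, lam: offsets, e.g. mean activities of visible/hidden units.
        return -(v - mu) @ W @ (h - lam) - b @ v - c @ h

    def p_h_given_v(v, W, c, mu):
        # Hidden conditional of the centered model; only the input to each
        # hidden unit changes relative to the standard parametrisation.
        return sigmoid((v - mu) @ W + c)

Because the offsets absorb the mean of the data, a change such as flipping the 0/1 coding of the inputs is largely taken up by the offsets instead of forcing the weights to relearn, which is one sense in which such a parametrisation can be robust to changes in the data representation.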
Original language: English
Publisher: Department of Computer Science, Faculty of Science, University of Copenhagen
Number of pages: 212
Publication status: Published - 2014
