Evolution of Stacked Autoencoders
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Standard
Evolution of Stacked Autoencoders. / Silhan, Tim; Oehmcke, Stefan; Kramer, Oliver.
2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Proceedings. Institute of Electrical and Electronics Engineers Inc., 2019. p. 823-830, Article 8790182.
RIS
TY - GEN
T1 - Evolution of Stacked Autoencoders
AU - Silhan, Tim
AU - Oehmcke, Stefan
AU - Kramer, Oliver
PY - 2019
Y1 - 2019
N2 - Choosing the best hyperparameters for neural networks is a big challenge. This paper proposes a method that automatically initializes and adjusts hyperparameters during the training process of stacked autoencoders. A population of autoencoders is trained with gradient-descent-based weight updates, while hyperparameters are mutated and weights are inherited in a Lamarckian kind of way. The training is conducted layer-wise, while each new layer initiates a new neuroevolutionary optimization process. In the fitness function of the evolutionary approach a dimensionality reduction quality measure is employed. Experiments show the contribution of the most significant hyperparameters, while analyzing their lineage during the training process. The results confirm that the proposed method outperforms a baseline approach on MNIST, FashionMNIST, and the Year Prediction Million Song Database.
AB - Choosing the best hyperparameters for neural networks is a big challenge. This paper proposes a method that automatically initializes and adjusts hyperparameters during the training process of stacked autoencoders. A population of autoencoders is trained with gradient-descent-based weight updates, while hyperparameters are mutated and weights are inherited in a Lamarckian kind of way. The training is conducted layer-wise, while each new layer initiates a new neuroevolutionary optimization process. In the fitness function of the evolutionary approach a dimensionality reduction quality measure is employed. Experiments show the contribution of the most significant hyperparameters, while analyzing their lineage during the training process. The results confirm that the proposed method outperforms a baseline approach on MNIST, FashionMNIST, and the Year Prediction Million Song Database.
KW - autoencoder
KW - hyperparameter tuning
KW - neuroevolution
UR - http://www.scopus.com/inward/record.url?scp=85071314505&partnerID=8YFLogxK
U2 - 10.1109/CEC.2019.8790182
DO - 10.1109/CEC.2019.8790182
M3 - Article in proceedings
SP - 823
EP - 830
BT - 2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE Congress on Evolutionary Computation, CEC 2019
Y2 - 10 June 2019 through 13 June 2019
ER -
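The abstract above describes a Lamarckian neuroevolution loop: a population of autoencoders is trained with gradient descent, offspring inherit the trained weights, and only the hyperparameters are mutated. The following is a minimal illustrative sketch of that idea, not the authors' implementation: it uses a tiny linear autoencoder, mutates a single hyperparameter (the learning rate), and uses reconstruction error as a stand-in for the paper's dimensionality-reduction quality measure.

```python
# Illustrative sketch (NOT the paper's code) of Lamarckian neuroevolution
# for autoencoders: gradient-descent weight updates, weight inheritance,
# and hyperparameter mutation. All names and settings here are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def init_weights(d_in, d_hid):
    return {"enc": rng.normal(0, 0.1, (d_in, d_hid)),
            "dec": rng.normal(0, 0.1, (d_hid, d_in))}

def train_step(w, x, lr):
    # One gradient-descent step on the mean squared reconstruction error
    # of a linear autoencoder (nonlinearities omitted for brevity).
    h = x @ w["enc"]
    x_hat = h @ w["dec"]
    err = x_hat - x
    w["dec"] -= lr * (h.T @ err) / len(x)
    w["enc"] -= lr * (x.T @ (err @ w["dec"].T)) / len(x)
    return float(np.mean(err ** 2))

def evolve(x, pop_size=6, generations=10, steps=20):
    d_in, d_hid = x.shape[1], 2
    pop = [{"w": init_weights(d_in, d_hid),
            "lr": 10 ** rng.uniform(-3, -1)} for _ in range(pop_size)]
    for _ in range(generations):
        for ind in pop:
            for _ in range(steps):
                ind["fit"] = train_step(ind["w"], x, ind["lr"])
        pop.sort(key=lambda ind: ind["fit"])  # lower error = fitter
        parent = pop[0]
        # Offspring: Lamarckian weight inheritance + mutated learning rate
        # (clipped so a single mutation cannot make training diverge).
        pop = [parent] + [
            {"w": {k: v.copy() for k, v in parent["w"].items()},
             "lr": float(np.clip(parent["lr"] * 2 ** rng.normal(0, 0.3),
                                 1e-4, 0.2))}
            for _ in range(pop_size - 1)]
    return pop[0]

x = rng.normal(size=(64, 5))
best = evolve(x)
print(round(best["fit"], 4))
```

The paper trains layer-wise, starting a fresh evolutionary run per stacked layer; the sketch covers only a single layer's optimization loop.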