MSc Thesis Defence: Jens Egholm Pedersen


Modelling Neural Learning Systems in Artificial and Spiking Neural Networks


Spiking neural networks are receiving increasing attention due to their advantages over traditional artificial neural networks. They have proven to be energy-efficient, biologically plausible, and up to 10^5 times faster when simulated on analogue (neuromorphic) chips. Artificial neural network libraries use computational graphs as a pervasive representation; spiking models, however, remain heterogeneous and difficult to train.

Using the hypothetico-deductive method, the thesis posits two hypotheses that examine whether 1) there exists a common representation for both neural network paradigms, and 2) spiking and non-spiking models can learn a simple recognition task. The first hypothesis is confirmed by specifying and implementing a domain-specific language that generates semantically similar spiking and non-spiking neural networks. Through three classification experiments, the second hypothesis is shown to hold for non-spiking models, but cannot be confirmed for the spiking models.

The thesis contributes three findings: 1) a domain-specific language for modelling neural network topologies, 2) a preliminary model for generalisable learning through backpropagation in spiking neural networks, and 3) a method for transferring optimised non-spiking parameters to spiking neural networks. The latter contribution is promising because the vast machine learning literature can spill over into the emerging field of spiking neural networks and neuromorphic computing. Future work includes improving the backpropagation model, exploring time-dependent models of learning, and adding support for neuromorphic chips.

Speaker: Jens Egholm Pedersen

External examiner: Mads Rosendahl, Associate Professor, RUC
Supervisor: Martin Elsman, Associate Professor, DIKU