23 May 2019

Two DIKU researchers receive grants from Independent Research Fund Denmark

New grants

Associate professors Yongluan Zhou and Yevgeny Seldin have each received approx. DKK 2.9 million from Independent Research Fund Denmark (DFF) to research consistent and efficient event-driven architectures (EDA) and the theoretical foundations of learning with worst-case and easy data, respectively.

Both projects will run for four years from 1 September 2019.  

CEEDA: Consistent and Efficient Event-Driven Architecture

In his new project, Yongluan Zhou will propose a deterministic transaction execution scheme for actor frameworks that achieves high data consistency without sacrificing performance, and he will develop a prototype and simulators of use cases in IoT and logistics to test the hypothesis.

Project background
Actor-based programming frameworks like Akka and Orleans facilitate the development of scalable, concurrent and distributed computations over high-throughput events, and they are highly popular in applications using event-driven architectures, such as IoT and online games. In such frameworks, actors maintain local states and act upon incoming events asynchronously. Maintaining data consistency is of paramount importance, as inconsistent local states would substantially complicate reasoning about business logic and maintaining system properties. Most actor frameworks, however, opt for performance over data consistency and leave it to developers to maintain consistency themselves, which is a highly complicated task.
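The actor model described above can be illustrated with a minimal sketch. This is hypothetical Python code, not Akka's or Orleans' actual API: an actor owns private local state and processes events from its mailbox one at a time, so its state is never touched by two handlers at once.

```python
import asyncio

# Hypothetical minimal actor; names and structure are illustrative only.
class CounterActor:
    """An actor that owns private local state and processes
    incoming events sequentially from its mailbox."""

    def __init__(self):
        self._count = 0                  # local state, never shared
        self._mailbox = asyncio.Queue()  # incoming events

    async def send(self, event):
        await self._mailbox.put(event)   # asynchronous delivery

    async def run(self, n_events):
        # Events are handled one at a time, so no locking is needed
        # to keep the local state consistent within this actor.
        for _ in range(n_events):
            event = await self._mailbox.get()
            if event == "increment":
                self._count += 1
        return self._count

async def main():
    actor = CounterActor()
    for _ in range(3):
        await actor.send("increment")
    return await actor.run(3)

print(asyncio.run(main()))  # prints 3
```

Note that this single-actor guarantee is exactly what breaks down once a business transaction spans several actors, which is the consistency gap the project targets.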

Theoretical Foundations of Learning with Worst-Case and Easy Data

In his project, Yevgeny Seldin will develop a theory of learning with data that is not necessarily i.i.d. and may even be adversarial. Online learning research has recently produced a number of algorithms that are optimal in i.i.d. and adversarial environments, as well as in a whole range of intermediate settings. He plans to build on these results and extend them to more challenging scenarios.
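As a minimal sketch of the online learning protocol such results live in (illustrative code, not the project's algorithm), the classic Hedge / multiplicative-weights learner keeps its cumulative loss close to that of the best fixed action even when the losses are chosen adversarially:

```python
import math

def hedge(loss_rounds, n_actions, eta=0.5):
    """Hedge: play a distribution over actions, then exponentially
    downweight the actions that incurred loss this round."""
    weights = [1.0] * n_actions
    total_loss = 0.0
    for losses in loss_rounds:            # losses may be adversarial
        z = sum(weights)
        probs = [w / z for w in weights]  # learner's distribution
        total_loss += sum(p * l for p, l in zip(probs, losses))
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return total_loss

# Adversary alternates which of two actions is bad over 100 rounds;
# the best fixed action still loses 50, and Hedge stays close to that.
rounds = [[1.0, 0.0] if t % 2 == 0 else [0.0, 1.0] for t in range(100)]
print(hedge(rounds, 2))
```

The learning rate `eta` is where i.i.d. versus adversarial assumptions bite: tunings that exploit easy (i.i.d.) data can fail against an adversary, and algorithms that are simultaneously optimal in both regimes are among the recent results the project builds on.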

Project background
Most existing machine learning algorithms, from support vector machines to neural networks and deep learning, are based on the i.i.d. assumption that training and test data are independent and come from the same distribution. However, many real-world problems violate this assumption. In spam filtering, for example, spammers do not generate spam from a stationary distribution, but actively seek worst-case instances that get through the filter.