MSc Defences

See the list of MSc defences at DIKU below. The list is updated continuously.

If a defence is announced as an 'online defence', the student has to be alone in the room during the examination and assessment. Guests can participate online, but the links for the defences are not public. If you want to attend a defence, please contact uddannelse@diku.dk or the supervisor for a link.

Computer Science

June

Title

Banking the unbanked: Future-proofing the least developed countries as they go from cash to online payment

Abstract

Banking is a necessity for everyone: it is a key factor in reducing poverty and a focal point for many organizations around the world. Unfortunately, 1.7 billion people remain unbanked. We take an example-driven approach to explore why this is the case, where it is the case, and how we can bank the people of these countries. We introduce a banking model based on M-Pesa that circumvents some of the complications of the M-Pesa model. In these regions, cash is king. As the digital divide lessens, we implement two systems based around this model: one for the current generation, based on the technology already available, and one for future generations, based on technology that will become available. We find that by converting from a static agent model to a dynamic one, multiple benefits appear: the distance to banks is reduced, fees might be reduced, new job opportunities are created, and lack of identification might no longer be a limiting factor.

Time and place

17 June at 14:00

Online

Supervisor(s)  

Fritz Henglein, Søren Terp Hørlück Jessen

External examiner(s)  

Mads Rosendahl

 

 

Title

Using Graph Neural Networks To Learn Node Embeddings For Spatial Transcriptomics Neighborhood Graphs

Abstract

Recently, spatial transcriptomics methods have emerged and become more accessible. However, the number of computational methods that make use of the spatial information is limited: existing machine learning methods either do not incorporate spatial aspects or only work on regular structures. My aim with this thesis is to present a machine learning approach that makes use of the true strength of spatial transcriptomics technology: spatiality. By turning spatial data into neighborhood graphs, we abstract the spatial information and make it possible to work with Graph Neural Networks. With these, we learn how to aggregate spot information with neighboring spot information and use these aggregations for machine learning predictions. To facilitate this process, I provide a user-friendly pipeline that assists with the graph construction, model creation and tuning, and the extraction of the node embeddings, i.e. the aggregated spot information. I compare the results with a benchmark model that does not factor in spatial information, to compare the method to neighborhood-agnostic approaches. I found that our approach outperforms machine learning methods that do not factor in spatial information by 7% in prediction accuracy on a supervised task classifying multiple annotated brain regions within a mouse brain atlas, with an overall score of 79.01%. Furthermore, I present how the node embeddings serve downstream data analysis tasks like clustering and anomaly detection. Applying my method to another use case, detecting Alzheimer's diseased brain tissue spots, shows that our approach works across different datasets and use cases.
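
As an illustration of the core idea, turning spot coordinates into a neighborhood graph and aggregating neighboring expression profiles, consider the following minimal sketch. It is not the thesis pipeline: the function name, the k-nearest-neighbor graph and the single round of mean aggregation are illustrative assumptions.

```python
# A minimal sketch (not the thesis pipeline): build a k-nearest-neighbor
# graph from spot coordinates and aggregate each spot's expression profile
# with its neighbors', GraphSAGE-style. Names and k are assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_embeddings(coords, expression, k=6):
    """coords: (n_spots, 2) positions; expression: (n_spots, n_genes)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(coords)
    _, idx = nn.kneighbors(coords)          # includes each spot itself
    neighbors = idx[:, 1:]                  # drop self
    neighbor_mean = expression[neighbors].mean(axis=1)  # one aggregation round
    # concatenate self and neighborhood information as the node embedding
    return np.concatenate([expression, neighbor_mean], axis=1)
```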

Time and place

21 June at 15:00

Online

Supervisor(s)  

Anders Krogh, Tune Pers, Petar Todorov

External examiner(s)  

Jes Frellsen

 

 


July

Title

WebAssembly Backends for Futhark

Abstract

Futhark is a high-performance, purely functional data-parallel array programming language targeting parallel compute hardware. Futhark has backends for several compute architectures, and this thesis adds browsers by targeting WebAssembly and threaded WebAssembly, browser technologies that map more directly to the underlying hardware of devices, including multicore CPUs.
A JavaScript API is developed for easily calling compiled Futhark WebAssembly libraries in the browser. The implementation and the generated WebAssembly code are benchmarked in both browsers and Node.js against Futhark's sequential C and multicore C backends. The sequential WebAssembly performs at close to sequential C speed. The parallel execution of threaded WebAssembly speeds up some example programs by a factor equal to the number of physical CPU cores.

Time and place

9 July 2021 at 09:00

Online

Supervisor(s)

Troels Henriksen

External examiner(s)

Maja Hanne Kirkeby

 

 

 

August

 

Title

3D Reconstruction of Transparent Objects

Abstract

In this paper we go through the theory of Qian et al. [5] for 3D reconstruction of transparent objects, and what we need to implement their setup. We show how to implement most of Qian et al.'s method, though with some key differences in our setup, which are also explained. We show that the method is promising for simple objects, like a glass filled with water. In the case of objects with view obstructions, such as a blue dolphin inside a transparent object, we find that we are able to reproduce many of the steps required for the reconstruction, but that determining the exact position falters.

Time and place

3 August 2021 at 9:00-10:00

Online

Supervisor(s)

Kim Steenstrup Pedersen

External examiner(s)

Morten Pol Engell-Nørregård

 

 

Title

Diagnosis and Prognosis Prediction for Ebola Virus Disease Using Machine Learning Methods

Abstract

Background. Ebola Virus Disease (EVD) is a neglected, deadly, emerging hemorrhagic viral infection with epidemic potential. Due to its incidence in resource-limited settings, diagnosis and management often rely on probabilistic decision-making. However, currently available clinical decision support tools are trained on small datasets fragmented across heterogeneous populations and, as a result, have limited statistical predictive performance and generalizability.
Aim. In this work, we produce the largest standardized and centralized clinical EVD dataset, on which we build and compare diagnostic and prognostic models using a range of machine learning methods, with the aim of creating models that are better able to adapt to incoming data.
Methods/Findings. The data are derived from the Ebola Data Platform (EDP) of the Infectious Disease Data Observatory, which comprises 13,558 patients triaged and/or treated for suspected EVD at one of 13 Ebola treatment centers established during the 2014-2016 West African Ebola epidemic. These tabular clinical datasets include demographics, clinical signs, symptoms and laboratory values, as well as the diagnostic label from RT-PCR (EVD+/EVD-) and the prognostic label (survival/death).
We 1) construct a standardized data cleaning and alignment pipeline to aggregate all the EDP datasets, 2) perform detailed data understanding on the resulting centralized dataset and its constituent local subsets, 3) develop a series of ML models (logistic regression (LogReg), k-nearest neighbors (kNN), support vector machine (SVM) and random forest (RF)) for the tasks of diagnosis and prognosis, 4) evaluate the local and central model performances for both predictive tasks, and finally 5) determine the most important clinical characteristics for each task.
The diagnostic central model has an average AUC, evaluated across local datasets, of 0.66 (LogReg), 0.74 (kNN), 0.76 (SVM) and 0.77 (RF). The diagnostic local models have an average AUC of 0.72 (LogReg), 0.76 (kNN), 0.74 (SVM) and 0.79 (RF). The most important diagnostic predictors are determined to be EVD contact history and diarrhea. Similarly, the prognostic central model has an average AUC of 0.72 (LogReg), 0.71 (kNN), 0.75 (SVM) and 0.72 (RF). The prognostic local models have an average AUC of 0.75 (LogReg), 0.74 (kNN), 0.75 (SVM) and 0.72 (RF). The most important prognostic predictors are determined to be RT-PCR value and patient age.
Conclusion. This work is the first to produce diagnostic and prognostic models for Ebola on a dataset of this size. We also provide an analysis-ready dataset to facilitate further research.

Keywords: Ebola virus disease, epidemiology, data pre-processing, machine learning
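
The model comparison described above can be sketched in a few lines with scikit-learn; this is not the thesis code, and the data loading and feature set are assumed.

```python
# A minimal sketch: train LogReg, kNN, SVM and RF on tabular clinical
# features and compare them by AUC. X and y are assumed to be prepared.
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

models = {
    "LogReg": LogisticRegression(max_iter=1000),
    "kNN": KNeighborsClassifier(),
    "SVM": SVC(probability=True),
    "RF": RandomForestClassifier(),
}

def compare_by_auc(X, y):
    """X: patient features; y: binary label (EVD+/- or survival/death)."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name}: AUC = {auc:.2f}")
```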

Time and place

13 August 2021 at 10:00

Online

Supervisor(s)

Christina Lioma

External examiner(s)

Troels Andreasen

 

 

Title

Cascading Abort of Pre-Scheduled Actor Transaction

Abstract

Nowadays, the actor model is widely adopted for building stateful middle tiers for large-scale interactive applications, where ACID transactions are essential to ensure application correctness. Snapper, an ongoing research project conducted in the DMS Lab at DIKU, employs deterministic transaction execution, where concurrent cross-actor transactions are pre-scheduled. Snapper improves transaction throughput significantly compared to conventional dynamic concurrency control methods, especially under high contention.

However, Snapper applies speculative execution, where a transaction can be executed without waiting for the transactions it depends on to commit. Thus, the abort of one transaction can cause cascading aborts. Even though Snapper tightly pre-schedules all concurrent transactions, transaction aborts are inevitable due to user-defined transaction logic, erroneous transaction input and different types of failures. Such aborts can leave the system in an inconsistent state and block Snapper from proceeding with the pre-determined schedules. This thesis analyzes different scenarios of aborts of pre-scheduled transactions in Snapper, provides a formalized definition of the scope of cascading abort, proposes a cascading abort protocol for Snapper that correctly handles aborts in both single-server and multi-server deployments, and implements the protocol in Snapper.

An evaluation was conducted to reveal the characteristics of the proposed cascading abort protocol. The experimental results show that in the single-server deployment, submitting 10% of transactions with user aborts causes 60% of transactions to abort and a 30% throughput degradation compared to the case with no user aborts. Global transactions suffer much more from cascading abort than local transactions.
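
The "scope of cascading abort" can be illustrated with a small sketch: under speculative execution, every transaction that (transitively) consumed an aborted transaction's results must abort too. The dependency-graph representation below is an assumption for illustration, not Snapper's implementation.

```python
# A minimal sketch: compute the transitive set of transactions that must
# abort when one pre-scheduled transaction aborts.
from collections import deque

def cascading_abort_set(aborted, depends_on_me):
    """aborted: id of the initially aborted transaction.
    depends_on_me: dict mapping a transaction id to the ids of
    transactions that speculatively consumed its results."""
    to_abort = {aborted}
    queue = deque([aborted])
    while queue:
        txn = queue.popleft()
        for dependent in depends_on_me.get(txn, ()):
            if dependent not in to_abort:
                to_abort.add(dependent)
                queue.append(dependent)
    return to_abort

# Example: T2 read T1's speculative writes, T3 read T2's. Aborting T1
# forces {T1, T2, T3} to abort.
print(cascading_abort_set(1, {1: [2], 2: [3]}))  # {1, 2, 3}
```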

Supervisor

Yongluan Zhou

External examiner

Philippe Bonnet

Time and place

25 August 2021 at 10:00

Online

 

 

Title

Benefits of auxiliary information in automatic teeth segmentation

Abstract

This paper evaluates deep learning methods for the segmentation of dental arches in panoramic radiographs. Our main aim is to test whether introducing auxiliary learning goals can improve image segmentation. We implemented three multi-output networks that detect (1) patient characteristics (e.g. missing teeth, no dental artifacts), (2) the buccal area, and (3) individual teeth, alongside the dental arches. These design choices may restrict the region of interest and improve the internal representation of teeth shapes.

The models are based on a modified U-net (Ronneberger et al., 2015b) architecture and optimized with Dice loss. Two data sets, of 1500 and 116 samples, collected at different institutions (Silva et al., 2018; Abdi and Kasaei, 2020), were used for training and testing the methods. Additionally, we evaluated the networks across various patient conditions, namely: 32 teeth, ≠ 32 teeth, dental artifacts, no dental artifacts.

The standard U-net architecture reached the highest Dice scores of 0.932 on the larger data set (Silva et al., 2018) and 0.946 on the group of patients with no missing teeth.

The model that outputs probability masks for individual teeth reached the best Dice score of 0.903 on the smaller data set (Abdi and Kasaei, 2020). We observe certain benefits in augmenting teeth segmentation with other information sources, which indicates the potential of this research direction and justifies further investigation.

Keywords: Computer vision, Deep learning, Segmentation methodologies
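
For reference, the Dice loss used to optimize the networks can be sketched as follows; this is a generic soft-Dice formulation assuming PyTorch, not the thesis code, and the smoothing constant is an assumption.

```python
# A minimal sketch of the soft Dice loss for binary segmentation.
import torch

def dice_loss(pred, target, eps=1e-6):
    """pred: predicted probabilities in [0, 1]; target: binary mask.
    Both of shape (batch, H, W). Returns 1 - Dice coefficient."""
    pred = pred.flatten(1)
    target = target.flatten(1)
    intersection = (pred * target).sum(dim=1)
    dice = (2 * intersection + eps) / (pred.sum(dim=1) + target.sum(dim=1) + eps)
    return 1 - dice.mean()
```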

Supervisor

Bulat Ibragimov

External examiner

Rasmus Reinhold Paulsen

Time and place

30 August 2021

Online

 

September

 

Title

Tree count and canopy cover estimation using deep learning with remote sensing imagery

Abstract

Trees are an essential natural resource with ecological and economic importance. They serve as habitat for other animal and plant species, protect against soil erosion, protect water bodies and serve as important crops. The availability of very high-resolution satellite images has recently sparked a growing interest in mapping out individual trees at a large scale, using images to better understand their distribution, size and count in regions of interest. While most previous approaches have focused on segmenting individual trees, in this work we focus on two derived quantities, namely the canopy cover and the count of trees.

Canopy cover estimation is the problem of quantifying the presence of trees in a given area, and it is important for evaluating the effectiveness of forest conservation efforts. At the same time, tree counting is relevant because it allows collecting information on individual trees, which provides additional insights beyond the canopy cover. For example, the tree count can be used to investigate the role of forests as carbon sinks, because most of the carbon is stored in the tree trunks.

This work investigates end-to-end trainable deep learning models for segmentation and density estimation to predict the canopy cover and tree count. We consider the case of individual trees in the Sahara and Sahel-Sudan regions and denser scenes from Rwanda, using sub-meter high-resolution satellite and aerial imagery. In particular, we analyze ground truth generation using per-pixel and point supervision, various loss functions, and different blocks in a U-Net architecture. Finally, we integrate the findings for each task in isolation to create a multi-task model that attempts to learn canopy cover and count estimation simultaneously, to see the impact on the performance of both tasks.

We train models that can predict canopy cover and tree count simultaneously with accuracy comparable to models explicitly trained for each task on the Sahara and Sahel-Sudan datasets. Furthermore, experiments with density models trained on the Rwanda dataset suggest that using per-pixel supervision achieves better count performance for datasets that mostly contain very dense scenes.

We perform an experiment to observe the effect of using dilated convolutions in the decoder of a U-Net model that targets canopy cover estimation. The results suggest that this block may allow for improvements, but further experimentation on tuning the dilation factors is necessary.
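
The two derived quantities relate to typical model outputs in a simple way, which the following sketch (not the thesis model) illustrates: canopy cover as the area fraction of a binary segmentation mask, and tree count as the integral of a predicted density map. The threshold and variable names are assumptions.

```python
# A minimal sketch of deriving canopy cover and tree count from predictions.
import numpy as np

def canopy_cover(segmentation, threshold=0.5):
    """Fraction of pixels predicted as canopy, from a probability mask."""
    return float((segmentation > threshold).mean())

def tree_count(density_map):
    """In density estimation, each tree contributes unit mass to the map,
    so the predicted count is simply the sum over all pixels."""
    return float(density_map.sum())
```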

Supervisors

Christian Igel and Ankit Kariryaa

External examiner

Morten Pol Engell-Nørregård

Time and place

3 September 2021

Place TBD, contact uddannelse@di.ku.dk for further information.

 

 

Title

Unsupervised Clustering of Sparse Data in Futhark

Abstract

K-means is a basic building block of modern machine learning. As such, its performance has a critical impact on the workflows and explorations it is involved in. In this thesis, we focus on application domains that involve large sparse datasets and investigate the feasibility of using the Futhark programming language to map k-means and its generalization, mixture models, to efficient GPU code. We propose a framework that abstracts from the (possibly sparse) representation of the data while maintaining the efficiency of sparse representations where they are used. We demonstrate that k-means, spherical k-means, Gaussian mixture models and von Mises-Fisher mixture models can be implemented through our framework without the need to explicitly address the underlying data representation. Our k-means implementation yields performance speedups of at least a factor of 10 over the multicore CPU implementation of scikit-learn, and our implementation of Gaussian mixture models with diagonal covariance matrices achieves a speedup of a factor of 1893 over a single-core CPU implementation that does not support sparse data representations.
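
The representation-agnostic idea can be sketched in Python rather than Futhark: the k-means assignment step only needs squared distances, which expand as ||x - c||^2 = ||x||^2 - 2 x·c + ||c||^2 and can therefore be computed from a sparse matrix without densifying it. This is an illustration, not the thesis framework; variable names are assumptions.

```python
# A minimal sketch of the k-means assignment step over sparse data.
import numpy as np
from scipy import sparse

def assign_clusters(X, centroids):
    """X: (n, d) CSR sparse matrix; centroids: (k, d) dense array."""
    x_sq = np.asarray(X.multiply(X).sum(axis=1)).ravel()   # ||x||^2 per row
    c_sq = (centroids ** 2).sum(axis=1)                    # ||c||^2 per centroid
    cross = np.asarray(X @ centroids.T)                    # sparse-dense product
    dists = x_sq[:, None] - 2 * cross + c_sq[None, :]
    return dists.argmin(axis=1)
```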

Supervisor

Cosmin Eugen Oancea

External examiner

Patrick Bahr

Time and place

3 September 2021 at 09:00

Online

 

 

Title

Generative Neural Networks for Ecosystem Simulation

Abstract

Climate change is one of the greatest threats humankind has ever faced. Remote sensing data can be used to monitor changes in the climate and the effect it has on ecosystems.

The aim of this project is to develop a system that can simulate changes in ecosystems using remote sensing data. Generative adversarial networks (GANs) have seen rapid development in image generation and image translation over the past few years. This type of model has been used before to simulate ecosystem changes, for post-flood scenarios and by using environmental variables to generate realistic images, but these models are either only able to simulate one type of ecosystem change (flooding) or do not use the contextual information in satellite images.

We have created a deep learning model building on state-of-the-art loss functions and network architectures. By training the model on remote sensing data from Sentinel-2, we demonstrate that it learns to generate multiple types of realistic changes in a satellite image. Furthermore, it uses contextual information and is able to convincingly preserve objects and realistically simulate changes in the ecosystem.

Supervisors

Stefan Oehmcke and Christian Igel

External examiner

Morten Pol Engell-Nørregård

Time and place

3 September 2021

Place TBD, contact uddannelse@di.ku.dk for more information.

 

 

Title

Continuous Collision Detection Using Discrete-oriented Polytope Bounding Volume Hierarchies and Conservative Advancement

Abstract

Collision detection is the concern of this thesis and is treated as a three-stage problem consisting of a broad phase, a mid phase and a narrow phase. Alongside the introduction of continuous collision detection, background is given on existing research on methods applicable in each phase, with emphasis on the last two and a discussion of the strengths and weaknesses of known approaches.

Based on the conducted research, theory is turned into practice and a prototype implementation of continuous collision detection is devised for an early-stage interactive simulation library, whose requirements are identified and taken into consideration. Guided by a testing strategy, the functionality of the implementation is first verified in isolation and then integrated into the library as a seamless, optional feature.

A bounding volume hierarchy of discrete-oriented polytopes is used for the mid phase, and conservative advancement is performed in the narrow phase using local optimisation over signed distance fields for an accurate distance tracking routine. Experiments reveal that the prototype implementation works as expected under the assumptions made but lacks the performance needed for an interactive experience.

Given the modular design, this was to be expected, as inefficiencies are introduced along the way; however, great acceleration by means of parallel execution becomes possible, which is left for future work.
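
The conservative advancement loop in the narrow phase can be sketched as follows. This is a minimal illustration, not the thesis implementation; the callables and tolerance are assumptions.

```python
# A minimal sketch of conservative advancement: repeatedly advance time by
# a step that provably cannot overshoot the first contact, using the current
# closest distance and a bound on the closing speed.
def conservative_advancement(distance_at, velocity_bound, t_end=1.0, tol=1e-4):
    """distance_at(t): closest distance between the two objects at time t.
    velocity_bound: upper bound on the closing speed over [0, t_end].
    Returns a conservative time of impact, or None if no contact."""
    t = 0.0
    while t < t_end:
        d = distance_at(t)
        if d <= tol:
            return t                      # contact (within tolerance)
        # the objects cannot close distance d faster than velocity_bound,
        # so it is safe to advance by d / velocity_bound
        t += d / velocity_bound
    return None
```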

Supervisor

Kenny Erleben

External examiner

Morten Pol Engell-Nørregård

Time and place

3 September 2021 at 10:00

Online

 

 

Title

MMLA: Investigating conversational characteristics with waveforms and spectrograms

Abstract

Conversation, defined as communication between two or more people, is an indispensable part of interpersonal communication and teamwork in our daily lives. During conversations, people exchange their thoughts and ideas by hearing each other while observing body language. As human activities are usually accompanied by conversations, it is plausible and meaningful to evaluate an activity's effectiveness by analysing its conversations.

We designed an IoT system data pipeline integrating data collection, data analysis and instructive feedback to evaluate participants' engagement levels in learning activities (Appendix 9.1). This project concentrates on the feature extractor of the data pipeline, which accepts audio data collected from sensors and outputs several conversational characteristics per segment.

We used the THCHS-30 and MULTISIMO datasets for training. The THCHS-30 dataset contains 11,043 utterances from 25 speakers. Based on it, we developed a speaker identification model that can predict the speaker of a given utterance from a list of registered speakers. The speaker identification model achieves an accuracy of 94.8% on the test set, i.e. the probability of correctly identifying the speaker. The MULTISIMO dataset contains 23 sessions of group conversations with an average duration of 10 minutes. With this dataset, we manually labeled the sliced segments with emotional level and degree of overlap according to a pre-defined coding scheme. After that, we created an emotion predictor using a random forest, and an overlap detector built from a convolutional neural network and a bidirectional long short-term memory network. The emotion predictor has an AUC of 0.88 under the micro-average ROC curve as a classifier, and a MAPE of 17.89% as a regressor. The overlap detector (unaugmented, weighted, ZCR-enhanced) achieves an overall accuracy of 0.7290 and an overlapped-class F1 score of 0.6824.

In this project, we created three feature extractors: a speaker identification model, an emotion predictor and an overlap detector. Given more time, we believe those models' performance could be improved by refining the model structures and augmenting the training dataset appropriately. Furthermore, we may use the TensorFlow Lite converter to deploy these models onto microcontrollers and complete the entire data pipeline in the future.
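
The input representation named in the title can be illustrated with a minimal sketch: slice the waveform into fixed-length segments and compute a log-spectrogram per segment, ready for a CNN-based detector. This is not the project's code; the segment length and STFT parameters are assumptions.

```python
# A minimal sketch of turning a waveform into per-segment log-spectrograms.
import numpy as np
from scipy.signal import spectrogram

def segment_spectrograms(audio, sample_rate, segment_seconds=1.0):
    """audio: 1-D waveform. Yields one log-spectrogram per segment."""
    seg_len = int(segment_seconds * sample_rate)
    for start in range(0, len(audio) - seg_len + 1, seg_len):
        segment = audio[start:start + seg_len]
        _, _, spec = spectrogram(segment, fs=sample_rate, nperseg=256)
        yield np.log(spec + 1e-10)   # log scale stabilizes dynamic range
```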

Supervisor(s)

Daniel Spikol

External examiner(s)

Andrea Corradini

Time and place

6 September 2021 at 14:00 - 15:30

Universitetsparken 1, 2-0-04

 

 

Title

Fuzzing as a Means of Bug Detection in eBPF

Abstract

In this thesis I examine fuzzing as a means of testing the in-kernel eBPF Verifier. I move the eBPF Verifier to user space in order to obtain better fuzzing performance, and examine the limits on what fuzzing can effectively do in the context of ensuring safe behavior of generated inputs. Different variations of fuzzers and strategies were applied to the Verifier and used to discuss the positive and negative aspects of utilizing fuzzing to test the Verifier in isolation, and what would be required to utilize fuzzing to test the Verifier and BPF as a whole.
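
The mutation-fuzzing idea can be sketched as follows; this is not the thesis harness, and `run_verifier` is a hypothetical wrapper around the user-space Verifier binary.

```python
# A minimal sketch of a mutation fuzz loop against a user-space verifier.
import random
import subprocess

def mutate(data: bytes, n_flips: int = 8) -> bytes:
    """Randomize a few bytes of a seed BPF program."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(seed: bytes, iterations: int = 10_000):
    for _ in range(iterations):
        candidate = mutate(seed)
        # run_verifier: hypothetical wrapper invoking the user-space verifier
        result = subprocess.run(["./run_verifier"], input=candidate,
                                capture_output=True)
        if result.returncode < 0:        # killed by a signal: crash found
            with open("crash.bin", "wb") as f:
                f.write(candidate)
```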

Time and place

28 September 2021 at 14:00

01-0-S29 at the PLTC section, HCØ

Supervisor(s)

Ken Friis Larsen

External examiner(s)

Philippe Bonnet

 

 

Title

Embedding Threads Into 3D-Printed Models

Abstract

3D printing is a fairly new technology that has been revolutionizing the manufacturing industry. It enables the creation of complex objects with various shapes, mostly using only one type of material in the form of filament. However, combining different materials (e.g. thread) during a 3D print can extend the range of capabilities of the printed objects.

This thesis introduces a new type of 3D printing system for effectively embedding threads into 3D objects. The threads can be easily manipulated within the print layers and can be fixed or left loose inside a small pipe. This offers additional functionality to 3D-printed objects by taking advantage of thread properties such as elasticity, flexibility and thin shape. At the core of this system is a modified off-the-shelf fused deposition modeling (FDM) 3D printer with a gear ring attached to the x-axis. A thread spool sits at the exterior of the ring, and its position can be controlled by the system. A Python script parses the G-code file of the sliced model and prepares it for the printing process.

An essential contribution of the system is its support for various types of thread (macrame, elastic). The design space of the system is demonstrated by the following applications: self-assembling boxes, actuated puppets, an abacus, and a hook. In addition, the thesis contributes open-source firmware, hardware specifications, and 3D models for replication.

Keywords: 3D printing, rapid prototyping, textile
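
The G-code post-processing step could look roughly like the following sketch; it is not the thesis script, and both the layer-change marker and the ring command are illustrative assumptions.

```python
# A minimal sketch: scan the sliced model's G-code and inject a command
# that repositions the thread-spool ring at layer changes. The ";LAYER:"
# marker and the M280 servo command are assumptions.
def inject_thread_commands(gcode_lines, ring_command="M280 P0 S90"):
    out = []
    for line in gcode_lines:
        out.append(line)
        if line.startswith(";LAYER:"):      # common slicer layer marker
            out.append(ring_command + " ; reposition thread spool ring")
    return out

# usage sketch:
# with open("model.gcode") as f:
#     processed = inject_thread_commands(f.read().splitlines())
```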

Supervisor(s)

Daniel Lee Ashbrook & Hyunyoung Kim

External examiner(s)

Mikael Brasholt Skov

Time and place

30 September 2021 at 09:00-10:30

Sigurdsgade 41, room 0-11

 

 

Title

ClipWidgets: 3D-printed Modular Tangible UI Extensions for Smartphones

Abstract

Touchscreens provide a platform for adaptable and versatile user interfaces, making them a popular choice for modern smart devices. However, touchscreens lack physicality. Existing solutions for adding tangible user interfaces to smart devices often require complicated assembly or occlude part of the touchscreen. To address this problem, I propose ClipWidgets: 3D-printed modular tangible UI extensions for smartphones. ClipWidgets uses a conical mirror and a custom phone case to redirect the field of view of the rear camera of a smartphone to the phone's periphery. This allows the phone to optically sense input from modular, passive, 3D-printed widgets attached to the phone case. I developed three different widget types (button, dial and slider) that require no calibration and minimal assembly. To demonstrate the functionality of ClipWidgets, I used it to prototype three different applications: a game controller, a music interface and an interactive graph tool.

Keywords: 3D printing, user interfaces

Supervisor(s)

Daniel Lee Ashbrook

External examiner(s)

Mikael Brasholt Skov

Time and place

30 September 2021 at 11:00-12:30

Sigurdsgade 41, room 0-11

 

Bioinformatics

 

 

 

Title

Mining the literature to detect connections between lifestyle and diseases

Abstract

Background and Methodology: Text mining is a flexible technique that can be applied to various tasks in the biomedical field. The association between diseases and genes is well established in the literature, and as such it has been extensively mined and stored in dedicated databases. However, another factor related to the onset and development of diseases – lifestyle – is still hidden in the vast sea of texts, and there is no dedicated database with this information integrated. In this thesis, I fine-tuned the BioBERT natural language processing model to identify lifestyle factors, thereby extending a prototype ontology of lifestyle factors. After completing the expansion, I used the JensenLab dictionary-based tagger to extract Disease-Lifestyle associations from PubMed. Tagger, an efficient dictionary-based text mining software, is used both to identify lifestyle factors and diseases in text, and to find the associations between them by considering their co-occurrences within and between sentences.

Results: After fine-tuning the pre-trained BioBERT model, the model's prediction accuracy for the named entity recognition task was 94.61%. This model was used to predict whether Wikipedia titles with over 1000 matches in PubMed are also lifestyle factors. After assigning proper thresholds for inclusion and extensive manual annotation, 447 new terms from Wikipedia titles were added to the prototype ontology of lifestyle factors. Finally, 501,952 pairs of Disease-Lifestyle associations were obtained by running the tagger, out of which 50,997 were of high or very high confidence.

Conclusion: This project enriched the lifestyle factors ontology and detected associations between diseases and lifestyle factors. The manual inspection of the results suggests that when the confidence level is high, the Disease-Lifestyle associations found through text mining are credible, but further testing is needed to avoid false positives.
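
Dictionary-based co-occurrence extraction in the spirit of the tagger can be sketched as follows; this is not the JensenLab implementation, and the dictionaries are toy assumptions.

```python
# A minimal sketch: count sentences mentioning both a disease term and a
# lifestyle term. Real systems use large curated dictionaries and scoring.
from collections import Counter

diseases = {"diabetes", "hypertension"}
lifestyle_factors = {"smoking", "exercise", "alcohol"}

def cooccurrences(sentences):
    pairs = Counter()
    for sentence in sentences:
        words = {w.strip(".,;:()") for w in sentence.lower().split()}
        for d in diseases & words:
            for lf in lifestyle_factors & words:
                pairs[(d, lf)] += 1
    return pairs

print(cooccurrences(["Smoking increases the risk of hypertension.",
                     "Regular exercise helps prevent diabetes."]))
```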

Time and place

22 June at 09:00

Panum, Room 6.2.09

Supervisor(s)  

Lars Juhl Jensen, Aikaterini Despoina Nastou, Anders Krogh

External examiner(s)  

Jes Frellsen

 

 

Title

Using machine learning as a weapon to fight scientific fraud by detecting paper-mill publications 

Abstract

With the rapid development of society and the economy, the increasingly serious problem of scientific fraud has attracted public attention. The shadowy companies that fabricate papers in bulk, the so-called paper mills, are gradually being noticed. In this thesis, different machine learning-based methods were implemented to detect paper mill publications. Several known paper mills were collected, and the biggest one, the so-called Tadpole paper mill, is the one mainly used. Through the application of named entity recognition from text mining, all papers mentioning non-coding RNA in the Tadpole paper mill were used as input data to train supervised machine learning methods, namely support vector machine, logistic regression, multinomial naive Bayes, stochastic gradient descent, passive aggressive classifier, random forest and XGBoost. Text was vectorized using the TF-IDF approach, and after hyperparameter optimization the trained classifiers were applied to other paper mills and to papers from 2021 for prediction.

Almost all classifiers achieved good performance, with F1-scores of approximately 90%, showing that they can learn the specific fraud style rather than the theme. The prediction results show that a classifier can only identify fake papers belonging to the paper mill it was trained on, and that it does not have journal bias even though the paper mill publications concentrate on some specific journals. In addition, the paper mills seem to use fraud templates or patterns. Given their preference for combining non-coding RNA and disease as main contents, relationship extraction was used to obtain papers mentioning such pairs for association analysis. After scoring for confidence, the results show that fake papers mainly focus on under-studied pairs. Such fake studies linking ncRNA to disease represent a significant threat to science, because they pollute under-investigated fields and thereby mislead further research. In conclusion, paper mills have seriously damaged, and will continue to damage, the research ecosystem, and it is probable that machine learning classifiers working together with image duplication detection could better detect fraud and protect scientific integrity.
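
The detection approach, TF-IDF vectorization followed by one of the listed classifiers, can be sketched with scikit-learn; this is not the thesis code, and the data variables are assumptions.

```python
# A minimal sketch: vectorize paper abstracts with TF-IDF and train a
# classifier, evaluating by cross-validated F1 score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# texts: list of paper abstracts; labels: 1 = paper-mill, 0 = legitimate
def train_detector(texts, labels):
    detector = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(detector, texts, labels, scoring="f1", cv=5)
    print(f"cross-validated F1: {scores.mean():.2f}")
    return detector.fit(texts, labels)
```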

Time and place

22 June at 10:30

Panum, Room 6.2.09

Supervisor(s)  

Anders Krogh, Lars Juhl Jensen, Aikaterini Despoina Nastou

External examiner(s)  

Jes Frellsen

 

Physics

 

 

Statistics

 

 

 

Health Informatics (Sundhed og informatik)

 

Title

Optimization of the Health Platform through local change - a study of the physician builder program

Abstract

Background:
This master thesis in Health Informatics examines the physician builder program after its implementation in 2016 as a strategy to improve the condition of the Health Platform (Sundhedsplatformen). A study conducted in 2019 showed that 69% of physicians find that the Health Platform does not facilitate their work (Bansler, 2021, p. 12). A new study conducted in 2021 shows that more than half of the physicians remain unsatisfied with the Health Platform (Jensen, 2021). This highlights that the physician builder program has not yet had the desired effect; however, the reason is unclear.

Aims:
Based upon the experiences of the physician builders, this study aims to identify problem areas within the program in order to optimize workflows at the hospitals.

Methods:
This project examines the aims raised using qualitative methods. We conducted 10 interviews with application coordinators and physician builders, which make up our entire data collection.

Results:
The results indicate that there are various problem areas throughout the physician builder program. The physician builders have been experiencing issues in terms of working conditions. Furthermore, cooperation between multiple actors in the program has been ineffective, resulting in delays for the physician builders. Approval procedures have been slow, tedious, and frustrating for the physician builders, and organizational changes implemented in the physician builder program have shown unintended effects.

Conclusion:
The findings indicate that the physician builder program is currently running suboptimally and that designing a successful physician builder program is challenging. Organizational changes implemented in the physician builder program have not proven as effective as expected. The physician builders find it especially frustrating to deal with external factors, approval procedures and collaboration between actors, all of which influence the outcome of the builds.

Supervisor(s)

Jørgen P. Bansler

External examiner(s)

Troels Mønsted

Time and place

8 September 2021 at 13:00

Room 2.03 in Sigurdsgade

 

Title

Agile Project Management in the Capital Region of Denmark: an empirical study of the Capital Region's agile project management with regard to software development and maintenance of Sundhedsplatformen.

Abstract

Using a qualitative and phenomenological method approach, this master's dissertation seeks to explore why the Capital Region of Denmark chose to implement agile methodology in the software development and maintenance of the electronic health record Sundhedsplatformen. The dissertation furthermore seeks to identify driving as well as restraining forces with regard to keeping (freezing) the agile methodology as a method in the governance of Sundhedsplatformen.
The aim of this dissertation was to get an overall view and understanding of agile methodology in general, as well as in the specific context of Fokusområde Medicin's use of agile methodology in developing and maintaining Sundhedsplatformen.
For this dissertation, 5 interviews were performed with people from Fokusområde Medicin, with the aim of identifying driving and restraining forces that contribute to the freeze of the change that the implementation of agile methodology introduced.
The findings indicate that the main reasons for the Capital Region of Denmark to implement agile methodology, specifically the Scaled Agile Framework for Lean Enterprises (SAFe), in the software development and maintenance of the electronic health record Sundhedsplatformen (EPIC), are the large amount of criticism towards the electronic health record system as well as the poor and stagnating user satisfaction with it.
A task force of experts was convened to propose solutions to the challenges that arose during and after the implementation of Sundhedsplatformen back in 2016. This dissertation finds that it is on the basis of the above-mentioned task force's report on proposed solutions that the Capital Region of Denmark chose to reorganize the governance of Sundhedsplatformen and to implement agile methodology in doing so.
Furthermore, the findings of this dissertation indicate that some of the driving forces for keeping and freezing the use of agile methodology in Fokusområde Medicin are the motivation to change caused by the large amount of criticism and poor user satisfaction with Sundhedsplatformen, and the increased communication between developers and end users.
Some of the restraining forces against keeping and freezing the use of agile methodology are the organizational changes, the changes of workflows, and working in teams, which for some of the developers poses difficulties with regard to collegial codependency and not being able to independently decide which tasks to prioritize when developing and maintaining Sundhedsplatformen.

Supervisor(s)

Erling Carl Havn

External examiner(s)

Jens Pedersen

Time and place

28 September 2021 at 13:00

UP1 2-0-04