Neuroadaptive modelling for generating images matching perceptual categories

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

Neuroadaptive modelling for generating images matching perceptual categories. / Kangassalo, Lauri; Spapé, Michiel; Ruotsalo, Tuukka.

In: Scientific Reports, Vol. 10, No. 1, 14719, 2020.


Harvard

Kangassalo, L, Spapé, M & Ruotsalo, T 2020, 'Neuroadaptive modelling for generating images matching perceptual categories', Scientific Reports, vol. 10, no. 1, 14719. https://doi.org/10.1038/s41598-020-71287-1

APA

Kangassalo, L., Spapé, M., & Ruotsalo, T. (2020). Neuroadaptive modelling for generating images matching perceptual categories. Scientific Reports, 10(1), [14719]. https://doi.org/10.1038/s41598-020-71287-1

Vancouver

Kangassalo L, Spapé M, Ruotsalo T. Neuroadaptive modelling for generating images matching perceptual categories. Scientific Reports. 2020;10(1):14719. https://doi.org/10.1038/s41598-020-71287-1

Author

Kangassalo, Lauri ; Spapé, Michiel ; Ruotsalo, Tuukka. / Neuroadaptive modelling for generating images matching perceptual categories. In: Scientific Reports. 2020 ; Vol. 10, No. 1.

Bibtex

@article{e3c90dac64284fa08ee4f38b6e0f934b,
title = "Neuroadaptive modelling for generating images matching perceptual categories",
abstract = "Brain–computer interfaces enable active communication and execution of a pre-defined set of commands, such as typing a letter or moving a cursor. However, they have thus far not been able to infer more complex intentions or adapt more complex output based on brain signals. Here, we present neuroadaptive generative modelling, which uses a participant{\textquoteright}s brain signals as feedback to adapt a boundless generative model and generate new information matching the participant{\textquoteright}s intentions. We report an experiment validating the paradigm in generating images of human faces. In the experiment, participants were asked to specifically focus on perceptual categories, such as old or young people, while being presented with computer-generated, photorealistic faces with varying visual features. Their EEG signals associated with the images were then used as a feedback signal to update a model of the user{\textquoteright}s intentions, from which new images were generated using a generative adversarial network. A double-blind follow-up with the participant evaluating the output shows that neuroadaptive modelling can be utilised to produce images matching the perceptual category features. The approach demonstrates brain-based creative augmentation between computers and humans for producing new information matching the human operator{\textquoteright}s perceptual categories.",
author = "Lauri Kangassalo and Michiel Spap{\'e} and Tuukka Ruotsalo",
year = "2020",
doi = "10.1038/s41598-020-71287-1",
language = "English",
volume = "10",
journal = "Scientific Reports",
issn = "2045-2322",
publisher = "Nature Publishing Group",
number = "1",
pages = "14719",
}

RIS

TY - JOUR

T1 - Neuroadaptive modelling for generating images matching perceptual categories

AU - Kangassalo, Lauri

AU - Spapé, Michiel

AU - Ruotsalo, Tuukka

PY - 2020

Y1 - 2020

N2 - Brain–computer interfaces enable active communication and execution of a pre-defined set of commands, such as typing a letter or moving a cursor. However, they have thus far not been able to infer more complex intentions or adapt more complex output based on brain signals. Here, we present neuroadaptive generative modelling, which uses a participant’s brain signals as feedback to adapt a boundless generative model and generate new information matching the participant’s intentions. We report an experiment validating the paradigm in generating images of human faces. In the experiment, participants were asked to specifically focus on perceptual categories, such as old or young people, while being presented with computer-generated, photorealistic faces with varying visual features. Their EEG signals associated with the images were then used as a feedback signal to update a model of the user’s intentions, from which new images were generated using a generative adversarial network. A double-blind follow-up with the participant evaluating the output shows that neuroadaptive modelling can be utilised to produce images matching the perceptual category features. The approach demonstrates brain-based creative augmentation between computers and humans for producing new information matching the human operator’s perceptual categories.

AB - Brain–computer interfaces enable active communication and execution of a pre-defined set of commands, such as typing a letter or moving a cursor. However, they have thus far not been able to infer more complex intentions or adapt more complex output based on brain signals. Here, we present neuroadaptive generative modelling, which uses a participant’s brain signals as feedback to adapt a boundless generative model and generate new information matching the participant’s intentions. We report an experiment validating the paradigm in generating images of human faces. In the experiment, participants were asked to specifically focus on perceptual categories, such as old or young people, while being presented with computer-generated, photorealistic faces with varying visual features. Their EEG signals associated with the images were then used as a feedback signal to update a model of the user’s intentions, from which new images were generated using a generative adversarial network. A double-blind follow-up with the participant evaluating the output shows that neuroadaptive modelling can be utilised to produce images matching the perceptual category features. The approach demonstrates brain-based creative augmentation between computers and humans for producing new information matching the human operator’s perceptual categories.

UR - http://www.scopus.com/inward/record.url?scp=85090334567&partnerID=8YFLogxK

U2 - 10.1038/s41598-020-71287-1

DO - 10.1038/s41598-020-71287-1

M3 - Journal article

C2 - 32895430

AN - SCOPUS:85090334567

VL - 10

JO - Scientific Reports

JF - Scientific Reports

SN - 2045-2322

IS - 1

M1 - 14719

ER -
