Neural speed reading with structural-jump-LSTM

Publication: Conference contribution › Paper › Research › peer-reviewed

Standard

Neural speed reading with structural-jump-LSTM. / Hansen, Christian; Hansen, Casper; Alstrup, Stephen; Simonsen, Jakob Grue; Lioma, Christina.

2019. Paper presented at the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, USA.


Harvard

Hansen, C, Hansen, C, Alstrup, S, Simonsen, JG & Lioma, C 2019, 'Neural speed reading with structural-jump-LSTM', Paper presented at the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, USA, 06/05/2019 - 09/05/2019.

APA

Hansen, C., Hansen, C., Alstrup, S., Simonsen, J. G., & Lioma, C. (2019). Neural speed reading with structural-jump-LSTM. Paper presented at the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, USA.

Vancouver

Hansen C, Hansen C, Alstrup S, Simonsen JG, Lioma C. Neural speed reading with structural-jump-LSTM. 2019. Paper presented at the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, USA.

Author

Hansen, Christian ; Hansen, Casper ; Alstrup, Stephen ; Simonsen, Jakob Grue ; Lioma, Christina. / Neural speed reading with structural-jump-LSTM. Paper presented at the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, USA.

Bibtex

@conference{50c90ae9933849a9bda61f244b3bc978,
title = "Neural speed reading with structural-jump-LSTM",
abstract = "Recurrent neural networks (RNNs) can model natural language by sequentially “reading” input tokens and outputting a distributed representation of each token. Due to the sequential nature of RNNs, inference time is linearly dependent on the input length, and all inputs are read regardless of their importance. Efforts to speed up this inference, known as “neural speed reading”, either ignore or skim over part of the input. We present Structural-Jump-LSTM: the first neural speed reading model to both skip and jump text during inference. The model consists of a standard LSTM and two agents: one capable of skipping single words when reading, and one capable of exploiting punctuation structure (sub-sentence separators (,:), sentence end symbols (.!?), or end of text markers) to jump ahead after reading a word. A comprehensive experimental evaluation of our model against all five state-of-the-art neural reading models shows that Structural-Jump-LSTM achieves the best overall floating point operations (FLOP) reduction (hence is faster), while keeping the same accuracy or even improving it compared to a vanilla LSTM that reads the whole text.",
author = "Christian Hansen and Casper Hansen and Stephen Alstrup and Simonsen, {Jakob Grue} and Christina Lioma",
year = "2019",
language = "English",
note = "7th International Conference on Learning Representations, ICLR 2019 ; Conference date: 06-05-2019 Through 09-05-2019",

}

RIS

TY - CONF

T1 - Neural speed reading with structural-jump-LSTM

AU - Hansen, Christian

AU - Hansen, Casper

AU - Alstrup, Stephen

AU - Simonsen, Jakob Grue

AU - Lioma, Christina

PY - 2019

Y1 - 2019

N2 - Recurrent neural networks (RNNs) can model natural language by sequentially “reading” input tokens and outputting a distributed representation of each token. Due to the sequential nature of RNNs, inference time is linearly dependent on the input length, and all inputs are read regardless of their importance. Efforts to speed up this inference, known as “neural speed reading”, either ignore or skim over part of the input. We present Structural-Jump-LSTM: the first neural speed reading model to both skip and jump text during inference. The model consists of a standard LSTM and two agents: one capable of skipping single words when reading, and one capable of exploiting punctuation structure (sub-sentence separators (,:), sentence end symbols (.!?), or end of text markers) to jump ahead after reading a word. A comprehensive experimental evaluation of our model against all five state-of-the-art neural reading models shows that Structural-Jump-LSTM achieves the best overall floating point operations (FLOP) reduction (hence is faster), while keeping the same accuracy or even improving it compared to a vanilla LSTM that reads the whole text.

AB - Recurrent neural networks (RNNs) can model natural language by sequentially “reading” input tokens and outputting a distributed representation of each token. Due to the sequential nature of RNNs, inference time is linearly dependent on the input length, and all inputs are read regardless of their importance. Efforts to speed up this inference, known as “neural speed reading”, either ignore or skim over part of the input. We present Structural-Jump-LSTM: the first neural speed reading model to both skip and jump text during inference. The model consists of a standard LSTM and two agents: one capable of skipping single words when reading, and one capable of exploiting punctuation structure (sub-sentence separators (,:), sentence end symbols (.!?), or end of text markers) to jump ahead after reading a word. A comprehensive experimental evaluation of our model against all five state-of-the-art neural reading models shows that Structural-Jump-LSTM achieves the best overall floating point operations (FLOP) reduction (hence is faster), while keeping the same accuracy or even improving it compared to a vanilla LSTM that reads the whole text.

UR - http://www.scopus.com/inward/record.url?scp=85070529326&partnerID=8YFLogxK

M3 - Paper

T2 - 7th International Conference on Learning Representations, ICLR 2019

Y2 - 6 May 2019 through 9 May 2019

ER -

ID: 227228976
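
For orientation, below is a minimal, hypothetical Python/PyTorch sketch of the skip/jump control flow described in the abstract above: an LSTM reads word embeddings, a skip agent may drop the single next word, and a jump agent may jump ahead to the next sub-sentence separator (,:), the next sentence end symbol (.!?), or the end of the text. This is not the authors' code; all module names, layer sizes, and the greedy argmax decision rules are illustrative assumptions, and the training procedure is omitted entirely.

# Hypothetical sketch of the skip/jump reading loop (not the authors' implementation).
# Names, sizes, and the greedy argmax policies are assumptions made for illustration.
import torch
import torch.nn as nn

SUB_SENT = {",", ":"}       # sub-sentence separators
SENT_END = {".", "!", "?"}  # sentence end symbols

class StructuralJumpSketch(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.cell = nn.LSTMCell(emb_dim, hidden_dim)
        self.skip_agent = nn.Linear(hidden_dim, 2)  # 0 = read next word, 1 = skip it
        self.jump_agent = nn.Linear(hidden_dim, 4)  # 0 = no jump, 1 = next ",/:" , 2 = next "./!/?", 3 = end of text

    def forward(self, token_ids, tokens):
        h = torch.zeros(1, self.cell.hidden_size)
        c = torch.zeros(1, self.cell.hidden_size)
        i, read = 0, []
        while i < len(tokens):
            h, c = self.cell(self.embed(token_ids[i]).unsqueeze(0), (h, c))
            read.append(i)
            jump = self.jump_agent(h).argmax(dim=-1).item()
            if jump == 3:                       # jump to end of text: stop reading
                break
            if jump in (1, 2):                  # jump ahead to the next structural symbol
                targets = SUB_SENT if jump == 1 else SENT_END
                j = i + 1
                while j < len(tokens) and tokens[j] not in targets:
                    j += 1
                i = j                           # resume reading at that symbol
            if self.skip_agent(h).argmax(dim=-1).item() == 1:
                i += 1                          # skip the single next word
            i += 1
        return h, read

# Toy usage with made-up tokens: 'read' holds the positions actually fed to the LSTM,
# so fewer entries than len(tokens) means fewer LSTM steps and hence fewer FLOPs.
model = StructuralJumpSketch()
tokens = "the model reads a few words , then it may jump ahead .".split()
ids = torch.randint(0, 1000, (len(tokens),))
_, read = model(ids, tokens)
print(read)

The FLOP savings reported in the abstract come from exactly this effect: every position not appearing in the read list never passes through the LSTM cell, so inference cost scales with the number of words actually read rather than the full input length.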