Neural speed reading with structural-jump-LSTM
Publication: Conference contribution › Paper › Research › peer-reviewed
Standard
Neural speed reading with structural-jump-LSTM. / Hansen, Christian; Hansen, Casper; Alstrup, Stephen; Simonsen, Jakob Grue; Lioma, Christina.
2019. Paper presented at the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, USA.
RIS
TY - CONF
T1 - Neural speed reading with structural-jump-LSTM
AU - Hansen, Christian
AU - Hansen, Casper
AU - Alstrup, Stephen
AU - Simonsen, Jakob Grue
AU - Lioma, Christina
PY - 2019
Y1 - 2019
N2 - Recurrent neural networks (RNNs) can model natural language by sequentially "reading" input tokens and outputting a distributed representation of each token. Due to the sequential nature of RNNs, inference time is linearly dependent on the input length, and all inputs are read regardless of their importance. Efforts to speed up this inference, known as "neural speed reading", either ignore or skim over part of the input. We present Structural-Jump-LSTM: the first neural speed reading model to both skip and jump text during inference. The model consists of a standard LSTM and two agents: one capable of skipping single words when reading, and one capable of exploiting punctuation structure (sub-sentence separators (,:), sentence end symbols (.!?), or end of text markers) to jump ahead after reading a word. A comprehensive experimental evaluation of our model against all five state-of-the-art neural reading models shows that Structural-Jump-LSTM achieves the best overall floating point operations (FLOP) reduction (hence is faster), while keeping the same accuracy or even improving it compared to a vanilla LSTM that reads the whole text.
AB - Recurrent neural networks (RNNs) can model natural language by sequentially "reading" input tokens and outputting a distributed representation of each token. Due to the sequential nature of RNNs, inference time is linearly dependent on the input length, and all inputs are read regardless of their importance. Efforts to speed up this inference, known as "neural speed reading", either ignore or skim over part of the input. We present Structural-Jump-LSTM: the first neural speed reading model to both skip and jump text during inference. The model consists of a standard LSTM and two agents: one capable of skipping single words when reading, and one capable of exploiting punctuation structure (sub-sentence separators (,:), sentence end symbols (.!?), or end of text markers) to jump ahead after reading a word. A comprehensive experimental evaluation of our model against all five state-of-the-art neural reading models shows that Structural-Jump-LSTM achieves the best overall floating point operations (FLOP) reduction (hence is faster), while keeping the same accuracy or even improving it compared to a vanilla LSTM that reads the whole text.
UR - http://www.scopus.com/inward/record.url?scp=85070529326&partnerID=8YFLogxK
M3 - Paper
T2 - 7th International Conference on Learning Representations, ICLR 2019
Y2 - 6 May 2019 through 9 May 2019
ER -
ID: 227228976
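
For readers who want the mechanics behind the abstract, below is a minimal PyTorch sketch of the read/skip/jump control flow it describes: an LSTM reads tokens one at a time, a skip agent may drop single words, and a jump agent may jump past the rest of a clause, sentence, or the whole text using punctuation structure. All names (StructuralJumpReader, skip_agent, jump_agent) and the greedy decisions are illustrative assumptions, not the authors' implementation, and training of the two agents is omitted entirely.

# Hypothetical sketch of the inference loop described in the abstract;
# not the authors' code. Agents here are untrained linear layers decided
# greedily, purely to show the control flow.
import torch
import torch.nn as nn

SUB_SENT = {",", ":"}       # sub-sentence separators (,:)
SENT_END = {".", "!", "?"}  # sentence end symbols (.!?)

class StructuralJumpReader(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.cell = nn.LSTMCell(emb_dim, hidden_dim)
        # Skip agent: before each word, choose {read, skip this word}.
        self.skip_agent = nn.Linear(hidden_dim, 2)
        # Jump agent: after reading a word, choose
        # {no jump, jump past next ',:' symbol, jump past next '.!?', jump to end of text}.
        self.jump_agent = nn.Linear(hidden_dim, 4)

    def forward(self, tokens, words):
        # tokens: 1-D LongTensor of token ids; words: aligned list of surface strings,
        # needed to locate the punctuation symbols the jump agent targets.
        h = torch.zeros(1, self.cell.hidden_size)
        c = torch.zeros_like(h)
        i = 0
        while i < len(tokens):
            # Skip agent: skip single words without updating the LSTM state.
            if self.skip_agent(h).argmax(-1).item() == 1:
                i += 1
                continue
            x = self.embed(tokens[i].view(1))
            h, c = self.cell(x, (h, c))
            # Jump agent: possibly jump ahead after reading this word.
            jump = self.jump_agent(h).argmax(-1).item()
            if jump == 3:  # jump straight to the end of the text
                break
            if jump in (1, 2):
                targets = SUB_SENT if jump == 1 else SENT_END
                # Advance to the next structural symbol (or end of text) ...
                while i + 1 < len(tokens) and words[i] not in targets:
                    i += 1
            i += 1  # ... and continue reading just past it
        return h  # final representation of the (partially read) text

Because skipped and jumped-over words never enter the LSTM cell, the FLOP count of a forward pass scales with the number of words actually read rather than the full input length, which is the source of the speed-up the abstract reports.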