On Position Embeddings in BERT

Research output: Contribution to conference › Paper › Research

Documents

  • Fulltext

    Final published version, 2.12 MB, PDF document

Various Position Embeddings (PEs) have been proposed in Transformer-based architectures (e.g., BERT) to model word order. These are empirically driven and perform well, but no formal framework exists to systematically study them. To address this, we present three properties of PEs that capture word distance in vector space: translation invariance, monotonicity, and symmetry. These properties formally capture the behaviour of PEs and allow us to reinterpret sinusoidal PEs in a principled way. Moreover, we propose a new probing test (called 'identical word probing') and mathematical indicators to quantitatively detect general attention patterns with respect to the above properties. An empirical evaluation of seven PEs (and their combinations) for classification (GLUE) and span prediction (SQuAD) shows that: (1) both classification and span prediction benefit from translation invariance and local monotonicity, while symmetry slightly decreases performance; (2) the fully learnable absolute PE performs better in classification, while relative PEs perform better in span prediction. We contribute the first formal and quantitative analysis of desiderata for PEs, and a principled discussion of their correlation with the performance of typical downstream tasks.
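To illustrate why sinusoidal PEs can be reinterpreted through these properties, note that the dot product between two sinusoidal position vectors reduces to a sum of cosines of the position offset: it depends only on the offset (translation invariance), is even in the sign of the offset (symmetry), and decreases for small offsets (local monotonicity). The sketch below checks this numerically; it is not the paper's code, and the function name, sequence length, and dimension are illustrative choices.

```python
import numpy as np

def sinusoidal_pe(num_positions: int, dim: int) -> np.ndarray:
    """Standard sinusoidal position embeddings (Vaswani et al., 2017)."""
    positions = np.arange(num_positions)[:, None]            # (P, 1)
    freqs = 1.0 / (10000 ** (np.arange(0, dim, 2) / dim))    # (dim/2,)
    angles = positions * freqs                                # (P, dim/2)
    pe = np.zeros((num_positions, dim))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_pe(128, 64)
dots = pe @ pe.T  # pairwise dot products between position vectors

# Translation invariance: PE_i . PE_j depends only on the offset i - j,
# so entries along each diagonal of `dots` coincide.
print(np.allclose(dots[10, 20], dots[50, 60]))             # True

# Symmetry: the dot product is the same for offsets +k and -k.
print(np.allclose(dots[30, 30 + 7], dots[30, 30 - 7]))     # True

# Local monotonicity: the dot product shrinks as the offset grows,
# at least for small offsets.
print(dots[0, 1] > dots[0, 2] > dots[0, 3] > dots[0, 4])   # True
```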
Original language: English
Publication date: 2021
Publication status: Published - 2021
Event: 9th International Conference on Learning Representations - ICLR 2021 - Virtual
Duration: 3 May 2021 - 7 May 2021

Conference

Conference: 9th International Conference on Learning Representations - ICLR 2021
City: Virtual
Period: 03/05/2021 - 07/05/2021


ID: 300919719