Too Much in Common: Shifting of Embeddings in Transformer Language Models and its Implications

Abstract

The success of language models based on the Transformer architecture appears to be inconsistent with the observed anisotropic properties of representations learned by such models. We resolve this by showing, contrary to previous studies, that the representations do not occupy a narrow cone, but rather drift in common directions. At any training step, all of the embeddings except for the ground-truth target embedding are updated with a gradient in the same direction. Compounded over the training set, the embeddings drift and share common components, which is manifested in the shape of the embedding space in all of the models we tested empirically. Our experiments show that isotropy can be restored using a simple transformation.
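The abstract does not spell out the transformation, so the sketch below is only a minimal illustration: it assumes the "simple transformation" amounts to subtracting the shared mean component from the embedding matrix, and it uses average pairwise cosine similarity as a rough isotropy proxy. The function names and synthetic data are illustrative assumptions, not the paper's released code.

```python
# Minimal sketch (illustrative, not the paper's implementation):
# anisotropic embeddings share a large common component, and subtracting
# the mean embedding (one simple transformation) restores isotropy as
# measured by average pairwise cosine similarity.
import numpy as np


def average_cosine_similarity(embeddings: np.ndarray) -> float:
    """Mean pairwise cosine similarity; values near 0 indicate isotropy."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    n = embeddings.shape[0]
    # Exclude the self-similarity terms on the diagonal.
    return float((sims.sum() - n) / (n * (n - 1)))


def remove_common_mean(embeddings: np.ndarray) -> np.ndarray:
    """Subtract the shared mean vector (the drift accumulated during training)."""
    return embeddings - embeddings.mean(axis=0, keepdims=True)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic embeddings: isotropic noise plus a common offset direction,
    # mimicking the drift described in the abstract.
    common_direction = 5.0 * rng.normal(size=(1, 768))
    embeddings = rng.normal(size=(1000, 768)) + common_direction

    print("avg cosine before:", average_cosine_similarity(embeddings))
    print("avg cosine after: ", average_cosine_similarity(remove_common_mean(embeddings)))
```

On the synthetic data above, the average cosine similarity drops from near 1 to near 0 after mean removal, which is the qualitative effect the abstract attributes to its transformation.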

Publication
Daniel Biś