Abstract: Computing universal distributed representations of sentences is a fundamental task
in natural language processing. We propose a method to learn such representations
by encoding the suffixes of word sequences in a sentence and training on the
Stanford Natural Language Inference (SNLI) dataset. We demonstrate the effectiveness
of our approach by evaluating it on the SentEval benchmark, where it outperforms
existing methods on several transfer tasks.
TL;DR: Using LSTM encodings of both prefixes and suffixes gives better universal sentence representations.
Keywords: universal sentence representation, LSTM, natural language inference
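As a rough illustration only (not the authors' released code), the idea of building sentence representations from LSTM encodings of both prefixes and suffixes can be sketched as a bidirectional LSTM whose forward states summarize prefixes and whose backward states summarize suffixes, followed by pooling; all hyperparameters and the max-pooling choice below are assumptions.

```python
# Minimal sketch (assumed PyTorch implementation, not the paper's code):
# a forward LSTM encodes each prefix, a backward LSTM encodes each suffix,
# and max-pooling over time yields a fixed-size sentence vector.
import torch
import torch.nn as nn

class PrefixSuffixEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # bidirectional=True runs one LSTM left-to-right (prefix states)
        # and one right-to-left (suffix states) over the word sequence
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        states, _ = self.lstm(self.embed(token_ids))  # (batch, seq_len, 2*hidden)
        # pool over time to obtain a fixed-size sentence representation
        return states.max(dim=1).values               # (batch, 2*hidden)

if __name__ == "__main__":
    encoder = PrefixSuffixEncoder()
    dummy = torch.randint(0, 10000, (2, 7))           # two sentences of 7 tokens
    print(encoder(dummy).shape)                       # torch.Size([2, 1024])
```

In an SNLI training setup, such an encoder would typically be applied to premise and hypothesis separately and the pooled vectors combined for a three-way entailment classifier; that usage is likewise an assumption here.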