Text Length Adaptation in Sentiment Classification
Proceedings of The Eleventh Asian Conference on Machine Learning, PMLR 101:646-661, 2019.
Abstract
Can a text classifier generalize well across datasets whose text lengths differ? For example, when short reviews are sentiment-labeled, can a model trained on them predict the sentiment of long reviews (i.e., short-to-long transfer), or vice versa? While unsupervised transfer learning has been well studied for cross-domain and cross-lingual tasks, Cross Length Transfer (CLT) has not yet been explored. One reason is the assumption that a length difference is trivially transferable in classification. We show that it is not, because short and long texts differ in context richness and word intensity. We devise new benchmark datasets from diverse domains and languages, and show that existing models from similar tasks cannot handle the unique challenge of transferring across text lengths. We introduce a strong baseline model called BaggedCNN that treats long texts as bags containing short texts. We propose a state-of-the-art CLT model called Length Transfer Networks (LeTraNets), which introduces a two-way encoding scheme for short and long texts using multiple training mechanisms. We test our models and find that existing models perform worse than the BaggedCNN baseline, while LeTraNets outperforms all models.
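To make the "bags containing short texts" idea concrete, below is a minimal sketch of a BaggedCNN-style model in PyTorch. It is an illustration under assumptions, not the authors' implementation: the class names (ShortTextCNN, BaggedCNN), the mean-pooling aggregation over segments, and all hyperparameters are hypothetical choices consistent with the abstract's description of a shared short-text encoder applied to segments of a long text.

```python
# Hypothetical sketch of a "bag of short texts" classifier in the spirit of
# BaggedCNN. Names, hyperparameters, and the mean-pooling aggregator are
# illustrative assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn


class ShortTextCNN(nn.Module):
    """1-D conv encoder for one short text (or one segment of a long text)."""

    def __init__(self, vocab_size, emb_dim=128, num_filters=100, kernel_size=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size, padding=1)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)    # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))              # (batch, num_filters, seq_len)
        return x.max(dim=2).values                # max-over-time pooling


class BaggedCNN(nn.Module):
    """Encodes each short segment with a shared CNN, then averages the
    segment vectors, so a long text is treated as a bag of short texts."""

    def __init__(self, vocab_size, num_classes=2, num_filters=100):
        super().__init__()
        self.encoder = ShortTextCNN(vocab_size, num_filters=num_filters)
        self.classifier = nn.Linear(num_filters, num_classes)

    def forward(self, segments):                  # segments: (batch, n_segs, seq_len)
        b, n, s = segments.shape
        seg_vecs = self.encoder(segments.view(b * n, s)).view(b, n, -1)
        return self.classifier(seg_vecs.mean(dim=1))  # aggregate the bag
```

Because the same encoder handles a single segment (a short text) and a bag of segments (a long text), a model of this shape can in principle be trained on one length regime and applied to the other, which is the transfer setting the abstract describes.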