
An In-Depth Experimental Comparison of RNTNs and CNNs for Sentence Modeling

Marcin Skowron, Zahra Ahmadi, Aleksandrs Stier, Stefan Kramer

subject

Structure (mathematical logic), Vanishing gradient problem, Phrase, Computer science, Deep learning, Pattern recognition, Convolutional neural network, Set (abstract data type), Benchmark (computing), Artificial intelligence, Sentence

description

The goal of modeling sentences is to accurately represent their meaning for different tasks. A variety of deep learning architectures have been proposed to model sentences; however, little is known about their comparative performance on common ground, across a variety of datasets, and at the same level of optimization. In this paper, we provide such a novel comparison for two popular architectures, Recursive Neural Tensor Networks (RNTNs) and Convolutional Neural Networks (CNNs). Although RNTNs have been shown to work well in many cases, they require intensive manual labeling of the internal tree nodes due to the vanishing gradient problem. To enable an extensive comparison of the two architectures, this paper employs two methods to label the internal nodes automatically: a rule-based method and, integrated into the RNTN itself, a convolutional neural network. This allows us to compare these RNTN models to a relatively simple CNN architecture. Experiments conducted on a set of benchmark datasets demonstrate that the CNN outperforms the RNTNs based on automatic phrase labeling, whereas the RNTN based on manual labeling outperforms the CNN. The results corroborate that CNNs already offer good predictive performance, while more research on RNTNs is needed to further exploit sentence structure.
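To make the two model families concrete, the following minimal numpy sketch illustrates their core operations: the tensor-based composition an RNTN applies at each internal node of a parse tree (following the standard Socher et al. formulation), and a single max-over-time pooled convolutional feature of the kind used in simple CNN sentence models. The dimensions, initializations, and helper names here are hypothetical illustrations, not the exact architectures evaluated in the paper.

```python
import numpy as np

def rntn_compose(left, right, V, W, bias):
    """Compose two child vectors into a parent vector with a tensor layer.

    left, right : (d,) child embeddings
    V           : (d, 2d, 2d) composition tensor, one 2d x 2d slice per
                  output dimension
    W           : (d, 2d) standard recursive-layer weights
    bias        : (d,) bias
    """
    h = np.concatenate([left, right])               # (2d,)
    tensor_term = np.einsum("i,kij,j->k", h, V, h)  # h^T V[k] h per slice
    return np.tanh(tensor_term + W @ h + bias)

def cnn_max_pool_feature(E, filt):
    """One convolutional feature: slide a (w, d) filter over the (n, d)
    sentence embedding matrix E and take the max over all windows."""
    w = filt.shape[0]
    responses = [np.tanh(np.sum(E[i:i + w] * filt))
                 for i in range(E.shape[0] - w + 1)]
    return max(responses)

# Toy usage with hypothetical dimensions.
rng = np.random.default_rng(0)
d = 8
a, b = rng.normal(size=d), rng.normal(size=d)       # two child vectors
V = rng.normal(size=(d, 2 * d, 2 * d)) * 0.01
W = rng.normal(size=(d, 2 * d)) * 0.01
parent = rntn_compose(a, b, V, W, np.zeros(d))      # vector for phrase (a b)

E = rng.normal(size=(10, d))                        # a 10-word "sentence"
filt = rng.normal(size=(3, d)) * 0.1                # one trigram filter
feature = cnn_max_pool_feature(E, filt)
print(parent.shape, round(float(feature), 4))
```

The contrast visible in the sketch mirrors the paper's comparison: the RNTN needs a supervision signal (a label) at every internal `rntn_compose` call, which is exactly what the rule-based and CNN-based automatic labeling methods are meant to supply, whereas the CNN feature needs only sentence-level labels.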

https://doi.org/10.1007/978-3-319-67786-6_11