
Conference proceeding

G. Weiss, “Learning Representations for Text-level Discourse Parsing,” in Proceedings of the ACL-IJCNLP 2015 Student Research Workshop, 2015, pp. 16–21.

Abstract

In the proposed doctoral work, we will design an end-to-end approach for the challenging NLP task of text-level discourse parsing. Instead of relying mostly on hand-engineered sparse features and independent components for each subtask, we propose a unified approach based entirely on deep learning architectures. To train better dense vector representations that capture the communicative functions and semantic roles of discourse units and the relations between them, we will jointly learn all discourse parsing subtasks at different layers of our stacked architecture and share their intermediate representations. By combining unsupervised training of word embeddings and related NLP tasks with our guided layer-wise multi-task learning of higher representations, we hope to reach or even surpass the performance of current state-of-the-art methods on annotated English corpora.
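To make the layer-wise multi-task idea concrete, the sketch below shows one way such a stacked architecture could share intermediate representations across discourse subtasks: a lower recurrent layer feeds a token-level segmentation head while a deeper layer feeds a relation-labelling head, and the two losses are summed so both subtasks shape the shared layers. This is a minimal illustration in modern PyTorch, not the author's implementation; all module names, layer sizes, and the two example subtasks are hypothetical.

```python
import torch
import torch.nn as nn

class StackedDiscourseModel(nn.Module):
    """Hypothetical stacked model: lower layers serve an easier subtask,
    deeper layers a harder one, so intermediate representations are shared."""
    def __init__(self, vocab_size=10000, emb_dim=100, hidden_dim=128,
                 n_segment_tags=3, n_relation_labels=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)  # pretrained word embeddings could be loaded here
        self.layer1 = nn.GRU(emb_dim, hidden_dim, batch_first=True)     # shared lower layer
        self.layer2 = nn.GRU(hidden_dim, hidden_dim, batch_first=True)  # shared higher layer
        self.segment_head = nn.Linear(hidden_dim, n_segment_tags)       # subtask 1: per-token segmentation
        self.relation_head = nn.Linear(hidden_dim, n_relation_labels)   # subtask 2: sequence-level relation label

    def forward(self, token_ids):
        x = self.embed(token_ids)      # (batch, seq, emb_dim)
        h1, _ = self.layer1(x)         # lower representation, used by the segmentation head
        h2, _ = self.layer2(h1)        # higher representation, used by the relation head
        segment_logits = self.segment_head(h1)           # per-token tag scores
        relation_logits = self.relation_head(h2[:, -1])  # scores from the final hidden state
        return segment_logits, relation_logits

# Joint training: summing the subtask losses lets gradients from both tasks
# update the shared layers (dummy data for illustration only).
model = StackedDiscourseModel()
tokens = torch.randint(0, 10000, (4, 25))
seg_gold = torch.randint(0, 3, (4, 25))
rel_gold = torch.randint(0, 20, (4,))
seg_logits, rel_logits = model(tokens)
loss = (nn.functional.cross_entropy(seg_logits.reshape(-1, 3), seg_gold.reshape(-1))
        + nn.functional.cross_entropy(rel_logits, rel_gold))
loss.backward()
```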