JoinER-BART: Joint Entity and Relation Extraction with Constrained Decoding, Representation Reuse and Fusion

Abstract

Joint Entity and Relation Extraction (JERE) is an important research direction in Information Extraction (IE). Given the strong performance of fine-tuned pre-trained BERT across a wide range of NLP tasks, most studies on JERE are currently based on the BERT model. Rather than predicting a simple tag for each word, these approaches usually have to design complex tagging schemes, since entity-relation pairs that overlap with one another must be extracted from the same sequence of word representations in a sentence. Recently, sequence-to-sequence (seq2seq) pre-trained BART models have shown better performance than BERT models on many NLP tasks. Importantly, a seq2seq BART model can simply generate sequences of (many) entity-relation triplets with its decoder, rather than just tag input words. In this paper, we present a new generative JERE framework based on pre-trained BART. Different from the basic seq2seq BART architecture: 1) our framework employs a constrained classifier which, at each decoding step, predicts only either a token of the input sentence or a relation, and 2) we reuse representations from the pre-trained BART encoder in the classifier instead of a newly trained weight matrix, as this better exploits the knowledge of the pre-trained model and context-aware representations for classification, and empirically leads to better performance. In our experiments on the widely studied NYT and WebNLG datasets, our approach outperforms previous studies and establishes a new state of the art (92.91 and 91.37 F1, respectively, under exact match evaluation).
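To make the constrained decoding step more concrete, below is a minimal PyTorch-style sketch of one plausible reading of such a classifier; it is an illustrative assumption, not the paper's implementation, and all names (e.g. ConstrainedCopyRelationClassifier) are hypothetical. At each decoding step, the decoder state is scored only against the reused BART encoder representations of the input tokens (pointer-style copying) and a small set of relation label embeddings, rather than against the full generation vocabulary.

```python
import torch
import torch.nn as nn


class ConstrainedCopyRelationClassifier(nn.Module):
    """Sketch: restrict each decoding step's output space to
    [input-sentence tokens ; relation labels].

    Input tokens are scored by dot product with the reused BART encoder
    representations (no newly trained output matrix for them); relations
    are scored against a small learned embedding table.
    """

    def __init__(self, hidden_size: int, num_relations: int):
        super().__init__()
        # Only the relation labels need new parameters.
        self.relation_embeddings = nn.Parameter(
            torch.randn(num_relations, hidden_size) * 0.02
        )

    def forward(self, decoder_state, encoder_states, src_mask):
        """
        decoder_state:  (batch, hidden)          current decoder hidden state
        encoder_states: (batch, src_len, hidden) reused BART encoder outputs
        src_mask:       (batch, src_len) bool    True for real (non-pad) tokens
        Returns logits over [src positions ; relation labels].
        """
        # Scores for copying an input token: similarity between the decoder
        # state and the reused, context-aware encoder representations.
        token_logits = torch.einsum("bh,bsh->bs", decoder_state, encoder_states)
        token_logits = token_logits.masked_fill(~src_mask, float("-inf"))

        # Scores for emitting a relation label.
        relation_logits = decoder_state @ self.relation_embeddings.t()

        # Constrained output space: input tokens first, then relations.
        return torch.cat([token_logits, relation_logits], dim=-1)
```

Under this sketch, a predicted index smaller than the source length copies the input token at that position, while a larger index selects a relation label (index minus source length), so the decoder can only emit triplet components drawn from the sentence or the relation inventory.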

Publication
IEEE/ACM Transactions on Audio, Speech, and Language Processing
Hongfei Xu
Researcher
Josef van Genabith
Professor at German Research Center for Artificial Intelligence (DFKI)