Factual consistency is one of the most important requirements when editing high-quality documents, and it is critical for automatic text generation systems such as summarization, question answering, dialog modeling, and language modeling. Still, automated factual inconsistency detection is rather under-studied. Existing work has focused on (a) detecting fake news with a knowledge base in context, or (b) detecting broad contradiction (as part of the natural language inference literature). However, there has been no work on detecting and explaining the types of factual inconsistencies in text without any knowledge base in context. In this paper, we leverage existing work in linguistics to formally define five types of factual inconsistencies. Based on this categorization, we contribute a novel dataset, FICLE (Factual Inconsistency CLassification with Explanation), with \(\sim\)8K samples, where each sample consists of two sentences (claim and context) annotated with the type and span of the inconsistency. When the inconsistency relates to an entity type, it is additionally labeled at two levels (coarse and fine-grained).
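To make the annotation schema concrete, here is a minimal sketch of how a single FICLE sample might be represented in code. The field names (`claim_fact_triple`, `inconsistent_context_span`, etc.) and the example values are our own illustration under the assumption of a flat per-sample record, not the dataset's actual serialization format or label vocabulary.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FICLESample:
    """One annotated (claim, context) pair; field names are illustrative."""
    claim: str
    context: str
    inconsistency_type: str                    # one of the five defined types
    claim_fact_triple: Tuple[str, str, str]    # (subject, relation, object) from the claim
    inconsistent_context_span: str             # span in the context that conflicts with the claim
    inconsistent_claim_component: str          # which part of the triple is inconsistent
    coarse_entity_type: Optional[str] = None   # set only for entity-related inconsistencies
    fine_entity_type: Optional[str] = None

# Hypothetical example: the claim's date conflicts with the context.
sample = FICLESample(
    claim="The treaty was signed in 1952.",
    context="The treaty was signed in 1949 after two years of negotiation.",
    inconsistency_type="entity",               # placeholder label, not an official type name
    claim_fact_triple=("treaty", "signed in", "1952"),
    inconsistent_context_span="1949",
    inconsistent_claim_component="object",
    coarse_entity_type="TIME",
    fine_entity_type="YEAR",
)
```

Making the two entity-type fields optional reflects the annotation scheme described above, where coarse and fine-grained entity labels apply only when the inconsistency relates to an entity.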
Further, we leverage this dataset to train a pipeline of four neural models that predicts the inconsistency type together with explanations, given a (claim, context) sentence pair. Explanations include the inconsistent claim fact triple, the inconsistent context span, the inconsistent claim component, and coarse- and fine-grained inconsistent entity types.
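The pipeline is described here only at this level of detail. Purely as an illustration of how four staged predictors could compose into a single explanation, the sketch below chains stub functions; the stage boundaries, function names, and stub outputs are our assumptions, not the actual model decomposition.

```python
from typing import NamedTuple, Tuple

class Explanation(NamedTuple):
    inconsistency_type: str
    claim_fact_triple: Tuple[str, str, str]
    context_span: str
    claim_component: str
    entity_types: Tuple[str, ...]   # (coarse, fine) when entity-related, else empty

# Stubs standing in for four trained neural models; real models would be
# span extractors / classifiers over the (claim, context) pair.
def model_context_span(claim: str, context: str) -> str:
    return "1949"                                   # stub output

def model_fact_triple(claim: str, context: str) -> Tuple[str, str, str]:
    return ("treaty", "signed in", "1952")          # stub output

def model_type_and_component(claim: str, context: str, span: str,
                             triple: Tuple[str, str, str]) -> Tuple[str, str]:
    return ("entity", "object")                     # stub output

def model_entity_types(span: str, context: str) -> Tuple[str, ...]:
    return ("TIME", "YEAR")                         # stub output

def explain(claim: str, context: str) -> Explanation:
    """Chain the four stages into one explanation for a (claim, context) pair."""
    span = model_context_span(claim, context)
    triple = model_fact_triple(claim, context)
    itype, component = model_type_and_component(claim, context, span, triple)
    entities = model_entity_types(span, context) if itype == "entity" else ()
    return Explanation(itype, triple, span, component, entities)

print(explain("The treaty was signed in 1952.",
              "The treaty was signed in 1949 after two years of negotiation."))
```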