WP2. Commonsense knowledge-enhanced Natural Language Generation
The main goal of this WP is to analyse and propose novel, cost-effective NLG approaches that integrate commonsense knowledge within the generation process (in our case, the knowledge acquired from WP1), so that a new generation of commonsense-aware NLG systems can be produced. To this end, several tasks related to NLG architectures and to the integration of knowledge into them are proposed and detailed next. Successfully completing all of them will lead to the achievement of objectives OB3 and OB5.
Task 2.1 Definition and adaptation of language representation models for Natural Language Generation
Based on the analysis of existing language infrastructure conducted in WP1, the objective here is to determine: 1) which models are more appropriate and effective for representing human language in each of the sub-tasks involved in the NLG process (Vicente et al., 2015), typically macroplanning, microplanning and surface realisation; and 2) how they can be adapted to obtain specific language models depending on the targeted NLG tasks to be addressed, e.g., MarIA, the massive model for the Spanish language mentioned in WP1. Additionally, linguistic variables such as the communicative intention of the message to be created will also be considered when comparing the different models that could be employed in the NLG tasks. Including this feature would enable the form of the generated text to change automatically depending on the intention to be accomplished. Consequently, this task will also analyse to what extent pragmatic features of language, such as communicative intentions, determine the linguistic elements that the generated text should include. This will make it possible to narrow down the generation process to produce content that is aware of its pragmatic context, going beyond the lexical, syntactic and semantic features used so far in the state of the art.
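As a minimal sketch of how a communicative intention could steer the form of the output, the snippet below prepends a pragmatic control token to the model input, in the style of controllable language models; the intention labels and token format are hypothetical, not part of the project's design.

```python
# Hypothetical control tokens for communicative intentions; a model
# fine-tuned on intention-labelled data could learn to vary the surface
# form of the output depending on which token it receives.
INTENTION_TOKENS = {
    "inform": "<inform>",
    "persuade": "<persuade>",
    "instruct": "<instruct>",
}

def build_conditioned_input(intention: str, content: str) -> str:
    """Prepend a pragmatic control token so that the same content can be
    realised differently depending on the communicative intention."""
    if intention not in INTENTION_TOKENS:
        raise ValueError(f"Unknown intention: {intention!r}")
    return f"{INTENTION_TOKENS[intention]} {content}"

print(build_conditioned_input("persuade", "the museum reopens on Monday"))
# <persuade> the museum reopens on Monday
```

The same mechanism generalises to other pragmatic variables (register, politeness, audience) by extending the token inventory.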
Milestone: Language representation models for NLG together with the features they encode.
Task 2.2 Analysis and comparison of Natural Language Generation architecture types
The goal of this task is to find a flexible yet effective and efficient NLG architecture. The architecture of an NLG approach determines how the aforementioned sub-tasks (i.e., macroplanning, microplanning and surface realisation) are integrated in the generation process. This task will explore and experiment with different types of architectures, including sequential (also known as pipeline), integrated (also called “end-to-end”), and hybrid ones. In pipeline architectures, the different sub-tasks are undertaken in separate modules, whereas in integrated architectures, the whole process is performed jointly at once. Hybrid architectures could combine the advantages of both while minimising their limitations.
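The pipeline family can be sketched with toy modules as below; the data format and module logic are illustrative placeholders only, not the project's models. In an integrated architecture, by contrast, a single neural model would map the input facts directly to text.

```python
# Toy sketch of a sequential (pipeline) NLG architecture, with one
# module per sub-task. All logic is a deliberately simple stand-in.

def macroplanning(facts):
    # Content selection and ordering: decide WHAT to say.
    return [f for f in facts if f.get("relevant")]

def microplanning(selected):
    # Lexicalisation and aggregation: decide HOW to say it.
    return [f"{f['subject']} {f['verb']} {f['object']}" for f in selected]

def surface_realisation(sentences):
    # Final linguistic realisation: produce well-formed output text.
    return ". ".join(s.capitalize() for s in sentences) + "."

def pipeline_nlg(facts):
    # Sequential architecture: each sub-task runs in a separate module.
    return surface_realisation(microplanning(macroplanning(facts)))

facts = [
    {"subject": "the team", "verb": "won", "object": "the match", "relevant": True},
    {"subject": "the stadium", "verb": "holds", "object": "500 people", "relevant": False},
]
print(pipeline_nlg(facts))  # The team won the match.
```

The modular decomposition makes each stage inspectable and replaceable, which is precisely the property that end-to-end models trade away for joint optimisation, and that hybrid designs try to recover.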
Milestone: Benchmarking on NLG architectures.
Task 2.3 Proposal and development of knowledge integration approaches in Natural Language Generation architectures
The purpose of this task is to analyse how to incorporate commonsense knowledge into downstream NLG models. For this purpose, different options can be explored, including the following: i) directly encoding commonsense knowledge from structured knowledge bases as additional inputs to a neural network during generation; ii) indirectly encoding commonsense knowledge into the parameters of neural networks through pretraining on commonsense knowledge bases or explanations; or iii) using multitask objectives with commonsense relation prediction. In this task, we first plan to explore the use of knowledge already available in semantic networks (e.g. LETO (Estevez-Velarde et al., 2019)), as well as in other existing aforementioned resources (e.g. Atomic (Sap et al., 2019) or ConceptNet (Speer, Chin and Havasi, 2017)), but also including new knowledge obtained from WP1. In parallel, our aim is also to determine to what extent neural language models (e.g. Transformers) can be modified and fine-tuned to integrate commonsense knowledge during the NLG process.
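Option (i) above can be illustrated with a small sketch that verbalises ConceptNet-style triples and appends them to the generation input, so that a neural generator can attend to them; the relation templates and the `<knowledge>` separator token are assumptions for illustration, not the project's actual encoding scheme.

```python
# Hypothetical templates for verbalising a few ConceptNet-style relations.
RELATION_TEMPLATES = {
    "UsedFor": "{head} is used for {tail}",
    "AtLocation": "{head} is found at {tail}",
    "CapableOf": "{head} can {tail}",
}

def verbalise_triples(triples):
    """Turn (head, relation, tail) triples into natural-language facts."""
    return [RELATION_TEMPLATES[rel].format(head=h, tail=t) for h, rel, t in triples]

def augment_input(source_text, triples):
    # Concatenate the verbalised knowledge after a separator token, so the
    # generator receives both the source and the commonsense context.
    return source_text + " <knowledge> " + " ; ".join(verbalise_triples(triples))

triples = [("oven", "UsedFor", "baking"), ("oven", "AtLocation", "kitchen")]
print(augment_input("She turned on the oven.", triples))
# She turned on the oven. <knowledge> oven is used for baking ; oven is found at kitchen
```

Options (ii) and (iii) would instead bake such knowledge into the model itself, via pretraining on verbalised knowledge bases or via an auxiliary relation-prediction loss.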
Milestone: Proposal and development of a novel commonsense knowledge-enhanced NLG approach.
Task 2.4 Natural Language Generation Evaluation
The purpose of this task is to evaluate every intermediate or final result associated with the previous tasks. NLG approaches can be evaluated from different perspectives depending on the goal of the evaluation (Celikyilmaz et al., 2020). Within this context, extrinsic methods are those intended to determine whether the designed application achieves its objective, while intrinsic ones aim to examine the system’s performance and the quality of its output, regardless of the ultimate function for which the system was designed. Both modalities can include automatic or human evaluation but, according to recent studies, human evaluation is considered the more reliable strategy for assessing a system (Van der Lee et al., 2021), so this type of evaluation will be prioritised as much as possible. Moreover, existing and new challenges and shared tasks that fit within our scope will also be used as a means of evaluating and comparing our NLG approaches against other methods developed by the research community under the same conditions. Finally, we do not rule out defining and proposing a new shared task focused on knowledge-enhanced NLG to promote research on this topic.
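As a minimal illustration of intrinsic automatic evaluation, the sketch below computes a clipped unigram precision between a system output and a reference, standing in for full metrics such as BLEU or ROUGE; it is a didactic simplification, not the evaluation protocol the project will adopt.

```python
# Minimal intrinsic automatic metric: clipped unigram precision, i.e. the
# fraction of candidate tokens also present in the reference, with counts
# clipped as in BLEU's modified n-gram precision.
from collections import Counter

def unigram_precision(candidate: str, reference: str) -> float:
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clip each candidate count by its count in the reference.
    overlap = sum(min(count, ref[word]) for word, count in cand.items())
    return overlap / max(sum(cand.values()), 1)

score = unigram_precision("the cat sat on the mat", "the cat is on the mat")
print(round(score, 2))  # 0.83
```

Automatic scores of this kind are cheap to compute at scale, but, as noted above, they correlate imperfectly with quality, which is why human evaluation will be prioritised whenever feasible.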
Milestone: Obtain competitive or better results with our proposed NLG approaches, advancing the state of the art and putting Spanish research at the forefront of NLP.
This research work is part of the R&D project “PID2021-123956OB-I00”, funded by MCIN/ AEI/10.13039/501100011033/ and by “ERDF A way of making Europe”.