References

Barros, C., Vicente, M., & Lloret, E. (2019). Tackling the challenge of computational identification of characters in fictional narratives. In 2019 IEEE International Conference on Cognitive Computing (ICCC), 122–129. IEEE. 

Bena, B., & Kalita, J. (2019). Introducing aspects of creativity in automatic poetry generation. In Proceedings of the 16th International Conference on Natural Language Processing, 26–35. 

Bosselut, A., Rashkin, H., Sap, M., Malaviya, C., Celikyilmaz, A., & Choi, Y. (2019). COMET: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 4762–4779.

Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., … & Amodei, D. (2020). Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, & H. Lin (Eds.), Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 1877–1901.

Celikyilmaz, A., Clark, E., & Gao, J. (2020). Evaluation of text generation: A survey. arXiv preprint, 1–75.

Chakrabarty, T., Zhang, X., Muresan, S., & Peng, N. (2021). MERMAID: Metaphor generation with symbolism and discriminative decoding. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 4250–4261. 

Consuegra-Ayala, J. P., Gutiérrez, Y., Piad-Morffis, A., Almeida-Cruz, Y., & Palomar, M. (2021). Automatic extension of corpora from the intelligent ensembling of eHealth knowledge discovery systems outputs. Journal of Biomedical Informatics, 116, 1–16.

Dale, R. (2020). Natural language generation: The commercial state of the art in 2020. Natural Language Engineering, 26(4), 481–487.

Dathathri, S., Madotto, A., Lan, J., Hung, J., Frank, E., Molino, P., … & Liu, R. (2020). Plug and play language models: A simple approach to controlled text generation. In the Eighth International Conference on Learning Representations (ICLR 2020), 1–34.

Estevez-Velarde, S., Montoyo, A., Almeida-Cruz, Y., Gutiérrez, Y., Piad-Morffis, A., & Muñoz, R. (2019). Demo application for LETO: Learning engine through ontologies. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), 276–284.

Färber, S., & Färber, M. (2015). Fairy tales and wonderful stories as a pedagogical proposal for the elaboration of losses. European Psychiatry, 30, 1642.

Feder, A., Oved, N., Shalit, U., & Reichart, R. (2021). CausaLM: Causal model explanation through counterfactual language models. Computational Linguistics, 47(2), 333–386.

Fellbaum, C. (Ed.). (1998). WordNet: An electronic lexical database. Cambridge, MA: MIT Press. 

Ferreira, T. C., van der Lee, C., van Miltenburg, E., & Krahmer, E. (2019). Neural data-to-text generation: A comparison between pipeline and end-to-end architectures. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 552–562.

Floridi, L., & Chiriatti, M. (2020). GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30(4), 681–694.

Goleman, D. (1995). Emotional Intelligence. New York: Bantam Books. 

Hämäläinen, M., & Alnajjar, K. (2021). The great misalignment problem in human evaluation of NLP methods. In Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval), 69–74.

Kulikovskaya, I. E., & Andrienko, A. A. (2016). Fairy-tales for modern gifted preschoolers: Developing creativity, moral values and coherent world outlook. Procedia-Social and Behavioral Sciences, 233, 53–57.

Kurup, L., Narvekar, M., Sarvaiya, R., & Shah, A. (2021). Evolution of neural text generation: Comparative analysis. In S. K. Bhatia, S. Tiwari, S. Ruidan, M. C. Trivedi, & K. K. Mishra (Eds.), Advances in Computer, Communication and Computational Sciences, 1158, 795–804. Springer, Singapore.

Lakoff, G., & Johnson, M. (1980). Metaphors We Live By. Chicago and London: University of Chicago Press. 

Lau, J. H., Cohn, T., Baldwin, T., Brooke, J., & Hammond, A. (2018). Deep-speare: A joint neural model of poetic language, meter and rhyme. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1948–1958. 

Leng, Y., Portet, F., Labbé, C., & Qader, R. (2020). Controllable neural natural language generation: Comparison of state-of-the-art control strategies. In WebNLG+: 3rd Workshop on Natural Language Generation from the Semantic Web, 1–7.

Lenat, D. B. (1995). CYC: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11), 33–38. 

Navarro, B., Lafoz, M. R., & Sánchez, N. (2016). Metrical annotation of a large corpus of Spanish sonnets: Representation, scansion and evaluation. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), 4360–4364.

Navigli, R., & Ponzetto, S. P. (2010). BabelNet: Building a very large multilingual semantic network. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL ’10), 216–225.

Odabasi, B., Karakus, E., & Murat, M. (2012). The usage of tales (ELVES approach) as a new approach in analytic intelligence development and pedagogy methods. Procedia-Social and Behavioral Sciences, 47, 460–469.

Odebrecht, C., Burnard, L., Navarro-Colorado, B., Eder, M., & Schöch, C. (2019). The European Literary Text Collection (ELTeC). In Digital Humanities Conference (DH2019).

Papay, S., & Padó, S. (2020). RiQuA: A corpus of rich quotation annotation for English literary text. In Proceedings of the 12th International Conference on Language Resources and Evaluation (LREC), 835–841.

Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training (preprint). https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf

Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., … & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140), 1–67.

Rai, S., & Chakraverty, S. (2020). A survey on computational metaphor processing. ACM Computing Surveys (CSUR), 53(2), 1–37.

Reiter, E., & Dale, R. (2000). Building Natural Language Generation Systems (Studies in Natural Language Processing). Cambridge: Cambridge University Press.

Reiter, E., Sripada, S., Hunter, J., Yu, J., & Davy, I. (2005). Choosing words in computer-generated weather forecasts. Artificial Intelligence, 167(1–2), 137–169.

Rohrbach, A., Hendricks, L. A., Burns, K., Darrell, T., & Saenko, K. (2018). Object hallucination in image captioning. arXiv preprint (CoRR), 1–11.

Santillan, M. C., & Azcarraga, A. P. (2020). Poem generation using transformers and Doc2Vec embeddings. In 2020 International Joint Conference on Neural Networks (IJCNN), 1–7. IEEE. 

Sap, M., Le Bras, R., Allaway, E., Bhagavatula, C., Lourie, N., Rashkin, H., Roof, B., Smith, N. A., & Choi, Y. (2019). ATOMIC: An atlas of machine commonsense for if-then reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 3027–3035. 

Sheng, E., Chang, K.-W., Natarajan, P., & Peng, N. (2021). Societal biases in language generation: Progress and challenges. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 4275–4293. Association for Computational Linguistics.

Shwartz, V., & Choi, Y. (2020). Do neural language models overcome reporting bias? In Proceedings of the 28th International Conference on Computational Linguistics, 6863–6870.

Sims, M., Park, J. H., & Bamman, D. (2019). Literary event detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3623–3634. 

Speer, R., Chin, J., & Havasi, C. (2017). ConceptNet 5.5: An open multilingual graph of general knowledge. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI’17), 4444–4451. AAAI Press.

Suraperwata, R. H., & Suyanto, S. (2020). Language modeling for journalistic robot based on generative pretrained transformer 2. In 2020 8th International Conference on Information and Communication Technology (ICoICT), 1–6. IEEE. 

Syed, A. A., Gaol, F. L., & Matsuo, T. (2021). A survey of the state-of-the-art models in neural abstractive text summarization. IEEE Access, 9, 13248–13265. 

Uekermann, J., Kraemer, M., Abdel-Hamid, M., Schimmelmann, B. G., Hebebrand, J., Daum, I., … & Kis, B. (2010). Social cognition in attention-deficit hyperactivity disorder (ADHD). Neuroscience & Biobehavioral Reviews, 34(5), 734–743.

van der Lee, C., Gatt, A., van Miltenburg, E., & Krahmer, E. (2020). Human evaluation of automatically generated text: Current trends and best practice guidelines. Computer Speech & Language, 1–24. 

van Heerden, I., & Bas, A. (2021). AI as author – Bridging the gap between machine learning and literary theory. Journal of Artificial Intelligence Research, 71, 175–189. 

Vicente, M., Barros, C., & Lloret, E. (2018). Statistical language modelling for automatic story generation. Journal of Intelligent & Fuzzy Systems, 34(5), 3069–3079. 

Vicente, M., Barros, C., Peregrino, F. S., Agulló, F., & Lloret, E. (2015). La generación de lenguaje natural: Análisis del estado actual [Natural language generation: Analysis of the current state of the art]. Computación y Sistemas, 19(4), 721–756.

Wang, J., Zhang, X., Zhou, Y., Suh, C., & Rudin, C. (2021). There once was a really bad poet, it was automated but you didn’t know it. Transactions of the Association for Computational Linguistics, 9, 605–620. 

Wang, Z., Duan, Z., Zhang, H., Wang, C., Tian, L., Chen, B., & Zhou, M. (2020). Friendly topic assistant for transformer based abstractive summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 485–497. 

Zellers, R., Holtzman, A., Bisk, Y., Farhadi, A., & Choi, Y. (2019). HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 4791–4800.