Smaranda Muresan
Columbia University
Talk Title: Knowledge-enhanced Text Generation: The Curious Case of Figurative Language and Argumentation

Research in computational models for understanding figurative language and argumentation has seen a lot of progress in recent years. However, generation models for these tasks have been under-explored. There are two main challenges we have to address to make progress in this space: 1) the need to model common sense and/or connotative knowledge required for these tasks; and 2) the lack of large training data. I will present some of our recent work on knowledge-enhanced text generation for figurative language such as metaphor and simile, as well as argument reframing and enthymeme reconstruction. I will conclude by discussing opportunities and remaining challenges for incorporating knowledge in neural text generation systems.

Speaker Bio: Smaranda Muresan is a Research Scientist at the Data Science Institute and the Department of Computer Science at Columbia University and an Amazon Scholar. Before joining Columbia, she was a faculty member in the School of Communication and Information at Rutgers University, where she co-founded the Laboratory for the Study of Applied Language Technologies and Society. At Rutgers, she was the recipient of the Distinguished Achievements in Research Award. Her research interests are in computational semantics and discourse, particularly figurative language understanding and generation, argument mining and generation, and fact-checking. Most recently, she has been interested in applying NLP to education and public health, as well as in building NLP technologies for low-resource languages. She received best paper awards at SIGDIAL 2017 and ACL 2018 (short paper). She is currently serving as a board member of the North American Chapter of the Association for Computational Linguistics (NAACL) and will serve as a Program Co-Chair for ACL 2022.

Jonathan Berant
Tel Aviv University / Allen Institute for AI
Talk Title: Neuro-symbolic Models for Understanding Complex Questions

Questions that require integrating multiple pieces of information are important both from an applicative point of view, as such questions naturally arise in real life, and from a scientific point of view, as they allow us to test the reasoning abilities of our models. In this talk, I will describe symbolic representations for complex questions and will demonstrate their utility for multiple purposes. First, for improving compositional generalization, that is, the ability of question answering models to handle new compositions that did not occur at training time. Second, for achieving faithfulness, that is, for validating that the computation performed by a neural model indeed corresponds to our human intuitions. Last, for evaluation and robustness, that is, to automatically generate synthetic examples that evaluate and can improve model robustness.

Speaker Bio: Jonathan Berant is an associate professor at the School of Computer Science at Tel Aviv University and a research scientist at the Allen Institute for AI. Jonathan earned a Ph.D. in Computer Science at Tel Aviv University under the supervision of Prof. Ido Dagan. He was a post-doctoral fellow at Stanford University, working with Prof. Christopher Manning and Prof. Percy Liang, and subsequently a post-doctoral fellow at Google Research, Mountain View. Jonathan received several awards and fellowships, including the Rothschild fellowship, the ACL 2011 best student paper award, the EMNLP 2014 best paper award, and the NAACL 2019 best resource paper award, as well as several honorable mentions. He is currently an ERC grantee.

Antoine Bosselut
Stanford University
Talk Title: Symbolic Scaffolds for Neural Commonsense Representation and Reasoning

Situations described using natural language are richer than what humans explicitly communicate. For example, the sentence "She pumped her fist" connotes many potential auspicious causes. For machines to understand natural language, they must be able to make commonsense inferences that go beyond explicitly stated information. However, current NLP systems lack the ability to ground the situations they encounter to relevant world knowledge. Moreover, they struggle to reason over available facts to robustly generalize to future unseen events. In this talk, I will describe efforts at measuring the degree of commonsense knowledge already encoded by large-scale language models, and discuss how this understanding motivates the design of commonsense reasoning interfaces for NLP systems.

Speaker Bio: Antoine Bosselut is a Postdoctoral Scholar at Stanford University and a Young Investigator at the Allen Institute for AI (AI2). He will join the École Polytechnique Fédérale de Lausanne (EPFL) as an Assistant Professor in 2021. He received his PhD from the University of Washington in 2020. He was recently named to the Forbes 30 Under 30 list for Science and Healthcare. His research is on building knowledge-aware NLP systems, specializing in commonsense representation and reasoning.

Program Committee

Arun Iyer, Microsoft Research India
Aws Albarghouthi, University of Wisconsin-Madison
Chandra Bhagavatula, AI2
Charles Sutton, Google Brain
Kartik Talamadupula, IBM Research
Leon Weber, Humboldt University of Berlin
Matko Bošnjak, UCL
Robin Manhaeve, KU Leuven
Thomas Demeester, Ghent University