Speakers

Andreas Vlachos (University of Cambridge)

Talk title: Fact-checking as a conversation

Abstract: Misinformation is considered one of the major challenges of our time, resulting in numerous efforts against it. Fact-checking, the task of assessing whether a claim is true or false, is considered key to reducing its impact. In the first part of this talk, I will present our recent and ongoing work on automating this task using natural language processing, moving beyond simply classifying claims as true or false in the following aspects: incorporating tabular information, neurosymbolic inference, and using a search engine as a source of evidence. In the second part of this talk, I will present an alternative approach to combating misinformation via dialogue agents, and present results on how internet users engage in constructive disagreements and problem-solving deliberation.

Bio: Andreas Vlachos is a Professor of Natural Language Processing and Machine Learning in the Department of Computer Science and Technology at the University of Cambridge. He is a well-known NLP researcher working on several areas of NLP and ML, including dialogue modelling, automated fact-checking, imitation learning, semantic parsing, natural language generation and summarization, language modelling, information extraction, active learning, clustering, and biomedical text mining. His research is supported by funders including the ERC, EPSRC, ESRC, Facebook, Amazon, Google, and Huawei.

Asahi Ushio (Amazon)

Talk title: Bridging the Gap between Text and Speech with Recent Speech Foundation Models 

Abstract: Large-scale language modeling is one of the most impactful research topics in modern NLP, and it has recently been applied to other modalities as well. This talk provides a brief introduction to speech foundation models built on language modeling. Such models are not only superior to traditional baselines; more importantly, their unified interface between text and speech improves multimodal applications such as text-to-speech, automatic speech recognition and speech-to-speech translation. Speech and text are closely related, as both play important roles in human communication, and there are many areas where NLP research can contribute to speech applications and analysis. However, speech has its own research literature rooted in signal processing, and its data structure is unique, which can make it difficult for NLP researchers to step into the speech domain. We hope this talk can serve as a useful guide for NLP experts who are curious about speech, and in particular give junior researchers and students ideas for potential directions of future research.

Bio: Asahi Ushio is a scientist at Amazon working on information retrieval, especially product search. Aside from his role at Amazon, Asahi collaborates with Kotoba tech, an AI startup based in Tokyo, on developing bilingual speech foundation models for Japanese and English. He received his PhD from Cardiff University in December 2023, where he studied language models' understanding of relational knowledge, question generation, and computational social science. During his PhD, he did research internships with the MusicLM team at Google, the computational social science team at Snap, and the search technology team at Amazon.

Anna Rogers (IT University of Copenhagen)

Talk title: A Sanity Check on Emergent Properties

Abstract: One of the frequent points in the mainstream narrative about large language models is that they have "emergent properties", but there is a lot of disagreement about what that even means. If they are understood as a kind of generalization beyond training data - as something that a model does without being explicitly trained for it - I argue that we have not in fact established the existence of any such properties, and at the moment we do not even have the methodology for doing so.

Bio: Anna Rogers is a tenured associate professor in the Computer Science department at the IT University of Copenhagen. Her research focuses on interpretability, robustness, and sociotechnical aspects of large language models. She is one of the current editors-in-chief of ACL Rolling Review.

Arkaitz Zubiaga (Queen Mary University of London)

Talk title: Broadening and customising abusive language detection

Abstract: Despite significant progress in developing NLP methods for abusive language detection, existing models and datasets still have numerous shortcomings, not least when it comes to addressing more challenging circumstances. This is the case, for example, with the generalisation of abusive language detection models, which have been shown to perform reasonably well on data similar to that seen during training, but to underperform when unseen data varies from it. I will discuss some of our recent and ongoing research on quantifying and addressing the limitations of generalising abusive language detection models. Likewise, perceptions of what constitutes abusive language vary across humans, which calls for considering individual opinions and preferences in its detection. I will discuss some of our initial efforts in this direction.

Bio: Arkaitz Zubiaga is a senior lecturer (associate professor) at Queen Mary University of London, where he leads the Social Data Science lab. His research lies at the intersection of computational social science and natural language processing. He is broadly interested in furthering NLP methods for mining social media and online data, with a core focus in recent years on tackling problematic issues on the Web and social media that can have a damaging effect on individuals or society at large, such as hate speech, misinformation, inequality, biases and other forms of online harm.

Javad Hosseini (Google DeepMind, UK)

Talk title: Synthetic data generation for domain generalization: The cases of Natural Language Inference and Proposition Segmentation

Abstract: Many NLP tasks are approached either by supervised training of language models on existing, human-annotated, task-specific data or by directly prompting large language models (LLMs), potentially using few-shot examples. Although models trained with supervision report high performance on existing benchmarks, their realistic performance on out-of-distribution data is not necessarily as good. In addition, few-shot prompting of LLMs is very costly to run at large scales, even if it shows good out-of-domain performance.

In this talk, I will present an approach for using LLMs to generate synthetic data for domain generalization of existing models. Our approach has three main steps: 1) generating synthetic raw text in many domains, covering different text lengths; 2) using the synthetic raw text and a teacher LLM trained on existing human-annotated data to generate similar data in those domains; 3) using the generated data to train scalable student models. We applied this recipe to two NLP tasks: Natural Language Inference, which has access to large human-annotated datasets, and Abstractive Proposition Segmentation, which has a relatively small set of annotated data. In both cases, we show that our approach can be used to train scalable student models that perform considerably better than models trained on the original human-annotated data, when tested on out-of-domain datasets.
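To make the recipe concrete, here is a minimal sketch of the three-step pipeline in Python. The function names, domains, and stubbed model calls are hypothetical placeholders for illustration only, not the speaker's actual implementation.

# Minimal sketch of the three-step synthetic-data recipe (hypothetical placeholders).
import random

DOMAINS = ["news", "reviews", "dialogue", "legal"]   # illustrative domains
LABELS = ["entailment", "neutral", "contradiction"]  # NLI-style labels

def generate_raw_text(domain, n):
    # Step 1: prompt an LLM for raw text in the given domain (stubbed here).
    return [f"[{domain}] synthetic passage #{i}" for i in range(n)]

def label_with_teacher(texts):
    # Step 2: a teacher LLM fine-tuned on human-annotated data turns raw text
    # into task-specific (input, label) pairs (stubbed with random labels here).
    return [(text, random.choice(LABELS)) for text in texts]

def train_student(examples):
    # Step 3: train a smaller, scalable student model on the generated data.
    print(f"Training student on {len(examples)} synthetic examples")

synthetic_data = []
for domain in DOMAINS:
    raw_text = generate_raw_text(domain, n=3)
    synthetic_data.extend(label_with_teacher(raw_text))
train_student(synthetic_data)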

Bio: Javad Hosseini is a research scientist at Google DeepMind, UK, working on problems related to the factuality of large language models. Before joining Google, Javad completed his PhD (2020) and spent time as a post-doctoral research associate at the Institute for Language, Cognition and Computation (ILCC), University of Edinburgh, under the supervision of Mark Steedman. He obtained an MSc in Computer Science from the University of Washington, and an MSc and BSc in Computer Software Engineering from Sharif University of Technology.

Nafise Sadat Moosavi (University of Sheffield)

Talk title: Numbers in the Mist: Towards Accurate Numerical Understanding in LMs

Abstract: This talk discusses the importance of numerical reasoning in general-purpose language models, emphasizing its critical role in various applications. It introduces the FERMAT benchmark [Sivakumar & Moosavi, 2023] for a more fine-grained evaluation of numerical reasoning, highlighting that even basic number understanding remains a challenge for language models due to the infinite variety of numbers and their limited representation during pretraining. It then explores two approaches to improve number understanding in language models: (1) employing a contrastive loss to utilize different encodings of the same number resulting from various tokenization methods [Petrak et al., 2023], and (2) explicitly incorporating a mathematically grounded aggregation of a number's embedding computed from the embeddings of its corresponding digits within the model [Sivakumar & Moosavi, 2024].
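As a rough illustration of the second approach, the sketch below composes a number's embedding from the embeddings of its digits. The place-value weighting used here is an illustrative assumption, not the exact formulation of Sivakumar & Moosavi (2024).

# Minimal sketch: representing a number by aggregating its digit embeddings.
import torch
import torch.nn as nn

class DigitAggregator(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.digit_embeddings = nn.Embedding(10, dim)  # one vector per digit 0-9

    def forward(self, number):
        digits = [int(d) for d in str(number)]
        vectors = self.digit_embeddings(torch.tensor(digits))
        # Weight each digit by its normalised place value, so e.g. 703 and 37
        # get different representations even though they share digits.
        places = torch.tensor([10.0 ** (len(digits) - 1 - i) for i in range(len(digits))])
        weights = places / places.sum()
        return (weights.unsqueeze(1) * vectors).sum(dim=0)

model = DigitAggregator()
print(model(703).shape)  # torch.Size([16])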

Bio: Dr. Nafise Sadat Moosavi is a Lecturer in Natural Language Processing at the University of Sheffield's Computer Science Department, a position she has held since February 2022. Previously, she was a postdoctoral researcher at the Technical University of Darmstadt. Her research focuses on improving language models by addressing reasoning, robustness, fairness, and efficiency. Additionally, she applies NLP techniques to computational social science, including framing bias detection and identifying dehumanizing language. Dr. Moosavi serves as a senior area chair or area chair for top NLP conferences such as ACL, EMNLP, EACL, and COLING, and coordinates the SustaiNLP workshop series.

Emanuele Bugliarello (Google DeepMind, France)

Talk title: An introduction to (multicultural) vision and language models

Abstract: Over the last couple of years, we have witnessed an increasing interest in multimodal models that complement language understanding and generation with sensory signals, such as images, videos and sounds. In this talk, I’ll provide an introduction to vision and language models, and specifically models that process images and text. After presenting PaliGemma, Google DeepMind’s latest open model, I’ll touch on typical multimodal evaluation tasks and show how current models fall short in multilingual and multicultural understanding on MaRVL. Finally, I’ll present two simple yet effective approaches that can help bridge the gap between English and global performance.

Bio: Emanuele is a research scientist at Google DeepMind working on multimodal machine learning. He received his PhD from the University of Copenhagen, where he worked on multilingual multimodal modelling and benchmarks. During his PhD, he spent time at Google Research, DeepMind, Mila, and Spotify Research. He received his master's degree from EPFL, and his bachelor's degrees from Politecnico di Torino and Tongji University.

Marie-Francine Moens (KU Leuven, Belgium)

Talk title: Spatial Language: How to Represent It and Reason with It?

Abstract: Spatial language understanding has multiple applications, for instance when giving natural language instructions to robots or autonomous vehicles, and in machine comprehension of narratives and dialogues. Whereas qualitative reasoning with symbolic representations was popular in the past, today we see a preference for quantitative reasoning with distributed representations, or for a mixture of both types of reasoning. We go deeper into the importance of spatial commonsense knowledge acquired by vision-language foundation models and into the current limitations of these models when involved in spatial reasoning. We conclude by discussing perspectives for future research.

Bio: Marie-Francine (Sien) Moens is the holder of the ERC Advanced Grant CALCULUS (2018-2024), awarded by the European Research Council. From 2012 to 2016 she was the coordinator of the MUSE project financed by the Future and Emerging Technologies (FET) - Open programme of the European Commission. Both projects concern natural language understanding. In 2021 she was the general chair of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2021). In 2011 and 2012 she was appointed chair of the European Chapter of the Association for Computational Linguistics (EACL) and was a member of the executive board of the Association for Computational Linguistics (ACL). She is currently an associate editor of the journal IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). From 2014 to 2018 she was the scientific manager of the EU COST action iV&L Net (The European Network on Integrating Vision and Language). In 2014 she was a Scottish Informatics and Computer Science Alliance (SICSA) Distinguished Visiting Fellow. She is a fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS).