Por Lucas Dixon (Google Research).
This talk is about the promises of the internet, toxic language, and the increasingly complex role of machine learning in society. I'll introduce the promise and challenges of toxicity detection, along with various applications that have emerged. This will provide a basis for introducing large machine learning models for language. While basic analysis shows they encompass many unhealthy societal biases, making them a complex tool to use productively, they are amazingly diverse in utility. Moreover, they are also heralding the promise of a new kind of important object for machine learning: small data. While we have become accustomed to big data, and its many challenges, large language models seem to offer a new kind of utility for smaller, high-quality, curated data, as well as a host of new applications. I'll be outlining some of the emerging challenges and opportunities.
Bio: Lucas is a scientist in Google Research and Area 120 working on understanding the long-term dynamics of recommendation systems. He is working on new ways to give users agency when interacting with recommendation systems, and exploring participatory approaches to the development of ML. Previously, he was Chief Scientist at Jigsaw, where he founded the engineering and scientific research efforts. While at Google and at Jigsaw, he has worked on a range of topics including security, formal logics, machine learning, and data visualization. For example, Lucas worked on uProxy/Outline and DigitalAttackMap, and led the development of the Syria Defection Tracker, unfiltered.news, Project Shield, Conversation AI, and the Perspective API. Before Google, Lucas completed his PhD and then worked at the University of Edinburgh on the automation of mathematical reasoning and graphical languages for quantum information. He also helped run a non-profit working towards more rational and informed discussion and decision making, and was a co-founder of TheoryMine.
Live stream via Zoom.