A Brief History of Large Language Models
15:30 - 16:00
Machine Learning models keep growing larger and training on more data. Neural scaling laws suggest they will keep improving as they grow, gaining new capabilities, many of them surprising even to researchers. In this talk we will explore the different kinds of Transformer models, from BERT to GPT-3, DALL-E 2, and PaLM, and how they can be used today and in the future.
Roland Szabo
Roland Szabo is a Machine Learning Consultant helping companies get started with Artificial Intelligence. He became interested in ML 10 years ago and has since worked on all kinds of projects related to it, from computer vision to natural language processing to time-series data, at large companies such as Google in Zurich and at small startups where he led the ML team.