Title: Fundamental Limits of Learning with Feedforward and Recurrent Neural Networks
Abstract
Deep neural networks have led to breakthrough results in numerous practical machine learning tasks, such as image classification, image captioning, control-policy learning for the board game Go, and, most recently, the prediction of protein structures. In this lecture, we will attempt to understand some of the structural and mathematical reasons driving these successes. Specifically, we study what is possible in principle when no constraints are imposed on the learning algorithm or on the amount and quality of training data. The guiding theme is the relation between the complexity of the objects to be learned and the complexity of the networks approximating them, with the central result stating that universal Kolmogorov-optimality is achieved by feedforward neural networks in function learning and by recurrent neural networks in dynamical system learning.
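For orientation, here is a minimal sketch of how Kolmogorov-optimality is typically formalized, following the Kolmogorov-Donoho rate-distortion framework used in deep-network approximation theory; the notation below (the class C, the code length L, the exponent gamma*) is illustrative and not taken from the lecture itself:

% Kolmogorov-Donoho rate-distortion framework (sketch, illustrative notation).
% For a compact function class \mathcal{C} \subset L^2(\Omega), let
% L(\varepsilon, \mathcal{C}) denote the minimax code length: the fewest bits
% any encoder-decoder pair needs to represent every f \in \mathcal{C} to
% accuracy \varepsilon in L^2(\Omega). The optimal exponent of the class is
\[
  \gamma^*(\mathcal{C}) \;=\; \sup\bigl\{ \gamma > 0 \,:\, L(\varepsilon, \mathcal{C}) \in O\bigl(\varepsilon^{-1/\gamma}\bigr),\ \varepsilon \to 0 \bigr\}.
\]
% A family of neural networks is Kolmogorov-optimal for \mathcal{C} if
% networks with O(\varepsilon^{-1/\gamma^*(\mathcal{C})}) nonzero, suitably
% quantized weights achieve approximation error \varepsilon uniformly over
% \mathcal{C}, i.e., the networks encode the class at the fundamental
% rate-distortion limit, which no description scheme can beat.

Universality, in this reading, refers to one and the same network family achieving this limit simultaneously across a wide range of function classes.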
Biography
Helmut Bölcskei was born in Mödling, Austria on May 29, 1970, and received the Dipl.-Ing. and Dr. techn. degrees in electrical engineering from Vienna University of Technology, Vienna, Austria, in 1994 and 1997, respectively. In 1998 he was with Vienna University of Technology. From 1999 to 2001 he was a postdoctoral researcher in the Information Systems Laboratory, Department of Electrical Engineering, and in the Department of Statistics, Stanford University, Stanford, CA. He was on the founding team of Iospan Wireless Inc., a Silicon Valley-based startup company (acquired by Intel Corporation in 2002) specializing in multiple-input multiple-output (MIMO) wireless systems for high-speed Internet access, and was a co-founder of Celestrius AG, Zurich, Switzerland. From 2001 to 2002 he was an Assistant Professor of Electrical Engineering at the University of Illinois at Urbana-Champaign. He has been with ETH Zurich since 2002, where he is a Professor of Mathematical Information Science in the Department of Electrical Engineering, also associated with the Department of Mathematics. Since 2021 he has also been associated with the Lagrange Mathematics and Computing Research Center, Paris, France. He was a visiting researcher at Philips Research Laboratories Eindhoven, The Netherlands, ENST Paris, France, and the Heinrich Hertz Institute Berlin, Germany. His research interests are in applied mathematics, machine learning theory, mathematical signal processing, data science, and statistics.
He received the 2001 IEEE Signal Processing Society Young Author Best Paper Award, the 2006 IEEE Communications Society Leonard G. Abraham Best Paper Award, the 2010 Vodafone Innovations Award, and the ETH "Golden Owl" Teaching Award. He is a Fellow of the IEEE and a 2011 EURASIP Fellow, was a Distinguished Lecturer (2013-2014) of the IEEE Information Theory Society and an Erwin Schrödinger Fellow (1999-2001) of the Austrian National Science Foundation (FWF), was included in the 2014 Thomson Reuters List of Highly Cited Researchers in Computer Science, was the 2016 Padovani Lecturer of the IEEE Information Theory Society, and received a 2021 Rothschild Fellowship from the Isaac Newton Institute for Mathematical Sciences, Cambridge University, UK. He served as an associate editor of the IEEE Transactions on Information Theory, the IEEE Transactions on Signal Processing, the IEEE Transactions on Wireless Communications, and the EURASIP Journal on Applied Signal Processing. He was editor-in-chief of the IEEE Transactions on Information Theory from 2010 to 2013 and served on the editorial boards of the IEEE Signal Processing Magazine, "Foundations and Trends in Communications and Information Theory", and "Foundations and Trends in Networking". He was TPC co-chair of the 2008 IEEE International Symposium on Information Theory and the 2016 IEEE Information Theory Workshop and served on the Board of Governors of the IEEE Information Theory Society. Since 2008 he has been a delegate of the president of ETH Zurich for faculty appointments.