MS Teams: Department of Computing and Software, “Candidate Lectures and Seminars” Channel: https://teams.microsoft.com/l/channel/19%3a85e66d3f9fcc40fd99de7aa631e3d616%40thread.tacv2/Candidate%2520Lectures%2520and%2520Seminars?groupId=a2e98537-757f-4791-b72f-2cf4d7459f28&tenantId=44376307-b429-42ad-8c25-28cd496f4772
The Role of Over-parameterization in Machine Learning – A Function Space Perspective
The conventional wisdom in machine learning theory, which favors simple models, misses the bigger picture, especially for over-parameterized neural networks (NNs), where the number of parameters is much larger than the number of training data points. Our goal is to explore the mystery behind NNs from a theoretical perspective.
In this talk, I will discuss the role of over-parameterization in neural networks in order to theoretically understand why they can perform well. First, I will talk about the robustness of neural networks, as affected by architecture and initialization, from a function space theory view. This aims to answer a fundamental question: does over-parameterization in NNs help or hurt robustness? Second, I will talk about why deep reinforcement learning works well for function approximation. Potential future directions and related topics, e.g., trustworthy ML, will also be briefly discussed.
Fanghui Liu is currently a Postdoctoral Fellow at École Polytechnique Fédérale de Lausanne (EPFL), and was previously a postdoctoral researcher at ESAT-STADIUS, KU Leuven. He obtained his PhD degree from the Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, in 2019. His research interests include machine learning, kernel methods, and learning theory, with publications in JMLR, TPAMI, and NeurIPS, and tutorials presented at CVPR 2023 and ICASSP 2023.