A profound mystery of our time is explaining the phenomenon of training neural nets, i.e., "deep learning", which is the closest we have ever come to achieving "artificial intelligence". Trying to reason about these successes lands us in a plethora of extremely challenging mathematical questions. In this talk, we will give a brief introduction to neural nets and the various themes of our work on provable deep learning.
We will see glimpses of our results on the structure of neural functions, loss functions for autoencoders, and algorithms for exact neural training. Next, we will explain our recent results on training a ReLU gate under mild distributional conditions.
Lastly, we will review the recently introduced concept of "local elasticity" of a learning process and demonstrate how it appears to reveal certain universal phases of neural training. We will end by delineating various exciting future research programs in this theme of macroscopic phenomenology of neural nets.
Zoom link: https://us06web.zoom.us/j/89780423911?pwd=UWxUWjJkbHczRGVQeG41Z2RvMmVadz09
Meeting ID: 897 8042 3911