09:30 to 10:00 |
Christos H Papadimitriou (Columbia University) |
The Algorithmic Lens: How the Computational Perspective is Transforming the Sciences

Computation transforms the sciences (physical, mathematical, life or social) not just by equipping them, but mainly by providing a novel and powerful perspective which often leads to unforeseen insights. Examples abound: quantum computation provides the right forum for questioning and testing some of the most basic tenets of quantum physics, while statistical mechanics has found in the efficiency of randomized algorithms a powerful metaphor for phase transitions. In mathematics, the P vs. NP question has joined the list of the most profound and consequential problems; in economics, considerations of computational complexity revise predictions of economic behavior and affect the design of economic mechanisms such as auctions. In biology, some of the most fundamental problems, such as understanding the effectiveness of evolution and selection as well as how the Mind emerges from the Brain, can be productively recast in computational terms. My talk is a sequence of vignettes exemplifying this pattern.
|
|
|
10:00 to 10:30 |
Sourav Chatterjee (Stanford University) |
Understanding Rare Events in Graphs and Networks

The mathematical theory of large deviations attempts to understand the probabilities and consequences of rare events. In spite of the growing importance of graphs and networks in the modern world, we did not have the tools to understand large deviations in graphical structures until quite recently. I will talk about the development of this new area in probability theory and its surprising implications.
|
|
|
10:30 to 11:00 |
Sanjeev Arora (Princeton University and Institute for Advanced Study) |
Can machines learn without supervision?

This talk surveys challenges in designing algorithms for unsupervised learning, which learn from data that has not been labeled by humans. (By contrast, many recent successes of machine learning involve algorithms that learn from labeled data.) We introduce various approaches that have been tried, and focus on recent findings about two topics:
(i) learning to represent the "meaning" of text (word embeddings, sentence embeddings)
(ii) generative adversarial nets (GANs), and their limitations.
|
|
|
11:00 to 11:30 |
-- |
Tea Break |
|
|
11:30 to 12:00 |
Shri Kulkarni (Caltech Optical Observatories, California Institute of Technology) |
The Restless Universe (and the Periodic Table)

Following the Big Bang, the Universe was homogeneous in matter and energy, and barren of chemistry. It is the stars which built up the periodic table. Astronomers have now identified several classes of cosmic explosions, of which supernovae constitute the largest group. The Palomar Transient Factory was an innovative two-telescope project, and its successor, the Zwicky Transient Facility (ZTF), is a high-tech project with gigantic CCD cameras and a sophisticated software system, squarely aimed at systematically finding "blips and booms in the middle of the night". The speaker will talk about the great returns and surprises from this project: super-luminous supernovae, new classes of transients, new light on the progenitors of supernovae, detection of gamma-ray bursts by purely optical techniques, and troves of pulsating stars and binary stars. ZTF is poised to become the stepping stone for the Large Synoptic Survey Telescope.
|
|
|
12:00 to 12:30 |
Ramesh Narayan (Harvard University) |
Observing a Black Hole Up Close and Personal

During the last few decades, it has become increasingly clear that the Universe is full of black holes. There are literally millions in each galaxy. Most black holes are identified and studied through the energetic radiation emitted by hot gas orbiting the holes. However, until now, astronomers have never been able to obtain a close-up image of any black hole. This is about to change. The Event Horizon Telescope (EHT), an Earth-spanning radio interferometer, is poised to obtain the first event-horizon-scale images of Sagittarius A*, the supermassive black hole at the center of our Galaxy. EHT images are predicted to show certain geometrical distortions, which will provide new tests of the relativistic space-time near a black hole. The EHT will also provide an unprecedented close-up movie of the orbiting, radiating gas near the black hole. This will open up new opportunities for the study of high energy phenomena in hot magnetized astrophysical plasmas.
|
|
|
12:30 to 13:00 |
Roger Blandford (KIPAC, Stanford University) |
The Music of the Sphere

A very low spatial resolution, three-dimensional map of the universe today is made using Planck microwave background observations and the standard model of cosmology. This map can be improved using existing and future survey data. It should prove useful for improving the accuracy of the standard model and for testing basic features of the simplest model of inflation.
|
|
|
13:00 to 14:30 |
-- |
Lunch Break |
|
|
14:30 to 15:00 |
Jennifer Chayes (Microsoft Research) |
The Unreasonable Effectiveness of Learning Neural Nets: A Statistical Physics Perspective

During the last 20 years, equilibrium statistical physics has provided a framework to understand the hardness of computation in random versions of classic problems of computer science. Recently, we have developed a non-equilibrium statistical physics theory of why neural nets work so well in practice. In this talk, I will review the equilibrium statistical physics perspective on computability, and introduce our non-equilibrium statistical physics perspective on learning neural nets.
|
|
|
15:00 to 15:30 |
Surya Ganguli (Stanford University) |
A Theory of Neural Dimensionality, Dynamics and Measurement

While technological revolutions in neuroscience now enable us to record from ever increasing numbers of neurons, for the foreseeable future we will still only record an infinitesimal fraction of the total number of neurons in the mammalian circuits controlling complex behaviors. Nevertheless, despite operating within this extreme under-sampling limit, a wide array of statistical procedures for dimensionality reduction of multineuronal recordings uncover remarkably insightful, low dimensional neural state space dynamics whose geometry reveals how behavior and cognition emerge from neural circuits. What theoretical principles explain this remarkable success? In essence, how is it that we can understand anything about the brain while recording an infinitesimal fraction of its degrees of freedom? We develop an experimentally testable theoretical framework to answer this question. By making a novel conceptual connection between neural measurement and the theory of random projections, we derive scaling laws relating how many neurons we must record to accurately recover state space dynamics, given the complexity of the behavioral or cognitive task and the smoothness of neural dynamics. Moreover, we verify these scaling laws in the motor cortical dynamics of monkeys performing a reaching task. Overall, these results yield conceptual insights into how the complexity of neural dynamics is upper-bounded by the complexity of cognition and behavior itself.
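The intuition behind the connection to random projections can be illustrated with a toy simulation (this is a hypothetical sketch for intuition only, not the speaker's actual analysis or data): if neural activity lives on a smooth, low-dimensional trajectory embedded in a very high-dimensional population, then recording a modest random subset of dimensions approximately preserves the trajectory's pairwise geometry, in the spirit of the Johnson-Lindenstrauss lemma. All dimensions and variable names below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 10_000   # total "neurons" (ambient dimension) -- hypothetical
M = 200      # recorded "neurons" (projection dimension) -- hypothetical
T = 50       # time points along a smooth trajectory

# A smooth 3-dimensional latent trajectory embedded in N dimensions
t = np.linspace(0, 2 * np.pi, T)
latent = np.stack([np.sin(t), np.cos(t), np.sin(2 * t)], axis=1)  # (T, 3)
embedding = rng.standard_normal((3, N)) / np.sqrt(3)
X = latent @ embedding                                            # (T, N)

# Random projection: M x N Gaussian matrix, scaled so squared
# distances are preserved in expectation
P = rng.standard_normal((M, N)) / np.sqrt(M)
Y = X @ P.T                                                       # (T, M)

def pdist(Z):
    """All pairwise Euclidean distances between rows of Z."""
    d = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((d ** 2).sum(-1))

# Because the trajectory's intrinsic dimension (3) is far below M,
# pairwise distances survive the projection with small distortion.
D_full, D_proj = pdist(X), pdist(Y)
mask = ~np.eye(T, dtype=bool)          # ignore zero self-distances
distortion = np.abs(D_proj[mask] / D_full[mask] - 1)
print(f"max relative distance distortion: {distortion.max():.3f}")
```

The key point of the toy model is that the required number of recorded dimensions M scales with the intrinsic complexity of the trajectory, not with the ambient population size N, which is the flavor of the scaling laws described in the abstract.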
|
|
|
15:30 to 16:00 |
Mriganka Sur (Massachusetts Institute of Technology, USA) |
The Computational Logic of Cortical Circuits

The human brain has 80 billion neurons or brain cells organized into discrete processing systems. Each neuron connects with hundreds of other neurons via thousands of connections or synapses. Yet neurons do not connect indiscriminately: synaptic connections between specific sets of neurons create specific pathways and circuits that enable the brain's remarkable information processing capabilities and give rise to cognition. Such specificity arises during brain development, and is sharpened by plasticity and learning. Brain architectures, especially those of the cerebral cortex, are far from rigid, however. Flexible reconfigurable networks are essential for the cortical dynamics underlying cognition. Specifying the underlying principles is fundamental to understanding how the brain gives rise to the mind, and ultimately to framing how biological intelligence might be created in a future generation of machines.
|
|
|
16:00 to 16:30 |
Mukund Thattai (NCBS-TIFR) |
Molecular Archaeology: Using Genomes to Reconstruct Two Billion Years of Cellular Life

It has been exactly 50 years since Lynn Margulis (at the time, Lynn Sagan) proposed her famous endosymbiont hypothesis. It is now well established that mitochondria were once free-living bacteria with their own division machinery, which took up residence within another cell, thus creating the first true eukaryote. The physical fossil record suggests that eukaryotes emerged from prokaryotic ancestors over two billion years ago. Molecular archaeology can help uncover how this occurred. Using a new phylogenetic method with deep time resolution, we have retraced the earliest steps by which the host cell tamed the bacterial endosymbiont.
|
|
|