
Monday, 12 May 2025
Time Speaker Title Resources
09:15 to 09:30 Prof. Rajesh Gopakumar (ICTS, Bengaluru, India) Welcome remarks

TBA

09:30 to 11:00 Prasad Raghavendra (University of California, Berkeley, USA) Using Markov Chains before they mix.

Markov chains are among the most popular sampling algorithms both in theory and practice. There is a vast theory on understanding the mixing times of Markov chains. But what if the Markov chain does not mix fast? Can we still use such Markov chains in downstream applications of sampling, and what theoretical guarantees can we show about these chains? In this talk, we will define a notion of "locally stationary measure" -- an analogue of local optima in optimization. We will see some generic methods to analyze the structure of distributions that non-mixing Markov chains sample from, along with applications to finding large independent sets in graphs, and finding planted cuts in random graphs. Finally, we will conclude the talk with a set of open questions on locally stationary measures. Based on joint work with Kuikui Liu, Prasad Raghavendra, Amit Rajaraman and David X Wu.
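As a toy illustration of using a chain without waiting for mixing (my own sketch, not from the talk): run Glauber dynamics for the hardcore model on a small graph and simply keep the largest independent set the chain ever visits. The graph, the fugacity `lam`, and the step count are all illustrative choices.

```python
import random

def glauber_independent_set(adj, lam=2.0, steps=20000, seed=0):
    """Heat-bath Glauber dynamics for the hardcore model on a graph given
    as an adjacency list; returns the largest independent set visited.
    The sets seen along the way are usable even if the chain never mixes."""
    rng = random.Random(seed)
    n = len(adj)
    in_set = [False] * n
    best = set()
    for _ in range(steps):
        v = rng.randrange(n)
        if any(in_set[u] for u in adj[v]):
            in_set[v] = False            # v is blocked by an occupied neighbor
        else:
            in_set[v] = rng.random() < lam / (1 + lam)
        current = {u for u in range(n) if in_set[u]}
        if len(current) > len(best):
            best = current
    return best

# A 6-cycle: its maximum independent sets have size 3.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
S = glauber_independent_set(adj)
assert all(v not in adj[u] for u in S for v in S)   # S is independent
```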

11:30 to 12:30 Prasad Raghavendra (University of California, Berkeley, USA) Using Markov Chains before they mix.


14:15 to 15:45 Pravesh Kothari (Princeton University, New Jersey, USA) Spectral Refutations and Their Applications to Algorithms and Combinatorics

I will present a method to reduce extremal combinatorial problems to establishing the unsatisfiability of k-sparse linear equations mod 2 (aka k-XOR formulas) with a limited amount of randomness. This latter task is then accomplished by bounding the spectral norm of certain "Kikuchi" matrices built from the k-XOR formulas. In these talks, I will discuss a couple of applications of this method from the following list. 

1. Proving the hypergraph Moore bound (Feige's 2008 conjecture) -- the optimal trade-off between the number of equations in a system of k-sparse linear equations modulo 2 and the size of the smallest linearly dependent subset. This theorem generalizes the famous irregular Moore bound of Alon, Hoory, and Linial (2002) for graphs (equivalently, 2-sparse linear equations mod 2).

2. Proving a cubic lower bound on 3-query locally decodable codes (LDCs), improving on a quadratic lower bound of Kerenidis and de Wolf (2004), and its generalization to q-query locally decodable codes for all odd q.

3. Proving an exponential lower bound on linear 3-query locally correctable codes (LCCs). This result establishes a sharp separation between 3-query LCCs and 3-query LDCs, which are known to admit constructions of sub-exponential length. It is also the first super-polynomial lower bound for >2-query local codes.

Time permitting, I may also discuss applications to strengthening Szemer\'edi's theorem -- establishing the minimal size of a random subset S of the integers such that every dense subset of the integers contains a 3-term arithmetic progression with common difference in S -- and the resolution of Hamada's 1970 conjecture on the algebraic rank of binary 4-designs.

I will include pointers to the many open questions and directions in which meaningful progress seems within reach for researchers who become interested in some of these topics.
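To make the spectral-refutation template concrete in its simplest case, here is a hedged sketch of my own for k = 2 (ordinary 2-XOR), where the "Kikuchi" machinery degenerates to a signed adjacency matrix; the instance size and the brute-force check are purely illustrative.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, m = 10, 40

# Random 2-XOR instance: x_i + x_j = b (mod 2), encoded over {-1,+1}.
eqs = []
for _ in range(m):
    i, j = rng.choice(n, size=2, replace=False)
    eqs.append((i, j, rng.integers(2)))

# Signed adjacency matrix: A[i,j] accumulates (-1)^b over equations on {i,j}.
A = np.zeros((n, n))
for i, j, b in eqs:
    s = (-1.0) ** b
    A[i, j] += s
    A[j, i] += s

# For y in {-1,+1}^n, #satisfied = m/2 + y^T A y / 4, hence
# max_y #satisfied <= m/2 + n * lambda_max(A) / 4: a spectral refutation.
bound = m / 2 + n * np.linalg.eigvalsh(A).max() / 4

# Brute force over all 2^n assignments to confirm the bound on this instance.
best = 0.0
for y in itertools.product([-1, 1], repeat=n):
    y = np.array(y)
    best = max(best, m / 2 + y @ A @ y / 4)
assert best <= bound + 1e-9
```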

 

16:15 to 17:15 Pravesh Kothari (Princeton University, New Jersey, USA) Spectral Refutations and Their Applications to Algorithms and Combinatorics

Tuesday, 13 May 2025
Time Speaker Title Resources
09:30 to 11:00 Pravesh Kothari (Princeton University, New Jersey, USA) The Proofs to Algorithms Method in Algorithm Design

I will present a method developed roughly over the past decade and a half that reduces efficient algorithm design to finding "low-degree sum-of-squares" certificates -- thus proofs -- of uniqueness (or, more generally, "list uniqueness") of near-optimal solutions in input instances. This is a principled way of designing and analyzing a semidefinite programming relaxation + rounding algorithm for your target problem. This technique turns out to be a powerful tool in algorithm design.

In this tutorial, I will introduce this technique and cover special cases of a couple of recent important applications. The first comes from a recent renaissance in efficient high-dimensional robust statistical estimation, where the proofs-to-algorithms method played a central role in the eventual resolution of the robust Gaussian mixture learning problem (a question dating back to Pearson in 1894, with a concrete version due to Vempala in 2010). The second will be drawn from combinatorial optimization. It will focus on finding planted cliques in the semirandom model, answering a question dating back to Feige and Kilian (2001) and raised again more recently by Feige (2019) and Steinhardt (2018).

Both applications are glimpses of a rich research area in which young researchers may find interesting directions for further research.
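A minimal example of the kind of certificate the method searches for (a toy of my own, not from the tutorial): to certify $x \le 1$ from the constraint $x^2 \le 1$, one exhibits the degree-2 identity

```latex
% Toy sum-of-squares certificate: from the constraint x^2 <= 1,
% certify x <= 1 via the polynomial identity
\[
  1 - x \;=\; \tfrac{1}{2}\,(1-x)^2 \;+\; \tfrac{1}{2}\,(1 - x^2),
\]
% a square plus a nonnegative multiple of the constraint 1 - x^2 >= 0.
```

An SDP solver can search for such representations automatically, which is what makes "proofs" algorithmically useful.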

 

11:30 to 12:30 Pravesh Kothari (Princeton University, New Jersey, USA) The Proofs to Algorithms Method in Algorithm Design


14:15 to 15:45 C. Seshadhri (University of California, Santa Cruz, USA) The long path to \sqrt{d} monotonicity testers

Since the early days of property testing, the problem of monotonicity testing has been a central problem of study. Despite the simplicity of the problem, the question has led to a (still continuing) flurry of papers over the past two decades. A long-standing open problem has been to determine the non-adaptive complexity of monotonicity testing for Boolean functions on hypergrids.

This talk is about the (almost) resolution of this question, via \sqrt{d}-query "path testers". The path to these results is through a beautiful theory of "directed isoperimetry", showing that classic isoperimetric theorems on the Boolean hypercube extend to the directed setting. This fact is surprising, since directed graphs/random walks are often ill-behaved and rarely yield a nice theory. These directed theorems provide an analysis of directed random walks on product domains, yielding optimal monotonicity testers.

I will present some of the main tools used in these results, and try to provide an intuitive explanation of directed isoperimetric theorems.
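For orientation, the classical baseline that the \sqrt{d}-query path testers improve upon is the single-edge tester; here is a minimal sketch of my own (assuming Boolean functions given as Python callables on bit tuples), not the path tester of the talk.

```python
import random

def edge_tester(f, n, trials=200, seed=7):
    """Non-adaptive edge tester for monotonicity of f: {0,1}^n -> {0,1}:
    sample a random hypercube edge and reject on any violated pair."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = [rng.randrange(2) for _ in range(n)]
        i = rng.randrange(n)
        lo, hi = x[:], x[:]
        lo[i], hi[i] = 0, 1              # the two endpoints of an edge in direction i
        if f(tuple(lo)) > f(tuple(hi)):
            return False                 # explicit violation of monotonicity
    return True

n = 4
majority = lambda x: int(sum(x) * 2 > n)     # monotone
parity = lambda x: sum(x) % 2                # far from monotone

assert edge_tester(majority, n)      # monotone functions always pass
assert not edge_tester(parity, n)    # parity is caught with high probability
```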

 

16:15 to 17:15 C. Seshadhri (University of California, Santa Cruz, USA) The long path to \sqrt{d} monotonicity testers


Wednesday, 14 May 2025
Time Speaker Title Resources
09:30 to 11:00 Santosh Vempala (Georgia Institute of Technology, Atlanta, USA) The Localization Method for Proving High-Dimensional Inequalities

We review the localization method, pioneered by Lov\'asz and Simonovits (1993) and developed substantially by Eldan (2012), to prove inequalities in high dimension. At its heart, the method uses a sequence of transformations to convert an arbitrary instance to a highly structured one (often even one-dimensional). We will work out some illustrative examples.
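The statement driving the classical method, as I recall it from Lov\'asz-Simonovits (stated informally; see the original paper for the precise regularity assumptions):

```latex
% Localization Lemma (Lov\'asz--Simonovits, 1993), informal statement:
% if $f, g$ are integrable functions on $\mathbb{R}^n$ with
\[
  \int_{\mathbb{R}^n} f \, dx > 0
  \quad\text{and}\quad
  \int_{\mathbb{R}^n} g \, dx > 0,
\]
% then there exist points $a, b \in \mathbb{R}^n$ and a linear function
% $\ell : [0,1] \to \mathbb{R}_{\ge 0}$ such that
\[
  \int_0^1 \ell(t)^{\,n-1} f\bigl((1-t)a + t b\bigr)\, dt > 0
  \quad\text{and}\quad
  \int_0^1 \ell(t)^{\,n-1} g\bigl((1-t)a + t b\bigr)\, dt > 0 .
\]
% Thus an n-dimensional integral inequality is reduced to a
% one-dimensional statement over "needles" with a polynomial weight.
```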

11:30 to 12:30 Santosh Vempala (Georgia Institute of Technology, Atlanta, USA) The Localization Method for Proving High-Dimensional Inequalities


14:15 to 15:45 Jelani Osei Nelson (University of California, Berkeley, USA) Streaming algorithms: a tutorial

Streaming algorithms make one pass over a massive dataset, and should answer some queries on the data while maintaining a memory footprint sublinear in the data size. We show non-trivial streaming algorithms, and lower bounds, for computing various statistics of data streams (counts, heavy hitters, and more) as well as for graph problems.
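A concrete example of a one-pass, sublinear-memory algorithm of the kind surveyed here is the Misra-Gries heavy-hitters summary; a minimal sketch (the stream and the parameter k are illustrative choices of mine):

```python
def misra_gries(stream, k):
    """One-pass heavy hitters with k-1 counters: every element occurring
    more than len(stream)/k times survives in the summary, and each kept
    count undercounts the true frequency by at most len(stream)/k."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            for y in list(counters):     # decrement all counters, dropping zeros
                counters[y] -= 1
                if counters[y] == 0:
                    del counters[y]
    return counters

stream = ['a'] * 50 + ['b'] * 30 + ['c', 'd'] * 10   # length 100
summary = misra_gries(stream, k=4)                   # threshold 100/4 = 25
assert 'a' in summary and 'b' in summary             # both exceed the threshold
```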
 

16:15 to 17:15 Jelani Nelson (University of California, Berkeley, USA) Streaming algorithms: a tutorial


Thursday, 15 May 2025
Time Speaker Title Resources
09:00 to 10:00 Siqi Liu (Institute for Advanced Study, Princeton, USA) Detection and recovery of latent geometry in random graphs (Online)

In recent years, random graph models with latent geometric structures have received increasing attention. These models typically involve sampling random points from a metric space, followed by independently adding edges with probabilities that depend on the distances between corresponding point pairs. A central computational challenge is to detect the underlying geometry and recover the latent coordinates of the vertices based solely on the observed graph. Unlike classical random graph models, geometric models exhibit richer structural properties, such as correlations between edges. These features make them more realistic representations of real-world networks and data. However, our current understanding of the information-theoretic and computational thresholds for detection in these models remains limited. In this talk, we will survey known algorithmic results and computational hardness findings for several random geometric graph models. We will also highlight open directions for future research.
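A simple instance of geometry detection (my own illustration, not necessarily an algorithm from the talk): latent geometry correlates edges and inflates the triangle count, so comparing triangles against an Erdos-Renyi baseline of the same empirical density already separates the two models at this density.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 120, 0.2

def triangle_count(A):
    # trace(A^3) counts each triangle 6 times (3 vertices x 2 orientations)
    return int(np.trace(A @ A @ A) // 6)

# Random geometric graph: n points in the unit square, edge iff distance < r.
P = rng.random((n, 2))
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
A_geo = ((D < r) & (D > 0)).astype(float)

# Erdos-Renyi graph with the same empirical edge density.
p = A_geo.sum() / (n * (n - 1))
U = rng.random((n, n))
A_er = np.triu(U < p, 1).astype(float)
A_er = A_er + A_er.T

# Latent geometry inflates the triangle count far beyond the ER baseline.
assert triangle_count(A_geo) > 2 * triangle_count(A_er)
```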

10:15 to 11:15 Yogeshwaran D (Indian Statistical Institute, Bengaluru, India) Noise stability and sensitivity in continuum percolation

We look at the stability and sensitivity of planar percolation models generated by a Poisson point process under re-sampling dynamics. Noise stability refers to whether certain global percolation events remain unchanged when a small fraction of points is re-sampled, while noise sensitivity means these events become nearly independent even when only an arbitrarily small fraction of points is re-sampled.

To analyze these properties, one wants to estimate the chaos coefficients in the Wiener-Itô chaos expansion of Poisson functionals, the analogue of the Fourier-Walsh expansion for functionals of random bits. Motivated by substantial progress in the case of random bits, we introduce two tools to estimate the chaos coefficients. The first approach is via stopping sets, which serve as a continuum analogue of randomized algorithms; the second uses pivotal and spectral samples, which provide sharper bounds.

We illustrate these methods using two well-known models: Poisson Boolean percolation, where unit balls are placed at Poisson points, and Voronoi percolation, where Voronoi cells based on Poisson points are randomly retained. Our focus will be on sharp noise sensitivity or stability of crossing events, specifically whether a large rectangle is traversed by a connected component of the percolation model.

The talk is based on joint projects with Chinmoy Bhattacharjee, Guenter Last, and Giovanni Peccati.
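The crossing events in question can be simulated directly. The following sketch of mine checks whether the union of radius-1/2 disks centered at sampled points connects the left edge of a rectangle to the right edge; all parameters are illustrative, and the Poisson point count is approximated by its mean for brevity.

```python
from collections import deque
import random

def crosses(points, radius, width):
    """Does the union of disks of the given radius centered at `points`
    connect the left edge (x=0) to the right edge (x=width)?"""
    n = len(points)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            if dx * dx + dy * dy <= (2 * radius) ** 2:   # disks overlap
                adj[i].append(j)
                adj[j].append(i)
    left = [i for i, (x, _) in enumerate(points) if x <= radius]
    right = {i for i, (x, _) in enumerate(points) if x >= width - radius}
    seen, queue = set(left), deque(left)
    while queue:                                          # BFS from the left edge
        i = queue.popleft()
        if i in right:
            return True
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                queue.append(j)
    return False

# Poisson point process of intensity lam on a 10 x 5 rectangle
# (point count approximated by its mean for simplicity).
rng = random.Random(3)
lam, width, height = 2.0, 10.0, 5.0
n = max(1, int(lam * width * height))
points = [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n)]
print(crosses(points, radius=0.5, width=width))
```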

 

11:30 to 12:30 Purvi Gupta (Indian Institute of Science, Bengaluru, India) Determining depth using the Hilbert metric on convex bodies

The objective of the polytope membership problem is to preprocess a given convex polytope $K\subset\mathbb R^d$ into a data structure that efficiently decides whether a given query point $q\in\mathbb R^d$ is in $K$ or not. More efficient algorithms can be designed if one considers the approximate polytope membership problem, where the membership data structure is allowed to answer incorrectly for points lying $\varepsilon$-close to the boundary of the polytope, where $\varepsilon>0$ is a fixed error. Recently, efficient data structures with optimal storage space and query time were constructed for this problem using more shape-sensitive objects such as Macbeath regions and the Hilbert metric (Arya-da Fonseca-Mount, 2017; Abdelkader-Mount, 2018, 2024).

In this talk, we discuss a more refined version of the polytope membership problem. Given a convex body $K\subset\mathbb R^d$, the depth of a query point $q\in\mathbb R^d$ is the maximum value $\delta\in[0,1)$ such that $q$ lies outside all those half-spaces that contain $\delta$ proportion of the volume of $K$. We will discuss how the problem of determining the approximate depth of a query point can be converted into an approximate membership problem using the notion of convex floating bodies from convex geometry. We will then discuss the role of the Hilbert metric in the construction of a data structure to answer such queries. This is joint work with A. Narayanan.
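A Monte Carlo proxy for the depth notion described above (my own sketch, not the data structure of the talk: it replaces the volume of $K$ inside a half-space by the fraction of a uniform sample, i.e., a Tukey-style estimate over random directions):

```python
import numpy as np

def approx_depth(q, sample, directions=500, seed=0):
    """Approximate the depth of query point q with respect to a convex body,
    represented here by a uniform sample of points from the body: over random
    unit directions u, find the halfspace through q containing the least mass."""
    rng = np.random.default_rng(seed)
    d = sample.shape[1]
    U = rng.normal(size=(directions, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    proj = (sample - q) @ U.T                  # component of each point along each u
    return (proj >= 0).mean(axis=0).min()      # least mass over sampled halfspaces

rng = np.random.default_rng(1)
K = rng.random((5000, 2))                      # uniform sample of the unit square
center = np.array([0.5, 0.5])
corner = np.array([0.95, 0.95])
assert approx_depth(center, K) > approx_depth(corner, K)
```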

 

14:15 to 15:15 - Short talks + Open Problem Session

TBA

15:15 to 16:15 - Poster Session

TBA

16:15 to 17:15 Nisheeth Vishnoi (Yale University, New Haven, USA) Sampling, Privacy, and Spectral Geometry: Insights from Low-Rank Approximation

This talk explores how problems in private optimization -- specifically, low-rank matrix approximation -- give rise to novel tools and results in sampling and statistical physics. I will present two recent advances:

Sampling from Harish-Chandra–Itzykson–Zuber (HCIZ) Distributions via Private Optimization:
We introduce an efficient algorithm for computing private low-rank approximations and show how its structure enables efficient sampling from HCIZ measures, which are central to mathematical physics and random matrix theory.

Spectral Sampling and Utility of the Gaussian Mechanism:
We provide a new analysis of the Gaussian Mechanism for differential privacy through the lens of Dyson Brownian motion, yielding refined spectral sampling guarantees and new bounds on eigenvalue gaps in random matrices.

These results illustrate how sampling tasks arising from privacy constraints can lead to powerful connections between random matrix theory, optimization, sampling, and statistical physics.
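A minimal sketch of the Gaussian Mechanism in this setting (illustrative only, and not the HCIZ sampler of the talk; in a real deployment the noise scale `sigma` would be calibrated to the sensitivity and privacy budget, which I omit): perturb the covariance with symmetric Gaussian noise, then read off the top-k eigenvectors.

```python
import numpy as np

def private_top_k(X, k, sigma, seed=0):
    """Gaussian mechanism for low-rank approximation: perturb the covariance
    with a symmetric Gaussian matrix (a GOE-style draw), then return the
    top-k eigenvectors of the noisy matrix."""
    rng = np.random.default_rng(seed)
    C = X.T @ X
    G = rng.normal(scale=sigma, size=C.shape)
    noisy = C + (G + G.T) / 2                  # symmetric noise keeps eigvecs real
    vals, vecs = np.linalg.eigh(noisy)
    return vecs[:, np.argsort(vals)[::-1][:k]]

rng = np.random.default_rng(2)
# Data with two dominant directions (coordinates 0 and 1).
X = rng.normal(size=(200, 10)) @ np.diag([5, 4] + [0.1] * 8)
V = private_top_k(X, k=2, sigma=1.0)
assert V.shape == (10, 2)
assert np.allclose(V.T @ V, np.eye(2), atol=1e-8)   # orthonormal columns
```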

 

17:15 to 17:45 - Discussion + Posters
Friday, 16 May 2025
Time Speaker Title Resources
09:00 to 10:00 Mitali Bafna (Massachusetts Institute of Technology, Cambridge, USA) Efficient PCPs from HDX
The theory of probabilistically checkable proofs (PCPs) shows how to encode a proof for any theorem into a format where the theorem's correctness can be verified by making only a constant number of queries to the proof. The PCP Theorem [ALMSS] is a fundamental result in computer science with far-reaching consequences in hardness of approximation, cryptography, and cloud computing. A PCP has two important parameters: 1) the size of the encoding, and 2) soundness, which is the probability that the verifier accepts an incorrect proof, both of which we wish to minimize.

In 2005, Dinur gave a surprisingly elementary and purely combinatorial proof of the PCP theorem that relies only on tools such as graph expansion, while also giving the first construction of 2-query PCPs with quasi-linear size and constant soundness (close to 1). Our work improves upon Dinur's PCP and constructs 2-query, quasi-linear size PCPs with arbitrarily small constant soundness. As a direct consequence, assuming the exponential time hypothesis, we get that no approximation algorithm for 3-SAT can achieve an approximation ratio significantly better than 7/8 in time 2^{n/polylog n}.

In this talk, I will introduce PCPs and discuss the components that go into our proof. This talk is based on joint work with Dor Minzer and Nikhil Vyas, with an appendix by Zhiwei Yun.

 

 

10:30 to 11:30 Amit Kumar (Indian Institute of Technology, Delhi, India) Tight Results for Online Convex Paging

The online convex paging problem models a broad class of cost functions for the classical paging problem. In particular, it naturally captures fairness constraints: e.g., that no specific page (or group of pages) suffers an ``unfairly'' high number of evictions, by considering $\ell_p$ norms of eviction vectors for $p>1$. The case of the $\ell_\infty$ norm has also been of special interest, and is called min-max paging.

We give tight upper and lower bounds for the convex paging problem for a broad class of convex functions. Prior to our work, only fractional algorithms were known for this general setting. Moreover, our general result also improves on prior work for special cases of the problem. For example, it implies that the randomized competitive ratio of the min-max paging problem is $\Theta(\log k\log n)$; this improves both the upper bound and the lower bound given in prior work by logarithmic factors. It also shows that the randomized and deterministic competitive ratios for $\ell_p$-norm paging are $\Theta(p\log k)$ and $\Theta(pk)$ respectively.

This is joint work with Anupam Gupta and Debmalya Panigrahi.
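To see why the choice of norm matters, a small illustration of my own (not from the talk): simulate LRU to produce a per-page eviction vector, then compare two eviction vectors of equal total cost under $\ell_1$ versus $\ell_\infty$.

```python
from collections import OrderedDict

def lru_eviction_vector(requests, k):
    """Simulate LRU with k slots; return per-page eviction counts."""
    cache, ev = OrderedDict(), {}
    for p in requests:
        if p in cache:
            cache.move_to_end(p)       # p becomes most recently used
            continue
        if len(cache) == k:
            victim, _ = cache.popitem(last=False)   # evict least recently used
            ev[victim] = ev.get(victim, 0) + 1
        cache[p] = None
    return ev

def lp(vec, p):
    return max(vec) if p == float('inf') else sum(v ** p for v in vec) ** (1 / p)

# Cyclic requests over 5 pages with 4 slots: every request after warmup misses.
ev = lru_eviction_vector(['a', 'b', 'c', 'd', 'e'] * 10, k=4)
assert sum(ev.values()) == 46          # 50 requests, 4 warm-up insertions

# Same total evictions, very different fairness:
even, skewed = [2, 2, 2, 2], [8, 0, 0, 0]
assert lp(even, 1) == lp(skewed, 1)                        # classical cost agrees
assert lp(even, float('inf')) < lp(skewed, float('inf'))   # min-max cost does not
```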

12:00 to 13:00 Hariharan Narayanan (Tata Institute of Fundamental Research, Mumbai, India) On the randomised Horn problem and the surface tension of hives

Given two nonincreasing n-tuples of real numbers l_n, m_n, the Horn problem asks for a description of all nonincreasing n-tuples of real numbers u_n such that there exist Hermitian matrices X_n, Y_n and Z_n respectively with these spectra such that X_n+Y_n=Z_n. There is also a randomized version of this problem where X_n and Y_n are sampled uniformly at random from orbits of Hermitian matrices arising from the conjugacy action by elements of the unitary group. One then asks for a description of the probability measure of the spectrum of the sum Z_n. Both the original Horn problem and its randomized version have solutions using the hives introduced by Knutson and Tao. In an asymptotic sense, as n tends to infinity, large deviations for the randomized Horn problem were given in joint work with Sheffield in terms of a notion of surface tension for hives. In this talk, we discuss upper and lower bounds on this surface tension function. We also obtain a closed-form expression for the total entropy of a surface tension minimizing continuum hive with boundary conditions arising from GUE eigenspectra. Finally, we give several empirical results for random hives and lozenge tilings arising from an application of the octahedron recurrence for large n and a numerical approximation of the surface tension function. This is a joint work with Aalok Gangopadhyay.