The ability to estimate and control tail risks, besides being an integral part of quantitative risk management, is central to running operations requiring high service levels and to ML-driven cyber-physical systems with high-reliability specifications. Despite this significance, scalable algorithmic approaches have remained elusive, owing to the rarity with which relevant risky samples are observed and the critical role experts play in devising variance reduction techniques based on instance-specific large deviations studies. Our goal in this talk is to examine whether such tailored variance reduction benefits can instead be achieved by instance-agnostic algorithms capable of scaling well across a multitude of tail estimation and optimisation tasks. To this end, we identify an elementary transformation whose push-forward automatically induces efficient importance sampling distributions across a variety of models by replicating the concentration properties observed in less rare samples. This obviates the need to explicitly identify a good change of measure, thereby overcoming the primary bottleneck in the use of importance sampling beyond highly stylized models. Our novel approach is guided by a large deviations principle which brings out the phenomenon of self-similarity of zero-variance distributions. Being nonparametric, this self-similarity is manifest in a rich set of objectives modeled with tools such as linear programs, piecewise linear/quadratic objectives, and feature maps specified in terms of neural networks, together with a spectrum of light- and heavy-tailed multivariate distributions.
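To make the change-of-measure idea concrete, below is a minimal, generic importance sampling sketch (Python/NumPy) for estimating a Gaussian tail probability via an exponentially tilted proposal. This illustrates only the standard technique whose hand-crafted design the talk seeks to automate, not the instance-agnostic scheme itself; the threshold, sample size, and proposal here are purely illustrative.

```python
# Generic importance sampling illustration (not the talk's method):
# estimate P(X > u) for X ~ N(0, 1) by sampling from the mean-shifted
# proposal N(u, 1) (an exponential tilt) and reweighting by the
# likelihood ratio between target and proposal densities.
import numpy as np

rng = np.random.default_rng(0)
u, n = 4.0, 10_000  # illustrative tail threshold and sample size

# Naive Monte Carlo: with u = 4, almost no samples land in the tail,
# so the estimate is typically zero or extremely noisy.
x = rng.standard_normal(n)
naive = np.mean(x > u)

# Importance sampling: draw from N(u, 1), reweight by phi(y) / phi(y - u).
y = rng.normal(loc=u, scale=1.0, size=n)
log_w = -0.5 * y**2 + 0.5 * (y - u) ** 2  # log of the density ratio
est = np.mean((y > u) * np.exp(log_w))

print(f"naive MC: {naive:.2e}, importance sampling: {est:.2e}")
```

The tilted proposal concentrates samples in the rare region, so the indicator is frequently one and the reweighted estimator has far lower variance; choosing such a proposal for each new model is precisely the expert bottleneck the talk addresses.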
Zoom link: https://us02web.zoom.us/j/81379290349
Meeting ID: 813 7929 0349
For details of past and upcoming seminars, please click the link.