Speaker: Benjamin Guedj (University College London)
Time: Tuesday, 28 April 2026, 15:00 - 16:00
Venue: Tencent Meeting, ID 388-984-701
Abstract: Generalisation is arguably one of the central problems in machine learning and foundational AI. Generalisation theory has traditionally relied on KL-based PAC-Bayesian bounds, which, despite their elegance, often obscure geometry and limit applicability. In this talk, I will present recent advances that move beyond these traditional bounds. One line of work replaces KL with Wasserstein distances, yielding high-probability bounds valid for heavy-tailed losses and leading to new, optimisable learning objectives. Another line introduces a general comparator framework, showing how optimal bounds naturally arise from convex conjugates of cumulant generating functions, unifying and extending many classical results. Together, these perspectives highlight how rethinking divergences and comparators opens new directions in both theory and practice. I will conclude by discussing links with information theory and how these ideas might shape the next generation of PAC-Bayesian learning algorithms.
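For context, a minimal sketch of the classical KL-based PAC-Bayesian bound that the talk moves beyond (McAllester's bound in Maurer's form; the notation here is illustrative, not taken from the talk): for a loss bounded in [0,1], an i.i.d. sample of size n, and a prior P chosen before seeing the data, with probability at least 1 - δ, simultaneously for all posteriors Q,

```latex
% Classical KL-based PAC-Bayes bound (McAllester/Maurer form), loss in [0,1].
% P: data-free prior; Q: any posterior; R, \hat{R}: population and empirical risk.
\mathbb{E}_{h \sim Q}\bigl[R(h)\bigr]
  \;\le\;
\mathbb{E}_{h \sim Q}\bigl[\hat{R}(h)\bigr]
  \;+\;
\sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\frac{2\sqrt{n}}{\delta}}{2n}}
```

The KL term is what the abstract's first line of work replaces with a Wasserstein distance, relaxing the boundedness requirement to cover heavy-tailed losses, and what the comparator framework generalises via convex conjugates of cumulant generating functions.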