Statistics Colloquium: Shivani Agarwal (University of Pennsylvania)

Date: 

Monday, November 1, 2021, 12:00pm to 1:00pm

Location: 

Zoom - please contact emilie_campanelli@fas.harvard.edu for more information

Title:

Surrogate Loss Functions in Machine Learning: What are the Fundamental Design Principles?

Abstract:

Surrogate loss functions are widely used in machine learning. In particular, for many machine learning problems, the ideal objective or loss function is computationally hard to optimize, and therefore one instead works with a (usually convex) surrogate loss that can be optimized efficiently. What are the fundamental design principles for such surrogate losses, and what are the associated statistical behaviors of the resulting algorithms?
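To make the surrogate-loss idea concrete, here is a minimal sketch (our illustration, not material from the talk): the 0-1 classification loss is piecewise constant and hard to optimize directly, while the hinge loss is a standard convex surrogate that a simple subgradient method can minimize. All data, variable names, and step sizes below are invented for illustration.

```python
import numpy as np

# Toy binary classification data with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200))

def zero_one_risk(w):
    # The "ideal" objective: fraction of misclassified points.
    # Piecewise constant and non-convex, so hard to optimize directly.
    return np.mean(np.sign(X @ w) != y)

def hinge_risk(w):
    # Convex surrogate: hinge loss max(0, 1 - y * <w, x>).
    return np.mean(np.maximum(0.0, 1.0 - y * (X @ w)))

# Subgradient descent on the convex surrogate; the 0-1 risk
# improves as a byproduct of minimizing the surrogate.
w = np.zeros(2)
for t in range(1, 501):
    margins = y * (X @ w)
    grad = -((margins < 1) * y) @ X / len(y)  # hinge subgradient
    w -= grad / np.sqrt(t)

print(f"0-1 risk: {zero_one_risk(w):.3f}  hinge risk: {hinge_risk(w):.3f}")
```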

This talk will provide answers to some of these questions. In particular, we will discuss the theory of convex calibrated surrogate losses, which yield statistically consistent learning algorithms for the true learning objective, and we will present fundamental principles and tools for designing such surrogate losses for a wide variety of machine learning problems. Our surrogate losses effectively decompose complex multiclass and multi-label learning problems into simpler binary learning problems, and come with corresponding decoding schemes that make the overall learning approach statistically consistent. We will also discuss strongly proper losses, which serve as fundamental primitives for deriving statistical guarantees in various learning problems, as well as connections with the field of property elicitation and with PAC learning. We will conclude with some open questions.
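As one concrete instance of such a decomposition (again an illustrative sketch of ours, not necessarily the construction presented in the talk), the classical one-vs-all reduction trains one binary scorer per class with a convex surrogate such as the logistic loss, and decodes a multiclass prediction by taking an argmax over the scores. All function and variable names below are hypothetical.

```python
import numpy as np

def train_one_vs_all(X, y, n_classes, lr=0.1, epochs=200):
    """One binary logistic-loss problem per class; the logistic loss
    is a convex surrogate calibrated for the binary 0-1 loss."""
    n, d = X.shape
    W = np.zeros((n_classes, d))
    for k in range(n_classes):
        t = np.where(y == k, 1.0, -1.0)  # binary targets for class k
        for _ in range(epochs):
            z = np.clip(t * (X @ W[k]), -30, 30)
            p = 1.0 / (1.0 + np.exp(z))          # sigma(-t * <w, x>)
            W[k] -= lr * (-(t * p) @ X) / n      # logistic-loss gradient
    return W

def decode(W, X):
    # Decoding scheme: predict the class whose binary scorer is largest.
    return np.argmax(X @ W.T, axis=1)

# Tiny usage example on synthetic 3-class data.
rng = np.random.default_rng(0)
means = np.array([[2.0, 0.0], [-2.0, 1.0], [0.0, -2.0]])
y = rng.integers(0, 3, size=300)
X = means[y] + rng.normal(size=(300, 2))
W = train_one_vs_all(X, y, n_classes=3)
print("train accuracy:", np.mean(decode(W, X) == y))
```

The argmax step plays the role of the decoding scheme mentioned in the abstract: the statistical consistency of the overall approach depends on pairing the binary surrogate with an appropriate decoder.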