Published: Sept. 3, 2019

Bo Waggoner
Department of Computer Science, University of Colorado Boulder

Toward a Characterization of Loss Functions for Distribution Learning


A common machine-learning task is to learn a probability distribution over a very large domain. Examples include natural language processing and generative adversarial networks. But how should the learned distribution be evaluated? A natural approach is to draw test samples and score them with a loss function. However, no loss function, not even the popular log loss, can satisfy a set of natural axioms (inspired by the literature on evaluating human forecasters). We show that this impossibility can be overturned, and that many simple loss functions can come with strong usefulness guarantees, by using "one weird trick": calibration, a classical requirement from forecasting. These results imply that requiring learning algorithms to be calibrated, a kind of regularization, allows us to provably evaluate them while choosing a loss function tailored to the setting.

Joint work with Nika Haghtalab and Cameron Musco; https://arxiv.org/abs/1906.02652
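To make the evaluation setup concrete, below is a minimal sketch (not from the paper) of the two ingredients mentioned in the abstract: scoring a learned distribution with log loss on test samples drawn from the true distribution, and a simple binned calibration diagnostic. The true distribution p, the learned distribution q, the domain size, the sample size, and the binning are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): a "true" distribution p and a
# "learned" distribution q over a small discrete domain.
domain_size = 10
p = rng.dirichlet(np.ones(domain_size))   # true distribution
q = rng.dirichlet(np.ones(domain_size))   # learned distribution

# Sample-based evaluation: draw test samples from p and score q with log loss.
samples = rng.choice(domain_size, size=5000, p=p)
log_loss = -np.mean(np.log(q[samples]))
print(f"empirical log loss of q: {log_loss:.4f}")

# A simple binned calibration check: among outcomes to which q assigns
# probability near some value v, the empirical frequency of those outcomes
# should also be near v on average.
empirical_freq = np.bincount(samples, minlength=domain_size) / len(samples)
bins = np.linspace(0.0, q.max() + 1e-12, 6)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (q >= lo) & (q < hi)
    if mask.any():
        print(f"q in [{lo:.3f}, {hi:.3f}): "
              f"mean predicted {q[mask].mean():.3f}, "
              f"mean empirical {empirical_freq[mask].mean():.3f}")

In this sketch, a well-calibrated q would show the mean predicted and mean empirical values agreeing within each bin; the log loss then serves as the tailored evaluation score the abstract describes.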

Speaker Bio: Bo Waggoner is a new Assistant Professor of Computer Science at CU Boulder working at the interface of game theory, machine learning, and theoretical CS. Prior to Colorado, he held postdoc positions at Microsoft Research NYC and the University of Pennsylvania, and received his PhD from Harvard in 2016. https://www.bowaggoner.com