About this Event
Higher-Order Calibration: Decomposing Uncertainty with Formal Guarantees
When machine learning models make predictions, understanding why they're uncertain is just as important as measuring how uncertain they are. Existing rigorous uncertainty measures, such as calibration, can accurately assess the total amount of uncertainty in a model's predictions, but they cannot distinguish between uncertainty that's inherent to the problem (aleatoric uncertainty, e.g., classifying a genuinely ambiguous image) and uncertainty due to the model's lack of knowledge (epistemic uncertainty, e.g., encountering a type of data it hasn't seen before). This distinction is crucial for improving models: aleatoric uncertainty tells you the problem is fundamentally hard, while epistemic uncertainty suggests you need more or better training data.

In this talk, I'll introduce higher-order calibration, a principled method that decomposes predictive uncertainty into these aleatoric and epistemic components with formal guarantees. Our approach works by leveraging "k-snapshots" — examples where we have multiple independent labels for the same input — and, to our knowledge, provides the first rigorous uncertainty decomposition method that makes no assumptions about the underlying data distribution. I'll demonstrate how this framework not only gives us more informative uncertainty estimates but also provides a principled evaluation framework for modern ML approaches like ensembles and Bayesian neural networks.
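To make the decomposition concrete, here is a toy sketch of the general idea: with multiple independent labels per input ("k-snapshots"), the aleatoric component can be estimated directly from label disagreement, and the epistemic component falls out as the gap between the model's total predictive uncertainty and that aleatoric floor. This is a naive plug-in illustration under assumed binary labels and a hypothetical predictor, not the talk's higher-order calibration procedure or its formal guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(p, eps=1e-12):
    """Shannon entropy along the last axis (natural log)."""
    p = np.clip(p, eps, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

# Toy setup (assumption: binary labels; not the talk's construction).
# Each input x_i has a true label probability q_i; a "k-snapshot" is
# k independent labels drawn for the same input.
n, k = 2000, 50
q = rng.uniform(0.0, 1.0, size=n)                  # true P(y=1 | x_i)
snapshots = rng.binomial(1, q[:, None], size=(n, k))

# A hypothetical, deliberately hedging "model" that shrinks toward 0.5,
# so its predictions carry extra (epistemic-style) uncertainty.
pred = 0.5 + 0.5 * (q - 0.5)

# Naive plug-in decomposition, averaged over inputs:
#   total     = entropy of the model's predictive distribution
#   aleatoric = entropy of the empirical label distribution (k snapshots)
#   epistemic = total - aleatoric
p_hat = snapshots.mean(axis=1)
aleatoric = entropy(np.stack([1 - p_hat, p_hat], axis=-1)).mean()
total = entropy(np.stack([1 - pred, pred], axis=-1)).mean()
epistemic = total - aleatoric

print(f"total={total:.3f} aleatoric={aleatoric:.3f} epistemic={epistemic:.3f}")
```

Note that this plug-in estimate of aleatoric entropy is biased downward for small k, and the epistemic term can even go negative for an overconfident model — exactly the kind of failure that motivates a method with distribution-free guarantees.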