
Machine Learning Models by and for Humans

Machine learning (ML) can turn large amounts of data into valuable insights, but in many domains, extracting those insights requires interactions with multiple different types of users. For example, ML models used in healthcare may interact with doctors and patients, among other users. Doctors often have significant decision-making expertise and may want to understand how an ML model makes predictions so they can decide whether to trust it. Patients, however, are generally not expert decision makers, but have significant knowledge about their own lives that they may want reflected in predictions that affect their healthcare. Standard approaches to training ML models do not account for the human context in which they will be used, or effectively leverage these human strengths. How much more useful could these ML models be if we trained them to collaborate effectively with these different types of users?

In this talk, I describe my work in interpretable and human-centered machine learning to develop and evaluate novel ML methods that account for different types of user context. I will describe a human-in-the-loop approach I developed to train ML models with actual human feedback quantifying how useful their explanations are, which requires fundamentally distinct methodology that accounts for how much more valuable human time is than computer time. I will then describe a computational method I developed to incorporate insights from non-expert users into ML predictions that affect them. I will end with future directions for developing interpretable and human-centered machine learning systems that work with and for diverse users.
