
Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers and tech-savvy business and IT leaders.

Hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader.

Jun 29, 2020

Today we’re joined by Hima Lakkaraju, an Assistant Professor at Harvard University with appointments in both the Business School and the Department of Computer Science.

At CVPR, Hima was a keynote speaker at the Fair, Data-Efficient and Trusted Computer Vision Workshop, where she spoke on Understanding the Perils of Black Box Explanations. Hima talks us through her presentation, which focuses on the unreliability of perturbation-based explainability techniques such as LIME and SHAP, how attacks on these methods can be carried out, and what those attacks look like. We also discuss people’s tendency to trust computer systems and their outputs, her thoughts on collaborator (and former TWIML guest) Cynthia Rudin’s argument that we shouldn’t use black-box models, and much more.
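For listeners unfamiliar with the techniques mentioned above, the following is a minimal sketch (not code from the episode) of how a perturbation-based explainer like LIME is typically used, assuming scikit-learn and the lime package are installed:

```python
# Minimal illustrative sketch of a perturbation-based explanation with LIME.
# This is an assumption-based example for context, not material from the talk.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train a simple "black box" model.
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME explains a single prediction by perturbing the input and fitting
# a local surrogate model to the black box's responses on those perturbations.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # per-feature weights for this one prediction
```

Because these explanations are built only from the model's behavior on perturbed inputs, a model that detects and responds differently to perturbed samples can mislead them, which is the class of attack Hima discusses.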

For the complete show notes, visit twimlai.com/talk/387. For our continuing CVPR coverage, visit twimlai.com/cvpr20.