
Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers and tech-savvy business and IT leaders.

Hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader.

May 25, 2020

Today we’re joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley. 

Our main focus in the conversation is Joseph’s paper “Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers,” which explores compute-efficient training strategies based on model size.

We discuss the two main problems being solved: 1) How can we rapidly iterate on variations in architecture? And 2) If we make models bigger, does that really improve efficiency? We also discuss the parallels between computer vision and NLP tasks, and how he characterizes both “larger” and “faster” in the paper.

Check out the complete show notes for this episode at