May 25, 2020
Today we’re joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley.
Our main focus in the conversation is Joseph’s paper “Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers,” which explores compute-efficient training strategies as a function of model size.
We discuss the two main problems the paper addresses: 1) How can we rapidly iterate on variations in architecture? 2) Does making models bigger actually improve training efficiency? We also discuss the parallels between computer vision and NLP tasks, and how he characterizes both “larger” and “faster” in the paper.
Check out the complete show notes for this episode at twimlai.com/talk/378.