Chris Lattner: Compilers, LLVM, Swift, TPU, and ML Accelerators | Artificial Intelligence Podcast

By: Lex Fridman


Uploaded on 05/13/2019

Chris Lattner is a senior director at Google working on several projects, including CPU, GPU, and TPU accelerators for TensorFlow, Swift for TensorFlow, and all kinds of machine learning compiler magic going on behind the scenes. He is one of the top experts in the world on compiler technologies, which means he deeply understands the intricacies of how hardware and software come together to create efficient code. He created the LLVM compiler infrastructure project and the Clang compiler. He led major engineering efforts at Apple, including the creation of the Swift programming language. He also briefly spent time at Tesla as VP of Autopilot Software during the transition from Autopilot hardware 1 to hardware 2, when Tesla essentially started from scratch to build an in-house software infrastructure for Autopilot. This conversation is part of the Artificial Intelligence podcast at MIT and beyond. The audio podcast version is available at https://lexfridman.com/ai/

INFO:
Podcast website: https://lexfridman.com/ai
Course website: https://deeplearning.mit.edu
YouTube Playlist: http://bit.ly/2EcbaKf

OUTLINE:
0:00 - Introduction
1:30 - First program, BASIC, Pascal, C
4:20 - Compilers, LLVM, Clang
37:30 - Apple - LLVM, Objective-C, Swift
45:30 - Google - Swift, Swift for TensorFlow, compilers, Colab
57:32 - TPU & TensorFlow, hardware/software co-design
1:00:30 - MLIR (Multi-Level Intermediate Representation) framework
1:02:40 - Open sourcing of TensorFlow
1:05:10 - Tesla - transition from HW1 to HW2
1:07:24 - Elon Musk and time at Tesla
1:08:45 - Working hard
1:10:40 - Dragons

CONNECT:
- Subscribe to this YouTube channel
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman

Comments (1):

By mynegation    2019-09-11

> Aren't there already many ML libraries for compiled languages out there or are you referring to something else?

Something else. If your program is a well-defined set of operations (basically matrix operations, for ML), you can optimize the whole specific program, instead of calling a bunch of individually optimized functions from a library, and target specific hardware (e.g. a given GPU). See, for example, Chris Lattner's interview [1] and his presentations on MLIR, or the proceedings of the C4ML workshop [2].
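
To make the whole-program idea concrete, here is a minimal sketch (my own illustration, not from the interview, assuming JAX is installed) using JAX, where jax.jit hands the entire traced function to the XLA compiler so it can fuse the matrix operations and specialize them for the target device:

```python
# Minimal sketch of whole-program compilation with JAX/XLA.
# jax.jit traces the whole function and compiles it as one unit,
# so matmul + bias-add + relu can be fused and specialized for
# the available backend (CPU/GPU/TPU), rather than dispatched as
# three separate pre-optimized library kernels.
import jax
import jax.numpy as jnp

@jax.jit
def layer(x, w, b):
    # The compiler sees all three ops together and can fuse them.
    return jnp.maximum(jnp.dot(x, w) + b, 0.0)

key = jax.random.PRNGKey(0)
kx, kw = jax.random.split(key)
x = jax.random.normal(kx, (8, 16))
w = jax.random.normal(kw, (16, 4))
b = jnp.zeros(4)
print(layer(x, w, b).shape)  # (8, 4)
```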

Applications in distributed systems, off the top of my head: protobufs, compilation of Erlang and Elixir to BEAM, Dask.
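
Dask is a good small example of the same "see the whole program" idea in a distributed setting (a minimal sketch, assuming Dask is installed): dask.delayed builds the full task graph before anything runs, so the scheduler can parallelize independent tasks instead of executing one call at a time.

```python
# Minimal sketch: dask.delayed constructs a task graph lazily,
# giving the scheduler a view of the entire computation.
import dask

@dask.delayed
def inc(x):
    return x + 1

@dask.delayed
def add(a, b):
    return a + b

# Nothing executes here; this only builds the graph.
total = add(inc(1), inc(2))

# compute() runs the graph; the two inc() tasks are independent
# and can be scheduled in parallel.
print(total.compute())  # 5
```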

[1] https://youtu.be/yCd3CzGSte8
[2] http://c4ml.org/
