Google I/O 2019 | Cloud TPU Pods: AI Supercomputing for Large Machine Learning Problems

The Cloud Tensor Processing Unit (TPU) is an ASIC designed by Google for neural network processing. TPUs feature a domain-specific architecture built to accelerate TensorFlow training and prediction workloads, delivering performance benefits for machine learning in production. Learn the technical details of Cloud TPU and Cloud TPU Pods, and the new TensorFlow features that enable large-scale model parallelism for deep learning training.
Speakers: Kaz Sato and Martin Gorner

Session ID: TF6510
