Keynotes
Mike Hutton (Google, USA): Accelerating Deep Learning with TPUs
Abstract: Google introduced the first Tensor Processing Unit (TPU) in a 2017 ISCA paper. The TPU is a domain-specific, coarse-grained VLIW processor with dedicated matrix-multiply units, designed to accelerate machine learning workloads over large-scale data. Multiple generations later, current TPUs support both inference and training, deliver massive compute power (training systems exceeding 100 petaflops), and drive all of the internal machine learning efforts at Google. This presentation will give an overview of the TPU and its evolution, including some of the design principles and decisions that shaped the architecture.
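To make the kind of workload those matrix-multiply units target concrete, below is a minimal JAX sketch (not taken from the talk): a JIT-compiled dense layer whose core operation is a large matrix multiply, compiled through XLA for whatever backend is available. The shapes and the bfloat16 choice are illustrative assumptions only; on a TPU host the compiled program would map onto the matrix units, while on CPU it simply runs on the default backend.

```python
# Illustrative sketch (not from the keynote): a JIT-compiled dense layer,
# whose matrix multiply is the operation TPU matrix units accelerate.
import jax
import jax.numpy as jnp

@jax.jit
def dense_layer(x, w):
    # One large matrix multiply followed by a nonlinearity.
    return jax.nn.relu(x @ w)

# Illustrative shapes and dtype; bfloat16 is the TPU-native format.
key = jax.random.PRNGKey(0)
kx, kw = jax.random.split(key)
x = jax.random.normal(kx, (1024, 4096), dtype=jnp.bfloat16)
w = jax.random.normal(kw, (4096, 4096), dtype=jnp.bfloat16)

y = dense_layer(x, w)          # compiled by XLA for the available backend
print(y.shape, jax.devices())  # lists the devices (e.g. TPU cores) in use
```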
Bio: Mike Hutton received his BMath in Computer Science in 1989 and his MMath in Computer Science in 1991, both from the University of Waterloo, and his Ph.D. in Computer Science in 1997 from the University of Toronto. Across 20 years at Altera, Tabula and Intel, he worked on FPGA architecture, CAD and applications. He is the author of 30 published papers and 100+ US patents in these areas, has served on multiple FPGA and CAD program committees, and is a former Associate Editor for IEEE Transactions on VLSI. In 2018 he joined Google to lead a group focused on performance modeling for the Tensor Processing Unit (TPU) architecture.
Cecilia Metra (University of Bologna, Italy): Safety, Reliability and Resiliency Challenges to Enable Highly Autonomous Intelligent Systems
Michaela Blott (Xilinx Research, Ireland): Innovative FPGA Approaches to AI
Abstract: Deep Learning is penetrating an ever-increasing number of applications, such as communications. However, the associated computational complexity and memory demands are outpacing Moore's Law's ability to deliver the necessary performance scaling. Specialization of hardware architectures is one of the most successful approaches to addressing these sky-high requirements, and with FPGAs this specialization can take its most creative form. In this talk, we will look at some of the new emerging applications and discuss how various forms of innovative architectural specialization with FPGAs impact flexibility, performance and efficiency.
Bio: Michaela Blott is a Fellow at Xilinx Research in Dublin, Ireland, where she heads a team of international scientists driving research into new application domains for Xilinx devices, such as machine learning. She earned a PhD from Trinity College Dublin and her Master's degree from the University of Kaiserslautern, Germany, and brings over 25 years of experience in leading-edge computer architecture and advanced FPGA and board design, gained in research institutions (ETH Zurich and Bell Labs) and development organizations. She is highly active in the research community as an industrial advisor to numerous EU projects, serves on technical program committees, and most recently received the Women in Tech Award 2019.