Domain-Specific Accelerators

Speaker: Bill Dally, NVIDIA Corporation and Stanford University

Date: Monday, September 16, 2019

Time: 2:15 PM to 3:15 PM Note: all times are in the Eastern Time Zone

Public: Yes

Location: G32-449 (Patil/Kiva)

Event Type: Seminar

Room Description: G32-449 (Patil/Kiva)

Host: Song Han, MIT

Contact: Song Han, songhan@mit.edu

Relevant URL: http://fast-code.csail.mit.edu/

Speaker URL: https://research.nvidia.com/person/william-dally


Reminders to: fast-code-seminar@lists.csail.mit.edu, seminars@csail.mit.edu, pl@csail.mit.edu

Reminder Subject: TALK: Domain-Specific Accelerators

Abstract: Increasing computing performance enables new applications and greater value from computing. With the end of Moore's Law and Dennard Scaling, continued performance scaling will come primarily from specialization. Specialized hardware engines can achieve performance and efficiency 10x to 10,000x that of a CPU through specialization, parallelism, and optimized memory access. Graphics processing units are an ideal platform on which to build domain-specific accelerators. They provide very efficient, high-performance communication and memory subsystems, which are needed by all domains. Specialization is provided via "cores", such as tensor cores that accelerate deep learning or ray-tracing cores that accelerate specific applications. This talk will describe some common characteristics of domain-specific accelerators via case studies.
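As a concrete illustration of the kind of specialization the abstract mentions (not taken from the talk itself), the sketch below uses CUDA's warp-level WMMA API to issue a single 16x16x16 half-precision matrix-multiply-accumulate on a Tensor Core. The kernel name and fixed 16x16 tile size are illustrative assumptions chosen to keep the example minimal.

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp computes a 16x16 tile of C = A * B on a Tensor Core via the
// WMMA (warp matrix multiply-accumulate) intrinsics. For simplicity the
// matrices are assumed to be exactly 16x16 with leading dimension 16.
__global__ void tensor_core_mma_16x16x16(const half *a, const half *b, float *c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);               // zero the accumulator
    wmma::load_matrix_sync(a_frag, a, 16);           // load A tile (ld = 16)
    wmma::load_matrix_sync(b_frag, b, 16);           // load B tile (ld = 16)
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // D = A*B + C on Tensor Cores
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}

// Launch with one warp, e.g.: tensor_core_mma_16x16x16<<<1, 32>>>(dA, dB, dC);
```

The same specialized matrix-multiply hardware is normally reached through libraries such as cuBLAS or cuDNN; the point of the sketch is only to show that the specialization is exposed as a warp-wide operation rather than per-thread scalar arithmetic.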

Bio: Bill Dally is Chief Scientist and Senior Vice President of Research at NVIDIA Corporation and a Professor (Research) and former chair of Computer Science at Stanford University. Bill is currently working on developing hardware and software to accelerate demanding applications including machine learning, bioinformatics, and logical inference. He has a history of designing innovative and efficient experimental computing systems. While at Bell Labs, Bill contributed to the BELLMAC32 microprocessor and designed the MARS hardware accelerator. At Caltech, he designed the MOSSIM Simulation Engine and the Torus Routing Chip, which pioneered wormhole routing and virtual-channel flow control. At the Massachusetts Institute of Technology, his group built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanisms from programming models and demonstrated very low overhead synchronization and communication mechanisms. At Stanford University, his group developed the Imagine processor, which introduced the concepts of stream processing and partitioned register organizations, the Merrimac supercomputer, which led to GPU computing, and the ELM low-power processor. Bill is a Member of the National Academy of Engineering, a Fellow of the IEEE, a Fellow of the ACM, and a Fellow of the American Academy of Arts and Sciences. He has received the ACM Eckert-Mauchly Award, the IEEE Seymour Cray Award, the ACM Maurice Wilkes Award, the IEEE-CS Charles Babbage Award, and the IPSJ FUNAI Achievement Award. He currently leads projects on computer architecture, network architecture, circuit design, and programming systems. He has published over 250 papers in these areas, holds over 160 issued patents, and is an author of the textbooks Digital Design: A Systems Approach, Digital Systems Engineering, and Principles and Practices of Interconnection Networks.

Livestream: https://www.youtube.com/channel/UCYs2iUgksAhgoidZwEAimmg/live

Research Areas:
AI & Machine Learning, Computer Architecture

Impact Areas:

See other events that are part of the Fast Code Seminar 2019.

Created by Julian J. Shun on Tuesday, August 13, 2019 at 5:12 PM.