EECS Special Seminar: Organizing Computation for High-Performance Graphics and Visual Computing
Speaker: Jonathan Ragan-Kelley, University of California, Berkeley
Date: Wednesday, April 10, 2019
Time: 4:00 PM to 5:00 PM
Event Type: Seminar
Room Description: Patil/Kiva
Host: Antonio Torralba, MIT-CSAIL
Contact: Mary McDavitt, 617-253-9620, email@example.com
Speaker URL: None
TALK: Organizing Computation for High-Performance Graphics and Visual Computing
In the face of declining returns to Moore’s law, future visual computing applications—from photorealistic real-time rendering, to 4D light field cameras, to pervasive sensing with deep learning—still demand orders of magnitude more computation than we currently have. From data centers to mobile devices, performance and energy scaling is limited by locality (the distance over which data has to move, e.g., from nearby caches, from faraway main memory, or across networks) and parallelism. Because of this, I argue that we should think of the performance and efficiency of an application as determined not just by the algorithm and the hardware on which it runs, but critically also by the organization of its computations and data. For algorithms with the same complexity—even the exact same set of arithmetic operations—the order and granularity of execution, and the placement of data, can easily change performance by an order of magnitude because of locality and parallelism. To extract the full potential of our machines, we must treat the organization of computation as a first-class concern, while working across all levels, from algorithms and data structures, to programming languages, to hardware.
This talk will present facets of this philosophy in systems I have built for image processing, 3D graphics, and machine learning. I will show that, for the data-parallel pipelines common in these data-intensive applications, the possible organizations of computations and data, and the effect they have on performance, are driven by the fundamental dependencies in a given problem. Then I will show how, by exploiting domain knowledge to define structured spaces of possible organizations and dependencies, we can enable radically simpler high-performance programs, smarter compilers, and more efficient hardware. Finally, I will show how we use these structured spaces to unlock the power of machine learning for optimizing systems.
Jonathan Ragan-Kelley is an assistant professor of Computer Science at UC Berkeley. He works on high-efficiency visual computing, including systems, compilers, and architectures for image processing and vision, 3D graphics, and machine learning. He is a recipient of the NSF CAREER award, the William A. Martin and Firestone thesis prizes, and multiple CACM Research Highlights. He was previously a visiting researcher at Google, a postdoc at Stanford, and earned his PhD from MIT in 2014, where he built the Halide language. Halide is used throughout industry to process billions of images every day, from data centers to billions of smartphones. Before Halide, Jonathan built the Lightspeed preview system, which was used on over a dozen films at Industrial Light & Magic and was a finalist for an Academy technical achievement award, and he worked in GPU architecture, compilers, and research at NVIDIA, Intel, and ATI.
Created by Mary McDavitt on Wednesday, April 03, 2019 at 1:48 PM.