Format Abstractions for Sparse Tensor Algebra Compilation
Speaker: Stephen Chou, MIT
Date: Monday, July 20, 2020
Time: 2:00 PM to 3:00 PM (note: all times are in the Eastern Time Zone)
Public: Yes
Location: https://mit.zoom.us/meeting/register/tJUrdOqopj8uHdO4gUyVMnfglOFEqIye_Je0 (Registration required)
Event Type: Seminar
Room Description: https://mit.zoom.us/meeting/register/tJUrdOqopj8uHdO4gUyVMnfglOFEqIye_Je0 (Registration required)
Host: Julian Shun, MIT CSAIL
Contact: Julian Shun, jshun@mit.edu, lindalynch@csail.mit.edu
Relevant URL: http://fast-code.csail.mit.edu/
Speaker URL: http://people.csail.mit.edu/s3chou/
Reminders to: fast-code-seminar@lists.csail.mit.edu, seminars@csail.mit.edu, pl@csail.mit.edu, commit@lists.csail.mit.edu
Reminder Subject: TALK: Format Abstractions for Sparse Tensor Algebra Compilation
Tensor algebra is a powerful tool for computing with multidimensional data and has applications in data analytics, machine learning, science, and engineering. Practical applications often work with sparse tensors, and there exists a wide variety of formats for representing such tensors in memory, each suited to specific types of applications and data. However, different sparse tensor formats use disparate data structures to store nonzero elements, so computing with tensors stored in different formats requires vastly dissimilar code that is complex and tedious to implement. Hand-optimized sparse linear and tensor algebra libraries thus typically support only a limited set of operations on tensors stored in a limited set of formats.
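As a rough illustration of why the storage schemes differ (this sketch is not from the talk, and the array names are conventional rather than tied to any particular library), the same small matrix stored in CSR and in COO uses different index structures, so even a simple traversal must be coded differently for each format:

```cpp
// The 3x4 sparse matrix
//     [ 5 0 0 2 ]
//     [ 0 0 3 0 ]
//     [ 0 4 0 0 ]
// stored in two common sparse formats.
#include <cstdio>
#include <vector>

int main() {
  // CSR: row pointers delimit each row's slice of the column/value arrays.
  std::vector<int>    csr_rowptr = {0, 2, 3, 4};
  std::vector<int>    csr_col    = {0, 3, 2, 1};
  std::vector<double> csr_val    = {5, 2, 3, 4};

  // COO: every nonzero stores both of its coordinates explicitly.
  std::vector<int>    coo_row = {0, 0, 1, 2};
  std::vector<int>    coo_col = {0, 3, 2, 1};
  std::vector<double> coo_val = {5, 2, 3, 4};

  // Even a plain enumeration of nonzeros looks different per format.
  for (int i = 0; i < 3; ++i)
    for (int p = csr_rowptr[i]; p < csr_rowptr[i + 1]; ++p)
      std::printf("CSR A(%d,%d) = %g\n", i, csr_col[p], csr_val[p]);

  for (std::size_t p = 0; p < coo_val.size(); ++p)
    std::printf("COO A(%d,%d) = %g\n", coo_row[p], coo_col[p], coo_val[p]);
}
```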
In this talk, I will show how to build a sparse tensor algebra compiler that supports efficiently computing with tensors stored in disparate formats. I will present an interface that describes per-dimension formats in terms of their capabilities and properties, and I will show how implementations of the interface compose to form commonly used tensor formats such as CSR, COO, DIA, ELL, CSF, and countless others. I will then show how, with these implementations at hand, a compiler can generate code to efficiently compute with tensors stored in arbitrary combinations of the aforementioned formats as well as to efficiently convert tensors between those formats. We have implemented our technique in the TACO tensor algebra compiler, which is the first compiler to generate code that computes any basic tensor algebra expression with sparse tensors stored in arbitrary formats. Our technique generates code that has performance competitive with equivalent hand-optimized implementations.
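To make the composition idea concrete, here is a minimal sketch of describing a format dimension by dimension. It is our own simplification, not TACO's actual interface, which additionally attaches capabilities and properties to each per-dimension format; the enum and type names below are hypothetical.

```cpp
// A tensor format is modeled as a list of per-dimension level types;
// familiar formats arise from different compositions of the same levels.
#include <cstdio>
#include <vector>

enum class Level { Dense, Compressed, Singleton };

using Format = std::vector<Level>;

int main() {
  // Dense rows + compressed columns ~ CSR.
  Format csr = {Level::Dense, Level::Compressed};
  // Compressed rows (with duplicate coordinates allowed) + singleton columns ~ COO.
  Format coo = {Level::Compressed, Level::Singleton};
  // Compressed levels in every dimension ~ CSF for an order-3 tensor.
  Format csf = {Level::Compressed, Level::Compressed, Level::Compressed};

  std::printf("CSR has %zu levels, COO has %zu, CSF has %zu\n",
              csr.size(), coo.size(), csf.size());
}
```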
Bio: Stephen Chou is a fifth-year PhD student at MIT CSAIL, working with Prof. Saman Amarasinghe. His research interests include domain-specific programming languages and compilers for performance computing. He received a distinguished paper award at OOPSLA 2017 for his work on the TACO sparse tensor algebra compiler.
Research Areas:
Algorithms & Theory, Programming Languages & Software
Impact Areas:
Big Data
Created by Julian J. Shun on Wednesday, July 08, 2020 at 5:35 PM.