Sparse Matrices Beyond Solvers: Graphs, Biology, and Machine Learning
Aydın Buluç, Lawrence Berkeley National Laboratory and UC Berkeley
Date: Monday, June 22, 2020
Time: 2:00 PM to 3:00 PM (all times are in the Eastern Time Zone)
Location: https://mit.zoom.us/j/536883569 (please email firstname.lastname@example.org for password, and use your full name as your user name in Zoom)
Event Type: Seminar
Host: Julian Shun, MIT CSAIL
Contact: Julian Shun, firstname.lastname@example.org
Relevant URL: http://fast-code.csail.mit.edu/
Speaker URL: https://people.eecs.berkeley.edu/~aydin/index.html
TALK: Sparse Matrices Beyond Solvers: Graphs, Biology, and Machine Learning
Solving systems of linear equations has traditionally driven research in sparse matrix computation for decades. Direct and iterative solvers, together with finite element computations, still account for the primary use cases of sparse matrix data structures and algorithms. These sparse "solvers" often serve as the workhorse of many algorithms in spectral graph theory and traditional machine learning.
In this talk, I will highlight some of the emerging use cases of sparse matrices outside the domain of solvers. These include graph computations outside the spectral realm, computational biology, and emerging techniques in machine learning. A recurring theme in all these novel use cases is the concept of a semiring over which the sparse matrix computations are carried out. By overloading scalar addition and multiplication with the operators of a semiring, we can attack a much richer set of computational problems using the same sparse data structures and algorithms. This approach has been formalized by the GraphBLAS effort.
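To make the semiring idea concrete, here is a minimal sketch (not the GraphBLAS API; all names are hypothetical) of a sparse matrix-vector multiply parameterized by a semiring. With ordinary (+, ×) it is standard SpMV; swapping in (min, +) makes each multiply relax every edge once, as in Bellman-Ford shortest paths.

```python
# A minimal sketch: sparse matrix-vector multiply over a user-supplied
# semiring (add, mul, identity). Not the GraphBLAS API; names are illustrative.

INF = float("inf")

def semiring_spmv(rows, x, add, mul, identity):
    """Compute y = A (.) x, where A is stored row-wise as
    {row: [(col, val), ...]} and (add, mul, identity) define the semiring."""
    y = {}
    for i, entries in rows.items():
        acc = identity
        for j, v in entries:
            acc = add(acc, mul(v, x.get(j, identity)))
        y[i] = acc
    return y

# A small weighted directed graph, stored row-wise:
# row i holds incoming edges, i.e. A[i][j] = w(j -> i).
A = {
    1: [(0, 5.0)],
    2: [(0, 2.0), (1, 1.0)],
    3: [(2, 4.0)],
}

# Distances after zero hops: only the source (vertex 0) is reachable.
dist = {0: 0.0}

# Repeated (min, +) multiplies extend shortest paths by one hop each time.
for _ in range(3):
    step = semiring_spmv(A, dist, add=min,
                         mul=lambda a, b: a + b, identity=INF)
    # Keep the best distance found so far (element-wise min with previous).
    for v, d in step.items():
        dist[v] = min(dist.get(v, INF), d)

# dist now holds single-source shortest path lengths from vertex 0.
```

The same `semiring_spmv` with a Boolean (or, and) semiring performs one step of breadth-first search, which is one reason the GraphBLAS effort centers on semiring-parameterized sparse primitives.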
I will illustrate one example application from each problem domain, together with the most computationally demanding sparse matrix primitive required for its efficient execution. I will also briefly cover available software packages that implement these sparse matrix primitives efficiently on various architectures.
Aydın Buluç is a Staff Scientist and Principal Investigator at the Lawrence Berkeley National Laboratory (LBNL) and an Adjunct Assistant Professor of EECS at UC Berkeley. His research interests include parallel computing, combinatorial scientific computing, high performance graph analysis and machine learning, sparse matrix computations, and computational biology. Previously, he was a Luis W. Alvarez postdoctoral fellow at LBNL and a visiting scientist at the Simons Institute for the Theory of Computing. He received his PhD in Computer Science from the University of California, Santa Barbara in 2010 and his BS in Computer Science and Engineering from Sabanci University, Turkey in 2005. Dr. Buluç is a recipient of the DOE Early Career Award in 2013 and the IEEE TCSC Award for Excellence for Early Career Researchers in 2015. He is a founding associate editor of the ACM Transactions on Parallel Computing.
Algorithms & Theory, AI & Machine Learning, Computational Biology, Programming Languages & Software
Created by Julian J. Shun on Monday, June 08, 2020 at 6:02 PM.