Computational Building Blocks for Machine Learning on Graphs
Speaker: Ariful Azad, Indiana University Bloomington
Date: Monday, January 25, 2021
Time: 2:00 PM to 3:00 PM (all times are in the Eastern Time Zone)
Public: Yes
Location: https://mit.zoom.us/meeting/register/tJUrdOqopj8uHdO4gUyVMnfglOFEqIye_Je0 (registration required if you haven't registered for this series before)
Event Type: Seminar
Host: Julian Shun, MIT CSAIL
Contact: Julian Shun, jshun@mit.edu, lindalynch@csail.mit.edu
Relevant URL: http://fast-code.csail.mit.edu/
Speaker URL: https://arifulazad.com/
Reminders to: seminars@csail.mit.edu, fast-code-seminar@lists.csail.mit.edu, pl@csail.mit.edu, commit@lists.csail.mit.edu, mitml@mit.edu
Reminder Subject: TALK: Computational Building Blocks for Machine Learning on Graphs
Abstract: A graph is a beautiful mathematical concept that can concisely model any interacting system, such as protein interactions in organisms, chemical bonds in compounds, and friendships on social networks. Thus, machine learning on graphs for predicting edges, node features, and community structures plays an important role in computational biology, social science, neurology, and computational chemistry.
In this talk, I will discuss the computational building blocks needed to design graph machine learning algorithms, including graph embedding, graph neural networks (GNNs), and graph visualization. Computationally, the major steps in these algorithms can be mapped to sampled dense-dense matrix multiplication (SDDMM) and sparse-dense matrix multiplication (SpMM). We can even fuse these two operations into a single kernel, called FusedMM, that captures almost all computational patterns needed by popular graph embedding and GNN approaches. I will then discuss parallel algorithms for these kernels. Since the performance of these linear-algebraic operations is bound by memory bandwidth, we develop algorithms that minimize data movement from main memory and utilize caches and registers as much as possible. We also auto-tune these operations so that the same code performs equally well on Intel, AMD, IBM Power9, and ARM processors. The kernel-based design and tuned parallel implementations speed up end-to-end graph embedding and GNN algorithms by up to 28x over existing approaches based on the message-passing paradigm. The take-home message of this talk is that developing graph machine learning algorithms using efficient building blocks provides a cleaner interface to application developers and boosts the performance of end-to-end graph learning applications.
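For readers unfamiliar with the two kernels named in the abstract, the following is a minimal sketch in Python/SciPy of what SDDMM and SpMM compute and how they compose. The function names, the toy graph, and the simple chained "fusion" are illustrative assumptions, not the talk's implementation: a real FusedMM kernel makes a single pass over the edges rather than materializing the intermediate sparse matrix.

import numpy as np
import scipy.sparse as sp

def sddmm(adj, X, Y):
    """Sampled dense-dense matmul: evaluate (X @ Y.T) only at adj's nonzeros."""
    coo = adj.tocoo()
    # One dot product per edge, scaled by the edge weight; the output keeps
    # the sparsity pattern of adj.
    vals = np.einsum('ij,ij->i', X[coo.row], Y[coo.col]) * coo.data
    return sp.csr_matrix((vals, (coo.row, coo.col)), shape=adj.shape)

def spmm(adj, H):
    """Sparse-dense matmul: aggregate neighbor features (the core of a GNN layer)."""
    return adj @ H

def fused_sddmm_spmm(adj, X, Y, H):
    """FusedMM-style composition: edge scores via SDDMM, then aggregation via SpMM.
    Chaining the two calls only shows what the fusion computes; a fused kernel
    would avoid writing the intermediate sparse matrix to memory."""
    return spmm(sddmm(adj, X, Y), H)

# Toy example: a random 5-node weighted graph with 4-dimensional node features.
rng = np.random.default_rng(0)
adj = sp.random(5, 5, density=0.4, format='csr', random_state=0)
X, Y, H = (rng.standard_normal((5, 4)) for _ in range(3))
out = fused_sddmm_spmm(adj, X, Y, H)
print(out.shape)  # (5, 4)

Because both kernels traverse the same edge list, fusing them halves the memory traffic over the graph structure, which is why the abstract emphasizes memory bandwidth as the binding constraint.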
Bio: Dr. Ariful Azad is an Assistant Professor of Intelligent Systems Engineering in the Luddy School of Informatics, Computing, and Engineering at Indiana University (IU). Previously, he was a Research Scientist in the Computational Research Division at Lawrence Berkeley National Laboratory. Dr. Azad obtained his Ph.D. from Purdue University and his B.S. from Bangladesh University of Engineering and Technology. His research interests are in graph machine learning, sparse matrix algorithms, high-performance computing, and bioinformatics. His interdisciplinary research group strives to solve large-scale problems in genomics, earth science, and scientific computing.
IMPORTANT NOTE FOR ATTENDEES: If you have already registered for the Fast Code Seminars on Zoom since July 27, 2020, please use the Zoom link that you have received. This link will stay the same for subsequent Fast Code seminars this semester. Zoom does not recognize a second registration and will not send out the link a second time. If you have any problems with registration, please contact jshun@mit.edu and lindalynch@csail.mit.edu by 1:30 PM on the day of the seminar so that we can try to resolve it before the seminar begins.
Research Areas: Algorithms & Theory, AI & Machine Learning, Programming Languages & Software
Impact Areas: Big Data
Created by Julian J. Shun on Monday, January 18, 2021 at 11:35 PM.