Characterizing and Optimizing End-to-End Systems for Private Inference
Speaker: Karthik Garimella
Date: Wednesday, December 13, 2023
Time: 4:00 PM to 5:00 PM (all times are in the Eastern Time Zone)
Public: Yes
Location: G-882, Hewlett Room
Event Type: Seminar
Host: Alexandra Henzinger
Reminders to: seminars@csail.mit.edu
Reminder Subject: TALK: Characterizing and Optimizing End-to-End Systems for Private Inference
In two-party machine learning prediction services, the client's goal is to query a remote server's trained machine learning model to perform neural network inference in some application domain. However, sensitive information can be obtained during this process by either the client or the server, leading to potential collection, unauthorized secondary use, and inappropriate access to personal information. These security concerns have given rise to Private Inference (PI), in which both the client's personal data and the server's trained model are kept confidential. State-of-the-art PI protocols consist of a pre-processing or offline phase and an online phase that combine several cryptographic primitives: Homomorphic Encryption (HE), Secret Sharing (SS), Garbled Circuits (GC), and Oblivious Transfer (OT). Despite the need and recent performance improvements, PI remains largely arcane today and is too slow for practical use.
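One of the primitives named above, additive Secret Sharing (SS), is easy to see in miniature. The sketch below is only an illustration of the general technique, not any protocol from the talk: the modulus, function names, and values are all assumptions made for this example.

```python
import numpy as np

P = 2**31 - 1  # prime modulus for a toy field (an assumption for this sketch)

def share(x, rng):
    """Split x into two additive shares with x = (s0 + s1) mod P."""
    s0 = rng.integers(0, P, size=x.shape, dtype=np.int64)
    s1 = (x - s0) % P
    return s0, s1

def reconstruct(s0, s1):
    """Recombine the two shares to recover the value mod P."""
    return (s0 + s1) % P

rng = np.random.default_rng(0)
x = np.array([3, 1, 4], dtype=np.int64)  # e.g., a client's private input
y = np.array([1, 5, 9], dtype=np.int64)  # another shared value

x0, x1 = share(x, rng)
y0, y1 = share(y, rng)

# Additions are "free": each party adds its own shares locally, with no
# communication. Multiplications are what require offline pre-processing
# (e.g., Beaver triples), which is where the offline/online split comes from.
z0, z1 = (x0 + y0) % P, (x1 + y1) % P
assert np.array_equal(reconstruct(z0, z1), (x + y) % P)
```

Each share on its own is uniformly random, so neither party learns the other's value; only the recombined sum is meaningful.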
This paper addresses PI's shortcomings with a detailed characterization of a standard high-performance protocol to build foundational knowledge and intuition in the systems community. Our characterization pinpoints all sources of inefficiency -- compute, communication, and storage. In contrast to prior work, we consider inference request arrival rates rather than studying individual inferences in isolation, and we find that the pre-processing phase cannot be ignored: it is often incurred online because there is insufficient downtime to hide pre-compute latency. Finally, we leverage insights from our characterization and propose three optimizations to address the storage (Client-Garbler), computation (layer-parallel HE), and communication (wireless slot allocation) overheads.
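The arrival-rate argument lends itself to a back-of-the-envelope check. The sketch below uses made-up latencies (the paper's characterization measures real ones) to show why pre-compute leaks into the online path once requests arrive faster than the offline material can be regenerated.

```python
# Illustrative latencies only; these numbers are assumptions, not measurements.
T_ONLINE_S = 2.0    # assumed online latency per private inference
T_OFFLINE_S = 10.0  # assumed pre-processing latency per inference

def precompute_hidden(arrival_rate_hz: float) -> bool:
    """Pre-compute stays off the critical path only if the idle time
    between requests covers regenerating the offline material."""
    gap_s = 1.0 / arrival_rate_hz  # mean inter-arrival time
    idle_s = gap_s - T_ONLINE_S    # downtime left after serving a request
    return idle_s >= T_OFFLINE_S

for rate in (0.01, 0.05, 0.2):     # requests per second
    print(f"{rate:.2f} req/s -> pre-compute hidden: {precompute_hidden(rate)}")
```

With these toy numbers, pre-compute is hidden at 0.01 and 0.05 req/s but not at 0.2 req/s, where each request's offline phase must be paid online.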
Created by Megan F Farmer on Friday, December 08, 2023 at 10:27 AM.