Adaptively Sound Zero-Knowledge SNARKs for UP
Speaker:
Surya Mathialagan, MIT EECS
Date: Friday, March 15, 2024
Time: 10:30 AM to 12:00 PM (all times are Eastern Time)
Public: Yes
Location: 32-G882 Hewlett Room
Event Type: Seminar
Host: Vinod Vaikuntanathan & Yael Kalai
Contact: Megan F Farmer, mfarmer@csail.mit.edu
Abstract:
We study succinct non-interactive arguments (SNARGs) and succinct non-interactive arguments of knowledge (SNARKs) for the class UP in the reusable designated verifier model. UP is an expressive subclass of NP consisting of all NP languages where each instance has at most one witness; a designated verifier SNARG (dvSNARG) is one where verification of the SNARG proof requires a private verification key; and such a dvSNARG is reusable if soundness holds even against a malicious prover with oracle access to the (private) verification algorithm. Our main results are as follows.
1. A reusably and adaptively sound zero-knowledge (zk) dvSNARG for UP, from subexponential LWE and evasive LWE (a relatively new but popular variant of LWE). Our SNARGs achieve very short proofs of length (1 + o(1))λ bits for 2^{-λ} soundness error.
2. A generic transformation that lifts any "Sahai-Waters-like" (zk) SNARG to an adaptively sound (zk) SNARG, in the designated-verifier setting. In particular, this shows that the Sahai-Waters SNARG for NP is adaptively sound in the designated-verifier setting, assuming subexponential hardness of the underlying assumptions. The resulting SNARG proofs have length (1 + o(1))λ bits for 2^{-λ} soundness error. Our result sidesteps the Gentry-Wichs barrier for adaptive soundness by employing an exponential-time security reduction.
3. A generic transformation that lifts any adaptively sound (zk) SNARG for UP to an adaptively sound (zk) SNARK for UP, while preserving zero-knowledge. The resulting SNARK achieves the strong notion of black-box extraction. There are barriers to achieving such SNARKs for all of NP from falsifiable assumptions, so our restriction to UP is, in a sense, necessary.
Applying (3) to our SNARG for UP from evasive LWE (1), we obtain a reusably and adaptively sound designated-verifier zero-knowledge SNARK for UP from subexponential LWE and evasive LWE. Moreover, applying both (2) and (3) to the Sahai-Waters SNARG, we obtain the same result from LWE, subexponentially secure one-way functions, and subexponentially secure indistinguishability obfuscation. Both constructions have succinct proofs of size poly(λ). These are the first SNARK constructions (even in the designated-verifier setting) for a non-trivial subset of NP from (subexponentially) falsifiable assumptions.
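As a toy illustration (not part of the talk), the defining feature of UP is that each instance has at most one witness. The sketch below uses a hypothetical unique-witness relation built from perfect squares; the language itself is easy to decide, so this only illustrates the syntactic unique-witness condition, not any cryptographic hardness.

```python
# Toy unique-witness (UP-style) relation, for illustration only.
# Language L = {n : n is a perfect square}; witness w = the unique
# nonnegative integer with w*w == n. L is trivially in P, so this
# shows the "at most one witness" syntax, not a hard language.

def verify(n: int, w: int) -> bool:
    """NP-style verifier: accept iff w is a valid witness for n."""
    return w >= 0 and w * w == n

def witnesses(n: int, bound: int) -> list:
    """Enumerate all witnesses up to a bound (brute force)."""
    return [w for w in range(bound + 1) if verify(n, w)]

# Each instance has at most one witness, as UP requires.
assert witnesses(49, 49) == [7]   # exactly one witness
assert witnesses(50, 50) == []    # not in L: zero witnesses
```

A SNARK for UP, as in the talk, would replace sending the witness w with a succinct proof of knowledge of w; the unique-witness restriction is what makes black-box extraction from falsifiable assumptions possible, per result (3).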
Created by Megan F Farmer on Tuesday, February 20, 2024 at 3:19 PM.