Generating adversarial programs

Speaker: Shashank Srikant, MIT

Date: Thursday, March 25, 2021

Time: 11:00 AM to 12:00 PM. Note: all times are in the Eastern Time Zone.

Public: Yes

Location:

Event Type: Seminar

Room Description:

Host: Srini Devadas, CSAIL

Contact: Kyle L Hogan, klhogan@csail.mit.edu

Relevant URL:

Speaker URL: None

Speaker Photo: None

Reminders to: seminars@csail.mit.edu, shash@mit.edu

Reminder Subject: TALK: Generating adversarial programs

Abstract:
Machine learning (ML) models that learn and predict properties of computer programs are increasingly being adopted and deployed. These models have demonstrated success in applications such as auto-completing code, summarizing large programs, and detecting bugs and malware in programs.

In this work, we investigate principled ways to adversarially perturb a computer program so as to fool such learned models, and thus determine their adversarial robustness. We use program obfuscations, which have conventionally been used to thwart attempts at reverse engineering programs, as adversarial perturbations. These perturbations modify a program in ways that do not alter its functionality but can be crafted to deceive an ML model when it makes a decision.
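To make this concrete, below is a minimal sketch of one such semantics-preserving perturbation: renaming a local variable in a Python program. The program's behavior is unchanged, but the surface tokens a learned model consumes are different. The transformation and the names it uses are illustrative assumptions, not the exact obfuscations from the talk.

import ast

class RenameVariable(ast.NodeTransformer):
    # Rewrite every use of the name `old` to `new`. The program's behavior
    # is unchanged; only its surface tokens differ.
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        if node.id == self.old:
            node.id = self.new
        return node

source = (
    "def average(xs):\n"
    "    total = sum(xs)\n"
    "    return total / len(xs)\n"
)
tree = ast.parse(source)
# Swap the descriptive name `total` for an adversarially chosen token.
perturbed = RenameVariable("total", "window").visit(tree)
print(ast.unparse(perturbed))  # ast.unparse requires Python 3.9+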

We provide a general formulation of an adversarial program that allows multiple obfuscation transformations to be applied to a program in any language. We develop first-order optimization algorithms to efficiently determine two key aspects: which parts of the program to transform, and what transformations to use. We show that optimizing both of these aspects is important to generating the best adversarially perturbed program. Because of the discrete nature of this problem, we also propose randomized smoothing to improve the attack loss landscape and ease optimization.
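As a rough illustration of the smoothing step, the sketch below estimates a smoothed surrogate of a non-smooth attack objective by averaging it over Gaussian perturbations of a continuous relaxation z of the discrete choices (which sites to transform, and how). The objective, names, and hyperparameters here are assumptions for illustration, not the formulation presented in the talk.

import numpy as np

rng = np.random.default_rng(0)

def attack_loss(z):
    # Stand-in for the true attack objective: in the real setting, z would be
    # rounded to a discrete choice of transformation sites and kinds, the
    # obfuscations applied, and the victim model queried.
    return float(np.abs(np.round(z) - z).sum() + np.cos(5.0 * z).sum())

def smoothed_loss(z, sigma=0.3, n_samples=64):
    # Monte Carlo estimate of E_u[attack_loss(z + u)] with u ~ N(0, sigma^2 I).
    # Averaging over noise smooths the jagged, discrete structure of the
    # objective, giving first-order optimizers a better landscape to follow.
    noise = rng.normal(0.0, sigma, size=(n_samples,) + z.shape)
    return float(np.mean([attack_loss(z + u) for u in noise]))

z = rng.uniform(-1.0, 1.0, size=8)
print(attack_loss(z), smoothed_loss(z))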

We evaluate our work on the problem of program summarization, using Python and Java programs.

We show that our best attack proposal achieves an improvement over a state-of-the-art attack-generation approach when evaluated against a seq2seq model trained on programs. We further show that our formulation is better at training models that are robust to adversarial attacks.

This is joint work with Sijia Liu, Shiyu Chang, Quanfu Fan, and Gaoyuan Zhang from the MIT-IBM Watson AI Lab, and Tamara Mitrovska and Una-May O'Reilly from MIT CSAIL. This work will appear at ICLR 2021.

Zoom:

Kyle Hogan is inviting you to a scheduled Zoom meeting.

Topic: CSAIL Security Seminar
Time: This is a recurring meeting. Meet anytime.

Join Zoom Meeting
https://mit.zoom.us/j/97527284254

Password: <3security

One tap mobile
+16465588656,,97527284254# US (New York)
+16699006833,,97527284254# US (San Jose)

Meeting ID: 975 2728 4254

US: +1 646 558 8656 or +1 669 900 6833

International Numbers: https://mit.zoom.us/u/auBvg4NEV

Join by SIP
97527284254@zoomcrc.com

Join by Skype for Business
https://mit.zoom.us/skype/97527284254

Research Areas:
Security & Cryptography

Impact Areas:

See other events that are part of the CSAIL Security Seminar Series 2021.

Created by Kyle L Hogan on Monday, March 22, 2021 at 2:53 PM.