Abstract
Deep neural networks are susceptible to model piracy and adversarial attacks when malicious end-users have full access to the model parameters. Recently, a logic locking scheme called HPNN has been proposed, which uses a hardware root of trust to prevent end-users from accessing the model parameters. This paper investigates whether logic locking is secure when applied to deep neural networks. Specifically, it presents a systematic I/O attack that combines algebraic and learning-based approaches. The attack incrementally extracts key values from the network to minimize sample complexity and employs a rigorous procedure to verify the correctness of the extracted key values. Our experiments demonstrate the accuracy and efficiency of the attack on large networks with complex architectures. We therefore conclude that HPNN-style logic locking, along with its foreseeable variants, is insecure on deep neural networks.
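To make the idea of incremental key extraction from I/O queries concrete, the following is a minimal toy sketch only: it assumes a hypothetical sign-flip locking of a single neuron's weights, and all names (`oracle`, `locked_weights`, `secret_key`) are made up for illustration. It is not the HPNN locking scheme or the paper's actual attack.

```python
# Toy sketch (illustration only): a neuron whose stored weights are
# sign-locked by a secret key, recovered one key bit at a time from
# black-box I/O queries. The locking scheme and names are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 8

true_weights = rng.normal(size=n)             # correct (unlocked) weights
secret_key = rng.integers(0, 2, size=n)       # key bits hidden in hardware
locked_weights = true_weights * np.where(secret_key == 1, -1.0, 1.0)

def oracle(x):
    """Black-box I/O access to the correctly keyed device."""
    return float(true_weights @ x)

# Incremental extraction: a unit-vector query isolates one weight, so each
# key bit is resolved independently, keeping the number of queries small.
recovered_key = np.zeros(n, dtype=int)
for i in range(n):
    e_i = np.zeros(n)
    e_i[i] = 1.0
    observed = oracle(e_i)                    # equals true_weights[i]
    # A sign mismatch between the locked weight and the observation
    # means the key bit for this position flipped the weight.
    recovered_key[i] = int(np.sign(observed) != np.sign(locked_weights[i]))

assert np.array_equal(recovered_key, secret_key)
print("recovered key:", recovered_key)
```

In the attack the abstract describes, algebraic reasoning and learning-based search take the place of these single-weight probes, but the per-key, incremental resolution is what the abstract credits with minimizing sample complexity.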
Original language | English (US) |
---|---|
Title of host publication | Proceedings of the 61st ACM/IEEE Design Automation Conference, DAC 2024 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
ISBN (Electronic) | 9798400706011 |
State | Published - Nov 7 2024 |
Event | 61st ACM/IEEE Design Automation Conference, DAC 2024, San Francisco, United States; Duration: Jun 23 2024 → Jun 27 2024 |
Publication series
Name | Proceedings - Design Automation Conference |
---|---|
ISSN (Print) | 0738-100X |
Conference
Conference | 61st ACM/IEEE Design Automation Conference, DAC 2024 |
---|---|
Country/Territory | United States |
City | San Francisco |
Period | 6/23/24 → 6/27/24 |
Funding
This work is partially supported by the NSF under grants 2113704 and 2148177.
Keywords
- IP protection
- Logic locking
- Reverse engineering neural networks
ASJC Scopus subject areas
- Computer Science Applications
- Control and Systems Engineering
- Electrical and Electronic Engineering
- Modeling and Simulation