Sanghyun Hong, Oregon State University
Office: Room 2029, Kelley Engineering Center (KEC)
2500 NW Monroe Ave
Corvallis, OR 97331 USA
Office Hours: Tu/Th, 2-3 pm
USENIX Enigma 2021 (TED-style talks for security)
Grad. | AI579: Trustworthy ML
Grad. | CS578: Cyber-Sec.
UGrad. | CS370: Intro to Sec.
UGrad. | CS344: OS I
Derek Lilienthal (PhD, AI)
Tahmid Prato (PhD, CS)
Jose Escamilla (PhD, CS; co-advised with Huazheng Wang)
Gabriel Ritter (PhD, CS; co-advised with Rakesh Bobba)
Eunjin Roh (MS, CS)
Zach Coalson (BS, CS)
Leo Marchyok (BS, CS)
AJ (BS, CS)
Dongwoo Kang (BS, CS)
Nyx (CS)
'24: Ramya Jayaraman (MS, AI)
'23: Hoang Le (MS, CS)
'24: Colin Pannikkat (BS, CS)
'24: Evan Mrazik (BS, CS)
'22: Peter M-Stevens (BS, CS)
'22: Ryan Little (BS, CS), now a PhD student at UMD
I am an Assistant Professor of Computer Science at Oregon State University.
I work at the intersection of computer security, privacy and machine learning.
I am an AI hacker working to build trustworthy and socially responsible AI-enabled systems, so that people can use them to improve our lives and society. Thus far, I have been interested in characterizing the security, privacy, and dependability issues of AI-enabled systems from a holistic view (i.e., a systems security perspective). I received the Google Faculty Research Award in 2023 and the Samsung Global Research Outreach (GRO) Award in 2022 and 2023. I was selected as a DARPA Riser (2022) and was an invited speaker at USENIX Enigma (2021).
Please fill out this form if you're motivated to work with me.
I earned my Ph.D. from the University of Maryland, College Park, in 2021, under the supervision of Prof. Tudor Dumitraș. I received my bachelor's degree from Seoul National University in 2015. I was fortunate to spend a winter at Google Brain in 2021 (working with Dr. Nicholas Carlini and Dr. Alexey Kurakin) and six months at Frame.io in 2017 (working with Dr. Abhinav Srivastava).
Learning Unforeseen Robustness from Out-of-distribution Data Using Equivariant Domain Translator
Sicheng Zhu, Bang An, Furong Huang, and Sanghyun Hong
International Conference on Machine Learning (ICML) 2023.
PDF | Code | Talk & Poster (on ICML'23 Website)
Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong,
Nicholas Carlini (*authors ordered reverse-alphabetically)
The ACM Conference on Computer and Communications Security (CCS) 2022.
PDF | Code | Media
Data Poisoning Won't Save You From Facial Recognition
Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, and Florian Tramèr
International Conference on Learning Representations (ICLR) 2022.
PDF | Code | Poster
Qu-ANTI-zation: Exploiting Neural Network Quantization for Achieving Adversarial Outcomes
Sanghyun Hong, Michael-Andrei Panaitescu-Liess, Yigitcan Kaya, and Tudor Dumitraș
Advances in Neural Information Processing Systems (NeurIPS) 2021.
PDF | Code | Poster
A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference
*Sanghyun Hong, *Yigitcan Kaya, Ionuţ-Vlad Modoranu, and Tudor Dumitraș (*equal contribution)
International Conference on Learning Representations (ICLR) 2021.
[Spotlight]
PDF | Code | Spotlight Presentation
How to 0wn NAS in Your Spare Time
Sanghyun Hong, Michael Davinroy, Yigitcan Kaya, Dana Dachman-Soled, and Tudor Dumitraș
International Conference on Learning Representations (ICLR) 2020.
PDF | Code | Poster
Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks
Sanghyun Hong, Pietro Frigo, Yigitcan Kaya, Cristiano Giuffrida, and Tudor Dumitraș
Proceedings of the 28th USENIX Security Symposium (USENIX Security) 2019.
PDF | Presentation
Shallow-Deep Networks: Understanding and Mitigating Network Overthinking
Yigitcan Kaya, Sanghyun Hong, and Tudor Dumitraș
International Conference on Machine Learning (ICML) 2019.
PDF | Code