Sanghyun Hong, Oregon State University

Oregon State University
Computer Science Dept.
Cybersecurity | AI
Contact Information

Office: Room 2029, Kelley Engineering Center (KEC)
2500 NW Monroe Ave
Corvallis, OR 97331 USA
Office Hours: Tu/Th, 2-3 pm


Press

06.2023

OSU AI News

04.2022

TechXplore
Techradar.pro

06.2021

TechTalks

05.2021

Dev Podcast
MIT Tech Review

02.2021

USENIX Enigma 2021
(TED-style talks for security)

Teaching

Grad. AI579: Trustworthy ML
Grad. CS578: Cyber-Sec.
UGrad. CS370: Intro to Sec.
UGrad. CS344: OS I

Students [Full list]

Derek Lilienthal (PhD, AI)
Tahmid Prato (PhD, CS)
Jose Escamilla (PhD, CS;
  co-advised w. Huazheng Wang)
Gabriel Ritter (PhD, CS;
  co-advised w. Rakesh Bobba)

Eunjin Roh (MS, CS)
Zach Coalson (BS, CS)
Leo Marchyok (BS, CS)
AJ (BS, CS)
Dongwoo Kang (BS, CS)
Nyx (CS)

Alumni

'24: Ramya Jayaraman (MS, AI)
'23: Hoang Le (MS, CS)

'24: Colin Pannikkat (BS, CS)
'24: Evan Mrazik (BS, CS)
'22: Peter M-Stevens (BS, CS)
'22: Ryan Little (BS, CS)
  Now a PhD student at UMD

I am an Assistant Professor of Computer Science at Oregon State University.
I work at the intersection of computer security, privacy, and machine learning.

Research Interests

I am an AI hacker working to build trustworthy and socially responsible AI-enabled systems, so that people can use these systems to improve our lives and society. Thus far, I have been interested in characterizing the security, privacy, and dependability issues of AI-enabled systems from a holistic view (i.e., a systems-security perspective). I received the Google Faculty Research Award in 2023 and the Samsung Global Research Outreach (GRO) Award in 2022 and 2023. I was selected as a DARPA Riser (2022) and was an invited speaker at USENIX Enigma (2021).

Please fill out this form if you're motivated to work with me.

Bio

I earned my Ph.D. from the University of Maryland, College Park, in 2021, under the supervision of Prof. Tudor Dumitraș. I received my bachelor's degree from Seoul National University in 2015. I was fortunate to spend a winter at Google Brain in 2021 (working with Dr. Nicholas Carlini and Dr. Alexey Kurakin) and six months at Frame.io in 2017 (working with Dr. Abhinav Srivastava).

News


Sep. 25, 2024
Our Privacy-Backdoor paper is accepted to NeurIPS 2024. Congratulations, Leo!
Aug. 19, 2024
One paper is accepted to ACSAC 2024
Jul. 18, 2024
Delivered a keynote talk at AI Summer Security Workshop 2024
Jul. 16, 2024
One paper is accepted to CIKM 2024
May 1, 2024
Zachary's paper on poisoning attacks on NAS is on arXiv. Great job!
May 1, 2024
Two papers are accepted to ICML 2024
Apr. 15, 2024
Derek has been awarded the ARCS Foundation Oregon Scholar Award. Congratulations!
Apr. 1, 2024
Leo's contribution to the Privacy-Backdoor paper is on arXiv. Great job!
Mar. 1, 2024
Received the Samsung 2023 GRO Award. Thanks, Samsung!
Feb. 20, 2024
My K-12 student Ojas's paper will appear in COLING 2024
Feb. 14, 2024
Jose's story about AI safety is on KBVR-FM. Congrats!
Jan. 16, 2024
One paper is accepted at ICLR 2024
Dec. 10, 2023
One paper is accepted at AAAI 2024
Oct. 12, 2023
Received the Google exploreCSR Award 2023. Thanks, Google Research!
Sep. 21, 2023
Zachary's first paper will appear in NeurIPS 2023. Congratulations!
Jun. 15, 2023
Our team's story of advancing AI systems is in the OSU AI Newsletter

Selected Publications [Full list]


Learning Unforeseen Robustness from Out-of-distribution Data Using Equivariant Domain Translator
Sicheng Zhu, Bang An, Furong Huang, and Sanghyun Hong
International Conference on Machine Learning (ICML). 2023.
PDF | Code | Talk & Poster (on ICML'23 Website)

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, and Nicholas Carlini
(authors ordered reverse-alphabetically)
The ACM Conference on Computer and Communications Security (CCS), 2022.
PDF | Code | Media

Data Poisoning Won't Save You From Facial Recognition
Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, and Florian Tramèr
International Conference on Learning Representations (ICLR) 2022.
PDF | Code | Poster

Qu-ANTI-zation: Exploiting Neural Network Quantization for Achieving Adversarial Outcomes
Sanghyun Hong, Michael-Andrei Panaitescu-Liess, Yigitcan Kaya, and Tudor Dumitraș
Advances in Neural Information Processing Systems (NeurIPS) 2021.
PDF | Code | Poster

A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference
*Sanghyun Hong, *Yigitcan Kaya, Ionuţ-Vlad Modoranu, and Tudor Dumitraș (*equal contribution)
International Conference on Learning Representations (ICLR) 2021. [Spotlight]
PDF | Code | Spotlight Presentation

How to 0wn NAS in Your Spare Time
Sanghyun Hong, Michael Davinroy, Yigitcan Kaya, Dana Dachman-Soled, and Tudor Dumitraș
International Conference on Learning Representations (ICLR) 2020.
PDF | Code | Poster

Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks
Sanghyun Hong, Pietro Frigo, Yigitcan Kaya, Cristiano Giuffrida, and Tudor Dumitraș
Proceedings of The 28th USENIX Security Symposium (USENIX Security) 2019.
PDF | Presentation

Shallow-Deep Networks: Understanding and Mitigating Network Overthinking
Yigitcan Kaya, Sanghyun Hong, and Tudor Dumitraș
International Conference on Machine Learning (ICML) 2019.
PDF | Code