Sanghyun Hong

Oregon State University
Computer Science Dept.
Cybersecurity | AI
Contact Information

Office: Room 2029, Kelley Engineering Center (KEC)
2500 NW Monroe Ave
Corvallis, OR 97331 USA
Office Hours: Tu/Th: 2 - 3 pm


Press

06.2023

OSU AI News

04.2022

TechXplore
Techradar.pro

06.2021

TechTalks

05.2021

Dev Podcast
MIT Tech Review

02.2021

USENIX Enigma 2021
(the TED Talk for security)

Teaching

Grad. AI579: Trustworthy ML
Grad. CS578: Cyber-Sec.
UGrad. CS370: Intro to Sec.
UGrad. CS344: OS I
Students [Full list]

Zachary Coalson (PhD, CS)
Derek Lilienthal (PhD, AI)
Jose Escamilla (PhD, CS;
  co-advised with Huazheng Wang)
Gabriel Ritter (PhD, CS;
  co-advised with Rakesh Bobba)

Eunjin Roh (MS, CS)
Tahmid Prato (MS, CS)
Leo Marchyok (BS, CS)
AJ (BS, CS)
Dongwoo Kang (BS, CS)
Nyx (CS)

Alumni

'24: Ramya Jayaraman (MS, AI)
'23: Hoang Le (MS, CS)

'24: Colin Pannikkat (BS, CS)
'24: Evan Mrazik (BS, CS)
'22: Peter M-Stevens (BS, CS)
'22: Ryan Little (BS, CS)
  Now a PhD student at UMD

I am a computer scientist and educator (and also an AI hacker) dedicated to addressing threats to the trustworthiness and social responsibility of AI-enabled systems, while fostering the next-generation workforce capable of auditing, characterizing, and countering these threats. I received the Google Faculty Research Award (2023) and the Samsung Global Research Outreach (GRO) Award (2022, 2023). I was selected as a DARPA Riser (2022) and was an invited speaker at USENIX Enigma (2021).

Please fill out this form if you're motivated to work with me.

Bio

I am currently an Assistant Professor of Computer Science at Oregon State University. I earned my Ph.D. from the University of Maryland, College Park, in 2021, under the supervision of Prof. Tudor Dumitraș, and received my bachelor's degree from Seoul National University in 2015. I was fortunate to spend the winter of 2021 at Google Brain (working with Dr. Nicholas Carlini and Dr. Alexey Kurakin) and six months at Frame.io in 2017 (working with Dr. Abhinav Srivastava).

News


Dec. 17, 2024
Received the OSU-HP Seed Grant. Thanks, HP!
Dec. 13, 2024
Eunjin and Sungwoo's paper on evaluating the (adversarial) robustness of recent phishing detectors was accepted to ASIA CCS 2025. Great job!
Sep. 25, 2024
Our Privacy-Backdoor paper was accepted to NeurIPS 2024. Congratulations, Leo!
Aug. 19, 2024
One paper was accepted to ACSAC 2024
Jul. 18, 2024
Delivered a keynote talk at AI Summer Security Workshop 2024
Jul. 16, 2024
One paper was accepted to CIKM 2024
May 1, 2024
Zachary's paper on poisoning attacks on NAS is on arXiv. Great job!
May 1, 2024
Two papers were accepted to ICML 2024
Apr. 15, 2024
Derek has been awarded the ARCS Foundation Oregon Scholar Award. Congratulations!
Apr. 1, 2024
Leo's contribution to the Privacy-Backdoor paper is on arXiv. Great job!
Mar. 1, 2024
Received the Samsung 2023 GRO Award. Thanks, Samsung!
Feb. 20, 2024
A paper by my K-12 student (Ojas) will appear at COLING 2024
Feb. 14, 2024
Jose's story about AI safety is on KBVR-FM. Congrats!
Jan. 16, 2024
One paper was accepted to ICLR 2024
Before Dec. 2023  

Selected Publications [Full list]


Learning Unforeseen Robustness from Out-of-distribution Data Using Equivariant Domain Translator
Sicheng Zhu, Bang An, Furong Huang, and Sanghyun Hong
International Conference on Machine Learning (ICML). 2023.
PDF | Code | Talk & Poster (on ICML'23 Website)

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong,
and Nicholas Carlini
(authors ordered reverse-alphabetically)
The ACM Conference on Computer and Communications Security (CCS), 2022.
PDF | Code | Media

Data Poisoning Won't Save You From Facial Recognition
Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, and Florian Tramèr
International Conference on Learning Representations (ICLR) 2022.
PDF | Code | Poster

Qu-ANTI-zation: Exploiting Neural Network Quantization for Achieving Adversarial Outcomes
Sanghyun Hong, Michael-Andrei Panaitescu-Liess, Yigitcan Kaya, and Tudor Dumitraș
Advances in Neural Information Processing Systems (NeurIPS) 2021.
PDF | Code | Poster

A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference
*Sanghyun Hong, *Yigitcan Kaya, Ionuţ-Vlad Modoranu, and Tudor Dumitraș (* equal contribution)
International Conference on Learning Representations (ICLR) 2021. [Spotlight]
PDF | Code | Spotlight Presentation

How to 0wn NAS in Your Spare Time
Sanghyun Hong, Michael Davinroy, Yigitcan Kaya, Dana Dachman-Soled, and Tudor Dumitraș
International Conference on Learning Representations (ICLR) 2020.
PDF | Code | Poster

Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks
Sanghyun Hong, Pietro Frigo, Yigitcan Kaya, Cristiano Giuffrida, and Tudor Dumitraș
Proceedings of The 28th USENIX Security Symposium (USENIX Security) 2019.
PDF | Presentation

Shallow-Deep Networks: Understanding and Mitigating Network Overthinking
Yigitcan Kaya, Sanghyun Hong, and Tudor Dumitraș
International Conference on Machine Learning (ICML) 2019.
PDF | Code