Security
Analyzing Privacy Implications of Data Collection in Android Automotive OS
25th Privacy Enhancing Technologies Symposium (in submission)
Bulut Gozubuyuk, Brian Jay Tang, Mert D. Pesé, Kang G. Shin
A research project investigating the privacy implications of Android Automotive OS (AAOS) in modern vehicles. We developed PriDrive, an automotive privacy analysis tool that combines static, dynamic, and network-traffic inspection to evaluate the data OEMs collect and compare it against their privacy policies. Our findings revealed significant gaps in OEM privacy policies and potential privacy violations.
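As a rough illustration of PriDrive's policy cross-check step, the sketch below flags data types observed in captured vehicle traffic that a privacy policy never discloses. The identifiers and the data-type taxonomy are assumptions for illustration, not the tool's actual implementation.

```python
# Hypothetical sketch of a policy cross-check: all names and the data-type
# taxonomy below are illustrative assumptions, not PriDrive's real code.

# Vehicle data types observed via dynamic analysis and network traffic
# inspection of an AAOS head unit.
observed = {"gps_location", "vehicle_speed", "fuel_level", "vin"}

# Data types the OEM's privacy policy discloses collecting.
disclosed = {"gps_location", "vin"}

# Any observed type absent from the policy is a potential privacy violation.
for data_type in sorted(observed - disclosed):
    print(f"potential violation: '{data_type}' collected but not disclosed")
```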
Short: Achieving the Safety and Security of the End-to-End AV Pipeline
1st Cyber Security in Cars Workshop (CSCS) at CCS
Noah T. Curran, Minkyoung Cho, Ryan Feng, Liangkai Liu, Brian Jay Tang, Pedram Mohajer Ansari, Alkim Domeke, Mert D. Pesé, Kang G. Shin
A research paper examining the security and safety challenges of autonomous vehicles (AVs). We provide a comprehensive analysis of AV vulnerabilities, including surveillance risks, sensor-system reliability, adversarial attacks, and regulatory concerns. I contributed the writing and literature survey for the surveillance and environmental safety/security risks. Our paper demonstrates the need for standardized security frameworks.
Eye-Shield: Real-Time Protection of Mobile Device Screen Information from Shoulder Surfing
32nd USENIX Security Symposium (2023)
Brian Tang, Kang G. Shin
A novel defense system against shoulder-surfing attacks on mobile devices. We designed Eye-Shield, a real-time system that keeps on-screen content readable at close distances but blurred or pixelated at wider viewing angles to prevent unauthorized viewing. I built and evaluated the system, ensuring it met real-time constraints while maintaining usability and low power consumption. Our findings demonstrated that Eye-Shield significantly reduces shoulder surfers’ ability to read on-screen information.
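The general idea can be sketched as follows: overlay a high-frequency pattern that the primary user resolves up close but that washes out detail at wider angles and distances. This is a minimal illustration of the concept, assuming a simple per-pixel checkerboard; it is not the paper's algorithm or its tuned parameters.

```python
# Minimal sketch of angle-dependent screen protection: modulate brightness
# with a per-pixel checkerboard. The pattern and strength are illustrative
# assumptions, not Eye-Shield's actual rendering pipeline.
import numpy as np
from PIL import Image

def shield(img: Image.Image, strength: int = 48) -> Image.Image:
    arr = np.asarray(img.convert("RGB")).astype(np.int16)
    h, w, _ = arr.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Alternate +strength / -strength offsets on a checkerboard grid.
    pattern = np.where((yy + xx) % 2 == 0, strength, -strength)
    shielded = np.clip(arr + pattern[..., None], 0, 255).astype(np.uint8)
    return Image.fromarray(shielded)
```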
Fairness Properties of Face Recognition and Obfuscation Systems
32nd USENIX Security Symposium (2023)
Harrison Rosenberg, Brian Tang, Kassem Fawaz, Somesh Jha
A research project investigating the demographic fairness of face obfuscation systems in evading automated face recognition. We analyze how these systems, which rely on metric embedding networks, may exhibit disparities across demographic groups. I implemented and tested the lead author's theoretical framework, which characterizes how fairness properties of metric embeddings translate into demographic disparities in anti-face-recognition systems.
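A minimal sketch of the kind of disparity measurement this line of work motivates appears below: compare per-group rates at which obfuscation defeats a recognizer. The data structures are hypothetical stand-ins, not the paper's evaluation harness.

```python
# Hypothetical per-group evasion-rate computation; `results` is a list of
# (group, evaded) pairs, where evaded is True when the obfuscated face was
# NOT matched to its true identity.
from collections import defaultdict

def evasion_rates(results):
    totals, evaded = defaultdict(int), defaultdict(int)
    for group, success in results:
        totals[group] += 1
        evaded[group] += int(success)
    return {g: evaded[g] / totals[g] for g in totals}

rates = evasion_rates([("A", True), ("A", False), ("B", True), ("B", True)])
print(rates)  # a large gap between groups indicates unequal protection
```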
Face-Off: Adversarial Face Obfuscation
21st Privacy Enhancing Technologies Symposium (2021)
Varun Chandrasekaran, Chuhan Gao, Brian Tang, Kassem Fawaz, Somesh Jha, Suman Banerjee
A security framework that protects user privacy by preventing unauthorized face recognition. Face-Off introduces strategic perturbations to facial images, making them unrecognizable to commercial face recognition models. I collaborated with the lead author to code, evaluate, and deploy the system, creating one of the first anti-face-recognition systems to use targeted evasion attacks.
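In the spirit of Face-Off's targeted evasion attacks, the sketch below perturbs an image so its face embedding moves toward a chosen target identity. Here `embed` is a stand-in for any differentiable face encoder, and the step sizes and bounds are illustrative assumptions rather than the paper's tuned values.

```python
# Hedged sketch of a targeted evasion attack on a face-embedding model.
import torch

def targeted_perturb(embed, image, target_emb, eps=0.03, alpha=0.005, steps=40):
    x = image.clone().detach()
    for _ in range(steps):
        x.requires_grad_(True)
        # Pull the embedding of x toward the target identity's embedding.
        loss = torch.norm(embed(x) - target_emb)
        loss.backward()
        with torch.no_grad():
            x = x - alpha * x.grad.sign()             # gradient-sign step
            x = image + (x - image).clamp(-eps, eps)  # project into eps-ball
            x = x.clamp(0, 1).detach()                # keep a valid image
    return x
```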
Rearchitecting Classification Frameworks For Increased Robustness
arXiv preprint
Varun Chandrasekaran, Brian Tang, Nicolas Papernot, Kassem Fawaz, Somesh Jha, Xi Wu
A case study and evaluation of how deep neural networks (DNNs), though highly effective, are vulnerable to adversarial inputs: subtle manipulations designed to deceive them. Existing defenses often sacrifice accuracy and require extensive training. Collaborating with the lead author, I designed and implemented a hierarchical classification approach that leverages invariant features to enhance adversarial robustness without compromising accuracy.
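A minimal sketch of the two-stage hierarchical idea, under the assumption of toy data and off-the-shelf sklearn models: route each input through a coarse classifier over superclasses, then a fine classifier within the predicted superclass.

```python
# Illustrative two-stage hierarchical classifier; the data and models are
# placeholders, not the paper's architecture or invariant-feature extractors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
fine = rng.integers(0, 4, size=200)  # fine labels 0..3
coarse = fine // 2                   # superclasses: {0,1} -> 0, {2,3} -> 1

coarse_clf = LogisticRegression().fit(X, coarse)
fine_clfs = {c: LogisticRegression().fit(X[coarse == c], fine[coarse == c])
             for c in (0, 1)}

def hierarchical_predict(x):
    c = coarse_clf.predict(x.reshape(1, -1))[0]   # coarse routing first
    return fine_clfs[c].predict(x.reshape(1, -1))[0]

print(hierarchical_predict(X[0]))
```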