Conference
Eye-Shield: Real-Time Protection of Mobile Device Screen Information from Shoulder Surfing
32nd USENIX Security Symposium (2023)
Brian Tang, Kang G. Shin
A novel defense system against shoulder-surfing attacks on mobile devices. We designed Eye-Shield, a real-time system that makes on-screen content readable at close distances but blurred or pixelated at wider viewing angles to prevent unauthorized viewing. I implemented and evaluated the system, ensuring it met real-time constraints while preserving usability and keeping power consumption low. Our findings demonstrated that Eye-Shield significantly reduces shoulder surfers’ ability to read on-screen information.
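The general rendering idea can be sketched in a few lines. The snippet below is only an illustrative sketch, not Eye-Shield's actual algorithm: it assumes a simple checkerboard interleaving in which half of the pixels keep the original content and the other half show a uniform decoy, so that neighboring pixels blend toward the decoy when viewed from a wide angle or greater distance. The `interleave_decoy` function and its inputs are hypothetical.

```python
import numpy as np

def interleave_decoy(content: np.ndarray, decoy: np.ndarray) -> np.ndarray:
    """Blend content with a decoy image on a checkerboard grid.

    Up close, the eye resolves the individual content pixels; at a wide
    viewing angle or larger distance, neighboring pixels blend together,
    so the perceived image is dominated by the averaged (decoy-heavy) mix.
    """
    assert content.shape == decoy.shape
    h, w = content.shape[:2]
    # Checkerboard mask: True where the original content pixel is kept.
    yy, xx = np.mgrid[0:h, 0:w]
    keep = (yy + xx) % 2 == 0
    out = np.where(keep[..., None], content, decoy)
    return out.astype(content.dtype)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    screen = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)  # stand-in screen content
    gray = np.full_like(screen, 128)                               # uniform decoy pattern
    protected = interleave_decoy(screen, gray)
    print(protected[:2, :4, 0])  # a few protected pixel values
```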
Detection of Inconsistencies in Privacy Practices of Browser Extensions
44th IEEE Symposium on Security and Privacy (2023)
Duc Bui, Brian Tang, Kang G. Shin
An evaluation of inconsistencies between browser extensions’ privacy disclosures and their actual data collection practices. We developed ExtPrivA, an automated system that detects privacy violations by analyzing privacy policies and tracking data transfers from extensions to external servers. I contributed to the data collection and evaluation of 47.2k Chrome Web Store extensions to identify misleading privacy disclosures. Our findings revealed widespread discrepancies, highlighting the need for stricter enforcement of privacy policies.
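The underlying consistency check can be illustrated with a short sketch: compare the data types an extension discloses against the data types observed flowing to external servers, and flag transfers that are not disclosed. This is a simplified illustration, not ExtPrivA's implementation; the `flag_undisclosed_flows` helper and its data-type labels are hypothetical.

```python
from typing import Dict, List, Set

def flag_undisclosed_flows(disclosed: Set[str],
                           observed_flows: List[Dict[str, str]]) -> List[Dict[str, str]]:
    """Return observed data transfers whose data type is not disclosed.

    `disclosed` holds data types extracted from the privacy disclosure
    (e.g., by an NLP pipeline); `observed_flows` holds data types seen in
    network traffic from the extension to external servers.
    """
    return [flow for flow in observed_flows
            if flow["data_type"] not in disclosed]

if __name__ == "__main__":
    disclosed_types = {"usage statistics"}  # taken from the extension's disclosure
    flows = [
        {"data_type": "usage statistics", "endpoint": "https://stats.example.com"},
        {"data_type": "browsing history", "endpoint": "https://track.example.com"},
    ]
    for violation in flag_undisclosed_flows(disclosed_types, flows):
        print("Potential inconsistency:", violation)
```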
Do Opt-Outs Really Opt Me Out?
29th ACM Conference on Computer and Communications Security (2022)
Duc Bui, Brian Tang, Kang G. Shin
A study analyzing the reliability of the opt-out choices provided by online trackers. We developed OptOutCheck, an automated framework that detects inconsistencies between trackers’ stated opt-out policies and their actual data collection practices. I contributed to the data collection and evaluation of 2.9k trackers. We found trackers that violated their privacy policies and opt-out commitments, underscoring the need for improved auditing of online trackers.
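The opt-out check can be illustrated in a similar way: if a tracker's policy states that data collection stops after opting out, then any tracking request observed afterwards is a candidate inconsistency. The sketch below only illustrates this idea, not OptOutCheck itself; the function and endpoint names are hypothetical.

```python
from typing import List, Set

def detect_post_optout_collection(policy_says_stops: bool,
                                  post_optout_requests: List[str],
                                  tracking_endpoints: Set[str]) -> List[str]:
    """Flag tracking requests observed after the user opted out.

    If the tracker's policy states that data collection stops after opt-out,
    any request to a known tracking endpoint made afterwards is a candidate
    inconsistency between the stated policy and observed behavior.
    """
    if not policy_says_stops:
        return []
    return [url for url in post_optout_requests if url in tracking_endpoints]

if __name__ == "__main__":
    endpoints = {"https://collect.tracker.example/pixel"}
    seen_after_optout = [
        "https://cdn.tracker.example/optout.js",
        "https://collect.tracker.example/pixel",
    ]
    print(detect_post_optout_collection(True, seen_after_optout, endpoints))
```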
Fairness Properties of Face Recognition and Obfuscation Systems
32nd USENIX Security Symposium (2023)
Harrison Rosenberg, Brian Tang, Kassem Fawaz, Somesh Jha
A research project investigating the demographic fairness of face obfuscation systems in evading automated face recognition. We analyzed how these systems, which rely on metric embedding networks, can exhibit disparities in effectiveness across demographic groups. I implemented and tested the theoretical framework proposed by the lead author, which relates properties of metric embedding networks to demographic disparities in anti-face-recognition systems.
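One way to make the notion of disparity concrete is to compare obfuscation (evasion) success rates across demographic groups, as in the illustrative sketch below. This is not the paper's formal framework, only a simple per-group metric with hypothetical names and data.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def evasion_rate_by_group(results: List[Tuple[str, bool]]) -> Dict[str, float]:
    """Compute per-group evasion rates from (group, evaded) outcomes."""
    totals, evaded = defaultdict(int), defaultdict(int)
    for group, did_evade in results:
        totals[group] += 1
        evaded[group] += int(did_evade)
    return {g: evaded[g] / totals[g] for g in totals}

if __name__ == "__main__":
    # Hypothetical outcomes: whether each obfuscated face evaded recognition.
    outcomes = [("group_a", True), ("group_a", True), ("group_a", False),
                ("group_b", True), ("group_b", False), ("group_b", False)]
    rates = evasion_rate_by_group(outcomes)
    disparity = max(rates.values()) - min(rates.values())
    print(rates, "disparity:", round(disparity, 2))
```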
CONFIDANT: A Privacy Controller for Social Robots
17th ACM/IEEE International Conference on Human-Robot Interaction (2022)
Brian Tang, Dakota Sullivan, Bengisu Cagiltay, Varun Chandrasekaran, Kassem Fawaz, Bilge Mutlu
A research project exploring privacy management in conversational social robots. We developed CONFIDANT, a privacy controller that leverages NLP models to analyze conversational metadata. I designed, implemented, and evaluated the privacy controller, integrating speech transcription, sentiment analysis, speaker recognition, and topic classification components. Our findings demonstrated that robots equipped with privacy controls are perceived as more trustworthy, privacy-aware, and socially aware.
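The controller's decision logic can be sketched as a simple rule over the extracted conversational metadata. The example below is an illustrative sketch rather than CONFIDANT's actual policy; the metadata fields, topic set, and thresholds are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class ConversationMetadata:
    transcript: str    # from a speech-transcription component
    sentiment: float   # -1 (negative) .. 1 (positive), from sentiment analysis
    speaker_id: str    # from speaker recognition
    topic: str         # from topic classification

SENSITIVE_TOPICS = {"health", "finances", "relationships"}  # illustrative set

def may_share(meta: ConversationMetadata, requester_id: str) -> bool:
    """Decide whether the robot may share conversation details with a requester.

    A simple rule: never share sensitive topics or strongly negative
    conversations with anyone other than the original speaker.
    """
    if requester_id == meta.speaker_id:
        return True
    if meta.topic in SENSITIVE_TOPICS or meta.sentiment < -0.5:
        return False
    return True

if __name__ == "__main__":
    meta = ConversationMetadata("I'm worried about my test results", -0.7, "alice", "health")
    print(may_share(meta, "bob"))    # False: sensitive topic, different requester
    print(may_share(meta, "alice"))  # True: the original speaker
```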
Face-Off: Adversarial Face Obfuscation
21st Privacy Enhancing Technologies Symposium (2021)
Varun Chandrasekaran, Chuhan Gao, Brian Tang, Kassem Fawaz, Somesh Jha, Suman Banerjee
A security framework that protects user privacy by preventing unauthorized face recognition. Face-Off introduces adversarial perturbations to facial images, making them unrecognizable to commercial face recognition models. I collaborated with the lead author to implement, evaluate, and deploy the system, creating one of the first anti-face-recognition systems to use targeted evasion attacks.
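The style of attack, targeted evasion against a face-embedding model, can be sketched as a projected-gradient loop that nudges an image's embedding toward a decoy identity while keeping the perturbation small. The sketch below uses a stand-in embedding network and hypothetical parameters; it illustrates the general technique, not Face-Off's implementation.

```python
import torch
import torch.nn as nn

# Stand-in face-embedding network; a real attack would target an actual model.
embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))

def targeted_perturbation(image: torch.Tensor, target_emb: torch.Tensor,
                          eps: float = 0.03, steps: int = 40, lr: float = 0.005) -> torch.Tensor:
    """Craft a small perturbation pulling the image's embedding toward a target identity.

    Gradient steps minimize the embedding distance to `target_emb` while the
    perturbation stays within an L-infinity ball of radius `eps` (a PGD-style loop).
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = torch.norm(embed(image + delta) - target_emb)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # step toward the target embedding
            delta.clamp_(-eps, eps)           # project back into the L-inf ball
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)                      # placeholder face image
    target = embed(torch.rand(1, 3, 64, 64)).detach()   # embedding of a decoy identity
    adv = targeted_perturbation(img, target)
    print("distance before:", torch.norm(embed(img) - target).item())
    print("distance after: ", torch.norm(embed(adv) - target).item())
```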