Research Projects
GenAI Advertising: Risks of Personalizing Ads with LLMs
(Submission) Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Brian Tang, Kaiwen Sun, Noah T. Curran, Florian Schaub, Kang G. Shin
A research project examining the risks of embedding personalized advertisements within chatbot responses. We developed a system that generates targeted ads in LLM chatbot conversations and conducted a user study to assess how ad injection affects user trust and response quality. I created and evaluated the chatbot's ad personalization engine and helped run the user study. Our findings revealed that users struggle to detect chatbot ads, and that undisclosed ads are rated more favorably, raising ethical concerns about AI-driven advertising.
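The core idea can be sketched in a few lines. This is a hypothetical illustration, not the system described above: the targeting logic, prompt wording, and ad inventory are all invented for the example.

```python
# Hypothetical sketch of ad injection into an LLM chatbot: pick the ad whose
# keywords best match the user's inferred interests, then append an injection
# directive to the system prompt. All names and prompts are illustrative.

def select_ad(user_interests, ad_inventory):
    """Pick the ad with the largest keyword overlap with user interests."""
    def overlap(ad):
        return len(set(ad["keywords"]) & set(user_interests))
    return max(ad_inventory, key=overlap)

def build_system_prompt(base_prompt, ad, disclose=False):
    """Append an ad-injection directive to the chatbot's system prompt."""
    directive = f"When relevant, weave a mention of {ad['product']} into your answer."
    if disclose:
        directive += " Clearly label the mention as sponsored."
    return base_prompt + "\n" + directive

ads = [
    {"product": "TrailRunner shoes", "keywords": ["running", "hiking"]},
    {"product": "CodeMate IDE", "keywords": ["programming", "python"]},
]
ad = select_ad(["python", "programming"], ads)
prompt = build_system_prompt("You are a helpful assistant.", ad)
```

The `disclose` flag mirrors the study's disclosed-vs-undisclosed ad conditions.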
Analyzing Privacy Implications of Data Collection in Android Automotive OS
(Submission) 25th Privacy Enhancing Technologies Symposium
Bulut Gozubuyuk, Brian Jay Tang, Mert D. Pesé, Kang G. Shin
A research project investigating the privacy implications of Android Automotive OS (AAOS) in modern vehicles. We developed PriDrive, an automotive privacy analysis tool that employs static, dynamic, and network traffic inspection methodologies to evaluate the data collected by OEMs and compare it against their privacy policies. Our findings highlighted significant gaps in OEM privacy policies and potential privacy violations.
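At its simplest, the policy-comparison step reduces to a set difference between observed and disclosed data types. This is an illustrative sketch, not PriDrive's actual implementation; the data type names are invented.

```python
# Illustrative sketch: compare data types observed in a vehicle's network
# traffic against the types disclosed in the OEM's privacy policy, to
# surface potentially undisclosed collection. Type names are hypothetical.

def undisclosed_collection(observed, disclosed):
    """Return observed data types that the policy never discloses."""
    return sorted(set(observed) - set(disclosed))

observed_types = ["vin", "gps_location", "speed", "cabin_audio"]
policy_types = ["vin", "gps_location", "diagnostics"]

gaps = undisclosed_collection(observed_types, policy_types)
```

In practice the hard work is upstream: extracting the observed types via static, dynamic, and traffic analysis, and normalizing policy language into comparable categories.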
Short: Achieving the Safety and Security of the End-to-End AV Pipeline
1st Cyber Security in Cars Workshop (CSCS) at CCS
Noah T. Curran, Minkyoung Cho, Ryan Feng, Liangkai Liu, Brian Jay Tang, Pedram Mohajer Ansari, Alkim Domeke, Mert D. Pesé, Kang G. Shin
A research paper examining the security and safety challenges in autonomous vehicles (AVs). We provide a comprehensive analysis of AV vulnerabilities, including surveillance risks, sensor system reliability, adversarial attacks, and regulatory concerns. I contributed writing and survey work on surveillance risks and environmental safety/security risks. Our paper demonstrated the need for standardized security frameworks.
Navigating Cookie Compliance Around the Globe
(Submission) 34th USENIX Security Symposium (2025)
Brian Tang, Duc Bui, Kang G. Shin
A research project analyzing inconsistencies in cookie consent mechanisms on websites across the globe. We developed ConsentChk, an automated system that detects and categorizes violations between a website’s cookie usage and users’ consent preferences. I contributed to the design, writing, and analysis of cookie consent discrepancies across 1,458 globally popular websites. Our findings revealed that regional privacy laws and consent management platforms significantly impact cookie consent behavior and violation rates.
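The basic violation check can be sketched as follows. This is a hedged illustration of the kind of check ConsentChk automates, with invented cookie names and categories; the real system must first infer each cookie's category and the site's consent state.

```python
# Illustrative sketch: a cookie set from a category the user rejected in the
# consent banner is flagged as a violation. Names and categories are invented.

def find_violations(consent, observed_cookies):
    """consent maps category -> accepted?; each cookie carries a category."""
    return [c["name"] for c in observed_cookies
            if not consent.get(c["category"], False)]

consent_state = {"necessary": True, "analytics": False, "advertising": False}
cookies = [
    {"name": "session_id", "category": "necessary"},
    {"name": "_ga", "category": "analytics"},
    {"name": "ad_id", "category": "advertising"},
]
violations = find_violations(consent_state, cookies)
```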
Steward: Natural Language Web Automation
ArXiv Preprint
Brian Tang, Kang G. Shin
An LLM-powered web automation tool for scalable and efficient website interaction. We introduced Steward, an end-to-end system that interprets natural language instructions to navigate websites, automate tasks, and simulate user behavior without manual coding. I developed the system, focusing on improving execution speed, success rates, and cost efficiency. Our findings demonstrated that Steward enables more flexible and scalable web automation compared to traditional frameworks.
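The control flow of such a system is an observe-plan-act loop. The sketch below is hypothetical: the planner is a trivial stand-in for a real LLM call, and the page model is a list of labeled elements rather than a live browser.

```python
# Minimal sketch of an LLM-driven web automation loop in the spirit of
# Steward. plan_action stands in for an LLM call; a real system would
# observe an actual browser page and execute actions through a driver.

def plan_action(instruction, page_elements):
    """Stand-in planner: pick the element whose label matches the instruction."""
    for el in page_elements:
        if el["label"].lower() in instruction.lower():
            return {"action": "click", "target": el["id"]}
    return {"action": "done"}

def run(instruction, pages):
    """Repeatedly observe the page state, plan an action, and record it."""
    trace = []
    for elements in pages:  # each step yields the current page's elements
        step = plan_action(instruction, elements)
        trace.append(step)
        if step["action"] == "done":
            break
    return trace

pages = [
    [{"id": "btn1", "label": "Sign in"}],
    [],  # after the click, no matching element remains
]
trace = run("click the sign in button", pages)
```

The cost and latency optimizations mentioned above would live in how often and how much page context is sent to the model per planning step.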
Eye-Shield: Real-Time Protection of Mobile Device Screen Information from Shoulder Surfing
32nd USENIX Security Symposium (2023)
Brian Tang, Kang G. Shin
A novel defense system against shoulder surfing attacks on mobile devices. We designed Eye-Shield, a real-time system that makes on-screen content readable at close distances but blurred or pixelated at wider angles to prevent unauthorized viewing. I created and evaluated the system, ensuring it met real-time constraints while maintaining usability and minimal power consumption. Our findings demonstrated that Eye-Shield significantly reduces shoulder surfers’ ability to read on-screen information.
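One simple way to make content unreadable beyond a short distance is coarse pixelation, sketched below. This is only an illustration of the degradation idea; Eye-Shield's actual rendering uses more sophisticated real-time techniques to keep the screen readable for the user while degrading it for onlookers.

```python
# Illustrative sketch only: block-averaging (pixelation) of a grayscale
# image, represented as a list of rows of ints. Not Eye-Shield's algorithm.

def pixelate(image, block):
    """Replace each block x block region with its average value."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            vals = [image[y][x] for y in ys for x in xs]
            avg = sum(vals) // len(vals)
            for y in ys:
                for x in xs:
                    out[y][x] = avg
    return out

img = [[0, 100], [100, 200]]
blurred = pixelate(img, 2)
```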
Detection of Inconsistencies in Privacy Practices of Browser Extensions
44th IEEE Symposium on Security and Privacy (2023)
Duc Bui, Brian Tang, Kang G. Shin
An evaluation of inconsistencies between browser extensions’ privacy disclosures and their actual data collection practices. We developed ExtPrivA, an automated system that detects privacy violations by analyzing privacy policies and tracking data transfers from extensions to external servers. I contributed to the data collection and evaluation of 47.2k Chrome Web Store extensions to identify misleading privacy disclosures. Our findings revealed widespread discrepancies, highlighting the need for stricter enforcement of privacy policies.
Do Opt-Outs Really Opt Me Out?
29th ACM Conference on Computer and Communications Security (2022)
Duc Bui, Brian Tang, Kang G. Shin
A case study analyzing the reliability of opt-out choices provided by online trackers. We developed OptOutCheck, an automated framework that detects inconsistencies between trackers’ stated opt-out policies and their actual data collection practices. I contributed to the data collection and evaluation of 2.9k trackers. We found violations of privacy policies and opt-outs, underscoring the need to improve auditing of online trackers.
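The consistency check reduces to: if the policy promises that opting out stops collection, any data flow observed after opting out is an inconsistency. The sketch below is a hypothetical illustration with invented field names, not OptOutCheck's implementation.

```python
# Hypothetical sketch of the opt-out consistency check: a tracker whose
# policy says opt-out stops data collection, yet which still sends data
# after the user opts out, is flagged. Field names are illustrative.

def optout_inconsistent(policy_says_stops, flows_after_optout):
    """Policy promises no collection after opt-out, but flows persist."""
    return policy_says_stops and len(flows_after_optout) > 0

flows = [{"endpoint": "track.example.com", "data": ["device_id"]}]
violation = optout_inconsistent(policy_says_stops=True, flows_after_optout=flows)
```

The difficult parts in practice are extracting the policy's opt-out semantics from natural language and attributing observed network flows to the tracker.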
Fairness Properties of Face Recognition and Obfuscation Systems
32nd USENIX Security Symposium (2023)
Harrison Rosenberg, Brian Tang, Kassem Fawaz, Somesh Jha
A research project investigating the demographic fairness of face obfuscation systems in evading automated face recognition. We analyze how these systems, which rely on metric embedding networks, may exhibit disparities across different demographic groups. I implemented and tested the theoretical framework proposed by the lead author, which relates fairness properties to demographic disparities in anti-face-recognition systems.
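A toy version of the disparity being measured: in a metric embedding space, obfuscation succeeds when it pushes an image's embedding past the matcher's distance threshold, and groups can differ in how often that happens. The numbers and metric below are invented for illustration.

```python
# Toy sketch of a demographic disparity in obfuscation success. Distances
# are post-obfuscation embedding distances to the original identity; values
# and the threshold are illustrative.

def evasion_rate(distances, threshold):
    """Fraction of obfuscated images whose distance exceeds the threshold."""
    return sum(d > threshold for d in distances) / len(distances)

group_a = [1.2, 1.4, 0.9, 1.3]
group_b = [0.7, 0.8, 1.1, 0.6]
disparity = evasion_rate(group_a, 1.0) - evasion_rate(group_b, 1.0)
```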
CONFIDANT: A Privacy Controller for Social Robots
17th ACM/IEEE International Conference on Human-Robot Interaction (2022)
Brian Tang, Dakota Sullivan, Bengisu Cagiltay, Varun Chandrasekaran, Kassem Fawaz, Bilge Mutlu
A research project exploring privacy management in conversational social robots. We developed CONFIDANT, a privacy controller that leverages NLP models to analyze conversational metadata. I theorized, implemented, and evaluated the privacy controller, integrating speech transcription, sentiment analysis, speaker recognition, and topic classification systems. Our findings demonstrated that robots equipped with privacy controls are perceived as more trustworthy, privacy-aware, and socially aware.
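The controller's structure is a pipeline of analyses feeding a sharing decision. In the sketch below the stages are trivial rule-based stand-ins for the real transcription, sentiment, speaker recognition, and topic models, and the decision rule is invented for illustration.

```python
# Sketch of a privacy-controller pipeline in the spirit of CONFIDANT.
# The stage implementations are trivial stand-ins for learned models.

def classify_topic(text):
    return "health" if "doctor" in text else "general"

def sentiment(text):
    return "negative" if "worried" in text else "neutral"

def privacy_decision(utterance, speaker, present_people):
    """Withhold sensitive conversational content when others are present."""
    meta = {
        "speaker": speaker,
        "topic": classify_topic(utterance),
        "sentiment": sentiment(utterance),
    }
    sensitive = meta["topic"] == "health" or meta["sentiment"] == "negative"
    others_present = any(p != speaker for p in present_people)
    meta["share"] = not (sensitive and others_present)
    return meta

result = privacy_decision(
    "I'm worried about my doctor visit", "alice", ["alice", "bob"])
```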
Face-Off: Adversarial Face Obfuscation
21st Symposium Privacy Enhancing Technologies (2021)
Varun Chandrasekaran, Chuhan Gao, Brian Tang, Kassem Fawaz, Somesh Jha, Suman Banerjee
A security framework that protects user privacy by preventing unauthorized face recognition. Face-Off introduces strategic perturbations to facial images, making them unrecognizable to commercial face recognition models. I collaborated with the lead author to code, evaluate, and deploy the system, creating one of the first anti-face-recognition systems that uses targeted evasion attacks.
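The essence of a targeted evasion attack is iteratively perturbing the input so its embedding moves toward a chosen target identity. The sketch below is heavily simplified to one dimension with a linear stand-in embedding; Face-Off operates on images against real embedding models.

```python
# Heavily simplified 1-D sketch of a targeted evasion attack: gradient
# descent on the squared distance between the input's embedding and a
# target embedding. embed() is a toy stand-in for a face embedding model.

def embed(x):
    return 2.0 * x  # stand-in for a face embedding network

def targeted_perturb(x, target_emb, step=0.1, iters=100):
    """Nudge x so its embedding moves toward the target embedding."""
    for _ in range(iters):
        err = embed(x) - target_emb
        if abs(err) < 1e-3:
            break
        # gradient of 0.5 * err**2 w.r.t. x is err * d(embed)/dx = err * 2
        x -= step * err * 2.0
    return x

x_adv = targeted_perturb(1.0, target_emb=5.0)
```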
Rearchitecting Classification Frameworks For Increased Robustness
ArXiv Preprint
Varun Chandrasekaran, Brian Tang, Nicolas Papernot, Kassem Fawaz, Somesh Jha, Xi Wu
A case study and evaluation of how deep neural networks (DNNs), while highly effective, are vulnerable to adversarial inputs -- subtle manipulations designed to deceive them. Existing defenses often sacrifice accuracy and require extensive training. Collaborating with the lead author, I implemented a hierarchical classification approach that leverages invariant features to enhance adversarial robustness without compromising accuracy.
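The hierarchical idea can be sketched as a coarse classifier on a robust, invariant feature routing the input to a fine-grained classifier. The rules below are trivial stand-ins for learned models, and the features are invented for illustration.

```python
# Illustrative sketch of hierarchical classification: a coarse split on an
# invariant feature limits which fine-grained labels an adversarial
# perturbation can reach. Classifiers are toy rules, not trained models.

def coarse(features):
    """Coarse split on an invariant feature (e.g., overall size)."""
    return "vehicle" if features["size"] > 10 else "animal"

def fine(group, features):
    """Fine-grained classifier specialized to the coarse group."""
    if group == "vehicle":
        return "truck" if features["wheels"] > 4 else "car"
    return "dog" if features["legs"] == 4 else "bird"

def hierarchical_classify(features):
    return fine(coarse(features), features)

label = hierarchical_classify({"size": 50, "wheels": 6, "legs": 0})
```

The design intuition: even if a perturbation flips the fine-grained decision, an invariant coarse feature keeps the prediction within the correct superclass.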