“Ads that Talk Back”: Implications and Perceptions of Injecting Personalized Advertising into LLM Chatbots
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Brian Tang, Kaiwen Sun, Noah T. Curran, Florian Schaub, Kang G. Shin
Embedding personalized advertisements within chatbot responses. We developed a system that generates targeted ads in LLM chatbot conversations and conducted a user study to assess how ad injection impacts trust and response quality. We found that users struggle to detect chatbot ads and that undisclosed ads are rated more favorably, raising ethical concerns. Contribution: lead author.
Navigating Cookie Consent Violations Across the Globe
34th USENIX Security Symposium (2025)
Brian Tang, Duc Bui, Kang G. Shin
Analyzing inconsistencies in cookie consent mechanisms on websites across the globe. We developed ConsentChk, an automated system that detects and categorizes violations between a website’s cookie usage and users’ consent preferences. Contribution: measurement design, writing, and analysis of cookie consent discrepancies across 1,793 globally popular websites.
Analyzing Privacy Implications of Data Collection in Android Automotive OS
ArXiv Preprint
Bulut Gozubuyuk, Brian Jay Tang, Kang G. Shin, Mert D. Pesé
Investigating the privacy implications of Android Automotive OS (AAOS) in vehicles. We developed PriDrive, which performs static, dynamic, and network traffic inspection to evaluate the data collected by OEMs and compare it against their privacy policies. Contribution: privacy policy analyzer.
Short: Achieving the Safety and Security of the End-to-End AV Pipeline
1st Cyber Security in Cars Workshop (CSCS) at CCS
Noah T. Curran, Minkyoung Cho, Ryan Feng, Liangkai Liu, Brian Jay Tang, Pedram Mohajer Ansari, Alkim Domeke, Mert D. Pesé, Kang G. Shin
A survey paper examining security and safety challenges in autonomous vehicles (AVs), including surveillance risks, sensor system reliability, adversarial attacks, and regulatory concerns. Contribution: writing on surveillance and environmental safety risks.
Steward: Natural Language Web Automation
ArXiv Preprint
Brian Tang, Kang G. Shin
An LLM-powered web automation tool for scalable and efficient website interaction. Steward is an end-to-end system that interprets natural language instructions to navigate websites, automate tasks, and simulate user behavior without manual coding. Contribution: lead author.
Detection of Inconsistencies in Privacy Practices of Browser Extensions
44th IEEE Symposium on Security and Privacy (2023)
Duc Bui, Brian Tang, Kang G. Shin
An evaluation of inconsistencies between browser extensions’ privacy disclosures and their actual data collection practices. We developed ExtPrivA to detect privacy violations by analyzing privacy policies and tracking data transfers from extensions to servers. Contribution: data collection and evaluation of 47.2k Chrome Web Store extensions, uncovering misleading privacy disclosures.
Do Opt-Outs Really Opt Me Out?
29th ACM Conference on Computer and Communications Security (2022)
Duc Bui, Brian Tang, Kang G. Shin
A case study analyzing the reliability of opt-out choices provided by online trackers. We developed OptOutCheck to detect inconsistencies between trackers’ stated opt-out policies and their actual data collection practices. Contribution: data collection and evaluation of 2.9k trackers.
Fairness Properties of Face Recognition and Obfuscation Systems
32nd USENIX Security Symposium (2023)
Harrison Rosenberg, Brian Tang, Kassem Fawaz, Somesh Jha
Investigating the demographic fairness of anti-face-recognition (face obfuscation) systems. We analyze how the underlying metric embedding networks may exhibit disparities across demographic groups. Contribution: coded, implemented, and tested the theoretical framework proposed by the lead author.
CONFIDANT: A Privacy Controller for Social Robots
17th ACM/IEEE International Conference on Human-Robot Interaction (2022)
Brian Tang, Dakota Sullivan, Bengisu Cagiltay, Varun Chandrasekaran, Kassem Fawaz, Bilge Mutlu
Exploring privacy management in conversational social robots. We developed CONFIDANT, a privacy controller that leverages various NLP models to analyze conversational metadata. We found that robots equipped with privacy controls are perceived as more trustworthy, privacy-aware, and socially aware. Contribution: lead author.
Face-Off: Adversarial Face Obfuscation
21st Privacy Enhancing Technologies Symposium (2021)
Varun Chandrasekaran, Chuhan Gao, Brian Tang, Kassem Fawaz, Somesh Jha, Suman Banerjee
Protecting user privacy by preventing face recognition. Face-Off introduces perturbations to face images, making them unrecognizable to commercial face recognition models. Contribution: coding, evaluation, and deployment of one of the first anti-face-recognition systems.
Rearchitecting Classification Frameworks For Increased Robustness
ArXiv Preprint
Varun Chandrasekaran, Brian Tang, Nicolas Papernot, Kassem Fawaz, Somesh Jha, Xi Wu
A case study and evaluation of how deep neural networks (DNNs), though highly effective, are vulnerable to adversarial inputs. Contribution: implemented a hierarchical classification approach that leverages invariant features to enhance adversarial robustness without compromising accuracy.