Generative Ads: Ethics of Targeting Ads using LLMs
August 1, 2024 • 1 min • 210 words
Information
Authors
Brian Tang, Noah T. Curran, Kaiwen Sun, Florian Schaub, Kang G. Shin
Conference
Under submission at CHI
Demo
Blog
Intro
Recent advances in natural language processing (NLP) and Generative AI (GenAI) have enabled increasingly capable and effective conversational AI, which technology companies may exploit as the next-generation medium for targeted advertising. This paper raises concerns over the use of large language models (LLMs) for monetization and investigates the risks and harms stemming from personalized advertising on chatbot platforms. We make several technical contributions: (1) a system design and retrieval-augmented generation (RAG) technique that enables chatbots to serve targeted product ads to users, (2) a quantitative evaluation of our system against unprompted models on various LLM benchmark datasets, and (3) a 2x3 interactive user study and survey investigating how factors such as model type, ad disclosure presence, and advertisement presence affect users' experiences and perceptions. We identify several risk factors introduced by LLMs, such as deceptive ads, ad subtlety, and autonomy subversion. Finally, we suggest several mitigation strategies.
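To make the RAG-based ad targeting idea concrete, here is a minimal sketch of how a chatbot might retrieve the product most relevant to a user's message and fold it into the system prompt. The ad catalog, the TF-IDF retriever, and the build_ad_prompt helper are illustrative assumptions for this sketch, not the paper's actual implementation, and the disclose flag only gestures at the ad-disclosure condition studied in the user study.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical ad catalog; a deployed system would draw on an advertiser-supplied product database.
AD_CATALOG = [
    {"product": "TrailRunner X shoes", "pitch": "lightweight running shoes for trail runners"},
    {"product": "BrewMaster coffee maker", "pitch": "programmable coffee maker for home baristas"},
    {"product": "FocusPro headphones", "pitch": "noise-cancelling headphones for studying and remote work"},
]

# Index the catalog pitches so user messages can be matched against them.
vectorizer = TfidfVectorizer()
ad_vectors = vectorizer.fit_transform(ad["pitch"] for ad in AD_CATALOG)

def retrieve_ad(user_message: str) -> dict:
    """Return the catalog entry most similar to the user's message."""
    query_vector = vectorizer.transform([user_message])
    scores = cosine_similarity(query_vector, ad_vectors)[0]
    return AD_CATALOG[scores.argmax()]

def build_ad_prompt(user_message: str, disclose: bool = True) -> str:
    """Assemble a system prompt that steers the chatbot toward the retrieved product."""
    ad = retrieve_ad(user_message)
    disclosure = ("Clearly label the recommendation as a sponsored ad."
                  if disclose else "Weave the recommendation naturally into the reply.")
    return ("You are a helpful assistant. When relevant, recommend "
            f"{ad['product']} ({ad['pitch']}). {disclosure}")

if __name__ == "__main__":
    # Example: the retrieved ad and disclosure instruction are prepended before calling the LLM.
    print(build_ad_prompt("Any tips for getting into trail running?"))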
Design Overview
Discussion
Citation
@inproceedings{tang2023,
  title={Generative Ads: Ethics of Targeting Ads using LLMs},
  author={Tang, Brian and Curran, Noah T. and Sun, Kaiwen and Schaub, Florian and Shin, Kang G.},
  booktitle={},
  year={}
}