Mission Statement
Sep 1, 2025
Thesis
My thesis focuses on the security and privacy challenges of vision-language models (VLMs). VLMs can augment human capabilities when paired with smart glasses, but these enhancements pose significant risks to users' and bystanders' security and privacy. My thesis explores new attack vectors that VLMs expose users to, as well as built-in privacy mechanisms to mitigate them.
AI Adoption
AI will inevitably reshape society. My mission is to ensure that (1) unexpected harms are minimized, (2) we have methods to opt out if we so choose, and (3) we explore unconventional approaches to LLM reasoning. If we want to integrate LLM agents into cyber-physical systems, we will need guarantees on output consistency and security. Likewise, if we want AI to be widely accepted in our society, we will need more "human-like" AI.
- Even if LLMs are unreliable, inconsistent, and incapable of novel knowledge generation, they will still be useful for filling gaps in knowledge and automating tasks.

On Autonomy
LLMs reduce people's autonomy, especially for those who rely on their outputs. There is no doubt that their best use cases are in surveillance (e.g., behavioral modification, advertising, profiling) and the automation of white-collar jobs. It will take many years (decades?) before we have AI agents robust and generalizable enough for safety-critical applications like humanoid robotics.
An easy question to ask is: “Who benefits from AI?” (hint: the answer is not “everyone”).
- When people are upset with AI, their true concerns are about its impact on the average person's decisional autonomy, employment opportunities, and problem-solving capabilities.

On Alignment
Interpretability wrappers, snitch models, deception detection, etc., will inevitably fail. A sufficiently "intelligent" LLM will find ways to hide its "behavior" and "intentions" (perhaps at the expense of performance). All it takes is one backdoor poisoned through a misaligned LLM's code or data formatting; that enables further misaligned outputs with longer hidden reasoning chains.
- So long as coding assistants exist, people working on LLMs will inevitably let malicious code or data from misaligned LLMs through.

More Details
If you'd like more technical details or research ideas, have a chat with me. I won't disclose ideas I'm currently working on online.
Funding Disclosure
My work has been, or is currently being, funded by the following organizations (list not exhaustive). My research ideas are my own. Thank you for your support!
Internships and Jobs
- I am open to collaborations or internships with industry, research labs, etc.! Shoot me an email if you're looking for a student with a deep interest in AI systems or security & privacy.

https://birdcheese.tumblr.com/post/134030028144/say-im-applying-for-a-job-where-i-may-be-helping
You can schedule a meeting here.