Brian Jay Tang

1st Year PhD Student

Ann Arbor, MI

About Me

I’m a first-year Computer Science PhD student at the University of Michigan and a member of the Real-Time Computing Lab, advised by Prof. Kang Shin. My research focuses on the security, privacy, and ethics of machine learning systems. As an undergraduate, I was part of the Security and Privacy Research Group (Wisconsin-Privacy), working with Prof. Kassem Fawaz and Varun Chandrasekaran on machine learning security and privacy. I have also interned at Roblox Corporation and Optum as a software engineer. Despite the challenges, I am having a wonderful time as a PhD student.

Please contact me at byron123t [at] gmail [dot] com or bjaytang [at] umich [dot] edu. I’m also happy to meet in person if you happen to be on campus or in the Chicago area.


Research Assistant

Fall 2021 - Present · University of Michigan

Research Assistant

Fall 2018 - Summer 2021 · University of Wisconsin - Madison

I contributed to four research projects, with work spanning experiment design, writing, dataset creation, and literature surveys.

Software Engineering Intern

Summer 2019 · Roblox

Using test-driven development, I designed and implemented core features for Roblox Studio’s script editor.

Software Engineering Intern

Summer 2018 · Optum

I designed and developed a web application that aggregates and visualizes over 50 million records from security scans and databases.


Ph.D. in Computer Science

2021 - 2026 · University of Michigan

Thesis: To be determined

M.S. in Computer Science

2021 - 2023 · University of Michigan

Thesis: To be determined

B.S. in Computer Science

2017 - 2020 · University of Wisconsin - Madison


Journal Articles

2021. H. Rosenberg, B. Tang, K. Fawaz, S. Jha. Fairness Properties of Face Recognition and Obfuscation Systems. arXiv preprint arXiv:2108.02707.

2019. V. Chandrasekaran, B. Tang, N. Papernot, K. Fawaz, S. Jha, X. Wu. Rearchitecting Classification Frameworks For Increased Robustness. arXiv preprint arXiv:1905.10900.


2020. V. Chandrasekaran, C. Gao, B. Tang, K. Fawaz, S. Jha, S. Banerjee. Face-off: Adversarial Face Obfuscation. Proceedings on Privacy Enhancing Technologies, 2021(2), 369–390.


Confidant: A Privacy Controller for Social Robots

As social robots become increasingly prevalent in day-to-day environments, they will participate in conversations and will need to manage the information shared with them appropriately. However, little is known about how robots might discern the sensitivity of information, which has major implications for human-robot trust. To address this issue, we designed CONFIDANT, a privacy controller for conversational social robots that uses contextual metadata from conversations (e.g., sentiment, relationships, topic) to model privacy boundaries.

Fairness Properties of Face Recognition and Obfuscation Systems

The proliferation of automated facial recognition in various commercial and government sectors has raised significant privacy concerns for individuals. A recent and popular approach to addressing these concerns is to employ evasion attacks against facial recognition systems, with perturbations generated using a pre-trained metric embedding network. This dependence of face obfuscation on metric embedding networks, which are known to be unfair in the context of facial recognition, surfaces a question of demographic fairness: are there demographic disparities in the performance of face obfuscation systems?

Face-off: Adversarial Face Obfuscation

Advances in deep learning have made face recognition technologies pervasive. While useful to social media platforms and users, this technology carries significant privacy threats. Coupled with the abundant information they have about users, service providers can associate users with social interactions, visited places, activities, and preferences, some of which the user may not want to share. We propose Face-Off, a privacy-preserving framework that introduces strategic perturbations to the user’s face to prevent it from being correctly recognized.

Rearchitecting Classification Frameworks For Increased Robustness

While neural networks generalize well over natural inputs, they are vulnerable to adversarial inputs. Existing defenses against adversarial inputs have largely been detached from the real world, and they come at a cost to accuracy. We find that applying invariances from the classification task makes robustness and accuracy feasible together. Two questions follow: how do we extract and model these invariances, and how do we design a classification paradigm that leverages them to improve the robustness-accuracy trade-off? The remainder of the paper discusses solutions to these questions.


Mandarin Chinese


  • Cooking
  • Reading
  • Investing
  • Video games, tabletop games, board games
  • Anime, manga
  • Skateboarding, biking
  • Meditation, taijiquan