
Rearchitecting Classification Frameworks For Increased Robustness

ArXiv Preprint

Varun Chandrasekaran, Brian Tang, Nicolas Papernot, Kassem Fawaz, Somesh Jha, Xi Wu

A case study and evaluation of how deep neural networks (DNNs) are highly effective yet vulnerable to adversarial inputs: subtle manipulations designed to deceive them. Existing defenses often sacrifice accuracy and require extensive training. Collaborating with the lead author, I implemented a hierarchical classification approach that leverages invariant features to improve adversarial robustness without compromising accuracy.
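As a rough illustration of the general idea (not the paper's actual architecture), the sketch below shows a two-stage hierarchical classifier in PyTorch: a coarse classifier predicts a superclass from features that are intended to be perturbation-invariant, and a per-superclass fine classifier then produces the final label. All module and parameter names here are hypothetical.

```python
# Minimal, hypothetical sketch of hierarchical classification:
# stage 1 predicts a superclass, stage 2 refines it to a final label.
import torch
import torch.nn as nn

class HierarchicalClassifier(nn.Module):
    def __init__(self, feature_dim, num_superclasses, fine_per_super):
        super().__init__()
        # Shared feature extractor; in practice this would compute the
        # invariant features the approach relies on.
        self.features = nn.Sequential(nn.Linear(feature_dim, 128), nn.ReLU())
        # Stage 1: coarse (superclass) classifier.
        self.coarse = nn.Linear(128, num_superclasses)
        # Stage 2: one fine classifier per superclass.
        self.fine = nn.ModuleList(
            [nn.Linear(128, fine_per_super) for _ in range(num_superclasses)]
        )

    def forward(self, x):
        h = self.features(x)
        super_logits = self.coarse(h)
        super_pred = super_logits.argmax(dim=1)
        # Route each example to the fine classifier of its predicted superclass.
        fine_logits = torch.stack(
            [self.fine[int(s)](h[i]) for i, s in enumerate(super_pred)]
        )
        return super_logits, fine_logits

# Example usage: 4 superclasses, 5 fine labels each, 32-dim inputs.
model = HierarchicalClassifier(feature_dim=32, num_superclasses=4, fine_per_super=5)
super_logits, fine_logits = model(torch.randn(8, 32))
```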

Blog · PDF · Code
