About me
Hello!! I’m Shreya, a fifth-year PhD student at the University of Pennsylvania working with Penn NLP. I am advised by Lyle Ungar and Eric Wong, and my research is supported by the NSF GRFP. Previously, I attended USC (Fight On!!), where I received a B.S. in Computer Science & Applied Mathematics and worked with Morteza Dehghani at the Morality and Language Lab. I’ve also interned at Google DeepMind and Spotify Research.
I research sociotechnical alignment of LLMs, with a focus on evaluating and mitigating cultural bias. My work examines how LLMs handle context-dependent phenomena such as implied meaning and social norms, and I have developed frameworks to assess how models represent subjective concepts like politeness, emotion, and individualism. I have also built methods to improve model outputs, including preserving speaker intent in translation and adapting advice to multicultural users.
I’m currently on the postdoc market! I’m interested in pluralistic alignment in LLMs.
Beyond my research, I care deeply about representation and accessibility in ML. I founded the UPenn chapter of Women in ML (WiML) in 2023, and I’m currently creating and teaching a new course on designing LLM agents for Penn undergrads who are beginners in CS.
I also like to cook, run, and ask strangers if I can pet their dogs. Please reach out if you want to chat about my research, the GRFP, or the PhD application process!
Recent News!!
- [November 2025] 🌎 Attending EMNLP in Suzhou to chat about a new benchmark for culturally-aware agents in the HCI + NLP workshop
- [October 2025] 🎉 Named one of MIT’s Rising Stars in EECS
- [July 2025] 🌎 I’ll be attending ACL in Vienna to present our papers Style Alignment in Cross-Cultural Translation and Incorporating Implication into NLI
- [July 2025] 🌎 I’ll be attending IC2S2 in Norrköping to talk about Cultural Knowledge Injection and Benchmarking Conversational Ability of LLMs
- [June 2025] 📝 Our paper - The FIX Benchmark: Extracting Features Interpretable to eXperts was accepted to DMLR
- [June 2025] 📚 Excited to intern at Spotify Research this summer, working on podcast understanding and evaluation!
- [May 2025] 📝 New preprints - Probabilistic Soundness Guarantees in LLM Reasoning Chains and Adaptively Profiling Models with Task Elicitation
- [March 2024] 🎉 Our paper - Building Knowledge-Guided Lexica to Model Cultural Variation was selected for an oral at NAACL!
- [Feb 2024] 📚 Excited to intern at Google DeepMind this summer, working on social understanding in LLMs!
- [Dec 2023] 🌎 I’ll be attending EMNLP in Singapore and NeurIPS in New Orleans! Come say hi and chat about Explainable Comparison of Politeness across Languages
- [Sep 2023] 🎉 Our paper - Faithful Chain-of-Thought Reasoning - won the Area Chair Award (Interpretability and Analysis of Models for NLP) at AACL!
- [Aug 2023] 📝 New preprint - Human-Centered Metrics for Dialog System Evaluation
- [Jul 2023] 🌎 I am attending IC2S2 in Copenhagen to talk about Measuring Regional Variation in Culture Through Embedding-Based Lexica
- [May 2023] 🎉 Our paper - Multilingual Language Models are not Multicultural: A Case Study in Emotion - won best paper at ACL’s Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis!
- [Mar 2023] 🎉 I am honored to receive the NSF Graduate Research Fellowship!
