Anthony Colisimo (Summer 2024 - Present, Co-supervised with Dr. Li)
Skye Reid (Summer 2024)
QiQi Gao (Summer 2022 – Summer 2023)
Summer Researchers (Mitacs Globalink Interns)
Ghofrane Faidi (Summer 2024)
Angel Loredo (Summer 2024)
Harsh Lalai (Summer 2024)
Alumni
Abhishek Kumar, M.Sc. (Full Stack Data Systems Specialist at 360 Energy Inc., 2024 – Present)
Our Mission
The Brock NLP Lab develops fair, robust, and reliable AI systems. Our research focuses on three key areas:
Bias Detection and Mitigation in AI Models
Reasoning and Benchmarking of AI Systems
AI Interpretability and Reliability
Research Areas
1. Bias Detection and Mitigation in AI Models
We work on identifying and mitigating several forms of bias in AI models. We have found that the very recognition and classification of what is “toxic” or “biased” is tricky and culturally and temporally bound. Now more than ever, it is time to collaborate with experts beyond the field (e.g., Psychology, Anthropology, Philosophy) to tackle these problems!
Recent Publications:
Morabito, R., Madhusudan, S., McDonald, T., Emami, A. (2024). STOP! Benchmarking Large Language Models with Sensitivity Testing on Offensive Progressions. In Proceedings of EMNLP 2024.
Kumar, A., Yunusov, S., Emami, A. (2024). Subtle Biases Need Subtler Measures: Dual Metrics for Evaluating Representative and Affinity Bias in Large Language Models. In Proceedings of ACL 2024.
Morabito, R., Kabbara, J., Emami, A. (2023). Debiasing should be Good and Bad: Measuring the Consistency of Debiasing Techniques in Language Models. In Findings of ACL 2023.
2. Reasoning and Benchmarking of AI Systems
We aim to test, harness, and push the boundaries of the reasoning capabilities of AI systems. At the same time, we view this artificial “intelligence” as more of a means than an end. AI applied to diversifying storytelling education, multilingual and multicultural representation, and precise language understanding are examples of these means!
Recent Publications:
Yunusov, S., Sidat, H., Emami, A. (2024). MirrorStories: Reflecting Diversity through Personalized Narrative Generation with Large Language Models. In Proceedings of EMNLP 2024.
Sun, J.H., & Emami, A. (2024). EvoGrad: A Dynamic Take on the Winograd Schema Challenge with Human Adversaries. In Proceedings of COLING-LREC 2024.
Zahraei, P.S., & Emami, A. (2024). WSC+: Enhancing The Winograd Schema Challenge Using Tree-of-Experts. In Proceedings of EACL 2024.
3. AI Interpretability and Reliability
We probe the inner workings of AI models (frustratingly black-box as they are!), focusing on understanding their decision-making processes, biases, and limitations to enhance their reliability, interpretability, and overall performance.
Recent Publications:
Park, B., Janecek, M., Li, Y., Ezzati-Jivan, N., Emami, A. (2024). Picturing Ambiguity: A Visual Twist on the Winograd Schema Challenge. In Proceedings of ACL 2024.
Kumar, A., Morabito, R., Umbet, S., Kabbara, J., Emami, A. (2024). Confidence Under the Hood: An Investigation into the Confidence-Probability Alignment in Large Language Models. In Proceedings of ACL 2024.
Our work is regularly presented at conferences such as ACL, EMNLP, NAACL, EACL, COLING-LREC, ICML, and NeurIPS.
Research Focus Areas
A fun word cloud generated from all of our research works!
Map of Student Origins
Join Us
We are recruiting new graduate students for Fall 2024.
Undergraduates: Please don’t hesitate to email me to inquire about research projects that I (or, better yet, you) may have in mind. Please attach your transcript along with a brief description of which of my research areas (e.g., natural language processing) you would like to work on and why. I highly encourage, and prefer, students who plan to do a summer internship (under the NSERC USRA or SURA program) or an Honours thesis.
Graduates: M.Sc. (Computer Science) and PhD (Intelligent Systems and Data Science) admissions are handled centrally in our department. Please see this page for application instructions.