University of Virginia · Data Science & CS
ऐक्यम् — Oneness

Aikyam
Lab

Building the science of Trustworthy AI, from explainability and safety to alignment and generalization in frontier models, with a focus on multilingual reasoning, multimodal models, and agents. Aikyam is Sanskrit for "oneness," a nod to our belief that breakthroughs happen when diverse people, ideas, and data come together.

The Lab

Aikyam Lab is a research group at the University of Virginia working on the foundations of Trustworthy Artificial Intelligence — making AI systems that are accurate, explainable, safe, and aligned.

We are affiliated with the School of Data Science, the Department of Computer Science, and the School of Medicine. Our work spans computer vision, NLP, graphs, and multimodal AI.

We are always looking for motivated Ph.D. students and collaborators. See open positions.

Without transparency, AI cannot earn trust or achieve wide adoption
Chirag Agarwal Principal Investigator · About →
AAAI Faculty Highlight · IUI'26 Accept · AAAI'26 Oral Accept · CapitalOne Fellowship · Tinker Research Grant · OpenAI Researcher Access Program · Area Chair, ICLR'26 · NAACL'25, 3 Papers · Cohere For AI Grant · Adobe Research Award · LaCross AI Fellowship
Latest Research

Current Work

All Publications
Alignment research
01 · 2026
Alignment · Probing

Polarity-Aware Probing for Latent Alignment

Quantifying latent alignment in language models through polarity-aware probing.

AAAI'26 Oral 🏆
Medical reasoning research
01 · 2026
LLMs · Reasoning

CURE-Med: Curriculum-Informed Reinforcement Learning for Multilingual Medical Reasoning

Improving multilingual medical reasoning with curriculum-informed reinforcement learning.

Data protection research
01 · 2026
Mechanistic Interpretability · LLM

Towards Understanding Unlearning Difficulty: A Mechanistic Perspective and Circuit-Guided Difficulty Metric

Understanding memorization and unlearning in LLMs through circuit analysis.

Multilingual research
12 · 2025
Multilingual · Survey

Multilingual Trustworthiness in Language Models for Healthcare

A comprehensive evaluation of multilingual reasoning in LLMs: how models think across languages and where they fail.

Updates

What's Happening

More recent updates
12/08
2025
We won the CapitalOne Fellowship!
11/08
2025
Invited talk at the Multimodal Reasoning Workshop, ACM SIGMM
11/06
2025
We won the Tinker Research Grant!
10/31
2025
First podcast 🎉 at AI Exchange @ UVA
10/17
2025
Invited talk at Computer Science Colloquia, UMass Lowell
08/30
2025
Selected as Area Chair for AAAI'26 and ICLR'26
08/20
2025
Two papers accepted to EMNLP: one on multilingual reasoning and one on hallucinations in egocentric video understanding
06/30
2025
Awarded OpenAI Researcher Access Program
06/19
2025
Won the Environmental Institute's Spring 2025 Colab Award!
03/19
2025
Won the Cohere For AI Research Grant
02/14
2025
The Multilingual Mind, our survey on multilingual reasoning in LLMs, is out
01/23
2025
Awarded the 2025 Fellowships in AI Research from the LaCross AI Institute
12/06
2024
Invited talk at Value Chain of Ethical AI conference at LaCross AI Institute, Darden Business School
11/22
2024
Invited talk at Privacy and Interpretability in Generative AI workshop, IDEAL Chicago
10/24
2024
Lightning talk at Bay Area Alignment Workshop, FAR.AI
09/26
2024
Medical Safety Benchmark accepted to NeurIPS'24
08/10
2024
Started Assistant Professor role at UVA
07/10
2024
Certifying Robustness for LLMs accepted to COLM 2024
05/01
2024
Iterative Prompting for Truthfulness accepted at ICML 2024
11/21
2023
Selected as Top Reviewer for NeurIPS'23
11/20
2023
Won the Harvard Data Science Initiative (HDSI) Azure Credits Award
10/27
2023
Papers at NeurIPS XAIA and Ro-FoMo (Spotlight!) workshops
08/15
2023
Workshop on "Regulatable ML" at NeurIPS'23 is accepting submissions
04/05
2023
Explain Like I am BM25 accepted to SIGIR'23
03/18
2023
Evaluating Explainability for Graph Neural Networks accepted to Nature Scientific Data
02/27
2023
DeAR: Debiasing Vision-Language Models accepted to CVPR'23
09/16
2022
OpenXAI accepted at NeurIPS'22 · live on GitHub
03/03
2022
VoG accepted to CVPR 2022