Research

Main Focus

My research lies at the intersection of model-based reinforcement learning, graph representation learning, and cognitive architectures. I am particularly interested in how inductive biases—specifically from physics and graph theory—can make world models more robust and scalable for real-world, continuous control tasks.


World Models

My primary focus is developing physics-grounded world models for robotics and continuous control. Current state-of-the-art models often struggle with sample efficiency and physical consistency. My work addresses both shortcomings by embedding Newtonian principles directly into the learning process.

Key Projects:

  • Physics-grounded World Models: Investigating architectures that impose physics-informed inductive biases to improve long-horizon prediction and sample efficiency in control tasks (a minimal sketch follows this list).
  • Robustness & Scalability: Designing architectures that are robust to noise and scale to higher-dimensional environments.
  • Memory Systems: Designing and evaluating neural memory mechanisms to enhance long-horizon planning and physical consistency.
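
To make the physics-grounded idea concrete, here is a minimal sketch, not a specific published architecture: a dynamics model that predicts only accelerations and recovers the next state via semi-implicit Euler integration, so every rollout obeys a Newtonian update rule by construction. The class name, network shape, and time step dt are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NewtonianDynamics(nn.Module):
    """Dynamics with a Newtonian inductive bias: the network predicts
    accelerations only; position and velocity follow from integration."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        # The MLP outputs an acceleration, never the next state directly.
        self.accel_net = nn.Sequential(
            nn.Linear(2 * state_dim + action_dim, hidden),
            nn.SiLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, pos, vel, action, dt: float = 0.05):
        accel = self.accel_net(torch.cat([pos, vel, action], dim=-1))
        # Semi-implicit (symplectic) Euler: update velocity first, then position.
        vel_next = vel + dt * accel
        pos_next = pos + dt * vel_next
        return pos_next, vel_next
```

Because the integrator, not the network, produces the state update, the model cannot violate the kinematic relationship between position, velocity, and acceleration, which is one way physics-informed biases improve long-horizon consistency.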

Graph Neural Networks

I view graphs as a universal language for modeling complex interactions, from molecular structures to physical dynamics. My research explores how Graph Neural Networks (GNNs) can serve as the "backbone" for more structured and interpretable world models.

Key Projects:

  • Message-Passing JEPAs: Exploring Joint Embedding Predictive Architectures (JEPAs) combined with message-passing neural networks to learn predictive representations of dynamic systems.
  • Equivariant GNNs: Developing architectures that respect symmetries (e.g., rotation, translation) for applications in quantum chemistry and physical simulation (a sketch of one such layer follows this list).
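
As an illustration of the equivariant idea, below is a minimal E(n)-equivariant message-passing layer in the spirit of EGNN (Satorras et al., 2021), not a reproduction of my own architectures: messages depend only on invariant pairwise distances, and coordinates are updated along relative-position vectors, so rotating or translating the input rotates or translates the output accordingly. All names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class EquivariantLayer(nn.Module):
    """One E(n)-equivariant message-passing layer (EGNN-style sketch)."""

    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.msg_net = nn.Sequential(
            nn.Linear(2 * feat_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU())
        self.coord_net = nn.Linear(hidden, 1, bias=False)
        self.feat_net = nn.Linear(feat_dim + hidden, feat_dim)

    def forward(self, h, x, edges):
        src, dst = edges                           # edge list: src -> dst
        rel = x[src] - x[dst]                      # relative positions (equivariant)
        dist2 = (rel ** 2).sum(-1, keepdim=True)   # squared distances (invariant)
        m = self.msg_net(torch.cat([h[src], h[dst], dist2], dim=-1))
        # Coordinate update: invariant scalar weight times an equivariant direction.
        dx = torch.zeros_like(x).index_add_(0, dst, rel * self.coord_net(m))
        agg = torch.zeros(h.size(0), m.size(-1), device=h.device).index_add_(0, dst, m)
        h_new = h + self.feat_net(torch.cat([h, agg], dim=-1))
        return h_new, x + dx
```

A quick sanity check is to apply a random rotation to x before the layer and confirm the returned coordinates equal the rotation of the unrotated output, while the features h are unchanged.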

Memory Systems

For world models to perform long-horizon planning, they require memory mechanisms that keep predicted future trajectories consistent with physical laws and with past experience. I am researching how to integrate explicit memory modules into world models to support agent localization and planning.

Key Projects:

  • Memory-Augmented World Models: Comparing different memory architectures to understand their impact on a world model's ability to retain and utilize past experiences (a minimal read mechanism is sketched below).
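
One common building block in this space, shown here as a hedged sketch rather than a description of my models, is an attention-based read over a buffer of stored past latent states: the current latent queries the memory, and the retrieved summary conditions the next-state predictor.

```python
import torch
import torch.nn as nn

class MemoryReadout(nn.Module):
    """Minimal external-memory read for a world model: the current latent
    attends over stored past latents; the result conditions the predictor."""

    def __init__(self, latent_dim: int):
        super().__init__()
        self.query = nn.Linear(latent_dim, latent_dim)
        self.key = nn.Linear(latent_dim, latent_dim)
        self.value = nn.Linear(latent_dim, latent_dim)

    def forward(self, z, memory):
        # z: (batch, dim); memory: (batch, slots, dim) of past latent states.
        q = self.query(z).unsqueeze(1)                        # (B, 1, D)
        attn = torch.softmax(
            q @ self.key(memory).transpose(1, 2)
            / memory.size(-1) ** 0.5, dim=-1)                 # (B, 1, slots)
        read = (attn @ self.value(memory)).squeeze(1)         # (B, D)
        return torch.cat([z, read], dim=-1)   # input to the dynamics predictor
```

Comparisons like the project above typically vary what is stored (raw latents vs. learned summaries), how it is addressed (content-based vs. positional), and when it is written.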

Explainable AI

As models become more complex, interpretability becomes critical, especially in scientific discovery. My work in Explainable AI (XAI) focuses on "opening the black box" of graph-based models to understand which substructures drive predictions.

Key Projects:

  • XInsight: A flow-based explanation method for GNNs that identifies key subgraphs responsible for model predictions, offering granular insights into decision-making processes (a generic, much simpler baseline is sketched after this list).
  • Scientific Discovery: Applying XAI techniques to drug discovery platforms (like SmartCADD) to help chemists understand why a molecule is predicted to be effective.
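
For intuition about what subgraph-level explanation means, here is a deliberately simple perturbation baseline, explicitly not XInsight's flow-based procedure, which generates explanatory subgraphs rather than scoring edges one at a time. The model signature below (a callable returning graph-level logits) is an assumption for illustration.

```python
import torch

def edge_importance(model, x, edge_index, target):
    """Score each edge by the drop in the target logit when it is removed.
    Assumes model(x, edge_index) returns a graph-level logit vector."""
    scores = []
    with torch.no_grad():
        base = model(x, edge_index)[target]
        for e in range(edge_index.size(1)):
            keep = torch.ones(edge_index.size(1), dtype=torch.bool)
            keep[e] = False                        # mask out edge e only
            scores.append((base - model(x, edge_index[:, keep])[target]).item())
    return scores  # high score = removing the edge hurts the prediction
```

The highest-scoring edges form a candidate explanatory subgraph; in a chemistry setting these often align with the functional groups a chemist would point to.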

Biometrics & Face Recognition

My background in data science includes extensive work on fairness, bias, and evaluation in biometric systems. I focus on developing rigorous metrics and synthetic data pipelines to audit and improve face recognition algorithms.

Key Projects:

  • Synthetic Identity Generation (SIG): A pipeline for generating large-scale, controllable synthetic datasets to evaluate face recognition systems without privacy concerns.
  • Bias Mitigation (GARBE): Proposed the Gini Aggregation Rate for Biometric Equitability (GARBE), a metric now included in ISO standards (ISO/IEC 19795-10:2024) for measuring demographic differentials in biometric performance (the core computation is sketched below).
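
At its core, GARBE applies a Gini coefficient to per-demographic-group error rates and blends the result across the two error types. The sketch below is my reading of the published formulation, with the n/(n-1) small-sample correction and an equal blending weight alpha = 0.5 as assumptions; consult ISO/IEC 19795-10:2024 for the normative definition.

```python
def gini(rates):
    """Gini coefficient of per-group error rates (e.g., one FMR per group),
    with the n/(n-1) small-sample correction. 0 = perfectly equitable.
    Requires at least two groups and a nonzero mean rate."""
    n = len(rates)
    mean = sum(rates) / n
    pairwise = sum(abs(a - b) for a in rates for b in rates)
    return (n / (n - 1)) * pairwise / (2 * n * n * mean)

def garbe(fmr_by_group, fnmr_by_group, alpha=0.5):
    """Blend the Gini of false-match and false-non-match rates across groups.
    alpha = 0.5 (equal weighting) is an illustrative choice, not a mandate."""
    return alpha * gini(fmr_by_group) + (1 - alpha) * gini(fnmr_by_group)
```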