Representation Learning
At the heart of our research in Representation Learning lies a deep focus on graph-based structures and knowledge graph embedding models. Our work is dedicated to devising methods that capture and encode complex relationships and attributes within data. By exploring innovative embedding techniques, we aim to enhance the interpretability and usability of knowledge graphs, enabling more nuanced and insightful analysis across a range of domains.
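To give a concrete flavour of what a knowledge graph embedding model does, the following minimal sketch scores triples with a TransE-style function, where a triple (head, relation, tail) is judged by how closely head + relation lands near tail in embedding space. It is an illustrative toy, not one of our models: the entities, relations, and embedding dimension are placeholders, and the embeddings are random rather than learned.

```python
# Minimal sketch of a knowledge graph embedding scorer (TransE-style).
# Illustrative only: entity/relation names, dimension and embeddings are placeholders.
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Toy vocabulary with randomly initialised embeddings;
# in practice these vectors would be learned from observed triples.
entities = {name: rng.normal(size=dim) for name in ["aspirin", "headache", "ibuprofen"]}
relations = {name: rng.normal(size=dim) for name in ["treats"]}

def transe_score(head: str, relation: str, tail: str) -> float:
    """Score a triple by the negative distance ||h + r - t||:
    the smaller the distance, the more plausible the triple."""
    h, r, t = entities[head], relations[relation], entities[tail]
    return -float(np.linalg.norm(h + r - t))

# Rank candidate tails for the query (aspirin, treats, ?).
candidates = ["headache", "ibuprofen"]
ranked = sorted(candidates, key=lambda t: transe_score("aspirin", "treats", t), reverse=True)
print(ranked)
```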
Reinforcement Learning
Our Reinforcement Learning research explores new frontiers, particularly in healthcare and medicine. We develop models that learn optimal strategies through trial and error, providing insights and decision-support systems for medical scenarios. Our work harnesses reinforcement learning to improve outcomes and efficiency in medical applications. As a separate strand of our research, we also work on connecting Reinforcement Learning with graph learning approaches.
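To make "learning optimal strategies through trial and error" concrete, here is a generic tabular Q-learning sketch on a toy chain environment. It is not one of our clinical models: the environment, reward structure, and hyperparameters are hypothetical stand-ins for a real sequential decision problem.

```python
# Minimal tabular Q-learning sketch on a toy chain environment.
# Illustrative only: the environment, rewards and hyperparameters are placeholders.
import random

N_STATES, GOAL = 5, 4          # states 0..4, state 4 is the terminal goal
ACTIONS = [-1, +1]             # move left or right
alpha, gamma, epsilon = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic toy dynamics: reward 1 only when the goal is reached."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy exploration: mostly exploit, occasionally act at random
        a = random.choice(ACTIONS) if random.random() < epsilon else max(ACTIONS, key=lambda a: Q[(s, a)])
        s_next, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward the bootstrapped target
        target = r + (0.0 if done else gamma * max(Q[(s_next, b)] for b in ACTIONS))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s_next

# The learned greedy policy should move right toward the goal from every state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```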
Large Language Models
Our work with Large Language Models focuses on bridging the gap between vast linguistic capabilities and factual accuracy through integration with knowledge graphs. By enhancing language models with structured, graph-based knowledge, we aim to significantly boost their understanding and generation of factually accurate and contextually relevant content. This approach promises to unlock new possibilities in natural language processing, offering more reliable and insightful text analysis and generation.
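One common pattern for grounding a language model in a knowledge graph is to retrieve relevant triples and inject them into the prompt; the sketch below illustrates that pattern only, and is not necessarily the integration method we use. The triples, the retrieval rule, and the generate() stub are placeholders rather than a real model or API.

```python
# Sketch of grounding a language model with knowledge-graph facts via the prompt.
# Illustrative only: the triples, retrieval rule and generate() stub are placeholders.

# A tiny knowledge graph as (head, relation, tail) triples.
KG = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "field", "physics"),
    ("Warsaw", "capital_of", "Poland"),
]

def retrieve_facts(question: str, kg=KG):
    """Naive retrieval: keep triples whose head entity is mentioned in the question."""
    return [triple for triple in kg if triple[0].lower() in question.lower()]

def build_prompt(question: str) -> str:
    """Serialise the retrieved triples and prepend them to the question."""
    facts = "\n".join(f"- {h} {r.replace('_', ' ')} {t}" for h, r, t in retrieve_facts(question))
    return f"Use only the facts below to answer.\nFacts:\n{facts}\n\nQuestion: {question}\nAnswer:"

def generate(prompt: str) -> str:
    """Stand-in for a call to an actual language model."""
    return "<model output would appear here>"

prompt = build_prompt("Where was Marie Curie born?")
print(prompt)
print(generate(prompt))
```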
Application and Theory
On one hand, we are deeply engaged in the application of both existing and novel methodologies, particularly in the domains of Representation Learning, Reinforcement Learning, and Large Language Models. Our applied research focuses on practical implementations, ranging from enhancing knowledge graph embeddings for more insightful data representation, to developing medically oriented reinforcement learning models, to refining language models for greater factual accuracy through knowledge graph integration. On the other hand, we are equally committed to contributing to the theoretical and mathematical foundations of these models, with a special emphasis on Representation Learning. Our theoretical work focuses on the underlying mathematical structures and principles that govern these complex systems, with the goal of advancing our fundamental understanding and innovating at the core level of algorithmic design.