About
Hello! I’m Shreyan Mitra, a student in the Paul G. Allen School of Computer Science at the University of Washington, Seattle.
In recent years, machine learning (ML) has become a hot topic because of its tremendous capabilities and numerous applications. However, with great power comes great responsibility. Because of this, I have focused my college research career on the following three areas:
- Machine Learning Robustness - Making ML models less susceptible to error and more applicable to the tasks at hand.
- Machine Learning Accessibility - Ensuring people of different backgrounds, ages, and coding experiences can use ML tools to make their lives easier.
- Social and Environmental Impact - Ensuring that ML, and artificial intelligence more broadly, is used in ethical and sustainable ways to further social and conservation goals.
I like to be hands-on in my research, and most of my research work is accompanied by libraries or software tools that apply my findings to the real world.
Specific lines of inquiry I have pursued / am currently pursuing include:
Explainability
In short, explainability refers to understanding why ML models behave the way they do. Several explainability tools ("explanatory systems") have been developed in the last few years. My work on this topic includes devising a metric for comparing different explanations of a given model and deciding which of them better approximates the ground truth. This research also led me to create a library and framework, in Python and MATLAB, that brings the different explanatory systems together under a unified interface. The library further democratizes AI: users need no coding experience and can seamlessly deploy the models they need for their use case.
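To give a flavor of what a unified interface over multiple explanatory systems can look like, here is a minimal Python sketch. The class and method names, and the toy occlusion explainer, are hypothetical illustrations for this page, not the actual API of my library:

```python
from abc import ABC, abstractmethod
from typing import Callable, Dict, Sequence

class ExplanatorySystem(ABC):
    """Common interface that every wrapped explainer implements."""

    @abstractmethod
    def explain(self, model: Callable[[Sequence[float]], float],
                instance: Sequence[float]) -> Dict[str, float]:
        """Return a feature-name -> attribution-score mapping for one prediction."""

class OcclusionExplainer(ExplanatorySystem):
    """Toy explainer: attribution = change in output when a feature is zeroed out."""

    def explain(self, model, instance):
        baseline = model(instance)
        scores = {}
        for i, value in enumerate(instance):
            perturbed = list(instance)
            perturbed[i] = 0.0  # occlude one feature at a time
            scores[f"feature_{i}"] = baseline - model(perturbed)
        return scores

if __name__ == "__main__":
    # A trivial "model": a weighted sum of its inputs.
    model = lambda x: 2.0 * x[0] + 0.5 * x[1] - 1.0 * x[2]
    explainer: ExplanatorySystem = OcclusionExplainer()
    print(explainer.explain(model, [1.0, 4.0, 2.0]))
```

Because every explainer exposes the same `explain` call, downstream code (including the comparison metric) does not need to know which underlying explanatory system produced a given attribution.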
Hallucinations and Reasoning in Large Language Models
Large language models (LLMs) like ChatGPT or Llama have gained widespread popularity due to their ability to answer questions and converse like humans. However, unlike humans, LLMs often fail to reason about scenarios in a logical and accurate manner. When an LLM generates incorrect, irrelevant, or nonsensical output, we say that it "hallucinates". I study the factors that lead to LLM hallucinations and work on faster, more memory-efficient ways to detect hallucinations when they occur. I also look at ways to reduce the risk of such erroneous output, such as incorporating symbolic reasoning, integrating a common-sense database, or adding self-evaluation stages, as sketched below.
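As a simplified illustration of the self-evaluation idea (a sketch for this page, not my actual pipeline; the `generate` parameter stands in for whatever LLM call you have available):

```python
from typing import Callable

def answer_with_self_check(
    question: str,
    generate: Callable[[str], str],  # any text-generation function, e.g. a local LLM wrapper
    max_attempts: int = 2,
) -> str:
    """Ask the model, then ask it to grade its own answer before returning it."""
    for _ in range(max_attempts):
        answer = generate(question)
        verdict = generate(
            "Question: " + question + "\n"
            "Proposed answer: " + answer + "\n"
            "Is the proposed answer factually correct and relevant? Reply YES or NO."
        )
        if verdict.strip().upper().startswith("YES"):
            return answer
    # If the model never approves its own answer, abstain rather than risk a hallucination.
    return "I am not confident enough to answer this question."

if __name__ == "__main__":
    # Stub generator that always approves itself, just to show the call pattern.
    stub = lambda prompt: "YES" if "Reply YES or NO" in prompt else "Paris"
    print(answer_with_self_check("What is the capital of France?", stub))
```

The extra verification pass trades some latency for a chance to catch obviously wrong or irrelevant answers before they reach the user.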
Of course, I am open to any new research area other than the ones above if it allows me to grow and aligns with my interest in responsible computing.
In addition to my research, I am also the President and Co-founder of Computing for Environmental and Social Advocacy (CESA), a 30+ member team affiliated with the University of Washington Allen School. We work on applied projects such as AI-driven responses to the climate crisis and a software tool that checks code for potential ethical violations and accessibility concerns. Currently, we’re working on modeling pollution in disadvantaged communities of the Puget Sound area.
In my free time, I enjoy playing cricket. I am the President of the Husky Cricket Club at the University of Washington and am always up for a late-afternoon game.
About this site
This is my professional portfolio. Here, you’ll find examples of my research, publications, and teaching experience. Wherever possible, I have also included links to external sites in case you want to learn more about me as a person.