Charith Mendis
Office: 4118, Siebel Center for Computer Science
201 N. Goodwin Ave., Urbana IL 61801
Email: charithm (at) illinois.edu
New! Check out our new course, CS521: ML and Compilers
Bio
Charith Mendis is an Assistant Professor in the Siebel School of Computing and Data Science at the University of Illinois at Urbana-Champaign. His broad research interests lie at the intersection of compilers, program optimization, and machine learning. He received his Ph.D. and Master's degrees from the Massachusetts Institute of Technology and his B.Sc. from the University of Moratuwa. He is the recipient of a DARPA Young Faculty Award, an NSF CAREER Award, a Google ML and Systems Junior Faculty Award, an Outstanding Advisor Award at UIUC, the William A. Martin Outstanding Master's Thesis Award at MIT, and the University Gold Medal for his B.Sc. He has won numerous paper awards, including a Distinguished Paper Award at POPL, a Best Student Paper Award at the IEEE BigData conference, an honorable mention for the Best Artifact Award at SIGMOD, a Best Paper Award at the ML for Systems workshop at ISCA, and an IEEE Top Picks Honorable Mention.
Research
It has been challenging to keep compilers up to date with workload and hardware changes. This problem is exacerbated in fast-evolving fields such as machine learning (ML), where traditional means of constructing compilers are too slow to adapt. My research agenda directly addresses these problems and brings agility and evolvability to compiler construction. Specifically, my group (the ADAPT lab) builds next-generation programming abstractions and compiler construction methodologies that can handle the diversity, complexity, and rapid evolution of both workloads and hardware, while preserving strict guarantees. Our research is centered on three main thrusts:
- Automated and Parameterized Compiler Construction Methodologies
- Formalism-driven transformation space construction: Generating compiler components for accelerators (Accelerator Compiler Toolkit) [MICRO’25, POPL’25] and for commodity hardware [PLDI’25, ASPLOS’24, ASPLOS’21]
- Data-driven cost models and optimization strategies: ML-based cost models for commodity hardware [ICML’19, IISWC’22, MICRO’20], accelerators [MLSys’21, ICML’25], and sparse workloads [ASPLOS’23, CGO’26], and learned optimizations [NeurIPS’19]
- Model-agnostic Compiler Verification and Optimizations for ML
- Model-aware Abstractions and Compiler Optimizations for ML and data science
Research Opportunities
The ADAPT lab has opportunities for undergraduate and graduate students. If you are a student passionate about building high-performance ML optimization or compilation techniques, or about automated compiler construction using machine learning or formal methods, please get in touch with me.
Please read the following instructions before contacting me.
Undergraduates or Master's students at UIUC:
Please include a CV, an up-to-date transcript, and a description of your programming experience. The subject line should contain the phrase "prospective undergraduate/master's researcher" to indicate that these instructions have been read. Please note that, since we do systems work, at least two semesters' worth of commitment is required to get fruitful results (e.g., a publication) out of your experience. Also, I will not agree to supervise final-year Master's students.
Prospective PhD students:
First, you must apply to and gain admission to the graduate program in Computer Science at UIUC. You should mention me as a potential advisor in your application as well as in your personal statement. Once you apply, you can optionally send me an email with the subject "prospective PhD student". If you are admitted, I'm happy to discuss supervision.
PhD Students
- Damitha Lenadora
- Stefanos Baziotis
- Avaljot Singh (co-advised with Gagandeep Singh)
- Devansh Jain
- Jai Arora
- Chamika Sudusinghe
- Muyan Hu (co-advised with Vikram Adve)
- Heng Zhong
- Zirui Zhou
Selected Publications (all)
- TAIDL: Tensor Accelerator ISA Definition Language with Auto-generation of Scalable Test Oracles. In Proceedings of the 58th IEEE/ACM International Symposium on Microarchitecture (MICRO), 2025. First ISA definition language for accelerators; used by Amazon for their accelerator offerings.
- VTC: DNN Compilation with Virtual Tensors for Data Movement Elimination. In 20th USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2026 (to appear). Novel optimization that goes beyond operator fusion.
Funding
I am truly thankful to the sponsors of our research at the ADAPT lab. In particular, we are partially funded by the ACE Center, one of the seven centers in JUMP 2.0, a Semiconductor Research Corporation (SRC) program sponsored by DARPA; by NSF (including a CAREER Award); by DARPA (including a Young Faculty Award); by IIDAI; and through generous gifts and cloud computing resources from Google, Intel, Amazon, and Qualcomm.