Ollie Liu


I’m a second-year Ph.D. student in Computer Science at the University of Southern California, fortunate to be co-advised by Prof. Dani Yogatama and Prof. Willie Neiswanger. In life, my friends call me Oliver 🫒

I’m passionate about designing multimodal foundation models that accelerate scientific discovery. Toward this goal, my research focuses on foundation model pre-training, understanding and exploring model capabilities, and continual training to align models with human desiderata. Through this work, I aim to develop trustworthy models that not only advance artificial intelligence but also serve practical purposes across scientific domains.

Before USC, I was a researcher in continuous optimization with Prof. Jorge Nocedal at Northwestern University. Before that, I earned my B.S. and M.S. at Carnegie Mellon University, majoring in machine learning.

At USC, I co-lead the AI Safety Club, a student-run organization that advocates for safety in advanced AI systems. We run semester-long curricula covering introductory topics and technical areas.

news

Apr 18, 2024 I gave a talk on DeLLMa at the Information Sciences Institute NLG Seminar. Check out the video here ✌️
Apr 1, 2024 We introduce IsoBench🔥, an evaluation suite that benchmarks multimodal foundation models on isomorphic representations!
Mar 13, 2024 Our work, On Retrieval Augmentation and the Limitations of Language Model Training, has been accepted to NAACL 2024 🇲🇽
Feb 6, 2024 New preprint available! We introduce DeLLMa🤔, a large language model-based framework for making rational decisions under uncertainty.
Jan 16, 2024 Our paper Interpretable Diffusion via Information Decomposition has been accepted for poster presentation at ICLR 2024! First time traveling to Vienna ✈️🇦🇹

selected publications

  1. IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations
    Deqing Fu*, Ghazal Khalighinejad*, Ollie Liu*, and 4 more authors
arXiv preprint arXiv:2404.01266, 2024
  2. DeLLMa: A Framework for Decision Making Under Uncertainty with Large Language Models
    Ollie Liu*, Deqing Fu*, Dani Yogatama, and 1 more author
    arXiv preprint arXiv:2402.02392, 2024
  3. Interpretable Diffusion via Information Decomposition
    Xianghao Kong*, Ollie Liu*, Han Li, and 2 more authors
In The Twelfth International Conference on Learning Representations, 2024
  4. How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model
    Michael Hanna, Ollie Liu, and Alexandre Variengien
    In Thirty-seventh Conference on Neural Information Processing Systems, 2023