Ollie Liu

Oliver IRL • 刘卓然 (Liú Zhuó-Rán) • he/him • me [at] [this-domain-url]

I’m a final-year Ph.D. candidate in Computer Science at the University of Southern California, fortunate to be co-advised by Prof. Dani Yogatama and Prof. Willie Neiswanger.

I work at the intersection of decision-making, generative models, and AI for science, with the goal of building modular, auditable foundation models that support reliable reasoning and discovery. I collaborate extensively with the Polymathic AI Collaboration on scaling scientific foundation models, and I have interned at Meta MSL FAIR and Microsoft Research.

Before USC, I was a researcher in continuous optimization with Prof. Jorge Nocedal at Northwestern University. Before that, I completed a B.S. and M.S. at Carnegie Mellon University, majoring in machine learning.

news

Jan 26, 2026 Two papers, Tina and Zebra-CoT, have been accepted to ICLR 2026.
Sep 23, 2025 AION-1, our omnimodal foundation model for the astronomical sciences, has been accepted to NeurIPS 2025, with an oral presentation at the AI for Science Workshop!
Jul 7, 2025 Our work on LLM unlearning has been accepted to COLM 2025!
May 26, 2025 Starting as a summer intern on the Generative Model Foundations team at Meta FAIR, working with Brandon Amos on reasoning and optimization.
Jan 22, 2025 Excited to share that DeLLMa has been accepted to ICLR 2025 as a Spotlight presentation. See you in Singapore 🇸🇬

selected publications

  1. AION-1: Omnimodal Foundation Model for Astronomical Sciences
    Liam Holden Parker*, Francois Lanusse*, Jeff Shen*, Ollie Liu, and 21 more authors
    In The Thirty-ninth Annual Conference on Neural Information Processing Systems, 2025
  2. METAGENE-1: Metagenomic Foundation Model for Pandemic Monitoring
    Ollie Liu, Sami Jaghouar, Johannes Hagemann, Shangshang Wang, and 3 more authors
    arXiv preprint arXiv:2501.02045, 2025
  3. IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations
    Deqing Fu*, Ghazal Khalighinejad*, Ruohao Guo*, Ollie Liu*, and 4 more authors
    In The First Conference on Language Modeling, 2024
  4. DeLLMa: Decision Making Under Uncertainty with Large Language Models
    Ollie Liu*, Deqing Fu*, Dani Yogatama, and Willie Neiswanger
In The Thirteenth International Conference on Learning Representations, 2025 (Spotlight)
  5. Interpretable Diffusion via Information Decomposition
    Xianghao Kong*, Ollie Liu*, Han Li, Dani Yogatama, and 1 more author
In The Twelfth International Conference on Learning Representations, 2024
  6. How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model
    Michael Hanna, Ollie Liu, and Alexandre Variengien
In The Thirty-seventh Annual Conference on Neural Information Processing Systems, 2023