Ollie Liu


Oliver IRL • 刘卓然 (Liú Zhuó-Rán) • he/him • me [at] [this-domain-url]

I’m a second-year Ph.D. student in Computer Science at the University of Southern California, fortunate to be co-advised by Prof. Dani Yogatama and Prof. Willie Neiswanger.

I’m interested in multimodal foundation models. These days, I’m exploring their potential as agents for complex reasoning and scientific discovery. I’m particularly excited about:

  • Designing and understanding architectures and algorithms applicable to scientific modalities, such as (meta)genomics, proteins, physics, chemistry, and materials science.
  • Developing post-training and inference-time methods that enable FMs to solve complex reasoning and decision-making problems.

Before USC, I was a researcher in continuous optimization with Prof. Jorge Nocedal at Northwestern University. Even before that, I completed my B.S. and M.S. at Carnegie Mellon University, majoring in machine learning.

At USC, I co-lead the AI Safety Club, a student-run organization that advocates for safety in advanced AI systems. We run semester-long curricula covering introductory topics and technical areas.

news

Jan 6, 2025 Introducing METAGENE-1🧬, a 7B parameter metagenomic foundation model capable of pandemic monitoring, pathogen detection, and multi-species genomics.
Oct 7, 2024 Visiting Philadelphia 🥪 to attend the Conference on Language Modeling and present IsoBench!
Sep 3, 2024 Visiting Polymathic AI at the Flatiron Institute to work on foundation models for multidisciplinary sciences.
Jul 10, 2024 IsoBench has been accepted to the inaugural Conference on Language Modeling. A dataset preview is now available on Hugging Face 🤗
May 20, 2024 Started an internship at Microsoft Research with the AI Frontiers team.

selected publications

  1. METAGENE-1: Metagenomic Foundation Model for Pandemic Monitoring
    Ollie Liu, Sami Jaghouar, Johannes Hagemann, Shangshang Wang, and 3 more authors
    arXiv preprint arXiv:2501.02045, 2025
  2. IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations
    Deqing Fu*, Ghazal Khalighinejad*, Ruohao Guo*, Ollie Liu*, and 4 more authors
    In The First Conference on Language Modeling, 2024
  3. DeLLMa: Decision Making Under Uncertainty with Large Language Models
    Ollie Liu*, Deqing Fu*, Dani Yogatama, and Willie Neiswanger
    arXiv preprint arXiv:2402.02392, 2024
  4. Interpretable Diffusion via Information Decomposition
    Xianghao Kong*, Ollie Liu*, Han Li, Dani Yogatama, and 1 more author
    In The Twelfth International Conference on Learning Representations, 2024
  5. How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model
    Michael Hanna, Ollie Liu, and Alexandre Variengien
    In Thirty-seventh Conference on Neural Information Processing Systems, 2023