Ollie Liu


刘卓然 (Liú Zhuó-Rán) • he/him • me [at] [this-domain-url]

I’m a second-year Ph.D. student in Computer Science at the University of Southern California, fortunate to be co-advised by Prof. Dani Yogatama and Prof. Willie Neiswanger. In life, my friends call me Oliver 🫒

I’m broadly interested in multimodal foundation models. These days, I’m exploring their potential as agents for complex reasoning and scientific discovery. I’m particularly excited about:

  • Designing and understanding architectures and algorithms applicable to scientific modalities, such as (meta)genomics, proteins, physics, chemistry, and materials science.
  • Developing post-training and inference-time methods that enable foundation models to solve complex reasoning and decision-making problems.

Before USC, I was a researcher in continuous optimization working with Prof. Jorge Nocedal at Northwestern University. Even before that, I completed my B.S. and M.S. at Carnegie Mellon University, majoring in machine learning.

At USC, I co-lead the AI Safety Club, a student-run organization that advocates for safety in advanced AI systems. We run semester-long curricula covering introductory topics and technical areas.

news

Oct 7, 2024 Visiting Philadelphia 🥪 to attend the Conference on Language Modeling and present IsoBench!
Sep 3, 2024 I’m visiting the Polymathic Team at the Flatiron Institute to work on foundation models for multi-disciplinary sciences.
Jul 10, 2024 IsoBench has been accepted to the inaugural Conference on Language Modeling. Dataset preview now available on Hugging Face 🤗
May 20, 2024 Started an internship at Microsoft Research with the AI Frontiers Team.
Apr 18, 2024 I gave a talk on DeLLMa at the Information Sciences Institute NLG Seminar. Check out the video here ✌️

selected publications

  1. A Foundation Model for Metagenomic Sequences
    Ollie Liu, Sami Jaghouar, Johannes Hagemann, and 2 more authors
    In Foundation Models for Science Workshop at NeurIPS, 2024
  2. IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations
    Deqing Fu*, Ghazal Khalighinejad*, Ollie Liu*, and 4 more authors
    In The First Conference on Language Modeling, 2024
  3. DeLLMa: Decision Making Under Uncertainty with Large Language Models
    Ollie Liu*, Deqing Fu*, Dani Yogatama, and 1 more author
    arXiv preprint arXiv:2402.02392, 2024
  4. Interpretable Diffusion via Information Decomposition
    Xianghao Kong*, Ollie Liu*, Han Li, and 2 more authors
    In The Twelfth International Conference on Learning Representations, 2024
  5. How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model
    Michael Hanna, Ollie Liu, and Alexandre Variengien
    In Thirty-seventh Conference on Neural Information Processing Systems, 2023