Researcher at Google Brain



About me

Hi, I am a researcher at Google Brain working on deep learning, AutoML, and NLP. Before joining Google, I obtained my PhD in AI and Machine Learning at Northwestern University and my Bachelor of Science in Physics at Peking University. Here are my Google Scholar page and LinkedIn profile.


News

Released our Symbolic Discovery of Optimization Algorithms paper (accepted at NeurIPS 2023) and the Lion optimizer.

Released our AutoML-Zero paper, which automatically discovers machine learning algorithms from basic math operations.

Timeline



Industry Experience

Researcher, Google Brain, Mountain View, November 2018 - Present

Research Intern, Google Brain, Mountain View, November 2017 - February 2018

Research Intern, DeepMind, London, June 2017 - October 2017

Research Intern, Google Search, Mountain View, June 2016 - September 2016

Research Intern, Google Research, Mountain View, June 2015 - September 2015


Professional Services

Program Committee Member (Reviewer): NeurIPS, ICLR, ICML, ACL, EMNLP, IJCAI, AAAI, ECCV, UAI.


Education

PhD in Computer Science and Cognitive Science, Northwestern University, Evanston, US, Sept 2013 - Sept 2018

BSc in Physics, Peking University, Beijing, China

Selected Publications


Symbolic Discovery of Optimization Algorithms
Chen, X.*, Liang, C.*, Huang, D., Real, E., Wang, K., Liu, Y., Pham, H., Dong, X., Luong, T., Hsieh, C., Lu, Y., and Le, Q. (*equal contribution)
NeurIPS 2023


AutoML-Zero: Evolving Machine Learning Algorithms From Scratch
Real, E.*, Liang, C.*, So, D., and Le, Q. (*equal contribution)
ICML 2020


Neural Symbolic Reader: Scalable Integration of Distributed and Symbolic Representations for Reading Comprehension
Chen, X., Liang, C., Yu, A., Zhou, D., Song, D. and Le, Q.
Spotlight, ICLR 2020


The Evolved Transformer
So, D., Liang, C., and Le, Q.
ICML 2019


Learning to Generalize from Sparse and Underspecified Rewards
Agarwal, R., Liang, C., Schuurmans, D. and Norouzi, M.
ICML 2019


Memory Augmented Policy Optimization for Program Synthesis and Semantic Parsing
Liang, C., Norouzi, M., Berant, J., Le, Q., and Ni, L.
Spotlight (3.5% acceptance rate), NeurIPS 2018


Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision
Liang, C., Berant, J., Le, Q., Forbus, K., and Ni, L.
Oral Presentation, ACL 2017


Definition Modeling: Learning to define word embeddings in natural language
Noraset, T., Liang, C., Birnbaum, L., and Downey, D.
Poster, AAAI 2017


Representation and Computation in Cognitive Models
Forbus, K., Liang, C., and Rabkina, I.
Topics in Cognitive Science (journal), 2017


Learning Paraphrase Identification with Structural Alignment
Liang, C., Paritosh, P., Rajendran, V., and Forbus, K.
Oral Presentation, IJCAI 2016


Learning Plausible Inferences from Semantic Web Knowledge by Combining Analogical Generalization with Structured Logistic Regression
Liang, C. and Forbus, K.
Oral Presentation, AAAI 2015


Constructing Hierarchical Concepts via Analogical Generalization
Liang, C. and Forbus, K.
Poster, CogSci 2014

Selected Projects


AutoML-Zero

Starting from basic math operations, AutoML-Zero automatically discovers machine learning algorithms such as neural networks, linear regression, and bilinear models, along with techniques like weight averaging, noisy ReLU, and learning rate decay.

Paper | Code and Demo | A Short Intro Video by Henry AI Labs
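To give a flavor of how such a search works, here is a minimal hypothetical sketch (the op set, register layout, and toy task below are made up for illustration; the actual system is far richer): candidate programs are short sequences of basic math instructions, evaluated on a toy regression task, and improved with regularized evolution.

# Toy sketch of an AutoML-Zero-style search (illustrative, not the real system):
# evolve small "programs" of basic math ops and keep those that best fit
# a toy regression task.
import random
import numpy as np

OPS = ["add", "sub", "mul"]   # tiny op vocabulary

def random_instruction(n_vars=4):
    return (random.choice(OPS), random.randrange(n_vars),
            random.randrange(n_vars), random.randrange(n_vars))

def execute(program, x, n_vars=4):
    v = np.zeros(n_vars)
    v[0] = x                  # v[0] holds the input; v[1] holds the prediction
    for op, out, a, b in program:
        if op == "add": v[out] = v[a] + v[b]
        elif op == "sub": v[out] = v[a] - v[b]
        elif op == "mul": v[out] = v[a] * v[b]
    return v[1]

def fitness(program, xs, ys):
    preds = np.array([execute(program, x) for x in xs])
    mse = np.mean((preds - ys) ** 2)
    return -mse if np.isfinite(mse) else -np.inf   # higher is better

def mutate(program):
    child = list(program)
    child[random.randrange(len(child))] = random_instruction()
    return child

# Regularized evolution: sample a tournament, copy and mutate the winner,
# and remove the oldest individual from the population.
xs = np.random.randn(32); ys = 2.0 * xs            # toy task: learn y = 2x
population = [[random_instruction() for _ in range(5)] for _ in range(20)]
for step in range(2000):
    tournament = random.sample(population, 5)
    parent = max(tournament, key=lambda p: fitness(p, xs, ys))
    population.append(mutate(parent))
    population.pop(0)

best = max(population, key=lambda p: fitness(p, xs, ys))
print("best fitness:", fitness(best, xs, ys))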


Neural Symbolic Reader

Designing a neural symbolic layer that enables pretrained language models (e.g., BERT) to perform multi-step compositional reasoning.

Paper | Code coming soon
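As a toy illustration of the symbolic side (hypothetical and much simplified; the passage, ops, and program below are made up): the idea is that a pretrained encoder emits a compositional program over numbers and spans in the passage, which is then executed symbolically.

# Toy sketch of the symbolic layer idea (illustrative): a compositional
# program over numbers extracted from a passage is executed step by step.
passage = "The Bears scored 21 points and the Packers scored 14 points."
numbers = [float(tok) for tok in passage.replace(".", "").split() if tok.isdigit()]

def execute(program):
    op, args = program[0], program[1:]
    vals = [execute(a) if isinstance(a, tuple) else a for a in args]
    if op == "MAX": return max(vals)
    if op == "MIN": return min(vals)
    if op == "DIFF": return vals[0] - vals[1]
    raise ValueError(op)

# "How many more points did the Bears score than the Packers?"
program = ("DIFF", ("MAX", *numbers), ("MIN", *numbers))
print(execute(program))   # 7.0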


The Evolved Transformer

Applies evolutionary neural architecture search to discover new feedforward sequence models. The discovered architecture, the Evolved Transformer, significantly improves upon the Transformer at different model sizes on several translation datasets and on language modeling.

Paper | Code for the Evolved Transformer | Code for the search space


MeRL: Learning to Generalize from Sparse and Underspecified Rewards

Applies meta-learning to augment a sparse and underspecified reward signal so that the RL agent generalizes better.

Paper | Blog
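Here is a deliberately tiny sketch of the underlying idea (hypothetical, far simpler than the paper): the training reward is underspecified, so two behaviors both "succeed", and an auxiliary reward weight is meta-optimized against a small validation signal so the agent settles on the behavior that actually generalizes.

# Toy sketch of the MeRL idea (illustrative): meta-learn an auxiliary reward
# weight on a validation signal to resolve an underspecified training reward.
import numpy as np

rng = np.random.default_rng(0)

def train_policy(aux_w, steps=200, lr=0.5):
    """Inner loop: REINFORCE on (underspecified reward + auxiliary reward)."""
    theta = 0.0
    for _ in range(steps):
        p_right = 1.0 / (1.0 + np.exp(-theta))
        go_right = rng.random() < p_right
        r_train = 1.0                                # both actions "succeed"
        r_aux = aux_w * (1.0 if go_right else -1.0)
        grad_logp = (1.0 - p_right) if go_right else -p_right
        theta += lr * (r_train + r_aux) * grad_logp
    return theta

def validation_return(theta, episodes=200):
    """Held-out signal: only going right actually generalizes."""
    p_right = 1.0 / (1.0 + np.exp(-theta))
    return np.mean(rng.random(episodes) < p_right)

# Outer loop: simple search over the auxiliary reward weight.
best_w, best_score = 0.0, -1.0
for w in np.linspace(-1.0, 1.0, 9):
    score = validation_return(train_policy(w))
    if score > best_score:
        best_w, best_score = w, score
print("meta-learned weight:", best_w, "validation return:", best_score)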

Memory Augmented Policy Optimization (MAPO)

A new policy optimization formulation that incorporates a memory buffer of promising trajectories to accelerate and stabilize policy gradient training, especially under sparse rewards. It is the first RL approach to achieve new state-of-the-art results on weakly supervised program synthesis and semantic parsing over database tables.

Paper | GitHub Repository | Video
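A minimal sketch of the estimator (illustrative, not the paper's implementation; the enumerable trajectory space and softmax-over-trajectories policy are simplifications): the policy-gradient expectation splits into an exact sum over trajectories in the memory buffer plus a sampled estimate over everything outside it.

# Toy sketch of the MAPO gradient estimator (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n_traj = 16                                   # tiny enumerable trajectory space
rewards = np.zeros(n_traj)
rewards[3] = rewards[7] = 1.0                 # sparse rewards
logits = np.zeros(n_traj)                     # policy: softmax over trajectories
buffer = [3]                                  # promising trajectories found so far

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_log_prob(probs, t):
    g = -probs.copy()
    g[t] += 1.0                               # d log pi(t) / d logits
    return g

for step in range(500):
    probs = softmax(logits)
    grad = np.zeros(n_traj)
    # Exact contribution from inside the memory, weighted by trajectory probability.
    for t in buffer:
        grad += probs[t] * rewards[t] * grad_log_prob(probs, t)
    # Sampled contribution from outside the memory; keeping only non-buffer
    # samples estimates the remaining share of the expectation without bias.
    t = rng.choice(n_traj, p=probs)
    if t not in buffer:
        grad += rewards[t] * grad_log_prob(probs, t)
        if rewards[t] > 0:
            buffer.append(t)                  # grow the memory with new successes
    logits += 0.1 * grad

print("prob. mass on rewarded trajectories:", softmax(logits)[[3, 7]].sum())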


Neural Symbolic Machines (NSM)

An end-to-end neural network that learns to write Lisp programs to answer questions over a large open-domain knowledge base. It was the first end-to-end neural model to achieve a new state-of-the-art result on weakly supervised semantic parsing over Freebase.

Paper | GitHub Repository | Slides | Talk
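To make the setup concrete, here is a toy sketch of the symbolic half (the knowledge base contents and the hop helper below are made up for illustration): a tiny interpreter executes Lisp-like programs against a knowledge base; in NSM, a seq2seq "programmer" learns to emit such programs token by token.

# Toy sketch of executing Lisp-like programs over a knowledge base (illustrative).
KB = {
    ("USA", "capital"): {"Washington_DC"},
    ("Washington_DC", "mayor"): {"Muriel_Bowser"},
}

def hop(entities, relation):
    """Follow a relation from a set of entities."""
    result = set()
    for e in entities:
        result |= KB.get((e, relation), set())
    return result

def execute(expr):
    """Evaluate a nested (hop ...) expression given as Python tuples."""
    if isinstance(expr, str):
        return {expr}                     # an entity literal
    op, arg, relation = expr
    assert op == "hop"
    return hop(execute(arg), relation)

# "Who is the mayor of the capital of the USA?"
program = ("hop", ("hop", "USA", "capital"), "mayor")
print(execute(program))                   # {'Muriel_Bowser'}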


Definition Modeling

Distributed representations of words (embeddings) have been shown to capture lexical semantics, based on their effectiveness in word-similarity tasks. In this project, we study whether these embeddings can be used to generate dictionary definitions of words, as a more direct and transparent representation of their semantics.

Paper | GitHub Repository | Demo


Knowledge Base Completion

Learning plausible inferences from Semantic Web knowledge by combining analogical generalization with structured logistic regression, in order to predict missing facts in a knowledge base.

Paper


Learning Concept Hierarchy

Constructing hierarchical concepts from examples via analogical generalization.

Paper | GitHub Repository

TensorFlow Char-RNN

A TensorFlow implementation of Andrej Karpathy's Char-RNN, a character-level language model using a multilayer recurrent neural network (RNN, LSTM, or GRU). See his blog post The Unreasonable Effectiveness of Recurrent Neural Networks to learn more about this model.

GitHub Repository | Blog article
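For readers new to character-level language modeling, a minimal sketch of the data side (the repo builds the actual multilayer RNN in TensorFlow): characters are mapped to integer ids, and each training target is the input sequence shifted by one character.

# Toy sketch of character-level LM data preparation (illustrative).
text = "the unreasonable effectiveness of recurrent neural networks"
chars = sorted(set(text))
char_to_id = {c: i for i, c in enumerate(chars)}
id_to_char = {i: c for c, i in char_to_id.items()}

encoded = [char_to_id[c] for c in text]
seq_len = 16
inputs  = [encoded[i:i + seq_len] for i in range(len(encoded) - seq_len)]
targets = [encoded[i + 1:i + seq_len + 1] for i in range(len(encoded) - seq_len)]

print("vocab size:", len(chars))
print("first input :", "".join(id_to_char[i] for i in inputs[0]))
print("first target:", "".join(id_to_char[i] for i in targets[0]))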

TensorFlow Policy Gradient

A simple TensorFlow implementation of policy gradient, tested on CartPole in OpenAI Gym.

GitHub Repository | Blog article
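In the same spirit, here is a minimal numpy sketch of REINFORCE on CartPole (illustrative, not the repo's TensorFlow code; it assumes the classic gym API in which reset() returns an observation and step() returns a 4-tuple):

# Toy REINFORCE on CartPole with a linear-softmax policy (illustrative).
import gym
import numpy as np

rng = np.random.default_rng(0)
env = gym.make("CartPole-v1")
W = np.zeros((4, 2))                        # linear policy: 4 obs dims -> 2 action logits

def policy_probs(obs):
    z = obs @ W
    e = np.exp(z - z.max())
    return e / e.sum()

for episode in range(300):
    obs, done = env.reset(), False          # classic gym API assumed
    grads, ep_rewards = [], []
    while not done:
        probs = policy_probs(obs)
        action = rng.choice(2, p=probs)
        # grad of log pi(action|obs) wrt W for a linear-softmax policy
        dlog = -np.outer(obs, probs)
        dlog[:, action] += obs
        grads.append(dlog)
        obs, reward, done, _ = env.step(action)
        ep_rewards.append(reward)
    # REINFORCE update: weight each step's grad by the return that follows it
    returns = np.cumsum(ep_rewards[::-1])[::-1]
    for g, G in zip(grads, returns):
        W += 0.001 * G * g
    if episode % 50 == 0:
        print("episode", episode, "return", sum(ep_rewards))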