Hi, I am a researcher at Google Brain working on deep learning, AutoML, and NLP. Before joining Google, I obtained my PhD in AI and Machine Learning at Northwestern University and my Bachelor of Science in Physics at Peking University. Here are my Google Scholar page and LinkedIn profile.
Released our Symbolic Discovery of Optimization Algorithms paper (accepted to NeurIPS 2023) and the Lion optimizer; a minimal sketch of the Lion update rule appears below.
Released our AutoML-Zero paper, which automatically discovers machine learning algorithms from basic math operations.
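The Lion update rule is compact enough to state in a few lines. Below is a minimal NumPy sketch of a single Lion step; the hyperparameter values and the toy loss are illustrative assumptions, not the training configuration used in the paper.

```python
import numpy as np

def lion_step(w, g, m, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.01):
    """One Lion update: sign of an interpolated momentum, plus decoupled weight decay.

    w: parameters, g: gradient, m: momentum buffer (same-shape arrays).
    Returns the updated parameters and momentum.
    """
    update = np.sign(beta1 * m + (1.0 - beta1) * g)   # sign of interpolated momentum
    w = w - lr * (update + weight_decay * w)          # decoupled weight decay, as in AdamW
    m = beta2 * m + (1.0 - beta2) * g                 # momentum tracks an EMA of the gradient
    return w, m

# Toy usage on the quadratic loss 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([1.0, -2.0, 3.0])
m = np.zeros_like(w)
for _ in range(5):
    g = w                      # gradient of the toy loss
    w, m = lion_step(w, g, m)
```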
Researcher, Google Brain, Mountain View, November 2018 - Present
Research Intern, Google Brain, Mountain View, November 2017 - February 2018
Research Intern, DeepMind, London, June 2017 - October 2017
Research Intern, Google Search, Mountain View, June 2016 - September 2016
Research Intern, Google Research, Mountain View, June 2015 - September 2015
Program Committee Member (Reviewer): NeurIPS, ICLR, ICML, ACL, EMNLP, IJCAI, AAAI, ECCV, UAI.
PhD in Computer Science and Cognitive Science, Northwestern University, Evanston, US, Sept 2013 - Sept 2018
BSc in Physics, Peking University, Beijing, China
Symbolic Discovery of Optimization Algorithms
Chen, X.*, Liang, C.*, Huang, D., Real, E., Wang, K., Liu, Y., Pham, H., Dong, X., Luong, T., Hsieh, C., Lu, Y., and Le, Q. *Equal contribution
NeurIPS, 2023
AutoML-Zero: Evolving Machine Learning Algorithms From Scratch
Real, E.*, Liang, C.*, So, D., and Le, Q. *Equal contribution
ICML, 2020
Neural Symbolic Reader: Scalable Integration of Distributed and Symbolic Representations for Reading Comprehension
Chen, X., Liang, C., Yu, A., Zhou, D., Song, D. and Le, Q.
Spotlight, ICLR 2020
The Evolved Transformer
So, D., Liang, C., and Le, Q.
ICML 2019
Learning to Generalize from Sparse and Underspecified Rewards
Agarwal, R., Liang, C., Schuurmans, D. and Norouzi, M.
ICML 2019
Memory Augmented Policy Optimization for Program Synthesis and Semantic Parsing
Liang, C., Norouzi, M., Berant, J., Le, Q., and Ni, L.
Spotlight paper (3.5% accept rate), NIPS 2018
Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision
Liang, C., Berant, J., Le, Q., Forbus, K., and Ni, L.
Oral Presentation, ACL 2017
Definition Modeling: Learning to Define Word Embeddings in Natural Language
Noraset, T., Liang, C., Birnbaum, L., and Downey, D.
Poster, AAAI 2017
Representation and Computation in Cognitive Models
Forbus, K., Liang, C., and Rabkina, I.
Journal, Topics in Cognitive Science 2017
Learning Paraphrase Identification with Structural Alignment
Liang, C., Paritosh, P., Rajendran, V., and Forbus, K.
Oral Presentation, IJCAI 2016
Learning Plausible Inferences from Semantic Web Knowledge by Combining Analogical Generalization with Structured Logistic Regression
Liang, C. and Forbus, K.
Oral Presentation, AAAI 2015
Constructing Hierarchical Concepts via Analogical Generalization
Liang, C. and Forbus, K.
Poster, CogSci 2014
From basic math operations alone, automatically discovers machine learning algorithms such as neural networks, linear regression, and bilinear models, as well as techniques like weight averaging, noisy ReLU, and learning rate decay.
Paper | Code and Demo | A Short Intro Video by Henry AI Labs
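As a rough illustration of this kind of search (not the actual AutoML-Zero system), here is a tiny sketch that evolves straight-line programs over a few basic math operations via (1+1) hill climbing on a toy regression task; the register layout, operation set, and task are simplified assumptions.

```python
import random

# A "program" is a list of instructions over scalar registers r[0..3].
# r[0] holds the input x; the prediction is read from r[1].
OPS = ["add", "sub", "mul"]

def random_instruction():
    return (random.choice(OPS), random.randrange(4), random.randrange(4), random.randrange(4))

def run(program, x):
    r = [x, 0.0, 0.0, 1.0]
    for op, dst, a, b in program:
        if op == "add":
            r[dst] = r[a] + r[b]
        elif op == "sub":
            r[dst] = r[a] - r[b]
        elif op == "mul":
            r[dst] = r[a] * r[b]
    return r[1]

def loss(program, data):
    return sum((run(program, x) - y) ** 2 for x, y in data) / len(data)

def mutate(program):
    child = list(program)
    child[random.randrange(len(child))] = random_instruction()  # replace one random instruction
    return child

# Toy task: recover y = 2x + 1. Keep a mutation whenever it does not hurt the fit.
data = [(x, 2.0 * x + 1.0) for x in [-2.0, -1.0, 0.0, 1.0, 2.0]]
best = [random_instruction() for _ in range(5)]
for _ in range(2000):
    child = mutate(best)
    if loss(child, data) <= loss(best, data):
        best = child
print("best loss:", loss(best, data))
```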
Designing a neural symbolic layer that enables pretrained language models (e.g., BERT) to perform multi-step compositional reasoning.
Paper | Code coming soon

Applying evolutionary neural architecture search to discover new feedforward sequence models. The discovered architecture, the Evolved Transformer, improves significantly upon the Transformer at different model sizes on several translation datasets and on language modeling.
Paper | Code for Evolved Transformer | Code for the search space

A new policy optimization formulation that incorporates a memory buffer of promising trajectories to accelerate and stabilize policy gradient training, especially under sparse rewards. It is the first RL approach to achieve a new state of the art on program synthesis / semantic parsing over database tables from weak supervision.
Paper | Github Repository | Video
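As a loose illustration of the core idea (not the MAPO implementation), the sketch below estimates a policy gradient over a toy discrete program space: the memory buffer of high-reward programs is enumerated exactly and weighted by its probability mass under the policy, while the remaining mass is handled by sampling outside the buffer. The toy task and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 10                                   # toy "program" space: 10 candidate programs
rewards = np.zeros(K); rewards[3] = 1.0  # sparse reward: only program 3 is correct
buffer = [3]                             # memory buffer of promising (high-reward) programs
theta = np.zeros(K)                      # parameters of a softmax policy over programs

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for step in range(200):
    pi = softmax(theta)
    grad = np.zeros(K)

    # 1) Enumerate the buffer exactly, weighted by its probability under the policy.
    for a in buffer:
        grad += pi[a] * rewards[a] * (np.eye(K)[a] - pi)

    # 2) Sample from outside the buffer for the remaining probability mass.
    p_buffer = pi[buffer].sum()
    outside = [a for a in range(K) if a not in buffer]
    q = pi[outside] / pi[outside].sum()            # renormalized policy outside the buffer
    a = rng.choice(outside, p=q)
    grad += (1.0 - p_buffer) * rewards[a] * (np.eye(K)[a] - pi)

    theta += 0.5 * grad                            # gradient ascent on expected reward

print("probability of the rewarded program:", softmax(theta)[3])
```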
An end-to-end neural network that learns to write Lisp programs to answer questions over a large open-domain knowledge base. It is the first end-to-end neural model to achieve a new state-of-the-art result on semantic parsing over Freebase with weak supervision.
Paper | Github Repository | Slides | Talk

Distributed representations of words (embeddings) have been shown to capture lexical semantics, based on their effectiveness on word similarity tasks. In this project, we study whether the embeddings can be used to generate dictionary definitions of words, as a more direct and transparent representation of the embeddings' semantics.
Paper | Github Repository | Demo

A TensorFlow implementation of Andrej Karpathy's char-rnn, a character-level language model built on a multilayer recurrent neural network (RNN, LSTM, or GRU). See his blog post The Unreasonable Effectiveness of Recurrent Neural Networks to learn more about this model.
Github Repository | Blog article
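For context, here is a minimal NumPy sketch of what a character-level RNN language model computes at each step: a single vanilla RNN layer with randomly initialized weights scoring the next character. It is an illustrative assumption, not the repository's TensorFlow code.

```python
import numpy as np

rng = np.random.default_rng(0)

text = "hello world"
chars = sorted(set(text))
char_to_id = {c: i for i, c in enumerate(chars)}
vocab, hidden = len(chars), 16

# Randomly initialized vanilla RNN weights (training would learn these).
Wxh = rng.normal(0, 0.1, (hidden, vocab))
Whh = rng.normal(0, 0.1, (hidden, hidden))
Why = rng.normal(0, 0.1, (vocab, hidden))
bh, by = np.zeros(hidden), np.zeros(vocab)

def one_hot(i):
    v = np.zeros(vocab); v[i] = 1.0
    return v

h = np.zeros(hidden)
loss = 0.0
# Predict each next character from the current one; accumulate the cross-entropy.
for cur, nxt in zip(text[:-1], text[1:]):
    x = one_hot(char_to_id[cur])
    h = np.tanh(Wxh @ x + Whh @ h + bh)        # recurrent state update
    logits = Why @ h + by
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    loss -= np.log(probs[char_to_id[nxt]])     # cross-entropy for the true next char
print("total cross-entropy:", loss)
```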
A simple TensorFlow implementation of policy gradient, tested on CartPole in OpenAI Gym.
Github Repository | Blog article
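Along the same lines, here is a hedged NumPy sketch of REINFORCE with a linear softmax policy on CartPole, rather than the repository's TensorFlow code. It assumes the classic Gym reset/step API; newer gymnasium versions return extra values, so the unpacking may need adjusting.

```python
import numpy as np
import gym  # assumes the classic Gym API; gymnasium's reset/step return extra values

env = gym.make("CartPole-v1")
rng = np.random.default_rng(0)
theta = np.zeros((2, 4))        # linear softmax policy: 2 actions, 4 state features
lr, gamma = 0.01, 0.99

def policy(obs):
    logits = theta @ obs
    logits -= logits.max()
    p = np.exp(logits)
    return p / p.sum()

for episode in range(300):
    obs = env.reset()
    states, actions, rewards = [], [], []
    done = False
    while not done:
        p = policy(obs)
        a = rng.choice(2, p=p)
        states.append(obs); actions.append(a)
        obs, r, done, _ = env.step(a)
        rewards.append(r)

    # Discounted returns-to-go for each time step.
    G, returns = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        returns.append(G)
    returns.reverse()

    # REINFORCE update: for a softmax policy, grad log pi(a|s) = (onehot(a) - pi(s)) outer s.
    for s, a, G in zip(states, actions, returns):
        p = policy(s)
        grad_logp = (np.eye(2)[a] - p)[:, None] * np.asarray(s)[None, :]
        theta += lr * G * grad_logp
```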