PhD student in AI and Machine Learning


[Portrait photo]

About me

Hi, I am a PhD student at Northwestern University working on AI and machine learning. My research focuses on integrating machine learning models with symbolic modules, and on applications to natural language understanding. Specifically, I work on machine learning models that can induce latent structures from weak supervision such as question-answer pairs or reward signals, for example, structured prediction with latent variables and reinforcement learning.


News

July, 2017: I will present our paper Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision at ACL 2017 in Vancouver, Canada.

June, 2017: Started a research internship at DeepMind, London.

Jan, 2017: Visited Microsoft Research Asia in Beijing and gave a talk on our work on neural semantic parsing.

Nov, 2016: Visited FAIR and presented our work on neural semantic parsing.

Nov, 2016: Visited Baidu USA and presented our work on neural semantic parsing.

Timeline



Industrial Experience

Research Intern, DeepMind, London, June 2017 - Present

Research Intern, Google, Mountain View, June 2016 - September 2016

Research Intern, Google, Mountain View, June 2015 - September 2015


Academic Experience

Research Assistant, Northwestern University, 2013 - Present

TA and lecturer for EECS349 Machine Learning, 2017

TA and lecturer for EECS349 Machine Learning, 2016

TA for EECS325 AI Programming, 2014


Education

PhD in Computer Science and Cognitive Science, Northwestern University, Evanston, US, Sept 2013 - Present

BSc in Physics, Peking University, Beijing, China

Publications


Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision
Liang, C., Berant, J., Le, Q., Forbus, K., and Lao, N.
Oral Presentation, ACL 2017


Definition Modeling: Learning to Define Word Embeddings in Natural Language
Noraset, T., Liang, C., Birnbaum, L., and Downey, D.
Poster, AAAI 2017


Representation and Computation in Cognitive Models
Forbus, K., Liang, C., and Rabkina, I.
Journal, Topics in Cognitive Science 2017


Learning Paraphrase Identification with Structural Alignment
Liang, C., Paritosh, P., Rajendran, V., and Forbus, K.
Oral Presentation, IJCAI 2016


Learning Plausible Inferences from Semantic Web Knowledge by Combining Analogical Generalization with Structured Logistic Regression
Liang, C. and Forbus, K.
Oral Presentation, AAAI 2015


Constructing Hierarchical Concepts via Analogical Generalization
Liang, C. and Forbus, K.
Poster, CogSci 2014

Projects


[Figure: Neural Symbolic Machines overview]

Neural Symbolic Machines

An end-to-end neural network that learns to write Lisp-style programs to answer questions over a large open-domain knowledge base. It was the first end-to-end neural network model to achieve a new state-of-the-art result on semantic parsing over Freebase with weak supervision.

Paper
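
The gist, as a toy sketch: a neural "programmer" emits a short Lisp-style program that a symbolic "computer" executes against the knowledge base. The mini-interpreter below is purely illustrative; the tiny knowledge base and the hop/argmax helpers are invented for this example and are not the actual NSM code.

    # Toy sketch, not the NSM implementation: a tiny interpreter for
    # Lisp-style programs over a store of (subject, relation, object) triples.
    KB = {
        ("usa", "has_city", "nyc"),
        ("usa", "has_city", "la"),
        ("nyc", "population", 8400000),
        ("la", "population", 3900000),
    }

    def hop(entities, relation):
        """(hop e r): follow relation r from each entity in e."""
        return {o for (s, r, o) in KB if s in entities and r == relation}

    def argmax(entities, relation):
        """(argmax e r): the entity in e with the largest value of r."""
        return max(entities, key=lambda e: max(hop({e}, relation)))

    # "What is the largest city in the USA?" might compile to the program
    # (argmax (hop usa has_city) population), which executes as:
    print(argmax(hop({"usa"}, "has_city"), "population"))  # -> nyc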


[Figure: Definition Modeling overview]

Definition Modeling

Distributed representations of words (embeddings) have been shown to capture lexical semantics, based on their effectiveness in word-similarity tasks. In this project, we study whether embeddings can be used to generate dictionary definitions of words, as a more direct and transparent representation of the embeddings' semantics.

GitHub Repository Paper Demo
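
The core idea in a minimal sketch (illustrative only, written in PyTorch for brevity; every name below is invented for this example, not taken from the paper's code): an RNN decoder is conditioned on the embedding of the word being defined and trained to predict its definition token by token.

    import torch
    import torch.nn as nn

    class DefinitionDecoder(nn.Module):
        def __init__(self, vocab_size, embed_dim, hidden_dim):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            # The embedding of the word being defined is concatenated to
            # every input token, conditioning the decoder at each step.
            self.rnn = nn.GRU(embed_dim * 2, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, word_vec, definition_tokens):
            tok = self.embed(definition_tokens)               # (B, T, E)
            cond = word_vec.unsqueeze(1).expand(-1, tok.size(1), -1)
            h, _ = self.rnn(torch.cat([tok, cond], dim=-1))   # (B, T, H)
            return self.out(h)                                # (B, T, V)

    # Train with cross-entropy on next-token prediction:
    model = DefinitionDecoder(vocab_size=10000, embed_dim=300, hidden_dim=512)
    word_vec = torch.randn(2, 300)   # pretrained embeddings of the headwords
    tokens = torch.randint(0, 10000, (2, 7))  # token ids of the definitions
    logits = model(word_vec, tokens)          # (2, 7, 10000)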


[Figure: SLogAn KBC overview]

Knowledge Base Completion

This project studies knowledge base completion: learning plausible inferences from Semantic Web knowledge, i.e., predicting facts missing from the knowledge base, by combining analogical generalization with structured logistic regression (see our AAAI 2015 paper above).

Paper
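
A heavily simplified sketch of the statistical half (illustrative only; the two features below are hypothetical stand-ins for the structural features SLogAn derives from analogical generalizations): plain logistic regression scoring how plausible each candidate fact is.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Each row holds hypothetical features of one candidate fact, e.g.
    # [fraction of analogous cases supporting it, generalization size].
    X = np.array([[0.9, 12.0], [0.1, 3.0], [0.7, 8.0], [0.2, 15.0]])
    y = np.array([1, 0, 1, 0])  # whether the fact held in training data

    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(1000):                     # plain gradient descent
        p = sigmoid(X @ w + b)
        w -= 0.1 * X.T @ (p - y) / len(y)
        b -= 0.1 * np.mean(p - y)

    print(sigmoid(X @ w + b))  # plausibility score per candidate fact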


[Figure: Learning Concept Hierarchy overview]

Learning Concept Hierarchy

This project studies how hierarchical concepts can be constructed from specific examples via analogical generalization (see our CogSci 2014 paper above).

Paper GitHub Repository
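
A toy stand-in for the generalization step (the actual work uses structure mapping over relational descriptions, not vectors; everything below is invented for illustration): each new example merges into its most similar generalization or seeds a new one, and sweeping the assimilation threshold yields coarser or finer concepts.

    # Toy stand-in: "cases" are vectors and a "generalization" is a
    # running mean; the real system operates on relational structure.
    import numpy as np

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    def generalize(examples, threshold):
        gens = []  # list of (mean_vector, count)
        for x in examples:
            sims = [cosine(x, g) for g, _ in gens]
            if gens and max(sims) >= threshold:
                i = int(np.argmax(sims))
                g, n = gens[i]
                gens[i] = ((g * n + x) / (n + 1), n + 1)  # assimilate
            else:
                gens.append((x, 1))  # start a new generalization
        return gens

    rng = np.random.default_rng(0)
    cat = np.array([1.0, 0.0, 0.0, 0.0])  # two "concepts" as base directions
    dog = np.array([0.0, 0.0, 1.0, 0.0])
    examples = [b + rng.normal(0, 0.1, 4) for b in (cat, cat, dog, dog)]
    # Higher thresholds split concepts more finely, giving a hierarchy.
    print(len(generalize(examples, threshold=0.8)))  # -> 2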


TensorFlow Char-RNN

A TensorFlow implementation of Andrej Karpathy's char-rnn, a character-level language model built on a multi-layer recurrent neural network (vanilla RNN, LSTM, or GRU). See his blog post The Unreasonable Effectiveness of Recurrent Neural Networks to learn more about this model.

GitHub Repository Blog article
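
The core of a char-RNN in a few lines (a PyTorch sketch for illustration only; the repository itself implements this in TensorFlow): embed characters, run them through a multi-layer LSTM, and train with cross-entropy to predict the next character.

    import torch
    import torch.nn as nn

    text = "hello world, hello char-rnn"
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}

    class CharRNN(nn.Module):
        def __init__(self, vocab, hidden=128, layers=2):
            super().__init__()
            self.embed = nn.Embedding(vocab, hidden)
            self.lstm = nn.LSTM(hidden, hidden, num_layers=layers,
                                batch_first=True)
            self.head = nn.Linear(hidden, vocab)

        def forward(self, x, state=None):
            h, state = self.lstm(self.embed(x), state)
            return self.head(h), state  # logits over the next character

    ids = torch.tensor([[stoi[c] for c in text]])
    model = CharRNN(len(chars))
    logits, _ = model(ids[:, :-1])      # predict each next character
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, len(chars)), ids[:, 1:].reshape(-1))
    loss.backward()                     # an optimizer step would follow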


TensorFlow Policy Gradient

A simple TensorFlow implementation of the policy gradient algorithm, tested on CartPole in OpenAI Gym.

GitHub Repository Blog article
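
The REINFORCE update at the heart of it, as a NumPy sketch (illustrative only; the repository's version is in TensorFlow and collects real episodes from the Gym environment): compute the discounted return at each step of an episode, then ascend the gradient of the action's log-probability weighted by that return.

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    obs_dim, n_actions, lr, gamma = 4, 2, 0.01, 0.99  # CartPole-sized
    W = np.zeros((obs_dim, n_actions))                # linear softmax policy

    def reinforce_update(states, actions, rewards):
        """One policy-gradient update from a single episode."""
        global W
        returns = np.zeros(len(rewards))              # discounted return G_t
        running = 0.0
        for t in reversed(range(len(rewards))):
            running = rewards[t] + gamma * running
            returns[t] = running
        returns = (returns - returns.mean()) / (returns.std() + 1e-8)
        for s, a, g in zip(states, actions, returns):
            p = softmax(s @ W)
            grad_logp = np.outer(s, np.eye(n_actions)[a] - p)
            W += lr * g * grad_logp    # ascend E[G_t * grad log pi(a|s)]

    # Fake one short episode's data just to show the call shape:
    states = [np.random.randn(obs_dim) for _ in range(5)]
    actions = [np.random.randint(n_actions) for _ in range(5)]
    rewards = [1.0] * 5                # CartPole gives +1 per step alive
    reinforce_update(states, actions, rewards)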